In this article, I describe my motivation for starting and advancing my own projects. Looking back, an initially difficult path can often lead to insights that remain hidden in our day-to-day work.
We are a small team of four developers in Wiesbaden, working for a large German telecommunications service provider. Our task is to identify which Internet products are available at a given address and to make them "orderable" in the future. At first glance this may not sound exciting – after all, nothing about it seems particularly "sexy" – but it becomes exciting on closer inspection. Looking at the technical requirements for developing and operating such a service, it quickly becomes clear that the interesting aspects are manifold and cannot be judged purely by what is visible on the surface.
Our project started on the basis of the EFT-Framework so that a functional environment could be provided shortly after the project began. We implemented new requirements in a microservices architecture. This step was necessary to achieve logical separation on the one hand and to allow more granular scaling on the other. A further benefit of this granularity was reduced deployment times: the frontend in particular could be developed, tested, and quality-assured in shorter cycles.
Developing and manually testing the interfaces in the frontend – and, secondarily, testing the services in combination with other services in the backend – was costly. Because these tasks were highly linear, using the frontend began to feel monotonous. In the end this also had a negative effect on attentiveness, since the same "routine process" had to be run through again and again. In principle, this activity was barely automated and therefore highly dependent on the human factor.
In the quality assurance (QA) of our project, Selenium ensures that the "main business cases" keep working across browsers (IE 11, Edge, Chrome, Firefox). After updates or hotfixes (three so far in two years), it automatically verifies that all areas of the application behave predictably when interacting with the frontend. Given the amount of automation code to be written (Java/Selenium) and the test attributes to be provided in the markup, the effort to meet these quality requirements is not insignificant. However, it is necessary to guarantee the functionality of the platform across the various browsers.
If automation could be achieved in the development of the frontend without writing additional code, the proportion of manual tests could be reduced.
The requirements for the idea described above can be summarized as follows:
“Automate any interaction necessary to operate the booking interface and offer a high degree of flexibility so that the automation can be adapted with minimal effort after changes to the frontend code.”
Puppeteer was used as the API for interacting with the browser. Its methods for accessing and controlling web pages were mapped in a JSON schema.
An interpreter translates the schema into executable code. This interpreter was released as an open-source tool named FetchBot – more on "Fetch" later.
FetchBot can be integrated as a library in your own projects or can be operated by means of a command line interface (CLI).
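To illustrate the idea, a minimal interpreter might map a JSON list of interaction steps onto Puppeteer-style page methods. Note that the action names and fields below are assumptions for the sake of illustration, not FetchBot's actual schema:

```javascript
// Minimal sketch of a JSON-driven interpreter. It assumes Puppeteer-style
// page methods (goto/type/click); the step format is illustrative only.
async function runSteps(page, steps) {
  for (const step of steps) {
    switch (step.action) {
      case "goto":
        await page.goto(step.url); // navigate to a URL
        break;
      case "type":
        await page.type(step.selector, step.value); // fill a form field
        break;
      case "click":
        await page.click(step.selector); // click a button or link
        break;
      default:
        throw new Error(`Unknown action: ${step.action}`);
    }
  }
}
```

In practice, `page` would be a Puppeteer `Page` instance obtained via `puppeteer.launch()`, and the step list would be read from a JSON file – which is what makes both library integration and CLI usage possible.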
To complete a booking, 73 user interactions are necessary on average. It takes about two minutes to complete the booking process manually.
With FetchBot, the booking duration was reduced to one minute. After the first practical trials it became clear that manual tests cannot be dispensed with entirely, mainly because usability cannot be "experienced" this way. Nevertheless, every interaction required to reach a specific step in the booking process can be automated.
If, for example, an adjustment is to be made in the last step of the checkout process, FetchBot can be used to automatically book up to exactly this point in order to be able to continue testing manually. This symbiosis of automated and manual interaction significantly accelerated frontend development.
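This "book up to a point" workflow can be sketched with a small helper that truncates the interaction list at a named step, so that automation stops exactly where manual testing should take over. The helper and its `id` convention are assumptions for illustration, not part of FetchBot's API:

```javascript
// Illustrative helper: keep only the interactions up to (and including)
// the step with the given id. The `id` field is an assumed convention.
function stepsUpTo(steps, stopId) {
  const idx = steps.findIndex((step) => step.id === stopId);
  // If the id is unknown, fall back to running the full list.
  return idx === -1 ? steps : steps.slice(0, idx + 1);
}
```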
With a JSON file (configuration) corresponding to the schema mentioned above, all interactions needed to complete a booking in our workflow can be expressed in 140 lines.
To avoid having to write a new configuration in every new sprint, we maintain a “MasterSchema”, which represents the “maximum specification”. Starting from this maximum value, an automation oriented to the application case can be created by removing all irrelevant elements.
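The pruning step can be imagined as follows. This is a sketch under the assumption that the configuration holds a flat list of steps; the real MasterSchema structure may differ:

```javascript
// Illustrative: derive a case-specific configuration from the "MasterSchema"
// by removing every step that is irrelevant for the current use case.
function deriveConfig(masterSchema, relevantStepIds) {
  return {
    ...masterSchema, // keep all general settings
    steps: masterSchema.steps.filter((s) => relevantStepIds.includes(s.id)),
  };
}
```

Because the master is copied rather than modified, the "maximum specification" stays intact and can be reused in the next sprint.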
Extracting data via configuration is another function of FetchBot – and ultimately the one that gives the tool its name. When such an extraction block is evaluated on a page, the content or attribute value of every matching selector is extracted.
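The idea of such an extraction block can be sketched as follows; the field names and the `query` abstraction are assumptions for illustration, not FetchBot's real schema:

```javascript
// Illustrative "fetch" block: each entry names a CSS selector and optionally
// an attribute; without an attribute, the element's text content is taken.
const fetchBlock = {
  price: { selector: ".product-price" },
  link: { selector: "a.details", attribute: "href" },
};

// Evaluate the block via an injected query function. In practice `query`
// would wrap a Puppeteer call such as page.$eval(selector, ...).
async function evaluateFetchBlock(block, query) {
  const result = {};
  for (const [name, { selector, attribute }] of Object.entries(block)) {
    result[name] = await query(selector, attribute);
  }
  return result;
}
```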
With this technique, web pages – even when their data is loaded asynchronously – can be "API-fied" and addressed like an API. This opens up many possibilities for using classic websites, for example as data services.
In a new project, I can create the configurations automatically in order to be able to interact with larger pages contextually. As you can see, it remains exciting.
Developing FetchBot was an interesting challenge and significantly reduced the more tedious part of my working day: endlessly filling out the same forms becomes very monotonous. I was also able to exchange ideas with colleagues and advance a project that makes everyday life easier in my private data projects as well.
FetchBot is not as powerful as a crawler optimized for a specific application. Nevertheless, it is fast enough to analyze many websites automatically, and at the same time it is very easy to use: mastering the Chrome developer tools and JSON is sufficient to address any website like an API.