It’s widely acknowledged that testing is a key aspect of software quality. Due to the complexity of modern software development, many firms have started using end-to-end testing procedures as part of their software release process.
What is end-to-end testing? Let’s start with a definition. “End-to-end testing is a technique used to test whether the flow of an application right from start to finish is behaving as expected. The purpose of performing end-to-end testing is to identify system dependencies and to ensure that the data integrity is maintained between various system components and systems.”1
A few months ago at Smaato, we decided to implement end-to-end tests for our Publisher Platform (SPX). SPX is a Java application on the backend and a mix of Angular, JavaScript, PrimeFaces, and plain HTML pages on the frontend. We use a continuous integration system based on Jenkins to build the application and run tests.
Running end-to-end tests requires setting up a robust application services structure composed of the following elements:
- MySQL database to hold the test data
- Tomcat server to run the application (SPX)
- Selenium server to drive the browser
- End-to-end test framework (Protractor)
Why Do We Use Protractor?
Protractor is an end-to-end test framework specifically for AngularJS apps. It was built by Google and released to open source. Protractor is built on top of WebDriverJS and includes important improvements tailored for AngularJS apps. We decided to use Protractor for the following reasons:
- You don’t need to add waits or sleeps to your test.
Protractor can communicate with your AngularJS app automatically and execute the next step in your test the moment the webpage finishes pending tasks, so you don’t have to worry about waiting for your test and webpage to sync.
- It supports Angular-specific locator strategies.
Protractor offers Angular-specific locator strategies (e.g., binding, model, repeater) as well as the native WebDriver locator strategies (e.g., ID, CSS selector, XPath). This allows you to test Angular-specific elements without any setup effort on your part.
- It is easy to set up page objects.
Protractor does not execute WebDriver commands until an action is needed (e.g., get, sendKeys, click). This way you can set up page objects so that tests can manipulate page elements without touching the HTML.
- It uses Jasmine.
Protractor uses Jasmine as its default test framework, so the describe/it syntax and matchers are already familiar to anyone who has written JavaScript unit tests.
Why Do We Use Docker?
Running all of these processes manually is cumbersome, especially from Jenkins, the open source continuous integration tool we use, which already has to juggle many different versions and build origins. To solve that problem, we decided to run the whole setup inside a Docker environment on a dedicated Docker host. Docker is a fantastic tool for running multiple containers on the same machine, connecting those containers, and controlling the Docker host remotely from a Jenkins job. It is ideal for providing a clean environment for every run, and it doesn’t hurt that Docker is very fast. In addition, we can draw on hundreds of official Docker images and thousands published by the community.
Pulling It All Together
We use an out-of-the-box MySQL database image to ensure that the test environment is isolated. For our SPX application, we use a predefined Tomcat image which is built with the Spotify Maven Docker plugin and contains a helper application to initialize the database. The database setup starts once the Docker container is running.
In order to connect these containers to each other, we use Docker container links. This allows us to keep multiple parallel instances of our end-to-end test setup separate from each other. The containers and links are named using the Jenkins build number parameter. We were on Docker 1.7 at the time; newer releases support user-defined networks, which replace container links.
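As a minimal sketch of that naming scheme (the build number 42 is illustrative; in the job it comes from Jenkins’ $BUILD_ID), suffixing every container and link name keeps parallel builds from colliding:

```shell
# Hypothetical build number; Jenkins injects the real one as $BUILD_ID.
BUILD_ID=42

# Each container name carries the build number, so several end-to-end
# setups can run side by side on the same Docker host.
DB_CONTAINER="spx_db_${BUILD_ID}"
UI_CONTAINER="spx_ui_${BUILD_ID}"

# The application container links to the database of the same build only.
echo "docker run --name=${UI_CONTAINER} --link=${DB_CONTAINER}:mysql ..."
```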
As Docker Compose was not yet installed on our Jenkins machines at that time, we built everything from a shell script inside the Jenkins job:
# Database container; we use the official MySQL image (tag omitted here).
docker run --name spx_db_$BUILD_ID -d \
  -e MYSQL_ROOT_PASSWORD="password" \
  -e MYSQL_DATABASE="our_mysql_ddbb" \
  -e MYSQL_USER="user" \
  -e MYSQL_PASSWORD="password" \
  mysql

# SPX application container, linked to its database.
# "spx-tomcat" is a placeholder for our internal Tomcat image name.
docker run --name=spx_ui_$BUILD_ID -d --link=spx_db_$BUILD_ID:mysql \
  -e CFG_CONFIG_SWITCH=linkeddb \
  -e CFG_DB_USER=spx \
  -e CFG_DB_PWD="password" \
  -e CFG_DB_NAME="our_ddbb_name" \
  -e CFG_SETUP_DB=yes \
  spx-tomcat

# Selenium server container (elgalu Docker image).
docker run --name=spx_selenium_$BUILD_ID -d \
  -e VNC_PASSWORD=pancakes \
  -v /dev/shm:/dev/shm \
  elgalu/selenium

# Build and run the Protractor container; it links to the Selenium server
# and the SPX application (link aliases assumed here).
docker build -t spx-protractor:$BUILD_ID .
docker run --name=spx_protractor_$BUILD_ID \
  --link=spx_selenium_$BUILD_ID:selenium \
  --link=spx_ui_$BUILD_ID:spx \
  -e E2E_SUITE \
  -v /mnt/spx/e2e:/usr/local/test/config/output \
  spx-protractor:$BUILD_ID
First, we set up the database container without any data; then we run the container with the SPX application. The startup script of that container also sets up the database schema and loads the initial data.
Timing is crucial here: each container must be fully up and reachable before the next one that depends on it starts, otherwise the connections between them fail. In the future, we want a mechanism that emits an event at the end of every startup script to trigger the next Docker container.
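One possible building block for such a mechanism is a simple readiness probe - a sketch, assuming bash and a service that exposes a TCP port (host, port, and retry count are illustrative):

```shell
# Wait until a TCP connection to host:port succeeds, retrying up to
# $3 times with a one-second pause; returns non-zero on timeout.
wait_for() {
  host=$1; port=$2; tries=$3
  i=0
  while [ "$i" -lt "$tries" ]; do
    # bash's /dev/tcp pseudo-device attempts a TCP connect.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# Example: block the next "docker run" until the database answers, e.g.
# wait_for mysql-host 3306 30 && docker run ...
```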
Once our SPX application was up and running, we turned to the test environment. To run the Protractor container, we needed a Selenium server, so we decided to use the elgalu Docker image for this purpose.
Next comes Protractor. The Protractor image is built with the test folder inside, so when the container runs, all the tests are available and are executed. Once all tests have finished, the container automatically creates an output HTML file with the results. To make this file available to Jenkins, we have to move it from the Docker container to the host: the container mounts a host folder for this purpose, and Jenkins retrieves the file with wget (served by an Nginx Docker container) and displays it via an HTML-output plugin.
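The retrieval step can be sketched like this (the host name and paths are placeholders; the real values depend on the Docker host and the mounted output folder):

```shell
# Hypothetical values: the Jenkins build number and the Docker host
# on which the Nginx container serves the mounted output folder.
BUILD_ID=42
DOCKER_HOST_NAME="docker-host.example.com"

# URL under which the Nginx container exposes the Protractor report.
REPORT_URL="http://${DOCKER_HOST_NAME}/e2e/${BUILD_ID}/report.html"

# Jenkins fetches the report into its workspace for the HTML-output plugin.
echo "wget -O report.html ${REPORT_URL}"
```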
The setup described in this article made a substantial contribution to our QA processes. We still have plenty of room for improvement - for example, the complex orchestration in the Bash script is easy to misread and hard to maintain. Also, upgrading to the newest version of Docker will simplify the networking setup via Docker Compose, which offers user-defined bridge networks instead of linked containers. We hope you found this overview of Smaato’s automated end-to-end testing process both interesting and helpful.
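For illustration, a Docker Compose file along those lines might look like this (the service names, the internal image name, and the Compose file version are assumptions; with the version 2 file format, Compose attaches all services to a user-defined bridge network by default, so no explicit links are needed):

```yaml
version: "2"
services:
  db:
    image: mysql            # official MySQL image, as before
    environment:
      MYSQL_DATABASE: our_mysql_ddbb
  spx:
    image: spx-tomcat       # placeholder for our internal Tomcat image
    depends_on:
      - db
  selenium:
    image: elgalu/selenium  # the Selenium server image we already use
  protractor:
    build: .                # the Protractor image with the tests inside
    depends_on:
      - spx
      - selenium
```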
Co-author: Jörn Zukowski