Selenium in CI/CD

As part of a series of blog articles on Continuous Testing in the Continuous Delivery environment, we start here with a post on Selenium Grid with Docker. A crucial goal of Continuous Testing is to deliver tested software changes to the customer faster and thus keep up with changing expectations. There are several ways to make this happen, and one effective way is to combine Selenium Grid and Docker. In this article we explain

  • what the two tools do
  • why they make a lot of sense together
  • how to install them
  • first steps (starting Hub and Nodes)
  • what Docker Compose is
  • how the implementation works (with an example)

What are Continuous Testing and Continuous Delivery?

  • Continuous Testing

Continuous Testing is the process of running automated tests as part of the delivery pipeline (CI/CD) in order to continuously check every change to the code against the requirements, functional as well as non-functional (e.g. load and performance testing).

  • Continuous Delivery

Continuous Delivery (CD) is a software development approach that uses a collection of processes, techniques and tools to largely automate and continuously improve the deployment (software delivery process). Techniques include continuous integration (CI), test automation, and continuous installation. Here is an illustration of a CI/CD pipeline: Image: Testing Pipeline (Click to enlarge) [Source: Qytera].

What is Selenium or Selenium Grid?


If you want to test web applications in an automated way, there are several options. A very good choice is the Selenium framework. This widely used, free tool is one of the most popular open source testing tools. With Selenium you can record interactions with the web application, implement automation scripts, and reuse them in tests as often as you like. You can also perform more complex operations with it (e.g. logging into the backend, creating records, etc.).

Selenium Grid

Selenium Grid is part of the Selenium Suite and is designed for simultaneous testing on different browser instances, so a lot of time can be saved through parallel test execution. The architecture of Selenium Grid is known as the hub-and-node architecture: the hub is the center, exists on only one computer, and is where tests are loaded. The nodes are the Selenium instances; they execute the tests loaded on the hub. The systems running the nodes can be on different platforms than the hub. Image: Selenium Hub. (Click to enlarge) [Source: Qytera].

What is Docker

The Docker tool makes it easy to create, deploy and run an application using containers. With such a container, a developer or test automation engineer can package an application with all its necessary parts (e.g. libraries and other dependencies) and ship it as one package. In this way, the developer can ensure that the application runs on any other Linux system, regardless of its custom settings.

Docker follows a similar closed-system principle as a virtual machine. Virtual machines have a complete operating system with their own memory management and associated device drivers. In a virtual machine, the resources for the guest operating system are emulated by the hypervisor, making it possible to run many instances of one or more operating systems in parallel on a single machine (or host). Each guest OS runs as a separate entity on the host system.

Docker containers, on the other hand, are run by the Docker Engine rather than a hypervisor. Containers are therefore smaller than VMs and allow faster startup with better performance. Compared to virtual machines, containers are faster and less resource-intensive: starting an application in a virtual machine can take several minutes, whereas a container can be created and started in a few seconds.

Installation (described for Windows)

For Docker for Windows to work properly, the machine needs:

  1. A 64-bit operating system
  2. Windows 7 or a higher version (Docker is also available for macOS and Linux)
  3. Hyper-V installed and working (normally enabled by default)
  4. Virtualization enabled (can be checked and enabled in the BIOS if not set by default)

Link: Docker for Windows 10, macOS, Linux. The installation of Docker itself is simple; for more information you can follow the Docker installation documentation. However, care should be taken to select the “Windows containers” option during the installation. The installation adds the following software to the computer:

  • Docker Client for Windows
  • Docker Machine for Windows terminals
  • Docker Compose
  • Docker Toolbox management tool and ISO
  • Oracle VM VirtualBox
  • Kitematic, the Docker GUI
  • Docker QuickStart shell
  • Git (MSYS-Git UNIX tools)

Verification of the installation

The installer adds Docker Toolbox, VirtualBox, and Kitematic to the application folder. To launch a preconfigured Docker Toolbox terminal, click the Docker QuickStart icon. If the system displays a User Account Control prompt asking to allow VirtualBox to make changes to your computer, select Yes. The terminal does several things to set up Docker Toolbox for you; when it is done, it displays the $ prompt. Image: Docker Toolbox. (Click to enlarge) [Source: Qytera]

Let's try a sample hello-world program. Enter the following command at the prompt:

docker run hello-world

This downloads the very small hello-world image and displays a message saying “Hello from Docker”, meaning Docker is working fine. A lot has happened in the background to get hello-world up and running. Here are some of those steps.

  1. The Docker client application was invoked using the Docker command.
  2. The call “docker run hello-world” was executed.
  3. Docker Client was asked to create an instance of a Docker image named hello-world. Where did Docker get this image? The Docker client first searches the local repositories for an image named “hello-world” but cannot find it, so it searches a public repository of Docker images on the Internet, hosted by Docker itself. There it finds the image, downloads it, installs it, and launches the instance.

Selenium Grid with Docker Containers.

Normally, when configuring the Selenium Grid, we need to create multiple virtual machines as nodes and connect each node to the hub. When we create a normal grid, we need to download the Selenium Server JAR file and run it on each machine where we want to set up the Selenium Grid. This is costly and often time-consuming for testers. Docker helps us solve these costly and time-consuming problems. Image: Docker Mobile. (Click to enlarge) [Source: Qytera]

Install the Docker images

As with the normal grid, when configuring the Selenium Grid with Docker, we need to install the hub and browser nodes in our Docker container. Later, we can launch the hub and nodes from these Docker containers. Therefore, we need to install the hub and node images in Docker first.

  • selenium/hub
  • selenium/node-chrome
  • selenium/node-firefox
  • selenium/node-chrome-debug
  • selenium/node-firefox-debug (the -debug images are important for following along, as they include a VNC server)

The next question is how to find these images. To do so, go to Docker Hub and search for an image by its name. You can also find these images by typing the search command, as shown below:

docker search selenium/hub

Image: Selenium Hub Docker. (Click to enlarge) [Source: Qytera] One finds the following results on Docker Hub when searching for selenium/hub: Image: Docker Screenshot 1 (Click to enlarge) [Source: Qytera] This shows all the image repositories available for Selenium Hub. Here we should click on the image with the largest number of pulls; it will help us run our code without errors. Once we click on an image, we see a new window (see below). Image: Docker Screenshot 2 (Click to enlarge) [Source: Qytera] In the same way you can search for the other images on Docker Hub and get the corresponding information. Now you can download the individual images from the Docker repository. Please execute these commands one by one.

  1. docker pull selenium/hub
  2. docker pull selenium/node-firefox
  3. docker pull selenium/node-firefox-debug
  4. docker pull selenium/node-chrome
  5. docker pull selenium/node-chrome-debug

After we have downloaded all the images, we can check them with the following command:

docker images

Launch Selenium Hub

We first launch Selenium Hub from the Docker container. For this we need the following command:

docker run -d -p 4444:4444 --name selenium-hub selenium/hub

To start the container in detached mode, we use the -d (or -d=true) option. Selenium Hub is now started. To check this, enter the following link in the browser: http://localhost:4444/grid/console If errors occur when starting Selenium Hub, use the commands “docker stop $(docker ps -aq)” and “docker rm $(docker ps -aq)” to stop and delete all containers, then start again.

Start Selenium Nodes

We now start a Chrome node and a Firefox node and connect them to the Selenium Hub. Image: Docker Screenshot 5 (Click to enlarge) [Source: Qytera] The following commands connect the debug nodes to the hub.

  • docker run -d -P -p 5901:5900 --link selenium-hub:hub -v /dev/shm:/dev/shm selenium/node-chrome-debug
  • docker run -d -P -p 5902:5900 --link selenium-hub:hub -v /dev/shm:/dev/shm selenium/node-firefox-debug

After running the Chrome and Firefox debug nodes, refresh the browser; you can now find both nodes there. Image: Docker Screenshot 6 (Click to enlarge) [Source: Qytera] If an error occurs while running an image, reinstall the image and run it again from Docker. The next step is to observe the debug nodes using a VNC viewer (RealVNC download). We need the port numbers of the Chrome and Firefox debug nodes to connect them with the VNC viewer; docker ps -a shows that port 5901 is mapped for the Chrome node and port 5902 for the Firefox node. We now start the installed VNC Viewer and enter the address in the format hub-URL:port (5901 for Chrome, 5902 for Firefox). After clicking the Connect button, the VNC Viewer asks for a password; the predefined password is “secret”. After that, we see a window just like in a VM (see below). Image: Docker Screenshot 8 (Click to enlarge) [Source: Qytera] The same works for the Firefox browser via the VNC viewer. This completes our Docker installation with Selenium Grid.

Docker-compose YAML file

If our Docker application contains multiple containers (e.g. complex applications with a web server and a database), it is cumbersome to run all the commands each time at the command line. Docker Compose solves this problem: a single YAML file defines the multi-container application with its dependencies, and a single command starts or stops everything. We define the setup in the YAML file and run it with the docker-compose command. As an example, a YAML file has been created: Image: Docker Screenshot 9 (Click to enlarge) [Source: Qytera]

For this, we create a new folder and put the docker-compose.yml file in it along with the Dockerfile (both .yml and .yaml work as file extensions). With docker-compose you can execute this setup: the command docker-compose up -d retrieves the images for the hub, Chrome and Firefox nodes from Docker and launches an instance of each available browser. By default, the latest versions are used if we omit the version tag at the end of the image name. Before running the following command, we need to make sure that none of the containers specified in the docker-compose.yml file are already started. To do this, either delete each container individually with “docker container kill <container>” followed by “docker container rm <container>”, or restart Docker so that all containers are terminated. After that, the following command can be executed:

$ docker-compose up -d
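The compose file from the screenshot is not reproduced in the text. A minimal sketch of such a file might look as follows; the image names match the ones pulled earlier, while the port mappings and the HUB_HOST/HUB_PORT environment variables are assumptions that may need adjusting to your setup:

```yaml
# docker-compose.yml -- a minimal Selenium Grid: one hub plus a Chrome and a Firefox debug node
version: "3"
services:
  selenium-hub:
    image: selenium/hub
    ports:
      - "4444:4444"            # grid console reachable at http://localhost:4444/grid/console
  chrome:
    image: selenium/node-chrome-debug
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub  # tells the node where to register
      - HUB_PORT=4444
    ports:
      - "5901:5900"            # VNC access to the Chrome node
  firefox:
    image: selenium/node-firefox-debug
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
    ports:
      - "5902:5900"            # VNC access to the Firefox node
```

With this file in place, docker-compose up -d starts all three containers, and docker-compose down stops and removes them again.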

Selenium-Grid with Docker – An Example

Below is an example code. It has been customized so that the tests run on Selenium nodes rather than locally. Image: Docker Screenshot 10 (Click to enlarge) [Source: Qytera] The changes made are listed and described below.


DesiredCapabilities provides the ability to set the properties of the browser. So we set browser name, platform and version of the browser:

  1. DesiredCapabilities capability = DesiredCapabilities.chrome(); // For Firefox --> DesiredCapabilities.firefox()
  2. capability.setBrowserName("chrome"); // For Firefox --> setBrowserName("firefox")

If the tests are run on a local browser, you can use the local drivers, e.g. FirefoxDriver, InternetExplorerDriver or ChromeDriver. Because we run the tests on the remote computer's browser, we use RemoteWebDriver.

  1. driver = new RemoteWebDriver(new URL(seleniumHubURL), capability); // seleniumHubURL --> the Selenium Hub URL
  2. driver.get("");

The assertTrue assertion is generally used to check a boolean condition. If the condition evaluates to false, the test fails and execution of that specific test method is aborted:

assertTrue(driver.getTitle().startsWith(title)); // title --> "Selenium"

Furthermore, it must be ensured that all browsers are closed correctly after the various tests have run. For each test a new RemoteWebDriver is created, which must be disposed of with driver.quit() after the test. If the driver is not quit correctly, a second execution of the test is not possible, because an unclosed instance of the browser is still running in the container and blocks a new execution. The code has been adjusted to quit the driver after each test. Image: Docker Screenshot 11 (Click to enlarge) [Source: Qytera].


You are now ready for a successful start into the world of parallel automated testing. In further blogs, we plan to cover other interesting areas of Continuous Testing in the Continuous Delivery environment. It would be great to welcome you there again.

By Josef Benken, March 19, 2020

How do you streamline Selenium in your CI/CD pipeline? Here we show how you can modify the CI/CD pipeline to automatically resolve Selenium testing issues caused by CI/CD. Continuously running automated tests may seem like a no-brainer. A CI system is key to giving agile development teams instant visibility into the health of their applications. If automated tests pass, the application is bug-free. If the tests fail, the application is broken in some way. Or is it? The unfortunate reality is that automated tests don't always give us such confidence, especially when we run Selenium tests. Selenium tests can fail for many different reasons, whether the application is broken in some way or has simply changed. These problems are detailed in a previous blog post. Here I will explain how to modify the CI/CD pipeline to automatically fix these issues and improve confidence in CI system test results.

Selenium CI grief

CI/CD amplifies Selenium testing problems. Let's take a look at how this happens. First, development makes some code changes. The CI system detects the change and triggers the build pipeline. Modules are compiled, unit tests run and pass (yay!), and then the application is packaged for deployment. The deployment pipeline is triggered next: a test environment is provisioned, the application under test is deployed, and the integration tests are started, many of which are written using Selenium. Not all of the Selenium tests pass. At this point, the CI system notifies various people of the test failures via email. These people react and sometimes burn a lot of time investigating and troubleshooting. Product owners, managers and other stakeholders also see the failures on their dashboards, which distracts their attention from other things. If you know the Selenium tests themselves are unstable, how can you trust that CI/CD has reported a real regression or defect? How much time is wasted on troubleshooting false positives? Simply improving Selenium tests can be non-trivial and sometimes involves trial and error, especially when problems are intermittent or difficult to reproduce. Errors can shift from one build to another when no single test is to blame. These can be systemic or environmental issues that come and go. The CI system may use VM- or cloud-based executors with inconsistent performance characteristics. Tests execute and pass on developer workstations, but fail in CI automation. Sound familiar?

Selenic CI solved

Parasoft Selenic is designed to combat such challenges. Selenic automates the analysis and correction of Selenium test issues, provides fixes or insights into those issues, and makes those results available when people are notified by their CI system. The burden is shifted to the CI system and away from the people responding to its notifications. With just two simple steps, you can automate the process of correcting Selenium tests as part of the CI/CD pipeline using Parasoft Selenic.

Optimize Selenium Tests with Parasoft Selenic.

Inject the Selenic Agent

To integrate Parasoft Selenic, make a one-line change to your existing test execution script. Suppose I have a Maven execution step in the pipeline that drives my Java-based Selenium tests. Normally, I specify two Maven targets, “clean” and “test”, which are passed to Maven as command-line arguments. Here you just add one additional Maven command-line argument:

-DargLine=-javaagent:${SELENIC_HOME}/selenic_agent.jar=selfHealing=true,sessionId=${BUILD_TAG}

In Jenkins CI, this might look something like this: The Selenic Agent is now added to the test execution process, assuming Selenic is installed and licensed in the location referenced by the SELENIC_HOME variable. The Selenic Agent can do several things, but for test stabilization purposes, I only enable the self-healing feature, which attempts to identify and fix Selenium test problems on the fly.

One thing to check carefully is whether your Selenium tests all run in one module or whether there are tests in multiple modules. It is common for tests to be split into multiple modules for organizational reasons. In order for Selenic to aggregate information from multiple test modules, a “sessionId” argument must be configured. In the example above, I use BUILD_TAG, a predefined variable in the Jenkins CI system that works well for this purpose. Regardless of which CI system you use, I recommend creating a unique session identifier using the variables provided by your CI system. After this one-line change, Selenic automatically detects and corrects test stability issues, giving everyone much more confidence in the test results published by the CI system. However, there is one more Selenic component that I recommend integrating.

Run the Selenic Analyzer

How do we know what issues Selenic has found or fixed? Are there changes I should make to the Selenium tests to avoid the problems Selenic finds? To get this information, run the Selenic Analyzer as another execution step in the pipeline. This is a second one-line change, and it is just as trivial:

java -jar ${SELENIC_HOME}/selenic_analyzer.jar -report target/selenic-reports -sessionId ${BUILD_TAG}

In Jenkins CI, this might look something like this: The Selenic Analyzer collects the information previously logged by the Selenic Agent, performs some additional analysis on the data, and then generates reports that can be archived by the CI system. By default, the analyzer processes information from the last session; for robustness, however, I recommend explicitly configuring the “-sessionId” argument with the same value that was used when configuring the Selenic Agent. By default, the Selenic Analyzer writes the reports to the current working directory. To make archiving easier, add a “-report” argument to instruct Selenic to write the reports to a “selenic-reports” folder in your project's build output directory. After that, add another step in the pipeline to archive “target/selenic-reports/**”. The report files are now available to all users of the CI system. In the case of Jenkins CI, reports are conveniently accessible via stable URLs:

http://{ci_server}/job/{test_job}/lastSuccessfulBuild/artifact/target/selenic-reports/report.html
http://{ci_server}/job/{test_job}/lastSuccessfulBuild/artifact/target/selenic-reports/report.json

The first URL refers to the Selenic HTML report, which can be viewed directly in a web browser. The second URL refers to the JSON report that developers can feed to the Selenic IDE plugin to import the recommendations from the last automated test run.

Further considerations

Selenic offers many other options that you can enable depending on your test environment and other specific requirements. Selenic records data for each test session. The recorded data serves as a knowledge base so that Selenic can learn over time and make better self-healing decisions and recommendations. By default, this data is stored on the computer running the tests, so if you have a pool of executors, you will want this data in a shared folder that all executors can access. The data location can be explicitly configured by passing a parameter “data={path}” to the Selenic Agent and an argument “-data {path}” to the Selenic Analyzer, where “{path}” is a file system path to a shared folder. On Windows, this can be a UNC path such as \\hostname\SharedFolder.

The recorded data also grows indefinitely unless restricted, which can eventually lead to high disk usage. A limit on the number of recorded test sessions can be configured by passing a parameter “maxSessionDaysToKeep={num_days}” to the Selenic Agent. “{num_days}” is the number of days on which data was recorded, not calendar days; in other words, Selenic does not count days on which no test session data was recorded. This is useful if you run your tests infrequently or at inconsistent intervals, or have had server outages during which you did not run tests for a few days.

Selenic can also be configured to capture additional diagnostic information at different levels of granularity. For example, Selenic can capture screenshots for every Selenium action or only for failed actions. To enable screenshots for test failures, pass the “screenshot=failure” parameter to the Selenic Agent. Screenshots for failures are available in the HTML report generated by the Selenic Analyzer.

CI/CD + Selenic

Parasoft Selenic is designed to accelerate agile development using Selenium as part of CI/CD. Developers and testers can be more productive, spending less time chasing Selenium ghosts and more time working on real things. Increase confidence in your Selenium test results with Selenic. Download your free trial today.


By Josef Benken. Josef is a senior software engineer at Parasoft. He has helped develop many core features and technologies used in various products, including SOAtest, Virtualize, and Selenic. He has been with Parasoft since 2006.

Selenium has a proven track record in testing web-based interfaces and has been on the market for over eleven years. One of the most important components of Selenium is the WebDriver API, which is also used by many other test frameworks. As with any software project, there are some basic principles to keep in mind when automating tests with Selenium WebDriver. Furthermore, there are Selenium-specific peculiarities that the test automation engineer must take into account. In this article, we will briefly discuss these principles and specifics.


In order to test software, it must first be testable. Even if that sounds obvious, the software to be tested is rarely developed primarily with this goal in mind, so special adaptations to support testability are the rule rather than the exception. If you want to introduce test automation in a software project that has been running for a long time, the software under test can make your life very difficult, because interface elements cannot be uniquely captured.

The mutable test object

For the testers in a software project, one thing is certain: the test object changes. This is true not only for projects in the development phase; during the later maintenance phase, too, large and small changes can be made to the program. Software updates can change paths and links, and component names or functions can be modified. For automated tests to continue to work, they often need to be modified in turn. To ensure that this can be done without too much effort, certain design principles must be followed right from the start.

Page Objects

To keep the test project maintainable, it is important to maintain the individual components in separate modules. An effective and popular method for this is the Page Object Design Pattern. In this design pattern, the functions and elements of the test object are coded separately from the actual tests. For example, if a web page is to be tested, the Page Object would have the elements (buttons, text boxes, sliders, etc.) and the functions (search, login, etc.) defined. The separately written tests would access these elements and functions through the Page Object. The advantage of this method is that the test project can easily adapt to changes. For example, if the name of a text field is changed on this web page (and thus the path used to access this element also changes), then not every test that uses this text field needs to be adapted, but only the Page Object.
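The pattern can be sketched in plain Java. FakeBrowser below is a hand-rolled stand-in for a real WebDriver, and LoginPage with its selectors is invented for illustration (not Selenium API): the point is that selectors live only in the page object, so a changed locator is fixed in exactly one place.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a real WebDriver: records the text typed into each selector.
class FakeBrowser {
    private final Map<String, String> fields = new HashMap<>();
    void type(String selector, String text) { fields.put(selector, text); }
    String read(String selector) { return fields.getOrDefault(selector, ""); }
}

// Page Object: elements and page functions are defined here, not in the tests.
class LoginPage {
    // If a locator changes on the page, only these constants are touched.
    private static final String USER_FIELD = "#username";
    private static final String PASS_FIELD = "#password";

    private final FakeBrowser browser;

    LoginPage(FakeBrowser browser) { this.browser = browser; }

    // A page function that tests call instead of driving elements directly.
    void login(String user, String password) {
        browser.type(USER_FIELD, user);
        browser.type(PASS_FIELD, password);
    }

    String enteredUser() { return browser.read(USER_FIELD); }
}
```

A test only ever calls new LoginPage(browser).login(...) and never mentions #username itself, which is exactly what keeps locator changes local to the page object.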


Unique selectors

The right choice of selectors ensures that components can be uniquely identified. There are several types of selectors. Identification by the ID of the object, for example, is very simple, but in some circumstances two objects can end up with the same ID. XPath and CSS selectors are popular for good reason: both allow an object to be uniquely identified. CSS selectors can be more difficult to build, but they are faster, more robust and widely usable.

Readable code

The names of variables and methods should be meaningful. The same goes for the names of tests: the name should not just tell you what is being tested (e.g. loginTest), but also how it is tested and what the expected result is. For example, loginTest_validPassword_HTTP200 tells us that the login test is performed with a valid password and the expected result is a response with HTTP status code 200. It is also important to have a consistent naming structure in the project; especially in larger teams, this structure should be agreed upon from the start.
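The convention can be made concrete with a small sketch (login here is a dummy stand-in for the system under test; only the test names matter):

```java
// Naming scheme: <whatIsTested>_<condition>_<expectedResult>
class LoginNamingExample {
    // Dummy stand-in for the real login call.
    static int login(String password) {
        return "valid".equals(password) ? 200 : 401;
    }

    // The name alone says: login, valid password, expect HTTP 200.
    static boolean loginTest_validPassword_HTTP200() {
        return login("valid") == 200;
    }

    // The name alone says: login, invalid password, expect HTTP 401.
    static boolean loginTest_invalidPassword_HTTP401() {
        return login("wrong") == 401;
    }
}
```

With this scheme, a failed test in a CI report already tells the reader the scenario and the expected result without opening the code.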

Commented code

No matter how readable code is, it rarely replaces natural language. Often, even the actual programmers of a software project need a lot of time and effort to understand their own code after a year. Therefore, it is essential to comment the program comprehensively. Each method should have a description of its function in addition to basic information such as the author’s name, creation date, parameters and return values.
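A method commented along these lines might look like the following sketch (the class, method, and author are invented placeholders):

```java
class PriceCalculator {
    /**
     * Computes the gross price from a net price and a tax rate.
     *
     * @author  J. Doe (placeholder), created 2020-03-19
     * @param   netPrice  net price in cents, must be non-negative
     * @param   taxRate   tax rate as a fraction, e.g. 0.19 for 19 percent
     * @return  gross price in cents, rounded to the nearest cent
     */
    static long grossPrice(long netPrice, double taxRate) {
        return Math.round(netPrice * (1.0 + taxRate));
    }
}
```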

Selenium test reporting

Selenium itself has no reporting capabilities. Test reporting is implemented in the test framework in which Selenium is embedded, e.g. TestNG, JUnit, NUnit or PHPUnit. Sufficient time should be invested in implementing effective and consistent Selenium test reporting. On this basis, further detailed information (test levels, defect classes, test priorities, etc.) can be derived, which is very important for the course of the tests. Many of the principles listed here apply to software projects in general, because a test automation project is, after all, also a software project.

Selenium test automation training/education

Would you prefer to perform sophisticated automated tests of web applications yourself, and stay independent? Qytera offers seminars on Selenium:

  • Test Automation with Selenium
  • Test Automation with Selenium Advanced

Selenium sponsoring by Qytera

For several years now, Qytera has been a silver-level sponsor of Selenium and thus actively promotes the further development and maintenance of the project.

Webinar: Test Automation with Open Source Tools (Selenium, Kubernetes, Docker) – Live Demo

