How to Automate the Testing Process

Serhii Horoshko
9 min read · Jul 19, 2020


At the beginning…

  1. saver, Selenium, Selenoid, Selenide, Selendroid… What does all this mean? (2019)
  2. Qualsolife, How QA can organize test automation on a project (2019)
  3. Jackie King, Why Do We Need Automated Tests, Anyway? (2019)

“Test automation”

Overview

How do you decide which tests to automate and which ones to leave for manual testing? Before you begin to automate the test case, you need to find out what benefits you will get from automating this test, given the time, effort, and resources invested in automation. The following are the factors that should be taken into account when deciding what manual tests should or should not be automated.

We would like to use the test process outlined in the Foundation level ISTQB syllabus:

  • Test Planning and Control
  • Test Analysis and Design
  • Test Implementation and Execution
  • Evaluating Exit Criteria and Reporting
  • Test Closure Activities

Below we give examples of important questions and requirements for developing a test automation strategy: one that increases throughput, frees teams to focus on quality enhancements that drive revenue, and strengthens the collaboration between Business, Development, and QA.

a. Business — What problem are we trying to solve?
b. Development — How might we build a solution to solve that problem?
c. Testing — What about this? What could possibly go wrong?

Tests that need to be automated:

  • Business-critical paths — the features or user flows that if they fail, cause considerable damage to the business.
  • Tests that need to be run against every build/release of the application, such as smoke, sanity, and regression tests.
  • Tests that need to run against multiple configurations — different OS & Browser combinations.
  • Tests that execute the same workflow but use different data for their inputs on each run (data-driven tests).
  • Tests that involve inputting large volumes of data, such as filling up very long forms.
  • Tests that can be used for performance testing, like stress and load tests.
  • Tests that take a long time to perform and may need to be run during breaks or overnight.
  • Tests during which images must be captured to prove that the application behaved as expected, or to check that a multitude of web pages looks the same on multiple browsers.
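The data-driven bullet above can be sketched in a few lines of Java: one workflow executed against many rows of input data. The discount rule here is a hypothetical system under test, invented for illustration.

```java
import java.util.Arrays;

// Minimal data-driven sketch: one test workflow executed against many input
// rows. The pricing rule below is a hypothetical system under test.
public class DataDrivenDiscountTest {

    // Hypothetical production rule: orders of 100+ units get a 10% discount.
    public static double priceFor(int units, double unitPrice) {
        double total = units * unitPrice;
        return units >= 100 ? total * 0.90 : total;
    }

    // Each row is {units, unitPrice, expectedTotal}: same workflow, new data.
    public static boolean runAll(double[][] cases) {
        for (double[] c : cases) {
            double actual = priceFor((int) c[0], c[1]);
            if (Math.abs(actual - c[2]) > 1e-9) {
                System.out.println("FAIL for row " + Arrays.toString(c));
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        double[][] cases = {
            {1, 10.0, 10.0},     // below threshold: no discount
            {99, 10.0, 990.0},   // boundary minus one
            {100, 10.0, 900.0},  // boundary: discount applies
            {200, 5.0, 900.0},   // well above threshold
        };
        System.out.println(runAll(cases) ? "ALL PASS" : "FAILURES");
    }
}
```

Adding a new scenario is a one-line data change rather than a new test method, which is exactly why this category pays back automation effort quickly.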

Remember also that tests are not the only candidates for automation. Tasks such as environment setup and creating test data for manual testing are also great candidates for automation.
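As a sketch of that idea, here is a tiny reproducible test-data generator a manual tester could run instead of typing rows by hand. The CSV field layout and `@example.com` addresses are illustrative, not from any real system.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of automating a manual-testing chore: generating seed data.
// Field names and formats here are illustrative, not from a real product.
public class TestDataGenerator {

    public static List<String> generateUsers(int count, long seed) {
        Random rnd = new Random(seed);           // fixed seed => reproducible data
        List<String> rows = new ArrayList<>();
        for (int i = 1; i <= count; i++) {
            String name = "user" + i;
            int age = 18 + rnd.nextInt(60);      // plausible adult age
            rows.add(String.format("%s,%s@example.com,%d", name, name, age));
        }
        return rows;
    }

    public static void main(String[] args) {
        // A manual tester can import this CSV instead of typing 50 rows by hand.
        generateUsers(50, 42L).forEach(System.out::println);
    }
}
```

The fixed seed matters: every run of the generator produces identical data, so a bug report that references "user17" means the same record for everyone.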

Tests that do not need to be automated:

  • Tests that will be run only once. The only exception to this rule is a test that runs with a very large set of data: even if it runs only once, automating it may make sense.
  • User experience tests for usability (tests that require a user to respond as to how easy the app is to use).
  • Tests that need to be run ASAP. A newly developed feature usually requires quick feedback, so it is faster to test it manually at first.
  • Tests that require ad hoc/random testing based on domain knowledge/expertise — Exploratory Testing.
  • Intermittent tests. Tests without predictable results cause more noise than value. To get the best value out of automation the tests must produce predictable and reliable results in order to produce pass and fail conditions.
  • Tests that require visual confirmation. However, we can capture page images during automated testing and then have the images checked manually.
  • Tests that cannot be 100% automated should not be automated at all unless doing so will save a considerable amount of time.
  • Tests for requirements that change rapidly during development.

End-to-end tests cover a path through a system. They could arguably be defined as multi-step integration tests, and they should also be a “black box.” Typically, they interact with the product like a real user. Web UI tests are examples of integration tests because they need the full stack beneath them.

Why is the end-to-end structure a challenge? We need to understand what we will develop and test, why we need it, and how it will be implemented. Answering these questions, you can specify the following goals for test coverage:

  1. Understanding of the system flows
  2. Access to the systems
  3. Requirements for end-to-end flows
  4. Test Design for end-to-end flows
  5. Test data for each system
  6. Test environment readiness

Automation Testing Risk in Agile

  • Automated UI tests provide a high level of confidence, but they are slow to execute, fragile to maintain, and expensive to build. Automation may not significantly improve test productivity unless the testers know how to test
  • Unreliable tests are a major concern in automated testing. Fixing failing tests and resolving issues related to brittle tests should be a top priority in order to avoid false positives
  • If automated tests are initiated manually rather than through CI (Continuous Integration), there is a risk that they are not run regularly, so failures go unnoticed
  • Automated tests are not a replacement for exploratory manual testing. To obtain the expected quality of the product, a mixture of testing types and levels is required
  • Many commercially available automation tools provide simple features like automating the capture and replay of manual test cases. Such tools encourage testing through the UI and lead to inherently brittle, difficult-to-maintain tests. Also, storing test cases outside the version control system creates unnecessary complexity
  • To save time, the automation effort is often poorly planned or unplanned, which results in failing tests
  • Set-up and tear-down procedures are often overlooked during test automation, whereas in manual testing they feel seamless
  • Productivity metrics such as the number of test cases created or executed per day can be terribly misleading and could lead to a large investment in running useless tests
  • Members of the agile automation team must be effective consultants: approachable, cooperative, and resourceful, or this system will quickly fail
  • The automation team may propose and deliver testing solutions that require too much ongoing maintenance relative to the value provided
  • The automation team may lack the expertise to conceive and deliver effective solutions
  • The automation effort may be so successful that the team runs out of important problems to solve, and thus turns to unimportant problems.
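The set-up/tear-down risk in the list above has a simple structural cure: run the test body inside try/finally so clean-up happens even when the test throws. This is a minimal sketch; all names are illustrative, and real frameworks (JUnit's @BeforeEach/@AfterEach, for example) apply the same pattern.

```java
// Sketch of the set-up / tear-down discipline warned about above:
// tear-down must run even when the test body fails. Names are illustrative.
public class SetupTeardownSketch {

    static boolean resourceOpen = false;

    static void setUp()    { resourceOpen = true;  }   // e.g. start browser, seed DB
    static void tearDown() { resourceOpen = false; }   // e.g. quit browser, clean DB

    // Runs one test body with guaranteed tear-down, reporting pass/fail.
    public static boolean runTest(Runnable body) {
        setUp();
        try {
            body.run();
            return true;
        } catch (AssertionError | RuntimeException e) {
            return false;                  // the failure is recorded...
        } finally {
            tearDown();                    // ...but tear-down always happens
        }
    }

    public static void main(String[] args) {
        boolean first = runTest(() -> { /* passing test body */ });
        boolean second = runTest(() -> { throw new AssertionError("boom"); });
        System.out.println("first=" + first + " second=" + second
                + " resourceOpen=" + resourceOpen);
    }
}
```

Without the finally block, the second (failing) test would leave `resourceOpen` stuck at true, and every later test would start from a dirty state, which is exactly how brittle suites are born.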

Choosing programming language

This is just one example; there are many other cases that require knowledge of end-to-end automation, API testing, and performance and load testing.

In fact, the number of bugs matters if you are focused on reducing the time it takes to find them. Let’s say there are 10 critical bugs in the system, and I very quickly found two of them. That is really cool: two critical bugs were found before the presentation of the product. But I did not find the others before deploying, which means that 8 critical bugs remained undiscovered. In this case, the number of bugs is a key measure, even if we did not understand it at the time.

It is important to think in a slightly different way. The number of errors or their quality is not as important as the mechanisms by which they occur and, accordingly, the mechanisms for their search. There are many existing options:

  • Mechanisms that are good at finding bugs, but which work too long;
  • Mechanisms that find bugs poorly, but work very fast;
  • Mechanisms that are “prone” to notice bugs of a certain kind while completely missing others;
  • Mechanisms that do work but are unpopular, and go unused because no one on the team knows about them, so the bugs they could find remain undiscovered;
  • Mechanisms that work well and quickly and are capable of finding many bugs, but whose output is so vague that people cannot make decisions based on it.

Based on these conclusions, you need to choose the best option based on speed of execution and a universal approach to UI testing; here, Java plus Selenium WebDriver comes to the rescue.

Java is platform independent. It is widely used for testing J2EE projects, but also for web projects, especially big ones. It is a robust language that helps to execute long tests. It also integrates with tools like Hudson, Selenium Grid, QMetry, etc. Because of its popularity, you can find a large number of testing frameworks built with Java.

Selenium WebDriver is a free, open-source framework that provides a common application programming interface (API) for browser automation. Ideally, modern web browsers should all render a web application in the same way. However, each browser has its own rendering engine and handles HTML a little differently, which is why testing is needed to ensure that an application performs consistently across browsers and devices. The same browser compatibility issues that affect web applications could also affect automated web tests. But automated tests that use the Selenium client API can run against any browser with a WebDriver-compliant driver, including Chrome, Safari, Internet Explorer, Microsoft Edge, and Firefox. Selenium WebDriver can run on Windows, Linux, and macOS platforms. Tests can be written for Selenium WebDriver in any of the programming languages supported by the Selenium project, including Java, C#, Ruby, Python, and JavaScript. These programming languages communicate with Selenium WebDriver by calling methods in the Selenium client API.
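The key idea in that paragraph, one client API with interchangeable drivers, can be shown without installing Selenium at all. The sketch below uses a tiny stand-in interface (not the real `org.openqa.selenium.WebDriver`) and a stub "browser", so it is self-contained; in real Selenium you would swap `StubDriver` for `ChromeDriver` or `FirefoxDriver` and the test logic would not change.

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained sketch of the WebDriver idea: the test talks to one small
// API, and any "driver" implementing it can run the same test. MiniDriver
// is a stand-in interface, not the real org.openqa.selenium.WebDriver.
public class DriverContractSketch {

    public interface MiniDriver {
        void get(String url);          // navigate, analogous to WebDriver#get
        String getTitle();             // analogous to WebDriver#getTitle
    }

    // Stub "browser": maps URLs to page titles instead of really rendering.
    public static class StubDriver implements MiniDriver {
        private final Map<String, String> pages = new HashMap<>();
        private String current = "";
        public StubDriver() { pages.put("https://example.com", "Example Domain"); }
        public void get(String url) { current = url; }
        public String getTitle() { return pages.getOrDefault(current, "404"); }
    }

    // The test itself is driver-agnostic: it depends only on the interface.
    public static boolean titleCheck(MiniDriver driver) {
        driver.get("https://example.com");
        return "Example Domain".equals(driver.getTitle());
    }

    public static void main(String[] args) {
        System.out.println(titleCheck(new StubDriver()) ? "PASS" : "FAIL");
    }
}
```

Writing tests against the interface rather than a concrete browser is also what makes cross-browser runs on a grid possible: the same `titleCheck` can be pointed at any WebDriver-compliant implementation.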

Focusing on these aspects to the same extent as on other, better-known ones is important because it helps to avoid some traditional problems. For example, you run a hundred tests but do not find a single bug. That may be good, but only if there really are no bugs; if bugs are still there and the applied testing methods cannot reveal them, it is bad. Or the situation where I run a bunch of tests and find minor bugs while skipping the harder-to-find (critical) ones. A team must make certain decisions based on the tests performed. This means we have to believe what the test results tell us, and therefore we must trust the detection methods we have implemented in these tests. Detection capability must also be inherent in the environment and in testability itself, which determines how likely it is, in principle, that a test will surface a bug if one exists.

I want to conclude that I do not define the success of testing by any specific factors. But if you still want to define it for yourself, you should measure it not by the number of bugs and vulnerabilities found, and not by the quality of those bugs, but by the specific ability of your testing mechanisms to detect them.

Selenium Webdriver

Java road

Literature and Learning courses

Automation Testing process

  • Automation of UI (functional) tests
  • Automation of API tests
  • Automation of Performance / Load tests

The most common (and most important) functions of a project are usually found in the user interface (UI) and the API.
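An automated API check can be sketched with nothing but the JDK. Here a tiny local HTTP server stands in for the system under test so the example is self-contained and runnable; the `/health` endpoint and its JSON body are illustrative, not from any real product.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of an automated API test. A local stub server plays the system
// under test; the /health endpoint and its body are illustrative.
public class ApiSmokeTestSketch {

    // Starts a stub service exposing GET /health on an OS-chosen free port.
    public static HttpServer startStub() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        return server;
    }

    // The actual API check: both status code and body are asserted.
    public static boolean healthCheck(int port) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:" + port + "/health")).build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode() == 200
                && response.body().contains("\"status\":\"ok\"");
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = startStub();
        try {
            System.out.println(healthCheck(server.getAddress().getPort())
                    ? "PASS" : "FAIL");
        } finally {
            server.stop(0);      // tear-down runs even if the check throws
        }
    }
}
```

Because API tests skip the browser entirely, they run orders of magnitude faster than UI tests, which is why a healthy automation pyramid keeps most checks at this level.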

Automation Strategy

Tool Selection

Automated Process Introduction

Test Plan & Development

Execution

Review and Assessment

Reference

  1. Amir Ghahrai, How to Choose Which Tests to Automate? (2018)
  2. Michael Bolton, Which Test Cases Should I Automate? (2018)
  3. ISTQB Guide, What is fundamental test process in software testing? (2017)
  4. George Dinwiddie, Three amigos agile approach (2017)
  5. Dayana Stockdale, How to Develop an Automated Testing Strategy (2017)
  6. Mr. Slavchev, Hindsight lessons about automation: What tests to automate (2018)
  7. Mike Wacker, Just Say No to More End-to-End Tests (2015)
  8. Elemental Selenium, How To Structure Your Test Code (2017)
  9. Denis Koreyba, What should the ideal structure of a test automation project be? (2016)
