Test case building

A web app is typically composed of user flows. A user flow lets a user accomplish a specific goal; “sign-in with email & password” is an example. Performing a sign-in with email & password requires a certain sequence of steps, called interactions. We record and store the interaction chain of a test case in an intermediate representation. The corresponding Playwright code that exercises the UI is generated on the fly, just before test execution. The interaction chain can be examined in the test detail view and in the Playwright trace viewer.
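
As a hedged sketch of what such generated code could look like for the sign-in example, consider the Playwright test below; the URL, locators, and credentials are illustrative assumptions, not actual generator output.

```typescript
import { test, expect } from '@playwright/test';

test('sign-in with email & password', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByRole('textbox', { name: 'Email' }).fill('user@example.com');
  await page.getByRole('textbox', { name: 'Password' }).fill('secret');
  await page.getByRole('button', { name: 'Sign in' }).click();
  // Verify the goal of the flow was actually reached.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```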

Auto-maintenance

We follow a playbook to determine whether a test failure is caused by a behavioral change in your user flows, by the test code itself, or by a bug in your code. In the case of a behavioral change, we pinpoint the failing interaction and apply machine learning to detect the new interaction required to achieve the original goal of the test case. As a result, the interaction chain of the test case is permanently adjusted to the new behavior. This feature is under active development and not publicly accessible yet.
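
The following is a hypothetical sketch of that playbook in TypeScript. All type names, function names, and the repair logic are illustrative assumptions; they do not reflect the actual (not yet public) implementation.

```typescript
// Hypothetical auto-maintenance playbook; all names are illustrative.
type Interaction = { action: string; target: string };
type TestCase = { goal: string; chain: Interaction[] };
type Failure = {
  interactionIndex: number;
  cause: 'behavioral-change' | 'test-code-error' | 'app-bug';
};

// Stand-in for the ML-based step that proposes a replacement interaction.
async function proposeReplacement(goal: string, failed: Interaction): Promise<Interaction> {
  return { action: failed.action, target: `rediscovered target for "${goal}"` };
}

async function autoMaintain(testCase: TestCase, failure: Failure): Promise<TestCase> {
  // Only behavioral changes are repaired; test-code errors and app bugs are surfaced instead.
  if (failure.cause !== 'behavioral-change') return testCase;

  // Pinpoint the failing interaction and ask the model for a new one
  // that still achieves the test case's original goal.
  const failing = testCase.chain[failure.interactionIndex];
  const replacement = await proposeReplacement(testCase.goal, failing);

  // Adjust the interaction chain permanently to the new behavior.
  const chain = [...testCase.chain];
  chain[failure.interactionIndex] = replacement;
  return { ...testCase, chain };
}
```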

Issue pinpointing

When a test case rightfully fails, we help you quickly understand what went wrong by providing a set of tools:

  • Screenshots at the time of test failure
  • Execution log of the test cases
  • Playwright traces via the trace viewer (see the configuration sketch after this list)
  • Debugtopus tooling, which lets you run our tests locally, so you can set breakpoints and step through the code.
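
As a hedged illustration of how the first and third artifacts relate to plain Playwright, the following configuration captures failure screenshots and traces; the option values are assumptions for this sketch, not our production settings.

```typescript
// playwright.config.ts - illustrative values only
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    screenshot: 'only-on-failure', // screenshot at the time of test failure
    trace: 'retain-on-failure',    // trace file for the Playwright trace viewer
  },
});
```

A retained trace can then be opened locally with `npx playwright show-trace <path-to-trace.zip>`.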

Debugtopus

See details at Debug your code.

Debugtopus interaction diagram

Test suite runtime

Browser tests are not super fast, since they simulate a real user. To provide test results as fast as possible, we parallelize test execution to the max.
To do so, we are fully cloud-based and scale instances up and down as needed. We are also working on techniques to isolate test executions from one another, avoiding side effects and enabling better scaling.
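
For comparison, in a plain local Playwright project the degree of parallelism is controlled by two config options; the values below are a hedged sketch, not the configuration of our cloud scheduler.

```typescript
// playwright.config.ts - local parallelism knobs, illustrative values
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true, // parallelize tests within a file, not just across files
  workers: 8,          // upper bound on concurrent worker processes
});
```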

Flakiness

Test flakiness is the biggest problem with browser tests. Fighting flakiness is an active focus of research on our end. Some of the strategies we follow, each with its current status, are:

  • Active interaction timing to handle varying response times (work in progress; see the sketch after this list)
  • Smart learning-based retries to understand the flakiness level (open)
  • AI-based analysis of unexpected circumstances, handling temporary pop-ups, toasts, and similar elements that would otherwise break the test (open)
  • AI-based rediscovery in case of permanent user flow changes (open)
  • Coding rules and best practices for writing tests in a way that avoids common pitfalls (work in progress)
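
Related to active interaction timing, Playwright's web-first assertions already poll until a condition holds or a timeout expires, instead of relying on fixed sleeps. The sketch below shows that mechanism; the URL, locators, and timeout are illustrative assumptions, not our actual timing strategy.

```typescript
import { test, expect } from '@playwright/test';

test('waits actively instead of sleeping', async ({ page }) => {
  await page.goto('https://example.com');
  await page.getByRole('button', { name: 'Save' }).click();
  // Polls for up to 10 s for the toast to appear, absorbing varying
  // response times without inflating every run with a fixed wait.
  await expect(page.getByText('Saved')).toBeVisible({ timeout: 10_000 });
});
```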