1. What user flows do you cover?

For the time being, we cover basic user flows that happen inside a browser window. We don’t test canvas-based or multi-user applications yet.

We will add building blocks for more demanding scenarios over time, such as e-mail or mobile-phone-based flows, multi-user setups, or the inclusion of external apps.

2. How are my tests generated?

We use our AI agents for test case discovery and test step generation. We discover the interaction chain of each test case and store it in an intermediate representation. From that, we generate corresponding Playwright code on the fly and execute it on a manual trigger, on a schedule, or against your pull request.

We generate tests on sign-up, when you launch test discovery, and when you ask our AI agents to suggest more tests.
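To make the intermediate representation mentioned above more concrete, here is a hypothetical sketch of what an interaction chain could look like; the type and field names are purely illustrative and not Octomind’s actual internal format:

```typescript
// Hypothetical shape of an interaction chain – all names are illustrative only.
type Interaction =
  | { kind: "goto"; url: string }
  | { kind: "fill"; description: string; locatorHint: string; value: string }
  | { kind: "click"; description: string; locatorHint: string }
  | { kind: "assert"; description: string; locatorHint: string; expectedText: string };

// A discovered "sign in" flow, expressed as a chain of interactions.
const signInFlow: Interaction[] = [
  { kind: "goto", url: "https://example.com/login" },
  { kind: "fill", description: "enter e-mail", locatorHint: "input[name=email]", value: "user@example.com" },
  { kind: "fill", description: "enter password", locatorHint: "input[name=password]", value: "secret" },
  { kind: "click", description: "submit the form", locatorHint: "button[type=submit]" },
  { kind: "assert", description: "dashboard heading is shown", locatorHint: "h1", expectedText: "Dashboard" },
];
```

Each entry in such a chain can then be translated into one Playwright statement (see the next question).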

3. What code are you using for your tests?

We use the Playwright framework and generate tests as standard Playwright test code.
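For reference, a minimal test in standard Playwright test code looks like this (a hand-written example, not generated output):

```typescript
import { test, expect } from "@playwright/test";

// A minimal, hand-written test in standard Playwright test code.
test("user can sign in", async ({ page }) => {
  await page.goto("https://example.com/login");
  await page.getByLabel("E-mail").fill("user@example.com");
  await page.getByLabel("Password").fill("secret");
  await page.getByRole("button", { name: "Sign in" }).click();
  await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();
});
```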

4. How do you ensure the stability of your tests?

End-to-end tests are notoriously flaky. Some of our strategies to fight flakiness are listed below; a small configuration sketch follows the list:

  • Smart, learning-based retries
  • Active interaction timing (sleeps)
  • AI-based analysis of unexpected circumstances
  • Rediscovery in case of user flow changes
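Some of these levers also exist in plain Playwright. The playwright.config.ts below is a minimal sketch of retries and generous timeouts in your own project; it is our illustration, not Octomind’s configuration:

```typescript
import { defineConfig } from "@playwright/test";

// Illustrative settings that reduce flakiness in a plain Playwright project.
export default defineConfig({
  retries: 2,                      // re-run a failed test up to two times
  use: { actionTimeout: 15_000 },  // wait up to 15 s for clicks, fills, etc.
  expect: { timeout: 10_000 },     // wait up to 10 s for assertions to become true
});
```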

5. How can I run your tests locally?

Our open-source tool Debugtopus can pull the latest test case from our repository and execute it against your local environment. Learn how.

6. How does auto-maintenance work?

This feature is under active development and not publicly accessible yet. We follow a playbook to determine whether a test failure is caused by a behavioral change in your user flows, by the test code itself, or by a bug in your code.

In case of a behavioral change, we pinpoint the failing interaction. We apply machine learning to determine the new interaction that achieves the original goal of the test case, and the interaction chain of the test case is then permanently adjusted to the new behavior.
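Conceptually, the playbook can be pictured like the sketch below. Every name and decision in it is our own illustration of the process described above, not Octomind’s actual implementation:

```typescript
// Purely conceptual sketch of the auto-maintenance playbook – all names are illustrative.
type FailureCause = "behavioral-change" | "broken-test-code" | "application-bug";

interface Interaction { description: string; locatorHint: string }
interface TestCase { goal: string; interactionChain: Interaction[] }
interface TestFailure { failingStepIndex: number; message: string }

function classifyFailure(failure: TestFailure): FailureCause {
  // Stand-in for the AI-based analysis that decides what kind of failure occurred.
  return failure.message.includes("locator") ? "behavioral-change" : "application-bug";
}

function proposeNewInteraction(goal: string, broken: Interaction): Interaction {
  // Stand-in for the ML step that finds the new interaction achieving the original goal.
  return { ...broken, locatorHint: `new-locator-for-${goal}` };
}

function maintainTest(testCase: TestCase, failure: TestFailure): void {
  if (classifyFailure(failure) !== "behavioral-change") return; // report, do not auto-fix

  const broken = testCase.interactionChain[failure.failingStepIndex];
  // Permanently adjust the interaction chain to the new behavior.
  testCase.interactionChain[failure.failingStepIndex] = proposeNewInteraction(testCase.goal, broken);
}
```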

7. How do I write a good prompt?

See our section about prompting a new test case.

8. I do not use GitHub, can I use your tests?

Yes. Apart from GitHub, we offer a native integration for Azure DevOps and API-based integrations for Vercel and Jenkins. For all other build pipelines, you can script your own test trigger so that our test suite runs whenever you open a pull request. Learn more about our CI integration options.

Unfortunately, we won’t be able to comment back into your pipeline; instead, you’ll receive the test results through our app. You can also run us programmatically without using a CI, schedule regular test runs, or trigger test runs manually.
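As an illustration, a self-scripted trigger from any pipeline could look roughly like the sketch below. The endpoint, payload fields, and environment variable names are hypothetical placeholders; please check the actual API documentation for the real parameters:

```typescript
// Hypothetical trigger script – the URL, payload fields, and env variable names are placeholders.
async function triggerTestRun(): Promise<void> {
  const response = await fetch("https://api.example.com/test-runs", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-Key": process.env.OCTOMIND_API_KEY ?? "", // assumed authentication header
    },
    body: JSON.stringify({
      testTargetId: process.env.TEST_TARGET_ID, // which test suite to run (placeholder field)
      url: process.env.PREVIEW_URL,             // the deployment your pipeline just built
    }),
  });

  if (!response.ok) {
    throw new Error(`Triggering the test run failed: ${response.status}`);
  }
}

triggerTestRun();
```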

9. How can I get in touch with you?

Either use our Discord server or write us an email.

10. From which IP addresses are your tests run?

Please see our Data Governance page.

11. What is the User-Agent string of your test agent?

The User-Agent string of our test agent starts with “octomind”, followed by a current Chrome User-Agent string.

Example: octomind Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.6312.4 Safari/537.36
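If you want to recognize these test runs in your own logs, analytics, or firewall rules, a simple prefix check on the User-Agent header is enough; here is a minimal sketch:

```typescript
// Recognize requests from the test agent by its User-Agent prefix.
function isOctomindTestAgent(userAgent: string | undefined): boolean {
  return userAgent?.startsWith("octomind") ?? false;
}

// Example usage with a standard Fetch API Request object:
// isOctomindTestAgent(request.headers.get("user-agent") ?? undefined);
```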