We deploy AI to simulate user behavior and automatically generate end-to-end tests, followed by deterministic systems that generate and run the test code.
The Octomind AI agent mimics human users (e.g., clicking input fields, signing up for a newsletter) to navigate apps, interpret app intent, and identify all relevant user flows. This is how it runs when you sign up:
The agent looks at your site.
It first looks for a cookie banner and a 'required' login. A 'required' login is one you must complete to get further into the app.
When a 'required' login flow is found, we will ask you for your test login credentials.
The agent sets these 2 tests (cookie banner and login) as prerequisites (dependencies) for all new tests; see the sketch after this list.
Next, the agent automatically generates up to 3 meaningful tests.
If generation succeeds, it validates the tests and files them as 'active'.
We then run all active test cases.
You can have the agent generate more tests (up to 3 on each run) that run sequentially after the active test they were launched from.
You can prompt the AI agent to generate a specific test case.
You can run the AI agent to re-generate a broken test.
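To make the prerequisite chain concrete, here is a minimal TypeScript sketch of how such a dependency structure could be modeled. Every name and field in it is illustrative, not our actual schema.

```typescript
// Illustrative only: a minimal model of test cases with prerequisite
// dependencies. Names and fields are invented for this sketch.
type TestStatus = "draft" | "active" | "broken";

interface TestCase {
  id: string;
  description: string;
  status: TestStatus;
  prerequisites: string[]; // tests that must run (and pass) first
}

const cookieBanner: TestCase = {
  id: "cookie-banner",
  description: "Dismiss the cookie banner",
  status: "active",
  prerequisites: [],
};

const requiredLogin: TestCase = {
  id: "required-login",
  description: "Log in with the provided test credentials",
  status: "active",
  prerequisites: ["cookie-banner"],
};

// Every newly generated test depends on both prerequisite tests.
const newsletterSignup: TestCase = {
  id: "newsletter-signup",
  description: "Sign up for the newsletter",
  status: "active",
  prerequisites: [cookieBanner.id, requiredLogin.id],
};

// Resolve the order in which a test and its prerequisites must run.
function executionOrder(test: TestCase, all: Map<string, TestCase>): string[] {
  const order: string[] = [];
  const visit = (t: TestCase) => {
    for (const id of t.prerequisites) {
      const dep = all.get(id);
      if (dep && !order.includes(dep.id)) visit(dep);
    }
    if (!order.includes(t.id)) order.push(t.id);
  };
  visit(test);
  return order;
}
```

Calling `executionOrder(newsletterSignup, allTests)` would return the cookie-banner test first, then the login test, then the new test itself, matching the order described above.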
AI agent knows what to test
Our AI agent traverses the publicly accessible code in the DOM and uses the vision capability of the underlying multimodal LLM to add visual context when code insight is insufficient. It tries to understand the purpose of your site to generate relevant end-to-end tests.
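A rough sketch of this two-signal approach using Playwright; `askVisionModel` is a hypothetical stand-in for the multimodal LLM call, and the element-count threshold is an assumption made for illustration.

```typescript
import { chromium } from "playwright";

// Hypothetical stand-in for the multimodal LLM vision call.
declare function askVisionModel(png: Buffer, prompt: string): Promise<string>;

async function surveyPage(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // DOM pass: gather the elements a user could plausibly interact with.
  const interactive = await page
    .locator("a, button, input, select, textarea, [role='button']")
    .evaluateAll((els) =>
      els.map((el) => ({
        tag: el.tagName.toLowerCase(),
        text: (el.textContent ?? "").trim().slice(0, 80),
      }))
    );

  // Vision fallback: if the DOM yields little signal (e.g. a canvas-heavy
  // page), add a screenshot so the model gets visual context.
  let visualContext: string | undefined;
  if (interactive.length < 5) {
    const png = await page.screenshot({ fullPage: true });
    visualContext = await askVisionModel(png, "What can a user do on this page?");
  }

  await browser.close();
  return { interactive, visualContext };
}
```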
AI agent generates tests, step by step
Our AI mimics user behavior, interacting with apps as a human would to reach the goal of a user flow. These interactions are represented as test steps.
We record and store each test case's interaction chain and generate the corresponding Playwright code deterministically on the fly, immediately prior to test execution.
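A minimal sketch of that deterministic mapping, assuming a simplified step shape invented for illustration (not the actual stored format):

```typescript
// Each recorded interaction maps 1:1 to a Playwright statement.
type Step =
  | { kind: "goto"; url: string }
  | { kind: "click"; selector: string }
  | { kind: "fill"; selector: string; value: string }
  | { kind: "expectVisible"; selector: string };

function toPlaywright(steps: Step[]): string {
  const body = steps
    .map((s) => {
      switch (s.kind) {
        case "goto":
          return `  await page.goto(${JSON.stringify(s.url)});`;
        case "click":
          return `  await page.click(${JSON.stringify(s.selector)});`;
        case "fill":
          return `  await page.fill(${JSON.stringify(s.selector)}, ${JSON.stringify(s.value)});`;
        case "expectVisible":
          return `  await expect(page.locator(${JSON.stringify(s.selector)})).toBeVisible();`;
      }
    })
    .join("\n");

  return [
    `import { test, expect } from "@playwright/test";`,
    ``,
    `test("generated test", async ({ page }) => {`,
    body,
    `});`,
  ].join("\n");
}

// Example: a recorded newsletter-signup chain.
const chain: Step[] = [
  { kind: "goto", url: "https://example.com" },
  { kind: "fill", selector: "#email", value: "user@example.com" },
  { kind: "click", selector: "button[type=submit]" },
  { kind: "expectVisible", selector: ".confirmation" },
];
console.log(toPlaywright(chain));
```

Because the mapping is a pure function of the stored chain, the same interaction chain always yields the same Playwright code.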
AI auto-maintenance
work in progress
Octomind automatically determines whether a test failure is caused by a behavioral change in the user flow, the test code itself, or a bug in your application.
In the case of a behavioral change, we pinpoint the failing interactions and deploy the AI agent to detect the new desired interactions that will let us achieve the test case's goals.
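A sketch of that decision logic; `classifyFailure` and `findNewInteraction` are hypothetical stand-ins for the failure classifier and the repair agent, not real APIs.

```typescript
// Simplified step shape, as in the generation sketch above.
type Step = { kind: string; selector?: string; value?: string; url?: string };
type FailureCause = "behavioral-change" | "test-code" | "app-bug";

// Hypothetical stand-ins for the classifier and the repair agent.
declare function classifyFailure(log: string): Promise<FailureCause>;
declare function findNewInteraction(goal: string, failedIndex: number): Promise<Step | null>;

async function autoMaintain(steps: Step[], goal: string, failedIndex: number, log: string) {
  const cause = await classifyFailure(log);
  if (cause !== "behavioral-change") {
    // Broken test code and real application bugs are reported, not patched over.
    return { repaired: false, cause };
  }
  // Behavioral change: ask the agent for an interaction that still reaches
  // the test case's goal from the failing step.
  const replacement = await findNewInteraction(goal, failedIndex);
  if (!replacement) return { repaired: false, cause };

  const patched = [...steps];
  patched[failedIndex] = replacement;
  return { repaired: true, cause, steps: patched };
}
```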
fight flaky tests with AI
work in progress
We already deploy a variety of techniques to fight test flakiness. Yet it's not enough.
That's why we are investigating the best ways to implement AI-based analysis of unexpected site behavior, like temporary pop-ups or toasts, and further improve testing performance.
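One example of such a technique, sketched in Playwright; the selectors and the single-retry policy are illustrative assumptions, not necessarily what we deploy.

```typescript
import type { Page } from "@playwright/test";

// Retry an interaction once after dismissing an unexpected overlay such as
// a temporary pop-up or toast. Selectors here are illustrative assumptions.
async function withOverlayRetry(page: Page, action: () => Promise<void>) {
  try {
    await action();
  } catch (err) {
    // If an overlay may have intercepted the interaction, close it and retry.
    const dismiss = page
      .locator("[aria-label='Close'], .toast button, .modal .close")
      .first();
    if (await dismiss.isVisible().catch(() => false)) {
      await dismiss.click();
      await action();
    } else {
      throw err;
    }
  }
}

// Usage: wrap a click that is occasionally blocked by a toast.
// await withOverlayRetry(page, () => page.click("#checkout"));
```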
blogtopus: on AI, agents and app testing
50% developer rants, 50% tech news commentary, 100% cephalopod opinions on all things software. It's worth the read. Guaranteed.
We have launched the beta version. Browse our docs at your own pace or request a personalized demo if you need to talk to our devs. Or just start for free, today.