1. Give us a URL

We’ll ask you for a URL to generate test cases from. The URL has to be publicly accessible. We can test both staging and production environments, as long as we can reach the site.

First page of the setup flow - link to your website, screenshot 10/2024

2. Name your project

The second page renders a screenshot of the URL you provided, which lets us check that it is accessible. You can now name your project. We’ll propose a name based on the URL, but you can choose your own.

Second page of the setup flow: Name your project, screenshot 07/2024

3. Sign up

Now we need to sign you up. Please provide your email so we can get in touch.

Third page of the setup flow - create account, screenshot 10/2024

You should receive a confirmation email containing a link to confirm your email address and set your password.

4. Open the Octomind app for the first time

After being redirected back to app.octomind.dev, you will land on your project overview page.

Project overview page after sign-up, screenshot 10/2024

Our AI agent might still be going through your website, checking whether there is a cookie banner or a required login that needs to be tested.

If the AI agent finds a required login, it will ask you for test credentials, use them to generate and run a login test, and add that test as a dependency for your other tests. The cookie banner test, if one is found, is handled the same way.

These credentials only work for username/password logins, not for social logins. Logging in with your Google, Facebook, or similar account will not work.

AI agent asks for test credentials to auto-generate and run a login test, screenshot 07/2024

5. We are auto-generating test cases for you

Here comes the really cool part. Once we have finished searching for a potential login and cookie banner test, we automatically start generating three test cases for you. You can follow the generation progress in the stack:

Stack showing ongoing AI-tasks, screenshot 10/2024

It’s possible that the AI agent fails to generate a test (here are some reasons why) or needs help with something. This is how you edit a yellow test case to nudge the agent in the right direction.

6. Run your test suite to check your app for bugs

We will execute your generated test cases and create test reports containing the results, verifying that the tests pass when executed against your site.
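Test reports can also be triggered from a CI pipeline instead of the UI. The sketch below only builds the HTTP request that such a trigger might send; the endpoint path, header name, and payload field are assumptions for illustration, so check the Octomind API documentation for the real ones.

```python
# Hypothetical sketch: building (not sending) an API request that
# would trigger a test report from CI. Endpoint path, header name,
# and payload fields are assumptions -- consult the API docs.
import json
import urllib.request


def build_trigger_request(test_target_id: str, api_key: str, url: str) -> urllib.request.Request:
    """Build the POST request that would start a test run (assumed schema)."""
    endpoint = f"https://app.octomind.dev/api/apiKey/v2/test-targets/{test_target_id}/execute"  # assumed path
    payload = json.dumps({"url": url}).encode()
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},  # assumed header name
        method="POST",
    )


req = build_trigger_request("my-target-id", "my-api-key", "https://staging.example.com")
print(req.get_method(), req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) would then start a run against your site from a pipeline step.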

Automatically executed test reports - screenshot 10/2024

7. Evaluate your test results

Inside each test report, you can find the test results for the executed test cases:

  • a green test result indicates a successful test run, meaning that your site passed the test described in the test case
  • a red test result indicates a test failure, meaning we could not successfully run the test case steps. The reason might be either a bug in your app or a broken test.

Click on a failed test result to see which step is broken and diagnose the reason for the failure.
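The green/red evaluation above boils down to checking the status of each test result. As a minimal sketch, assuming a hypothetical report structure (the field names below are illustrations, not the real Octomind report schema):

```python
# Minimal sketch of evaluating test results programmatically.
# The dict shape and field names ("testResults", "status") are
# hypothetical, not the real Octomind report schema.
report = {
    "testResults": [
        {"testCase": "login", "status": "PASSED"},
        {"testCase": "cookie banner", "status": "PASSED"},
        {"testCase": "checkout", "status": "FAILED"},
    ]
}

# A red test is any result that did not pass; it then needs debugging
# to tell a bug in the app apart from a broken test.
failed = [r["testCase"] for r in report["testResults"] if r["status"] != "PASSED"]

if failed:
    print("red tests to debug:", ", ".join(failed))  # prints "red tests to debug: checkout"
else:
    print("all tests green")
```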

Find out more about test reports and how to debug your red tests

First test report - screenshot 07/2024

8. Go to test cases to generate more tests

You can grow your test suite by adding more test cases. To do this, jump straight to the test case view using the go to test cases button in the test report.

Go to the test case section to see the generated test cases - screenshot 10/2024

Your first active test cases should now be generated. The AI agent has found several test cases, auto-generated the test steps, and run them to validate that they work.

Functioning tests are turned on, which means they will run whenever you trigger a test report.

First active test cases auto-generated - screenshot 07/2024

Next steps

Use auto-generation to generate more test cases based on existing ones, or prompt our AI agent to create new ones.

If you are happy with the test cases we generated for you, you can set up scheduling to run your tests periodically and ensure your site doesn’t break.