Long running background tasks
Test orchestration of flows with long-running parts outside of the app
This is how you end-to-end test a user flow that involves a long-running “background sequence”: a part of the flow that happens outside of the app, where the browser can’t connect. Imagine tasks that wait for a user to do something manually, or for external physical systems that have to act or respond.
When testing applications that involve long-running backend tasks, managing test execution efficiently is crucial. Instead of one test that blocks until the task completes, a better approach is to split the test into two phases: preparation and validation. This allows for a structured and efficient workflow while leveraging Octomind’s CLI for execution control.
The two-phase testing approach
- preparation phase: This phase initiates the long-running backend task. It sets up the necessary conditions and triggers the backend process without waiting for it to complete.
- validation phase: After an appropriate waiting period, this phase verifies the results of the backend task to ensure correctness.
Using Octomind’s CLI, we can trigger tests using tags to control execution. This allows us to specify which tests belong to the preparation phase and which belong to the validation phase.
Setting up Octomind CLI
To use Octomind’s CLI, install it via npm. Then set your Octomind API key for further use:
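A minimal setup sketch; the npm package name and the environment variable below are assumptions, so check the Octomind CLI documentation for the exact install command and how the key is supplied (it may also go through a flag or an interactive init step):

```bash
# Install the Octomind CLI globally via npm
# (assumption: package published as @octomind/octomind)
npm install -g @octomind/octomind

# Make the API key available to later CLI calls
# (assumption: the variable name is illustrative; the CLI may expect
# the key via a flag or a config file instead)
export OCTOMIND_API_KEY="your-api-key"
```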
Running the two-phase test
A test run will follow this flow:
1. Preparation phase: Trigger backend task
Set up a specific tag, for example `preparation`, to run only the test cases that set up the environment and start the long-running task:
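A hedged example of such a trigger; the subcommand and flag names below are assumptions, so consult `npx @octomind/octomind --help` for the actual interface:

```bash
# Run only the tests tagged "preparation"
# (assumption: subcommand and flag names are illustrative)
npx @octomind/octomind execute \
  --test-target-id "$TEST_TARGET_ID" \
  --url "http://localhost:5000" \
  --tags preparation
```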
This command will execute all tests associated with the `preparation` tag, ensuring that the backend task is triggered.
2. Validation phase: Check results
Once the backend task has had enough time to complete, run the validation tests using a different tag, let’s say `validation`:
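Again a hedged sketch, with the same caveats about the exact CLI interface:

```bash
# Run only the tests tagged "validation"
# (assumption: subcommand and flag names are illustrative)
npx @octomind/octomind execute \
  --test-target-id "$TEST_TARGET_ID" \
  --url "http://localhost:5000" \
  --tags validation
```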
This executes only the tests that check whether the backend task completed successfully and produced the expected results.
Example: Testing a demo app
Let’s consider a demo application that processes user data asynchronously. The preparation phase will trigger data processing, while the validation phase will check if the processing is complete. To see how this works, we created such a demo app: https://github.com/sker65/orchestration-demo/.
It is written in Python and uses Flask and Celery. Flask provides an API and a web UI, while Celery runs the background tasks. The application can be started using Docker:
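For example, assuming the compose file sits at the repository root:

```bash
# Clone the demo app and start Flask, Celery and the private location worker
git clone https://github.com/sker65/orchestration-demo.git
cd orchestration-demo
docker compose up -d
```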
The demo app’s Docker Compose setup also contains a private location worker, so that the locally running app can be tested from Octomind’s cloud testing platform.
Preparation: Start processing
This test case will use the web UI to start a background task.
Wait for the background task to finish
After the test case execution is triggered, we use a bash script to monitor progress. This is straightforward, since we can query the app’s API for tasks and their status:
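A sketch of such a polling script, assuming the demo app exposes a JSON endpoint like `/api/tasks` that lists tasks with a `status` field (the endpoint path, field names and status values are assumptions and may differ in the actual demo app):

```bash
#!/usr/bin/env bash
# Poll the demo app until no background task is left in a non-final state.
APP_URL="${APP_URL:-http://localhost:5000}"

while true; do
  # Count tasks that are neither successful nor failed yet
  pending=$(curl -s "$APP_URL/api/tasks" \
    | jq '[.[] | select(.status != "success" and .status != "failure")] | length')
  if [ "$pending" -eq 0 ]; then
    echo "All background tasks finished."
    break
  fi
  echo "$pending task(s) still running, waiting..."
  sleep 10
done
```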
Validation: Verify processing completion
After the background tasks have finished, we validate the result. This could correspond to a check in the UI that ensures there is a task in the state `success`.
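Putting the phases together, a CI job could chain them roughly like this (same assumptions about the CLI flags and the polling endpoint as in the earlier snippets; `wait-for-tasks.sh` stands for the monitoring script sketched above):

```bash
#!/usr/bin/env bash
set -euo pipefail

# 1. Preparation: run the tests tagged "preparation" to start the background work
npx @octomind/octomind execute --test-target-id "$TEST_TARGET_ID" \
  --url "$APP_URL" --tags preparation

# 2. Wait: poll the demo app's API until the background tasks are done
./wait-for-tasks.sh

# 3. Validation: run the tests tagged "validation" to verify the results
npx @octomind/octomind execute --test-target-id "$TEST_TARGET_ID" \
  --url "$APP_URL" --tags validation
```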
Conclusion
By splitting tests into preparation and validation phases and leveraging the Octomind CLI with tags, you can efficiently manage long-running backend tasks without unnecessary delays. This gives you a structured test execution strategy while keeping your CI/CD pipeline efficient and responsive.