NOTE: This page is currently in development and will be updated as the feature is developed.

Use adaptive testing to optimize test runs as follows:

* Run only tests that are impacted by code changes.
* Evenly distribute tests across parallel execution nodes.

Adaptive testing reduces test execution time while maintaining test confidence.

== Is my project a good fit for adaptive testing?

Adaptive testing is typically most beneficial in the following scenarios:

* Unit and integration tests that exercise code within the same repository.
* Projects with comprehensive test coverage. The more thorough your tests, the more precisely adaptive testing can identify which tests are impacted by changes.
* Test frameworks with built-in coverage support (Jest, pytest, Go test, Vitest) where generating coverage reports is straightforward.

TIP: In codebases with sparse test coverage, adaptive testing cannot accurately determine which tests cover changed code. This causes the system to run more tests, reducing the benefits of intelligent test selection.

== Limitations

* Generating code coverage data is essential for determining how tests relate to code. If your tests are run in a way that makes generating and accessing coverage data difficult, adaptive testing may not be a good fit.
* Adaptive testing must be configured with commands to discover all available tests and to run a subset of those tests. If you cannot discover tests and run a subset of them from the command line, adaptive testing may not be a good fit.
* Adaptive testing works best when testing a single deployable unit. A monorepo that runs integration tests across many packages at once may not be a good fit.

== Key benefits

* Faster CI/CD pipelines through intelligent test selection.
* Optimized resource usage and cost efficiency.
* Fast feedback loops for development teams.
* Efficient scaling as test suites grow.

== How it works

Adaptive testing operates through two main components that work together to optimize your test execution:

* Dynamic test splitting
* Test impact analysis

Each component is described in more detail below.

=== Dynamic test splitting

Dynamic test splitting distributes your tests across parallel execution nodes. The system maintains a shared queue that each node pulls from to create a balanced workload.
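
As an illustration, here is a minimal sketch of a test job that gives dynamic splitting parallel nodes to balance work across. The job name, image, and steps are placeholders and not part of this feature's required configuration; `parallelism` is the standard CircleCI job key.

[source,yaml]
----
# .circleci/config.yml (sketch; job name, image, and steps are placeholders)
jobs:
  test:
    docker:
      - image: cimg/node:lts
    parallelism: 4   # each of the four nodes pulls test batches from the shared queue
    steps:
      - checkout
      # run the test suite here; dynamic splitting balances the batches across the nodes
----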

The two most common causes for this:

* The tests were run with a different job name. In this case, rerunning the job should find timing data.
* The `<< outputs.junit >>` template variable is not set up correctly. Ensure that the run command uses the template variable and that the `store_test_results` step provides a path to a directory so that all batches of `<< outputs.junit >>` are stored, as sketched below.
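
To illustrate the second point, a minimal sketch, assuming the JUnit files land in a directory named `test-reports` (the run step itself is omitted here):

[source,yaml]
----
# .circleci/config.yml (sketch; only the relevant step is shown, directory name is an assumption)
steps:
  # the suite's run command writes each batch's JUnit XML via << outputs.junit >>
  - store_test_results:
      path: test-reports   # a directory, so every batch's results file is kept
----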

If the tests are still slower, the test runner being used might have significant startup time. Startup time can slow down dynamic batching because each batch incurs that startup cost.

Add the `dynamic-batching: false` option to `.circleci/test-suites.yml` to disable dynamic batching.
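
For example, a minimal sketch, assuming the option sits under the suite's `options:` key (adjust the placement to match your `.circleci/test-suites.yml` schema):

[source,yaml]
----
# .circleci/test-suites.yml (sketch; placement under options is an assumption)
options:
  dynamic-batching: false   # disable dynamic batching
----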

The goal of this section is to enable adaptive testing for your test suite.

=== 2.1 Update the test suites file

When using adaptive testing for test impact analysis, the following commands are used:

* The `discover` command discovers all tests in a test suite.
* The `run` command runs only the impacted tests.
* The `analysis` command, which is new, analyzes each impacted test.

. Update the `.circleci/test-suites.yml` file to include a stubbed analysis command.
. Update the `.circleci/test-suites.yml` file to include the option `adaptive-testing: true`, as sketched below.
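
A rough sketch of both updates for a Jest-based suite. The layout of `.circleci/test-suites.yml` shown here is an assumption, as are the Jest flags and the `jest-junit` reporter; adapt the structure and commands to your own schema and test runner.

[source,yaml]
----
# .circleci/test-suites.yml (sketch; layout and commands are assumptions)
options:
  adaptive-testing: true
discover: npx jest --listTests
run: >
  JEST_JUNIT_OUTPUT_FILE=<< outputs.junit >>
  npx jest --runTestsByPath << test.atoms >> --reporters=default --reporters=jest-junit
analysis: echo "stub"   # stubbed for now; replaced with a real analysis command in a later step
----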

* `<< outputs.lcov >>`: Coverage data in LCOV format.
* `<< outputs.go-coverage >>`: Coverage data in Go coverage format.

The coverage location does not need to be set in the outputs map. A temporary file is created during analysis and referenced through the template variable in the analysis command.

. Update your `.circleci/test-suites.yml` file with the analysis command.

*Checklist*

* The `analysis` command defines `<< test.atoms >>` to pass in the test, or accepts the test on stdin.
* The `analysis` command defines `<< outputs.lcov|go-coverage|gcov >>` to write coverage data.

*Examples of `analysis` commands*
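
For example, a hedged sketch of a Jest-based `analysis` command. The Jest coverage flags are standard Jest options; the multi-line command form and the copy of `coverage/lcov.info` into the output template variable are assumptions about how to supply the data.

[source,yaml]
----
# Sketch only: run the impacted tests with coverage and copy the LCOV report to the template variable
analysis: |
  npx jest --runTestsByPath << test.atoms >> --coverage --coverageReporters=lcov
  cp coverage/lcov.info << outputs.lcov >>
----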

This section will run analysis on a feature branch to seed the initial impact data.

*Checklist*

* The step output includes the prefix `Running impact analysis`.
* The step output finds files impacting a test (for example, found 12 files impacting test `src/foo.test.ts`).

Now the test suite is set up, test selection is working, and test analysis is in place.

*Checklist*

* The `.circleci/config.yml` is set up to run analysis on the default branch.
* The `.circleci/config.yml` is set up to run selection on non-default branches.
* The `.circleci/config.yml` is set up to use high parallelism on the analysis branch.

=== Examples

==== Run analysis on a branch named `main` and selection on all other branches

No changes are required; this is the default setting.

==== Run analysis on a branch named `master` and selection on all other branches

.CircleCI configuration for running analysis on a branch named `master` and selection on all other branches
[source,yaml]
----
# .circleci/config.yml
jobs:
  test:
    # ...
    steps:
      # ...
      - store_test_results:
          path: test-reports
----

==== Run higher parallelism on the analysis branch

.CircleCI configuration for running parallelism of 10 on the main branch and 2 on all other branches
[source,yaml]
----
# .circleci/config.yml
jobs:
  test:
    # ...
    steps:
      # ...
      - store_test_results:
          path: test-reports
----

[#run-analysis-on-scheduled-pipeline]
==== Run analysis on a scheduled pipeline and timebox some analysis on main

.CircleCI configuration for running analysis only on scheduled pipelines
[source,yaml]
----
# .circleci/config.yml
# ...
workflows:
  # ...
    jobs:
      - test
----

.Test suite config: set a time limit of 10 minutes for analysis on the main branch
[source,yaml]
----
# .circleci/test-suites.yml
# ...
----

The frequency depends on your test execution speed and development pace:

*Consider re-running analysis:*

* After major refactoring or code restructuring
* When test selection seems inaccurate or outdated
* After adding significant new code or tests

*Remember:* You can customize which branches run analysis through your CircleCI configuration; it does not have to be limited to the main branch.

=== Can I customize the `test-suites.yml` commands?

Yes, you can fully customize commands by defining `discover`, `run`, and `analysis` commands.

*Requirements when customizing:*

* Ensure your commands properly handle test execution.
* Generate valid coverage data for the analysis phase.
* Use the correct template variables (`<< test.atoms >>`, `<< outputs.junit >>`, `<< outputs.lcov >>`).
* Output test results in a format CircleCI can parse (typically JUnit XML), as in the sketch below.
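
For instance, a hedged sketch of fully customized commands for a pytest project. The file layout and what the `discover` output must look like are assumptions, and `pytest`, `pytest-cov`, and `coverage.py` are assumed to be installed.

[source,yaml]
----
# .circleci/test-suites.yml (sketch; layout is an assumption)
discover: python -m pytest --collect-only -q   # you may need to filter pytest's trailing summary lines
run: python -m pytest << test.atoms >> --junitxml=<< outputs.junit >>
analysis: |
  python -m pytest << test.atoms >> --cov=.
  python -m coverage lcov -o << outputs.lcov >>
----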

=== What happens if no tests are impacted by a change?

=== Can I run analysis on branches other than main?

Yes! The branch behavior is fully customizable through your CircleCI configuration. While analysis typically runs on `main` by default, you can configure it to run on any of the following:

* Any specific branch (for example, `develop` or `staging`).
* Multiple branches simultaneously.
* Feature branches if needed for testing.
* Scheduled pipelines independent of branch.

Refer to <<run-higher-parallelism-on-the-analysis-branch,Run higher parallelism on the analysis branch>> for an example of customizing branch behavior.

=== Can I run test analysis and selection on any branch?

Yes! The branch behavior is fully customizable through your CircleCI configuration. While analysis runs on `main` by default, you can configure it to run on any of the following:

* Any specific branch (for example, `develop` or `staging`).
* Feature branches if needed for testing.
* Scheduled pipelines.

Refer to <<run-higher-parallelism-on-the-analysis-branch,Run higher parallelism on the analysis branch>> for an example of customizing branch behavior.

[#baseline-coverage]
=== Why are there so many files impacting a test?

If you see many files impacting each test during analysis (for example, "...found 150 files impacting test..."), this may be caused by shared setup code like global imports or framework initialization being included in coverage.

This extraneous coverage can be excluded by providing an `analysis-baseline` command to compute the code that is covered during startup but not directly exercised by test code. We call this "baseline coverage data".

The `analysis-baseline` command must produce coverage output written to a coverage template variable. The baseline coverage data can be in any supported coverage format. While it does not need to match your test coverage output format, using the same format (for example, LCOV format for `<< outputs.lcov >>`) is recommended for consistency.

. Create a minimal test that only does imports and setup (no test logic). In the example below this is called `src/baseline/noop.test.ts`.
. Add an `analysis-baseline` command to your test suite. This command will be broadly similar to your `analysis` command, except that it should only run the minimal test.

The `analysis-baseline` command runs just before analysis. The coverage data it produces is subtracted from each test's coverage during analysis. Rerun analysis and you should see fewer impacting files per test.
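
For example, a hedged sketch of an `analysis-baseline` command for a Jest suite, mirroring the earlier analysis sketch; the flags and the copy step are assumptions.

[source,yaml]
----
# .circleci/test-suites.yml (sketch; runs only the minimal baseline test)
analysis-baseline: |
  npx jest --runTestsByPath src/baseline/noop.test.ts --coverage --coverageReporters=lcov
  cp coverage/lcov.info << outputs.lcov >>
----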

=== What test frameworks are supported?

Adaptive testing is runner-agnostic. We provide default configurations for the following:

* Cypress (E2E testing)
* Vitest

The key requirement is that your test runner can generate coverage data in a parsable format (currently, we support LCOV and Go's "legacy coverage" format).