Controlling How Tests Are Run

Just as cargo run compiles your code and then runs the resultant binary, cargo test compiles your code in test mode and runs the resultant test binary. The default behavior of the binary produced by cargo test is to run all the tests in parallel and capture output generated during test runs, preventing the output from being displayed and making it easier to read the output related to the test results. You can, however, specify command line options to change this default behavior.

Some command line options go to cargo test, and some go to the resultant test binary. To separate these two types of arguments, you list the arguments that go to cargo test followed by the separator -- and then the ones that go to the test binary. Running cargo test --help displays the options you can use with cargo test, and running cargo test -- --help displays the options you can use after the separator.
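For instance, the following invocation passes --no-fail-fast (a cargo test option that keeps running tests after one fails) to cargo test itself, and --test-threads=1 (a test binary option covered in the next section) after the separator:

    $ cargo test --no-fail-fast -- --test-threads=1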

Running Tests in Parallel or Consecutively

When you run multiple tests, by default they run in parallel using threads, meaning they finish running faster and you get feedback quicker. Because the tests are running at the same time, you must make sure your tests don’t depend on each other or on any shared state, including a shared environment, such as the current working directory or environment variables.

For example, say each of your tests runs some code that creates a file on disk named test-output.txt and writes some data to that file. Then each test reads the data in that file and asserts that the file contains a particular value, which is different in each test. Because the tests run at the same time, one test might overwrite the file in the time between another test writing and reading the file. The second test will then fail, not because the code is incorrect but because the tests have interfered with each other while running in parallel. One solution is to make sure each test writes to a different file; another solution is to run the tests one at a time.
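A minimal sketch of this interference pattern might look like the following (the test names here are hypothetical); each test passes on its own but can fail when run in parallel with the other:

    use std::fs;

    #[test]
    fn writes_and_checks_a() {
        // Write "a", then expect to read "a" back. If the other test
        // overwrites test-output.txt between these two calls, this
        // assertion fails even though the code itself is correct.
        fs::write("test-output.txt", "a").unwrap();
        let contents = fs::read_to_string("test-output.txt").unwrap();
        assert_eq!(contents, "a");
    }

    #[test]
    fn writes_and_checks_b() {
        // Same pattern with a different value, racing on the same file.
        fs::write("test-output.txt", "b").unwrap();
        let contents = fs::read_to_string("test-output.txt").unwrap();
        assert_eq!(contents, "b");
    }

Giving each test its own file name, such as test-output-a.txt and test-output-b.txt, removes the shared state and lets the tests run safely in parallel.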

If you don’t want to run the tests in parallel or if you want more fine-grained control over the number of threads used, you can send the --test-threads flag and the number of threads you want to use to the test binary. Take a look at the following example:

    $ cargo test -- --test-threads=1

We set the number of test threads to 1, telling the program not to use any parallelism. Running the tests using one thread will take longer than running them in parallel, but the tests won’t interfere with each other if they share state.

Showing Function Output

By default, if a test passes, Rust’s test library captures anything printed to standard output. For example, if we call println! in a test and the test passes, we won’t see the println! output in the terminal; we’ll see only the line that indicates the test passed. If a test fails, we’ll see whatever was printed to standard output with the rest of the failure message.

As an example, Listing 11-10 has a silly function that prints the value of its parameter and returns 10, as well as a test that passes and a test that fails.

Filename: src/lib.rs

    fn prints_and_returns_10(a: i32) -> i32 {
        println!("I got the value {a}");
        10
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn this_test_will_pass() {
            let value = prints_and_returns_10(4);
            assert_eq!(value, 10);
        }

        #[test]
        fn this_test_will_fail() {
            let value = prints_and_returns_10(8);
            assert_eq!(value, 5);
        }
    }
Listing 11-10: Tests for a function that calls println!

When we run these tests with cargo test, we’ll see the following output:

    $ cargo test
       Compiling silly-function v0.1.0 (file:///projects/silly-function)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.58s
         Running unittests src/lib.rs (target/debug/deps/silly_function-160869f38cff9166)

    running 2 tests
    test tests::this_test_will_fail ... FAILED
    test tests::this_test_will_pass ... ok

    failures:

    ---- tests::this_test_will_fail stdout ----
    I got the value 8
    thread 'tests::this_test_will_fail' panicked at src/lib.rs:19:9:
    assertion `left == right` failed
      left: 10
     right: 5
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

    failures:
        tests::this_test_will_fail

    test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

    error: test failed, to rerun pass `--lib`

Note that nowhere in this output do we see I got the value 4, which is printed when the test that passes runs. That output has been captured. The output from the test that failed, I got the value 8, appears in the section of the test summary output, which also shows the cause of the test failure.

If we want to see printed values for passing tests as well, we can tell Rust to also show the output of successful tests with --show-output:

    $ cargo test -- --show-output

When we run the tests in Listing 11-10 again with the --show-output flag, we see the following output:

    $ cargo test -- --show-output
       Compiling silly-function v0.1.0 (file:///projects/silly-function)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.60s
         Running unittests src/lib.rs (target/debug/deps/silly_function-160869f38cff9166)

    running 2 tests
    test tests::this_test_will_fail ... FAILED
    test tests::this_test_will_pass ... ok

    successes:

    ---- tests::this_test_will_pass stdout ----
    I got the value 4

    successes:
        tests::this_test_will_pass

    failures:

    ---- tests::this_test_will_fail stdout ----
    I got the value 8
    thread 'tests::this_test_will_fail' panicked at src/lib.rs:19:9:
    assertion `left == right` failed
      left: 10
     right: 5
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

    failures:
        tests::this_test_will_fail

    test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

    error: test failed, to rerun pass `--lib`

Running a Subset of Tests by Name

Sometimes, running a full test suite can take a long time. If you’re working on code in a particular area, you might want to run only the tests pertaining to that code. You can choose which tests to run by passing cargo test the name or names of the test(s) you want to run as an argument.

To demonstrate how to run a subset of tests, we’ll first create three tests for our add_two function, as shown in Listing 11-11, and choose which ones to run.

Filename: src/lib.rs

    pub fn add_two(a: usize) -> usize {
        a + 2
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn add_two_and_two() {
            let result = add_two(2);
            assert_eq!(result, 4);
        }

        #[test]
        fn add_three_and_two() {
            let result = add_two(3);
            assert_eq!(result, 5);
        }

        #[test]
        fn one_hundred() {
            let result = add_two(100);
            assert_eq!(result, 102);
        }
    }
Listing 11-11: Three tests with three different names

If we run the tests without passing any arguments, as we saw earlier, all the tests will run in parallel:

    $ cargo test
       Compiling adder v0.1.0 (file:///projects/adder)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.62s
         Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

    running 3 tests
    test tests::add_three_and_two ... ok
    test tests::add_two_and_two ... ok
    test tests::one_hundred ... ok

    test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

       Doc-tests adder

    running 0 tests

    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Running Single Tests

We can pass the name of any test function to cargo test to run only that test:

    $ cargo test one_hundred
       Compiling adder v0.1.0 (file:///projects/adder)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.69s
         Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

    running 1 test
    test tests::one_hundred ... ok

    test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.00s

Only the test with the name one_hundred ran; the other two tests didn’t match that name. The test output lets us know we had more tests that didn’t run by displaying 2 filtered out at the end.

We can’t specify the names of multiple tests in this way; only the first value given to cargo test will be used. But there is a way to run multiple tests.

Filtering to Run Multiple Tests

We can specify part of a test name, and any test whose name matches that value will be run. For example, because two of our tests’ names contain add, we can run those two by running cargo test add:

    $ cargo test add
       Compiling adder v0.1.0 (file:///projects/adder)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.61s
         Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

    running 2 tests
    test tests::add_three_and_two ... ok
    test tests::add_two_and_two ... ok

    test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s

This command ran all tests with add in the name and filtered out the test named one_hundred. Also note that the module in which a test appears becomes part of the test’s name, so we can run all the tests in a module by filtering on the module’s name.
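For example, all three tests in Listing 11-11 live in the tests module, so filtering on the module prefix runs every one of them:

    $ cargo test tests::

Because the filter matches any part of a test's full name, plain cargo test tests would work just as well here.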

Ignoring Some Tests Unless Specifically Requested

Sometimes a few specific tests can be very time-consuming to execute, so you might want to exclude them during most runs of cargo test. Rather than listing as arguments all tests you do want to run, you can instead annotate the time-consuming tests using the ignore attribute to exclude them, as shown here:

Filename: src/lib.rs

    pub fn add(left: usize, right: usize) -> usize {
        left + right
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn it_works() {
            let result = add(2, 2);
            assert_eq!(result, 4);
        }

        #[test]
        #[ignore]
        fn expensive_test() {
            // code that takes an hour to run
        }
    }

After #[test], we add the #[ignore] line to the test we want to exclude. Now when we run our tests, it_works runs, but expensive_test doesn’t:

    $ cargo test
       Compiling adder v0.1.0 (file:///projects/adder)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.60s
         Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

    running 2 tests
    test tests::expensive_test ... ignored
    test tests::it_works ... ok

    test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s

       Doc-tests adder

    running 0 tests

    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

The expensive_test function is listed as ignored. If we want to run only the ignored tests, we can use cargo test -- --ignored:

    $ cargo test -- --ignored
       Compiling adder v0.1.0 (file:///projects/adder)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.61s
         Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

    running 1 test
    test tests::expensive_test ... ok

    test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s

       Doc-tests adder

    running 0 tests

    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

By controlling which tests run, you can make sure your cargo test results will be returned quickly. When you’re at a point where it makes sense to check the results of the ignored tests and you have time to wait for the results, you can run cargo test -- --ignored instead. If you want to run all tests whether they’re ignored or not, you can run cargo test -- --include-ignored.
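As with the other flags to the test binary, --include-ignored goes after the -- separator:

    $ cargo test -- --include-ignored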