How to Write Tests

Tests are Rust functions that verify that the non-test code is functioning in the expected manner. The bodies of test functions typically perform these three actions:

  • Set up any needed data or state.
  • Run the code you want to test.
  • Assert that the results are what you expect.
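As a rough sketch (using a hypothetical add function as the code under test, and previewing the #[test] attribute and assert_eq! macro discussed below), a test body that takes these three actions might look like this:

    #[test]
    fn adds_two_numbers() {
        // 1. Set up any needed data or state.
        let (left, right) = (2, 2);

        // 2. Run the code you want to test.
        let result = add(left, right);

        // 3. Assert that the results are what you expect.
        assert_eq!(result, 4);
    }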

Let’s look at the features Rust provides specifically for writing tests that take these actions, which include the test attribute, a few macros, and the should_panic attribute.

The Anatomy of a Test Function

At its simplest, a test in Rust is a function that’s annotated with the test attribute. Attributes are metadata about pieces of Rust code; one example is the derive attribute we used with structs in Chapter 5. To change a function into a test function, add #[test] on the line before fn. When you run your tests with the cargo test command, Rust builds a test runner binary that runs the annotated functions and reports on whether each test function passes or fails.

Whenever we make a new library project with Cargo, a test module with a test function in it is automatically generated for us. This module gives you a template for writing your tests so you don’t have to look up the exact structure and syntax every time you start a new project. You can add as many additional test functions and as many test modules as you want!

We’ll explore some aspects of how tests work by experimenting with the template test before we actually test any code. Then we’ll write some real-world tests that call some code that we’ve written and assert that its behavior is correct.

Let’s create a new library project called adder that will add two numbers:

    $ cargo new adder --lib
         Created library `adder` project
    $ cd adder

The contents of the src/lib.rs file in your adder library should look like Listing 11-1.

Filename: src/lib.rs
    pub fn add(left: usize, right: usize) -> usize {
        left + right
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn it_works() {
            let result = add(2, 2);
            assert_eq!(result, 4);
        }
    }
Listing 11-1: The code generated automatically by cargo new

For now, let’s focus solely on the it_works function. Note the #[test] annotation: this attribute indicates this is a test function, so the test runner knows to treat this function as a test. We might also have non-test functions in the tests module to help set up common scenarios or perform common operations, so we always need to indicate which functions are tests.
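For example, a sketch of what that might look like: a helper function without the #[test] attribute can live in the tests module and be called from the tests themselves; only the annotated function runs as a test. (The test_inputs helper here is a hypothetical name, not part of the generated template.)

    #[cfg(test)]
    mod tests {
        use super::*;

        // Not annotated with #[test], so the test runner never runs this
        // directly; it's only a helper the tests call.
        fn test_inputs() -> (usize, usize) {
            (2, 2)
        }

        #[test]
        fn it_works() {
            let (left, right) = test_inputs();
            assert_eq!(add(left, right), 4);
        }
    }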

The example function body uses the assert_eq! macro to assert that result, which contains the result of adding 2 and 2, equals 4. This assertion serves as an example of the format for a typical test. Let’s run it to see that this test passes.

The cargo test command runs all tests in our project, as shown in Listing 11-2.

    $ cargo test
       Compiling adder v0.1.0 (file:///projects/adder)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.57s
         Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

    running 1 test
    test tests::it_works ... ok

    test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

       Doc-tests adder

    running 0 tests

    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Listing 11-2: The output from running the automatically generated test

Cargo compiled and ran the test. We see the line running 1 test. The next line shows the name of the generated test function, called tests::it_works, and that the result of running that test is ok. The overall summary test result: ok. means that all the tests passed, and the portion that reads 1 passed; 0 failed totals the number of tests that passed or failed.

It’s possible to mark a test as ignored so it doesn’t run in a particular instance; we’ll cover that in the “Ignoring Some Tests Unless Specifically Requested” section later in this chapter. Because we haven’t done that here, the summary shows 0 ignored.
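As a quick preview of the syntax (the details come in that later section), a test is ignored by adding the #[ignore] attribute after #[test]:

    #[test]
    #[ignore]
    fn expensive_test() {
        // code that takes a long time to run, skipped by default
    }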

The 0 measured statistic is for benchmark tests that measure performance. Benchmark tests are, as of this writing, only available in nightly Rust. See the documentation about benchmark tests to learn more.

We can pass an argument to the cargo test command to run only tests whose name matches a string; this is called filtering and we’ll cover that in the “Running a Subset of Tests by Name” section. Here we haven’t filtered the tests being run, so the end of the summary shows 0 filtered out.
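As a preview, filtering is as simple as passing part of a test's name as an argument; for example, this would run only tests whose names contain it_works:

    $ cargo test it_works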

The next part of the test output starting at Doc-tests adder is for the results of any documentation tests. We don’t have any documentation tests yet, but Rust can compile any code examples that appear in our API documentation. This feature helps keep your docs and your code in sync! We’ll discuss how to write documentation tests in the “Documentation Comments as Tests” section of Chapter 14. For now, we’ll ignore the Doc-tests output.
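As a preview of what Chapter 14 covers, a sketch of a documentation test on our add function: the fenced code example inside the doc comment is what cargo test compiles and runs as a doc-test.

    /// Adds two numbers.
    ///
    /// ```
    /// let result = adder::add(2, 2);
    /// assert_eq!(result, 4);
    /// ```
    pub fn add(left: usize, right: usize) -> usize {
        left + right
    }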

Let’s start to customize the test to our own needs. First, change the name of the it_works function to a different name, such as exploration, like so:

Filename: src/lib.rs

    pub fn add(left: usize, right: usize) -> usize {
        left + right
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn exploration() {
            let result = add(2, 2);
            assert_eq!(result, 4);
        }
    }

Then run cargo test again. The output now shows exploration instead of it_works:

    $ cargo test
       Compiling adder v0.1.0 (file:///projects/adder)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.59s
         Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

    running 1 test
    test tests::exploration ... ok

    test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

       Doc-tests adder

    running 0 tests

    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Now we’ll add another test, but this time we’ll make a test that fails! Tests fail when something in the test function panics. Each test is run in a new thread, and when the main thread sees that a test thread has died, the test is marked as failed. In Chapter 9, we talked about how the simplest way to panic is to call the panic! macro. Enter the new test as a function named another, so your src/lib.rs file looks like Listing 11-3.

Filename: src/lib.rs
    pub fn add(left: usize, right: usize) -> usize {
        left + right
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn exploration() {
            let result = add(2, 2);
            assert_eq!(result, 4);
        }

        #[test]
        fn another() {
            panic!("Make this test fail");
        }
    }
Listing 11-3: Adding a second test that will fail because we call the panic! macro

Run the tests again using cargo test. The output should look like Listing 11-4, which shows that our exploration test passed and another failed.

    $ cargo test
       Compiling adder v0.1.0 (file:///projects/adder)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.72s
         Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

    running 2 tests
    test tests::another ... FAILED
    test tests::exploration ... ok

    failures:

    ---- tests::another stdout ----
    thread 'tests::another' panicked at src/lib.rs:17:9:
    Make this test fail
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

    failures:
        tests::another

    test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

    error: test failed, to rerun pass `--lib`
Listing 11-4: Test results when one test passes and one test fails

Instead of ok, the line test tests::another shows FAILED. Two new sections appear between the individual results and the summary: the first displays the detailed reason for each test failure. In this case, we get the details that another failed because it panicked at 'Make this test fail' on line 17 in the src/lib.rs file. The next section lists just the names of all the failing tests, which is useful when there are lots of tests and lots of detailed failing test output. We can use the name of a failing test to run just that test to more easily debug it; we’ll talk more about ways to run tests in the “Controlling How Tests Are Run” section.

The summary line displays at the end: overall, our test result is FAILED. We had one test pass and one test fail.

Now that you’ve seen what the test results look like in different scenarios, let’s look at some macros other than panic! that are useful in tests.

Checking Results with the assert! Macro

The assert! macro, provided by the standard library, is useful when you want to ensure that some condition in a test evaluates to true. We give the assert! macro an argument that evaluates to a Boolean. If the value is true, nothing happens and the test passes. If the value is false, the assert! macro calls panic! to cause the test to fail. Using the assert! macro helps us check that our code is functioning in the way we intend.

In Chapter 5, Listing 5-15, we used a Rectangle struct and a can_hold method, which are repeated here in Listing 11-5. Let’s put this code in the src/lib.rs file, then write some tests for it using the assert! macro.

Filename: src/lib.rs
    #[derive(Debug)]
    struct Rectangle {
        width: u32,
        height: u32,
    }

    impl Rectangle {
        fn can_hold(&self, other: &Rectangle) -> bool {
            self.width > other.width && self.height > other.height
        }
    }
Listing 11-5: The Rectangle struct and its can_hold method from Chapter 5

The can_hold method returns a Boolean, which means it’s a perfect use case for the assert! macro. In Listing 11-6, we write a test that exercises the can_hold method by creating a Rectangle instance that has a width of 8 and a height of 7 and asserting that it can hold another Rectangle instance that has a width of 5 and a height of 1.

Filename: src/lib.rs
    #[derive(Debug)]
    struct Rectangle {
        width: u32,
        height: u32,
    }

    impl Rectangle {
        fn can_hold(&self, other: &Rectangle) -> bool {
            self.width > other.width && self.height > other.height
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn larger_can_hold_smaller() {
            let larger = Rectangle {
                width: 8,
                height: 7,
            };
            let smaller = Rectangle {
                width: 5,
                height: 1,
            };

            assert!(larger.can_hold(&smaller));
        }
    }
Listing 11-6: A test for can_hold that checks whether a larger rectangle can indeed hold a smaller rectangle

Note the use super::*; line inside the tests module. The tests module is a regular module that follows the usual visibility rules we covered in Chapter 7 in the “Paths for Referring to an Item in the Module Tree” section. Because the tests module is an inner module, we need to bring the code under test in the outer module into the scope of the inner module. We use a glob here, so anything we define in the outer module is available to this tests module.
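Equivalently, you could name the items you need explicitly instead of using the glob; this sketch imports only Rectangle, which the child tests module is allowed to see even though the struct isn't pub:

    #[cfg(test)]
    mod tests {
        // Name the one item explicitly instead of glob-importing everything.
        use super::Rectangle;

        #[test]
        fn larger_can_hold_smaller() {
            let larger = Rectangle { width: 8, height: 7 };
            let smaller = Rectangle { width: 5, height: 1 };

            assert!(larger.can_hold(&smaller));
        }
    }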

We’ve named our test larger_can_hold_smaller, and we’ve created the two Rectangle instances that we need. Then we called the assert! macro and passed it the result of calling larger.can_hold(&smaller). This expression is supposed to return true, so our test should pass. Let’s find out!

    $ cargo test
       Compiling rectangle v0.1.0 (file:///projects/rectangle)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.66s
         Running unittests src/lib.rs (target/debug/deps/rectangle-6584c4561e48942e)

    running 1 test
    test tests::larger_can_hold_smaller ... ok

    test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

       Doc-tests rectangle

    running 0 tests

    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

It does pass! Let’s add another test, this time asserting that a smaller rectangle cannot hold a larger rectangle:

Filename: src/lib.rs

    #[derive(Debug)]
    struct Rectangle {
        width: u32,
        height: u32,
    }

    impl Rectangle {
        fn can_hold(&self, other: &Rectangle) -> bool {
            self.width > other.width && self.height > other.height
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn larger_can_hold_smaller() {
            // --snip--
            let larger = Rectangle {
                width: 8,
                height: 7,
            };
            let smaller = Rectangle {
                width: 5,
                height: 1,
            };

            assert!(larger.can_hold(&smaller));
        }

        #[test]
        fn smaller_cannot_hold_larger() {
            let larger = Rectangle {
                width: 8,
                height: 7,
            };
            let smaller = Rectangle {
                width: 5,
                height: 1,
            };

            assert!(!smaller.can_hold(&larger));
        }
    }

Because the correct result of the can_hold function in this case is false, we need to negate that result before we pass it to the assert! macro. As a result, our test will pass if can_hold returns false:

    $ cargo test
       Compiling rectangle v0.1.0 (file:///projects/rectangle)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.66s
         Running unittests src/lib.rs (target/debug/deps/rectangle-6584c4561e48942e)

    running 2 tests
    test tests::larger_can_hold_smaller ... ok
    test tests::smaller_cannot_hold_larger ... ok

    test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

       Doc-tests rectangle

    running 0 tests

    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Two tests that pass! Now let’s see what happens to our test results when we introduce a bug in our code. We’ll change the implementation of the can_hold method by replacing the greater-than sign with a less-than sign when it compares the widths:

    #[derive(Debug)]
    struct Rectangle {
        width: u32,
        height: u32,
    }

    // --snip--
    impl Rectangle {
        fn can_hold(&self, other: &Rectangle) -> bool {
            self.width < other.width && self.height > other.height
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn larger_can_hold_smaller() {
            let larger = Rectangle {
                width: 8,
                height: 7,
            };
            let smaller = Rectangle {
                width: 5,
                height: 1,
            };

            assert!(larger.can_hold(&smaller));
        }

        #[test]
        fn smaller_cannot_hold_larger() {
            let larger = Rectangle {
                width: 8,
                height: 7,
            };
            let smaller = Rectangle {
                width: 5,
                height: 1,
            };

            assert!(!smaller.can_hold(&larger));
        }
    }

Running the tests now produces the following:

    $ cargo test
       Compiling rectangle v0.1.0 (file:///projects/rectangle)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.66s
         Running unittests src/lib.rs (target/debug/deps/rectangle-6584c4561e48942e)

    running 2 tests
    test tests::larger_can_hold_smaller ... FAILED
    test tests::smaller_cannot_hold_larger ... ok

    failures:

    ---- tests::larger_can_hold_smaller stdout ----
    thread 'tests::larger_can_hold_smaller' panicked at src/lib.rs:28:9:
    assertion failed: larger.can_hold(&smaller)
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

    failures:
        tests::larger_can_hold_smaller

    test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

    error: test failed, to rerun pass `--lib`

Our tests caught the bug! Because larger.width is 8 and smaller.width is 5, the comparison of the widths in can_hold now returns false: 8 is not less than 5.

Testing Equality with the assert_eq! and assert_ne! Macros

A common way to verify functionality is to test for equality between the result of the code under test and the value you expect the code to return. You could do this by using the assert! macro and passing it an expression using the == operator. However, this is such a common test that the standard library provides a pair of macros—assert_eq! and assert_ne!—to perform this test more conveniently. These macros compare two arguments for equality or inequality, respectively. They’ll also print the two values if the assertion fails, which makes it easier to see why the test failed; conversely, the assert! macro only indicates that it got a false value for the == expression, without printing the values that led to the false value.
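To make the difference concrete, here is a small sketch of the same check written both ways; only the second form prints the mismatched values when it fails:

    #[test]
    fn compares_both_ways() {
        let result = 2 + 2;
        // On failure this reports only: assertion failed: result == 4
        assert!(result == 4);
        // On failure this also prints the `left` and `right` values.
        assert_eq!(result, 4);
    }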

In Listing 11-7, we write a function named add_two that adds 2 to its parameter, then we test this function using the assert_eq! macro.

Filename: src/lib.rs
    pub fn add_two(a: usize) -> usize {
        a + 2
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn it_adds_two() {
            let result = add_two(2);
            assert_eq!(result, 4);
        }
    }
Listing 11-7: Testing the function add_two using the assert_eq! macro

Let’s check that it passes!

    $ cargo test
       Compiling adder v0.1.0 (file:///projects/adder)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.58s
         Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

    running 1 test
    test tests::it_adds_two ... ok

    test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

       Doc-tests adder

    running 0 tests

    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

We create a variable named result that holds the result of calling add_two(2). Then we pass result and 4 as the arguments to assert_eq!. The output line for this test is test tests::it_adds_two ... ok, and the ok text indicates that our test passed!

Let’s introduce a bug into our code to see what assert_eq! looks like when it fails. Change the implementation of the add_two function to instead add 3:

    pub fn add_two(a: usize) -> usize {
        a + 3
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn it_adds_two() {
            let result = add_two(2);
            assert_eq!(result, 4);
        }
    }

Run the tests again:

    $ cargo test
       Compiling adder v0.1.0 (file:///projects/adder)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.61s
         Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

    running 1 test
    test tests::it_adds_two ... FAILED

    failures:

    ---- tests::it_adds_two stdout ----
    thread 'tests::it_adds_two' panicked at src/lib.rs:12:9:
    assertion `left == right` failed
      left: 5
     right: 4
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

    failures:
        tests::it_adds_two

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

    error: test failed, to rerun pass `--lib`

Our test caught the bug! The it_adds_two test failed, and the message tells us assertion `left == right` failed and what the left and right values are. This message helps us start debugging: the left argument, where we had the result of calling add_two(2), was 5 but the right argument was 4. You can imagine that this would be especially helpful when we have a lot of tests going on.

Note that in some languages and test frameworks, the parameters to equality assertion functions are called expected and actual, and the order in which we specify the arguments matters. However, in Rust, they’re called left and right, and the order in which we specify the value we expect and the value the code produces doesn’t matter. We could write the assertion in this test as assert_eq!(4, result), which would produce the same failure message that displays assertion `left == right` failed.

The assert_ne! macro will pass if the two values we give it are not equal and fail if they’re equal. This macro is most useful for cases when we’re not sure what a value will be, but we know what the value definitely shouldn’t be. For example, if we’re testing a function that is guaranteed to change its input in some way, but the way in which the input is changed depends on the day of the week that we run our tests, the best thing to assert might be that the output of the function is not equal to the input.
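As a sketch of that situation, imagine a hypothetical scramble function (not from this chapter's code) whose exact output we can't predict; the one property we can pin down is that the output differs from the input:

    // Hypothetical: stands in for logic whose exact result varies between runs.
    pub fn scramble(value: u32) -> u32 {
        value.wrapping_add(1)
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn scramble_changes_its_input() {
            let input = 10;
            // We don't know what the output will be, only that it must change.
            assert_ne!(scramble(input), input);
        }
    }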

Under the surface, the assert_eq! and assert_ne! macros use the operators == and !=, respectively. When the assertions fail, these macros print their arguments using debug formatting, which means the values being compared must implement the PartialEq and Debug traits. All primitive types and most of the standard library types implement these traits. For structs and enums that you define yourself, you’ll need to implement PartialEq to assert equality of those types. You’ll also need to implement Debug to print the values when the assertion fails. Because both traits are derivable traits, as mentioned in Listing 5-12 in Chapter 5, this is usually as straightforward as adding the #[derive(PartialEq, Debug)] annotation to your struct or enum definition. See Appendix C, “Derivable Traits,” for more details about these and other derivable traits.
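For instance, a minimal sketch with a hypothetical Point struct; deriving both traits is what lets assert_eq! compare the values and print them on failure:

    #[derive(PartialEq, Debug)]
    struct Point {
        x: i32,
        y: i32,
    }

    #[test]
    fn points_are_equal() {
        // PartialEq powers the == comparison; Debug formats both values
        // in the failure message if they don't match.
        assert_eq!(Point { x: 1, y: 2 }, Point { x: 1, y: 2 });
    }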

Adding Custom Failure Messages

You can also add a custom message to be printed with the failure message as optional arguments to the assert!, assert_eq!, and assert_ne! macros. Any arguments specified after the required arguments are passed along to the format! macro (discussed in Chapter 8 in the “Concatenation with the + Operator or the format! Macro” section), so you can pass a format string that contains {} placeholders and values to go in those placeholders. Custom messages are useful for documenting what an assertion means; when a test fails, you’ll have a better idea of what the problem is with the code.
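As a quick sketch of the argument shape, any arguments after the two required ones are handed to format!:

    #[test]
    fn it_adds_two() {
        let result = 2 + 2;
        // The third argument onward is a format string plus its values.
        assert_eq!(result, 4, "expected 4, got {result}");
    }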

For example, let’s say we have a function that greets people by name and we want to test that the name we pass into the function appears in the output:

Filename: src/lib.rs

    pub fn greeting(name: &str) -> String {
        format!("Hello {name}!")
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn greeting_contains_name() {
            let result = greeting("Carol");
            assert!(result.contains("Carol"));
        }
    }

The requirements for this program haven’t been agreed upon yet, and we’re pretty sure the Hello text at the beginning of the greeting will change. We decided we don’t want to have to update the test when the requirements change, so instead of checking for exact equality to the value returned from the greeting function, we’ll just assert that the output contains the text of the input parameter.

Now let’s introduce a bug into this code by changing greeting to exclude name to see what the default test failure looks like:

    pub fn greeting(name: &str) -> String {
        String::from("Hello!")
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn greeting_contains_name() {
            let result = greeting("Carol");
            assert!(result.contains("Carol"));
        }
    }

Running this test produces the following:

    $ cargo test
       Compiling greeter v0.1.0 (file:///projects/greeter)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.91s
         Running unittests src/lib.rs (target/debug/deps/greeter-170b942eb5bf5e3a)

    running 1 test
    test tests::greeting_contains_name ... FAILED

    failures:

    ---- tests::greeting_contains_name stdout ----
    thread 'tests::greeting_contains_name' panicked at src/lib.rs:12:9:
    assertion failed: result.contains("Carol")
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

    failures:
        tests::greeting_contains_name

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

    error: test failed, to rerun pass `--lib`

This result just indicates that the assertion failed and which line the assertion is on. A more useful failure message would print the value from the greeting function. Let’s add a custom failure message composed of a format string with a placeholder filled in with the actual value we got from the greeting function:

    pub fn greeting(name: &str) -> String {
        String::from("Hello!")
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn greeting_contains_name() {
            let result = greeting("Carol");
            assert!(
                result.contains("Carol"),
                "Greeting did not contain name, value was `{result}`"
            );
        }
    }

Now when we run the test, we’ll get a more informative error message:

    $ cargo test
       Compiling greeter v0.1.0 (file:///projects/greeter)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.93s
         Running unittests src/lib.rs (target/debug/deps/greeter-170b942eb5bf5e3a)

    running 1 test
    test tests::greeting_contains_name ... FAILED

    failures:

    ---- tests::greeting_contains_name stdout ----
    thread 'tests::greeting_contains_name' panicked at src/lib.rs:12:9:
    Greeting did not contain name, value was `Hello!`
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

    failures:
        tests::greeting_contains_name

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

    error: test failed, to rerun pass `--lib`

We can see the value we actually got in the test output, which would help us debug what happened instead of what we were expecting to happen.

Checking for Panics with should_panic

In addition to checking return values, it’s important to check that our code handles error conditions as we expect. For example, consider the Guess type that we created in Chapter 9, Listing 9-13. Other code that uses Guess depends on the guarantee that Guess instances will contain only values between 1 and 100. We can write a test that ensures that attempting to create a Guess instance with a value outside that range panics.

We do this by adding the attribute should_panic to our test function. The test passes if the code inside the function panics; the test fails if the code inside the function doesn’t panic.

Listing 11-8 shows a test that checks that the error conditions of Guess::new happen when we expect them to.

Filename: src/lib.rs
    pub struct Guess {
        value: i32,
    }

    impl Guess {
        pub fn new(value: i32) -> Guess {
            if value < 1 || value > 100 {
                panic!("Guess value must be between 1 and 100, got {value}.");
            }

            Guess { value }
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        #[should_panic]
        fn greater_than_100() {
            Guess::new(200);
        }
    }
Listing 11-8: Testing that a condition will cause a panic!

We place the #[should_panic] attribute after the #[test] attribute and before the test function it applies to. Let’s look at the result when this test passes:

    $ cargo test
       Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.58s
         Running unittests src/lib.rs (target/debug/deps/guessing_game-57d70c3acb738f4d)

    running 1 test
    test tests::greater_than_100 - should panic ... ok

    test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

       Doc-tests guessing_game

    running 0 tests

    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Looks good! Now let’s introduce a bug in our code by removing the condition that the new function will panic if the value is greater than 100:

    pub struct Guess {
        value: i32,
    }

    // --snip--
    impl Guess {
        pub fn new(value: i32) -> Guess {
            if value < 1 {
                panic!("Guess value must be between 1 and 100, got {value}.");
            }

            Guess { value }
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        #[should_panic]
        fn greater_than_100() {
            Guess::new(200);
        }
    }

When we run the test in Listing 11-8, it will fail:

    $ cargo test
       Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.62s
         Running unittests src/lib.rs (target/debug/deps/guessing_game-57d70c3acb738f4d)

    running 1 test
    test tests::greater_than_100 - should panic ... FAILED

    failures:

    ---- tests::greater_than_100 stdout ----
    note: test did not panic as expected

    failures:
        tests::greater_than_100

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

    error: test failed, to rerun pass `--lib`

We don’t get a very helpful message in this case, but when we look at the test function, we see that it’s annotated with #[should_panic]. The failure we got means that the code in the test function did not cause a panic.

Tests that use should_panic can be imprecise. A should_panic test would pass even if the test panics for a different reason from the one we were expecting. To make should_panic tests more precise, we can add an optional expected parameter to the should_panic attribute. The test harness will make sure that the failure message contains the provided text. For example, consider the modified code for Guess in Listing 11-9 where the new function panics with different messages depending on whether the value is too small or too large.

Filename: src/lib.rs
    pub struct Guess {
        value: i32,
    }

    // --snip--
    impl Guess {
        pub fn new(value: i32) -> Guess {
            if value < 1 {
                panic!(
                    "Guess value must be greater than or equal to 1, got {value}."
                );
            } else if value > 100 {
                panic!(
                    "Guess value must be less than or equal to 100, got {value}."
                );
            }

            Guess { value }
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        #[should_panic(expected = "less than or equal to 100")]
        fn greater_than_100() {
            Guess::new(200);
        }
    }
Listing 11-9: Testing for a panic! with a panic message containing a specified substring

This test will pass because the value we put in the should_panic attribute’s expected parameter is a substring of the message that the Guess::new function panics with. We could have specified the entire panic message that we expect, which in this case would be Guess value must be less than or equal to 100, got 200. What you choose to specify depends on how much of the panic message is unique or dynamic and how precise you want your test to be. In this case, a substring of the panic message is enough to ensure that the code in the test function executes the else if value > 100 case.

To see what happens when a should_panic test with an expected message fails, let’s again introduce a bug into our code by swapping the bodies of the if value < 1 and the else if value > 100 blocks:

    pub struct Guess {
        value: i32,
    }

    impl Guess {
        pub fn new(value: i32) -> Guess {
            if value < 1 {
                panic!(
                    "Guess value must be less than or equal to 100, got {value}."
                );
            } else if value > 100 {
                panic!(
                    "Guess value must be greater than or equal to 1, got {value}."
                );
            }

            Guess { value }
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        #[should_panic(expected = "less than or equal to 100")]
        fn greater_than_100() {
            Guess::new(200);
        }
    }

This time when we run the should_panic test, it will fail:

    $ cargo test
       Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
        Finished `test` profile [unoptimized + debuginfo] target(s) in 0.66s
         Running unittests src/lib.rs (target/debug/deps/guessing_game-57d70c3acb738f4d)

    running 1 test
    test tests::greater_than_100 - should panic ... FAILED

    failures:

    ---- tests::greater_than_100 stdout ----
    thread 'tests::greater_than_100' panicked at src/lib.rs:12:13:
    Guess value must be greater than or equal to 1, got 200.
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    note: panic did not contain expected string
          panic message: `"Guess value must be greater than or equal to 1, got 200."`,
     expected substring: `"less than or equal to 100"`

    failures:
        tests::greater_than_100

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

    error: test failed, to rerun pass `--lib`

The failure message indicates that this test did indeed panic as we expected, but the panic message did not include the expected string less than or equal to 100. The panic message that we did get in this case was Guess value must be greater than or equal to 1, got 200. Now we can start figuring out where our bug is!

Using Result in Tests

Our tests so far all panic when they fail. We can also write tests that use Result<T, E>! Here’s the test from Listing 11-1, rewritten to use Result<T, E> and return an Err instead of panicking:

    pub fn add(left: usize, right: usize) -> usize {
        left + right
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn it_works() -> Result<(), String> {
            let result = add(2, 2);

            if result == 4 {
                Ok(())
            } else {
                Err(String::from("two plus two does not equal four"))
            }
        }
    }

The it_works function now has the Result<(), String> return type. In the body of the function, rather than calling the assert_eq! macro, we return Ok(()) when the test passes and an Err with a String inside when the test fails.

Writing tests so they return a Result<T, E> enables you to use the question mark operator in the body of tests, which can be a convenient way to write tests that should fail if any operation within them returns an Err variant.
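For instance, a sketch of a test that leans on the question mark operator; the string parse here is just an illustrative fallible operation:

    #[test]
    fn parses_a_number() -> Result<(), std::num::ParseIntError> {
        // `?` returns any Err immediately, which fails the test.
        let n: i32 = "42".parse()?;
        assert_eq!(n, 42);
        Ok(())
    }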

You can’t use the #[should_panic] annotation on tests that use Result<T, E>. To assert that an operation returns an Err variant, don’t use the question mark operator on the Result<T, E> value. Instead, use assert!(value.is_err()).
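For example, a sketch of asserting that a fallible operation does fail:

    #[test]
    fn rejects_bad_input() {
        let value = "not a number".parse::<i32>();
        // Using `?` here would fail the test; instead, assert the Err variant.
        assert!(value.is_err());
    }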

Now that you know several ways to write tests, let’s look at what is happening when we run our tests and explore the different options we can use with cargo test.