Multi-threaded Link Checker

Let us use our new knowledge to create a multi-threaded link checker. It should start at a webpage and check that links on the page are valid. It should recursively check other pages on the same domain and keep doing this until all pages have been validated.

For this, you will need an HTTP client such as reqwest. You will also need a way to find links; we can use scraper for that. Finally, we will need some way of handling errors; we will use thiserror.

Create a new Cargo project and add the dependencies with:

  cargo new link-checker
  cd link-checker
  cargo add --features blocking,rustls-tls reqwest
  cargo add scraper
  cargo add thiserror

If cargo add fails with error: no such subcommand, then edit the Cargo.toml file by hand and add the dependencies listed below.

The cargo add calls will update the Cargo.toml file to look like this:

  [package]
  name = "link-checker"
  version = "0.1.0"
  edition = "2021"
  publish = false

  [dependencies]
  reqwest = { version = "0.11.12", features = ["blocking", "rustls-tls"] }
  scraper = "0.13.0"
  thiserror = "1.0.37"

You can now download the start page. Try with a small site such as https://www.google.org/.
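Before writing the full program, it can help to confirm that the blocking HTTP client works on its own. This is a minimal sketch (not part of the exercise code that follows) which just fetches the start page and reports how many bytes came back:

  use reqwest::blocking::Client;

  fn main() -> Result<(), reqwest::Error> {
      let client = Client::new();
      // Fetch the start page; text() reads the whole response body into a String.
      let body = client.get("https://www.google.org").send()?.text()?;
      println!("Fetched {} bytes", body.len());
      Ok(())
  }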

Your src/main.rs file should look something like this:

  use reqwest::blocking::Client;
  use reqwest::Url;
  use scraper::{Html, Selector};
  use thiserror::Error;

  #[derive(Error, Debug)]
  enum Error {
      #[error("request error: {0}")]
      ReqwestError(#[from] reqwest::Error),
      #[error("bad http response: {0}")]
      BadResponse(String),
  }

  #[derive(Debug)]
  struct CrawlCommand {
      url: Url,
      extract_links: bool,
  }

  fn visit_page(client: &Client, command: &CrawlCommand) -> Result<Vec<Url>, Error> {
      println!("Checking {:#}", command.url);
      let response = client.get(command.url.clone()).send()?;
      if !response.status().is_success() {
          return Err(Error::BadResponse(response.status().to_string()));
      }

      let mut link_urls = Vec::new();
      if !command.extract_links {
          return Ok(link_urls);
      }

      let base_url = response.url().to_owned();
      let body_text = response.text()?;
      let document = Html::parse_document(&body_text);

      let selector = Selector::parse("a").unwrap();
      let href_values = document
          .select(&selector)
          .filter_map(|element| element.value().attr("href"));
      for href in href_values {
          match base_url.join(href) {
              Ok(link_url) => {
                  link_urls.push(link_url);
              }
              Err(err) => {
                  println!("On {base_url:#}: ignored unparsable {href:?}: {err}");
              }
          }
      }
      Ok(link_urls)
  }

  fn main() {
      let client = Client::new();
      let start_url = Url::parse("https://www.google.org").unwrap();
      let crawl_command = CrawlCommand{ url: start_url, extract_links: true };
      match visit_page(&client, &crawl_command) {
          Ok(links) => println!("Links: {links:#?}"),
          Err(err) => println!("Could not extract links: {err:#}"),
      }
  }

Run the code in src/main.rs with

  cargo run

Tasks

  • Use threads to check the links in parallel: send the URLs to be checked to a channel and let a few threads check the URLs in parallel (one possible shape is sketched after this list).
  • Extend this to recursively extract links from all pages on the www.google.org domain. Put an upper limit of 100 pages or so, so that you don’t end up being blocked by the site.
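For the first task, one possible shape is a fixed pool of worker threads that share the receiving end of an mpsc channel. The sketch below is only a starting point, not the exercise solution: it assumes it is added to the src/main.rs above and reuses its CrawlCommand, Error, Client, Url, and visit_page; the name spawn_crawler_threads and the separate result channel are choices made here, not given by the exercise.

  use std::sync::{mpsc, Arc, Mutex};
  use std::thread;

  // Spawn a fixed number of worker threads. Each worker pulls CrawlCommands
  // from the shared channel, visits the page, and reports the result back.
  fn spawn_crawler_threads(
      command_receiver: mpsc::Receiver<CrawlCommand>,
      result_sender: mpsc::Sender<Result<Vec<Url>, Error>>,
      thread_count: u32,
  ) {
      // mpsc::Receiver is not Clone, so the workers share it behind a Mutex.
      let command_receiver = Arc::new(Mutex::new(command_receiver));
      for _ in 0..thread_count {
          let receiver = Arc::clone(&command_receiver);
          let sender = result_sender.clone();
          thread::spawn(move || {
              let client = Client::new();
              loop {
                  // Hold the lock only long enough to take one command.
                  let command = match receiver.lock().unwrap().recv() {
                      Ok(command) => command,
                      Err(_) => break, // Channel closed: no more work.
                  };
                  let result = visit_page(&client, &command);
                  if sender.send(result).is_err() {
                      break; // Main thread stopped listening for results.
                  }
              }
          });
      }
  }

For the second task, the main thread would then seed the command channel with the start URL, collect results from the result channel, keep a set of already-visited URLs so each page is checked once, and stop enqueuing new commands once the page limit is reached or no work remains.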