Architecture overview
This document describes the architecture of Scrapy and how its components interact.
Overview
The following diagram shows an overview of the Scrapy architecture with its components and an outline of the data flow that takes place inside the system (shown by the red arrows). A brief description of the components is included below with links for more detailed information about them. The data flow is also described below.
Data flow
The data flow in Scrapy is controlled by the execution engine, and goes like this (a minimal runnable sketch follows the list):
1. The Engine gets the initial Requests to crawl from the Spider.
2. The Engine schedules the Requests in the Scheduler and asks for the next Requests to crawl.
3. The Scheduler returns the next Requests to the Engine.
4. The Engine sends the Requests to the Downloader, passing through the Downloader Middlewares (see process_request()).
5. Once the page finishes downloading the Downloader generates a Response (with that page) and sends it to the Engine, passing through the Downloader Middlewares (see process_response()).
6. The Engine receives the Response from the Downloader and sends it to the Spider for processing, passing through the Spider Middleware (see process_spider_input()).
7. The Spider processes the Response and returns scraped items and new Requests (to follow) to the Engine, passing through the Spider Middleware (see process_spider_output()).
8. The Engine sends processed items to Item Pipelines, then sends processed Requests to the Scheduler and asks for possible next Requests to crawl.
9. The process repeats (from step 1) until there are no more requests from the Scheduler.
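To make the flow concrete, here is a minimal sketch that defines a spider and starts the engine, which then drives the cycle above until the Scheduler has no more requests. The spider name, URL and CSS selectors are illustrative (they match the public quotes.toscrape.com demo site), not part of this document.

```python
import scrapy
from scrapy.crawler import CrawlerProcess


class QuotesSpider(scrapy.Spider):
    # Hypothetical spider: its start requests enter the flow at step 1,
    # and responses come back to parse() at step 6.
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Scraped items are sent to the Item Pipelines by the Engine (step 8).
        for quote in response.css("div.quote"):
            yield {"text": quote.css("span.text::text").get()}
        # New Requests (to follow) go back to the Scheduler (steps 7-8).
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)


if __name__ == "__main__":
    # CrawlerProcess starts the execution engine, which controls the data flow.
    process = CrawlerProcess(settings={"LOG_LEVEL": "INFO"})
    process.crawl(QuotesSpider)
    process.start()
```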
Components
Scrapy Engine
The engine is responsible for controlling the data flow between all components of the system, and triggering events when certain actions occur. See the Data Flow section above for more details.
Scheduler
The Scheduler receives requests from the engine and enqueues them, feeding them back to the engine later when the engine asks for them.
Downloader
The Downloader is responsible for fetching web pages and feeding them to the engine which, in turn, feeds them to the spiders.
Spiders
Spiders are custom classes written by Scrapy users to parse responses and extract items (aka scraped items) from them or additional requests to follow. For more information see Spiders.
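As a sketch of what such a class can look like, the hypothetical spider below declares an Item type, produces its initial requests explicitly via start_requests(), and yields scraped items from its callback; the site and selectors are illustrative (the books.toscrape.com demo site), not prescribed by Scrapy.

```python
import scrapy


class BookItem(scrapy.Item):
    # Declared fields for the scraped data (hypothetical item type).
    title = scrapy.Field()
    price = scrapy.Field()


class BooksSpider(scrapy.Spider):
    name = "books"

    def start_requests(self):
        # The initial Requests handed to the Engine (step 1 of the data flow).
        yield scrapy.Request("https://books.toscrape.com/", callback=self.parse)

    def parse(self, response):
        # Extract items; any Requests yielded here would be followed next.
        for product in response.css("article.product_pod"):
            yield BookItem(
                title=product.css("h3 a::attr(title)").get(),
                price=product.css("p.price_color::text").get(),
            )
```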
Item Pipeline
The Item Pipeline is responsible for processing the items once they have been extracted (or scraped) by the spiders. Typical tasks include cleansing, validation and persistence (like storing the item in a database). For more information see Item Pipeline.
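For illustration, a minimal pipeline might look like the sketch below; the class and field names are hypothetical, but process_item() and DropItem are the standard hooks for this kind of validation step.

```python
from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem


class PriceValidationPipeline:
    # Hypothetical pipeline: cleans and validates items handed over by the Engine.

    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        price = adapter.get("price")
        if not price:
            # Raising DropItem stops the item from reaching later pipelines.
            raise DropItem(f"Missing price in {item!r}")
        adapter["price"] = price.strip()
        return item
```

Pipelines are enabled through the ITEM_PIPELINES setting, keyed by class path with an integer that determines their running order.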
Downloader middlewares
Downloader middlewares are specific hooks that sit between the Engine and the Downloader and process requests when they pass from the Engine to the Downloader, and responses that pass from the Downloader to the Engine.
Use a Downloader middleware if you need to do one of the following:
- process a request just before it is sent to the Downloader (i.e. right before Scrapy sends the request to the website);
- change received response before passing it to a spider;
- send a new Request instead of passing received response to a spider;
- pass response to a spider without fetching a web page;
- silently drop some requests.
For more information see Downloader Middleware.
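As a sketch, a downloader middleware implementing the process_request() and process_response() hooks mentioned above could look like this; the class name and header are hypothetical.

```python
class CustomHeaderMiddleware:
    # Hypothetical downloader middleware: adjusts requests on their way to the
    # Downloader and inspects responses on their way back to the Engine.

    def process_request(self, request, spider):
        # Returning None lets the request continue to the Downloader.
        request.headers.setdefault("X-Example", "architecture-demo")
        return None

    def process_response(self, request, response, spider):
        # Returning the response passes it on towards the spider.
        spider.logger.debug("Got %s for %s", response.status, request.url)
        return response
```

It would be enabled through the DOWNLOADER_MIDDLEWARES setting, with an integer value that determines where it sits relative to the built-in middlewares.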
Spider middlewares
Spider middlewares are specific hooks that sit between the Engine and the Spiders and are able to process spider input (responses) and output (items and requests).
Use a Spider middleware if you need to:
- post-process output of spider callbacks - change/add/remove requests or items;
- post-process start_requests;
- handle spider exceptions;
- call errback instead of callback for some of the requests based on response content.
For more information see Spider Middleware.
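The sketch below shows the process_spider_output() hook post-processing callback results; the cap and class name are purely illustrative.

```python
from scrapy import Request


class FollowLimitMiddleware:
    # Hypothetical spider middleware: drops follow-up requests beyond an
    # arbitrary illustrative cap, while always passing items through.

    MAX_FOLLOWS = 100  # illustrative limit

    def __init__(self):
        self.follows = 0

    def process_spider_output(self, response, result, spider):
        for item_or_request in result:
            if isinstance(item_or_request, Request):
                self.follows += 1
                if self.follows > self.MAX_FOLLOWS:
                    continue  # silently drop the extra request
            yield item_or_request
```

Like downloader middlewares, it would be enabled through a setting (SPIDER_MIDDLEWARES) that also controls its order.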
Event-driven networking
Scrapy is written with Twisted, a popular event-driven networking framework for Python. Thus, it's implemented using non-blocking (aka asynchronous) code for concurrency.
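As an aside on what this means in practice, recent Scrapy versions also let spider callbacks be coroutines, which run on top of Twisted's event loop without blocking it; the helper coroutine below is hypothetical.

```python
import scrapy


class AsyncCallbackSpider(scrapy.Spider):
    # Hypothetical spider with a coroutine callback (supported in recent Scrapy).
    name = "async_example"
    start_urls = ["https://example.com/"]

    async def parse(self, response):
        # Awaiting yields control back to the reactor, so other downloads
        # keep progressing concurrently.
        metadata = await self.compute_metadata(response)  # hypothetical helper
        yield {"url": response.url, **metadata}

    async def compute_metadata(self, response):
        # Placeholder coroutine standing in for any awaitable work.
        return {"length": len(response.body)}
```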
For more information about asynchronous programming and Twisted see these links: