- Downloader Middleware
- Activating a downloader middleware
- Writing your own downloader middleware
- Built-in downloader middleware reference
- CookiesMiddleware
- DefaultHeadersMiddleware
- DownloadTimeoutMiddleware
- HttpAuthMiddleware
- HttpCacheMiddleware
- HttpCompressionMiddleware
- HttpProxyMiddleware
- RedirectMiddleware
- MetaRefreshMiddleware
- RetryMiddleware
- RobotsTxtMiddleware
- Implementing support for a new parser
- DownloaderStats
- UserAgentMiddleware
- AjaxCrawlMiddleware
Downloader Middleware
The downloader middleware is a framework of hooks into Scrapy’s request/response processing. It’s a light, low-level system for globally altering Scrapy’s requests and responses.
Activating a downloader middleware
To activate a downloader middleware component, add it to the DOWNLOADER_MIDDLEWARES setting, which is a dict whose keys are the middleware class paths and their values are the middleware orders.
Here’s an example:
```python
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomDownloaderMiddleware': 543,
}
```
The DOWNLOADER_MIDDLEWARES setting is merged with the DOWNLOADER_MIDDLEWARES_BASE setting defined in Scrapy (and not meant to be overridden) and then sorted by order to get the final sorted list of enabled middlewares: the first middleware is the one closer to the engine and the last is the one closer to the downloader. In other words, the process_request() method of each middleware will be invoked in increasing middleware order (100, 200, 300, …) and the process_response() method of each middleware will be invoked in decreasing order.
To decide which order to assign to your middleware, see the DOWNLOADER_MIDDLEWARES_BASE setting and pick a value according to where you want to insert the middleware. The order does matter because each middleware performs a different action and your middleware could depend on some previous (or subsequent) middleware being applied.
If you want to disable a built-in middleware (the ones defined in DOWNLOADER_MIDDLEWARES_BASE and enabled by default) you must define it in your project’s DOWNLOADER_MIDDLEWARES setting and assign None as its value. For example, if you want to disable the user-agent middleware:
```python
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomDownloaderMiddleware': 543,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}
```
Finally, keep in mind that some middlewares may need to be enabled through a particular setting. See each middleware’s documentation for more info.
Writing your own downloader middleware
Each downloader middleware is a Python class that defines one or more of the methods defined below.
The main entry point is the from_crawler class method, which receives a Crawler instance. The Crawler object gives you access, for example, to the settings.
Note
Any of the downloader middleware methods may also return a deferred.
- process_request(request, spider) - This method is called for each request that goes through the download middleware.
process_request() should either: return None, return a Response object, return a Request object, or raise IgnoreRequest.
If it returns None, Scrapy will continue processing this request, executing all other middlewares until, finally, the appropriate downloader handler is called, the request performed, and its response downloaded.
If it returns a Response object, Scrapy won’t bother calling any other process_request() or process_exception() methods, or the appropriate download function; it’ll return that response. The process_response() methods of installed middleware are always called on every response.
If it returns a Request object, Scrapy will stop calling process_request() methods and reschedule the returned request. Once the newly returned request is performed, the appropriate middleware chain will be called on the downloaded response.
If it raises an IgnoreRequest exception, the process_exception() methods of installed downloader middleware will be called. If none of them handle the exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).
Parameters:
- **request** ([<code>Request</code>]($a8ff621e6a7e7cdf.md#scrapy.http.Request) object) – the request being processed
- **spider** ([<code>Spider</code>]($fb832e20e85f228c.md#scrapy.spiders.Spider) object) – the spider for which this request is intended
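The following is a minimal, illustrative sketch of a process_request() implementation; the middleware class name and the X-Example header are invented for this example and are not part of Scrapy:

```python
class SetCustomHeaderMiddleware:
    """Hypothetical middleware: add a header to every outgoing request."""

    def process_request(self, request, spider):
        # Returning None lets Scrapy keep processing the request through
        # the remaining middlewares and, finally, the downloader.
        request.headers.setdefault('X-Example', 'downloader-middleware-demo')
        return None
```

It would be activated, like any other downloader middleware, through the DOWNLOADER_MIDDLEWARES setting shown earlier.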
- process_response(request, response, spider) - process_response() should either: return a Response object, return a Request object, or raise an IgnoreRequest exception.
If it returns a Response (it could be the same given response, or a brand-new one), that response will continue to be processed with the process_response() of the next middleware in the chain.
If it returns a Request object, the middleware chain is halted and the returned request is rescheduled to be downloaded in the future. This is the same behavior as if a request is returned from process_request().
If it raises an IgnoreRequest exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).
Parameters:
- **request** ([<code>Request</code>]($a8ff621e6a7e7cdf.md#scrapy.http.Request) object) – the request that originated the response
- **response** ([<code>Response</code>]($a8ff621e6a7e7cdf.md#scrapy.http.Response) object) – the response being processed
- **spider** ([<code>Spider</code>]($fb832e20e85f228c.md#scrapy.spiders.Spider) object) – the spider for which this response is intended
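As an illustrative sketch only (the class name and the choice of status code are arbitrary, not built-in behavior), a process_response() implementation can turn certain responses into retries by returning a new Request:

```python
class RetryTeapotMiddleware:
    """Hypothetical middleware: re-download responses with HTTP status 418."""

    def process_response(self, request, response, spider):
        if response.status == 418:
            # Returning a Request halts the middleware chain and
            # reschedules the request for a future download.
            return request.replace(dont_filter=True)
        # Returning the response passes it on to the next middleware.
        return response
```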
- process_exception(request, exception, spider) - Scrapy calls process_exception() when a download handler or a process_request() (from a downloader middleware) raises an exception (including an IgnoreRequest exception).
process_exception() should return: either None, a Response object, or a Request object.
If it returns None, Scrapy will continue processing this exception, executing any other process_exception() methods of installed middleware, until no middleware is left and the default exception handling kicks in.
If it returns a Response object, the process_response() method chain of installed middleware is started, and Scrapy won’t bother calling any other process_exception() methods of middleware.
If it returns a Request object, the returned request is rescheduled to be downloaded in the future. This stops the execution of process_exception() methods of the middleware the same as returning a response would.
Parameters:
- **request** ([<code>Request</code>]($a8ff621e6a7e7cdf.md#scrapy.http.Request) object) – the request that generated the exception
- **exception** (an <code>Exception</code> object) – the raised exception
- **spider** ([<code>Spider</code>]($fb832e20e85f228c.md#scrapy.spiders.Spider) object) – the spider for which this request is intended
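As a rough sketch (the class name is invented; the exception types are Twisted’s timeout errors, which Scrapy’s downloader can raise), a process_exception() implementation might convert timeouts into rescheduled requests:

```python
from twisted.internet.error import TCPTimedOutError, TimeoutError


class RescheduleOnTimeoutMiddleware:
    """Hypothetical middleware: reschedule requests that timed out."""

    def process_exception(self, request, exception, spider):
        if isinstance(exception, (TimeoutError, TCPTimedOutError)):
            spider.logger.debug("Rescheduling %s after %r", request.url, exception)
            # Returning a Request stops further process_exception() calls.
            return request.replace(dont_filter=True)
        # Returning None lets other middlewares handle the exception.
        return None
```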
- from_crawler(cls, crawler) - If present, this classmethod is called to create a middleware instance from a Crawler. It must return a new instance of the middleware. The Crawler object provides access to all Scrapy core components like settings and signals; it is a way for middleware to access them and hook its functionality into Scrapy.
Parameters: crawler (Crawler object) – the crawler that uses this middleware
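A brief sketch of a typical from_crawler() (the MYPROJECT_HEADER_ENABLED setting name is made up for the example): read the crawler settings and raise NotConfigured to disable the middleware:

```python
from scrapy.exceptions import NotConfigured


class ConfigurableHeaderMiddleware:
    """Hypothetical middleware whose behaviour comes from crawler settings."""

    def __init__(self, header_value):
        self.header_value = header_value

    @classmethod
    def from_crawler(cls, crawler):
        if not crawler.settings.getbool('MYPROJECT_HEADER_ENABLED'):
            raise NotConfigured
        return cls(header_value=crawler.settings.get('USER_AGENT'))

    def process_request(self, request, spider):
        request.headers.setdefault('X-Example', self.header_value)
```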
Built-in downloader middleware reference
This page describes all downloader middleware components that come with Scrapy. For information on how to use them and how to write your own downloader middleware, see the downloader middleware usage guide.
For a list of the components enabled by default (and their orders) see the DOWNLOADER_MIDDLEWARES_BASE setting.
CookiesMiddleware
- class scrapy.downloadermiddlewares.cookies.CookiesMiddleware [source] - This middleware enables working with sites that require cookies, such as those that use sessions. It keeps track of cookies sent by web servers, and sends them back on subsequent requests (from that spider), just like web browsers do.
The following settings can be used to configure the cookie middleware:
Multiple cookie sessions per spider
New in version 0.15.
There is support for keeping multiple cookie sessions per spider by using the cookiejar Request meta key. By default it uses a single cookie jar (session), but you can pass an identifier to use different ones.
For example:
```python
for i, url in enumerate(urls):
    yield scrapy.Request(url, meta={'cookiejar': i},
        callback=self.parse_page)
```
Keep in mind that the cookiejar meta key is not “sticky”. You need to keep passing it along on subsequent requests. For example:
```python
def parse_page(self, response):
    # do some processing
    return scrapy.Request("http://www.example.com/otherpage",
        meta={'cookiejar': response.meta['cookiejar']},
        callback=self.parse_other_page)
```
COOKIES_ENABLED
Default: True
Whether to enable the cookies middleware. If disabled, no cookies will be sent to web servers.
Notice that despite the value of the COOKIES_ENABLED setting, if Request.meta['dont_merge_cookies'] evaluates to True the request cookies will not be sent to the web server and received cookies in Response will not be merged with the existing cookies.
For more detailed information see the cookies parameter in Request.
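For illustration (the spider name and URLs are placeholders), a request that supplies its own Cookie header and stays out of the shared cookie jar might look like this:

```python
import scrapy


class AccountSpider(scrapy.Spider):
    name = 'account_example'
    start_urls = ['http://www.example.com/']

    def parse(self, response):
        # Send a hand-built Cookie header and keep this request out of
        # the cookie handling done by CookiesMiddleware.
        yield scrapy.Request(
            'http://www.example.com/account',
            headers={'Cookie': 'currency=USD'},
            meta={'dont_merge_cookies': True},
            callback=self.parse_account,
        )

    def parse_account(self, response):
        self.logger.info('Got %s', response.url)
```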
COOKIES_DEBUG
Default: False
If enabled, Scrapy will log all cookies sent in requests (i.e. Cookie header) and all cookies received in responses (i.e. Set-Cookie header).
Here’s an example of a log with COOKIES_DEBUG enabled:
```
2011-04-06 14:35:10-0300 [scrapy.core.engine] INFO: Spider opened
2011-04-06 14:35:10-0300 [scrapy.downloadermiddlewares.cookies] DEBUG: Sending cookies to: <GET http://www.diningcity.com/netherlands/index.html>
        Cookie: clientlanguage_nl=en_EN
2011-04-06 14:35:14-0300 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <200 http://www.diningcity.com/netherlands/index.html>
        Set-Cookie: JSESSIONID=B~FA4DC0C496C8762AE4F1A620EAB34F38; Path=/
        Set-Cookie: ip_isocode=US
        Set-Cookie: clientlanguage_nl=en_EN; Expires=Thu, 07-Apr-2011 21:21:34 GMT; Path=/
2011-04-06 14:49:50-0300 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.diningcity.com/netherlands/index.html> (referer: None)
[...]
```
DefaultHeadersMiddleware
- class scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware [source] - This middleware sets all default request headers specified in the DEFAULT_REQUEST_HEADERS setting.
DownloadTimeoutMiddleware
- class scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware [source] - This middleware sets the download timeout for requests specified in the DOWNLOAD_TIMEOUT setting or download_timeout spider attribute.
Note
You can also set download timeout per-request using the download_timeout Request.meta key; this is supported even when DownloadTimeoutMiddleware is disabled.
HttpAuthMiddleware
- class scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware [source] - This middleware authenticates all requests generated from certain spiders using Basic access authentication (aka. HTTP auth).
To enable HTTP authentication from certain spiders, set the http_user and http_pass attributes of those spiders.
Example:
```python
from scrapy.spiders import CrawlSpider

class SomeIntranetSiteSpider(CrawlSpider):

    http_user = 'someuser'
    http_pass = 'somepass'
    name = 'intranet.example.com'

    # .. rest of the spider code omitted ...
```
HttpCacheMiddleware
- class scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware [source] - This middleware provides low-level cache to all HTTP requests and responses. It has to be combined with a cache storage backend as well as a cache policy.
Scrapy ships with three HTTP cache storage backends:
You can change the HTTP cache storage backend with the HTTPCACHE_STORAGE setting. Or you can also implement your own storage backend.
Scrapy ships with two HTTP cache policies:
You can change the HTTP cache policy with the HTTPCACHE_POLICY setting. Or you can also implement your own policy.
You can also avoid caching a response on any policy by setting the dont_cache meta key to True.
Dummy policy (default)
- class scrapy.extensions.httpcache.DummyPolicy [source] - This policy has no awareness of any HTTP Cache-Control directives. Every request and its corresponding response are cached. When the same request is seen again, the response is returned without transferring anything from the Internet.
The Dummy policy is useful for testing spiders faster (without having to wait for downloads every time) and for trying your spider offline, when an Internet connection is not available. The goal is to be able to “replay” a spider run exactly as it ran before.
RFC2616 policy
- class scrapy.extensions.httpcache.RFC2616Policy [source] - This policy provides a RFC2616 compliant HTTP cache, i.e. with HTTP Cache-Control awareness, aimed at production and used in continuous runs to avoid downloading unmodified data (to save bandwidth and speed up crawls).
What is implemented:
- Do not attempt to store responses/requests with no-store cache-control directive set
- Do not serve responses from cache if no-cache cache-control directive is set even for fresh responses
- Compute freshness lifetime from max-age cache-control directive
- Compute freshness lifetime from Expires response header
- Compute freshness lifetime from Last-Modified response header (heuristic used by Firefox)
- Compute current age from Age response header
- Compute current age from Date header
- Revalidate stale responses based on Last-Modified response header
- Revalidate stale responses based on ETag response header
- Set Date header for any received response missing it
- Support max-stale cache-control directive in requests

This allows spiders to be configured with the full RFC2616 cache policy, but avoid revalidation on a request-by-request basis, while remaining conformant with the HTTP spec.
Example:
Add Cache-Control: max-stale=600 to Request headers to accept responses that have exceeded their expiration time by no more than 600 seconds.
See also: RFC2616, 14.9.3
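A minimal sketch of attaching such a header to a request (the URL is a placeholder):

```python
import scrapy

# Accept cached responses that are stale by up to 600 seconds.
request = scrapy.Request(
    'http://www.example.com/prices',
    headers={'Cache-Control': 'max-stale=600'},
)
```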
What is missing:
- Pragma: no-cache support https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.1
- Vary header support https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.6
- Invalidation after updates or deletes https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.10
- … probably others ..
Filesystem storage backend (default)
- class scrapy.extensions.httpcache.FilesystemCacheStorage [source] - File system storage backend is available for the HTTP cache middleware.
Each request/response pair is stored in a different directory containing the following files:
- request_body - the plain request body
- request_headers - the request headers (in raw HTTP format)
- response_body - the plain response body
- response_headers - the response headers (in raw HTTP format)
- meta - some metadata of this cache resource in Python repr() format (grep-friendly format)
- pickled_meta - the same metadata in meta but pickled for more efficient deserialization

The directory name is made from the request fingerprint (see scrapy.utils.request.fingerprint), and one level of subdirectories is used to avoid creating too many files into the same directory (which is inefficient in many file systems). An example directory could be:
```
/path/to/cache/dir/example.com/72/72811f648e718090f041317756c03adb0ada46c7
```
DBM storage backend
- class scrapy.extensions.httpcache.DbmCacheStorage [source]
New in version 0.13.
A DBM storage backend is also available for the HTTP cache middleware.
By default, it uses the dbm module, but you can change it with the HTTPCACHE_DBM_MODULE setting.
Writing your own storage backend
You can implement a cache storage backend by creating a Python class that defines the methods described below.
- class scrapy.extensions.httpcache.CacheStorage
- open_spider(spider) - This method gets called after a spider has been opened for crawling. It handles the open_spider signal.
Parameters: spider (Spider object) – the spider which has been opened
- close_spider(spider) - This method gets called after a spider has been closed. It handles the close_spider signal.
Parameters: spider (Spider object) – the spider which has been closed
- retrieve_response(spider, request) - Return a response if it is present in the cache, or None otherwise.
Parameters:
- **spider** ([<code>Spider</code>]($fb832e20e85f228c.md#scrapy.spiders.Spider) object) – the spider which generated the request
- **request** ([<code>Request</code>]($a8ff621e6a7e7cdf.md#scrapy.http.Request) object) – the request to find cached response for
- store_response(spider, request, response) - Store the given response in the cache.
Parameters:
- **spider** ([<code>Spider</code>]($fb832e20e85f228c.md#scrapy.spiders.Spider) object) – the spider for which the response is intended
- **request** ([<code>Request</code>]($a8ff621e6a7e7cdf.md#scrapy.http.Request) object) – the corresponding request the spider generated
- **response** ([<code>Response</code>]($a8ff621e6a7e7cdf.md#scrapy.http.Response) object) – the response to store in the cache
In order to use your storage backend, set HTTPCACHE_STORAGE to the Python import path of your custom storage class.
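As a rough sketch of what such a backend could look like, here is a purely in-memory store (the class name is made up, the constructor signature is an assumption mirroring the built-in backends which receive the settings object, and nothing is persisted to disk):

```python
from scrapy.utils.request import fingerprint


class InMemoryCacheStorage:
    """Hypothetical cache storage backend keeping responses in a dict."""

    def __init__(self, settings):
        # Assumption: like the built-in backends, the storage is built
        # from the settings object.
        self._cache = {}

    def open_spider(self, spider):
        spider.logger.debug('In-memory HTTP cache opened')

    def close_spider(self, spider):
        self._cache.clear()

    def retrieve_response(self, spider, request):
        # Return a cached Response, or None on a cache miss.
        return self._cache.get(fingerprint(request))

    def store_response(self, spider, request, response):
        self._cache[fingerprint(request)] = response
```

HTTPCACHE_STORAGE would then be set to the import path of this class, e.g. 'myproject.httpcache.InMemoryCacheStorage' (a hypothetical path).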
HTTPCache middleware settings
The HttpCacheMiddleware can be configured through the following settings:
HTTPCACHE_ENABLED
New in version 0.11.
Default: False
Whether the HTTP cache will be enabled.
Changed in version 0.11: Before 0.11, HTTPCACHE_DIR was used to enable the cache.
HTTPCACHE_EXPIRATION_SECS
Default: 0
Expiration time for cached requests, in seconds.
Cached requests older than this time will be re-downloaded. If zero, cached requests will never expire.
Changed in version 0.11: Before 0.11, zero meant cached requests always expire.
HTTPCACHE_DIR
Default: 'httpcache'
The directory to use for storing the (low-level) HTTP cache. If empty, the HTTP cache will be disabled. If a relative path is given, it is taken relative to the project data dir. For more info see: Default structure of Scrapy projects.
HTTPCACHE_IGNORE_HTTP_CODES
New in version 0.10.
Default: []
Don’t cache responses with these HTTP codes.
HTTPCACHE_IGNORE_MISSING
Default: False
If enabled, requests not found in the cache will be ignored instead of downloaded.
HTTPCACHE_IGNORE_SCHEMES
New in version 0.10.
Default: ['file']
Don’t cache responses with these URI schemes.
HTTPCACHE_STORAGE
Default: 'scrapy.extensions.httpcache.FilesystemCacheStorage'
The class which implements the cache storage backend.
HTTPCACHE_DBM_MODULE
New in version 0.13.
Default: 'dbm'
The database module to use in the DBM storage backend. This setting is specific to the DBM backend.
HTTPCACHE_POLICY
New in version 0.18.
Default: 'scrapy.extensions.httpcache.DummyPolicy'
The class which implements the cache policy.
HTTPCACHE_GZIP
New in version 1.0.
Default: False
If enabled, will compress all cached data with gzip. This setting is specific to the Filesystem backend.
HTTPCACHE_ALWAYS_STORE
New in version 1.1.
Default: False
If enabled, will cache pages unconditionally.
A spider may wish to have all responses available in the cache, for future use with Cache-Control: max-stale, for instance. The DummyPolicy caches all responses but never revalidates them, and sometimes a more nuanced policy is desirable.
This setting still respects Cache-Control: no-store directives in responses. If you don’t want that, filter no-store out of the Cache-Control headers in responses you feed to the cache middleware.
HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS
New in version 1.1.
Default: []
List of Cache-Control directives in responses to be ignored.
Sites often set “no-store”, “no-cache”, “must-revalidate”, etc., but get upset at the traffic a spider can generate if it actually respects those directives. This setting allows you to selectively ignore Cache-Control directives that are known to be unimportant for the sites being crawled.
We assume that the spider will not issue Cache-Control directives in requests unless it actually needs them, so directives in requests are not filtered.
HttpCompressionMiddleware
- class scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware [source] - This middleware allows compressed (gzip, deflate) traffic to be sent/received from web sites.
This middleware also supports decoding brotli-compressed responses, provided brotlipy is installed.
HttpCompressionMiddleware Settings
COMPRESSION_ENABLED
Default: True
Whether the Compression middleware will be enabled.
HttpProxyMiddleware
New in version 0.8.
- class scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware [source] - This middleware sets the HTTP proxy to use for requests, by setting the proxy meta value for Request objects.
Like the Python standard library modules urllib and urllib2, it obeys the following environment variables:
- http_proxy
- https_proxy
- no_proxy
You can also set the meta key proxy per-request, to a value like http://some_proxy_server:port or http://username:password@some_proxy_server:port. Keep in mind this value will take precedence over the http_proxy / https_proxy environment variables, and it will also ignore the no_proxy environment variable.
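For instance (the proxy address and URL are placeholders):

```python
import scrapy

# Route this request through a specific proxy, taking precedence over
# the http_proxy / https_proxy environment variables.
request = scrapy.Request(
    'http://www.example.com/',
    meta={'proxy': 'http://some_proxy_server:8080'},
)
```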
RedirectMiddleware
- class scrapy.downloadermiddlewares.redirect.RedirectMiddleware [source] - This middleware handles redirection of requests based on response status.
The urls which the request goes through (while being redirected) can be found in the redirect_urls Request.meta key.
The reason behind each redirect in redirect_urls can be found in the redirect_reasons Request.meta key. For example: [301, 302, 307, 'meta refresh'].
The format of a reason depends on the middleware that handled the corresponding redirect. For example, RedirectMiddleware indicates the triggering response status code as an integer, while MetaRefreshMiddleware always uses the 'meta refresh' string as reason.
The RedirectMiddleware can be configured through the following settings (see the settings documentation for more info):
If Request.meta has the dont_redirect key set to True, the request will be ignored by this middleware.
If you want to handle some redirect status codes in your spider, you can specify these in the handle_httpstatus_list spider attribute.
For example, if you want the redirect middleware to ignore 301 and 302 responses (and pass them through to your spider) you can do this:
```python
class MySpider(CrawlSpider):

    handle_httpstatus_list = [301, 302]
```
The handle_httpstatus_list key of Request.meta can also be used to specify which response codes to allow on a per-request basis. You can also set the meta key handle_httpstatus_all to True if you want to allow any response code for a request.
RedirectMiddleware settings
REDIRECT_ENABLED
New in version 0.13.
Default: True
Whether the Redirect middleware will be enabled.
REDIRECT_MAX_TIMES
Default: 20
The maximum number of redirections that will be followed for a single request.
MetaRefreshMiddleware
- class scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware [source] - This middleware handles redirection of requests based on the meta-refresh html tag.
The MetaRefreshMiddleware can be configured through the following settings (see the settings documentation for more info):
This middleware obeys the REDIRECT_MAX_TIMES setting and the dont_redirect, redirect_urls and redirect_reasons request meta keys, as described for RedirectMiddleware.
MetaRefreshMiddleware settings
METAREFRESH_ENABLED
New in version 0.17.
Default: True
Whether the Meta Refresh middleware will be enabled.
METAREFRESH_IGNORE_TAGS
Default: []
Meta tags within these tags are ignored.
Changed in version 2.0: The default value of METAREFRESH_IGNORE_TAGS changed from ['script', 'noscript'] to [].
METAREFRESH_MAXDELAY
Default: 100
The maximum meta-refresh delay (in seconds) to follow the redirection. Some sites use meta-refresh for redirecting to a session expired page, so we restrict automatic redirection to the maximum delay.
RetryMiddleware
- class scrapy.downloadermiddlewares.retry.RetryMiddleware [source] - A middleware to retry failed requests that are potentially caused by temporary problems such as a connection timeout or HTTP 500 error.
Failed pages are collected during the scraping process and rescheduled at the end, once the spider has finished crawling all regular (non-failed) pages.
The RetryMiddleware can be configured through the following settings (see the settings documentation for more info):
If Request.meta has the dont_retry key set to True, the request will be ignored by this middleware.
RetryMiddleware Settings
RETRY_ENABLED
New in version 0.13.
Default: True
Whether the Retry middleware will be enabled.
RETRY_TIMES
Default: 2
Maximum number of times to retry, in addition to the first download.
Maximum number of retries can also be specified per-request using the max_retry_times attribute of Request.meta. When initialized, the max_retry_times meta key takes higher precedence over the RETRY_TIMES setting.
RETRY_HTTP_CODES
Default: [500, 502, 503, 504, 522, 524, 408, 429]
Which HTTP response codes to retry. Other errors (DNS lookup issues, connections lost, etc) are always retried.
In some cases you may want to add 400 to RETRY_HTTP_CODES because it is a common code used to indicate server overload. It is not included by default because HTTP specs say so.
RobotsTxtMiddleware
- class scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware [source] - This middleware filters out requests forbidden by the robots.txt exclusion standard.
To make sure Scrapy respects robots.txt, make sure the middleware is enabled and the ROBOTSTXT_OBEY setting is enabled.
The ROBOTSTXT_USER_AGENT setting can be used to specify the user agent string to use for matching in the robots.txt file. If it is None, the User-Agent header you are sending with the request or the USER_AGENT setting (in that order) will be used for determining the user agent to use in the robots.txt file.
This middleware has to be combined with a robots.txt parser.
Scrapy ships with support for the following robots.txt parsers:
- Protego (default)
- RobotFileParser
- Reppy
- Robotexclusionrulesparser

You can change the robots.txt parser with the ROBOTSTXT_PARSER setting. Or you can also implement support for a new parser.
If Request.meta has the dont_obey_robotstxt key set to True, the request will be ignored by this middleware even if ROBOTSTXT_OBEY is enabled.
Parsers vary in several aspects:
- Language of implementation
- Supported specification
- Support for wildcard matching
- Usage of length based rule: in particular for Allow and Disallow directives, where the most specific rule based on the length of the path trumps the less specific (shorter) rule
Performance comparison of different parsers is available at the following link.
Protego parser
Based on Protego:
- implemented in Python
- is compliant with Google’s Robots.txt Specification
- supports wildcard matching
- uses the length based rule
Scrapy uses this parser by default.
RobotFileParser
Based on RobotFileParser:
- is Python’s built-in robots.txt parser
- is compliant with Martijn Koster’s 1996 draft specification
- lacks support for wildcard matching
- doesn’t use the length based rule
It is faster than Protego and backward-compatible with versions of Scrapy before 1.8.0.
In order to use this parser, set ROBOTSTXT_PARSER to scrapy.robotstxt.PythonRobotParser.
Reppy parser
Based on Reppy:
- is a Python wrapper around Robots Exclusion Protocol Parser for C++
- is compliant with Martijn Koster’s 1996 draft specification
- supports wildcard matching
- uses the length based rule
Native implementation, provides better speed than Protego.
In order to use this parser:
- Install Reppy by running pip install reppy
- Set the ROBOTSTXT_PARSER setting to scrapy.robotstxt.ReppyRobotParser
Robotexclusionrulesparser
Based on Robotexclusionrulesparser:
- implemented in Python
- is compliant with Martijn Koster’s 1996 draft specification
- supports wildcard matching
- doesn’t use the length based rule
In order to use this parser:
- Install Robotexclusionrulesparser by running pip install robotexclusionrulesparser
- Set the ROBOTSTXT_PARSER setting to scrapy.robotstxt.RerpRobotParser
Implementing support for a new parser
You can implement support for a new robots.txt parser by subclassing the abstract base class RobotParser and implementing the methods described below.
- class scrapy.robotstxt.RobotParser [source]
- abstract allowed(url, user_agent) [source] - Return True if user_agent is allowed to crawl url, otherwise return False.
Parameters:
- **url** (_string_) – Absolute URL
- **user_agent** (_string_) – User agent
- abstract classmethod from_crawler(crawler, robotstxt_body) [source] - Parse the content of a robots.txt file as bytes. This must be a class method. It must return a new instance of the parser backend.
Parameters:
- **crawler** ([<code>Crawler</code>]($ceb8c09efd04cb82.md#scrapy.crawler.Crawler) instance) – crawler which made the request
- **robotstxt_body** ([_bytes_](https://docs.python.org/3/library/stdtypes.html#bytes)) – content of a [robots.txt](https://www.robotstxt.org/) file.
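A rough sketch of such a parser backend, wrapping Python’s standard-library urllib.robotparser purely for illustration (the class name is made up, and decoding/error handling is kept minimal):

```python
from urllib.robotparser import RobotFileParser

from scrapy.robotstxt import RobotParser
from scrapy.utils.python import to_unicode


class StdlibRobotParser(RobotParser):
    """Hypothetical parser backend built on urllib.robotparser."""

    def __init__(self, robotstxt_body):
        self.rp = RobotFileParser()
        self.rp.parse(to_unicode(robotstxt_body).splitlines())

    @classmethod
    def from_crawler(cls, crawler, robotstxt_body):
        return cls(robotstxt_body)

    def allowed(self, url, user_agent):
        return self.rp.can_fetch(to_unicode(user_agent), to_unicode(url))
```

ROBOTSTXT_PARSER would then point at the import path of the class (e.g. a hypothetical 'myproject.robotstxt.StdlibRobotParser').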
DownloaderStats
- class scrapy.downloadermiddlewares.stats.DownloaderStats [source] - Middleware that stores stats of all requests, responses and exceptions that pass through it.
To use this middleware you must enable the DOWNLOADER_STATS setting.
UserAgentMiddleware
- class scrapy.downloadermiddlewares.useragent.UserAgentMiddleware [source] - Middleware that allows spiders to override the default user agent.
In order for a spider to override the default user agent, its user_agent attribute must be set.
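For example (the spider name and user agent string are placeholders):

```python
import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example'
    # Overrides the USER_AGENT setting for requests made by this spider.
    user_agent = 'MyCrawler/1.0 (+http://www.example.com/bot)'
    start_urls = ['http://www.example.com/']

    def parse(self, response):
        self.logger.info('Fetched %s', response.url)
```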
AjaxCrawlMiddleware
- class scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware [source] - Middleware that finds ‘AJAX crawlable’ page variants based on the meta-fragment html tag. See https://developers.google.com/search/docs/ajax-crawling/docs/getting-started for more info.
Note
Scrapy finds ‘AJAX crawlable’ pages for URLs like 'http://example.com/!#foo=bar' even without this middleware. AjaxCrawlMiddleware is necessary when the URL doesn’t contain '!#'. This is often the case for ‘index’ or ‘main’ website pages.
AjaxCrawlMiddleware Settings
AJAXCRAWL_ENABLED
New in version 0.21.
Default: False
Whether the AjaxCrawlMiddleware will be enabled. You may want to enable it for broad crawls.
HttpProxyMiddleware settings
HTTPPROXY_ENABLED
Default: True
Whether or not to enable the HttpProxyMiddleware.
HTTPPROXY_AUTH_ENCODING
Default: "latin-1"
The default encoding for proxy authentication on HttpProxyMiddleware.