Requests and Responses
Scrapy uses Request and Response objects for crawling websites.
Typically, Request objects are generated in the spiders and pass across the system until they reach the Downloader, which executes the request and returns a Response object which travels back to the spider that issued the request.
Both Request and Response classes have subclasses which add functionality not required in the base classes. These are described below in Request subclasses and Response subclasses.
Request objects
- class scrapy.http.Request(url[, callback, method='GET', headers, body, cookies, meta, encoding='utf-8', priority=0, dont_filter=False, errback])
A Request object represents an HTTP request, which is usually generated in the Spider and executed by the Downloader, thus generating a Response.
Parameters:
- url (string) – the URL of this request
- callback (callable) – the function that will be called with the response of this request (once it's downloaded) as its first parameter. For more information see Passing additional data to callback functions below. If a Request doesn't specify a callback, the spider's parse() method will be used. Note that if exceptions are raised during processing, errback is called instead.
- method (string) – the HTTP method of this request. Defaults to 'GET'.
- meta (dict) – the initial values for the Request.meta attribute. If given, the dict passed in this parameter will be shallow copied.
- body (str or unicode) – the request body. If a unicode is passed, then it's encoded to str using the encoding passed (which defaults to utf-8). If body is not given, an empty string is stored. Regardless of the type of this argument, the final value stored will be a str (never unicode or None).
- headers (dict) – the headers of this request. The dict values can be strings (for single valued headers) or lists (for multi-valued headers). If None is passed as the value, the HTTP header will not be sent at all.
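For illustration, a hedged sketch of both header forms (the User-Agent value is a placeholder):

    # single-valued and multi-valued headers; a None value omits the header
    request = Request(
        url="http://www.example.com",
        headers={
            "User-Agent": "my-crawler",       # placeholder value
            "Accept-Language": ["en", "es"],  # multi-valued header, sent as a list
            "Referer": None,                  # this header will not be sent at all
        },
    )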
- cookies (dict or list) – the request cookies. These can be sent in two forms.
Using a dict:

    request_with_cookies = Request(url="http://www.example.com",
                                   cookies={'currency': 'USD', 'country': 'UY'})

Using a list of dicts:

    request_with_cookies = Request(url="http://www.example.com",
                                   cookies=[{'name': 'currency',
                                             'value': 'USD',
                                             'domain': 'example.com',
                                             'path': '/currency'}])
The latter form allows for customizing the domain and path attributes of the cookie. This is only useful if the cookies are saved for later requests.
When some site returns cookies (in a response) those are stored in the cookies for that domain and will be sent again in future requests. That's the typical behaviour of any regular web browser. However, if, for some reason, you want to avoid merging with existing cookies you can instruct Scrapy to do so by setting the dont_merge_cookies key in the Request.meta.
Example of a request without merging cookies:

    request_with_cookies = Request(url="http://www.example.com",
                                   cookies={'currency': 'USD', 'country': 'UY'},
                                   meta={'dont_merge_cookies': True})
For more info see CookiesMiddleware.
- encoding (string) – the encoding of this request (defaults to 'utf-8'). This encoding will be used to percent-encode the URL and to convert the body to str (if given as unicode).
- priority (int) – the priority of this request (defaults to 0). The priority is used by the scheduler to define the order used to process requests. Requests with a higher priority value will execute earlier. Negative values are allowed in order to indicate relatively low priority.
- dont_filter (boolean) – indicates that this request should not be filtered by the scheduler. This is used when you want to perform an identical request multiple times, to ignore the duplicates filter. Use it with care, or you will get into crawling loops. Defaults to False.
- errback (callable) – a function that will be called if any exception was raised while processing the request. This includes pages that failed with 404 HTTP errors and such. It receives a Twisted Failure instance as its first parameter.
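As an illustration, a minimal errback sketch; the spider method names are hypothetical, and catching HttpError assumes the HttpError spider middleware is enabled:

    import scrapy
    from twisted.internet.error import TimeoutError
    from scrapy.spidermiddlewares.httperror import HttpError

    def start_requests(self):
        yield scrapy.Request("http://www.example.com/",
                             callback=self.parse_page,   # hypothetical callback
                             errback=self.handle_error)  # hypothetical errback

    def handle_error(self, failure):
        # failure is a Twisted Failure instance
        if failure.check(HttpError):
            # the failed response is available on HttpError failures
            self.logger.error("HttpError on %s", failure.value.response.url)
        elif failure.check(TimeoutError):
            self.logger.error("TimeoutError on %s", failure.request.url)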
url
A string containing the URL of this request. Keep in mind that this attribute contains the escaped URL, so it can differ from the URL passed in the constructor.
This attribute is read-only. To change the URL of a Request use replace().
method
A string representing the HTTP method in the request. This is guaranteed to be uppercase. Example: "GET", "POST", "PUT", etc.
body
A str that contains the request body.
This attribute is read-only. To change the body of a Request use replace().
meta
A dict that contains arbitrary metadata for this request. This dict is empty for new Requests, and is usually populated by different Scrapy components (extensions, middlewares, etc). So the data contained in this dict depends on the extensions you have enabled.
See Request.meta special keys for a list of special meta keys recognized by Scrapy.
This dict is shallow copied when the request is cloned using the copy() or replace() methods, and can also be accessed, in your spider, from the response.meta attribute.
copy()
Return a new Request which is a copy of this Request. See also: Passing additional data to callback functions.
replace([url, method, headers, body, cookies, meta, encoding, dont_filter, callback, errback])
Return a Request object with the same members, except for those members given new values by whichever keyword arguments are specified. The attribute Request.meta is copied by default (unless a new value is given in the meta argument). See also Passing additional data to callback functions.
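A hedged sketch of replace() inside a callback (the POST retry is purely illustrative):

    # a sketch: re-issue the request that produced this response as a POST,
    # with dont_filter=True so the duplicates filter doesn't drop the copy
    def parse(self, response):
        yield response.request.replace(method="POST", dont_filter=True)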
Passing additional data to callback functions
The callback of a request is a function that will be called when the response of that request is downloaded. The callback function will be called with the downloaded Response object as its first argument.
Example:

    def parse_page1(self, response):
        return scrapy.Request("http://www.example.com/some_page.html",
                              callback=self.parse_page2)

    def parse_page2(self, response):
        # this would log http://www.example.com/some_page.html
        self.logger.info("Visited %s", response.url)
In some cases you may be interested in passing arguments to those callback functions so you can receive the arguments later, in the second callback. You can use the Request.meta attribute for that.
Here's an example of how to pass an item using this mechanism, to populate different fields from different pages:

    def parse_page1(self, response):
        item = MyItem()
        item['main_url'] = response.url
        request = scrapy.Request("http://www.example.com/some_page.html",
                                 callback=self.parse_page2)
        request.meta['item'] = item
        return request

    def parse_page2(self, response):
        item = response.meta['item']
        item['other_url'] = response.url
        return item
Request.meta special keys
The Request.meta attribute can contain any arbitrary data, but there are some special keys recognized by Scrapy and its built-in extensions.
Those are:
- dont_redirect
- dont_retry
- handle_httpstatus_list
- handle_httpstatus_all
- dont_merge_cookies (see the cookies parameter of the Request constructor)
- cookiejar
- dont_cache
- redirect_urls
- bindaddress
- dont_obey_robotstxt
- download_timeout
- download_maxsize
- proxy
bindaddress
The outgoing IP address to use for performing the request.
download_timeout
The amount of time (in secs) that the downloader will wait before timing out. See also: DOWNLOAD_TIMEOUT.
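A hedged sketch of setting several of these keys on a single request (the IP and proxy values are placeholders):

    # a sketch: per-request settings via Request.meta special keys
    request = Request(
        "http://www.example.com",
        meta={
            "bindaddress": "192.0.2.10",               # placeholder outgoing IP
            "download_timeout": 60,                    # seconds before timing out
            "proxy": "http://proxy.example.com:8080",  # placeholder proxy
        },
    )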
Request subclasses
Here is the list of built-in Request subclasses. You can also subclass it to implement your own custom functionality.
FormRequest objects
The FormRequest class extends the base Request with functionality for dealing with HTML forms. It uses lxml.html forms to pre-populate form fields with form data from Response objects.
- class scrapy.http.FormRequest(url[, formdata, …])
The FormRequest class adds a new argument to the constructor. The remaining arguments are the same as for the Request class and are not documented here.
Parameters: formdata (dict or iterable of tuples) – is a dictionary (or iterable of (key, value) tuples) containing HTML Form data which will be url-encoded and assigned to the body of the request.
FormRequest objects support the following class method in addition to the standard Request methods:
- classmethod from_response(response[, formname=None, formnumber=0, formdata=None, formxpath=None, clickdata=None, dont_click=False, …])
Returns a new FormRequest object with its form field values pre-populated with those found in the HTML <form> element contained in the given response. For an example see Using FormRequest.from_response() to simulate a user login.
The policy is to automatically simulate a click, by default, on any form control that looks clickable, like a <input type="submit">. Even though this is quite convenient, and often the desired behaviour, sometimes it can cause problems which could be hard to debug. For example, when working with forms that are filled and/or submitted using javascript, the default from_response() behaviour may not be the most appropriate. To disable this behaviour you can set the dont_click argument to True. Also, if you want to change the control clicked (instead of disabling it) you can also use the clickdata argument.
Parameters:
- response (Response object) – the response containing an HTML form which will be used to pre-populate the form fields
- formname (string) – if given, the form with name attribute set to this value will be used.
- formxpath (string) – if given, the first form that matches the xpath will be used.
- formnumber (integer) – the number of the form to use, when the response contains multiple forms. The first one (and also the default) is 0.
- formdata (dict) – fields to override in the form data. If a field was already present in the response <form> element, its value is overridden by the one passed in this parameter.
- clickdata (dict) – attributes to lookup the control clicked. If it's not given, the form data will be submitted simulating a click on the first clickable element. In addition to html attributes, the control can be identified by its zero-based index relative to other submittable inputs inside the form, via the nr attribute (see the sketch below).
- dont_click (boolean) – If True, the form data will be submitted without clicking in any element.
The other parameters of this class method are passed directly to the FormRequest constructor.
New in version 0.10.3: The formname parameter.
New in version 0.17: The formxpath parameter.
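A hedged sketch of the clickdata argument (the form field name is hypothetical):

    # a sketch: submit the form by clicking its second submittable input,
    # selected by zero-based index via the special 'nr' key
    request = FormRequest.from_response(
        response,
        formdata={'q': 'scrapy'},  # hypothetical field name
        clickdata={'nr': 1},
    )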
Request usage examples
Using FormRequest to send data via HTTP POST
If you want to simulate an HTML Form POST in your spider and send a couple of key-value fields, you can return a FormRequest object (from your spider) like this:
    return [FormRequest(url="http://www.example.com/post/action",
                        formdata={'name': 'John Doe', 'age': '27'},
                        callback=self.after_post)]
Using FormRequest.from_response() to simulate a user login
It is usual for websites to provide pre-populated form fields through <input type="hidden"> elements, such as session data or authentication tokens (for login pages). When scraping with Scrapy, if you want to pre-populate or override form fields like the user name and password, you can use the FormRequest.from_response() method. Here is an example spider which uses it:
    import scrapy

    class LoginSpider(scrapy.Spider):
        name = 'example.com'
        start_urls = ['http://www.example.com/users/login.php']

        def parse(self, response):
            return scrapy.FormRequest.from_response(
                response,
                formdata={'username': 'john', 'password': 'secret'},
                callback=self.after_login
            )

        def after_login(self, response):
            # check that the login succeeded before going on
            if "authentication failed" in response.body:
                self.logger.error("Login failed")
                return

            # continue scraping with authenticated session...
Response objects
- class scrapy.http.Response(url[, status=200, headers, body, flags])
A Response object represents an HTTP response, which is usually downloaded (by the Downloader) and fed to the Spiders for processing.
Parameters:
- url (string) – the URL of this response
- headers (dict) – the headers of this response. The dict values can be strings (for single valued headers) or lists (for multi-valued headers).
- status (integer) – the HTTP status of the response. Defaults to 200.
- body (str) – the response body. It must be str, not unicode, unless you're using an encoding-aware Response subclass, such as TextResponse.
- meta (dict) – the initial values for the Response.meta attribute. If given, the dict will be shallow copied.
- flags (list) – is a list containing the initial values for the Response.flags attribute. If given, the list will be shallow copied.
url
A string containing the URL of the response.
This attribute is read-only. To change the URL of a Response use replace().
body
A str containing the body of this Response. Keep in mind that Response.body is always a str. If you want the unicode version use TextResponse.body_as_unicode() (only available in TextResponse and subclasses).
This attribute is read-only. To change the body of a Response use replace().
request
The Request object that generated this response. This attribute is assigned in the Scrapy engine, after the response and the request have passed through all Downloader Middlewares. In particular, this means that:
- HTTP redirections will cause the original request (to the URL before redirection) to be assigned to the redirected response (with the final URL after redirection).
- Response.request.url doesn’t always equal Response.url
- This attribute is only available in the spider code, and in the Spider Middlewares, but not in Downloader Middlewares (although you have the Request available there by other means) and handlers of the response_downloaded signal.
meta
A shortcut to the Request.meta attribute of the Response.request object (i.e. self.request.meta).
Unlike the Response.request attribute, the Response.meta attribute is propagated along redirects and retries, so you will get the original Request.meta sent from your spider.
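A hedged sketch of the difference (the 'tag' meta key is hypothetical):

    # a sketch: even if the request was redirected, response.meta still
    # carries the value set on the original request in the spider
    def parse(self, response):
        self.logger.info("final url: %s", response.url)
        self.logger.info("tag: %s", response.meta.get('tag'))  # hypothetical key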
See also: the Request.meta attribute.
flags
A list that contains flags for this response. Flags are labels used for tagging Responses. For example: 'cached', 'redirected', etc. And they're shown on the string representation of the Response (__str__ method) which is used by the engine for logging.
replace([url, status, headers, body, request, flags, cls])
Returns a Response object with the same members, except for those members given new values by whichever keyword arguments are specified. The attribute Response.meta is copied by default.
urljoin(url)
Constructs an absolute url by combining the Response's url with a possible relative url.
This is a wrapper over urlparse.urljoin; it's merely an alias for making this call:

    urlparse.urljoin(response.url, url)
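For illustration, a hedged sketch of the typical use in a callback (the relative path is a placeholder):

    # a sketch: turn a relative link found in the page into an absolute URL
    def parse(self, response):
        next_url = response.urljoin("page2.html")  # placeholder relative path
        yield scrapy.Request(next_url, callback=self.parse)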
Response subclasses
Here is the list of available built-in Response subclasses. You can also subclass the Response class to implement your own functionality.
TextResponse objects
- class scrapy.http.TextResponse(url[, encoding[, …]])
TextResponse objects add encoding capabilities to the base Response class, which is meant to be used only for binary data, such as images, sounds or any media file.
TextResponse objects support a new constructor argument, in addition to the base Response objects. The remaining functionality is the same as for the Response class and is not documented here.
Parameters: encoding (string) – is a string which contains the encoding to use for this response. If you create a TextResponse object with a unicode body, it will be encoded using this encoding (remember the body attribute is always a string). If encoding is None (default value), the encoding will be looked up in the response headers and body instead.
TextResponse objects support the following attributes in addition to the standard Response ones:
encoding
A string with the encoding of this response. The encoding is resolved by trying the following mechanisms, in order:
- the encoding passed in the constructor encoding argument
- the encoding declared in the Content-Type HTTP header. If this encoding is not valid (ie. unknown), it is ignored and the next resolution mechanism is tried.
- the encoding declared in the response body. The TextResponse class doesn't provide any special functionality for this. However, the HtmlResponse and XmlResponse classes do.
- the encoding inferred by looking at the response body. This is the more fragile method but also the last one tried.
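A hedged sketch of the first mechanism, constructing a TextResponse by hand with a contrived body, just to show that the constructor argument wins:

    # a sketch: an explicit constructor encoding takes precedence over
    # anything declared in the headers or the body
    from scrapy.http import TextResponse

    resp = TextResponse(url="http://www.example.com",
                        body="\xa1Hola!",      # latin-1 encoded body
                        encoding="latin-1")
    print(resp.encoding)  # -> 'latin-1'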
selector
A Selector instance using the response as target. The selector is lazily instantiated on first access.
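For illustration, a hedged sketch of querying it in a callback:

    # a sketch: the selector is created on first access and reused afterwards
    def parse(self, response):
        title = response.selector.xpath('//title/text()').extract()
        self.logger.info("page title: %s", title)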
TextResponse objects support the following methods in addition to the standard Response ones:
body_as_unicode()
Returns the body of the response as unicode. This is equivalent to:

    response.body.decode(response.encoding)

But not equivalent to:

    unicode(response.body)

Since, in the latter case, you would be using your system default encoding (typically ascii) to convert the body to unicode, instead of the response encoding.
HtmlResponse objects
- class scrapy.http.HtmlResponse(url[, …])
The HtmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the HTML meta http-equiv attribute. See TextResponse.encoding.
XmlResponse objects
- class scrapy.http.XmlResponse(url[, …])
The XmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the XML declaration line. See TextResponse.encoding.