Requests and Responses
Scrapy uses Request and Response objects for crawling web sites.

Typically, Request objects are generated in the spiders and pass across the system until they reach the Downloader, which executes the request and returns a Response object which travels back to the spider that issued the request.

Both Request and Response classes have subclasses which add functionality not required in the base classes. These are described below in Request subclasses and Response subclasses.
Request objects
- class scrapy.http.Request(url, callback=None, method='GET', headers=None, body=None, cookies=None, meta=None, encoding='utf-8', priority=0, dont_filter=False, errback=None, flags=None, cb_kwargs=None)[source]

  A Request object represents an HTTP request, which is usually generated in the Spider and executed by the Downloader, thus generating a Response.
Parameters:

- url (string) – the URL of this request. If the URL is invalid, a ValueError exception is raised.
- callback (callable) – the function that will be called with the response of this request (once it's downloaded) as its first parameter. For more information see Passing additional data to callback functions below. If a Request doesn't specify a callback, the spider's parse() method will be used. Note that if exceptions are raised during processing, errback is called instead.
- method (string) – the HTTP method of this request. Defaults to 'GET'.
- meta (dict) – the initial values for the Request.meta attribute. If given, the dict passed in this parameter will be shallow copied.
- body (str or unicode) – the request body. If a unicode is passed, then it's encoded to str using the encoding passed (which defaults to utf-8). If body is not given, an empty string is stored. Regardless of the type of this argument, the final value stored will be a str (never unicode or None).
- headers (dict) – the headers of this request. The dict values can be strings (for single valued headers) or lists (for multi-valued headers). If None is passed as value, the HTTP header will not be sent at all.
- cookies (dict or list) – the request cookies. These can be sent in two forms.
Using a dict:

    request_with_cookies = Request(url="http://www.example.com",
                                   cookies={'currency': 'USD', 'country': 'UY'})

Using a list of dicts:

    request_with_cookies = Request(url="http://www.example.com",
                                   cookies=[{'name': 'currency',
                                             'value': 'USD',
                                             'domain': 'example.com',
                                             'path': '/currency'}])
The latter form allows for customizing the domain and path attributes of the cookie. This is only useful if the cookies are saved for later requests.

When some site returns cookies (in a response) those are stored in the cookies for that domain and will be sent again in future requests. That's the typical behaviour of any regular web browser.

To create a request that does not send stored cookies and does not store received cookies, set the dont_merge_cookies key to True in request.meta.
Example of a request that sends manually-defined cookies and ignores cookie storage:

    Request(
        url="http://www.example.com",
        cookies={'currency': 'USD', 'country': 'UY'},
        meta={'dont_merge_cookies': True},
    )

For more info see CookiesMiddleware.
- encoding (string) – the encoding of this request (defaults to 'utf-8'). This encoding will be used to percent-encode the URL and to convert the body to str (if given as unicode).
- priority (int) – the priority of this request (defaults to 0). The priority is used by the scheduler to define the order used to process requests. Requests with a higher priority value will execute earlier. Negative values are allowed in order to indicate relatively low priority.
- dont_filter (boolean) – indicates that this request should not be filtered by the scheduler. This is used when you want to perform an identical request multiple times, to ignore the duplicates filter. Use it with care, or you will get into crawling loops. Defaults to False.
- errback (callable) – a function that will be called if any exception was raised while processing the request. This includes pages that failed with 404 HTTP errors and such. It receives a Failure as first parameter. For more information, see Using errbacks to catch exceptions in request processing below.
Changed in version 2.0: The callback parameter is no longer required when the errback parameter is specified.
- flags (list) – flags sent to the request; they can be used for logging or similar purposes.
- cb_kwargs (dict) – A dict with arbitrary data that will be passed as keyword arguments to the Request’s callback.
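As an illustration of how these constructor arguments combine, here is a minimal sketch of building a request inside a spider; the URL, header value, and the `parse_items` and `handle_error` callbacks are hypothetical:

```python
import scrapy


class ExampleSpider(scrapy.Spider):  # hypothetical spider
    name = "example"

    def start_requests(self):
        yield scrapy.Request(
            url="http://www.example.com/api/items",  # placeholder URL
            callback=self.parse_items,               # hypothetical callback
            headers={"Accept-Language": "en"},
            priority=10,                # scheduled before priority-0 requests
            dont_filter=True,           # bypass the duplicates filter
            errback=self.handle_error,  # hypothetical errback
            cb_kwargs={"page": 1},      # passed to parse_items as a keyword argument
        )
```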
url

- A string containing the URL of this request. Keep in mind that this attribute contains the escaped URL, so it can differ from the URL passed in the __init__ method.

  This attribute is read-only. To change the URL of a Request use replace().
method

- A string representing the HTTP method in the request. This is guaranteed to be uppercase. Example: "GET", "POST", "PUT", etc.

headers

- A dictionary-like object which contains the request headers.
body

- A str that contains the request body.

  This attribute is read-only. To change the body of a Request use replace().
meta

- A dict that contains arbitrary metadata for this request. This dict is empty for new Requests, and is usually populated by different Scrapy components (extensions, middlewares, etc). So the data contained in this dict depends on the extensions you have enabled.

  See Request.meta special keys for a list of special meta keys recognized by Scrapy.

  This dict is shallow copied when the request is cloned using the copy() or replace() methods, and can also be accessed, in your spider, from the response.meta attribute.
cb_kwargs

- A dictionary that contains arbitrary metadata for this request. Its contents will be passed to the Request's callback as keyword arguments. It is empty for new Requests, which means by default callbacks only get a Response object as argument.

  This dict is shallow copied when the request is cloned using the copy() or replace() methods, and can also be accessed, in your spider, from the response.cb_kwargs attribute.
copy()[source]

- Return a new Request which is a copy of this Request. See also: Passing additional data to callback functions.
replace([url, method, headers, body, cookies, meta, flags, encoding, priority, dont_filter, callback, errback, cb_kwargs])[source]

- Return a Request object with the same members, except for those members given new values by whichever keyword arguments are specified. The Request.cb_kwargs and Request.meta attributes are shallow copied by default (unless new values are given as arguments). See also Passing additional data to callback functions.
classmethod from_curl(curl_command, ignore_unknown_options=True, **kwargs)[source]

- Create a Request object from a string containing a cURL command. It populates the HTTP method, the URL, the headers, the cookies and the body. It accepts the same arguments as the Request class, taking preference and overriding the values of the same arguments contained in the cURL command.

  Unrecognized options are ignored by default. To raise an error when finding unknown options call this method by passing ignore_unknown_options=False.
Caution

Using from_curl() from Request subclasses, such as JSONRequest, or XmlRpcRequest, as well as having downloader middlewares and spider middlewares enabled, such as DefaultHeadersMiddleware, UserAgentMiddleware, or HttpCompressionMiddleware, may modify the Request object.
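A minimal sketch of calling from_curl(); the cURL command and URL are illustrative, and the method, URL and headers are parsed out of the command string:

```python
from scrapy import Request

request = Request.from_curl(
    "curl 'http://www.example.com/api' -H 'Accept: application/json'"
)

# Keyword arguments take preference over what the cURL command specifies:
request = Request.from_curl(
    "curl 'http://www.example.com/api'", dont_filter=True
)
```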
Passing additional data to callback functions
The callback of a request is a function that will be called when the response of that request is downloaded. The callback function will be called with the downloaded Response object as its first argument.
Example:
    def parse_page1(self, response):
        return scrapy.Request("http://www.example.com/some_page.html",
                              callback=self.parse_page2)

    def parse_page2(self, response):
        # this would log http://www.example.com/some_page.html
        self.logger.info("Visited %s", response.url)
In some cases you may be interested in passing arguments to those callback functions so you can receive the arguments later, in the second callback. The following example shows how to achieve this by using the Request.cb_kwargs attribute:
    def parse(self, response):
        request = scrapy.Request('http://www.example.com/index.html',
                                 callback=self.parse_page2,
                                 cb_kwargs=dict(main_url=response.url))
        request.cb_kwargs['foo'] = 'bar'  # add more arguments for the callback
        yield request

    def parse_page2(self, response, main_url, foo):
        yield dict(
            main_url=main_url,
            other_url=response.url,
            foo=foo,
        )
Caution
Request.cb_kwargs was introduced in version 1.7. Prior to that, using Request.meta was recommended for passing information around callbacks. After 1.7, Request.cb_kwargs became the preferred way for handling user information, leaving Request.meta for communication with components like middlewares and extensions.
Using errbacks to catch exceptions in request processing
The errback of a request is a function that will be called when an exception is raised while processing it.

It receives a Failure as first parameter and can be used to track connection establishment timeouts, DNS errors etc.
Here's an example spider logging all errors and catching some specific errors if needed:
    import scrapy

    from scrapy.spidermiddlewares.httperror import HttpError
    from twisted.internet.error import DNSLookupError
    from twisted.internet.error import TimeoutError, TCPTimedOutError


    class ErrbackSpider(scrapy.Spider):
        name = "errback_example"
        start_urls = [
            "http://www.httpbin.org/",              # HTTP 200 expected
            "http://www.httpbin.org/status/404",    # Not found error
            "http://www.httpbin.org/status/500",    # server issue
            "http://www.httpbin.org:12345/",        # non-responding host, timeout expected
            "http://www.httphttpbinbin.org/",       # DNS error expected
        ]

        def start_requests(self):
            for u in self.start_urls:
                yield scrapy.Request(u, callback=self.parse_httpbin,
                                     errback=self.errback_httpbin,
                                     dont_filter=True)

        def parse_httpbin(self, response):
            self.logger.info('Got successful response from {}'.format(response.url))
            # do something useful here...

        def errback_httpbin(self, failure):
            # log all failures
            self.logger.error(repr(failure))

            # in case you want to do something special for some errors,
            # you may need the failure's type:

            if failure.check(HttpError):
                # these exceptions come from HttpError spider middleware
                # you can get the non-200 response
                response = failure.value.response
                self.logger.error('HttpError on %s', response.url)

            elif failure.check(DNSLookupError):
                # this is the original request
                request = failure.request
                self.logger.error('DNSLookupError on %s', request.url)

            elif failure.check(TimeoutError, TCPTimedOutError):
                request = failure.request
                self.logger.error('TimeoutError on %s', request.url)
Request.meta special keys
The Request.meta attribute can contain any arbitrary data, but there are some special keys recognized by Scrapy and its built-in extensions.

Those are:
- dont_redirect
- dont_retry
- handle_httpstatus_list
- handle_httpstatus_all
- dont_merge_cookies
- cookiejar
- dont_cache
- redirect_reasons
- redirect_urls
- bindaddress
- dont_obey_robotstxt
- download_timeout
- download_maxsize
- download_latency
- download_fail_on_dataloss
- proxy
- ftp_user (see FTP_USER for more info)
- ftp_password (see FTP_PASSWORD for more info)
- referrer_policy
- max_retry_times
bindaddress

The outgoing IP address to use for performing the request.
download_timeout

The amount of time (in secs) that the downloader will wait before timing out. See also: DOWNLOAD_TIMEOUT.
download_latency

The amount of time spent to fetch the response, since the request has been started, i.e. HTTP message sent over the network. This meta key only becomes available when the response has been downloaded. While most other meta keys are used to control Scrapy behavior, this one is supposed to be read-only.
download_fail_on_dataloss

Whether or not to fail on broken responses. See: DOWNLOAD_FAIL_ON_DATALOSS.
max_retry_times

The meta key is used to set retry times per request. When initialized, the max_retry_times meta key takes higher precedence over the RETRY_TIMES setting.
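As a sketch of how these keys are set in practice (assuming the snippet runs inside a spider callback; the URL and values are illustrative), several of them can be combined in a single request's meta dict:

```python
import scrapy

yield scrapy.Request(
    "http://www.example.com/flaky-endpoint",  # placeholder URL
    meta={
        "max_retry_times": 5,     # overrides the RETRY_TIMES setting for this request
        "download_timeout": 10,   # seconds before the downloader gives up
        "dont_redirect": True,    # leave redirect responses untouched
    },
    callback=self.parse,
)
```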
Request subclasses
Here is the list of built-in Request subclasses. You can also subclass it to implement your own custom functionality.
FormRequest objects
The FormRequest class extends the base Request with functionality for dealing with HTML forms. It uses lxml.html forms to pre-populate form fields with form data from Response objects.
- class scrapy.http.FormRequest(url[, formdata, …])[source]

  The FormRequest class adds a new keyword parameter to the __init__ method. The remaining arguments are the same as for the Request class and are not documented here.
Parameters: formdata (dict or iterable of tuples) – is a dictionary (or iterable of (key, value) tuples) containing HTML Form data which will be url-encoded and assigned to the body of the request.
The FormRequest objects support the following class method in addition to the standard Request methods:
- classmethod from_response(response[, formname=None, formid=None, formnumber=0, formdata=None, formxpath=None, formcss=None, clickdata=None, dont_click=False, …])[source]

  Returns a new FormRequest object with its form field values pre-populated with those found in the HTML <form> element contained in the given response. For an example see Using FormRequest.from_response() to simulate a user login.

  The policy is to automatically simulate a click, by default, on any form control that looks clickable, like an <input type="submit">. Even though this is quite convenient, and often the desired behaviour, sometimes it can cause problems which could be hard to debug. For example, when working with forms that are filled and/or submitted using javascript, the default from_response() behaviour may not be the most appropriate. To disable this behaviour you can set the dont_click argument to True. Also, if you want to change the control clicked (instead of disabling it) you can also use the clickdata argument.
Caution

Using this method with select elements which have leading or trailing whitespace in the option values will not work due to a bug in lxml, which should be fixed in lxml 3.8 and above.
Parameters:
- **response** ([<code>Response</code>](#scrapy.http.Response) object) – the response containing an HTML form which will be used to pre-populate the form fields
- **formname** (_string_) – if given, the form with name attribute set to this value will be used.
- **formid** (_string_) – if given, the form with id attribute set to this value will be used.
- **formxpath** (_string_) – if given, the first form that matches the xpath will be used.
- **formcss** (_string_) – if given, the first form that matches the css selector will be used.
- **formnumber** (_integer_) – the number of the form to use, when the response contains multiple forms. The first one (and also the default) is <code>0</code>.
- **formdata** ([_dict_](https://docs.python.org/3/library/stdtypes.html#dict)) – fields to override in the form data. If a field was already present in the response <code><form></code> element, its value is overridden by the one passed in this parameter. If a value passed in this parameter is <code>None</code>, the field will not be included in the request, even if it was present in the response <code><form></code> element.
- **clickdata** ([_dict_](https://docs.python.org/3/library/stdtypes.html#dict)) – attributes to look up the control clicked. If it's not given, the form data will be submitted simulating a click on the first clickable element. In addition to html attributes, the control can be identified by its zero-based index relative to other submittable inputs inside the form, via the <code>nr</code> attribute.
- **dont_click** (_boolean_) – If True, the form data will be submitted without clicking any element.
The other parameters of this class method are passed directly to the FormRequest __init__ method.
New in version 0.10.3: The formname parameter.

New in version 0.17: The formxpath parameter.

New in version 1.1.0: The formcss parameter.

New in version 1.1.0: The formid parameter.
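For instance, a minimal sketch (the form index, field name and callback are hypothetical) that pre-populates the second form on the page and submits it through its first submittable control:

```python
from scrapy import FormRequest

# Assumes `response` holds a page with at least two <form> elements.
request = FormRequest.from_response(
    response,
    formnumber=1,                 # use the second form on the page
    formdata={"q": "scrapy"},     # hypothetical field to override
    clickdata={"nr": 0},          # click the first submittable control
    callback=self.parse_results,  # hypothetical callback
)
```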
Request usage examples
Using FormRequest to send data via HTTP POST
If you want to simulate an HTML Form POST in your spider and send a couple of key-value fields, you can return a FormRequest object (from your spider) like this:
    return [FormRequest(url="http://www.example.com/post/action",
                        formdata={'name': 'John Doe', 'age': '27'},
                        callback=self.after_post)]
Using FormRequest.from_response() to simulate a user login
It is usual for web sites to provide pre-populated form fields through <input type="hidden"> elements, such as session related data or authentication tokens (for login pages). When scraping, you'll want these fields to be automatically pre-populated and only override a couple of them, such as the user name and password. You can use the FormRequest.from_response() method for this job. Here's an example spider which uses it:
    import scrapy


    def authentication_failed(response):
        # TODO: Check the contents of the response and return True if it failed
        # or False if it succeeded.
        pass


    class LoginSpider(scrapy.Spider):
        name = 'example.com'
        start_urls = ['http://www.example.com/users/login.php']

        def parse(self, response):
            return scrapy.FormRequest.from_response(
                response,
                formdata={'username': 'john', 'password': 'secret'},
                callback=self.after_login
            )

        def after_login(self, response):
            if authentication_failed(response):
                self.logger.error("Login failed")
                return

            # continue scraping with authenticated session...
JsonRequest
The JsonRequest class extends the base Request class with functionality for dealing with JSON requests.
- class scrapy.http.JsonRequest(url[, … data, dumps_kwargs])[source]

  The JsonRequest class adds two new keyword parameters to the __init__ method. The remaining arguments are the same as for the Request class and are not documented here.
Using the JsonRequest will set the Content-Type header to application/json and the Accept header to application/json, text/javascript, */*; q=0.01.
Parameters:
- data (JSON serializable object) – is any JSON serializable object that needs to be JSON encoded and assigned to body. If the Request.body argument is provided this parameter will be ignored. If the Request.body argument is not provided and the data argument is provided, Request.method will be set to 'POST' automatically.
- dumps_kwargs (dict) – Parameters that will be passed to the underlying json.dumps method which is used to serialize data into JSON format.
JsonRequest usage example
Sending a JSON POST request with a JSON payload:
    data = {
        'name1': 'value1',
        'name2': 'value2',
    }
    yield JsonRequest(url='http://www.example.com/post/action', data=data)
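If the serialization needs tuning, dumps_kwargs is forwarded to json.dumps. A small sketch (the URL is a placeholder; sort_keys is one of json.dumps' standard keyword arguments):

```python
from scrapy.http import JsonRequest

data = {'name1': 'value1', 'name2': 'value2'}

yield JsonRequest(
    url='http://www.example.com/post/action',  # placeholder URL
    data=data,
    dumps_kwargs={'sort_keys': True},          # forwarded to json.dumps
)
```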
Response objects
- class scrapy.http.Response(url, status=200, headers=None, body=b'', flags=None, request=None, certificate=None)[source]

  A Response object represents an HTTP response, which is usually downloaded (by the Downloader) and fed to the Spiders for processing.
Parameters:

- url (string) – the URL of this response
- status (integer) – the HTTP status of the response. Defaults to 200.
- headers (dict) – the headers of this response. The dict values can be strings (for single valued headers) or lists (for multi-valued headers).
- body (bytes) – the response body. To access the decoded text as str you can use response.text from an encoding-aware Response subclass, such as TextResponse.
- flags (list) – is a list containing the initial values for the Response.flags attribute. If given, the list will be shallow copied.
- request (scrapy.http.Request) – the initial value of the Response.request attribute. This represents the Request that generated this response.
- certificate (twisted.internet.ssl.Certificate) – an object representing the server's SSL certificate.
url

- A string containing the URL of the response.

  This attribute is read-only. To change the URL of a Response use replace().
status

- An integer representing the HTTP status of the response. Example: 200, 404.

headers

- A dictionary-like object which contains the response headers. Values can be accessed using get() to return the first header value with the specified name or getlist() to return all header values with the specified name. For example, this call will give you all cookies in the headers:

      response.headers.getlist('Set-Cookie')
body

- The body of this Response. Keep in mind that Response.body is always a bytes object. If you want the unicode version use TextResponse.text (only available in TextResponse and subclasses).

  This attribute is read-only. To change the body of a Response use replace().
request

- The Request object that generated this response. This attribute is assigned in the Scrapy engine, after the response and the request have passed through all Downloader Middlewares. In particular, this means that:

  - HTTP redirections will cause the original request (to the URL before redirection) to be assigned to the redirected response (with the final URL after redirection).
  - Response.request.url doesn't always equal Response.url.
  - This attribute is only available in the spider code, and in the Spider Middlewares, but not in Downloader Middlewares (although you have the Request available there by other means) and handlers of the response_downloaded signal.
meta

- A shortcut to the Request.meta attribute of the Response.request object (i.e. self.request.meta).

  Unlike the Response.request attribute, the Response.meta attribute is propagated along redirects and retries, so you will get the original Request.meta sent from your spider.

  See also: Request.meta attribute.
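A minimal sketch of that behaviour, assuming the methods live in a spider and the URLs are placeholders: the value stored in the request's meta is still available from response.meta even if the request was redirected along the way.

```python
import scrapy


def parse(self, response):
    yield scrapy.Request(
        "http://www.example.com/maybe-redirects",  # placeholder URL
        meta={"origin": response.url},
        callback=self.parse_target,
    )


def parse_target(self, response):
    # response.meta is a shortcut to response.request.meta and is
    # propagated along redirects and retries.
    self.logger.info("Came from %s", response.meta["origin"])
```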
cb_kwargs

New in version 2.0.

- A shortcut to the Request.cb_kwargs attribute of the Response.request object (i.e. self.request.cb_kwargs).

  Unlike the Response.request attribute, the Response.cb_kwargs attribute is propagated along redirects and retries, so you will get the original Request.cb_kwargs sent from your spider.

  See also: Request.cb_kwargs attribute.
flags

- A list that contains flags for this response. Flags are labels used for tagging Responses. For example: 'cached', 'redirected', etc. And they're shown on the string representation of the Response (__str__ method) which is used by the engine for logging.

certificate

- A twisted.internet.ssl.Certificate object representing the server's SSL certificate.

  Only populated for https responses, None otherwise.
copy()[source]

- Returns a new Response which is a copy of this Response.
replace([url, status, headers, body, request, flags, cls])[source]

- Returns a Response object with the same members, except for those members given new values by whichever keyword arguments are specified. The attribute Response.meta is copied by default.

urljoin(url)[source]

- Constructs an absolute url by combining the Response's url with a possible relative url.

  This is a wrapper over urlparse.urljoin, it's merely an alias for making this call:

      urlparse.urljoin(response.url, url)
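For example (the URLs are illustrative), given a response downloaded from http://www.example.com/a/b.html:

```python
absolute_url = response.urljoin("../c.html")
# absolute_url == "http://www.example.com/c.html"
```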
follow(url, callback=None, method='GET', headers=None, body=None, cookies=None, meta=None, encoding='utf-8', priority=0, dont_filter=False, errback=None, cb_kwargs=None, flags=None)[source]

- Return a Request instance to follow a link url. It accepts the same arguments as the Request.__init__ method, but url can be a relative URL or a scrapy.link.Link object, not only an absolute URL.

  TextResponse provides a follow() method which supports selectors in addition to absolute/relative URLs and Link objects.

New in version 2.0: The flags parameter.
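A minimal sketch, assuming it sits inside a spider callback and the relative path is a placeholder:

```python
def parse(self, response):
    # The relative URL is resolved against response.url.
    yield response.follow("page2.html", callback=self.parse)
```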
follow_all(urls, callback=None, method='GET', headers=None, body=None, cookies=None, meta=None, encoding='utf-8', priority=0, dont_filter=False, errback=None, cb_kwargs=None, flags=None)[source]

New in version 2.0.

- Return an iterable of Request instances to follow all links in urls. It accepts the same arguments as the Request.__init__ method, but elements of urls can be relative URLs or Link objects, not only absolute URLs.

  TextResponse provides a follow_all() method which supports selectors in addition to absolute/relative URLs and Link objects.
Response subclasses
Here is the list of available built-in Response subclasses. You can also subclass the Response class to implement your own functionality.
TextResponse objects
- class scrapy.http.TextResponse(url[, encoding[, …]])[source]

  TextResponse objects add encoding capabilities to the base Response class, which is meant to be used only for binary data, such as images, sounds or any media file.

  TextResponse objects support a new __init__ method argument, in addition to the base Response objects. The remaining functionality is the same as for the Response class and is not documented here.
Parameters: encoding (string) – is a string which contains the encoding to use for this response. If you create a TextResponse object with a unicode body, it will be encoded using this encoding (remember the body attribute is always a string). If encoding is None (default value), the encoding will be looked up in the response headers and body instead.
TextResponse objects support the following attributes in addition to the standard Response ones:
text

- The same as response.body.decode(response.encoding), but the result is cached after the first call, so you can access response.text multiple times without extra overhead.

Note

unicode(response.body) is not a correct way to convert the response body to unicode: you would be using the system default encoding (typically ascii) instead of the response encoding.
encoding

- A string with the encoding of this response. The encoding is resolved by trying the following mechanisms, in order:

  - the encoding passed in the __init__ method encoding argument
  - the encoding declared in the Content-Type HTTP header. If this encoding is not valid (i.e. unknown), it is ignored and the next resolution mechanism is tried.
  - the encoding declared in the response body. The TextResponse class doesn't provide any special functionality for this. However, the HtmlResponse and XmlResponse classes do.
  - the encoding inferred by looking at the response body. This is the more fragile method but also the last one tried.
selector

- A Selector instance using the response as target. The selector is lazily instantiated on first access.
TextResponse objects support the following methods in addition to the standard Response ones:
xpath(query)[source]

- A shortcut to TextResponse.selector.xpath(query):

      response.xpath('//p')
css(query)[source]

- A shortcut to TextResponse.selector.css(query):

      response.css('p')
follow(url, callback=None, method='GET', headers=None, body=None, cookies=None, meta=None, encoding=None, priority=0, dont_filter=False, errback=None, cb_kwargs=None, flags=None)[source]

- Return a Request instance to follow a link url. It accepts the same arguments as the Request.__init__ method, but url can be not only an absolute URL, but also

  - a relative URL
  - a Link object, e.g. the result of Link Extractors
  - a Selector object for a <link> or <a> element, e.g. response.css('a.my_link')[0]
  - an attribute Selector (not SelectorList), e.g. response.css('a::attr(href)')[0] or response.xpath('//img/@src')[0]

  See A shortcut for creating Requests for usage examples.
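For instance, a minimal sketch (the CSS class is a placeholder) that follows the first matching anchor element directly; follow() reads its href attribute:

```python
def parse(self, response):
    links = response.css("a.next-page")  # placeholder selector
    if links:
        yield response.follow(links[0], callback=self.parse)
```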
follow_all(urls=None, callback=None, method='GET', headers=None, body=None, cookies=None, meta=None, encoding=None, priority=0, dont_filter=False, errback=None, cb_kwargs=None, flags=None, css=None, xpath=None)[source]

- A generator that produces Request instances to follow all links in urls. It accepts the same arguments as the Request's __init__ method, except that each urls element does not need to be an absolute URL, it can be any of the following:

  - a relative URL
  - a Link object, e.g. the result of Link Extractors
  - a Selector object for a <link> or <a> element, e.g. response.css('a.my_link')[0]
  - an attribute Selector (not SelectorList), e.g. response.css('a::attr(href)')[0] or response.xpath('//img/@src')[0]

  In addition, css and xpath arguments are accepted to perform the link extraction within the follow_all method (only one of urls, css and xpath is accepted).

  Note that when passing a SelectorList as argument for the urls parameter or using the css or xpath parameters, this method will not produce requests for selectors from which links cannot be obtained (for instance, anchor tags without an href attribute).
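A minimal sketch of the css shortcut (the selector is a placeholder); anchors without an href attribute are simply skipped:

```python
def parse(self, response):
    # Link extraction happens inside follow_all() itself.
    yield from response.follow_all(css="a.pagination", callback=self.parse)
```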
body_as_unicode()[source]

- The same as text, but available as a method. This method is kept for backward compatibility; please prefer response.text.
HtmlResponse objects
- class scrapy.http.HtmlResponse(url[, …])[source]

  The HtmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the HTML meta http-equiv attribute. See TextResponse.encoding.
XmlResponse objects
- class scrapy.http.XmlResponse(url[, …])[source]

  The XmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the XML declaration line. See TextResponse.encoding.