- Frequently Asked Questions
- How does Scrapy compare to BeautifulSoup or lxml?
- Can I use Scrapy with BeautifulSoup?
- What Python versions does Scrapy support?
- Did Scrapy “steal” X from Django?
- Does Scrapy work with HTTP proxies?
- How can I scrape an item with attributes in different pages?
- Scrapy crashes with: ImportError: No module named win32api
- How can I simulate a user login in my spider?
- Does Scrapy crawl in breadth-first or depth-first order?
- My Scrapy crawler has memory leaks. What can I do?
- How can I make Scrapy consume less memory?
- Can I use Basic HTTP Authentication in my spiders?
- Why does Scrapy download pages in English instead of my native language?
- Where can I find some example Scrapy projects?
- Can I run a spider without creating a project?
- I get “Filtered offsite request” messages. How can I fix them?
- What is the recommended way to deploy a Scrapy crawler in production?
- Can I use JSON for large exports?
- Can I return (Twisted) deferreds from signal handlers?
- What does the response status code 999 mean?
- Can I call pdb.set_trace() from my spiders to debug them?
- Simplest way to dump all my scraped items into a JSON/CSV/XML file?
- What’s this huge cryptic __VIEWSTATE parameter used in some forms?
- What’s the best way to parse big XML/CSV data feeds?
- Does Scrapy manage cookies automatically?
- How can I see the cookies being sent and received from Scrapy?
- How can I instruct a spider to stop itself?
- How can I prevent my Scrapy bot from getting banned?
- Should I use spider arguments or settings to configure my spider?
- I’m scraping an XML document and my XPath selector doesn’t return any items
- How to split an item into multiple items in an item pipeline?
- Does Scrapy support IPv6 addresses?
- How to deal with <class 'ValueError'>: filedescriptor out of range in select() exceptions?
Frequently Asked Questions
How does Scrapy compare to BeautifulSoup or lxml?
BeautifulSoup and lxml are libraries for parsing HTML and XML. Scrapy is an application framework for writing web spiders that crawl web sites and extract data from them.
Scrapy provides a built-in mechanism for extracting data (called selectors), but you can easily use BeautifulSoup (or lxml) instead, if you feel more comfortable working with them. After all, they’re just parsing libraries which can be imported and used from any Python code.
In other words, comparing BeautifulSoup (or lxml) to Scrapy is like comparing jinja2 to Django.
Can I use Scrapy with BeautifulSoup?
Yes, you can. As mentioned above, BeautifulSoup can be used for parsing HTML responses in Scrapy callbacks. You just have to feed the response’s body into a BeautifulSoup object and extract whatever data you need from it.
Here’s an example spider using the BeautifulSoup API, with lxml as the HTML parser:
```python
from bs4 import BeautifulSoup
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = (
        'http://www.example.com/',
    )

    def parse(self, response):
        # use lxml to get decent HTML parsing speed
        soup = BeautifulSoup(response.text, 'lxml')
        yield {
            "url": response.url,
            "title": soup.h1.string
        }
```
Note
BeautifulSoup supports several HTML/XML parsers. See BeautifulSoup’s official documentation on which ones are available.
What Python versions does Scrapy support?
Scrapy is supported on Python 3.5+ under CPython (the default Python implementation) and PyPy (starting with PyPy 5.9). Python 3 support was added in Scrapy 1.1. PyPy support was added in Scrapy 1.4, and PyPy3 support was added in Scrapy 1.5. Python 2 support was dropped in Scrapy 2.0.
Note
For Python 3 support on Windows, it is recommended to use Anaconda/Miniconda as outlined in the installation guide.
Did Scrapy “steal” X from Django?
Probably, but we don’t like that word. We think Django is a great open source project and an example to follow, so we’ve used it as an inspiration for Scrapy.
We believe that, if something is already done well, there’s no need to reinvent it. This concept, besides being one of the foundations for open source and free software, not only applies to software but also to documentation, procedures, policies, etc. So, instead of going through each problem ourselves, we choose to copy ideas from those projects that have already solved them properly, and focus on the real problems we need to solve.
We’d be proud if Scrapy serves as an inspiration for other projects. Feel free to steal from us!
Does Scrapy work with HTTP proxies?
Yes. Support for HTTP proxies is provided (since Scrapy 0.8) through the HTTP Proxy downloader middleware. See HttpProxyMiddleware.
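For illustration, here is a minimal sketch of routing a single request through a proxy by setting request.meta (the spider name and proxy URL are placeholders); the middleware also honours the standard http_proxy/https_proxy environment variables:

```python
import scrapy


class ProxySpider(scrapy.Spider):
    name = "proxy_example"  # hypothetical spider name
    start_urls = ["http://www.example.com/"]

    def start_requests(self):
        for url in self.start_urls:
            # HttpProxyMiddleware picks the proxy up from request.meta
            yield scrapy.Request(url, meta={"proxy": "http://localhost:8050"})

    def parse(self, response):
        yield {"url": response.url}
```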
How can I scrape an item with attributes in different pages?
See Passing additional data to callback functions.
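For illustration, a minimal sketch of building one item across two pages by passing the partially filled item to the next callback with cb_kwargs (the selectors and field names are hypothetical):

```python
import scrapy


class ItemAcrossPagesSpider(scrapy.Spider):
    name = "item_across_pages"  # hypothetical spider name
    start_urls = ["http://www.example.com/items"]

    def parse(self, response):
        for href in response.css("a.item::attr(href)").getall():
            item = {"list_url": response.url}
            # carry the partial item into the detail-page callback
            yield response.follow(href, callback=self.parse_detail,
                                  cb_kwargs={"item": item})

    def parse_detail(self, response, item):
        item["title"] = response.css("h1::text").get()
        yield item
```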
Scrapy crashes with: ImportError: No module named win32api
You need to install pywin32 because of this Twisted bug.
How can I simulate a user login in my spider?
See Using FormRequest.from_response() to simulate a user login.
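As a rough sketch, FormRequest.from_response() can pre-fill the login form found in the response (including hidden fields) and submit your credentials; the URL, form field names and failure check below are placeholders:

```python
import scrapy


class LoginSpider(scrapy.Spider):
    name = "login_example"  # hypothetical spider name
    start_urls = ["http://www.example.com/users/login.php"]

    def parse(self, response):
        # from_response() copies the form fields from the page and
        # overrides the ones given in formdata
        return scrapy.FormRequest.from_response(
            response,
            formdata={"username": "john", "password": "secret"},
            callback=self.after_login,
        )

    def after_login(self, response):
        if b"authentication failed" in response.body:
            self.logger.error("Login failed")
            return
        # continue crawling with the authenticated session...
        yield {"url": response.url}
```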
Does Scrapy crawl in breadth-first or depth-first order?
By default, Scrapy uses a LIFO queue for storing pending requests, which basically means that it crawls in DFO order. This order is more convenient in most cases.
If you do want to crawl in true BFO order, you can do it by setting the following settings:
```python
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'
```
While pending requests are below the configured values of CONCURRENT_REQUESTS, CONCURRENT_REQUESTS_PER_DOMAIN or CONCURRENT_REQUESTS_PER_IP, those requests are sent concurrently. As a result, the first few requests of a crawl rarely follow the desired order. Lowering those settings to 1 enforces the desired order, but it significantly slows down the crawl as a whole.
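If strict ordering matters more than crawl speed, the relevant settings.py lines would look roughly like this:

```python
# settings.py -- enforce strict ordering at the cost of concurrency
CONCURRENT_REQUESTS = 1
CONCURRENT_REQUESTS_PER_DOMAIN = 1
CONCURRENT_REQUESTS_PER_IP = 1
```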
My Scrapy crawler has memory leaks. What can I do?
See Debugging memory leaks. Also, Python has a built-in memory leak issue which is described in Leaks without leaks.
How can I make Scrapy consume less memory?
See previous question.
Can I use Basic HTTP Authentication in my spiders?
Yes, see HttpAuthMiddleware.
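As a sketch, HttpAuthMiddleware reads the credentials from spider attributes (the spider name, credentials and URL below are placeholders):

```python
import scrapy


class AuthSpider(scrapy.Spider):
    name = "auth_example"  # hypothetical spider name
    start_urls = ["http://www.example.com/protected"]

    # HttpAuthMiddleware uses these to build the Authorization header
    http_user = "someuser"
    http_pass = "somepass"

    def parse(self, response):
        yield {"url": response.url, "status": response.status}
```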
Why does Scrapy download pages in English instead of my native language?
Try changing the default Accept-Language request header by overriding the DEFAULT_REQUEST_HEADERS setting.
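For example, a hypothetical settings.py snippet asking servers for Spanish content could look like this:

```python
# settings.py -- override the default Accept-Language (en)
DEFAULT_REQUEST_HEADERS = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "es",
}
```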
Where can I find some example Scrapy projects?
See Examples.
Can I run a spider without creating a project?
Yes. You can use the runspider command. For example, if you have a spider written in a my_spider.py file you can run it with:
```
scrapy runspider my_spider.py
```
See the runspider command for more info.
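For reference, a minimal self-contained my_spider.py that works with runspider might look like this (the URL and selector are placeholders):

```python
# my_spider.py -- runnable without creating a project
import scrapy


class MySpider(scrapy.Spider):
    name = "my_spider"
    start_urls = ["http://www.example.com/"]

    def parse(self, response):
        yield {"title": response.css("title::text").get()}
```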
I get “Filtered offsite request” messages. How can I fix them?
Those messages (logged with DEBUG level) don’t necessarily mean there is a problem, so you may not need to fix them.
Those messages are thrown by the Offsite Spider Middleware, which is a spider middleware (enabled by default) whose purpose is to filter out requests to domains outside the ones covered by the spider.
For more info see: OffsiteMiddleware.
What is the recommended way to deploy a Scrapy crawler in production?
See Deploying Spiders.
Can I use JSON for large exports?
It’ll depend on how large your output is. See this warning in the JsonItemExporter documentation.
Can I return (Twisted) deferreds from signal handlers?
Some signals support returning deferreds from their handlers, others don’t. Seethe Built-in signals reference to know which ones.
What does the response status code 999 mean?
999 is a custom response status code used by Yahoo sites to throttle requests. Try slowing down the crawling speed by using a download delay of 2 (or higher) in your spider:
```python
from scrapy.spiders import CrawlSpider


class MySpider(CrawlSpider):
    name = 'myspider'
    download_delay = 2
    # [ ... rest of the spider code ... ]
```
Or by setting a global download delay in your project with the DOWNLOAD_DELAY setting.
Can I call pdb.set_trace() from my spiders to debug them?
Yes, but you can also use the Scrapy shell which allows you to quickly analyze (and even modify) the response being processed by your spider, which is, quite often, more useful than plain old pdb.set_trace().
For more info see Invoking the shell from spiders to inspect responses.
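As a sketch, inspect_response() from scrapy.shell drops you into the shell from inside a callback when some condition of interest is met (the condition and spider details below are hypothetical):

```python
import scrapy
from scrapy.shell import inspect_response


class DebugSpider(scrapy.Spider):
    name = "debug_example"  # hypothetical spider name
    start_urls = ["http://www.example.com/"]

    def parse(self, response):
        if not response.css("h1"):  # hypothetical condition worth inspecting
            # opens an interactive shell with this response available
            inspect_response(response, self)
        yield {"url": response.url}
```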
Simplest way to dump all my scraped items into a JSON/CSV/XML file?
To dump into a JSON file:
```
scrapy crawl myspider -o items.json
```
To dump into a CSV file:
```
scrapy crawl myspider -o items.csv
```
To dump into an XML file:
```
scrapy crawl myspider -o items.xml
```
For more information see Feed exports.
What’s this huge cryptic __VIEWSTATE parameter used in some forms?
The __VIEWSTATE parameter is used in sites built with ASP.NET/VB.NET. For more info on how it works see this page. Also, here’s an example spider which scrapes one of these sites.
What’s the best way to parse big XML/CSV data feeds?
Parsing big feeds with XPath selectors can be problematic since they need to build the DOM of the entire feed in memory, and this can be quite slow and consume a lot of memory.
In order to avoid parsing the entire feed at once in memory, you can use the functions xmliter and csviter from the scrapy.utils.iterators module. In fact, this is what the feed spiders (see Spiders) use under the hood.
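As a sketch, assuming a feed made of <product> nodes, xmliter lets a callback iterate over them without building the whole DOM:

```python
import scrapy
from scrapy.utils.iterators import xmliter


class FeedSpider(scrapy.Spider):
    name = "feed_example"  # hypothetical spider name
    start_urls = ["http://www.example.com/feed.xml"]

    def parse(self, response):
        # yields one Selector per <product> node, streamed from the response
        for product in xmliter(response, "product"):
            yield {"name": product.xpath("./name/text()").get()}
```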
Does Scrapy manage cookies automatically?
Yes, Scrapy receives and keeps track of cookies sent by servers, and sends them back on subsequent requests, like any regular web browser does.
For more info see Requests and Responses and CookiesMiddleware.
How can I see the cookies being sent and received from Scrapy?
Enable the COOKIES_DEBUG setting.
How can I instruct a spider to stop itself?
Raise the CloseSpider exception from a callback. For more info see: CloseSpider.
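For illustration, a minimal sketch raising CloseSpider on a hypothetical stop condition:

```python
import scrapy
from scrapy.exceptions import CloseSpider


class StoppingSpider(scrapy.Spider):
    name = "stopping_example"  # hypothetical spider name
    start_urls = ["http://www.example.com/items"]

    def parse(self, response):
        if not response.css("div.item"):  # hypothetical condition: nothing left to scrape
            raise CloseSpider("no more items found")
        for text in response.css("div.item::text").getall():
            yield {"text": text}
```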
How can I prevent my Scrapy bot from getting banned?
See Avoiding getting banned.
Should I use spider arguments or settings to configure my spider?
Both spider arguments and settings can be used to configure your spider. There is no strict rule that mandates using one or the other, but settings are more suited for parameters that, once set, don’t change much, while spider arguments are meant to change more often, even on each spider run, and are sometimes required for the spider to run at all (for example, to set the start url of a spider).
To illustrate with an example, assume you have a spider that needs to log into a site to scrape data, and you only want to scrape data from a certain section of the site (which varies each time). In that case, the credentials to log in would be settings, while the url of the section to scrape would be a spider argument.
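As a sketch, the section url could arrive as a spider argument (passed with -a on the command line), while stable values such as credentials stay in settings; everything below is hypothetical:

```python
import scrapy


class SectionSpider(scrapy.Spider):
    # hypothetical spider; run with:
    #   scrapy crawl section_example -a section_url="http://www.example.com/news/"
    name = "section_example"

    def __init__(self, section_url=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # spider argument: changes on every run
        self.start_urls = [section_url] if section_url else []

    def parse(self, response):
        # settings: stable values, e.g. a hypothetical SITE_USERNAME defined in settings.py
        username = self.settings.get("SITE_USERNAME")
        yield {"url": response.url, "user": username}
```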
I’m scraping an XML document and my XPath selector doesn’t return any items
You may need to remove namespaces. See Removing namespaces.
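As a sketch, assuming a namespaced feed (for example an Atom-like document), remove_namespaces() lets plain XPath expressions match again (spider details are hypothetical):

```python
import scrapy


class NamespaceSpider(scrapy.Spider):
    name = "namespace_example"  # hypothetical spider name
    start_urls = ["http://www.example.com/feed.atom"]

    def parse(self, response):
        # strip namespaces so //link matches regardless of the default namespace
        response.selector.remove_namespaces()
        for href in response.xpath("//link/@href").getall():
            yield {"link": href}
```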
How to split an item into multiple items in an item pipeline?
Item pipelines cannot yield multiple items per input item. Create a spider middleware instead, and use its process_spider_output() method for this purpose. For example:
```python
from copy import deepcopy

from scrapy.item import BaseItem


class MultiplyItemsMiddleware:

    def process_spider_output(self, response, result, spider):
        for item in result:
            if isinstance(item, (BaseItem, dict)):
                # emit one deep copy per requested multiple
                for _ in range(item['multiply_by']):
                    yield deepcopy(item)
```
Does Scrapy support IPv6 addresses?
Yes, by setting DNS_RESOLVER to scrapy.resolver.CachingHostnameResolver. Note that by doing so, you lose the ability to set a specific timeout for DNS requests (the value of the DNS_TIMEOUT setting is ignored).
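For example, in settings.py:

```python
DNS_RESOLVER = "scrapy.resolver.CachingHostnameResolver"
```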
How to deal with <class 'ValueError'>: filedescriptor out of range in select() exceptions?
This issue has been reported to appear when running broad crawls on macOS, where the default Twisted reactor is twisted.internet.selectreactor.SelectReactor. Switching to a different reactor is possible by using the TWISTED_REACTOR setting.
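For example, switching to the asyncio-based reactor (one possible alternative) in settings.py:

```python
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
```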