Coroutines
New in version 2.0.
Scrapy has partial support for the coroutine syntax.
Warning
asyncio support in Scrapy is experimental. Future Scrapy versions may introduce related API and behavior changes without a deprecation period or warning.
Supported callables
The following callables may be defined as coroutines using async def, and hence use coroutine syntax (e.g. await, async for, async with):
- Request callbacks.
  The following are known caveats of the current implementation that we aim to address in future versions of Scrapy:
  - The callback output is not processed until the whole callback finishes.
    As a side effect, if the callback raises an exception, none of its output is processed.
  - Because asynchronous generators were introduced in Python 3.6, you can only use yield if you are using Python 3.6 or later.
    If you need to output multiple items or requests and you are using Python 3.5, return an iterable (e.g. a list) instead; see the sketch after this list.
- The process_item() method of item pipelines.
- The process_request(), process_response(), and process_exception() methods of downloader middlewares.
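For example, a Request callback can be written as a plain coroutine that awaits asynchronous work and then returns an iterable of items, which avoids the asynchronous-generator caveat above and also works on Python 3.5. This is only a sketch; db.get_some_data() stands in for any awaitable (a Deferred, a Future or another coroutine):

```python
from scrapy import Spider


class MetadataSpider(Spider):
    name = 'metadata'
    start_urls = ['https://example.com']

    async def parse(self, response):
        # Await any asynchronous work (a Deferred, Future or other awaitable).
        extra = await db.get_some_data(response.url)  # hypothetical awaitable
        # Return an iterable instead of using yield, so the callback remains
        # a plain coroutine rather than an asynchronous generator.
        return [{'url': response.url, 'extra': extra}]
```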
Usage
There are several use cases for coroutines in Scrapy. Code that would return Deferreds when written for previous Scrapy versions, such as downloader middlewares and signal handlers, can be rewritten to be shorter and cleaner:
```python
class DbPipeline:
    def _update_item(self, data, item):
        item['field'] = data
        return item

    def process_item(self, item, spider):
        dfd = db.get_some_data(item['id'])
        dfd.addCallback(self._update_item, item)
        return dfd
```
becomes:
```python
class DbPipeline:
    async def process_item(self, item, spider):
        item['field'] = await db.get_some_data(item['id'])
        return item
```
Coroutines may be used to call asynchronous code. This includes other coroutines, functions that return Deferreds and functions that return awaitable objects such as Future. This means you can use many useful Python libraries providing such code:
```python
class MySpider(Spider):
    # ...
    async def parse_with_deferred(self, response):
        additional_response = await treq.get('https://additional.url')
        additional_data = await treq.content(additional_response)
        # ... use response and additional_data to yield items and requests

    async def parse_with_asyncio(self, response):
        async with aiohttp.ClientSession() as session:
            async with session.get('https://additional.url') as additional_response:
                additional_data = await additional_response.text()
        # ... use response and additional_data to yield items and requests
```
Note
Many libraries that use coroutines, such as aio-libs, require the asyncio loop, and to use them you need to enable asyncio support in Scrapy.
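For reference, asyncio support is enabled by selecting the asyncio-based Twisted reactor in the project settings (see the asyncio documentation page for details and additional requirements):

```python
# settings.py -- switch Scrapy to the asyncio-based Twisted reactor so that
# asyncio libraries such as aiohttp can share the event loop with Scrapy.
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
```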
Common use cases for asynchronous code include:
- requesting data from websites, databases and other services (in callbacks, pipelines and middlewares);
- storing data in databases (in pipelines and middlewares);
- delaying the spider initialization until some external event (in the spider_opened handler; see the sketch after this list);
- calling asynchronous Scrapy methods like ExecutionEngine.download (see the screenshot pipeline example).
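As an illustration of the spider_opened case, a signal handler defined with async def can await an external condition before the crawl proceeds. This is only a sketch, assuming that a coroutine handler is accepted wherever a Deferred-returning handler was previously accepted; wait_until_ready() is a hypothetical awaitable standing in for whatever external event you need to wait for:

```python
from scrapy import Spider, signals


class DelayedStartSpider(Spider):
    name = 'delayed_start'
    start_urls = ['https://example.com']

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # Connect a coroutine handler to the spider_opened signal.
        crawler.signals.connect(spider.handle_spider_opened,
                                signal=signals.spider_opened)
        return spider

    async def handle_spider_opened(self, spider):
        # The crawl does not proceed until this coroutine finishes.
        await wait_until_ready()  # hypothetical awaitable
```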