Coroutines
Added in version 2.0.
Scrapy supports the coroutine syntax (i.e. async def).
Supported callables
The following callables may be defined as coroutines using async def, and hence use coroutine syntax (e.g. await, async for, async with):
- The start() spider method, which must be defined as an asynchronous generator.
- Request callbacks. If you are using any custom or third-party spider middleware, see Mixing synchronous and asynchronous spider middlewares.
  Changed in version 2.7: Output of async callbacks is now processed asynchronously instead of collecting all of it first.
- The process_item() method of item pipelines.
- The process_request(), process_response(), and process_exception() methods of downloader middlewares.
- The process_spider_output() method of spider middlewares. If defined as a coroutine, it must be an asynchronous generator. The input result parameter is an asynchronous iterable. See also Mixing synchronous and asynchronous spider middlewares and Universal spider middlewares.
  Added in version 2.7.
- The process_start() method of spider middlewares, which must be defined as an asynchronous generator.
  Added in version 2.13.
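For illustration, here is a minimal sketch of a spider that defines two of these callables as coroutines: start() as an asynchronous generator and an async Request callback. The spider name and URL are placeholders:

from scrapy import Request, Spider


class AsyncCallablesSpider(Spider):
    name = "async_callables"

    async def start(self):
        # start() must be an asynchronous generator that yields requests (or items).
        yield Request("https://example.org", callback=self.parse)

    async def parse(self, response, **kwargs):
        # An async callback can await other coroutines before yielding its output.
        yield {"title": response.css("title::text").get()}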
Using Deferred-based APIs
In addition to native coroutine APIs, Scrapy has some APIs that return a Deferred object or take a user-supplied function that returns a Deferred object. These APIs are also asynchronous but don't yet support native async def syntax. In the future we plan to add support for the async def syntax to these APIs, or to replace them with other APIs where changing the existing ones isn't possible.
The following Scrapy methods return Deferred objects (this list is not complete, as it only includes methods that we think may be useful for user code):

- scrapy.crawler.CrawlerRunner (also inherited by scrapy.crawler.CrawlerProcess)
- scrapy.core.engine.ExecutionEngine: download()
The following user-supplied methods can return Deferred objects (the methods that can also return coroutines are listed in Supported callables):

- Custom download handlers (see DOWNLOAD_HANDLERS): download_request(), close()
- Custom downloader implementations (see DOWNLOADER): fetch()
- Custom scheduler implementations (see SCHEDULER)
- Custom dupefilters (see DUPEFILTER_CLASS): open(), close()
- Custom feed storages (see FEED_STORAGES): store()
- Subclasses of scrapy.pipelines.media.MediaPipeline: media_to_download(), item_completed()
- Custom storages used by subclasses of scrapy.pipelines.files.FilesPipeline: persist_file(), stat_file()
In most cases you can use these APIs in code that otherwise uses coroutines, by wrapping a Deferred object into a Future object or vice versa. See Integrating Deferred code and asyncio code for more information about this.
For example:
- The ExecutionEngine.download() method returns a Deferred object that fires with the downloaded response. You can use this object directly in Deferred-based code or convert it into a Future object with maybe_deferred_to_future().
- A custom download handler needs to define a download_request() method that returns a Deferred object. You can write a method that works with Deferreds and returns one directly, or you can write a coroutine and convert it into a function that returns a Deferred with deferred_f_from_coro_f(), as sketched below.
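Here is a minimal sketch of the second case: a custom download handler whose download_request() is written as a coroutine and wrapped with deferred_f_from_coro_f() so that callers receive a Deferred. The handler class name and the awaited fetching helper are illustrative, not part of Scrapy:

from scrapy.utils.defer import deferred_f_from_coro_f


class ExampleDownloadHandler:
    @deferred_f_from_coro_f
    async def download_request(self, request, spider):
        # Fetch the response with any awaitable API; fetch_with_async_client()
        # is a hypothetical helper standing in for your own code.
        return await fetch_with_async_client(request)

    def close(self):
        pass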
General usage
There are several use cases for coroutines in Scrapy.
Code that would return Deferreds when written for previous Scrapy versions, such as downloader middlewares and signal handlers, can be rewritten to be shorter and cleaner:
from itemadapter import ItemAdapter


class DbPipeline:
    def _update_item(self, data, item):
        adapter = ItemAdapter(item)
        adapter["field"] = data
        return item

    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        dfd = db.get_some_data(adapter["id"])
        dfd.addCallback(self._update_item, item)
        return dfd
becomes:
from itemadapter import ItemAdapter


class DbPipeline:
    async def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        adapter["field"] = await db.get_some_data(adapter["id"])
        return item
Coroutines may be used to call asynchronous code. This includes other coroutines, functions that return Deferreds, and functions that return awaitable objects such as Future. This means you can use many useful Python libraries providing such code:
import aiohttp
import treq
from scrapy import Spider


class MySpiderDeferred(Spider):
    # ...
    async def parse(self, response):
        additional_response = await treq.get("https://additional.url")
        additional_data = await treq.content(additional_response)
        # ... use response and additional_data to yield items and requests


class MySpiderAsyncio(Spider):
    # ...
    async def parse(self, response):
        async with aiohttp.ClientSession() as session:
            async with session.get("https://additional.url") as additional_response:
                additional_data = await additional_response.text()
        # ... use response and additional_data to yield items and requests
Note
Many libraries that use coroutines, such as aio-libs, require the asyncio loop, and to use them you need to enable asyncio support in Scrapy.
Note
If you want to await on Deferreds while using the asyncio reactor, you need to wrap them.
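For example, under the asyncio reactor you can wrap a Deferred with maybe_deferred_to_future() before awaiting it; the Deferred-producing call below is a hypothetical placeholder:

from scrapy.utils.defer import maybe_deferred_to_future


# Inside a coroutine (e.g. a callback); get_some_deferred() stands in for any
# API that returns a Deferred.
async def handle(self, response):
    result = await maybe_deferred_to_future(get_some_deferred())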
Common use cases for asynchronous code include:

- requesting data from websites, databases and other services (in start(), callbacks, pipelines and middlewares);
- storing data in databases (in pipelines and middlewares);
- delaying the spider initialization until some external event (in the spider_opened handler; see the sketch after this list);
- calling asynchronous Scrapy methods like ExecutionEngine.download() (see the screenshot pipeline example).
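As a minimal sketch of the spider_opened case, the handler below awaits an external resource before the crawl proceeds. The database client and its connect coroutine are hypothetical, and this assumes the spider_opened signal accepts handlers that return awaitables:

from scrapy import Spider, signals


class DelayedInitSpider(Spider):
    name = "delayed_init"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.spider_opened, signal=signals.spider_opened)
        return spider

    async def spider_opened(self, spider):
        # connect_to_database() is a hypothetical coroutine; per the use case
        # above, the crawl is delayed until this handler finishes.
        self.db = await connect_to_database()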
Inline requests
The spider below shows how to send a request and await its response all from within a spider callback:
from scrapy import Spider, Request
from scrapy.utils.defer import maybe_deferred_to_future


class SingleRequestSpider(Spider):
    name = "single"
    start_urls = ["https://example.org/product"]

    async def parse(self, response, **kwargs):
        additional_request = Request("https://example.org/price")
        deferred = self.crawler.engine.download(additional_request)
        additional_response = await maybe_deferred_to_future(deferred)
        yield {
            "h1": response.css("h1").get(),
            "price": additional_response.css("#price").get(),
        }
You can also send multiple requests in parallel:
from scrapy import Spider, Request
from scrapy.utils.defer import maybe_deferred_to_future
from twisted.internet.defer import DeferredList


class MultipleRequestsSpider(Spider):
    name = "multiple"
    start_urls = ["https://example.com/product"]

    async def parse(self, response, **kwargs):
        additional_requests = [
            Request("https://example.com/price"),
            Request("https://example.com/color"),
        ]
        deferreds = []
        for r in additional_requests:
            deferred = self.crawler.engine.download(r)
            deferreds.append(deferred)
        responses = await maybe_deferred_to_future(DeferredList(deferreds))
        yield {
            "h1": response.css("h1::text").get(),
            "price": responses[0][1].css(".price::text").get(),
            "price2": responses[1][1].css(".color::text").get(),
        }
Mixing synchronous and asynchronous spider middlewares
Added in version 2.7.
The output of a Request callback is passed as the result parameter to the process_spider_output() method of the first spider middleware from the list of active spider middlewares. Then the output of that process_spider_output method is passed to the process_spider_output method of the next spider middleware, and so on for every active spider middleware.
Scrapy supports mixing coroutine methods and synchronous methods in this chain of calls.
However, if any of the process_spider_output methods is defined as a synchronous method, and the previous Request callback or process_spider_output method is a coroutine, there are some drawbacks to the asynchronous-to-synchronous conversion that Scrapy does so that the synchronous process_spider_output method gets a synchronous iterable as its result parameter:
- The whole output of the previous Request callback or process_spider_output method is awaited at this point.
- If an exception is raised while awaiting the output of the previous Request callback or process_spider_output method, none of that output will be processed. This contrasts with the regular behavior, where all items yielded before an exception are processed.
Asynchronous-to-synchronous conversions are supported for backward compatibility, but they are deprecated and will stop working in a future version of Scrapy.
To avoid asynchronous-to-synchronous conversions, when defining Request callbacks as coroutine methods or when using spider middlewares whose process_spider_output method is an asynchronous generator, all active spider middlewares must either have their process_spider_output method defined as an asynchronous generator or define a process_spider_output_async method.
For middleware users
If you have asynchronous callbacks or use asynchronous-only spider middlewares you should make sure the asynchronous-to-synchronous conversions described above don’t happen. To do this, make sure all spider middlewares you use support asynchronous spider output. Even if you don’t have asynchronous callbacks and don’t use asynchronous-only spider middlewares in your project, it’s still a good idea to make sure all middlewares you use support asynchronous spider output, so that it will be easy to start using asynchronous callbacks in the future. Because of this, Scrapy logs a warning when it detects a synchronous-only spider middleware.
If you want to update middlewares you wrote, see the following section. If you have third-party middlewares that aren't yet updated by their authors, you can subclass them to make them universal and use the subclasses in your projects.
Universal spider middlewares
Added in version 2.7.
To allow writing a spider middleware that supports asynchronous execution of its process_spider_output method in Scrapy 2.7 and later (avoiding asynchronous-to-synchronous conversions), while maintaining support for older Scrapy versions, you may define process_spider_output as a synchronous method and define an asynchronous generator version of that method with an alternative name: process_spider_output_async.
For example:
class UniversalSpiderMiddleware:
    def process_spider_output(self, response, result, spider):
        for r in result:
            # ... do something with r
            yield r

    async def process_spider_output_async(self, response, result, spider):
        async for r in result:
            # ... do something with r
            yield r
Note
This is an interim measure to allow, for a time, writing code that works in Scrapy 2.7 and later without requiring asynchronous-to-synchronous conversions, and that also works in earlier Scrapy versions. In some future version of Scrapy, however, this feature will be deprecated and, eventually, in a later version of Scrapy, this feature will be removed, and all spider middlewares will be expected to define their process_spider_output method as an asynchronous generator.
Since 2.13.0, Scrapy provides a base class, BaseSpiderMiddleware, which implements the process_spider_output() and process_spider_output_async() methods, so instead of duplicating the processing code you can override the get_processed_request() and/or the get_processed_item() method.
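For instance, here is a minimal sketch of a middleware built on BaseSpiderMiddleware. The import path and the (object, response) hook signatures shown are assumptions to verify against your Scrapy version; only the overridden hook carries custom logic, while the base class supplies both the synchronous and asynchronous process_spider_output variants:

from scrapy.spidermiddlewares.base import BaseSpiderMiddleware


class AddSourceUrlMiddleware(BaseSpiderMiddleware):
    def get_processed_item(self, item, response):
        # Called for each item yielded by the callback (dict items assumed here);
        # return the (possibly modified) item. Returning None is assumed to drop it.
        item["source_url"] = response.url
        return item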