Spider Middleware
The spider middleware is a framework of hooks into Scrapy’s spider processing mechanism where you can plug custom functionality to process the responses that are sent to spiders and the requests and items that spiders generate.
Activating a spider middleware
To activate a spider middleware component, add it to the SPIDER_MIDDLEWARES setting, which is a dict whose keys are the middleware class paths and whose values are the middleware orders.
Here’s an example:
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.CustomSpiderMiddleware': 543,
}
The SPIDER_MIDDLEWARES setting is merged with the SPIDER_MIDDLEWARES_BASE setting defined in Scrapy (and not meant to be overridden) and then sorted by order to get the final sorted list of enabled middlewares: the first middleware is the one closer to the engine and the last is the one closer to the spider. In other words, the process_spider_input() method of each middleware will be invoked in increasing middleware order (100, 200, 300, …), and the process_spider_output() method of each middleware will be invoked in decreasing order.
To decide which order to assign to your middleware, see the SPIDER_MIDDLEWARES_BASE setting and pick a value according to where you want to insert the middleware. The order does matter because each middleware performs a different action and your middleware could depend on some previous (or subsequent) middleware being applied.
If you want to disable a built-in middleware (the ones defined in SPIDER_MIDDLEWARES_BASE, and enabled by default) you must define it in your project SPIDER_MIDDLEWARES setting and assign None as its value. For example, if you want to disable the off-site middleware:
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.CustomSpiderMiddleware': 543,
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': None,
}
Finally, keep in mind that some middlewares may need to be enabled through a particular setting. See each middleware documentation for more info.
Writing your own spider middleware
Each spider middleware is a Python class that defines one or more of the methods defined below.
The main entry point is the from_crawler class method, which receives a Crawler instance. The Crawler object gives you access, for example, to the settings.
- class scrapy.spidermiddlewares.SpiderMiddleware
- process_spider_input(response, spider)
This method is called for each response that goes through the spider middleware and into the spider, for processing.
process_spider_input() should return None or raise an exception.
If it returns None, Scrapy will continue processing this response, executing all other middlewares until, finally, the response is handed to the spider for processing.
If it raises an exception, Scrapy won’t bother calling any other spider middleware process_spider_input() and will call the request errback if there is one, otherwise it will start the process_spider_exception() chain. The output of the errback is chained back in the other direction for process_spider_output() to process it, or process_spider_exception() if it raised an exception.
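As a minimal sketch (the middleware name and the blocked-page check are hypothetical, not part of Scrapy), a middleware could inspect each incoming response and raise for the ones it considers unusable:
from scrapy.exceptions import IgnoreRequest

class BlockDetectionMiddleware:
    """Hypothetical middleware that rejects responses which look blocked."""

    def process_spider_input(self, response, spider):
        if b'Access denied' not in response.body:
            # Returning None lets the response continue towards the spider.
            return None
        # Raising skips the spider and triggers the request errback,
        # or the process_spider_exception() chain if there is no errback.
        raise IgnoreRequest('Response %s looks blocked' % response.url)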
- process_spider_output(response, result, spider)
This method is called with the results returned from the Spider, after it has processed the response.
process_spider_output() must return an iterable of Request, dict or Item objects.
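For example, a sketch (the middleware name and filtering rule are made up for illustration) that passes items through unchanged but drops requests whose URL matches an unwanted pattern:
import scrapy

class FilterOutputMiddleware:
    """Hypothetical middleware that filters what the spider yields."""

    def process_spider_output(self, response, result, spider):
        for element in result:
            # Drop requests to URLs we never want to follow; pass
            # everything else (items, dicts, other requests) through.
            if isinstance(element, scrapy.Request) and 'logout' in element.url:
                spider.logger.debug('Dropping request to %s', element.url)
                continue
            yield element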
- process_spider_exception(response, exception, spider)
This method is called when a spider or process_spider_output() method (from a previous spider middleware) raises an exception.
process_spider_exception() should return either None or an iterable of Request, dict or Item objects.
If it returns None, Scrapy will continue processing this exception, executing any other process_spider_exception() in the following middleware components, until no middleware components are left and the exception reaches the engine (where it’s logged and discarded).
If it returns an iterable, the process_spider_output() pipeline kicks in, starting from the next spider middleware, and no other process_spider_exception() will be called.
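A sketch of the second case (the middleware name and the error-item shape are illustrative assumptions): turning an exception into a regular item keeps the crawl going while still recording the failure:
class ExceptionFallbackMiddleware:
    """Hypothetical middleware that converts spider errors into items."""

    def process_spider_exception(self, response, exception, spider):
        spider.logger.warning('Error processing %s: %r', response.url, exception)
        # Returning an iterable stops the exception chain here and feeds
        # these results into the remaining process_spider_output() methods.
        return [{'url': response.url, 'error': repr(exception)}]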
- process_start_requests(start_requests, spider)
New in version 0.15.
This method is called with the start requests of the spider, and works similarly to the process_spider_output() method, except that it doesn’t have a response associated and must return only requests (not items).
It receives an iterable (in the start_requests parameter) and must return another iterable of Request objects.
Note
When implementing this method in your spider middleware, you should always return an iterable (that follows the input one) and not consume the whole start_requests iterator, because it can be very large (or even unbounded) and cause a memory overflow. The Scrapy engine is designed to pull start requests while it has capacity to process them, so the start requests iterator can be effectively endless where there is some other condition for stopping the spider (like a time limit or item/page count).
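A lazy generator is the natural way to honor that note; here is a sketch (the middleware name and meta key are hypothetical) that tags each start request without ever materializing the input iterable:
class TagStartRequestsMiddleware:
    """Hypothetical middleware that annotates start requests lazily."""

    def process_start_requests(self, start_requests, spider):
        # Iterating with a generator keeps memory bounded even if the
        # start_requests iterable is effectively endless.
        for request in start_requests:
            request.meta['is_start_request'] = True
            yield request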
- from_crawler(cls, crawler)
If present, this classmethod is called to create a middleware instance from a Crawler. It must return a new instance of the middleware. The Crawler object provides access to all Scrapy core components, like settings and signals; it is a way for the middleware to access them and hook its functionality into Scrapy.
Parameters:
crawler (Crawler object) – crawler that uses this middleware
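For instance, a sketch of a middleware configured from the crawler settings (MYMIDDLEWARE_MAXSIZE is a made-up setting name, not a real Scrapy setting):
class SettingsAwareMiddleware:
    """Hypothetical middleware built from crawler settings."""

    def __init__(self, maxsize):
        self.maxsize = maxsize

    @classmethod
    def from_crawler(cls, crawler):
        # Read a (hypothetical) setting, falling back to a default of 100.
        return cls(maxsize=crawler.settings.getint('MYMIDDLEWARE_MAXSIZE', 100))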
Built-in spider middleware reference
This page describes all spider middleware components that come with Scrapy. For information on how to use them and how to write your own spider middleware, see the spider middleware usage guide.
For a list of the components enabled by default (and their orders) see the SPIDER_MIDDLEWARES_BASE setting.
DepthMiddleware
- class scrapy.spidermiddlewares.depth.DepthMiddleware
DepthMiddleware is used for tracking the depth of each Request inside the site being scraped. It works by setting request.meta['depth'] = 0 whenever there is no value previously set (usually just the first Request) and incrementing it by 1 otherwise.
It can be used to limit the maximum depth to scrape, control Request priority based on their depth, and things like that.
The DepthMiddleware can be configured through the following settings (see the settings documentation for more info):
DEPTH_LIMIT - The maximum depth that will be allowed to crawl for any site. If zero, no limit will be imposed.
DEPTH_STATS_VERBOSE - Whether to collect the number of requests for each depth.
DEPTH_PRIORITY - Whether to prioritize the requests based on their depth.
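For example, a settings.py sketch (the values are illustrative, not recommendations):
# settings.py
DEPTH_LIMIT = 3             # ignore requests more than 3 links deep
DEPTH_STATS_VERBOSE = True  # collect request counts per depth in the stats
DEPTH_PRIORITY = 1          # a positive value lowers priority as depth grows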
HttpErrorMiddleware
- class scrapy.spidermiddlewares.httperror.HttpErrorMiddleware
Filter out unsuccessful (erroneous) HTTP responses so that spiders don’t have to deal with them, which (most of the time) imposes an overhead, consumes more resources, and makes the spider logic more complex.
According to the HTTP standard, successful responses are those whose status codes are in the 200-300 range.
If you still want to process response codes outside that range, you can specify which response codes the spider is able to handle using the handle_httpstatus_list spider attribute or the HTTPERROR_ALLOWED_CODES setting.
For example, if you want your spider to handle 404 responses you can do this:
class MySpider(CrawlSpider):
    handle_httpstatus_list = [404]
The handle_httpstatus_list key of Request.meta can also be used to specify which response codes to allow on a per-request basis. You can also set the meta key handle_httpstatus_all to True if you want to allow any response code for a request.
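As a sketch (the URLs are placeholders), both per-request forms look like this:
import scrapy

# Allow a 404 response for this request only:
request = scrapy.Request(
    'http://www.example.com/maybe-missing',
    meta={'handle_httpstatus_list': [404]},
)

# Or let this request through regardless of status code:
request = scrapy.Request(
    'http://www.example.com/anything',
    meta={'handle_httpstatus_all': True},
)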
Keep in mind, however, that it’s usually a bad idea to handle non-200 responses, unless you really know what you’re doing.
For more information see: HTTP Status Code Definitions.
HttpErrorMiddleware settings
HTTPERROR_ALLOWED_CODES
Default: []
Pass all responses with non-200 status codes contained in this list.
HTTPERROR_ALLOW_ALL
Default: False
Pass all responses, regardless of their status code.
OffsiteMiddleware
- class scrapy.spidermiddlewares.offsite.OffsiteMiddleware
Filters out Requests for URLs outside the domains covered by the spider.
This middleware filters out every request whose host name isn’t in the spider’s allowed_domains attribute. All subdomains of any domain in the list are also allowed. E.g. the rule www.example.org will also allow bob.www.example.org but not www2.example.com nor example.com.
When your spider returns a request for a domain not belonging to those covered by the spider, this middleware will log a debug message similar to this one:
DEBUG: Filtered offsite request to 'www.othersite.com': <GET http://www.othersite.com/some/page.html>
To avoid filling the log with too much noise, it will only print one of these messages for each new domain filtered. So, for example, if another request for www.othersite.com is filtered, no log message will be printed. But if a request for someothersite.com is filtered, a message will be printed (but only for the first request filtered).
If the spider doesn’t define an allowed_domains attribute, or the attribute is empty, the offsite middleware will allow all requests.
If the request has the dont_filter attribute set, the offsite middleware will allow the request even if its domain is not listed in allowed domains.
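From the spider’s side, all of this is driven by the allowed_domains attribute; a minimal sketch (the spider name and domain are placeholders):
import scrapy

class MySpider(scrapy.Spider):
    name = 'example'
    # Requests to example.org and its subdomains are allowed;
    # requests to any other host are filtered out as offsite.
    allowed_domains = ['example.org']
    start_urls = ['http://www.example.org/']

    def parse(self, response):
        # Offsite links yielded from here would be silently filtered.
        for href in response.css('a::attr(href)').extract():
            yield response.follow(href, callback=self.parse)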
RefererMiddleware
- class scrapy.spidermiddlewares.referer.RefererMiddleware
Populates Request Referer header, based on the URL of the Response which generated it.
RefererMiddleware settings
REFERER_ENABLED
New in version 0.15.
Default: True
Whether to enable referer middleware.
REFERRER_POLICY
New in version 1.4.
Default: 'scrapy.spidermiddlewares.referer.DefaultReferrerPolicy'
Referrer Policy to apply when populating Request “Referer” header.
Note
You can also set the Referrer Policy per request, using the special "referrer_policy" Request.meta key, with the same acceptable values as for the REFERRER_POLICY setting.
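For example, both forms side by side (the URL is a placeholder):
# In settings.py, project-wide:
REFERRER_POLICY = 'same-origin'

# Or per request, via the "referrer_policy" meta key:
import scrapy

request = scrapy.Request(
    'http://www.example.com/some/page',
    meta={'referrer_policy': 'same-origin'},
)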
Acceptable values for REFERRER_POLICY
- either a path to a scrapy.spidermiddlewares.referer.ReferrerPolicy subclass (a custom policy or one of the built-in ones, see classes below),
- or one of the standard W3C-defined string values,
- or the special "scrapy-default".
String value | Class name (as a string)
---|---
"scrapy-default" (default) | scrapy.spidermiddlewares.referer.DefaultReferrerPolicy
"no-referrer" | scrapy.spidermiddlewares.referer.NoReferrerPolicy
"no-referrer-when-downgrade" | scrapy.spidermiddlewares.referer.NoReferrerWhenDowngradePolicy
"same-origin" | scrapy.spidermiddlewares.referer.SameOriginPolicy
"origin" | scrapy.spidermiddlewares.referer.OriginPolicy
"strict-origin" | scrapy.spidermiddlewares.referer.StrictOriginPolicy
"origin-when-cross-origin" | scrapy.spidermiddlewares.referer.OriginWhenCrossOriginPolicy
"strict-origin-when-cross-origin" | scrapy.spidermiddlewares.referer.StrictOriginWhenCrossOriginPolicy
"unsafe-url" | scrapy.spidermiddlewares.referer.UnsafeUrlPolicy
- class scrapy.spidermiddlewares.referer.DefaultReferrerPolicy
A variant of “no-referrer-when-downgrade”, with the addition that “Referer” is not sent if the parent request was using file:// or s3:// scheme.
Warning
Scrapy’s default referrer policy, just like “no-referrer-when-downgrade” (the W3C-recommended value for browsers), will send a non-empty “Referer” header from any http(s):// to any https:// URL, even if the domain is different.
“same-origin” may be a better choice if you want to remove referrer information for cross-domain requests.
- class scrapy.spidermiddlewares.referer.NoReferrerPolicy
https://www.w3.org/TR/referrer-policy/#referrer-policy-no-referrer
The simplest policy is “no-referrer”, which specifies that no referrer information is to be sent along with requests made from a particular request client to any origin. The header will be omitted entirely.
- class scrapy.spidermiddlewares.referer.NoReferrerWhenDowngradePolicy
https://www.w3.org/TR/referrer-policy/#referrer-policy-no-referrer-when-downgrade
The “no-referrer-when-downgrade” policy sends a full URL along with requests from a TLS-protected environment settings object to a potentially trustworthy URL, and requests from clients which are not TLS-protected to any origin.
Requests from TLS-protected clients to non-potentially trustworthy URLs, on the other hand, will contain no referrer information. A Referer HTTP header will not be sent.
This is a user agent’s default behavior, if no policy is otherwise specified.
Note
“no-referrer-when-downgrade” policy is the W3C-recommended default, and is used by major web browsers. However, it is NOT Scrapy’s default referrer policy (see DefaultReferrerPolicy).
- class scrapy.spidermiddlewares.referer.SameOriginPolicy
https://www.w3.org/TR/referrer-policy/#referrer-policy-same-origin
The “same-origin” policy specifies that a full URL, stripped for use as a referrer, is sent as referrer information when making same-origin requests from a particular request client.
Cross-origin requests, on the other hand, will contain no referrer information. A Referer HTTP header will not be sent.
- class scrapy.spidermiddlewares.referer.OriginPolicy
https://www.w3.org/TR/referrer-policy/#referrer-policy-origin
The “origin” policy specifies that only the ASCII serialization of the origin of the request client is sent as referrer information when making both same-origin requests and cross-origin requests from a particular request client.
- class scrapy.spidermiddlewares.referer.StrictOriginPolicy
https://www.w3.org/TR/referrer-policy/#referrer-policy-strict-origin
The “strict-origin” policy sends the ASCII serialization of the origin of the request client when making requests:
- from a TLS-protected environment settings object to a potentially trustworthy URL, and
- from non-TLS-protected environment settings objects to any origin.
Requests from TLS-protected request clients to non-potentially trustworthy URLs, on the other hand, will contain no referrer information. A Referer HTTP header will not be sent.
- class scrapy.spidermiddlewares.referer.OriginWhenCrossOriginPolicy
https://www.w3.org/TR/referrer-policy/#referrer-policy-origin-when-cross-origin
The “origin-when-cross-origin” policy specifies that a full URL, stripped for use as a referrer, is sent as referrer information when making same-origin requests from a particular request client, and only the ASCII serialization of the origin of the request client is sent as referrer information when making cross-origin requests from a particular request client.
- class scrapy.spidermiddlewares.referer.StrictOriginWhenCrossOriginPolicy
https://www.w3.org/TR/referrer-policy/#referrer-policy-strict-origin-when-cross-origin
The “strict-origin-when-cross-origin” policy specifies that a full URL, stripped for use as a referrer, is sent as referrer information when making same-origin requests from a particular request client, and only the ASCII serialization of the origin of the request client when making cross-origin requests:
- from a TLS-protected environment settings object to a potentially trustworthy URL, and
- from non-TLS-protected environment settings objects to any origin.
Requests from TLS-protected clients to non-potentially trustworthy URLs, on the other hand, will contain no referrer information. A Referer HTTP header will not be sent.
- class scrapy.spidermiddlewares.referer.UnsafeUrlPolicy
https://www.w3.org/TR/referrer-policy/#referrer-policy-unsafe-url
The “unsafe-url” policy specifies that a full URL, stripped for use as a referrer, is sent along with both cross-origin requests and same-origin requests made from a particular request client.
Note: The policy’s name doesn’t lie; it is unsafe. This policy will leak origins and paths from TLS-protected resources to insecure origins. Carefully consider the impact of setting such a policy for potentially sensitive documents.
Warning
“unsafe-url” policy is NOT recommended.
UrlLengthMiddleware
- class scrapy.spidermiddlewares.urllength.UrlLengthMiddleware
Filters out requests with URLs longer than URLLENGTH_LIMIT.
The UrlLengthMiddleware can be configured through the following settings (see the settings documentation for more info):
URLLENGTH_LIMIT - The maximum URL length to allow for crawled URLs.
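As a settings.py sketch (the value is illustrative; check the settings documentation for the actual default):
# settings.py
URLLENGTH_LIMIT = 2083  # requests with longer URLs are filtered out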