Settings
The Scrapy settings allow you to customize the behaviour of all Scrapy components, including the core, extensions, pipelines and spiders themselves.
The infrastructure of the settings provides a global namespace of key-value mappings that the code can use to pull configuration values from. The settings can be populated through different mechanisms, which are described below.
The settings are also the mechanism for selecting the currently active Scrapy project (in case you have many).
For a list of available built-in settings see: Built-in settings reference.
Designating the settings
When you use Scrapy, you have to tell it which settings you’re using. You can do this by using an environment variable, SCRAPY_SETTINGS_MODULE.
The value of SCRAPY_SETTINGS_MODULE should be in Python path syntax, e.g. myproject.settings. Note that the settings module should be on the Python import search path.
Populating the settings
Settings can be populated using different mechanisms, each of which has a different precedence:
Command-line settings (highest precedence)
Spider settings
Project settings
Add-on settings
Command-specific default settings
Global default settings (lowest precedence)
1. Command-line settings
Settings set in the command line have the highest precedence, overriding any other settings.
You can explicitly override one or more settings using the -s (or --set) command-line option.
Example:
scrapy crawl myspider -s LOG_LEVEL=INFO -s LOG_FILE=scrapy.log
2. Spider settings
Spiders can define their own settings that will take precedence and override the project ones.
Note
Pre-crawler settings cannot be defined per spider, and reactor settings should not have a different value per spider when running multiple spiders in the same process.
One way to do so is by setting their custom_settings attribute:
import scrapy


class MySpider(scrapy.Spider):
    name = "myspider"

    custom_settings = {
        "SOME_SETTING": "some value",
    }
It’s often better to implement update_settings() instead, and settings set there should use the "spider" priority explicitly:
import scrapy


class MySpider(scrapy.Spider):
    name = "myspider"

    @classmethod
    def update_settings(cls, settings):
        super().update_settings(settings)
        settings.set("SOME_SETTING", "some value", priority="spider")
Added in version 2.11.
It’s also possible to modify the settings in the from_crawler() method, e.g. based on spider arguments or other logic:
import scrapy


class MySpider(scrapy.Spider):
    name = "myspider"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        if "some_argument" in kwargs:
            spider.settings.set(
                "SOME_SETTING", kwargs["some_argument"], priority="spider"
            )
        return spider
3. Project settings
Scrapy projects include a settings module, usually a file called settings.py, where you should populate most settings that apply to all your spiders.
4. Add-on settings
Add-ons can modify settings. They should do this with "addon" priority where possible.
5. Command-specific default settings
Each Scrapy command can have its own default settings, which override the global default settings.
Those command-specific default settings are specified in the default_settings attribute of each command class.
6. Default global settings
The scrapy.settings.default_settings module defines global default values for some built-in settings.
Note
startproject generates a settings.py file that sets some settings to different values.
The reference documentation of settings indicates the default value if one exists. If startproject sets a value, that value is documented as default, and the value from scrapy.settings.default_settings is documented as “fallback”.
Compatibility with pickle
Setting values must be picklable.
Import paths and classes
Added in version 2.4.0.
When a setting references a callable object to be imported by Scrapy, such as a class or a function, there are two different ways you can specify that object:
As a string containing the import path of that object
As the object itself
For example:
from mybot.pipelines.validate import ValidateMyItem

ITEM_PIPELINES = {
    # passing the classname...
    ValidateMyItem: 300,
    # ...equals passing the class path
    "mybot.pipelines.validate.ValidateMyItem": 300,
}
Note
Passing non-callable objects is not supported.
How to access settings
In a spider, settings are available through self.settings:
class MySpider(scrapy.Spider):
    name = "myspider"
    start_urls = ["http://example.com"]

    def parse(self, response):
        print(f"Existing settings: {self.settings.attributes.keys()}")
Note
The settings attribute is set in the base Spider class after the spider is initialized. If you want to use settings before the initialization (e.g., in your spider’s __init__() method), you’ll need to override the from_crawler() method.
Components can also access settings.
The settings object can be used like a dict (e.g. settings["LOG_ENABLED"]). However, to support non-string setting values, which may be passed from the command line as strings, it is recommended to use one of the methods provided by the Settings API.
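For instance, a minimal sketch of reading typed values inside a spider callback (the spider and the fallback values below are only illustrative):
import scrapy


class MySpider(scrapy.Spider):
    name = "myspider"

    def parse(self, response):
        # Typed accessors convert values that arrive as strings (e.g. via -s on the command line)
        log_enabled = self.settings.getbool("LOG_ENABLED")
        concurrency = self.settings.getint("CONCURRENT_REQUESTS", 16)
        delay = self.settings.getfloat("DOWNLOAD_DELAY", 0.0)
        modules = self.settings.getlist("SPIDER_MODULES")
        self.logger.info("concurrency=%s delay=%s", concurrency, delay)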
Component priority dictionaries
A component priority dictionary is a dict where keys are components and values are component priorities. For example:
{
    "path.to.ComponentA": None,
    ComponentB: 100,
}
A component can be specified either as a class object or through an import path.
Warning
Component priority dictionaries are regular dict objects. Be careful not to define the same component more than once, e.g. with different import path strings or defining both an import path and a type object.
A priority can be an int or None.
A component with priority 1 goes before a component with priority 2. What going before entails, however, depends on the corresponding setting. For example, in the DOWNLOADER_MIDDLEWARES setting, components have their process_request() method executed before that of later components, but have their process_response() method executed after that of later components.
A component with priority None is disabled.
Some component priority dictionaries get merged with some built-in value. For example, DOWNLOADER_MIDDLEWARES is merged with DOWNLOADER_MIDDLEWARES_BASE. This is where None comes in handy, allowing you to disable a component from the base setting in the regular setting:
DOWNLOADER_MIDDLEWARES = {
    "scrapy.downloadermiddlewares.offsite.OffsiteMiddleware": None,
}
Special settings
The following settings work slightly differently than all other settings.
Pre-crawler settings
Pre-crawler settings are settings used before the Crawler object is created.
These settings cannot be set from a spider.
These settings are SPIDER_LOADER_CLASS and settings used by the corresponding component, e.g. SPIDER_MODULES and SPIDER_LOADER_WARN_ONLY for the default component.
Reactor settings
Reactor settings are settings tied to the Twisted reactor.
These settings can be defined from a spider. However, because only 1 reactor can be used per process, these settings cannot use a different value per spider when running multiple spiders in the same process.
In general, if different spiders define different values, the first defined value is used. However, if two spiders request a different reactor, an exception is raised.
These settings are:
DNS_RESOLVER and settings used by the corresponding component, e.g. DNSCACHE_ENABLED, DNSCACHE_SIZE and DNS_TIMEOUT for the default one.
ASYNCIO_EVENT_LOOP and TWISTED_REACTOR, which are used upon installing the reactor. The rest of the settings are applied when starting the reactor.
Built-in settings reference
Here’s a list of all available Scrapy settings, in alphabetical order, along with their default values and the scope where they apply.
The scope, where available, shows where the setting is being used, if it’s tied to any particular component. In that case the module of that component will be shown, typically an extension, middleware or pipeline. It also means that the component must be enabled in order for the setting to have any effect.
ADDONS
Default: {}
A dict containing paths to the add-ons enabled in your project and their priorities. For more information, see Add-ons.
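For example, a sketch of enabling an add-on (the import path here is hypothetical; the value is the add-on priority):
ADDONS = {
    "path.to.SomeAddon": 0,  # hypothetical add-on; replace with a real import path
}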
AWS_ACCESS_KEY_ID
Default: None
The AWS access key used by code that requires access to Amazon Web services, such as the S3 feed storage backend.
AWS_SECRET_ACCESS_KEY
Default: None
The AWS secret key used by code that requires access to Amazon Web services, such as the S3 feed storage backend.
AWS_SESSION_TOKEN
Default: None
The AWS security token used by code that requires access to Amazon Web services, such as the S3 feed storage backend, when using temporary security credentials.
AWS_ENDPOINT_URL
Default: None
Endpoint URL used for S3-like storage, for example Minio or s3.scality.
AWS_USE_SSL
Default: None
Use this option if you want to disable SSL connection for communication with S3 or S3-like storage. By default SSL will be used.
AWS_VERIFY
Default: None
Verify SSL connection between Scrapy and S3 or S3-like storage. By default SSL verification will occur.
AWS_REGION_NAME
Default: None
The name of the region associated with the AWS client.
ASYNCIO_EVENT_LOOP
Default: None
Import path of a given asyncio event loop class.
If the asyncio reactor is enabled (see TWISTED_REACTOR) this setting can be used to specify the asyncio event loop to be used with it. Set the setting to the import path of the desired asyncio event loop class. If the setting is set to None the default asyncio event loop will be used.
If you are installing the asyncio reactor manually using the install_reactor() function, you can use the event_loop_path parameter to indicate the import path of the event loop class to be used.
Note that the event loop class must inherit from asyncio.AbstractEventLoop.
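For example, a minimal sketch of installing the asyncio reactor manually together with a custom event loop class; uvloop.Loop is used purely as an illustration of an asyncio.AbstractEventLoop subclass:
from scrapy.utils.reactor import install_reactor

# Install the asyncio reactor and, as an illustration, the uvloop event loop class
install_reactor(
    "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
    "uvloop.Loop",
)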
Caution
Please be aware that, when using a non-default event loop (either defined via ASYNCIO_EVENT_LOOP or installed with install_reactor()), Scrapy will call asyncio.set_event_loop(), which will set the specified event loop as the current loop for the current OS thread.
BOT_NAME
Default: <project name> (fallback: 'scrapybot')
The name of the bot implemented by this Scrapy project (also known as the project name). This name will also be used for logging.
It’s automatically populated with your project name when you create your project with the startproject command.
CONCURRENT_ITEMS
Default: 100
Maximum number of concurrent items (per response) to process in parallel in item pipelines.
CONCURRENT_REQUESTS
Default: 16
The maximum number of concurrent (i.e. simultaneous) requests that will be performed by the Scrapy downloader.
CONCURRENT_REQUESTS_PER_DOMAIN
Default: 8
The maximum number of concurrent (i.e. simultaneous) requests that will be performed to any single domain.
See also: AutoThrottle extension and its AUTOTHROTTLE_TARGET_CONCURRENCY option.
CONCURRENT_REQUESTS_PER_IP
Default: 0
The maximum number of concurrent (i.e. simultaneous) requests that will be performed to any single IP. If non-zero, the CONCURRENT_REQUESTS_PER_DOMAIN setting is ignored, and this one is used instead. In other words, concurrency limits will be applied per IP, not per domain.
This setting also affects DOWNLOAD_DELAY and the AutoThrottle extension: if CONCURRENT_REQUESTS_PER_IP is non-zero, download delay is enforced per IP, not per domain.
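As an illustration, a settings.py sketch (the values are arbitrary examples) that enforces concurrency and delay per IP rather than per domain:
CONCURRENT_REQUESTS = 32
CONCURRENT_REQUESTS_PER_DOMAIN = 8  # ignored while CONCURRENT_REQUESTS_PER_IP is non-zero
CONCURRENT_REQUESTS_PER_IP = 4      # concurrency (and DOWNLOAD_DELAY) applied per IP
DOWNLOAD_DELAY = 1.0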
DEFAULT_DROPITEM_LOG_LEVEL
Default: "WARNING"
Default log level of messages about dropped items.
When an item is dropped by raising scrapy.exceptions.DropItem from the process_item() method of an item pipeline, a message is logged, and by default its log level is the one configured in this setting.
You may specify this log level as an integer (e.g. 20), as a log level constant (e.g. logging.INFO) or as a string with the name of a log level constant (e.g. "INFO").
When writing an item pipeline, you can force a different log level by setting scrapy.exceptions.DropItem.log_level in your scrapy.exceptions.DropItem exception. For example:
from scrapy.exceptions import DropItem


class MyPipeline:
    def process_item(self, item, spider):
        if not item.get("price"):
            raise DropItem("Missing price data", log_level="INFO")
        return item
DEFAULT_ITEM_CLASS
Default: 'scrapy.Item'
The default class that will be used for instantiating items in the Scrapy shell.
DEFAULT_REQUEST_HEADERS
Default:
{
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en",
}
The default headers used for Scrapy HTTP Requests. They’re populated in the DefaultHeadersMiddleware.
Caution
Cookies set via the Cookie header are not considered by the CookiesMiddleware. If you need to set cookies for a request, use the Request.cookies parameter. This is a known current limitation that is being worked on.
DEPTH_LIMIT
Default: 0
Scope: scrapy.spidermiddlewares.depth.DepthMiddleware
The maximum depth that will be allowed to crawl for any site. If zero, no limit will be imposed.
DEPTH_PRIORITY
Default: 0
Scope: scrapy.spidermiddlewares.depth.DepthMiddleware
An integer that is used to adjust the priority of a Request based on its depth.
The priority of a request is adjusted as follows:
request.priority = request.priority - (depth * DEPTH_PRIORITY)
As depth increases, positive values of DEPTH_PRIORITY decrease request priority (BFO), while negative values increase request priority (DFO). See also Does Scrapy crawl in breadth-first or depth-first order?
Note
This setting adjusts priority in the opposite way compared to other priority settings REDIRECT_PRIORITY_ADJUST and RETRY_PRIORITY_ADJUST.
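For example, a commonly used sketch for crawling in breadth-first order, combining a positive DEPTH_PRIORITY with the FIFO queue types documented under SCHEDULER_DISK_QUEUE and SCHEDULER_MEMORY_QUEUE below:
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = "scrapy.squeues.PickleFifoDiskQueue"
SCHEDULER_MEMORY_QUEUE = "scrapy.squeues.FifoMemoryQueue"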
DEPTH_STATS_VERBOSE
Default: False
Scope: scrapy.spidermiddlewares.depth.DepthMiddleware
Whether to collect verbose depth stats. If this is enabled, the number of requests for each depth is collected in the stats.
DNSCACHE_ENABLED
Default: True
Whether to enable DNS in-memory cache.
DNSCACHE_SIZE
Default: 10000
DNS in-memory cache size.
DNS_RESOLVER
Added in version 2.0.
Default: 'scrapy.resolver.CachingThreadedResolver'
The class to be used to resolve DNS names. The default scrapy.resolver.CachingThreadedResolver supports specifying a timeout for DNS requests via the DNS_TIMEOUT setting, but works only with IPv4 addresses. Scrapy provides an alternative resolver, scrapy.resolver.CachingHostnameResolver, which supports IPv4/IPv6 addresses but does not take the DNS_TIMEOUT setting into account.
DNS_TIMEOUT
Default: 60
Timeout for processing of DNS queries in seconds. Float is supported.
DOWNLOADER
Default: 'scrapy.core.downloader.Downloader'
The downloader to use for crawling.
DOWNLOADER_HTTPCLIENTFACTORY
Default: 'scrapy.core.downloader.webclient.ScrapyHTTPClientFactory'
Defines a Twisted protocol.ClientFactory class to use for HTTP/1.0 connections (for HTTP10DownloadHandler).
Note
HTTP/1.0 is rarely used nowadays and its Scrapy support is deprecated, so you can safely ignore this setting, unless you really want to use HTTP/1.0 and override DOWNLOAD_HANDLERS for the http(s) scheme accordingly, i.e. to 'scrapy.core.downloader.handlers.http.HTTP10DownloadHandler'.
DOWNLOADER_CLIENTCONTEXTFACTORY
Default: 'scrapy.core.downloader.contextfactory.ScrapyClientContextFactory'
Represents the classpath to the ContextFactory to use.
Here, “ContextFactory” is a Twisted term for SSL/TLS contexts, defining the TLS/SSL protocol version to use, whether to do certificate verification, or even enable client-side authentication (and various other things).
Note
Scrapy’s default context factory does NOT perform remote server certificate verification. This is usually fine for web scraping.
If you do need remote server certificate verification enabled, Scrapy also has another context factory class that you can set, 'scrapy.core.downloader.contextfactory.BrowserLikeContextFactory', which uses the platform’s certificates to validate remote endpoints.
If you do use a custom ContextFactory, make sure its __init__ method accepts a method parameter (this is the OpenSSL.SSL method mapping DOWNLOADER_CLIENT_TLS_METHOD), a tls_verbose_logging parameter (bool) and a tls_ciphers parameter (see DOWNLOADER_CLIENT_TLS_CIPHERS).
DOWNLOADER_CLIENT_TLS_CIPHERS
Default: 'DEFAULT'
Use this setting to customize the TLS/SSL ciphers used by the default HTTP/1.1 downloader.
The setting should contain a string in the OpenSSL cipher list format; these ciphers will be used as client ciphers. Changing this setting may be necessary to access certain HTTPS websites: for example, you may need to use 'DEFAULT:!DH' for a website with weak DH parameters or enable a specific cipher that is not included in DEFAULT if a website requires it.
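For example (the cipher string below is the one mentioned above and is only illustrative):
# Allow a site whose server uses weak DH parameters
DOWNLOADER_CLIENT_TLS_CIPHERS = "DEFAULT:!DH"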
DOWNLOADER_CLIENT_TLS_METHOD
Default: 'TLS'
Use this setting to customize the TLS/SSL method used by the default HTTP/1.1 downloader.
This setting must be one of these string values:
'TLS': maps to OpenSSL’s TLS_method() (a.k.a. SSLv23_method()), which allows protocol negotiation, starting from the highest supported by the platform; default, recommended
'TLSv1.0': this value forces HTTPS connections to use TLS version 1.0; set this if you want the behavior of Scrapy < 1.1
'TLSv1.1': forces TLS version 1.1
'TLSv1.2': forces TLS version 1.2
DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING
Default: False
Setting this to True will enable DEBUG level messages about TLS connection parameters after establishing HTTPS connections. The kind of information logged depends on the versions of OpenSSL and pyOpenSSL.
This setting is only used for the default DOWNLOADER_CLIENTCONTEXTFACTORY.
DOWNLOADER_MIDDLEWARES
Default: {}
A dict containing the downloader middlewares enabled in your project, and their orders. For more info see Activating a downloader middleware.
DOWNLOADER_MIDDLEWARES_BASE
Default:
{
    "scrapy.downloadermiddlewares.offsite.OffsiteMiddleware": 50,
    "scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware": 100,
    "scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware": 300,
    "scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware": 350,
    "scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware": 400,
    "scrapy.downloadermiddlewares.useragent.UserAgentMiddleware": 500,
    "scrapy.downloadermiddlewares.retry.RetryMiddleware": 550,
    "scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware": 560,
    "scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware": 580,
    "scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware": 590,
    "scrapy.downloadermiddlewares.redirect.RedirectMiddleware": 600,
    "scrapy.downloadermiddlewares.cookies.CookiesMiddleware": 700,
    "scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware": 750,
    "scrapy.downloadermiddlewares.stats.DownloaderStats": 850,
    "scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware": 900,
}
A dict containing the downloader middlewares enabled by default in Scrapy. Low orders are closer to the engine, high orders are closer to the downloader. You should never modify this setting in your project, modify DOWNLOADER_MIDDLEWARES instead. For more info see Activating a downloader middleware.
DOWNLOADER_STATS
Default: True
Whether to enable downloader stats collection.
DOWNLOAD_DELAY
Default: 0
Minimum seconds to wait between 2 consecutive requests to the same domain.
Use DOWNLOAD_DELAY to throttle your crawling speed, to avoid hitting servers too hard.
Decimal numbers are supported. For example, to send a maximum of 4 requests every 10 seconds:
DOWNLOAD_DELAY = 2.5
This setting is also affected by the RANDOMIZE_DOWNLOAD_DELAY setting, which is enabled by default.
When CONCURRENT_REQUESTS_PER_IP is non-zero, delays are enforced per IP address instead of per domain.
Note that DOWNLOAD_DELAY can lower the effective per-domain concurrency below CONCURRENT_REQUESTS_PER_DOMAIN. If the response time of a domain is lower than DOWNLOAD_DELAY, the effective concurrency for that domain is 1. When testing throttling configurations, it usually makes sense to lower CONCURRENT_REQUESTS_PER_DOMAIN first, and only increase DOWNLOAD_DELAY once CONCURRENT_REQUESTS_PER_DOMAIN is 1 but a higher throttling is desired.
Note
This delay can be set per spider using the download_delay spider attribute.
It is also possible to change this setting per domain, although it requires non-trivial code. See the implementation of the AutoThrottle extension for an example.
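For instance, a minimal sketch of the per-spider attribute mentioned in the note above (the spider is illustrative):
import scrapy


class SlowSpider(scrapy.Spider):
    name = "slowspider"
    download_delay = 5.0  # overrides DOWNLOAD_DELAY for this spider only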
DOWNLOAD_HANDLERS
Default: {}
A dict containing the request downloader handlers enabled in your project.
See DOWNLOAD_HANDLERS_BASE for example format.
DOWNLOAD_HANDLERS_BASE
Default:
{
    "data": "scrapy.core.downloader.handlers.datauri.DataURIDownloadHandler",
    "file": "scrapy.core.downloader.handlers.file.FileDownloadHandler",
    "http": "scrapy.core.downloader.handlers.http.HTTPDownloadHandler",
    "https": "scrapy.core.downloader.handlers.http.HTTPDownloadHandler",
    "s3": "scrapy.core.downloader.handlers.s3.S3DownloadHandler",
    "ftp": "scrapy.core.downloader.handlers.ftp.FTPDownloadHandler",
}
A dict containing the request download handlers enabled by default in Scrapy. You should never modify this setting in your project, modify DOWNLOAD_HANDLERS instead.
You can disable any of these download handlers by assigning None to their URI scheme in DOWNLOAD_HANDLERS. E.g., to disable the built-in FTP handler (without replacement), place this in your settings.py:
DOWNLOAD_HANDLERS = {
    "ftp": None,
}
The default HTTPS handler uses HTTP/1.1. To use HTTP/2:
Install Twisted[http2]>=17.9.0 to install the packages required to enable HTTP/2 support in Twisted.
Update DOWNLOAD_HANDLERS as follows:
DOWNLOAD_HANDLERS = {
    "https": "scrapy.core.downloader.handlers.http2.H2DownloadHandler",
}
Warning
HTTP/2 support in Scrapy is experimental, and not yet recommended for production environments. Future Scrapy versions may introduce related changes without a deprecation period or warning.
Note
Known limitations of the current HTTP/2 implementation of Scrapy include:
No support for HTTP/2 Cleartext (h2c), since no major browser supports HTTP/2 unencrypted (refer http2 faq).
No setting to specify a maximum frame size larger than the default value, 16384. Connections to servers that send a larger frame will fail.
No support for server pushes, which are ignored.
No support for the bytes_received and headers_received signals.
DOWNLOAD_SLOTS
Default: {}
Allows defining concurrency/delay parameters on a per-slot (per-domain) basis:
DOWNLOAD_SLOTS = {
    "quotes.toscrape.com": {"concurrency": 1, "delay": 2, "randomize_delay": False},
    "books.toscrape.com": {"delay": 3, "randomize_delay": False},
}
Note
For other downloader slots the default setting values will be used:
DOWNLOAD_DELAY: delay
CONCURRENT_REQUESTS_PER_DOMAIN: concurrency
RANDOMIZE_DOWNLOAD_DELAY: randomize_delay
DOWNLOAD_TIMEOUT
Default: 180
The amount of time (in secs) that the downloader will wait before timing out.
Note
This timeout can be set per spider using the download_timeout spider attribute and per-request using the download_timeout Request.meta key.
DOWNLOAD_MAXSIZE
Default: 1073741824 (1 GiB)
The maximum response body size (in bytes) allowed. Bigger responses are aborted and ignored.
This applies both before and after compression. If decompressing a response body would exceed this limit, decompression is aborted and the response is ignored.
Use 0 to disable this limit.
This limit can be set per spider using the download_maxsize spider attribute and per request using the download_maxsize Request.meta key.
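For instance, a sketch of a per-request override (the URL and size are examples):
import scrapy

# Allow up to 10 MiB for this particular request only
request = scrapy.Request(
    "https://example.com/large-page",
    meta={"download_maxsize": 10 * 1024 * 1024},
)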
DOWNLOAD_WARNSIZE
Default: 33554432 (32 MiB)
If the size of a response exceeds this value, before or after compression, a warning will be logged about it.
Use 0 to disable this limit.
This limit can be set per spider using the download_warnsize spider attribute and per request using the download_warnsize Request.meta key.
DOWNLOAD_FAIL_ON_DATALOSS
Default: True
Whether or not to fail on broken responses, that is, responses whose declared Content-Length does not match the content sent by the server, or chunked responses that were not properly finished. If True, these responses raise a ResponseFailed([_DataLoss]) error. If False, these responses are passed through and the flag dataloss is added to the response, i.e.: 'dataloss' in response.flags is True.
Optionally, this can be set on a per-request basis by setting the download_fail_on_dataloss Request.meta key to False.
Note
A broken response, or data loss error, may happen under several circumstances, from server misconfiguration to network errors to data corruption. It is up to the user to decide if it makes sense to process broken responses considering they may contain partial or incomplete content.
If RETRY_ENABLED is True and this setting is set to True, the ResponseFailed([_DataLoss]) failure will be retried as usual.
Warning
This setting is ignored by the H2DownloadHandler download handler (see DOWNLOAD_HANDLERS). In case of a data loss error, the corresponding HTTP/2 connection may be corrupted, affecting other requests that use the same connection; hence, a ResponseFailed([InvalidBodyLengthError]) failure is always raised for every request that was using that connection.
DUPEFILTER_CLASS
Default: 'scrapy.dupefilters.RFPDupeFilter'
The class used to detect and filter duplicate requests.
The default, RFPDupeFilter, filters based on the REQUEST_FINGERPRINTER_CLASS setting.
To change how duplicates are checked, you can point DUPEFILTER_CLASS to a custom subclass of RFPDupeFilter that overrides its __init__ method to use a different request fingerprinting class. For example:
from scrapy.dupefilters import RFPDupeFilter
from scrapy.utils.request import fingerprint


class CustomRequestFingerprinter:
    def fingerprint(self, request):
        return fingerprint(request, include_headers=["X-ID"])


class CustomDupeFilter(RFPDupeFilter):
    def __init__(self, path=None, debug=False, *, fingerprinter=None):
        super().__init__(
            path=path, debug=debug, fingerprinter=CustomRequestFingerprinter()
        )
To disable duplicate request filtering set DUPEFILTER_CLASS to 'scrapy.dupefilters.BaseDupeFilter'. Note that not filtering out duplicate requests may cause crawling loops. It is usually better to set the dont_filter parameter to True on the __init__ method of a specific Request object that should not be filtered out.
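For example, a sketch of deliberately allowing a duplicate request (the URL is an example):
import scrapy

# This request bypasses the duplicate filter even if the URL was seen before
request = scrapy.Request("https://example.com/refresh", dont_filter=True)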
A class assigned to DUPEFILTER_CLASS must implement the following interface:
class MyDupeFilter:
    @classmethod
    def from_settings(cls, settings):
        """Returns an instance of this duplicate request filtering class
        based on the current crawl settings."""
        return cls()

    def request_seen(self, request):
        """Returns ``True`` if *request* is a duplicate of another request
        seen in a previous call to :meth:`request_seen`, or ``False``
        otherwise."""
        return False

    def open(self):
        """Called before the spider opens. It may return a deferred."""
        pass

    def close(self, reason):
        """Called before the spider closes. It may return a deferred."""
        pass

    def log(self, request, spider):
        """Logs that a request has been filtered out.

        It is called right after a call to :meth:`request_seen` that
        returns ``True``.

        If :meth:`request_seen` always returns ``False``, such as in the
        case of :class:`~scrapy.dupefilters.BaseDupeFilter`, this method
        may be omitted.
        """
        pass
- class scrapy.dupefilters.BaseDupeFilter
  Dummy duplicate request filtering class (DUPEFILTER_CLASS) that does not filter out any request.
- class scrapy.dupefilters.RFPDupeFilter(path: str | None = None, debug: bool = False, *, fingerprinter: RequestFingerprinterProtocol | None = None)
  Duplicate request filtering class (DUPEFILTER_CLASS) that filters out requests with the canonical (w3lib.url.canonicalize_url()) url, method and body.
DUPEFILTER_DEBUG
Default: False
By default, RFPDupeFilter only logs the first duplicate request. Setting DUPEFILTER_DEBUG to True will make it log all duplicate requests.
EDITOR
Default: vi (on Unix systems) or the IDLE editor (on Windows)
The editor to use for editing spiders with the edit command.
Additionally, if the EDITOR environment variable is set, the edit command will prefer it over the default setting.
EXTENSIONS
Default: {}
Component priority dictionary of enabled extensions. See Extensions.
EXTENSIONS_BASE
Default:
{
    "scrapy.extensions.corestats.CoreStats": 0,
    "scrapy.extensions.telnet.TelnetConsole": 0,
    "scrapy.extensions.memusage.MemoryUsage": 0,
    "scrapy.extensions.memdebug.MemoryDebugger": 0,
    "scrapy.extensions.closespider.CloseSpider": 0,
    "scrapy.extensions.feedexport.FeedExporter": 0,
    "scrapy.extensions.logstats.LogStats": 0,
    "scrapy.extensions.spiderstate.SpiderState": 0,
    "scrapy.extensions.throttle.AutoThrottle": 0,
}
A dict containing the extensions available by default in Scrapy, and their orders. This setting contains all stable built-in extensions. Keep in mind that some of them need to be enabled through a setting.
For more information, see the extensions user guide and the list of available extensions.
FEED_TEMPDIR
The Feed Temp dir allows you to set a custom folder to save crawler temporary files before uploading with FTP feed storage and Amazon S3.
FEED_STORAGE_GCS_ACL
The Access Control List (ACL) used when storing items to Google Cloud Storage. For more information on how to set this value, please refer to the column JSON API in Google Cloud documentation.
FTP_PASSIVE_MODE
Default: True
Whether or not to use passive mode when initiating FTP transfers.
FTP_PASSWORD
Default: "guest"
The password to use for FTP connections when there is no "ftp_password" in Request meta.
Note
Paraphrasing RFC 1635, although it is common to use either the password “guest” or one’s e-mail address for anonymous FTP, some FTP servers explicitly ask for the user’s e-mail address and will not allow login with the “guest” password.
FTP_USER
Default: "anonymous"
The username to use for FTP connections when there is no "ftp_user" in Request meta.
GCS_PROJECT_ID
Default: None
The Project ID that will be used when storing data on Google Cloud Storage.
ITEM_PIPELINES
Default: {}
A dict containing the item pipelines to use, and their orders. Order values are arbitrary, but it is customary to define them in the 0-1000 range. Lower orders process before higher orders.
Example:
ITEM_PIPELINES = {
    "mybot.pipelines.validate.ValidateMyItem": 300,
    "mybot.pipelines.validate.StoreMyItem": 800,
}
ITEM_PIPELINES_BASE
Default: {}
A dict containing the pipelines enabled by default in Scrapy. You should never modify this setting in your project, modify ITEM_PIPELINES instead.
JOBDIR
Default: None
A string indicating the directory for storing the state of a crawl when pausing and resuming crawls.
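For example (the directory path is an arbitrary illustration):
# Persist scheduler queues and dupefilter state so the crawl can be paused and resumed
JOBDIR = "crawls/myspider-run1"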
LOG_ENABLED
Default: True
Whether to enable logging.
LOG_ENCODING
Default: 'utf-8'
The encoding to use for logging.
LOG_FILE
Default: None
File name to use for logging output. If None, standard error will be used.
LOG_FILE_APPEND
Default: True
If False, the log file specified with LOG_FILE will be overwritten (discarding the output from previous runs, if any).
LOG_FORMAT
Default: '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
String for formatting log messages. Refer to the Python logging documentation for the whole list of available placeholders.
LOG_DATEFORMAT
Default: '%Y-%m-%d %H:%M:%S'
String for formatting date/time, expansion of the %(asctime)s placeholder in LOG_FORMAT. Refer to the Python datetime documentation for the whole list of available directives.
LOG_FORMATTER
Default: scrapy.logformatter.LogFormatter
The class to use for formatting log messages for different actions.
LOG_LEVEL
Default: 'DEBUG'
Minimum level to log. Available levels are: CRITICAL, ERROR, WARNING, INFO, DEBUG. For more info see Logging.
LOG_STDOUT
Default: False
If True, all standard output (and error) of your process will be redirected to the log. For example if you print('hello') it will appear in the Scrapy log.
LOG_SHORT_NAMES
Default: False
If True, the logs will just contain the root path. If it is set to False then it displays the component responsible for the log output.
LOG_VERSIONS
Default: ["lxml", "libxml2", "cssselect", "parsel", "w3lib", "Twisted", "Python", "pyOpenSSL", "cryptography", "Platform"]
Logs the installed versions of the specified items.
An item can be any installed Python package.
The following special items are also supported:
libxml2
Platform (platform.platform())
Python
LOGSTATS_INTERVAL
Default: 60.0
The interval (in seconds) between each logging printout of the stats by LogStats.
MEMDEBUG_ENABLED
Default: False
Whether to enable memory debugging.
MEMDEBUG_NOTIFY
Default: []
When memory debugging is enabled a memory report will be sent to the specified addresses if this setting is not empty, otherwise the report will be written to the log.
Example:
MEMDEBUG_NOTIFY = ['user@example.com']
MEMUSAGE_ENABLED
Default: True
Scope: scrapy.extensions.memusage
Whether to enable the memory usage extension. This extension keeps track of the peak memory used by the process (it writes it to stats). It can also optionally shut down the Scrapy process when it exceeds a memory limit (see MEMUSAGE_LIMIT_MB), and notify by email when that has happened (see MEMUSAGE_NOTIFY_MAIL).
MEMUSAGE_LIMIT_MB
Default: 0
Scope: scrapy.extensions.memusage
The maximum amount of memory to allow (in megabytes) before shutting down Scrapy (if MEMUSAGE_ENABLED is True). If zero, no check will be performed.
MEMUSAGE_CHECK_INTERVAL_SECONDS
Default: 60.0
Scope: scrapy.extensions.memusage
The Memory usage extension checks the current memory usage, versus the limits set by MEMUSAGE_LIMIT_MB and MEMUSAGE_WARNING_MB, at fixed time intervals.
This sets the length of these intervals, in seconds.
MEMUSAGE_NOTIFY_MAIL
Default: False
Scope: scrapy.extensions.memusage
A list of emails to notify if the memory limit has been reached.
Example:
MEMUSAGE_NOTIFY_MAIL = ['user@example.com']
MEMUSAGE_WARNING_MB
Default: 0
Scope: scrapy.extensions.memusage
The maximum amount of memory to allow (in megabytes) before sending a warning email notifying about it. If zero, no warning will be produced.
NEWSPIDER_MODULE
Default: "<project name>.spiders"
(fallback: ""
)
Module where to create new spiders using the genspider
command.
Example:
NEWSPIDER_MODULE = 'mybot.spiders_dev'
RANDOMIZE_DOWNLOAD_DELAY
Default: True
If enabled, Scrapy will wait a random amount of time (between 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY) while fetching requests from the same website.
This randomization decreases the chance of the crawler being detected (and subsequently blocked) by sites which analyze requests looking for statistically significant similarities in the time between their requests.
The randomization policy is the same one used by the wget --random-wait option.
If DOWNLOAD_DELAY is zero (default) this option has no effect.
REACTOR_THREADPOOL_MAXSIZE
Default: 10
The maximum limit for the Twisted Reactor thread pool size. This is a common multi-purpose thread pool used by various Scrapy components: the threaded DNS resolver, BlockingFeedStorage and S3FilesStore, to name a few. Increase this value if you’re experiencing problems with insufficient blocking IO.
REDIRECT_PRIORITY_ADJUST
Default: +2
Scope: scrapy.downloadermiddlewares.redirect.RedirectMiddleware
Adjust redirect request priority relative to the original request:
a positive priority adjustment (default) means higher priority.
a negative priority adjustment means lower priority.
ROBOTSTXT_OBEY
Default: True (fallback: False)
If enabled, Scrapy will respect robots.txt policies. For more information see RobotsTxtMiddleware.
Note
While the default value is False for historical reasons, this option is enabled by default in the settings.py file generated by the scrapy startproject command.
ROBOTSTXT_PARSER
Default: 'scrapy.robotstxt.ProtegoRobotParser'
The parser backend to use for parsing robots.txt files. For more information see RobotsTxtMiddleware.
ROBOTSTXT_USER_AGENT
Default: None
The user agent string to use for matching in the robots.txt file. If None, the User-Agent header you are sending with the request or the USER_AGENT setting (in that order) will be used for determining the user agent to use in the robots.txt file.
SCHEDULER
Default: 'scrapy.core.scheduler.Scheduler'
The scheduler class to be used for crawling. See the Scheduler topic for details.
SCHEDULER_DEBUG
Default: False
Setting this to True will log debug information about the request scheduler. This currently logs (only once) if the requests cannot be serialized to disk. The stats counter (scheduler/unserializable) tracks the number of times this happens.
Example entry in logs:
1956-01-31 00:00:00+0800 [scrapy.core.scheduler] ERROR: Unable to serialize request:
<GET http://example.com> - reason: cannot serialize <Request at 0x9a7c7ec>
(type Request)> - no more unserializable requests will be logged
(see 'scheduler/unserializable' stats counter)
SCHEDULER_DISK_QUEUE
Default: 'scrapy.squeues.PickleLifoDiskQueue'
Type of disk queue that will be used by the scheduler. Other available types are scrapy.squeues.PickleFifoDiskQueue, scrapy.squeues.MarshalFifoDiskQueue and scrapy.squeues.MarshalLifoDiskQueue.
SCHEDULER_MEMORY_QUEUE
Default: 'scrapy.squeues.LifoMemoryQueue'
Type of in-memory queue used by the scheduler. The other available type is scrapy.squeues.FifoMemoryQueue.
SCHEDULER_PRIORITY_QUEUE
Default: 'scrapy.pqueues.ScrapyPriorityQueue'
Type of priority queue used by the scheduler. Another available type is scrapy.pqueues.DownloaderAwarePriorityQueue.
scrapy.pqueues.DownloaderAwarePriorityQueue works better than scrapy.pqueues.ScrapyPriorityQueue when you crawl many different domains in parallel. But currently scrapy.pqueues.DownloaderAwarePriorityQueue does not work together with CONCURRENT_REQUESTS_PER_IP.
SCHEDULER_START_DISK_QUEUE
Default: 'scrapy.squeues.PickleFifoDiskQueue'
Type of disk queue (see JOBDIR) that the scheduler uses for start requests. For available choices, see SCHEDULER_DISK_QUEUE.
Use None or "" to disable these separate queues entirely, and instead have start requests share the same queues as other requests.
Note
Disabling separate start request queues makes start request order unintuitive: start requests will be sent in order only until CONCURRENT_REQUESTS is reached, then remaining start requests will be sent in reverse order.
SCHEDULER_START_MEMORY_QUEUE
Default: 'scrapy.squeues.FifoMemoryQueue'
Type of in-memory queue that the scheduler uses for start requests.
For available choices, see SCHEDULER_MEMORY_QUEUE.
Use None or "" to disable these separate queues entirely, and instead have start requests share the same queues as other requests.
Note
Disabling separate start request queues makes start request order unintuitive: start requests will be sent in order only until CONCURRENT_REQUESTS is reached, then remaining start requests will be sent in reverse order.
SCRAPER_SLOT_MAX_ACTIVE_SIZE
Added in version 2.0.
Default: 5_000_000
Soft limit (in bytes) for response data being processed.
While the sum of the sizes of all responses being processed is above this value, Scrapy does not process new requests.
SPIDER_CONTRACTS
Default: {}
A dict containing the spider contracts enabled in your project, used for testing spiders. For more info see Spiders Contracts.
SPIDER_CONTRACTS_BASE
Default:
{
    "scrapy.contracts.default.UrlContract": 1,
    "scrapy.contracts.default.ReturnsContract": 2,
    "scrapy.contracts.default.ScrapesContract": 3,
}
A dict containing the Scrapy contracts enabled by default in Scrapy. You should never modify this setting in your project, modify SPIDER_CONTRACTS instead. For more info see Spiders Contracts.
You can disable any of these contracts by assigning None to their class path in SPIDER_CONTRACTS. E.g., to disable the built-in ScrapesContract, place this in your settings.py:
SPIDER_CONTRACTS = {
    "scrapy.contracts.default.ScrapesContract": None,
}
SPIDER_LOADER_CLASS
Default: 'scrapy.spiderloader.SpiderLoader'
The class that will be used for loading spiders, which must implement the SpiderLoader API.
SPIDER_LOADER_WARN_ONLY
Default: False
By default, when Scrapy tries to import spider classes from SPIDER_MODULES, it will fail loudly if there is any ImportError or SyntaxError exception. But you can choose to silence this exception and turn it into a simple warning by setting SPIDER_LOADER_WARN_ONLY = True.
Note
Some scrapy commands run with this setting set to True already (i.e. they will only issue a warning and will not fail) since they do not actually need to load spider classes to work: scrapy runspider, scrapy settings, scrapy startproject, scrapy version.
SPIDER_MIDDLEWARES
Default: {}
A dict containing the spider middlewares enabled in your project, and their orders. For more info see Activating a spider middleware.
SPIDER_MIDDLEWARES_BASE
Default:
{
    "scrapy.spidermiddlewares.httperror.HttpErrorMiddleware": 50,
    "scrapy.spidermiddlewares.referer.RefererMiddleware": 700,
    "scrapy.spidermiddlewares.urllength.UrlLengthMiddleware": 800,
    "scrapy.spidermiddlewares.depth.DepthMiddleware": 900,
}
A dict containing the spider middlewares enabled by default in Scrapy, and their orders. Low orders are closer to the engine, high orders are closer to the spider. For more info see Activating a spider middleware.
SPIDER_MODULES
Default: ["<project name>.spiders"]
(fallback: []
)
A list of modules where Scrapy will look for spiders.
Example:
SPIDER_MODULES = ["mybot.spiders_prod", "mybot.spiders_dev"]
STATS_CLASS
Default: 'scrapy.statscollectors.MemoryStatsCollector'
The class to use for collecting stats, which must implement the Stats Collector API.
STATS_DUMP
Default: True
Dump the Scrapy stats (to the Scrapy log) once the spider finishes.
For more info see: Stats Collection.
STATSMAILER_RCPTS
Default: [] (empty list)
Send Scrapy stats after spiders finish scraping. See StatsMailer for more info.
TELNETCONSOLE_ENABLED
Default: True
A boolean which specifies if the telnet console will be enabled (provided its extension is also enabled).
TEMPLATES_DIR
Default: templates dir inside scrapy module
The directory where to look for templates when creating new projects with the startproject command and new spiders with the genspider command.
The project name must not conflict with the name of custom files or directories in the project subdirectory.
TWISTED_REACTOR
Added in version 2.0.
Default: "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
Import path of a given reactor.
Scrapy will install this reactor if no other reactor is installed yet, such as when the scrapy CLI program is invoked or when using the CrawlerProcess class.
If you are using the CrawlerRunner class, you also need to install the correct reactor manually. You can do that using install_reactor():
- scrapy.utils.reactor.install_reactor(reactor_path: str, event_loop_path: str | None = None) -> None
  Installs the reactor with the specified import path. Also installs the asyncio event loop with the specified import path if the asyncio reactor is enabled.
If a reactor is already installed, install_reactor() has no effect.
CrawlerRunner.__init__ raises Exception if the installed reactor does not match the TWISTED_REACTOR setting; therefore, having top-level reactor imports in project files and imported third-party libraries will make Scrapy raise Exception when it checks which reactor is installed.
In order to use the reactor installed by Scrapy:
import scrapy
from twisted.internet import reactor


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def __init__(self, *args, **kwargs):
        self.timeout = int(kwargs.pop("timeout", "60"))
        super(QuotesSpider, self).__init__(*args, **kwargs)

    async def start(self):
        reactor.callLater(self.timeout, self.stop)

        urls = ["https://quotes.toscrape.com/page/1"]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {"text": quote.css("span.text::text").get()}

    def stop(self):
        self.crawler.engine.close_spider(self, "timeout")
which raises Exception, becomes:
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def __init__(self, *args, **kwargs):
        self.timeout = int(kwargs.pop("timeout", "60"))
        super(QuotesSpider, self).__init__(*args, **kwargs)

    async def start(self):
        from twisted.internet import reactor

        reactor.callLater(self.timeout, self.stop)

        urls = ["https://quotes.toscrape.com/page/1"]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {"text": quote.css("span.text::text").get()}

    def stop(self):
        self.crawler.engine.close_spider(self, "timeout")
If this setting is set to None, Scrapy will use the existing reactor if one is already installed, or install the default reactor defined by Twisted for the current platform.
Changed in version 2.7: The startproject command now sets this setting to twisted.internet.asyncioreactor.AsyncioSelectorReactor in the generated settings.py file.
Changed in version 2.13: The default value was changed from None to "twisted.internet.asyncioreactor.AsyncioSelectorReactor".
For additional information, see Choosing a Reactor and GUI Toolkit Integration.
URLLENGTH_LIMIT
Default: 2083
Scope: spidermiddlewares.urllength
The maximum URL length to allow for crawled URLs.
This setting can act as a stopping condition in case of URLs of ever-increasing length, which may be caused for example by a programming error either in the target server or in your code. See also REDIRECT_MAX_TIMES and DEPTH_LIMIT.
Use 0 to allow URLs of any length.
The default value is copied from the Microsoft Internet Explorer maximum URL length, even though this setting exists for different reasons.
USER_AGENT
Default: "Scrapy/VERSION (+https://scrapy.org)"
The default User-Agent to use when crawling, unless overridden. This user agent is also used by RobotsTxtMiddleware if the ROBOTSTXT_USER_AGENT setting is None and there is no overriding User-Agent header specified for the request.
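For example (the bot name and URL are placeholders):
USER_AGENT = "mybot/1.0 (+https://example.com/bot-info)"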
WARN_ON_GENERATOR_RETURN_VALUE
Default: True
When enabled, Scrapy will warn if generator-based callback methods (like parse) contain return statements with non-None values. This helps detect potential mistakes in spider development.
Disable this setting to prevent syntax errors that may occur when dynamically modifying generator function source code during runtime, skip AST parsing of callback functions, or improve performance in auto-reloading development environments.
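As an illustration, a sketch of the kind of callback this setting warns about (the spider is hypothetical):
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"

    def parse(self, response):
        yield {"url": response.url}
        # A non-None return value inside a generator callback is silently
        # discarded, so it is almost always a mistake; this line would
        # trigger the warning.
        return {"title": response.css("title::text").get()}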
Settings documented elsewhere:
The following settings are documented elsewhere, please check each specific case to see how to enable and use them.