Core API

This section documents the Scrapy core API, and it’s intended for developers of extensions and middlewares.

Crawler API

The main entry point to Scrapy API is the Crawler object, passed to extensions through the from_crawler class method. This object provides access to all Scrapy core components, and it’s the only way for extensions to access them and hook their functionality into Scrapy.
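
For example, a minimal extension sketch (the class name, stat key and handler are illustrative, not part of Scrapy) that receives the Crawler in from_crawler and hooks into its components:

from scrapy import signals


class SpiderOpenCloseLogging:
    # Illustrative extension: the class name and the stat key are made up.
    def __init__(self, stats):
        self.stats = stats

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls(crawler.stats)
        # Hook into Scrapy through the crawler's signal manager.
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        return ext

    def spider_opened(self, spider):
        self.stats.set_value("my_extension/opened_spider", spider.name)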

The Extension Manager is responsible for loading and keeping track of installed extensions. It is configured through the EXTENSIONS setting, which contains a dictionary of all available extensions and their order, similar to how you configure the downloader middlewares.
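
For example, a sketch of the EXTENSIONS setting in a project's settings.py (the "myproject.extensions.SpiderOpenCloseLogging" path is a placeholder):

# settings.py
EXTENSIONS = {
    # Built-in extension, disabled by setting its order to None
    "scrapy.extensions.telnet.TelnetConsole": None,
    # Hypothetical project extension, enabled with order 500
    "myproject.extensions.SpiderOpenCloseLogging": 500,
}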

class scrapy.crawler.Crawler(spidercls, settings)[source]

The Crawler object must be instantiated with a scrapy.Spider subclass and a scrapy.settings.Settings object.

request_fingerprinter

The request fingerprint builder of this crawler.

This is used by extensions and middlewares to build short, unique identifiers for requests. See Request fingerprints.
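
A sketch of a middleware method using it (a fragment: it assumes the middleware stored the crawler in from_crawler and that a Scrapy version providing request_fingerprinter is used; the method name is illustrative):

def request_seen_key(self, request):
    # Fingerprints are bytes; hex() gives a stable string usable as a
    # cache or deduplication key.
    fingerprint = self.crawler.request_fingerprinter.fingerprint(request)
    return fingerprint.hex()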

settings

The settings manager of this crawler.

This is used by extensions & middlewares to access the Scrapy settings of this crawler.

For an introduction on Scrapy settings see Settings.

For the API see Settings class.
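
A sketch of typical reads from crawler.settings inside an extension (CONCURRENT_REQUESTS and ROBOTSTXT_OBEY are real built-in settings; the variable names are illustrative):

# `crawler` is the Crawler instance received in from_crawler
concurrency = crawler.settings.getint("CONCURRENT_REQUESTS")
obey_robots = crawler.settings.getbool("ROBOTSTXT_OBEY")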

signals

The signals manager of this crawler.

This is used by extensions & middlewares to hook themselves into Scrapy functionality.

For an introduction on signals see Signals.

For the API see SignalManager class.
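
A sketch of connecting a handler from from_crawler (assuming `crawler` is the Crawler passed to the extension and `ext` is the extension instance):

from scrapy import signals

# Call ext.spider_closed whenever the built-in spider_closed signal fires.
crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)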

stats

The stats collector of this crawler.

This is used from extensions & middlewares to record stats of their behaviour, or access stats collected by other extensions.

For an introduction on stats collection see Stats Collection.

For the API see StatsCollector class.
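
A sketch of recording stats from an extension or middleware (the stat keys are illustrative; the methods belong to the Stats Collector API documented below):

# `crawler` is the Crawler instance received in from_crawler
crawler.stats.inc_value("myext/items_seen")
crawler.stats.set_value("myext/started_by", "script")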

extensions

The extension manager that keeps track of enabled extensions.

Most extensions won’t need to access this attribute.

For an introduction on extensions and a list of available extensions on Scrapy see Extensions.

engine

The execution engine, which coordinates the core crawling logic between the scheduler, downloader and spiders.

Some extensions may want to access the Scrapy engine, to inspect or modify the downloader and scheduler behaviour, although this is an advanced use and this API is not yet stable.

spider

Spider currently being crawled. This is an instance of the spider class provided while constructing the crawler, and it is instantiated with the arguments given to the crawl() method.

crawl(*args, **kwargs)[source]

Starts the crawler by instantiating its spider class with the given args and kwargs arguments, while setting the execution engine in motion. Should be called only once.

Returns a deferred that is fired when the crawl is finished.

stop() Generator[Deferred, Any, None][source]

Starts a graceful stop of the crawler and returns a deferred that is fired when the crawler is stopped.

class scrapy.crawler.CrawlerRunner(settings: Optional[Union[Dict[str, Any], Settings]] = None)[source]

This is a convenient helper class that keeps track of, manages and runs crawlers inside an already set up reactor.

The CrawlerRunner object must be instantiated with a Settings object.

This class shouldn’t be needed (since Scrapy is responsible for using it accordingly) unless writing scripts that manually handle the crawling process. See Run Scrapy from a script for an example.
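
A minimal sketch of such a script (MySpider is a placeholder for one of your Spider subclasses):

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

configure_logging()
runner = CrawlerRunner(get_project_settings())
d = runner.crawl(MySpider)           # MySpider is a placeholder spider class
d.addBoth(lambda _: reactor.stop())  # stop the reactor when the crawl ends
reactor.run()                        # blocks until reactor.stop() is called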

crawl(crawler_or_spidercls: Union[Type[Spider], str, Crawler], *args: Any, **kwargs: Any) Deferred[source]

Run a crawler with the provided arguments.

It will call the given Crawler’s crawl() method, while keeping track of it so it can be stopped later.

If crawler_or_spidercls isn’t a Crawler instance, this method will try to create one using this parameter as the spider class given to it.

Returns a deferred that is fired when the crawling is finished.

Parameters
  • crawler_or_spidercls (Crawler instance, Spider subclass or string) – already created crawler, or a spider class or spider’s name inside the project to create it

  • args – arguments to initialize the spider

  • kwargs – keyword arguments to initialize the spider

property crawlers

Set of crawlers started by crawl() and managed by this class.

create_crawler(crawler_or_spidercls: Union[Type[Spider], str, Crawler]) Crawler[source]

Return a Crawler object.

  • If crawler_or_spidercls is a Crawler, it is returned as-is.

  • If crawler_or_spidercls is a Spider subclass, a new Crawler is constructed for it.

  • If crawler_or_spidercls is a string, this function finds a spider with this name in a Scrapy project (using spider loader), then creates a Crawler instance for it.

join()[source]

Returns a deferred that is fired when all managed crawlers have completed their executions.

stop() Deferred[source]

Simultaneously stops all the crawling jobs that are taking place.

Returns a deferred that is fired when they all have ended.

class scrapy.crawler.CrawlerProcess(settings: Optional[Union[Dict[str, Any], Settings]] = None, install_root_handler: bool = True)[source]

Bases: CrawlerRunner

A class to run multiple Scrapy crawlers in a process simultaneously.

This class extends CrawlerRunner by adding support for starting a reactor and handling shutdown signals, like the keyboard interrupt command Ctrl-C. It also configures top-level logging.

This utility should be a better fit than CrawlerRunner if you aren’t running another reactor within your application.

The CrawlerProcess object must be instantiated with a Settings object.

Parameters

install_root_handler – whether to install root logging handler (default: True)

This class shouldn’t be needed (since Scrapy is responsible for using it accordingly) unless writing scripts that manually handle the crawling process. See Run Scrapy from a script for an example.
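
A minimal sketch of such a script ("myspider" is a placeholder for a spider name registered in the project):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl("myspider")  # placeholder spider name
process.start()            # starts the reactor; blocks until crawling ends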

crawl(crawler_or_spidercls: Union[Type[Spider], str, Crawler], *args: Any, **kwargs: Any) Deferred

Run a crawler with the provided arguments.

It will call the given Crawler’s crawl() method, while keeping track of it so it can be stopped later.

If crawler_or_spidercls isn’t a Crawler instance, this method will try to create one using this parameter as the spider class given to it.

Returns a deferred that is fired when the crawling is finished.

Parameters
  • crawler_or_spidercls (Crawler instance, Spider subclass or string) – already created crawler, or a spider class or spider’s name inside the project to create it

  • args – arguments to initialize the spider

  • kwargs – keyword arguments to initialize the spider

property crawlers

Set of crawlers started by crawl() and managed by this class.

create_crawler(crawler_or_spidercls: Union[Type[Spider], str, Crawler]) Crawler

Return a Crawler object.

  • If crawler_or_spidercls is a Crawler, it is returned as-is.

  • If crawler_or_spidercls is a Spider subclass, a new Crawler is constructed for it.

  • If crawler_or_spidercls is a string, this function finds a spider with this name in a Scrapy project (using spider loader), then creates a Crawler instance for it.

join()

Returns a deferred that is fired when all managed crawlers have completed their executions.

start(stop_after_crawl: bool = True, install_signal_handlers: bool = True) None[source]

This method starts a reactor, adjusts its pool size to REACTOR_THREADPOOL_MAXSIZE, and installs a DNS cache based on DNSCACHE_ENABLED and DNSCACHE_SIZE.

If stop_after_crawl is True, the reactor will be stopped after all crawlers have finished, using join().

Parameters
  • stop_after_crawl (bool) – whether to stop the reactor when all crawlers have finished

  • install_signal_handlers (bool) – whether to install the OS signal handlers from Twisted and Scrapy (default: True)

stop() Deferred

Simultaneously stops all the crawling jobs that are taking place.

Returns a deferred that is fired when they all have ended.

Settings API

scrapy.settings.SETTINGS_PRIORITIES

Dictionary that defines the key name and priority level of each default settings priority used in Scrapy.

Each item defines a settings entry point, giving it a code name for identification and an integer priority. Greater priorities take more precedence over lesser ones when setting and retrieving values in the Settings class.

SETTINGS_PRIORITIES = {
    "default": 0,
    "command": 10,
    "addon": 15,
    "project": 20,
    "spider": 30,
    "cmdline": 40,
}

For a detailed explanation of each settings source, see Settings.

scrapy.settings.get_settings_priority(priority: Union[int, str]) int[source]

Small helper function that looks up a given string priority in the SETTINGS_PRIORITIES dictionary and returns its numerical value, or directly returns a given numerical priority.
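
For example, given the SETTINGS_PRIORITIES shown above:

from scrapy.settings import get_settings_priority

get_settings_priority("spider")  # 30, looked up in SETTINGS_PRIORITIES
get_settings_priority(25)        # 25, integer priorities are returned as-is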

class scrapy.settings.Settings(values: _SettingsInputT = None, priority: Union[int, str] = 'project')[source]

Bases: BaseSettings

This object stores Scrapy settings for the configuration of internal components, and can be used for any further customization.

It is a direct subclass and supports all methods of BaseSettings. Additionally, after instantiation of this class, the new object will have the global default settings described in the Built-in settings reference already populated.

class scrapy.settings.BaseSettings(values: _SettingsInputT = None, priority: Union[int, str] = 'project')[source]

Instances of this class behave like dictionaries, but store priorities along with their (key, value) pairs, and can be frozen (i.e. marked immutable).

Key-value entries can be passed on initialization with the values argument, and they take the priority level given by the priority argument (unless values is already an instance of BaseSettings, in which case the existing priority levels are kept). If the priority argument is a string, the priority name will be looked up in SETTINGS_PRIORITIES. Otherwise, a specific integer should be provided.

Once the object is created, new settings can be loaded or updated with the set() method, and can be accessed with the square bracket notation of dictionaries, or with the get() method of the instance and its value conversion variants. When requesting a stored key, the value with the highest priority will be retrieved.
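
A short sketch of how priorities interact when setting and retrieving values (the setting name is arbitrary):

from scrapy.settings import BaseSettings

settings = BaseSettings({"CONCURRENT_REQUESTS": 16}, priority="default")
settings.set("CONCURRENT_REQUESTS", 32, priority="cmdline")
settings["CONCURRENT_REQUESTS"]         # 32: the highest priority wins
settings.set("CONCURRENT_REQUESTS", 8, priority="project")
settings.getint("CONCURRENT_REQUESTS")  # still 32: lower-priority set() is ignored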

copy() Self[source]

Make a deep copy of current settings.

This method returns a new instance of the Settings class, populated with the same values and their priorities.

Modifications to the new object won’t be reflected on the original settings.

copy_to_dict() Dict[Optional[Union[bool, float, int, str]], Any][source]

Make a copy of current settings and convert to a dict.

This method returns a new dict populated with the same values and their priorities as the current settings.

Modifications to the returned dict won’t be reflected on the original settings.

This method can be useful for example for printing settings in Scrapy shell.

freeze() None[source]

Disable further changes to the current settings.

After calling this method, the present state of the settings will become immutable. Trying to change values through the set() method and its variants won’t be possible and will raise an error.

frozencopy() Self[source]

Return an immutable copy of the current settings.

Alias for a freeze() call in the object returned by copy().

get(name: Optional[Union[bool, float, int, str]], default: Any = None) Any[source]

Get a setting value without affecting its original type.

Parameters
  • name (str) – the setting name

  • default (object) – the value to return if no setting is found

getbool(name: Optional[Union[bool, float, int, str]], default: bool = False) bool[source]

Get a setting value as a boolean.

1, '1', True and 'True' return True, while 0, '0', False, 'False' and None return False.

For example, settings populated through environment variables set to '0' will return False when using this method.

Parameters
  • name (str) – the setting name

  • default (object) – the value to return if no setting is found

getdict(name: Optional[Union[bool, float, int, str]], default: Optional[Dict[Any, Any]] = None) Dict[Any, Any][source]

Get a setting value as a dictionary. If the setting original type is a dictionary, a copy of it will be returned. If it is a string it will be evaluated as a JSON dictionary. In the case that it is a BaseSettings instance itself, it will be converted to a dictionary, containing all its current settings values as they would be returned by get(), and losing all information about priority and mutability.

Parameters
  • name (str) – the setting name

  • default (object) – the value to return if no setting is found

getdictorlist(name: Optional[Union[bool, float, int, str]], default: Optional[Union[Dict[Any, Any], List[Any], Tuple[Any]]] = None) Union[Dict[Any, Any], List[Any]][source]

Get a setting value as either a dict or a list.

If the setting is already a dict or a list, a copy of it will be returned.

If it is a string it will be evaluated as JSON, or as a comma-separated list of strings as a fallback.

For example, settings populated from the command line will return:

  • {'key1': 'value1', 'key2': 'value2'} if set to '{"key1": "value1", "key2": "value2"}'

  • ['one', 'two'] if set to '["one", "two"]' or 'one,two'

Parameters
  • name (str) – the setting name

  • default (object) – the value to return if no setting is found

getfloat(name: Optional[Union[bool, float, int, str]], default: float = 0.0) float[source]

Get a setting value as a float.

Parameters
  • name (str) – the setting name

  • default (object) – the value to return if no setting is found

getint(name: Optional[Union[bool, float, int, str]], default: int = 0) int[source]

Get a setting value as an int.

Parameters
  • name (str) – the setting name

  • default (object) – the value to return if no setting is found

getlist(name: Optional[Union[bool, float, int, str]], default: Optional[List[Any]] = None) List[Any][source]

Get a setting value as a list. If the setting original type is a list, a copy of it will be returned. If it’s a string it will be split by “,”.

For example, settings populated through environment variables set to 'one,two' will return a list ['one', 'two'] when using this method.

Parameters
  • name (str) – the setting name

  • default (object) – the value to return if no setting is found
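
A sketch of how the typed getters coerce stored values (the MYEXT_* setting names are hypothetical):

from scrapy.settings import BaseSettings

settings = BaseSettings({"MYEXT_ENABLED": "0", "MYEXT_CODES": "500,502"})
settings.getbool("MYEXT_ENABLED")           # False
settings.getlist("MYEXT_CODES")             # ['500', '502']
settings.getint("MYEXT_LIMIT", default=10)  # 10, the default is returned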

getpriority(name: Optional[Union[bool, float, int, str]]) Optional[int][source]

Return the current numerical priority value of a setting, or None if the given name does not exist.

Parameters

name (str) – the setting name

getwithbase(name: Optional[Union[bool, float, int, str]]) BaseSettings[source]

Get a composition of a dictionary-like setting and its _BASE counterpart.

Parameters

name (str) – name of the dictionary-like setting

maxpriority() int[source]

Return the numerical value of the highest priority present throughout all settings, or the numerical value for default from SETTINGS_PRIORITIES if there are no settings stored.

pop(k[, d])[source]

Remove the specified key and return the corresponding value.

If key is not found, d is returned if given, otherwise KeyError is raised.

set(name: Optional[Union[bool, float, int, str]], value: Any, priority: Union[int, str] = 'project') None[source]

Store a key/value attribute with a given priority.

Settings should be populated before configuring the Crawler object (through the configure() method), otherwise they won’t have any effect.

Parameters
  • name (str) – the setting name

  • value (object) – the value to associate with the setting

  • priority (str or int) – the priority of the setting. Should be a key of SETTINGS_PRIORITIES or an integer

setdefault(k[, d])[source]

Return D.get(k, d), and also set D[k] = d if k is not in D.

setmodule(module: Union[module, str], priority: Union[int, str] = 'project') None[source]

Store settings from a module with a given priority.

This is a helper function that calls set() for every globally declared uppercase variable of module with the provided priority.

Parameters
  • module (types.ModuleType or str) – the module or the path of the module

  • priority (str or int) – the priority of the settings. Should be a key of SETTINGS_PRIORITIES or an integer

update(values: _SettingsInputT, priority: Union[int, str] = 'project') None[source]

Store key/value pairs with a given priority.

This is a helper function that calls set() for every item of values with the provided priority.

If values is a string, it is assumed to be JSON-encoded and parsed into a dict with json.loads() first. If it is a BaseSettings instance, the per-key priorities will be used and the priority parameter ignored. This allows inserting/updating settings with different priorities with a single command.

Parameters
  • values (dict or str or BaseSettings) – the settings names and values

  • priority (str or int) – the priority of the settings. Should be a key of SETTINGS_PRIORITIES or an integer
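
A sketch of populating settings from a module path and from a JSON string ("myproject.settings" is a placeholder module path):

from scrapy.settings import BaseSettings

settings = BaseSettings()
settings.setmodule("myproject.settings", priority="project")  # placeholder module
settings.update('{"LOG_LEVEL": "INFO"}', priority="cmdline")  # JSON-encoded string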

SpiderLoader API

class scrapy.spiderloader.SpiderLoader[source]

This class is in charge of retrieving and handling the spider classes defined across the project.

Custom spider loaders can be employed by specifying their path in the SPIDER_LOADER_CLASS project setting. They must fully implement the scrapy.interfaces.ISpiderLoader interface to guarantee an errorless execution.

from_settings(settings)[source]

This class method is used by Scrapy to create an instance of the class. It’s called with the current project settings, and it loads the spiders found recursively in the modules of the SPIDER_MODULES setting.

Parameters

settings (Settings instance) – project settings

load(spider_name)[source]

Get the Spider class with the given name. It’ll look into the previously loaded spiders for a spider class with name spider_name and will raise a KeyError if not found.

Parameters

spider_name (str) – spider class name

list()[source]

Get the names of the available spiders in the project.

find_by_request(request)[source]

List the spiders’ names that can handle the given request. Will try to match the request’s URL against the domains of the spiders.

Parameters

request (Request instance) – queried request
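
A sketch of using the spider loader directly from a script inside a Scrapy project ("example" is a placeholder spider name):

from scrapy.spiderloader import SpiderLoader
from scrapy.utils.project import get_project_settings

loader = SpiderLoader.from_settings(get_project_settings())
loader.list()                        # names of the spiders found in SPIDER_MODULES
spider_cls = loader.load("example")  # raises KeyError if no such spider exists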

Signals API

class scrapy.signalmanager.SignalManager(sender: Any = _Anonymous)[source]
connect(receiver: Any, signal: Any, **kwargs: Any) None[source]

Connect a receiver function to a signal.

The signal can be any object, although Scrapy comes with some predefined signals that are documented in the Signals section.

Parameters
  • receiver (collections.abc.Callable) – the function to be connected

  • signal (object) – the signal to connect to

disconnect(receiver: Any, signal: Any, **kwargs: Any) None[source]

Disconnect a receiver function from a signal. This has the opposite effect of the connect() method, and the arguments are the same.

disconnect_all(signal: Any, **kwargs: Any) None[source]

Disconnect all receivers from the given signal.

Parameters

signal (object) – the signal to disconnect from

send_catch_log(signal: Any, **kwargs: Any) List[Tuple[Any, Any]][source]

Send a signal, catch exceptions and log them.

The keyword arguments are passed to the signal handlers (connected through the connect() method).
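
A sketch of defining and sending a custom signal (the signal object, handler and keyword arguments are illustrative; `crawler` is a Crawler instance):

# Any object can act as a signal.
request_retried_late = object()

def handler(elapsed, **kwargs):
    # Receives the keyword arguments passed to send_catch_log.
    print(f"retried after {elapsed:.1f}s")

crawler.signals.connect(handler, signal=request_retried_late)
crawler.signals.send_catch_log(request_retried_late, elapsed=3.2)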

send_catch_log_deferred(signal: Any, **kwargs: Any) Deferred[source]

Like send_catch_log() but supports returning Deferred objects from signal handlers.

Returns a Deferred that gets fired once all the Deferred objects returned by the signal handlers have fired.

The keyword arguments are passed to the signal handlers (connected through the connect() method).

Stats Collector API

There are several Stats Collectors available under the scrapy.statscollectors module and they all implement the Stats Collector API defined by the StatsCollector class (which they all inherit from).

class scrapy.statscollectors.StatsCollector[source]
get_value(key, default=None)[source]

Return the value for the given stats key or default if it doesn’t exist.

get_stats()[source]

Get all stats from the currently running spider as a dict.

set_value(key, value)[source]

Set the given value for the given stats key.

set_stats(stats)[source]

Override the current stats with the dict passed in stats argument.

inc_value(key, count=1, start=0)[source]

Increment the value of the given stats key by the given count, using the given start value if the key is not already set.

max_value(key, value)[source]

Set the given value for the given key only if the current value for the same key is lower than value. If there is no current value for the given key, the value is always set.

min_value(key, value)[source]

Set the given value for the given key only if the current value for the same key is greater than value. If there is no current value for the given key, the value is always set.

clear_stats()[source]

Clear all stats.
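
A sketch of typical calls against this API (assuming `stats` is crawler.stats; the keys are illustrative):

stats.inc_value("myext/pages_seen")             # 0 -> 1, then 1 -> 2, ...
stats.max_value("myext/max_depth", 7)           # keeps the largest value seen
stats.min_value("myext/min_latency", 0.3)       # keeps the smallest value seen
stats.get_value("myext/pages_seen", default=0)  # current value, or 0 if unset
stats.get_stats()                               # dict with all recorded stats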

The following methods are not part of the stats collection API, but are instead used when implementing custom stats collectors:

open_spider(spider)[source]

Open the given spider for stats collection.

close_spider(spider)[source]

Close the given spider. After this is called, no more specific stats can be accessed or collected.