Release notes
Scrapy 2.10.0 (2023-08-04)
Highlights:
Added Python 3.12 support, dropped Python 3.7 support.
The new add-ons framework simplifies configuring 3rd-party components that support it.
Exceptions to retry can now be configured.
Many fixes and improvements for feed exports.
Modified requirements
Dropped support for Python 3.7. (issue 5953)
Added support for the upcoming Python 3.12. (issue 5984)
Minimum versions increased for these dependencies:
lxml: 4.3.0 → 4.4.1
cryptography: 3.4.6 → 36.0.0
pkg_resources is no longer used. (issue 5956, issue 5958)
boto3 is now recommended instead of botocore for exporting to S3. (issue 5833)
Backward-incompatible changes
The value of the FEED_STORE_EMPTY setting is now True instead of False. In earlier Scrapy versions empty files were created even when this setting was False (which was a bug that is now fixed), so the new default should keep the old behavior. (issue 872, issue 5847)
Deprecation removals
When a function is assigned to the FEED_URI_PARAMS setting, returning None or modifying the params input parameter, deprecated in Scrapy 2.6, is no longer supported. (issue 5994, issue 5996)
The scrapy.utils.reqser module, deprecated in Scrapy 2.6, is removed. (issue 5994, issue 5996)
The scrapy.squeues classes PickleFifoDiskQueueNonRequest, PickleLifoDiskQueueNonRequest, MarshalFifoDiskQueueNonRequest, and MarshalLifoDiskQueueNonRequest, deprecated in Scrapy 2.6, are removed. (issue 5994, issue 5996)
The property open_spiders and the methods has_capacity and schedule of scrapy.core.engine.ExecutionEngine, deprecated in Scrapy 2.6, are removed. (issue 5994, issue 5998)
Passing a spider argument to the spider_is_idle(), crawl() and download() methods of scrapy.core.engine.ExecutionEngine, deprecated in Scrapy 2.6, is no longer supported. (issue 5994, issue 5998)
Deprecations
scrapy.utils.datatypes.CaselessDict is deprecated; use scrapy.utils.datatypes.CaseInsensitiveDict instead. (issue 5146)
Passing the custom argument to scrapy.utils.conf.build_component_list() is deprecated. It was used in the past to merge FOO and FOO_BASE setting values, but now Scrapy uses scrapy.settings.BaseSettings.getwithbase() to do the same. Code that uses this argument and cannot be switched to getwithbase() can be switched to merging the values explicitly (see the sketch below). (issue 5726, issue 5923)
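A minimal sketch of the getwithbase() alternative, assuming settings is a scrapy.settings.BaseSettings instance; the setting name MYEXT_COMPONENTS is hypothetical:

    # Merge MYEXT_COMPONENTS and MYEXT_COMPONENTS_BASE (hypothetical names)
    # the way build_component_list(custom=...) used to merge FOO and FOO_BASE.
    merged = settings.getwithbase("MYEXT_COMPONENTS")
    components = dict(merged)  # the explicit merge result as a plain dict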
New features
Added support for Scrapy add-ons. (issue 5950)
Added the RETRY_EXCEPTIONS setting, which configures which exceptions will be retried by RetryMiddleware (see the sketch after this list). (issue 2701, issue 5929)
Added the possibility to close the spider if no items were produced in the specified time, configured by CLOSESPIDER_TIMEOUT_NO_ITEM. (issue 5979)
Added support for the AWS_REGION_NAME setting to feed exports. (issue 5980)
Added support for using pathlib.Path objects that refer to absolute Windows paths in the FEEDS setting. (issue 5939)
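A minimal settings.py sketch; the entries below are illustrative, not the default value (entries may be exception classes or, as the docs describe, their import paths as strings):

    # settings.py -- illustrative; see the RETRY_EXCEPTIONS docs for defaults
    RETRY_EXCEPTIONS = [
        OSError,  # an exception class
        "twisted.internet.defer.TimeoutError",  # or an import-path string
    ]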
Bug fixes
Fixed creating empty feeds even with FEED_STORE_EMPTY=False. (issue 872, issue 5847)
Fixed using absolute Windows paths when specifying output files. (issue 5969, issue 5971)
Fixed problems with uploading large files to S3 by switching to multipart uploads (requires boto3). (issue 960, issue 5735, issue 5833)
Fixed the JSON exporter writing extra commas when some exceptions occur. (issue 3090, issue 5952)
Fixed the “read of closed file” error in the CSV exporter. (issue 5043, issue 5705)
Fixed an error when a component added by the class object throws NotConfigured with a message. (issue 5950, issue 5992)
Added the missing scrapy.settings.BaseSettings.pop() method. (issue 5959, issue 5960, issue 5963)
Added CaseInsensitiveDict as a replacement for CaselessDict that fixes some API inconsistencies. (issue 5146)
Documentation
Documented scrapy.Spider.update_settings(). (issue 5745, issue 5846)
Documented possible problems with early Twisted reactor installation and their solutions. (issue 5981, issue 6000)
Added examples of making additional requests in callbacks. (issue 5927)
Improved the feed export docs. (issue 5579, issue 5931)
Clarified the docs about request objects on redirection. (issue 5707, issue 5937)
Quality assurance
Added support for running tests against the installed Scrapy version. (issue 4914, issue 5949)
Extended typing hints. (issue 5925, issue 5977)
Fixed the test_utils_asyncio.AsyncioTest.test_set_asyncio_event_loop test. (issue 5951)
Fixed the test_feedexport.BatchDeliveriesTest.test_batch_path_differ test on Windows. (issue 5847)
Enabled CI runs for Python 3.11 on Windows. (issue 5999)
Simplified skipping tests that depend on uvloop. (issue 5984)
Fixed the extra-deps-pinned tox env. (issue 5948)
Implemented cleanups. (issue 5965, issue 5986)
Scrapy 2.9.0 (2023-05-08)
Highlights:
Per-domain download settings.
Compatibility with new cryptography and new parsel.
JMESPath selectors from the new parsel.
Bug fixes.
Deprecations
scrapy.extensions.feedexport._FeedSlot is renamed to scrapy.extensions.feedexport.FeedSlot and the old name is deprecated. (issue 5876)
New features
Settings corresponding to DOWNLOAD_DELAY, CONCURRENT_REQUESTS_PER_DOMAIN and RANDOMIZE_DOWNLOAD_DELAY can now be set on a per-domain basis via the new DOWNLOAD_SLOTS setting (see the sketch after this list). (issue 5328)
Added TextResponse.jmespath(), a shortcut for JMESPath selectors available since parsel 1.8.1. (issue 5894, issue 5915)
Added feed_slot_closed and feed_exporter_closed signals. (issue 5876)
Added scrapy.utils.request.request_to_curl(), a function to produce a curl command from a Request object. (issue 5892)
Values of FILES_STORE and IMAGES_STORE can now be pathlib.Path instances. (issue 5801)
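A minimal settings.py sketch of per-domain download slots; the domain and values are illustrative:

    # settings.py -- per-domain overrides of delay/concurrency settings
    DOWNLOAD_SLOTS = {
        "quotes.toscrape.com": {
            "concurrency": 1,          # overrides CONCURRENT_REQUESTS_PER_DOMAIN
            "delay": 2,                # overrides DOWNLOAD_DELAY
            "randomize_delay": False,  # overrides RANDOMIZE_DOWNLOAD_DELAY
        },
    }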
Bug fixes
Fixed a warning with Parsel 1.8.1+. (issue 5903, issue 5918)
Fixed an error when using feed postprocessing with S3 storage. (issue 5500, issue 5581)
Added the missing scrapy.settings.BaseSettings.setdefault() method. (issue 5811, issue 5821)
Fixed an error when using cryptography 40.0.0+ and DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING is enabled. (issue 5857, issue 5858)
The checksums returned by FilesPipeline for files on Google Cloud Storage are no longer Base64-encoded. (issue 5874, issue 5891)
scrapy.utils.request.request_from_curl() now supports $-prefixed string values for the curl --data-raw argument, which are produced by browsers for data that includes certain symbols. (issue 5899, issue 5901)
The parse command now also works with async generator callbacks. (issue 5819, issue 5824)
The genspider command now properly works with HTTPS URLs. (issue 3553, issue 5808)
Improved handling of asyncio loops. (issue 5831, issue 5832)
LinkExtractor now skips certain malformed URLs instead of raising an exception. (issue 5881)
scrapy.utils.python.get_func_args() now supports more types of callables. (issue 5872, issue 5885)
Fixed an error when processing non-UTF8 values of Content-Type headers. (issue 5914, issue 5917)
Fixed an error breaking user handling of send failures in scrapy.mail.MailSender.send(). (issue 1611, issue 5880)
Documentation
Expanded contributing docs. (issue 5109, issue 5851)
Added blacken-docs to pre-commit and reformatted the docs with it. (issue 5813, issue 5816)
Fixed a JS issue. (issue 5875, issue 5877)
Fixed make htmlview. (issue 5878, issue 5879)
Fixed typos and other small errors. (issue 5827, issue 5839, issue 5883, issue 5890, issue 5895, issue 5904)
Quality assurance
Extended typing hints. (issue 5805, issue 5889, issue 5896)
Tests for most of the examples in the docs are now run as part of CI; the problems found were fixed. (issue 5816, issue 5826, issue 5919)
Removed usage of deprecated Python classes. (issue 5849)
Silenced include-ignored warnings from coverage. (issue 5820)
Fixed a random failure of the test_feedexport.test_batch_path_differ test. (issue 5855, issue 5898)
Updated docstrings to match output produced by parsel 1.8.1 so that they don’t cause test failures. (issue 5902, issue 5919)
Other CI and pre-commit improvements. (issue 5802, issue 5823, issue 5908)
Scrapy 2.8.0 (2023-02-02)
This is a maintenance release, with minor features, bug fixes, and cleanups.
Deprecation removals
The scrapy.utils.gz.read1 function, deprecated in Scrapy 2.0, has now been removed. Use the read1() method of GzipFile instead. (issue 5719)
The scrapy.utils.python.to_native_str function, deprecated in Scrapy 2.0, has now been removed. Use scrapy.utils.python.to_unicode() instead. (issue 5719)
The scrapy.utils.python.MutableChain.next method, deprecated in Scrapy 2.0, has now been removed. Use __next__() instead. (issue 5719)
The scrapy.linkextractors.FilteringLinkExtractor class, deprecated in Scrapy 2.0, has now been removed. Use LinkExtractor instead. (issue 5720)
Support for using environment variables prefixed with SCRAPY_ to override settings, deprecated in Scrapy 2.0, has now been removed. (issue 5724)
Support for the noconnect query string argument in proxy URLs, deprecated in Scrapy 2.0, has now been removed. We expect proxies that used to need it to work fine without it. (issue 5731)
The scrapy.utils.python.retry_on_eintr function, deprecated in Scrapy 2.3, has now been removed. (issue 5719)
The scrapy.utils.python.WeakKeyCache class, deprecated in Scrapy 2.4, has now been removed. (issue 5719)
Deprecations
scrapy.pipelines.images.NoimagesDrop is now deprecated. (issue 5368, issue 5489)
ImagesPipeline.convert_image must now accept a response_body parameter. (issue 3055, issue 3689, issue 4753)
New features
Applied black coding style to files generated with the genspider and startproject commands. (issue 5809, issue 5814)
FEED_EXPORT_ENCODING is now set to "utf-8" in the settings.py file that the startproject command generates. With this value, JSON exports won’t force the use of escape sequences for non-ASCII characters. (issue 5797, issue 5800)
The MemoryUsage extension now logs the peak memory usage during checks, and the binary unit MiB is now used to avoid confusion. (issue 5717, issue 5722, issue 5727)
The callback parameter of Request can now be set to scrapy.http.request.NO_CALLBACK, to distinguish it from None, as the latter indicates that the default spider callback (parse()) is to be used (see the sketch below). (issue 5798)
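A minimal sketch of NO_CALLBACK; note that the sentinel is passed as-is, not called, and the URL is illustrative:

    from scrapy import Request
    from scrapy.http.request import NO_CALLBACK

    # Mark a request whose response will not be handled by a spider callback,
    # e.g. one sent on behalf of a pipeline or middleware.
    request = Request("https://example.com/robots.txt", callback=NO_CALLBACK)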
Bug fixes
Enabled unsafe legacy SSL renegotiation to fix access to some outdated websites. (issue 5491, issue 5790)
Fixed STARTTLS-based email delivery not working with Twisted 21.2.0 and later. (issue 5386, issue 5406)
Fixed the finish_exporting() method of item exporters not being called for empty files. (issue 5537, issue 5758)
Fixed HTTP/2 responses getting only the last value for a header when multiple headers with the same name are received. (issue 5777)
Fixed an exception raised by the shell command in some cases when using asyncio. (issue 5740, issue 5742, issue 5748, issue 5759, issue 5760, issue 5771)
When using CrawlSpider, callback keyword arguments (cb_kwargs) added to a request in the process_request callback of a Rule will no longer be ignored. (issue 5699)
The images pipeline no longer re-encodes JPEG files. (issue 3055, issue 3689, issue 4753)
Fixed the handling of transparent WebP images by the images pipeline. (issue 3072, issue 5766, issue 5767)
scrapy.shell.inspect_response() no longer inhibits SIGINT (Ctrl+C). (issue 2918)
LinkExtractor with unique=False no longer filters out links that have identical URL and text. (issue 3798, issue 3799, issue 4695, issue 5458)
RobotsTxtMiddleware now ignores URL protocols that do not support robots.txt (data://, file://). (issue 5807)
Silenced the filelock debug log messages introduced in Scrapy 2.6. (issue 5753, issue 5754)
Fixed the output of scrapy -h showing an unintended **commands** line. (issue 5709, issue 5711, issue 5712)
Made the active project indication in the output of commands clearer. (issue 5715)
Documentation
Documented how to debug spiders from Visual Studio Code. (issue 5721)
Documented how DOWNLOAD_DELAY affects per-domain concurrency. (issue 5083, issue 5540)
Improved consistency. (issue 5761)
Fixed typos. (issue 5714, issue 5744, issue 5764)
Quality assurance
Applied black coding style, sorted import statements, and introduced pre-commit. (issue 4654, issue 4658, issue 5734, issue 5737, issue 5806, issue 5810)
Switched from os.path to pathlib. (issue 4916, issue 4497, issue 5682)
Addressed many issues reported by Pylint. (issue 5677)
Improved code readability. (issue 5736)
Improved package metadata. (issue 5768)
Removed direct invocations of setup.py. (issue 5774, issue 5776)
Removed unnecessary OrderedDict usages. (issue 5795)
Removed unnecessary __str__ definitions. (issue 5150)
Removed obsolete code and comments. (issue 5725, issue 5729, issue 5730, issue 5732)
Fixed test and CI issues. (issue 5749, issue 5750, issue 5756, issue 5762, issue 5765, issue 5780, issue 5781, issue 5782, issue 5783, issue 5785, issue 5786)
Scrapy 2.7.1 (2022-11-02)
New features
Relaxed the restriction introduced in 2.6.2 so that the Proxy-Authorization header can again be set explicitly, as long as the proxy URL in the proxy metadata has no other credentials, and for as long as that proxy URL remains the same; this restores compatibility with scrapy-zyte-smartproxy 2.1.0 and older (issue 5626).
Bug fixes
Using -O/--overwrite-output and -t/--output-format options together now produces an error instead of ignoring the former option (issue 5516, issue 5605).
Replaced deprecated asyncio APIs that implicitly use the current event loop with code that explicitly requests a loop from the event loop policy (issue 5685, issue 5689).
Fixed uses of deprecated Scrapy APIs in Scrapy itself (issue 5588, issue 5589).
Fixed uses of a deprecated Pillow API (issue 5684, issue 5692).
Improved code that checks if generators return values, so that it no longer fails on decorated methods and partial methods (issue 5323, issue 5592, issue 5599, issue 5691).
Documentation
Upgraded the Code of Conduct to Contributor Covenant v2.1 (issue 5698).
Fixed typos (issue 5681, issue 5694).
Quality assurance
Re-enabled some erroneously disabled flake8 checks (issue 5688).
Ignored harmless deprecation warnings from typing in tests (issue 5686, issue 5697).
Modernized our CI configuration (issue 5695, issue 5696).
Scrapy 2.7.0 (2022-10-17)
Highlights:
Added Python 3.11 support, dropped Python 3.6 support
Improved support for asynchronous callbacks
Asyncio support is enabled by default on new projects
Output names of item fields can now be arbitrary strings
Centralized request fingerprinting configuration is now possible
Modified requirements
Python 3.7 or greater is now required; support for Python 3.6 has been dropped. Support for the upcoming Python 3.11 has been added.
The minimum required version of some dependencies has changed as well:
lxml: 3.5.0 → 4.3.0
Pillow (images pipeline): 4.0.0 → 7.1.0
zope.interface: 5.0.0 → 5.1.0
(issue 5512, issue 5514, issue 5524, issue 5563, issue 5664, issue 5670, issue 5678)
Deprecations
ImagesPipeline.thumb_path must now accept an item parameter (issue 5504, issue 5508).
The scrapy.downloadermiddlewares.decompression module is now deprecated (issue 5546, issue 5547).
New features
The process_spider_output() method of spider middlewares can now be defined as an asynchronous generator (issue 4978).
The output of Request callbacks defined as coroutines is now processed asynchronously (issue 4978).
CrawlSpider now supports asynchronous callbacks (issue 5657).
New projects created with the startproject command have asyncio support enabled by default (issue 5590, issue 5679).
The FEED_EXPORT_FIELDS setting can now be defined as a dictionary to customize the output name of item fields, lifting the restriction that required output names to be valid Python identifiers, e.g. preventing them from having whitespace (see the sketch after this list) (issue 1008, issue 3266, issue 3696).
You can now customize request fingerprinting through the new REQUEST_FINGERPRINTER_CLASS setting, instead of having to change it on every Scrapy component that relies on request fingerprinting (issue 900, issue 3420, issue 4113, issue 4762, issue 4524).
jsonl is now supported and encouraged as a file extension for JSON Lines files (issue 4848).
ImagesPipeline.thumb_path now receives the source item (issue 5504, issue 5508).
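A minimal settings.py sketch of the dictionary form; the field and output names are illustrative:

    # settings.py -- keys are item field names, values are output names
    FEED_EXPORT_FIELDS = {
        "name": "Product name",
        "price": "Price (USD)",  # output names no longer need to be identifiers
    }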
Bug fixes
When using Google Cloud Storage with a media pipeline, FILES_EXPIRES now also works when FILES_STORE does not point at the root of your Google Cloud Storage bucket (issue 5317, issue 5318).
The parse command now supports asynchronous callbacks (issue 5424, issue 5577).
When using the parse command with a URL for which there is no available spider, an exception is no longer raised (issue 3264, issue 3265, issue 5375, issue 5376, issue 5497).
TextResponse now gives higher priority to the byte order mark when determining the text encoding of the response body, following the HTML living standard (issue 5601, issue 5611).
MIME sniffing takes the response body into account in FTP and HTTP/1.0 requests, as well as in cached requests (issue 4873).
MIME sniffing now detects valid HTML 5 documents even if the html tag is missing (issue 4873).
An exception is now raised if ASYNCIO_EVENT_LOOP has a value that does not match the asyncio event loop actually installed (issue 5529).
Fixed Headers.getlist returning only the last header (issue 5515, issue 5526).
Fixed LinkExtractor not ignoring the tar.gz file extension by default (issue 1837, issue 2067, issue 4066).
Documentation
Clarified the return type of Spider.parse (issue 5602, issue 5608).
To enable HttpCompressionMiddleware to do brotli compression, installing brotli is now recommended instead of installing brotlipy, as the former provides a more recent version of brotli.
Signal documentation now mentions coroutine support and uses it in code examples (issue 4852, issue 5358).
Avoiding getting banned now recommends Common Crawl instead of Google cache (issue 3582, issue 5432).
The new Components topic covers enforcing requirements on Scrapy components, like downloader middlewares, extensions, item pipelines, spider middlewares, and more; Enforcing asyncio as a requirement has also been added (issue 4978).
Settings now indicates that setting values must be picklable (issue 5607, issue 5629).
Removed outdated documentation (issue 5446, issue 5373, issue 5369, issue 5370, issue 5554).
Fixed typos (issue 5442, issue 5455, issue 5457, issue 5461, issue 5538, issue 5553, issue 5558, issue 5624, issue 5631).
Fixed other issues (issue 5283, issue 5284, issue 5559, issue 5567, issue 5648, issue 5659, issue 5665).
Quality assurance
Added a continuous integration job to run twine check (issue 5655, issue 5656).
Addressed test issues and warnings (issue 5560, issue 5561, issue 5612, issue 5617, issue 5639, issue 5645, issue 5662, issue 5671, issue 5675).
Cleaned up code (issue 4991, issue 4995, issue 5451, issue 5487, issue 5542, issue 5667, issue 5668, issue 5672).
Applied minor code improvements (issue 5661).
Scrapy 2.6.3 (2022-09-27)
Added support for pyOpenSSL 22.1.0, removing support for SSLv3 (issue 5634, issue 5635, issue 5636).
Upgraded the minimum versions of the following dependencies:
cryptography: 2.0 → 3.3
pyOpenSSL: 16.2.0 → 21.0.0
service_identity: 16.0.0 → 18.1.0
Twisted: 17.9.0 → 18.9.0
zope.interface: 4.1.3 → 5.0.0
Fixed test and documentation issues (issue 5612, issue 5617, issue 5631).
Scrapy 2.6.2 (2022-07-25)
Security bug fix:
When HttpProxyMiddleware processes a request with proxy metadata, and that proxy metadata includes proxy credentials, HttpProxyMiddleware sets the Proxy-Authorization header, but only if that header is not already set.
There are third-party proxy-rotation downloader middlewares that set different proxy metadata every time they process a request.
Because of request retries and redirects, the same request can be processed by downloader middlewares more than once, including both HttpProxyMiddleware and any third-party proxy-rotation downloader middleware.
These third-party proxy-rotation downloader middlewares could change the proxy metadata of a request to a new value, but fail to remove the Proxy-Authorization header from the previous value of the proxy metadata, causing the credentials of one proxy to be sent to a different proxy.
To prevent the unintended leaking of proxy credentials, the behavior of HttpProxyMiddleware is now as follows when processing a request:
If the request being processed defines proxy metadata that includes credentials, the Proxy-Authorization header is always updated to feature those credentials.
If the request being processed defines proxy metadata without credentials, the Proxy-Authorization header is removed unless it was originally defined for the same proxy URL. To remove proxy credentials while keeping the same proxy URL, remove the Proxy-Authorization header (see the sketch after this list).
If the request has no proxy metadata, or that metadata is a falsy value (e.g. None), the Proxy-Authorization header is removed.
It is no longer possible to set a proxy URL through the proxy metadata but set the credentials through the Proxy-Authorization header. Set proxy credentials through the proxy metadata instead.
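A minimal sketch of that header-removal step, e.g. from a custom downloader middleware; the proxy URL is hypothetical:

    # Keep the same (hypothetical) proxy URL but drop its credentials.
    request.meta["proxy"] = "http://proxy.example:8080"
    request.headers.pop(b"Proxy-Authorization", None)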
Also fixes the following regressions introduced in 2.6.0:
CrawlerProcess once again supports crawling multiple spiders (issue 5435, issue 5436)
Installing a Twisted reactor before Scrapy does (e.g. importing twisted.internet.reactor somewhere at the module level) no longer prevents Scrapy from starting, as long as a different reactor is not specified in TWISTED_REACTOR (issue 5525, issue 5528)
Fixed an exception that was being logged after the spider finished under certain conditions (issue 5437, issue 5440)
The --output/-o command-line parameter once again supports a value starting with a hyphen (issue 5444, issue 5445)
The scrapy parse -h command no longer throws an error (issue 5481, issue 5482)
Scrapy 2.6.1 (2022-03-01)
Fixes a regression introduced in 2.6.0 that would unset the request method when following redirects.
Scrapy 2.6.0 (2022-03-01)
Highlights:
Python 3.10 support
asyncio support is no longer considered experimental, and works out-of-the-box on Windows regardless of your Python version
Feed exports now support pathlib.Path output paths and per-feed item filtering and post-processing
Security bug fixes
When a Request object with cookies defined gets a redirect response causing a new Request object to be scheduled, the cookies defined in the original Request object are no longer copied into the new Request object.
If you manually set the Cookie header on a Request object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original Request object, your Cookie header is now dropped from the new Request object.
The old behavior could be exploited by an attacker to gain access to your cookies. Please see the cjvr-mfj7-j4j8 security advisory for more information.
Note
It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. example.com and any subdomain) by defining the shared domain suffix (e.g. example.com) as the cookie domain when defining your cookies (see the sketch below). See the documentation of the Request class for more information.
When the domain of a cookie, either received in the Set-Cookie header of a response or defined in a Request object, is set to a public suffix, the cookie is now ignored unless the cookie domain is the same as the request domain.
The old behavior could be exploited by an attacker to inject cookies from a controlled domain into your cookiejar that could be sent to other domains not controlled by the attacker. Please see the mfjm-vh54-3f96 security advisory for more information.
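A minimal sketch of a cookie shared across a domain suffix; the names and values are illustrative:

    from scrapy import Request

    # Declaring the shared suffix as the cookie domain lets the cookie be
    # sent to example.com and its subdomains (illustrative values).
    request = Request(
        "https://www.example.com",
        cookies=[{"name": "session", "value": "abc123", "domain": "example.com"}],
    )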
Modified requirements
The h2 dependency is now optional, only needed to enable HTTP/2 support. (issue 5113)
Backward-incompatible changes
The formdata parameter of FormRequest, if specified for a non-POST request, now overrides the URL query string, instead of being appended to it. (issue 2919, issue 3579)
When a function is assigned to the FEED_URI_PARAMS setting, now the return value of that function, and not the params input parameter, will determine the feed URI parameters, unless that return value is None. (issue 4962, issue 4966)
In scrapy.core.engine.ExecutionEngine, the methods crawl(), download(), schedule(), and spider_is_idle() now raise RuntimeError if called before open_spider(). (issue 5090) These methods used to assume that ExecutionEngine.slot had been defined by a prior call to open_spider(), so they were raising AttributeError instead.
If the API of the configured scheduler does not meet expectations, TypeError is now raised at startup time. Before, other exceptions would be raised at run time. (issue 3559)
The _encoding field of serialized Request objects is now named encoding, in line with all other fields. (issue 5130)
Deprecation removals
scrapy.http.TextResponse.body_as_unicode, deprecated in Scrapy 2.2, has now been removed. (issue 5393)
scrapy.item.BaseItem, deprecated in Scrapy 2.2, has now been removed. (issue 5398)
scrapy.item.DictItem, deprecated in Scrapy 1.8, has now been removed. (issue 5398)
scrapy.Spider.make_requests_from_url, deprecated in Scrapy 1.4, has now been removed. (issue 4178, issue 4356)
Deprecations
When a function is assigned to the FEED_URI_PARAMS setting, returning None or modifying the params input parameter is now deprecated. Return a new dictionary instead. (issue 4962, issue 4966)
scrapy.utils.reqser is deprecated. (issue 5130) Instead of request_to_dict(), use the new Request.to_dict method. Instead of request_from_dict(), use the new scrapy.utils.request.request_from_dict() function (see the sketch after this list).
In scrapy.squeues, the following queue classes are deprecated: PickleFifoDiskQueueNonRequest, PickleLifoDiskQueueNonRequest, MarshalFifoDiskQueueNonRequest, and MarshalLifoDiskQueueNonRequest. You should instead use: PickleFifoDiskQueue, PickleLifoDiskQueue, MarshalFifoDiskQueue, and MarshalLifoDiskQueue. (issue 5117)
Many aspects of scrapy.core.engine.ExecutionEngine that come from a time when this class could handle multiple Spider objects at a time have been deprecated. (issue 5090)
The has_capacity() method is deprecated.
The schedule() method is deprecated; use crawl() or download() instead.
The open_spiders attribute is deprecated; use spider instead.
The spider parameter is deprecated for the following methods: spider_is_idle(), crawl(), download(). Instead, call open_spider() first to set the Spider object.
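A minimal sketch of the replacement APIs, assuming request is a scrapy.Request whose callback is a method of spider:

    from scrapy.utils.request import request_from_dict

    # Serialize to a dict and rebuild (replaces scrapy.utils.reqser); the
    # spider argument lets spider-method callbacks be resolved by name.
    d = request.to_dict(spider=spider)
    restored = request_from_dict(d, spider=spider)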
New features
You can now use item filtering to control which items are exported to each output feed. (issue 4575, issue 5178, issue 5161, issue 5203)
You can now apply post-processing to feeds, and built-in post-processing plugins are provided for output file compression. (issue 2174, issue 5168, issue 5190)
The FEEDS setting now supports pathlib.Path objects as keys. (issue 5383, issue 5384)
Enabling asyncio while using Windows and Python 3.8 or later will automatically switch the asyncio event loop to one that allows Scrapy to work. See Windows-specific notes. (issue 4976, issue 5315)
The genspider command now supports a start URL instead of a domain name. (issue 4439)
scrapy.utils.defer gained 2 new functions, deferred_to_future() and maybe_deferred_to_future(), to help await on Deferreds when using the asyncio reactor (see the sketch after this list). (issue 5288)
Amazon S3 feed export storage gained support for temporary security credentials (AWS_SESSION_TOKEN) and endpoint customization (AWS_ENDPOINT_URL). (issue 4998, issue 5210)
New LOG_FILE_APPEND setting to allow truncating the log file. (issue 5279)
Request.cookies values that are bool, float or int are cast to str. (issue 5252, issue 5253)
You may now raise CloseSpider from a handler of the spider_idle signal to customize the reason why the spider is stopping. (issue 5191)
When using HttpProxyMiddleware, the proxy URL for non-HTTPS HTTP/1.1 requests no longer needs to include a URL scheme. (issue 4505, issue 4649)
All built-in queues now expose a peek method that returns the next queue object (like pop) but does not remove the returned object from the queue. (issue 5112) If the underlying queue does not support peeking (e.g. because you are not using queuelib 1.6.1 or later), the peek method raises NotImplementedError.
Request and Response now have an attributes attribute that makes subclassing easier. For Request, it also allows subclasses to work with scrapy.utils.request.request_from_dict(). (issue 1877, issue 5130, issue 5218)
The open() and close() methods of the scheduler are now optional. (issue 3559)
HTTP/1.1 TunnelError exceptions now only truncate response bodies longer than 1000 characters, instead of those longer than 32 characters, making it easier to debug such errors. (issue 4881, issue 5007)
ItemLoader now supports non-text responses. (issue 5145, issue 5269)
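A minimal sketch of awaiting a Deferred from coroutine code, assuming the asyncio reactor is installed and d is a twisted.internet.defer.Deferred:

    from scrapy.utils.defer import maybe_deferred_to_future

    async def wait_for(d):
        # Wraps the Deferred so it can be awaited; also accepts awaitables.
        return await maybe_deferred_to_future(d)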
Bug fixes
The TWISTED_REACTOR and ASYNCIO_EVENT_LOOP settings are no longer ignored if defined in custom_settings. (issue 4485, issue 5352)
Removed a module-level Twisted reactor import that could prevent using the asyncio reactor. (issue 5357)
The startproject command works with existing folders again. (issue 4665, issue 4676)
The FEED_URI_PARAMS setting now behaves as documented. (issue 4962, issue 4966)
Request.cb_kwargs once again allows the callback keyword. (issue 5237, issue 5251, issue 5264)
Made scrapy.utils.response.open_in_browser() support more complex HTML. (issue 5319, issue 5320)
Fixed CSVFeedSpider.quotechar being interpreted as the CSV file encoding. (issue 5391, issue 5394)
Added missing setuptools to the list of dependencies. (issue 5122)
LinkExtractor now also works as expected with links that have comma-separated rel attribute values including nofollow. (issue 5225)
Fixed a TypeError that could be raised during feed export parameter parsing. (issue 5359)
Documentation
asyncio support is no longer considered experimental. (issue 5332)
Included Windows-specific help for asyncio usage. (issue 4976, issue 5315)
Rewrote Using a headless browser with up-to-date best practices. (issue 4484, issue 4613)
Documented local file naming in media pipelines. (issue 5069, issue 5152)
Frequently Asked Questions now covers spider file name collision issues. (issue 2680, issue 3669)
Provided better context and instructions to disable the URLLENGTH_LIMIT setting. (issue 5135, issue 5250)
Documented that the Reppy parser does not support Python 3.9+. (issue 5226, issue 5231)
Documented the scheduler component. (issue 3537, issue 3559)
Documented the method used by media pipelines to determine if a file has expired. (issue 5120, issue 5254)
Running multiple spiders in the same process now features scrapy.utils.project.get_project_settings() usage. (issue 5070)
Running multiple spiders in the same process now covers what happens when you define different per-spider values for some settings that cannot differ at run time. (issue 4485, issue 5352)
Extended the documentation of the StatsMailer extension. (issue 5199, issue 5217)
Added JOBDIR to Settings. (issue 5173, issue 5224)
Documented Spider.attribute. (issue 5174, issue 5244)
Documented TextResponse.urljoin. (issue 1582)
Added the body_length parameter to the documented signature of the headers_received signal. (issue 5270)
Clarified SelectorList.get usage in the tutorial. (issue 5256)
The documentation now features the shortest import path of classes with multiple import paths. (issue 2733, issue 5099)
quotes.toscrape.com references now use HTTPS instead of HTTP. (issue 5395, issue 5396)
Added a link to our Discord server to Getting help. (issue 5421, issue 5422)
The pronunciation of the project name is now officially /ˈskreɪpaɪ/. (issue 5280, issue 5281)
Added the Scrapy logo to the README. (issue 5255, issue 5258)
Fixed issues and implemented minor improvements. (issue 3155, issue 4335, issue 5074, issue 5098, issue 5134, issue 5180, issue 5194, issue 5239, issue 5266, issue 5271, issue 5273, issue 5274, issue 5276, issue 5347, issue 5356, issue 5414, issue 5415, issue 5416, issue 5419, issue 5420)
Quality Assurance
Added support for Python 3.10. (issue 5212, issue 5221, issue 5265)
Significantly reduced memory usage by scrapy.utils.response.response_httprepr(), used by the DownloaderStats downloader middleware, which is enabled by default. (issue 4964, issue 4972)
Removed uses of the deprecated optparse module. (issue 5366, issue 5374)
Extended typing hints. (issue 5077, issue 5090, issue 5100, issue 5108, issue 5171, issue 5215, issue 5334)
Improved tests, fixed CI issues, removed unused code. (issue 5094, issue 5157, issue 5162, issue 5198, issue 5207, issue 5208, issue 5229, issue 5298, issue 5299, issue 5310, issue 5316, issue 5333, issue 5388, issue 5389, issue 5400, issue 5401, issue 5404, issue 5405, issue 5407, issue 5410, issue 5412, issue 5425, issue 5427)
Implemented improvements for contributors. (issue 5080, issue 5082, issue 5177, issue 5200)
Implemented cleanups. (issue 5095, issue 5106, issue 5209, issue 5228, issue 5235, issue 5245, issue 5246, issue 5292, issue 5314, issue 5322)
Scrapy 2.5.1 (2021-10-05)
Security bug fix:
If you use HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.
To prevent unintended exposure of authentication credentials to unintended domains, you must now additionally set a new spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent (see the sketch below).
If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.
If you need to send the same HTTP authentication credentials to multiple domains, you can use w3lib.http.basic_auth_header() instead to set the value of the Authorization header of your requests.
If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.
Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a later version for it to continue to work.
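A minimal spider sketch; the spider name, credentials and domain are illustrative:

    import scrapy

    class IntranetSpider(scrapy.Spider):
        name = "intranet"  # illustrative
        http_user = "user"
        http_pass = "secret"
        http_auth_domain = "intranet.example.com"  # credentials sent only here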
Scrapy 2.5.0 (2021-04-06)
Highlights:
Official Python 3.9 support
Experimental HTTP/2 support
New get_retry_request() function to retry requests from spider callbacks
New headers_received signal that allows stopping downloads early
New Response.protocol attribute
Deprecation removals
Removed all code that was deprecated in 1.7.0 and had not already been removed in 2.4.0. (issue 4901)
Removed support for the SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE environment variable, deprecated in 1.8.0. (issue 4912)
Deprecations
The scrapy.utils.py36 module is now deprecated in favor of scrapy.utils.asyncgen. (issue 4900)
New features
Experimental HTTP/2 support through a new download handler that can be assigned to the https protocol in the DOWNLOAD_HANDLERS setting. (issue 1854, issue 4769, issue 5058, issue 5059, issue 5066)
The new scrapy.downloadermiddlewares.retry.get_retry_request() function may be used from spider callbacks or middlewares to handle the retrying of a request beyond the scenarios that RetryMiddleware supports (see the sketch after this list). (issue 3590, issue 3685, issue 4902)
The new headers_received signal gives early access to response headers and allows stopping downloads. (issue 1772, issue 4897)
The new Response.protocol attribute gives access to the string that identifies the protocol used to download a response. (issue 4878)
Stats now include the following entries that indicate the number of successes and failures in storing feeds:

    feedexport/success_count/<storage type>
    feedexport/failed_count/<storage type>

Where <storage type> is the feed storage backend class name, such as FileFeedStorage or FTPFeedStorage.
The UrlLengthMiddleware spider middleware now logs ignored URLs with INFO logging level instead of DEBUG, and it now includes the following entry in stats to keep track of the number of ignored URLs:

    urllength/request_ignored_count

The HttpCompressionMiddleware downloader middleware now logs the number of decompressed responses and the total count of resulting bytes:

    httpcompression/response_bytes
    httpcompression/response_count
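A minimal callback sketch of get_retry_request(); the spider name and retry condition are illustrative:

    import scrapy
    from scrapy.downloadermiddlewares.retry import get_retry_request

    class RetryingSpider(scrapy.Spider):
        name = "retrying"  # illustrative

        def parse(self, response):
            if response.status == 429:  # illustrative retry condition
                retry_request = get_retry_request(
                    response.request, spider=self, reason="throttled"
                )
                if retry_request:  # None once the retry limit is exceeded
                    yield retry_request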
Bug fixes
Fixed installation on PyPy installing PyDispatcher in addition to PyPyDispatcher, which could prevent Scrapy from working depending on which package got imported. (issue 4710, issue 4814)
When inspecting a callback to check if it is a generator that also returns a value, an exception is no longer raised if the callback has a docstring with lower indentation than the following code. (issue 4477, issue 4935)
The Content-Length header is no longer omitted from responses when using the default, HTTP/1.1 download handler (see DOWNLOAD_HANDLERS). (issue 5009, issue 5034, issue 5045, issue 5057, issue 5062)
Setting the handle_httpstatus_all request meta key to False now has the same effect as not setting it at all, instead of having the same effect as setting it to True. (issue 3851, issue 4694)
Documentation
Added instructions to install Scrapy on Windows using pip. (issue 4715, issue 4736)
Logging documentation now includes additional ways to filter logs. (issue 4216, issue 4257, issue 4965)
Covered how to deal with long lists of allowed domains in the FAQ. (issue 2263, issue 3667)
Covered scrapy-bench in Benchmarking. (issue 4996, issue 5016)
Clarified that one extension instance is created per crawler. (issue 5014)
Fixed some errors in examples. (issue 4829, issue 4830, issue 4907, issue 4909, issue 5008)
Fixed some external links, typos, and so on. (issue 4892, issue 4899, issue 4936, issue 4942, issue 5005, issue 5063)
The list of Request.meta keys is now sorted alphabetically. (issue 5061, issue 5065)
Updated references to Scrapinghub, which is now called Zyte. (issue 4973, issue 5072)
Added a mention to contributors in the README. (issue 4956)
Reduced the top margin of lists. (issue 4974)
Quality Assurance
Made Python 3.9 support official (issue 4757, issue 4759)
Extended typing hints (issue 4895)
Fixed deprecated uses of the Twisted API. (issue 4940, issue 4950, issue 5073)
Made our tests run with the new pip resolver. (issue 4710, issue 4814)
Added tests to ensure that coroutine support is tested. (issue 4987)
Migrated from Travis CI to GitHub Actions. (issue 4924)
Fixed CI issues. (issue 4986, issue 5020, issue 5022, issue 5027, issue 5052, issue 5053)
Implemented code refactorings, style fixes and cleanups. (issue 4911, issue 4982, issue 5001, issue 5002, issue 5076)
Scrapy 2.4.1 (2020-11-17)
Fixed feed exports overwrite support (issue 4845, issue 4857, issue 4859)
Fixed the AsyncIO event loop handling, which could make code hang (issue 4855, issue 4872)
Fixed the IPv6-capable DNS resolver CachingHostnameResolver for download handlers that call reactor.resolve (issue 4802, issue 4803)
Fixed the output of the genspider command showing placeholders instead of the import path of the generated spider module (issue 4874)
Migrated Windows CI from Azure Pipelines to GitHub Actions (issue 4869, issue 4876)
Scrapy 2.4.0 (2020-10-11)
Highlights:
Python 3.5 support has been dropped.
The file_path method of media pipelines can now access the source item. This allows you to set a download file path based on item data.
The new item_export_kwargs key of the FEEDS setting allows defining keyword parameters to pass to item exporter classes.
You can now choose whether feed exports overwrite or append to the output file. For example, when using the crawl or runspider commands, you can use the -O option instead of -o to overwrite the output file.
Zstd-compressed responses are now supported if zstandard is installed.
In settings, where the import path of a class is required, it is now possible to pass a class object instead.
Modified requirements
Python 3.6 or greater is now required; support for Python 3.5 has been dropped
As a result:
When using PyPy, PyPy 7.2.0 or greater is now required
For Amazon S3 storage support in feed exports or media pipelines, botocore 1.4.87 or greater is now required
To use the images pipeline, Pillow 4.0.0 or greater is now required
(issue 4718, issue 4732, issue 4733, issue 4742, issue 4743, issue 4764)
Backward-incompatible changes
CookiesMiddleware once again discards cookies defined in Request.headers. We decided to revert this bug fix, introduced in Scrapy 2.2.0, because it was reported that the current implementation could break existing code.
If you need to set cookies for a request, use the Request.cookies parameter. A future version of Scrapy will include a new, better implementation of the reverted bug fix.
Deprecation removals
scrapy.extensions.feedexport.S3FeedStorage no longer reads the values of access_key and secret_key from the running project settings when they are not passed to its __init__ method; you must either pass those parameters to its __init__ method or use S3FeedStorage.from_crawler (issue 4356, issue 4411, issue 4688)
Rule.process_request no longer admits callables which expect a single request parameter, rather than both request and response (issue 4818)
Deprecations
In custom media pipelines, signatures that do not accept a keyword-only item parameter in any of the methods that now support this parameter are now deprecated (issue 4628, issue 4686)
In custom feed storage backend classes, __init__ method signatures that do not accept a keyword-only feed_options parameter are now deprecated (issue 547, issue 716, issue 4512)
The scrapy.utils.python.WeakKeyCache class is now deprecated (issue 4684, issue 4701)
The scrapy.utils.boto.is_botocore() function is now deprecated; use scrapy.utils.boto.is_botocore_available() instead (issue 4734, issue 4776)
New features
The following methods of media pipelines now accept an item keyword-only parameter containing the source item:
In scrapy.pipelines.files.FilesPipeline: file_downloaded(), media_downloaded(), media_to_download()
In scrapy.pipelines.images.ImagesPipeline: file_downloaded(), get_images(), image_downloaded(), media_downloaded(), media_to_download()
The new item_export_kwargs key of the FEEDS setting allows defining keyword parameters to pass to item exporter classes (issue 4606, issue 4768)
Feed exports gained overwrite support (see the sketch after this list):
When using the crawl or runspider commands, you can use the -O option instead of -o to overwrite the output file
You can use the overwrite key in the FEEDS setting to configure whether to overwrite the output file (True) or append to its content (False)
The __init__ and from_crawler methods of feed storage backend classes now receive a new keyword-only parameter, feed_options, which is a dictionary of feed options
Zstd-compressed responses are now supported if zstandard is installed (issue 4831)
In settings, where the import path of a class is required, it is now possible to pass a class object instead (issue 3870, issue 3873). This also includes settings where only part of the value is made of an import path, such as DOWNLOADER_MIDDLEWARES or DOWNLOAD_HANDLERS.
Downloader middlewares can now override response.request. If a downloader middleware returns a Response object from process_response() or process_exception() with a custom Request object assigned to response.request:
The response is handled by the callback of that custom Request object, instead of being handled by the callback of the original Request object
That custom Request object is now sent as the request argument to the response_received signal, instead of the original Request object
When using the FTP feed storage backend:
It is now possible to set the new overwrite feed option to False to append to an existing file instead of overwriting it
The FTP password can now be omitted if it is not necessary
The __init__ method of CsvItemExporter now supports an errors parameter to indicate how to handle encoding errors (issue 4755)
When using asyncio, it is now possible to set a custom asyncio loop (issue 4306, issue 4414)
Serialized requests (see Jobs: pausing and resuming crawls) now support callbacks that are spider methods that delegate to another callable (issue 4756)
When a response is larger than DOWNLOAD_MAXSIZE, the logged message is now a warning, instead of an error (issue 3874, issue 3886, issue 4752)
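A minimal settings.py sketch combining the new feed options; the file name and exporter arguments are illustrative:

    # settings.py -- illustrative feed configuration
    FEEDS = {
        "items.json": {
            "format": "json",
            "overwrite": True,  # True overwrites, False appends
            "item_export_kwargs": {"indent": 4},  # passed to the item exporter
        },
    }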
Bug fixes
The genspider command no longer overwrites existing files unless the --force option is used (issue 4561, issue 4616, issue 4623)
Cookies with an empty value are no longer considered invalid cookies (issue 4772)
The runspider command now supports files with the .pyw file extension (issue 4643, issue 4646)
The HttpProxyMiddleware middleware now simply ignores unsupported proxy values (issue 3331, issue 4778)
Checks for generator callbacks with a return statement no longer warn about return statements in nested functions (issue 4720, issue 4721)
The system file mode creation mask no longer affects the permissions of files generated using the startproject command (issue 4722)
scrapy.utils.iterators.xmliter() now supports namespaced node names (issue 861, issue 4746)
Request objects can now have about: URLs, which can work when using a headless browser (issue 4835)
Documentation
The FEED_URI_PARAMS setting is now documented (issue 4671, issue 4724)
Improved the documentation of link extractors with a usage example from a spider callback and reference documentation for the Link class (issue 4751, issue 4775)
Clarified the impact of CONCURRENT_REQUESTS when using the CloseSpider extension (issue 4836)
Removed references to Python 2’s unicode type (issue 4547, issue 4703)
We now have an official deprecation policy (issue 4705)
Our documentation policies now cover usage of Sphinx’s versionadded and versionchanged directives, and we have removed usages referencing Scrapy 1.4.0 and earlier versions (issue 3971, issue 4310)
Other documentation cleanups (issue 4090, issue 4782, issue 4800, issue 4801, issue 4809, issue 4816, issue 4825)
Quality assurance
Extended typing hints (issue 4243, issue 4691)
Added tests for the check command (issue 4663)
Fixed test failures on Debian (issue 4726, issue 4727, issue 4735)
Improved Windows test coverage (issue 4723)
Switched to formatted string literals where possible (issue 4307, issue 4324, issue 4672)
Modernized super() usage (issue 4707)
Other code and test cleanups (issue 1790, issue 3288, issue 4165, issue 4564, issue 4651, issue 4714, issue 4738, issue 4745, issue 4747, issue 4761, issue 4765, issue 4804, issue 4817, issue 4820, issue 4822, issue 4839)
Scrapy 2.3.0 (2020-08-04)
Highlights:
Feed exports now support Google Cloud Storage as a storage backend
The new FEED_EXPORT_BATCH_ITEM_COUNT setting allows delivering output items in batches of up to the specified number of items. It also serves as a workaround for delayed file delivery, which causes Scrapy to only start item delivery after the crawl has finished when using certain storage backends (S3, FTP, and now GCS).
The base implementation of item loaders has been moved into a separate library, itemloaders, allowing usage from outside Scrapy and a separate release schedule
Deprecation removals
Removed the following classes and their parent modules from scrapy.linkextractors: htmlparser.HtmlParserLinkExtractor, regex.RegexLinkExtractor, sgml.BaseSgmlLinkExtractor, sgml.SgmlLinkExtractor
Use LinkExtractor instead (issue 4356, issue 4679)
Deprecations
The scrapy.utils.python.retry_on_eintr function is now deprecated (issue 4683)
New features
Feed exports support Google Cloud Storage (issue 685, issue 3608)
New FEED_EXPORT_BATCH_ITEM_COUNT setting for batch deliveries (issue 4250, issue 4434)
The parse command now allows specifying an output file (issue 4317, issue 4377)
Request.from_curl and curl_to_request_kwargs() now also support --data-raw (issue 4612)
A parse callback may now be used in built-in spider subclasses, such as CrawlSpider (issue 712, issue 732, issue 781, issue 4254)
Bug fixes
Fixed the CSV exporting of dataclass items and attr.s items (issue 4667, issue 4668)
Request.from_curl and curl_to_request_kwargs() now set the request method to POST when a request body is specified and no request method is specified (issue 4612)
The processing of ANSI escape sequences is enabled in Windows 10.0.14393 and later, where it is required for colored output (issue 4393, issue 4403)
Documentation
Updated the OpenSSL cipher list format link in the documentation about the DOWNLOADER_CLIENT_TLS_CIPHERS setting (issue 4653)
Simplified the code example in Working with dataclass items (issue 4652)
Quality assurance
The base implementation of item loaders has been moved into itemloaders (issue 4005, issue 4516)
Fixed a silenced error in some scheduler tests (issue 4644, issue 4645)
Renewed the localhost certificate used for SSL tests (issue 4650)
Removed cookie-handling code specific to Python 2 (issue 4682)
Stopped using Python 2 unicode literal syntax (issue 4704)
Stopped using a backslash for line continuation (issue 4673)
Removed unneeded entries from the MyPy exception list (issue 4690)
Automated tests now pass on Windows as part of our continuous integration system (issue 4458)
Automated tests now pass on the latest PyPy version for supported Python versions in our continuous integration system (issue 4504)
Scrapy 2.2.1 (2020-07-17)
The startproject command no longer makes unintended changes to the permissions of files in the destination folder, such as removing execution permissions (issue 4662, issue 4666)
Scrapy 2.2.0 (2020-06-24)
Highlights:
Python 3.5.2+ is required now
dataclass objects and attrs objects are now valid item types
New TextResponse.json method
New bytes_received signal that allows canceling response download
CookiesMiddleware fixes
Backward-incompatible changes
Support for Python 3.5.0 and 3.5.1 has been dropped; Scrapy now refuses to run with a Python version lower than 3.5.2, which introduced typing.Type (issue 4615)
Deprecations
TextResponse.body_as_unicode is now deprecated; use TextResponse.text instead (issue 4546, issue 4555, issue 4579)
scrapy.item.BaseItem is now deprecated; use scrapy.item.Item instead (issue 4534)
New features
dataclass objects and attrs objects are now valid item types, and a new itemadapter library makes it easy to write code that supports any item type (issue 2749, issue 2807, issue 3761, issue 3881, issue 4642)
A new TextResponse.json method allows deserializing JSON responses (see the sketch after this list) (issue 2444, issue 4460, issue 4574)
A new bytes_received signal allows monitoring response download progress and stopping downloads (issue 4205, issue 4559)
The dictionaries in the result list of a media pipeline now include a new key, status, which indicates if the file was downloaded or, if the file was not downloaded, why it was not downloaded; see FilesPipeline.get_media_requests for more information (issue 2893, issue 4486)
When using Google Cloud Storage for a media pipeline, a warning is now logged if the configured credentials do not grant the required permissions (issue 4346, issue 4508)
Link extractors are now serializable, as long as you do not use lambdas for parameters; for example, you can now pass link extractors in Request.cb_kwargs or Request.meta when persisting scheduled requests (issue 4554)
Upgraded the pickle protocol that Scrapy uses from protocol 2 to protocol 4, improving serialization capabilities and performance (issue 4135, issue 4541)
scrapy.utils.misc.create_instance() now raises a TypeError exception if the resulting instance is None (issue 4528, issue 4532)
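A minimal callback sketch, assuming the response body is JSON; the field name is illustrative:

    # Inside a spider:
    def parse(self, response):
        data = response.json()  # deserializes the JSON response body
        yield {"name": data.get("name")}  # illustrative field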
Bug fixes
CookiesMiddleware no longer discards cookies defined in Request.headers (issue 1992, issue 2400)
CookiesMiddleware no longer re-encodes cookies defined as bytes in the cookies parameter of the __init__ method of Request (issue 2400, issue 3575)
When FEEDS defines multiple URIs, FEED_STORE_EMPTY is False and the crawl yields no items, Scrapy no longer stops feed exports after the first URI (issue 4621, issue 4626)
Spider callbacks defined using coroutine syntax no longer need to return an iterable, and may instead return a Request object, an item, or None (issue 4609)
The startproject command now ensures that the generated project folders and files have the right permissions (issue 4604)
Fixed a KeyError exception being sometimes raised from scrapy.utils.datatypes.LocalWeakReferencedCache (issue 4597, issue 4599)
When FEEDS defines multiple URIs, log messages about items being stored now contain information from the corresponding feed, instead of always containing information about only one of the feeds (issue 4619, issue 4629)
Documentation
Added a new section about accessing cb_kwargs from errbacks (issue 4598, issue 4634)
Covered chompjs in Parsing JavaScript code (issue 4556, issue 4562)
Removed from Coroutines the warning about the API being experimental (issue 4511, issue 4513)
Removed references to unsupported versions of Twisted (issue 4533)
Updated the description of the screenshot pipeline example, which now uses coroutine syntax instead of returning a Deferred (issue 4514, issue 4593)
Removed a misleading import line from the scrapy.utils.log.configure_logging() code example (issue 4510, issue 4587)
The display-on-hover behavior of internal documentation references now also covers links to commands, Request.meta keys, settings and signals (issue 4495, issue 4563)
It is again possible to download the documentation for offline reading (issue 4578, issue 4585)
Removed backslashes preceding *args and **kwargs in some function and method signatures (issue 4592, issue 4596)
Quality assurance
Adjusted the code base further to our style guidelines (issue 4237, issue 4525, issue 4538, issue 4539, issue 4540, issue 4542, issue 4543, issue 4544, issue 4545, issue 4557, issue 4558, issue 4566, issue 4568, issue 4572)
Removed remnants of Python 2 support (issue 4550, issue 4553, issue 4568)
Improved code sharing between the crawl and runspider commands (issue 4548, issue 4552)
Replaced chain(*iterable) with chain.from_iterable(iterable) (issue 4635)
You may now run the asyncio tests with Tox on any Python version (issue 4521)
Updated test requirements to reflect an incompatibility with pytest 5.4 and 5.4.1 (issue 4588)
Improved SpiderLoader test coverage for scenarios involving duplicate spider names (issue 4549, issue 4560)
Configured Travis CI to also run the tests with Python 3.5.2 (issue 4518, issue 4615)
Added a Pylint job to Travis CI (issue 3727)
Added a Mypy job to Travis CI (issue 4637)
Made use of set literals in tests (issue 4573)
Cleaned up the Travis CI configuration (issue 4517, issue 4519, issue 4522, issue 4537)
Scrapy 2.1.0 (2020-04-24)
Highlights:
New FEEDS setting to export to multiple feeds
New Response.ip_address attribute
Backward-incompatible changes
AssertionError exceptions triggered by assert statements have been replaced by new exception types, to support running Python in optimized mode (see -O) without changing Scrapy’s behavior in any unexpected ways.
If you catch an AssertionError exception from Scrapy, update your code to catch the corresponding new exception.
Deprecation removals
The LOG_UNSERIALIZABLE_REQUESTS setting is no longer supported; use SCHEDULER_DEBUG instead (issue 4385)
The REDIRECT_MAX_METAREFRESH_DELAY setting is no longer supported; use METAREFRESH_MAXDELAY instead (issue 4385)
The ChunkedTransferMiddleware middleware has been removed, including the entire scrapy.downloadermiddlewares.chunked module; chunked transfers work out of the box (issue 4431)
The spiders property has been removed from Crawler; use CrawlerRunner.spider_loader or instantiate SPIDER_LOADER_CLASS with your settings instead (issue 4398)
The MultiValueDict, MultiValueDictKeyError, and SiteNode classes have been removed from scrapy.utils.datatypes (issue 4400)
Deprecations
The FEED_FORMAT and FEED_URI settings have been deprecated in favor of the new FEEDS setting (issue 1336, issue 3858, issue 4507)
New features
A new setting, FEEDS, allows configuring multiple output feeds with different settings each (issue 1336, issue 3858, issue 4507)
The crawl and runspider commands now support multiple -o parameters (issue 1336, issue 3858, issue 4507)
The crawl and runspider commands now support specifying an output format by appending :<format> to the output file (issue 1336, issue 3858, issue 4507)
The new Response.ip_address attribute gives access to the IP address that originated a response (see the sketch after this list) (issue 3903, issue 3940)
A warning is now issued when a value in allowed_domains includes a port (issue 50, issue 3198, issue 4413)
Zsh completion now excludes used option aliases from the completion list (issue 4438)
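A minimal callback sketch; ip_address is an ipaddress.IPv4Address or IPv6Address object, or None when unavailable:

    # Inside a spider:
    def parse(self, response):
        self.logger.info("Response served from %s", response.ip_address)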
Bug fixes
Request serialization no longer breaks for callbacks that are spider attributes which are assigned a function with a different name (issue 4500)
None values in allowed_domains no longer cause a TypeError exception (issue 4410)
Zsh completion no longer allows options after arguments (issue 4438)
zope.interface 5.0.0 and later versions are now supported (issue 4447, issue 4448)
Spider.make_requests_from_url, deprecated in Scrapy 1.4.0, now issues a warning when used (issue 4412)
Documentation
Improved the documentation about signals that allow their handlers to return a Deferred (issue 4295, issue 4390)
Our PyPI entry now includes links for our documentation, our source code repository and our issue tracker (issue 4456)
Covered the curl2scrapy service in the documentation (issue 4206, issue 4455)
Removed references to the Guppy library, which only works in Python 2 (issue 4285, issue 4343)
Extended use of InterSphinx to link to Python 3 documentation (issue 4444, issue 4445)
Added support for Sphinx 3.0 and later (issue 4475, issue 4480, issue 4496, issue 4503)
Quality assurance¶
Removed warnings about using old, removed settings (issue 4404)
Removed a warning about importing StringTransport from twisted.test.proto_helpers in Twisted 19.7.0 or newer (issue 4409)
Removed outdated Debian package build files (issue 4384)
Removed object usage as a base class (issue 4430)
Removed code that added support for old versions of Twisted that we no longer support (issue 4472)
Fixed code style issues (issue 4468, issue 4469, issue 4471, issue 4481)
Removed twisted.internet.defer.returnValue() calls (issue 4443, issue 4446, issue 4489)
Scrapy 2.0.1 (2020-03-18)¶
Response.follow_all now supports an empty URL iterable as input (issue 4408, issue 4420)
Removed top-level reactor imports to prevent errors about the wrong Twisted reactor being installed when setting a different Twisted reactor using TWISTED_REACTOR (issue 4401, issue 4406)
Fixed tests (issue 4422)
Scrapy 2.0.0 (2020-03-03)¶
Highlights:
Python 2 support has been removed
Partial coroutine syntax support and experimental asyncio support
New Response.follow_all method
FTP support for media pipelines
New Response.certificate attribute
IPv6 support through DNS_RESOLVER
Backward-incompatible changes¶
Python 2 support has been removed, following Python 2 end-of-life on January 1, 2020 (issue 4091, issue 4114, issue 4115, issue 4121, issue 4138, issue 4231, issue 4242, issue 4304, issue 4309, issue 4373)
Retry gaveups (see RETRY_TIMES) are now logged as errors instead of as debug information (issue 3171, issue 3566)
File extensions that LinkExtractor ignores by default now also include 7z, 7zip, apk, bz2, cdr, dmg, ico, iso, tar, tar.gz, webm, and xz (issue 1837, issue 2067, issue 4066)
The METAREFRESH_IGNORE_TAGS setting is now an empty list by default, following web browser behavior (issue 3844, issue 4311)
HttpCompressionMiddleware now includes spaces after commas in the value of the Accept-Encoding header that it sets, following web browser behavior (issue 4293)
The __init__ method of custom download handlers (see DOWNLOAD_HANDLERS) or subclasses of the following downloader handlers no longer receives a settings parameter: scrapy.core.downloader.handlers.datauri.DataURIDownloadHandler, scrapy.core.downloader.handlers.file.FileDownloadHandler
Use the from_settings or from_crawler class methods to expose such a parameter to your custom download handlers.
We have refactored the scrapy.core.scheduler.Scheduler class and related queue classes (see SCHEDULER_PRIORITY_QUEUE, SCHEDULER_DISK_QUEUE and SCHEDULER_MEMORY_QUEUE) to make it easier to implement custom scheduler queue classes. See Changes to scheduler queue classes below for details.
Overridden settings are now logged in a different format. This is more in line with similar information logged at startup (issue 4199)
Deprecation removals¶
The Scrapy shell no longer provides a sel proxy object, use response.selector instead (issue 4347)
LevelDB support has been removed (issue 4112)
The following functions have been removed from scrapy.utils.python: isbinarytext, is_writable, setattr_default, stringify_dict (issue 4362)
Deprecations¶
Using environment variables prefixed with SCRAPY_ to override settings is deprecated (issue 4300, issue 4374, issue 4375)
scrapy.linkextractors.FilteringLinkExtractor is deprecated, use scrapy.linkextractors.LinkExtractor instead (issue 4045)
The noconnect query string argument of proxy URLs is deprecated and should be removed from proxy URLs (issue 4198)
The next method of scrapy.utils.python.MutableChain is deprecated, use the global next() function or MutableChain.__next__ instead (issue 4153)
New features¶
Added partial support for Python’s coroutine syntax and experimental support for asyncio and asyncio-powered libraries (issue 4010, issue 4259, issue 4269, issue 4270, issue 4271, issue 4316, issue 4318)
The new Response.follow_all method offers the same functionality as Response.follow but supports an iterable of URLs as input and returns an iterable of requests (issue 2582, issue 4057, issue 4286); see the sketch after this list
Media pipelines now support FTP storage (issue 3928, issue 3961)
The new Response.certificate attribute exposes the SSL certificate of the server as a twisted.internet.ssl.Certificate object for HTTPS responses (issue 2726, issue 4054)
A new DNS_RESOLVER setting allows enabling IPv6 support (issue 1031, issue 4227)
A new SCRAPER_SLOT_MAX_ACTIVE_SIZE setting allows configuring the existing soft limit that pauses request downloads when the total response data being processed is too high (issue 1410, issue 3551)
A new TWISTED_REACTOR setting allows customizing the reactor that Scrapy uses, allowing to enable asyncio support or deal with a common macOS issue (issue 2905, issue 4294)
Scheduler disk and memory queues may now use the class methods from_crawler or from_settings (issue 3884)
The new Response.cb_kwargs attribute serves as a shortcut for Response.request.cb_kwargs (issue 4331)
Response.follow now supports a flags parameter, for consistency with Request (issue 4277, issue 4279)
Item loader processors can now be regular functions, they no longer need to be methods (issue 3899)
Rule now accepts an errback parameter (issue 4000)
Request no longer requires a callback parameter when an errback parameter is specified (issue 3586, issue 4008)
LogFormatter now supports some additional methods: download_error for download errors, item_error for exceptions raised during item processing by item pipelines, spider_error for exceptions raised from spider callbacks
The FEED_URI setting now supports pathlib.Path values (issue 3731, issue 4074)
A new request_left_downloader signal is sent when a request leaves the downloader (issue 4303)
Scrapy logs a warning when it detects a request callback or errback that uses yield but also returns a value, since the returned value would be lost (issue 3484, issue 3869)
Spider objects now raise an AttributeError exception if they do not have a start_urls attribute nor reimplement start_requests, but have a start_url attribute (issue 4133, issue 4170)
BaseItemExporter subclasses may now use super().__init__(**kwargs) instead of self._configure(kwargs) in their __init__ method, passing dont_fail=True to the parent __init__ method if needed, and accessing kwargs at self._kwargs after calling their parent __init__ method (issue 4193, issue 4370)
A new keep_fragments parameter of scrapy.utils.request.request_fingerprint allows generating different fingerprints for requests with different fragments in their URL (issue 4104)
Download handlers (see DOWNLOAD_HANDLERS) may now use the from_settings and from_crawler class methods that other Scrapy components already supported (issue 4126)
scrapy.utils.python.MutableChain.__iter__ now returns self, allowing it to be used as a sequence (issue 4153)
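For illustration, a minimal sketch of Response.follow_all; the site and selectors are hypothetical:

import scrapy

class BooksSpider(scrapy.Spider):
    name = 'books'
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        # follow_all builds one Request per link matched by the CSS
        # selector, so a whole listing page can be followed in one call.
        yield from response.follow_all(css='article h3 a',
                                       callback=self.parse_book)

    def parse_book(self, response):
        yield {'title': response.css('h1::text').get()}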
Bug fixes¶
The crawl command now also exits with exit code 1 when an exception happens before the crawling starts (issue 4175, issue 4207)
LinkExtractor.extract_links no longer re-encodes the query string or URLs from non-UTF-8 responses in UTF-8 (issue 998, issue 1403, issue 1949, issue 4321)
The first spider middleware (see SPIDER_MIDDLEWARES) now also processes exceptions raised from callbacks that are generators (issue 4260, issue 4272)
Redirects to URLs starting with 3 slashes (///) are now supported (issue 4032, issue 4042)
Request no longer accepts strings as url simply because they have a colon (issue 2552, issue 4094)
The correct encoding is now used for attach names in MailSender (issue 4229, issue 4239)
RFPDupeFilter, the default DUPEFILTER_CLASS, no longer writes an extra \r character on each line in Windows, which made the size of the requests.seen file unnecessarily large on that platform (issue 4283)
Z shell auto-completion now looks for .html files, not .http files, and covers the -h command-line switch (issue 4122, issue 4291)
Adding items to a scrapy.utils.datatypes.LocalCache object without a limit defined no longer raises a TypeError exception (issue 4123)
Fixed a typo in the message of the ValueError exception raised when scrapy.utils.misc.create_instance() gets both settings and crawler set to None (issue 4128)
Documentation¶
API documentation now links to an online, syntax-highlighted view of the corresponding source code (issue 4148)
Links to nonexistent documentation pages now allow access to the sidebar (issue 4152, issue 4169)
Cross-references within our documentation now display a tooltip when hovered (issue 4173, issue 4183)
Improved the documentation about LinkExtractor.extract_links and simplified Link Extractors (issue 4045)
Clarified how ItemLoader.item works (issue 3574, issue 4099)
Clarified that logging.basicConfig() should not be used when also using CrawlerProcess (issue 2149, issue 2352, issue 3146, issue 3960)
Clarified the requirements for Request objects when using persistence (issue 4124, issue 4139)
Clarified how to install a custom image pipeline (issue 4034, issue 4252)
Fixed the signatures of the file_path method in media pipeline examples (issue 4290)
Covered a backward-incompatible change in Scrapy 1.7.0 affecting custom scrapy.core.scheduler.Scheduler subclasses (issue 4274)
Improved the README.rst and CODE_OF_CONDUCT.md files (issue 4059)
Documentation examples are now checked as part of our test suite and we have fixed some of the issues detected (issue 4142, issue 4146, issue 4171, issue 4184, issue 4190)
Fixed logic issues, broken links and typos (issue 4247, issue 4258, issue 4282, issue 4288, issue 4305, issue 4308, issue 4323, issue 4338, issue 4359, issue 4361)
Improved consistency when referring to the __init__ method of an object (issue 4086, issue 4088)
Fixed an inconsistency between code and output in Scrapy at a glance (issue 4213)
Extended intersphinx usage (issue 4147, issue 4172, issue 4185, issue 4194, issue 4197)
We now use a recent version of Python to build the documentation (issue 4140, issue 4249)
Cleaned up documentation (issue 4143, issue 4275)
Quality assurance¶
Re-enabled proxy CONNECT tests (issue 2545, issue 4114)
Added Bandit security checks to our test suite (issue 4162, issue 4181)
Added Flake8 style checks to our test suite and applied many of the corresponding changes (issue 3944, issue 3945, issue 4137, issue 4157, issue 4167, issue 4174, issue 4186, issue 4195, issue 4238, issue 4246, issue 4355, issue 4360, issue 4365)
Improved test coverage (issue 4097, issue 4218, issue 4236)
Started reporting slowest tests, and improved the performance of some of them (issue 4163, issue 4164)
Fixed broken tests and refactored some tests (issue 4014, issue 4095, issue 4244, issue 4268, issue 4372)
Modified the tox configuration to allow running tests with any Python version, run Bandit and Flake8 tests by default, and enforce a minimum tox version programmatically (issue 4179)
Cleaned up code (issue 3937, issue 4208, issue 4209, issue 4210, issue 4212, issue 4369, issue 4376, issue 4378)
Changes to scheduler queue classes¶
The following changes may impact custom queue classes of all types:
The push method no longer receives a second positional parameter containing request.priority * -1. If you need that value, get it from the first positional parameter, request, instead, or use the new priority() method in scrapy.core.scheduler.ScrapyPriorityQueue subclasses.
The following changes may impact custom priority queue classes:
In the __init__ method or the from_crawler or from_settings class methods:
The parameter that used to contain a factory function, qfactory, is now passed as a keyword parameter named downstream_queue_cls.
A new keyword parameter has been added: key. It is a string that is always an empty string for memory queues and indicates the JOB_DIR value for disk queues.
The parameter for disk queues that contains data from the previous crawl, startprios or slot_startprios, is now passed as a keyword parameter named startprios.
The serialize parameter is no longer passed. The disk queue class must take care of request serialization on its own before writing to disk, using the request_to_dict() and request_from_dict() functions from the scrapy.utils.reqser module.
The following changes may impact custom disk and memory queue classes:
The signature of the __init__ method is now __init__(self, crawler, key); see the sketch below, which combines this with the serialization change above.
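The following is a minimal, hypothetical sketch (not Scrapy’s own implementation) of a disk queue that accepts the new (crawler, key) signature and handles its own request serialization; the JSON-lines file layout is an arbitrary choice for illustration:

import json
import os

from scrapy.utils.reqser import request_from_dict, request_to_dict

class JsonLifoDiskQueue:
    def __init__(self, crawler, key):
        # key holds the JOB_DIR-based directory for disk queues.
        self.spider = crawler.spider
        os.makedirs(key, exist_ok=True)
        self.path = os.path.join(key, 'requests.jsonl')
        self.records = []
        if os.path.exists(self.path):
            with open(self.path, encoding='utf-8') as f:
                self.records = [json.loads(line) for line in f]

    def push(self, request):
        # Serializing before writing to disk is now the queue's job.
        self.records.append(request_to_dict(request, self.spider))
        self._flush()

    def pop(self):
        if not self.records:
            return None
        record = self.records.pop()
        self._flush()
        return request_from_dict(record, self.spider)

    def _flush(self):
        with open(self.path, 'w', encoding='utf-8') as f:
            f.writelines(json.dumps(r) + '\n' for r in self.records)

    def __len__(self):
        return len(self.records)

    def close(self):
        self._flush()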
The following changes affect specifically the ScrapyPriorityQueue and DownloaderAwarePriorityQueue classes from scrapy.core.scheduler and may affect subclasses:
In the __init__ method, most of the changes described above apply.
__init__ may still receive all parameters as positional parameters, however:
downstream_queue_cls, which replaced qfactory, must be instantiated differently. qfactory was instantiated with a priority value (integer); instances of downstream_queue_cls should be created using the new ScrapyPriorityQueue.qfactory or DownloaderAwarePriorityQueue.pqfactory methods.
The new key parameter displaced the startprios parameter 1 position to the right.
The following class attributes have been added:
crawler
downstream_queue_cls (details above)
key (details above)
The serialize attribute has been removed (details above)
The following changes affect specifically the ScrapyPriorityQueue class and may affect subclasses:
A new priority() method has been added which, given a request, returns request.priority * -1. It is used in push() to make up for the removal of its priority parameter.
The spider attribute has been removed. Use crawler.spider instead.
The following changes affect specifically the DownloaderAwarePriorityQueue class and may affect subclasses:
A new pqueues attribute offers a mapping of downloader slot names to the corresponding instances of downstream_queue_cls.
Scrapy 1.8.3 (2022-07-25)¶
Security bug fix:
When HttpProxyMiddleware processes a request with proxy metadata, and that proxy metadata includes proxy credentials, HttpProxyMiddleware sets the Proxy-Authorization header, but only if that header is not already set.
There are third-party proxy-rotation downloader middlewares that set different proxy metadata every time they process a request.
Because of request retries and redirects, the same request can be processed by downloader middlewares more than once, including both HttpProxyMiddleware and any third-party proxy-rotation downloader middleware.
These third-party proxy-rotation downloader middlewares could change the proxy metadata of a request to a new value, but fail to remove the Proxy-Authorization header from the previous value of the proxy metadata, causing the credentials of one proxy to be sent to a different proxy.
To prevent the unintended leaking of proxy credentials, the behavior of HttpProxyMiddleware is now as follows when processing a request:
If the request being processed defines proxy metadata that includes credentials, the Proxy-Authorization header is always updated to feature those credentials.
If the request being processed defines proxy metadata without credentials, the Proxy-Authorization header is removed unless it was originally defined for the same proxy URL. To remove proxy credentials while keeping the same proxy URL, remove the Proxy-Authorization header.
If the request has no proxy metadata, or that metadata is a falsy value (e.g. None), the Proxy-Authorization header is removed.
It is no longer possible to set a proxy URL through the proxy metadata but set the credentials through the Proxy-Authorization header. Set proxy credentials through the proxy metadata instead.
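For illustration, a minimal sketch of the supported way to configure proxy credentials after this fix; the proxy host, port and credentials are hypothetical:

import scrapy

# Credentials go into the proxy meta key itself; HttpProxyMiddleware
# derives (and, when the proxy changes, updates or removes) the
# Proxy-Authorization header from this value.
request = scrapy.Request(
    'https://example.com',
    meta={'proxy': 'http://user:secret@proxy.example.com:8080'},
)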
Scrapy 1.8.2 (2022-03-01)¶
Security bug fixes:
When a Request object with cookies defined gets a redirect response causing a new Request object to be scheduled, the cookies defined in the original Request object are no longer copied into the new Request object.
If you manually set the Cookie header on a Request object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original Request object, your Cookie header is now dropped from the new Request object.
The old behavior could be exploited by an attacker to gain access to your cookies. Please see the cjvr-mfj7-j4j8 security advisory for more information.
Note
It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. example.com and any subdomain) by defining the shared domain suffix (e.g. example.com) as the cookie domain when defining your cookies. See the documentation of the Request class for more information.
When the domain of a cookie, either received in the Set-Cookie header of a response or defined in a Request object, is set to a public suffix, the cookie is now ignored unless the cookie domain is the same as the request domain.
The old behavior could be exploited by an attacker to inject cookies into your requests to some other domains. Please see the mfjm-vh54-3f96 security advisory for more information.
Scrapy 1.8.1 (2021-10-05)¶
Security bug fix:
If you use HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.
To prevent unintended exposure of authentication credentials to unintended domains, you must now additionally set a new spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent.
If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.
If you need to send the same HTTP authentication credentials to multiple domains, you can use w3lib.http.basic_auth_header() instead to set the value of the Authorization header of your requests.
If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.
Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a later version for it to continue to work.
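For illustration, a minimal sketch of the new attribute; the domain and credentials are hypothetical:

import scrapy

class IntranetSpider(scrapy.Spider):
    name = 'intranet'
    http_user = 'bob'
    http_pass = 'secret'
    # New: credentials are only sent with requests targeting this domain.
    http_auth_domain = 'intranet.example.com'
    start_urls = ['https://intranet.example.com/']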
Scrapy 1.8.0 (2019-10-28)¶
Highlights:
Dropped Python 3.4 support and updated minimum requirements; made Python 3.8 support official
New Request.from_curl class method
New ROBOTSTXT_PARSER and ROBOTSTXT_USER_AGENT settings
New DOWNLOADER_CLIENT_TLS_CIPHERS and DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING settings
Backward-incompatible changes¶
Python 3.4 is no longer supported, and some of the minimum requirements of Scrapy have also changed:
cssselect 0.9.1
cryptography 2.0
lxml 3.5.0
pyOpenSSL 16.2.0
queuelib 1.4.2
service_identity 16.0.0
six 1.10.0
Twisted 17.9.0 (16.0.0 with Python 2)
zope.interface 4.1.3
JSONRequest is now called JsonRequest for consistency with similar classes (issue 3929, issue 3982)
If you are using a custom context factory (DOWNLOADER_CLIENTCONTEXTFACTORY), its __init__ method must accept two new parameters: tls_verbose_logging and tls_ciphers (issue 2111, issue 3392, issue 3442, issue 3450)
ItemLoader now turns the values of its input item into lists:
>>> item = MyItem()
>>> item["field"] = "value1"
>>> loader = ItemLoader(item=item)
>>> item["field"]
['value1']
This is needed to allow adding values to existing fields (loader.add_value('field', 'value2')). (issue 3804, issue 3819, issue 3897, issue 3976, issue 3998, issue 4036)
See also Deprecation removals below.
New features¶
A new Request.from_curl class method allows creating a request from a cURL command (issue 2985, issue 3862); see the sketch after this list
A new ROBOTSTXT_PARSER setting allows choosing which robots.txt parser to use. It includes built-in support for RobotFileParser, Protego (default), Reppy, and Robotexclusionrulesparser, and allows you to implement support for additional parsers (issue 754, issue 2669, issue 3796, issue 3935, issue 3969, issue 4006)
A new ROBOTSTXT_USER_AGENT setting allows defining a separate user agent string to use for robots.txt parsing (issue 3931, issue 3966)
Rule no longer requires a LinkExtractor parameter (issue 781, issue 4016)
Use the new DOWNLOADER_CLIENT_TLS_CIPHERS setting to customize the TLS/SSL ciphers used by the default HTTP/1.1 downloader (issue 3392, issue 3442)
Set the new DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING setting to True to enable debug-level messages about TLS connection parameters after establishing HTTPS connections (issue 2111, issue 3450)
Callbacks that receive keyword arguments (see Request.cb_kwargs) can now be tested using the new @cb_kwargs spider contract (issue 3985, issue 3988)
When a @scrapes spider contract fails, all missing fields are now reported (issue 766, issue 3939)
Custom log formats can now drop messages by having the corresponding methods of the configured LOG_FORMATTER return None (issue 3984, issue 3987)
A much improved completion definition is now available for Zsh (issue 4069)
Bug fixes¶
ItemLoader.load_item() no longer makes later calls to ItemLoader.get_output_value() or ItemLoader.load_item() return empty data (issue 3804, issue 3819, issue 3897, issue 3976, issue 3998, issue 4036)
Fixed DummyStatsCollector raising a TypeError exception (issue 4007, issue 4052)
FilesPipeline.file_path and ImagesPipeline.file_path no longer choose file extensions that are not registered with IANA (issue 1287, issue 3953, issue 3954)
When using botocore to persist files in S3, all botocore-supported headers are properly mapped now (issue 3904, issue 3905)
FTP passwords in FEED_URI containing percent-escaped characters are now properly decoded (issue 3941)
A memory-handling and error-handling issue in scrapy.utils.ssl.get_temp_key_info() has been fixed (issue 3920)
Documentation¶
The documentation now covers how to define and configure a custom log format (issue 3616, issue 3660)
API documentation added for MarshalItemExporter and PythonItemExporter (issue 3973)
API documentation added for BaseItem and ItemMeta (issue 3999)
Minor documentation fixes (issue 2998, issue 3398, issue 3597, issue 3894, issue 3934, issue 3978, issue 3993, issue 4022, issue 4028, issue 4033, issue 4046, issue 4050, issue 4055, issue 4056, issue 4061, issue 4072, issue 4071, issue 4079, issue 4081, issue 4089, issue 4093)
Deprecation removals¶
scrapy.xlib has been removed (issue 4015)
Deprecations¶
The LevelDB storage backend (scrapy.extensions.httpcache.LeveldbCacheStorage) of HttpCacheMiddleware is deprecated (issue 4085, issue 4092)
Use of the undocumented SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE environment variable is deprecated (issue 3910)
scrapy.item.DictItem is deprecated, use Item instead (issue 3999)
Other changes¶
Minimum versions of optional Scrapy requirements that are covered by continuous integration tests have been updated.
Lower versions of these optional requirements may work, but it is not guaranteed (issue 3892)
GitHub templates for bug reports and feature requests (issue 3126, issue 3471, issue 3749, issue 3754)
Continuous integration fixes (issue 3923)
Code cleanup (issue 3391, issue 3907, issue 3946, issue 3950, issue 4023, issue 4031)
Scrapy 1.7.4 (2019-10-21)¶
Revert the fix for issue 3804 (issue 3819), which has a few undesired side effects (issue 3897, issue 3976).
As a result, when an item loader is initialized with an item,
ItemLoader.load_item() once again
makes later calls to ItemLoader.get_output_value() or ItemLoader.load_item() return empty data.
Scrapy 1.7.3 (2019-08-01)¶
Enforce lxml 4.3.5 or lower for Python 3.4 (issue 3912, issue 3918).
Scrapy 1.7.2 (2019-07-23)¶
Fix Python 2 support (issue 3889, issue 3893, issue 3896).
Scrapy 1.7.1 (2019-07-18)¶
Re-packaging of Scrapy 1.7.0, which was missing some changes in PyPI.
Scrapy 1.7.0 (2019-07-18)¶
Note
Make sure you install Scrapy 1.7.1. The Scrapy 1.7.0 package in PyPI is the result of an erroneous commit tagging and does not include all the changes described below.
Highlights:
Improvements for crawls targeting multiple domains
A cleaner way to pass arguments to callbacks
A new class for JSON requests
Improvements for rule-based spiders
New features for feed exports
Backward-incompatible changes¶
429 is now part of the RETRY_HTTP_CODES setting by default. This change is backward incompatible: if you don’t want to retry 429, you must override RETRY_HTTP_CODES accordingly (see the sketch after this list).
Crawler, CrawlerRunner.crawl and CrawlerRunner.create_crawler no longer accept a Spider subclass instance, they only accept a Spider subclass now. Spider subclass instances were never meant to work, and they were not working as one would expect: instead of using the passed Spider subclass instance, their from_crawler method was called to generate a new instance.
Non-default values for the SCHEDULER_PRIORITY_QUEUE setting may stop working. Scheduler priority queue classes now need to handle Request objects instead of arbitrary Python data structures.
An additional crawler parameter has been added to the __init__ method of the Scheduler class. Custom scheduler subclasses which don’t accept arbitrary parameters in their __init__ method might break because of this change. For more information, see SCHEDULER.
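For illustration, a minimal settings.py sketch opting out of retrying 429; the list copies the 1.7-era defaults minus 429 (double-check the defaults of your Scrapy version):

# settings.py
RETRY_HTTP_CODES = [500, 502, 503, 504, 522, 524, 408]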
See also Deprecation removals below.
New features¶
A new scheduler priority queue, scrapy.pqueues.DownloaderAwarePriorityQueue, may be enabled for a significant scheduling improvement on crawls targeting multiple web domains, at the cost of no CONCURRENT_REQUESTS_PER_IP support (issue 3520)
A new Request.cb_kwargs attribute provides a cleaner way to pass keyword arguments to callback methods (issue 1138, issue 3563); see the sketch after this list
A new JSONRequest class offers a more convenient way to build JSON requests (issue 3504, issue 3505)
A process_request callback passed to the Rule __init__ method now receives the Response object that originated the request as its second argument (issue 3682)
A new restrict_text parameter for the LinkExtractor __init__ method allows filtering links by linking text (issue 3622, issue 3635)
A new FEED_STORAGE_S3_ACL setting allows defining a custom ACL for feeds exported to Amazon S3 (issue 3607)
A new FEED_STORAGE_FTP_ACTIVE setting allows using FTP’s active connection mode for feeds exported to FTP servers (issue 3829)
A new METAREFRESH_IGNORE_TAGS setting allows overriding which HTML tags are ignored when searching a response for HTML meta tags that trigger a redirect (issue 1422, issue 3768)
A new redirect_reasons request meta key exposes the reason (status code, meta refresh) behind every followed redirect (issue 3581, issue 3687)
The SCRAPY_CHECK variable is now set to the true string during runs of the check command, which allows detecting contract check runs from code (issue 3704, issue 3739)
A new Item.deepcopy() method makes it easier to deep-copy items (issue 1493, issue 3671)
CoreStats also logs elapsed_time_seconds now (issue 3638)
Exceptions from ItemLoader input and output processors are now more verbose (issue 3836, issue 3840)
Crawler, CrawlerRunner.crawl and CrawlerRunner.create_crawler now fail gracefully if they receive a Spider subclass instance instead of the subclass itself (issue 2283, issue 3610, issue 3872)
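For illustration, a minimal sketch of Request.cb_kwargs; the site, selectors and argument values are hypothetical:

import scrapy

class BooksSpider(scrapy.Spider):
    name = 'books'
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        for href in response.css('h3 a::attr(href)').getall():
            # cb_kwargs entries become keyword arguments of the callback.
            yield scrapy.Request(
                response.urljoin(href),
                callback=self.parse_book,
                cb_kwargs={'category': 'fiction'},
            )

    def parse_book(self, response, category):
        yield {'category': category,
               'title': response.css('h1::text').get()}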
Bug fixes¶
process_spider_exception() is now also invoked for generators (issue 220, issue 2061)
System exceptions like KeyboardInterrupt are no longer caught (issue 3726)
ItemLoader.load_item() no longer makes later calls to ItemLoader.get_output_value() or ItemLoader.load_item() return empty data (issue 3804, issue 3819)
The images pipeline (ImagesPipeline) no longer ignores these Amazon S3 settings: AWS_ENDPOINT_URL, AWS_REGION_NAME, AWS_USE_SSL, AWS_VERIFY (issue 3625)
Fixed a memory leak in scrapy.pipelines.media.MediaPipeline affecting, for example, non-200 responses and exceptions from custom middlewares (issue 3813)
Requests with private callbacks are now correctly unserialized from disk (issue 3790)
FormRequest.from_response() now handles invalid methods like major web browsers (issue 3777, issue 3794)
Documentation¶
A new topic, Selecting dynamically-loaded content, covers recommended approaches to read dynamically-loaded data (issue 3703)
Broad Crawls now features information about memory usage (issue 1264, issue 3866)
The documentation of Rule now covers how to access the text of a link when using CrawlSpider (issue 3711, issue 3712)
A new section, Writing your own storage backend, covers writing a custom cache storage backend for HttpCacheMiddleware (issue 3683, issue 3692)
A new FAQ entry, How to split an item into multiple items in an item pipeline?, explains what to do when you want to split an item into multiple items from an item pipeline (issue 2240, issue 3672)
Updated the FAQ entry about crawl order to explain why the first few requests rarely follow the desired order (issue 1739, issue 3621)
The LOGSTATS_INTERVAL setting (issue 3730), the FilesPipeline.file_path and ImagesPipeline.file_path methods (issue 2253, issue 3609) and the Crawler.stop() method (issue 3842) are now documented
Some parts of the documentation that were confusing or misleading are now clearer (issue 1347, issue 1789, issue 2289, issue 3069, issue 3615, issue 3626, issue 3668, issue 3670, issue 3673, issue 3728, issue 3762, issue 3861, issue 3882)
Minor documentation fixes (issue 3648, issue 3649, issue 3662, issue 3674, issue 3676, issue 3694, issue 3724, issue 3764, issue 3767, issue 3791, issue 3797, issue 3806, issue 3812)
Deprecation removals¶
The following deprecated APIs have been removed (issue 3578):
scrapy.conf (use Crawler.settings)
From scrapy.core.downloader.handlers: http.HttpDownloadHandler (use http10.HTTP10DownloadHandler)
scrapy.loader.ItemLoader._get_values (use _get_xpathvalues)
scrapy.loader.XPathItemLoader (use ItemLoader)
scrapy.log (see Logging)
From scrapy.pipelines:
files.FilesPipeline.file_key (use file_path)
images.ImagesPipeline.file_key (use file_path)
images.ImagesPipeline.image_key (use file_path)
images.ImagesPipeline.thumb_key (use thumb_path)
From both scrapy.selector and scrapy.selector.lxmlsel:
From scrapy.selector.csstranslator:
ScrapyGenericTranslator (use parsel.csstranslator.GenericTranslator)
ScrapyHTMLTranslator (use parsel.csstranslator.HTMLTranslator)
ScrapyXPathExpr (use parsel.csstranslator.XPathExpr)
From Selector:
_root (both the __init__ method argument and the object property, use root)
extract_unquoted (use getall)
select (use xpath)
From SelectorList:
extract_unquoted (use getall)
select (use xpath)
x (use xpath)
scrapy.spiders.BaseSpider (use Spider)
From Spider (and subclasses):
DOWNLOAD_DELAY (use download_delay)
set_crawler (use from_crawler())
scrapy.spiders.spiders (use SpiderLoader)
scrapy.telnet (use scrapy.extensions.telnet)
From scrapy.utils.python:
str_to_unicode (use to_unicode)
unicode_to_str (use to_bytes)
scrapy.utils.response.body_or_str
The following deprecated settings have also been removed (issue 3578):
SPIDER_MANAGER_CLASS (use SPIDER_LOADER_CLASS)
Deprecations¶
The queuelib.PriorityQueue value for the SCHEDULER_PRIORITY_QUEUE setting is deprecated. Use scrapy.pqueues.ScrapyPriorityQueue instead.
process_request callbacks passed to Rule that do not accept two arguments are deprecated.
The following modules are deprecated:
scrapy.utils.http (use w3lib.http)
scrapy.utils.markup (use w3lib.html)
scrapy.utils.multipart (use urllib3)
The scrapy.utils.datatypes.MergeDict class is deprecated for Python 3 code bases. Use ChainMap instead. (issue 3878)
The scrapy.utils.gz.is_gzipped function is deprecated. Use scrapy.utils.gz.gzip_magic_number instead.
Other changes¶
It is now possible to run all tests from the same tox environment in parallel; the documentation now covers this and other ways to run tests (issue 3707)
It is now possible to generate an API documentation coverage report (issue 3806, issue 3810, issue 3860)
The documentation policies now require docstrings (issue 3701) that follow PEP 257 (issue 3748)
Internal fixes and cleanup (issue 3629, issue 3643, issue 3684, issue 3698, issue 3734, issue 3735, issue 3736, issue 3737, issue 3809, issue 3821, issue 3825, issue 3827, issue 3833, issue 3857, issue 3877)
Scrapy 1.6.0 (2019-01-30)¶
Highlights:
better Windows support;
Python 3.7 compatibility;
big documentation improvements, including a switch from .extract_first() + .extract() API to .get() + .getall() API;
feed exports, FilePipeline and MediaPipeline improvements;
better extensibility: item_error and request_reached_downloader signals; from_crawler support for feed exporters, feed storages and dupefilters.
scrapy.contracts fixes and new features;
telnet console security improvements, first released as a backport in Scrapy 1.5.2 (2019-01-22);
clean-up of the deprecated code;
various bug fixes, small new features and usability improvements across the codebase.
Selector API changes¶
While these are not changes in Scrapy itself, but rather in the parsel
library which Scrapy uses for xpath/css selectors, these changes are
worth mentioning here. Scrapy now depends on parsel >= 1.5, and
Scrapy documentation is updated to follow recent parsel API conventions.
The most visible change is that the .get() and .getall() selector
methods are now preferred over .extract_first() and .extract().
We feel that these new methods result in more concise and readable code.
See extract() and extract_first() for more details.
Note
There are currently no plans to deprecate .extract()
and .extract_first() methods.
Another useful new feature is the introduction of Selector.attrib and
SelectorList.attrib properties, which make it easier to get
attributes of HTML elements. See Selecting element attributes.
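For illustration, a minimal parsel session showing the preferred API; the HTML snippet is hypothetical:

from parsel import Selector  # Scrapy selectors are built on parsel >= 1.5

sel = Selector(text='<a href="/a" class="nav">First</a><a href="/b">Second</a>')
sel.css('a::text').get()        # 'First'  (was .extract_first())
sel.css('a::text').getall()     # ['First', 'Second']  (was .extract())
sel.css('a')[0].attrib['href']  # '/a'  (the new Selector.attrib property)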
CSS selectors are cached in parsel >= 1.5, which makes them faster when the same CSS path is used many times. This is very common in case of Scrapy spiders: callbacks are usually called several times, on different pages.
If you’re using custom Selector or SelectorList subclasses,
a backward incompatible change in parsel may affect your code.
See parsel changelog for a detailed description, as well as for the
full list of improvements.
Telnet console¶
Backward incompatible: Scrapy’s telnet console now requires username and password. See Telnet Console for more details. This change fixes a security issue; see Scrapy 1.5.2 (2019-01-22) release notes for details.
New extensibility features¶
from_crawler support is added to feed exporters and feed storages. This, among other things, allows accessing Scrapy settings from custom feed storages and exporters (issue 1605, issue 3348).
from_crawler support is added to dupefilters (issue 2956); this allows accessing e.g. settings or a spider from a dupefilter.
item_error is fired when an error happens in a pipeline (issue 3256); request_reached_downloader is fired when the Downloader gets a new Request; this signal can be useful e.g. for custom Schedulers (issue 3393).
The new SitemapSpider sitemap_filter() method allows selecting sitemap entries based on their attributes in SitemapSpider subclasses (issue 3512).
Lazy loading of Downloader Handlers is now optional; this enables better initialization error handling in custom Downloader Handlers (issue 3394).
New FilePipeline and MediaPipeline features¶
Expose more options for S3FilesStore: AWS_ENDPOINT_URL, AWS_USE_SSL, AWS_VERIFY, AWS_REGION_NAME. For example, this allows using alternative or self-hosted AWS-compatible providers (issue 2609, issue 3548); see the sketch after this list.
ACL support for Google Cloud Storage: FILES_STORE_GCS_ACL and IMAGES_STORE_GCS_ACL (issue 3199).
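For illustration, a minimal settings.py sketch pointing FilesPipeline at a self-hosted, S3-compatible store; the endpoint URL and bucket name are hypothetical:

# settings.py
FILES_STORE = 's3://my-files-bucket/'
AWS_ENDPOINT_URL = 'http://minio.example.internal:9000'
AWS_USE_SSL = False   # plain-HTTP endpoint in this example
AWS_VERIFY = False    # skip TLS certificate verification
AWS_REGION_NAME = 'us-east-1'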
scrapy.contracts improvements¶
Exceptions in contracts code are handled better (issue 3377);
dont_filter=True is used for contract requests, which allows testing different callbacks with the same URL (issue 3381);
The request_cls attribute in Contract subclasses allows using different Request classes in contracts, for example FormRequest (issue 3383).
Fixed errback handling in contracts, e.g. for cases where a contract is executed for a URL which returns a non-200 response (issue 3371).
Usability improvements¶
more stats for RobotsTxtMiddleware (issue 3100)
INFO log level is used to show telnet host/port (issue 3115)
a message is added to IgnoreRequest in RobotsTxtMiddleware (issue 3113)
better validation of the url argument in Response.follow (issue 3131)
non-zero exit code is returned from Scrapy commands when an error happens on spider initialization (issue 3226)
Link extraction improvements: “ftp” is added to scheme list (issue 3152); “flv” is added to common video extensions (issue 3165)
better error message when an exporter is disabled (issue 3358);
scrapy shell --help mentions the syntax required for local files (./file.html) - issue 3496.
Referer header value is added to RFPDupeFilter log messages (issue 3588)
Bug fixes¶
fixed issue with extra blank lines in .csv exports under Windows (issue 3039);
proper handling of pickling errors in Python 3 when serializing objects for disk queues (issue 3082)
flags are now preserved when copying Requests (issue 3342);
FormRequest.from_response clickdata no longer ignores elements with input[type=image] (issue 3153).
FormRequest.from_response now preserves duplicate keys (issue 3247)
Documentation improvements¶
Docs are re-written to suggest .get/.getall API instead of .extract/.extract_first. Also, Selectors docs are updated and re-structured to match latest parsel docs; they now contain more topics, such as Selecting element attributes or Extensions to CSS Selectors (issue 3390).
Using your browser’s Developer Tools for scraping is a new tutorial which replaces old Firefox and Firebug tutorials (issue 3400).
SCRAPY_PROJECT environment variable is documented (issue 3518);
troubleshooting section is added to install instructions (issue 3517);
improved links to beginner resources in the tutorial (issue 3367, issue 3468);
fixed RETRY_HTTP_CODES default values in docs (issue 3335);
removed unused DEPTH_STATS option from docs (issue 3245);
other cleanups (issue 3347, issue 3350, issue 3445, issue 3544, issue 3605).
Deprecation removals¶
Compatibility shims for pre-1.0 Scrapy module names are removed (issue 3318):
scrapy.command
scrapy.contrib (with all submodules)
scrapy.contrib_exp (with all submodules)
scrapy.dupefilter
scrapy.linkextractor
scrapy.project
scrapy.spider
scrapy.spidermanager
scrapy.squeue
scrapy.stats
scrapy.statscol
scrapy.utils.decorator
See Module Relocations for more information, or use suggestions from Scrapy 1.5.x deprecation warnings to update your code.
Other deprecation removals:
Deprecated scrapy.interfaces.ISpiderManager is removed; please use scrapy.interfaces.ISpiderLoader.
Deprecated CrawlerSettings class is removed (issue 3327).
Deprecated Settings.overrides and Settings.defaults attributes are removed (issue 3327, issue 3359).
Other improvements, cleanups¶
All Scrapy tests now pass on Windows; Scrapy testing suite is executed in a Windows environment on CI (issue 3315).
Python 3.7 support (issue 3326, issue 3150, issue 3547).
Testing and CI fixes (issue 3526, issue 3538, issue 3308, issue 3311, issue 3309, issue 3305, issue 3210, issue 3299)
scrapy.http.cookies.CookieJar.clear accepts “domain”, “path” and “name” optional arguments (issue 3231).
additional files are included in the sdist (issue 3495);
code style fixes (issue 3405, issue 3304);
unneeded .strip() call is removed (issue 3519);
collections.deque is used to store MiddlewareManager methods instead of a list (issue 3476)
Scrapy 1.5.2 (2019-01-22)¶
Security bugfix: the Telnet console extension can be easily exploited by rogue websites POSTing content to http://localhost:6023. We haven’t found a way to exploit it from Scrapy itself, but it is very easy to trick a browser into doing so, which elevates the risk for local development environments.
The fix is backward incompatible; it enables telnet user-password authentication by default with a randomly generated password. If you can’t upgrade right away, please consider changing TELNETCONSOLE_PORT from its default value.
See the telnet console documentation for more info.
Backport CI build failure under GCE environment due to boto import error.
Scrapy 1.5.1 (2018-07-12)¶
This is a maintenance release with important bug fixes, but no new features:
O(N^2) gzip decompression issue which affected Python 3 and PyPy is fixed (issue 3281);
skipping of TLS validation errors is improved (issue 3166);
Ctrl-C handling is fixed in Python 3.5+ (issue 3096);
testing fixes (issue 3092, issue 3263);
documentation improvements (issue 3058, issue 3059, issue 3089, issue 3123, issue 3127, issue 3189, issue 3224, issue 3280, issue 3279, issue 3201, issue 3260, issue 3284, issue 3298, issue 3294).
Scrapy 1.5.0 (2017-12-29)¶
This release brings small new features and improvements across the codebase. Some highlights:
Google Cloud Storage is supported in FilesPipeline and ImagesPipeline.
Crawling with proxy servers becomes more efficient, as connections to proxies can be reused now.
Warnings, exception and logging messages are improved to make debugging easier.
The scrapy parse command now allows setting custom request meta via the --meta argument.
Compatibility with Python 3.6, PyPy and PyPy3 is improved; PyPy and PyPy3 are now supported officially, by running tests on CI.
Better default handling of HTTP 308, 522 and 524 status codes.
Documentation is improved, as usual.
Backward Incompatible Changes¶
Scrapy 1.5 drops support for Python 3.3.
The default Scrapy User-Agent now uses an https link to scrapy.org (issue 2983). This is technically backward-incompatible; override USER_AGENT if you relied on the old value.
Logging of settings overridden by custom_settings is fixed; this is technically backward-incompatible because the logger changes from [scrapy.utils.log] to [scrapy.crawler]. If you’re parsing Scrapy logs, please update your log parsers (issue 1343).
LinkExtractor now ignores the m4v extension by default; this is a change in behavior.
522 and 524 status codes are added to RETRY_HTTP_CODES (issue 2851)
New features¶
Support <link> tags in Response.follow (issue 2785)
Support for ptpython REPL (issue 2654)
Google Cloud Storage support for FilesPipeline and ImagesPipeline (issue 2923).
New --meta option of the “scrapy parse” command allows passing additional request.meta (issue 2883)
Populate the spider variable when using shell.inspect_response (issue 2812)
Handle HTTP 308 Permanent Redirect (issue 2844)
Add 522 and 524 to RETRY_HTTP_CODES (issue 2851)
Log versions information at startup (issue 2857)
scrapy.mail.MailSender now works in Python 3 (it requires Twisted 17.9.0)
Connections to proxy servers are reused (issue 2743)
Add template for a downloader middleware (issue 2755)
Explicit message for NotImplementedError when the parse callback is not defined (issue 2831)
CrawlerProcess got an option to disable installation of the root log handler (issue 2921)
LinkExtractor now ignores the m4v extension by default
Better log messages for responses over DOWNLOAD_WARNSIZE and DOWNLOAD_MAXSIZE limits (issue 2927)
Show a warning when a URL is put in Spider.allowed_domains instead of a domain (issue 2250).
Bug fixes¶
Fix logging of settings overridden by custom_settings; this is technically backward-incompatible because the logger changes from [scrapy.utils.log] to [scrapy.crawler], so please update your log parsers if needed (issue 1343)
The default Scrapy User-Agent now uses an https link to scrapy.org (issue 2983). This is technically backward-incompatible; override USER_AGENT if you relied on the old value.
Fix PyPy and PyPy3 test failures, support them officially (issue 2793, issue 2935, issue 2990, issue 3050, issue 2213, issue 3048)
Fix DNS resolver when DNSCACHE_ENABLED=False (issue 2811)
Add cryptography for Debian Jessie tox test env (issue 2848)
Add verification to check if Request callback is callable (issue 2766)
Port extras/qpsclient.py to Python 3 (issue 2849)
Use getfullargspec under the hood for Python 3 to stop a DeprecationWarning (issue 2862)
Update deprecated test aliases (issue 2876)
Fix SitemapSpider support for alternate links (issue 2853)
Docs¶
Added a missing bullet point for the AUTOTHROTTLE_TARGET_CONCURRENCY setting (issue 2756)
Update Contributing docs, document new support channels (issue 2762, issue 3038)
Include references to Scrapy subreddit in the docs
Fix broken links; use https:// for external links (issue 2978, issue 2982, issue 2958)
Document CloseSpider extension better (issue 2759)
Use pymongo.collection.Collection.insert_one() in the MongoDB example (issue 2781)
Spelling mistake and typos (issue 2828, issue 2837, issue 2884, issue 2924)
Clarify CSVFeedSpider.headers documentation (issue 2826)
Document DontCloseSpider exception and clarify spider_idle (issue 2791)
Update “Releases” section in README (issue 2764)
Fix rst syntax in DOWNLOAD_FAIL_ON_DATALOSS docs (issue 2763)
Small fix in description of startproject arguments (issue 2866)
Clarify data types in Response.body docs (issue 2922)
Add a note about request.meta['depth'] to DepthMiddleware docs (issue 2374)
Add a note about request.meta['dont_merge_cookies'] to CookiesMiddleware docs (issue 2999)
Up-to-date example of project structure (issue 2964, issue 2976)
A better example of ItemExporters usage (issue 2989)
Document from_crawler methods for spider and downloader middlewares (issue 3019)
Scrapy 1.4.0 (2017-05-18)¶
Scrapy 1.4 does not bring that many breathtaking new features but quite a few handy improvements nonetheless.
Scrapy now supports anonymous FTP sessions with customizable user and
password via the new FTP_USER and FTP_PASSWORD settings.
And if you’re using Twisted version 17.1.0 or above, FTP is now available
with Python 3.
There’s a new response.follow method
for creating requests; it is now a recommended way to create Requests
in Scrapy spiders. This method makes it easier to write correct
spiders; response.follow has several advantages over creating
scrapy.Request objects directly:
it handles relative URLs;
it works properly with non-ASCII URLs on non-UTF8 pages;
in addition to absolute and relative URLs it supports Selectors; for <a> elements it can also extract their href values.
For example, instead of this:
for href in response.css('li.page a::attr(href)').extract():
    url = response.urljoin(href)
    yield scrapy.Request(url, self.parse, encoding=response.encoding)
One can now write this:
for a in response.css('li.page a'):
    yield response.follow(a, self.parse)
Link extractors are also improved. They work similarly to what a regular
modern browser would do: leading and trailing whitespace are removed
from attributes (think href=" http://example.com") when building
Link objects. This whitespace-stripping also happens for action
attributes with FormRequest.
Please also note that link extractors do not canonicalize URLs by default anymore. This was puzzling users every now and then, and it’s not what browsers do in fact, so we removed that extra transformation on extracted links.
For those of you wanting more control on the Referer: header that Scrapy
sends when following links, you can set your own Referrer Policy.
Prior to Scrapy 1.4, the default RefererMiddleware would simply and
blindly set it to the URL of the response that generated the HTTP request
(which could leak information on your URL seeds).
By default, Scrapy now behaves much like your regular browser does.
And this policy is fully customizable with W3C standard values
(or with something really custom of your own if you wish).
See REFERRER_POLICY for details.
To make Scrapy spiders easier to debug, Scrapy logs more stats by default in 1.4: memory usage stats, detailed retry stats, detailed HTTP error code stats. A similar change is that HTTP cache path is also visible in logs now.
Last but not least, Scrapy now has the option to make JSON and XML items
more human-readable, with newlines between items and even custom indenting
offset, using the new FEED_EXPORT_INDENT setting.
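For illustration, a minimal settings.py sketch using the 1.4-era feed settings; the output file name is hypothetical:

# settings.py
FEED_FORMAT = 'json'
FEED_URI = 'items.json'
FEED_EXPORT_INDENT = 4  # pretty-print with 4-space indentation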
Enjoy! (Or read on for the rest of changes in this release.)
Deprecations and Backward Incompatible Changes¶
Default to canonicalize=False in scrapy.linkextractors.LinkExtractor (issue 2537, fixes issue 1941 and issue 1982): warning, this is technically backward-incompatible
Enable the memusage extension by default (issue 2539, fixes issue 2187); this is technically backward-incompatible so please check if you have any non-default MEMUSAGE_*** options set.
The EDITOR environment variable now takes precedence over the EDITOR option defined in settings.py (issue 1829); Scrapy default settings no longer depend on environment variables. This is technically a backward incompatible change.
Spider.make_requests_from_url is deprecated (issue 1728, fixes issue 1495).
New Features¶
Accept proxy credentials in the proxy request meta key (issue 2526)
Support brotli-compressed content; requires optional brotlipy (issue 2535)
New response.follow shortcut for creating requests (issue 1940)
Added flags argument and attribute to Request objects (issue 2047)
Support Anonymous FTP (issue 2342)
Added retry/count, retry/max_reached and retry/reason_count/<reason> stats to RetryMiddleware (issue 2543)
Added httperror/response_ignored_count and httperror/response_ignored_status_count/<status> stats to HttpErrorMiddleware (issue 2566)
Customizable Referrer policy in RefererMiddleware (issue 2306)
New data: URI download handler (issue 2334, fixes issue 2156)
Log cache directory when HTTP Cache is used (issue 2611, fixes issue 2604)
Warn users when a project contains duplicate spider names (fixes issue 2181)
scrapy.utils.datatypes.CaselessDict now accepts Mapping instances and not only dicts (issue 2646)
Media downloads, with FilesPipeline or ImagesPipeline, can now optionally handle HTTP redirects using the new MEDIA_ALLOW_REDIRECTS setting (issue 2616, fixes issue 2004)
Accept non-complete responses from websites using a new DOWNLOAD_FAIL_ON_DATALOSS setting (issue 2590, fixes issue 2586)
Optional pretty-printing of JSON and XML items via the FEED_EXPORT_INDENT setting (issue 2456, fixes issue 1327)
Allow dropping fields in FormRequest.from_response formdata when a None value is passed (issue 667)
Per-request retry times with the new max_retry_times meta key (issue 2642); see the sketch after this list
python -m scrapy as a more explicit alternative to the scrapy command (issue 2740)
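For illustration, a minimal sketch of the new meta key; the URL is hypothetical:

import scrapy

# Allow up to 5 retries for this one request, regardless of the
# project-wide RETRY_TIMES setting.
request = scrapy.Request(
    'https://flaky.example.com/data',
    meta={'max_retry_times': 5},
)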
Bug fixes¶
LinkExtractor now strips leading and trailing whitespaces from attributes (issue 2547, fixes issue 1614)
Properly handle whitespace in the action attribute in FormRequest (issue 2548)
Buffer CONNECT response bytes from proxy until all HTTP headers are received (issue 2495, fixes issue 2491)
FTP downloader now works on Python 3, provided you use Twisted>=17.1 (issue 2599)
Use body to choose response type after decompressing content (issue 2393, fixes issue 2145)
Always decompress Content-Encoding: gzip at the HttpCompressionMiddleware stage (issue 2391)
Respect custom log level in Spider.custom_settings (issue 2581, fixes issue 1612)
‘make htmlview’ fix for macOS (issue 2661)
Remove “commands” from the command list (issue 2695)
Fix duplicate Content-Length header for POST requests with empty body (issue 2677)
Properly cancel large downloads, i.e. above DOWNLOAD_MAXSIZE (issue 1616)
ImagesPipeline: fixed processing of transparent PNG images with palette (issue 2675)
Cleanups & Refactoring¶
Tests: remove temp files and folders (issue 2570), fixed ProjectUtilsTest on macOS (issue 2569), use portable pypy for Linux on Travis CI (issue 2710)
Separate building the request from _requests_to_follow in CrawlSpider (issue 2562)
Remove “Python 3 progress” badge (issue 2567)
Add a couple more lines to .gitignore (issue 2557)
Remove bumpversion prerelease configuration (issue 2159)
Add codecov.yml file (issue 2750)
Set context factory implementation based on Twisted version (issue 2577, fixes issue 2560)
Add omitted self arguments in the default project middleware template (issue 2595)
Remove redundant slot.add_request() call in ExecutionEngine (issue 2617)
Catch more specific os.error exception in scrapy.pipelines.files.FSFilesStore (issue 2644)
Change “localhost” test server certificate (issue 2720)
Remove unused MEMUSAGE_REPORT setting (issue 2576)
Documentation¶
Binary mode is required for exporters (issue 2564, fixes issue 2553)
Mention an issue with FormRequest.from_response due to a bug in lxml (issue 2572)
Use single quotes uniformly in templates (issue 2596)
Document ftp_user and ftp_password meta keys (issue 2587)
Removed section on deprecated contrib/ (issue 2636)
Recommend Anaconda when installing Scrapy on Windows (issue 2477, fixes issue 2475)
FAQ: rewrite note on Python 3 support on Windows (issue 2690)
Rearrange selector sections (issue 2705)
Remove __nonzero__ from SelectorList docs (issue 2683)
Mention how to disable request filtering in the documentation of the DUPEFILTER_CLASS setting (issue 2714)
Add sphinx_rtd_theme to docs setup readme (issue 2668)
Open file in text mode in JSON item writer example (issue 2729)
Clarify allowed_domains example (issue 2670)
Scrapy 1.3.3 (2017-03-10)¶
Bug fixes¶
Make SpiderLoader raise ImportError again by default for missing dependencies and wrong SPIDER_MODULES. These exceptions were silenced as warnings since 1.3.0. A new setting is introduced to toggle between warning or exception if needed; see SPIDER_LOADER_WARN_ONLY for details.
Scrapy 1.3.2 (2017-02-13)¶
Bug fixes¶
Preserve request class when converting to/from dicts (utils.reqser) (issue 2510).
Use consistent selectors for author field in tutorial (issue 2551).
Fix TLS compatibility in Twisted 17+ (issue 2558)
Scrapy 1.3.1 (2017-02-08)¶
New features¶
Support 'True' and 'False' string values for boolean settings (issue 2519); you can now do something like scrapy crawl myspider -s REDIRECT_ENABLED=False.
Support kwargs with response.xpath() to use XPath variables and ad-hoc namespace declarations; this requires at least Parsel v1.1 (issue 2457); see the sketch after this list.
Add support for Python 3.6 (issue 2485).
Run tests on PyPy (warning: some tests still fail, so PyPy is not supported yet).
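For illustration, a minimal sketch of XPath variables in a callback; the site, selector and class name are hypothetical:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # $cls is bound to the cls= keyword argument at query time.
        quotes = response.xpath('//span[@class=$cls]/text()', cls='text')
        for quote in quotes.extract():
            yield {'quote': quote}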
Bug fixes¶
Enforce the DNS_TIMEOUT setting (issue 2496).
Fix the view command; it was a regression in v1.3.0 (issue 2503).
Fix tests regarding *_EXPIRES settings with Files/Images pipelines (issue 2460).
Fix the name of the generated pipeline class when using the basic project template (issue 2466).
Fix compatibility with Twisted 17+ (issue 2496, issue 2528).
Fix scrapy.Item inheritance on Python 3.6 (issue 2511).
Enforce numeric values for component order in SPIDER_MIDDLEWARES, DOWNLOADER_MIDDLEWARES, EXTENSIONS and SPIDER_CONTRACTS (issue 2420).
Documentation¶
Reword Code of Conduct section and upgrade to Contributor Covenant v1.4 (issue 2469).
Clarify that passing spider arguments converts them to spider attributes (issue 2483).
Document the formid argument of FormRequest.from_response() (issue 2497).
Add .rst extension to README files (issue 2507).
Mention LevelDB cache storage backend (issue 2525).
Use yield in sample callback code (issue 2533).
Add a note about HTML entity decoding with .re()/.re_first() (issue 1704).
Typos (issue 2512, issue 2534, issue 2531).
Cleanups¶
Remove a redundant check in MetaRefreshMiddleware (issue 2542).
Faster checks in LinkExtractor for allow/deny patterns (issue 2538).
Remove dead code supporting old Twisted versions (issue 2544).
Scrapy 1.3.0 (2016-12-21)¶
This release comes rather soon after 1.2.2 for one main reason:
it was found that releases from 0.18 up to 1.2.2 (inclusive) use
some backported code from Twisted (scrapy.xlib.tx.*),
even if newer Twisted modules are available.
Scrapy now uses twisted.web.client and twisted.internet.endpoints directly.
(See also cleanups below.)
As it is a major change, we wanted to get the bug fix out quickly while not breaking any projects using the 1.2 series.
New Features¶
MailSender now accepts single strings as values for the to and cc arguments (issue 2272)
scrapy fetch url, scrapy shell url and fetch(url) inside the Scrapy shell now follow HTTP redirections by default (issue 2290); see fetch and shell for details.
HttpErrorMiddleware now logs errors with INFO level instead of DEBUG; this is technically backward incompatible so please check your log parsers.
By default, logger names now use a long-form path, e.g. [scrapy.extensions.logstats], instead of the shorter “top-level” variant of prior releases (e.g. [scrapy]); this is backward incompatible if you have log parsers expecting the short logger name part. You can switch back to short logger names by setting LOG_SHORT_NAMES to True.
Dependencies & Cleanups¶
Scrapy now requires Twisted >= 13.1 which is the case for many Linux distributions already.
As a consequence, we got rid of the scrapy.xlib.tx.* modules, which copied some Twisted code for users stuck with an “old” Twisted version
ChunkedTransferMiddleware is deprecated and removed from the default downloader middlewares.
Scrapy 1.2.3 (2017-03-03)¶
Packaging fix: disallow unsupported Twisted versions in setup.py
Scrapy 1.2.2 (2016-12-06)¶
Bug fixes¶
Fix a cryptic traceback when a pipeline fails on open_spider() (issue 2011)
Fix embedded IPython shell variables (fixing issue 396 that re-appeared in 1.2.0, fixed in issue 2418)
A couple of patches when dealing with robots.txt:
handle (non-standard) relative sitemap URLs (issue 2390)
handle non-ASCII URLs and User-Agents in Python 2 (issue 2373)
Documentation¶
Document the "download_latency" key in Request’s meta dict (issue 2033)
Remove the page on (deprecated & unsupported) Ubuntu packages from the ToC (issue 2335)
A few fixed typos (issue 2346, issue 2369, issue 2369, issue 2380) and clarifications (issue 2354, issue 2325, issue 2414)
Other changes¶
Advertise conda-forge as Scrapy’s official conda channel (issue 2387)
More helpful error messages when trying to use .css() or .xpath() on non-text responses (issue 2264)
The startproject command now generates a sample middlewares.py file (issue 2335)
Add more dependencies’ version info in scrapy version verbose output (issue 2404)
Remove all *.pyc files from the source distribution (issue 2386)
Scrapy 1.2.1 (2016-10-21)¶
Bug fixes¶
Include OpenSSL’s more permissive default ciphers when establishing TLS/SSL connections (issue 2314).
Fix “Location” HTTP header decoding on non-ASCII URL redirects (issue 2321).
Documentation¶
Fix JsonWriterPipeline example (issue 2302).
Various notes: issue 2330 on spider names, issue 2329 on middleware methods processing order, issue 2327 on getting multi-valued HTTP headers as lists.
Other changes¶
Removed www. from start_urls in built-in spider templates (issue 2299).
Scrapy 1.2.0 (2016-10-03)¶
New Features¶
New FEED_EXPORT_ENCODING setting to customize the encoding used when writing items to a file. This can be used to turn off \uXXXX escapes in JSON output. It is also useful for those wanting something other than UTF-8 for XML or CSV output (issue 2034).
The startproject command now supports an optional destination directory to override the default one based on the project name (issue 2005).
New SCHEDULER_DEBUG setting to log request serialization failures (issue 1610).
The JSON encoder now supports serialization of set instances (issue 2058).
Interpret application/json-amazonui-streaming as TextResponse (issue 1503).
scrapy is imported by default when using shell tools (shell, inspect_response) (issue 2248).
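As a minimal sketch of the two new settings above, in settings.py:

# settings.py
FEED_EXPORT_ENCODING = 'utf-8'  # keep non-ASCII characters instead of \uXXXX escapes in JSON output
SCHEDULER_DEBUG = True          # log requests that fail to serialize to the disk queues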
Bug fixes¶
DefaultRequestHeaders middleware now runs before UserAgent middleware (issue 2088). Warning: this is technically backward incompatible, though we consider this a bug fix.
HTTP cache extension and plugins that use the .scrapy data directory now work outside projects (issue 1581). Warning: this is technically backward incompatible, though we consider this a bug fix.
Selector does not allow passing both response and text anymore (issue 2153).
Fixed logging of wrong callback name with scrapy parse (issue 2169).
Fix for an odd gzip decompression bug (issue 1606).
Fix for selected callbacks when using CrawlSpider with scrapy parse (issue 2225).
Fix for invalid JSON and XML files when spider yields no items (issue 872).
Implement flush() for StreamLogger, avoiding a warning in logs (issue 2125).
Refactoring¶
canonicalize_url has been moved to w3lib.url (issue 2168).
Tests & Requirements¶
Scrapy’s new requirements baseline is Debian 8 “Jessie”. It was previously Ubuntu 12.04 Precise. What this means in practice is that we run continuous integration tests with these (main) package versions at a minimum: Twisted 14.0, pyOpenSSL 0.14, lxml 3.4.
Scrapy may very well work with older versions of these packages (the code base still has switches for older Twisted versions for example) but it is not guaranteed (because it’s not tested anymore).
Documentation¶
Grammar fixes: issue 2128, issue 1566.
Download stats badge removed from README (issue 2160).
New Scrapy architecture diagram (issue 2165).
Updated Response parameters documentation (issue 2197).
Reworded misleading RANDOMIZE_DOWNLOAD_DELAY description (issue 2190).
Add StackOverflow as a support channel (issue 2257).
Scrapy 1.1.4 (2017-03-03)¶
Packaging fix: disallow unsupported Twisted versions in setup.py
Scrapy 1.1.3 (2016-09-22)¶
Bug fixes¶
Class attributes for subclasses of ImagesPipeline and FilesPipeline work as they did before 1.1.1 (issue 2243, fixes issue 2198)
Documentation¶
Overview and tutorial rewritten to use http://toscrape.com websites (issue 2236, issue 2249, issue 2252).
Scrapy 1.1.2 (2016-08-18)¶
Bug fixes¶
Introduce a missing IMAGES_STORE_S3_ACL setting to override the default ACL policy in ImagesPipeline when uploading images to S3 (note that the default ACL policy is “private” – instead of “public-read” – since Scrapy 1.1.0)
IMAGES_EXPIRES default value set back to 90 (the regression was introduced in 1.1.1)
Scrapy 1.1.1 (2016-07-13)¶
Bug fixes¶
Add “Host” header in CONNECT requests to HTTPS proxies (issue 2069)
Use response body when choosing response class (issue 2001, fixes issue 2000)
Do not fail on canonicalizing URLs with wrong netlocs (issue 2038, fixes issue 2010)
A few fixes for HttpCompressionMiddleware (and SitemapSpider):
Do not decode HEAD responses (issue 2008, fixes issue 1899)
Handle charset parameter in gzip Content-Type header (issue 2050, fixes issue 2049)
Do not decompress gzip octet-stream responses (issue 2065, fixes issue 2063)
Catch (and ignore with a warning) exception when verifying certificate against IP-address hosts (issue 2094, fixes issue 2092)
Make FilesPipeline and ImagesPipeline backward compatible again regarding the use of legacy class attributes for customization (issue 1989, fixes issue 1985)
New features¶
Enable genspider command outside project folder (issue 2052)
Retry HTTPS CONNECT TunnelError by default (issue 1974)
Documentation¶
FEED_TEMPDIR setting at lexicographical position (commit 9b3c72c)
Use idiomatic .extract_first() in overview (issue 1994)
Update years in copyright notice (commit c2c8036)
Add information and example on errbacks (issue 1995)
Use “url” variable in downloader middleware example (issue 2015)
Grammar fixes (issue 2054, issue 2120)
New FAQ entry on using BeautifulSoup in spider callbacks (issue 2048)
Add notes about Scrapy not working on Windows with Python 3 (issue 2060)
Encourage complete titles in pull requests (issue 2026)
Tests¶
Upgrade py.test requirement on Travis CI and pin pytest-cov to 2.2.1 (issue 2095)
Scrapy 1.1.0 (2016-05-11)¶
This 1.1 release brings a lot of interesting features and bug fixes:
Scrapy 1.1 has beta Python 3 support (requires Twisted >= 15.5). See Beta Python 3 Support for more details and some limitations.
Hot new features:
Item loaders now support nested loaders (issue 1467).
FormRequest.from_response improvements (issue 1382, issue 1137).
Added setting AUTOTHROTTLE_TARGET_CONCURRENCY and improved AutoThrottle docs (issue 1324).
Added response.text to get the body as unicode (issue 1730).
Anonymous S3 connections (issue 1358).
Deferreds in downloader middlewares (issue 1473). This enables better robots.txt handling (issue 1471).
HTTP caching now follows RFC2616 more closely; added settings HTTPCACHE_ALWAYS_STORE and HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS (issue 1151).
Selectors were extracted to the parsel library (issue 1409). This means you can use Scrapy Selectors without Scrapy and also upgrade the selectors engine without needing to upgrade Scrapy.
HTTPS downloader now does TLS protocol negotiation by default, instead of forcing TLS 1.0. You can also set the SSL/TLS method using the new DOWNLOADER_CLIENT_TLS_METHOD.
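As a quick illustration of response.text, here is a minimal callback sketch (the spider name and start URL are illustrative):

import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # response.text is the body decoded to unicode;
        # response.body remains the raw bytes
        self.logger.info('Got %d characters', len(response.text))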
These bug fixes may require your attention:
Don’t retry bad requests (HTTP 400) by default (issue 1289). If you need the old behavior, add 400 to RETRY_HTTP_CODES.
Fix shell files argument handling (issue 1710, issue 1550). If you try scrapy shell index.html it will try to load the URL http://index.html; use scrapy shell ./index.html to load a local file.
Robots.txt compliance is now enabled by default for newly-created projects (issue 1724). Scrapy will also wait for robots.txt to be downloaded before proceeding with the crawl (issue 1735). If you want to disable this behavior, update ROBOTSTXT_OBEY in the settings.py file after creating a new project.
Exporters now work on unicode, instead of bytes, by default (issue 1080). If you use PythonItemExporter, you may want to update your code to disable binary mode, which is now deprecated.
Accept XML node names containing dots as valid (issue 1533).
When uploading files or images to S3 (with FilesPipeline or ImagesPipeline), the default ACL policy is now “private” instead of “public”. Warning: backward incompatible! You can use FILES_STORE_S3_ACL to change it.
We’ve reimplemented canonicalize_url() for more correct output, especially for URLs with non-ASCII characters (issue 1947). This could change link extractor output compared to previous Scrapy versions. It may also invalidate some cache entries you could still have from pre-1.1 runs. Warning: backward incompatible!
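Projects that depend on the pre-1.1 behavior can opt out in settings.py; a minimal sketch combining the settings mentioned above (the retry list shown is the default one at the time plus 400):

# settings.py
ROBOTSTXT_OBEY = False  # do not wait for or obey robots.txt
RETRY_HTTP_CODES = [500, 502, 503, 504, 408, 400]  # retry HTTP 400 again
FILES_STORE_S3_ACL = 'public-read'  # restore a public ACL for S3 uploads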
Keep reading for more details on other improvements and bug fixes.
Beta Python 3 Support¶
We have been hard at work to make Scrapy run on Python 3. As a result, now you can run spiders on Python 3.3, 3.4 and 3.5 (Twisted >= 15.5 required). Some features are still missing (and some may never be ported).
Almost all builtin extensions/middlewares are expected to work. However, we are aware of some limitations in Python 3:
Scrapy does not work on Windows with Python 3
Sending emails is not supported
FTP download handler is not supported
Telnet console is not supported
Additional New Features and Enhancements¶
Scrapy now has a Code of Conduct (issue 1681).
Command line tool now has completion for zsh (issue 934).
Improvements to scrapy shell:
Support for bpython and configuring the preferred Python shell via SCRAPY_PYTHON_SHELL (issue 1100, issue 1444).
Support URLs without scheme (issue 1498). Warning: backward incompatible!
Bring back support for relative file paths (issue 1710, issue 1550).
Added MEMUSAGE_CHECK_INTERVAL_SECONDS setting to change the default check interval (issue 1282).
Download handlers are now lazy-loaded on first request using their scheme (issue 1390, issue 1421).
HTTPS download handlers do not force TLS 1.0 anymore; instead, OpenSSL’s SSLv23_method()/TLS_method() is used, allowing it to negotiate the highest TLS protocol version the remote host supports (issue 1794, issue 1629).
RedirectMiddleware now skips the status codes from handle_httpstatus_list, whether set as a spider attribute or in Request’s meta key (issue 1334, issue 1364, issue 1447).
Form submission:
now works with <button> elements too (issue 1469).
an empty string is now used for submit buttons without a value (issue 1472).
Dict-like settings now have per-key priorities (issue 1135, issue 1149 and issue 1586).
Sending non-ASCII emails (issue 1662)
CloseSpider and SpiderState extensions now get disabled if no relevant setting is set (issue 1723, issue 1725).
Added method ExecutionEngine.close (issue 1423).
Added method CrawlerRunner.create_crawler (issue 1528).
Scheduler priority queue can now be customized via SCHEDULER_PRIORITY_QUEUE (issue 1822).
.pps links are now ignored by default in link extractors (issue 1835).
The temporary data folder for FTP and S3 feed storages can be customized using a new FEED_TEMPDIR setting (issue 1847).
FilesPipeline and ImagesPipeline settings are now instance attributes instead of class attributes, enabling spider-specific behaviors (issue 1891).
JsonItemExporter now formats opening and closing square brackets on their own lines (first and last lines of the output file) (issue 1950).
If available, botocore is used for S3FeedStorage, S3DownloadHandler and S3FilesStore (issue 1761, issue 1883).
Tons of documentation updates and related fixes (issue 1291, issue 1302, issue 1335, issue 1683, issue 1660, issue 1642, issue 1721, issue 1727, issue 1879).
Other refactoring, optimizations and cleanup (issue 1476, issue 1481, issue 1477, issue 1315, issue 1290, issue 1750, issue 1881).
Deprecations and Removals¶
Added to_bytes and to_unicode; deprecated str_to_unicode and unicode_to_str functions (issue 778).
binary_is_text is introduced, to replace the use of isbinarytext (but with inverse return value) (issue 1851).
The optional_features set has been removed (issue 1359).
The --lsprof command line option has been removed (issue 1689). Warning: backward incompatible, but doesn’t break user code.
The following datatypes were deprecated (issue 1720):
scrapy.utils.datatypes.MultiValueDictKeyError
scrapy.utils.datatypes.MultiValueDict
scrapy.utils.datatypes.SiteNode
The previously bundled scrapy.xlib.pydispatch library was deprecated and replaced by pydispatcher.
Relocations¶
telnetconsole was relocated to extensions/ (issue 1524).
Note: telnet is not enabled on Python 3 (https://github.com/scrapy/scrapy/pull/1524#issuecomment-146985595)
Bugfixes¶
Scrapy does not retry requests that got an HTTP 400 Bad Request response anymore (issue 1289). Warning: backward incompatible!
Support empty password for http_proxy config (issue 1274).
Interpret application/x-json as TextResponse (issue 1333).
Support link rel attribute with multiple values (issue 1201).
Fixed scrapy.http.FormRequest.from_response when there is a <base> tag (issue 1564).
Fixed TEMPLATES_DIR handling (issue 1575).
Various FormRequest fixes (issue 1595, issue 1596, issue 1597).
Makes _monkeypatches more robust (issue 1634).
Fixed bug on XMLItemExporter with non-string fields in items (issue 1738).
Fixed startproject command on macOS (issue 1635).
Fixed PythonItemExporter and CSVExporter for non-string item types (issue 1737).
Various logging related fixes (issue 1294, issue 1419, issue 1263, issue 1624, issue 1654, issue 1722, issue 1726 and issue 1303).
Fixed bug in utils.template.render_templatefile() (issue 1212).
Sitemaps extraction from robots.txt is now case-insensitive (issue 1902).
HTTPS+CONNECT tunnels could get mixed up when using multiple proxies to the same remote host (issue 1912).
Scrapy 1.0.7 (2017-03-03)¶
Packaging fix: disallow unsupported Twisted versions in setup.py
Scrapy 1.0.6 (2016-05-04)¶
FIX: RetryMiddleware is now robust to non-standard HTTP status codes (issue 1857)
FIX: Filestorage HTTP cache was checking wrong modified time (issue 1875)
DOC: Support for Sphinx 1.4+ (issue 1893)
DOC: Consistency in selectors examples (issue 1869)
Scrapy 1.0.5 (2016-02-04)¶
FIX: [Backport] Ignore bogus links in LinkExtractors (fixes issue 907, commit 108195e)
TST: Changed buildbot makefile to use ‘pytest’ (commit 1f3d90a)
DOC: Fixed typos in tutorial and media-pipeline (commit 808a9ea and commit 803bd87)
DOC: Add AjaxCrawlMiddleware to DOWNLOADER_MIDDLEWARES_BASE in settings docs (commit aa94121)
Scrapy 1.0.4 (2015-12-30)¶
Ignoring xlib/tx folder, depending on Twisted version. (commit 7dfa979)
Run on new travis-ci infra (commit 6e42f0b)
Spelling fixes (commit 823a1cc)
escape nodename in xmliter regex (commit da3c155)
test xml nodename with dots (commit 4418fc3)
TST don’t use broken Pillow version in tests (commit a55078c)
disable log on version command. closes #1426 (commit 86fc330)
disable log on startproject command (commit db4c9fe)
Add PyPI download stats badge (commit df2b944)
don’t run tests twice on Travis if a PR is made from a scrapy/scrapy branch (commit a83ab41)
Add Python 3 porting status badge to the README (commit 73ac80d)
fixed RFPDupeFilter persistence (commit 97d080e)
TST a test to show that dupefilter persistence is not working (commit 97f2fb3)
explicit close file on file:// scheme handler (commit d9b4850)
Disable dupefilter in shell (commit c0d0734)
DOC: Add captions to toctrees which appear in sidebar (commit aa239ad)
DOC Removed pywin32 from install instructions as it’s already declared as dependency. (commit 10eb400)
Added installation notes about using Conda for Windows and other OSes. (commit 1c3600a)
Fixed minor grammar issues. (commit 7f4ddd5)
fixed a typo in the documentation. (commit b71f677)
Version 1 now exists (commit 5456c0e)
fix another invalid xpath error (commit 0a1366e)
fix ValueError: Invalid XPath: //div/[id=”not-exists”]/text() on selectors.rst (commit ca8d60f)
Typos corrections (commit 7067117)
fix typos in downloader-middleware.rst and exceptions.rst, middlware -> middleware (commit 32f115c)
Add note to Ubuntu install section about Debian compatibility (commit 23fda69)
Replace alternative macOS install workaround with virtualenv (commit 98b63ee)
Reference Homebrew’s homepage for installation instructions (commit 1925db1)
Add oldest supported tox version to contributing docs (commit 5d10d6d)
Note in install docs about pip being already included in python>=2.7.9 (commit 85c980e)
Add non-python dependencies to Ubuntu install section in the docs (commit fbd010d)
Add macOS installation section to docs (commit d8f4cba)
DOC(ENH): specify path to rtd theme explicitly (commit de73b1a)
minor: scrapy.Spider docs grammar (commit 1ddcc7b)
Make common practices sample code match the comments (commit 1b85bcf)
nextcall repetitive calls (heartbeats). (commit 55f7104)
Backport fix compatibility with Twisted 15.4.0 (commit b262411)
pin pytest to 2.7.3 (commit a6535c2)
Merge pull request #1512 from mgedmin/patch-1 (commit 8876111)
Merge pull request #1513 from mgedmin/patch-2 (commit 5d4daf8)
Typo (commit f8d0682)
Fix list formatting (commit 5f83a93)
fix Scrapy squeue tests after recent changes to queuelib (commit 3365c01)
Merge pull request #1475 from rweindl/patch-1 (commit 2d688cd)
Update tutorial.rst (commit fbc1f25)
Merge pull request #1449 from rhoekman/patch-1 (commit 7d6538c)
Small grammatical change (commit 8752294)
Add openssl version to version command (commit 13c45ac)
Scrapy 1.0.3 (2015-08-11)¶
add service_identity to Scrapy install_requires (commit cbc2501)
Workaround for travis#296 (commit 66af9cd)
Scrapy 1.0.2 (2015-08-06)¶
Twisted 15.3.0 does not raise PicklingError serializing lambda functions (commit b04dd7d)
Minor method name fix (commit 6f85c7f)
minor: scrapy.Spider grammar and clarity (commit 9c9d2e0)
Put a blurb about support channels in CONTRIBUTING (commit c63882b)
Fixed typos (commit a9ae7b0)
Fix doc reference. (commit 7c8a4fe)
Scrapy 1.0.1 (2015-07-01)¶
Unquote request path before passing to FTPClient; it already escapes paths (commit cc00ad2)
include tests/ to source distribution in MANIFEST.in (commit eca227e)
DOC Fix SelectJmes documentation (commit b8567bc)
DOC Bring Ubuntu and Archlinux outside of Windows subsection (commit 392233f)
DOC remove version suffix from Ubuntu package (commit 5303c66)
DOC Update release date for 1.0 (commit c89fa29)
Scrapy 1.0.0 (2015-06-19)¶
You will find a lot of new features and bugfixes in this major release. Make sure to check our updated overview to get a glimpse of some of the changes, along with our brushed-up tutorial.
Support for returning dictionaries in spiders¶
Declaring and returning Scrapy Items is no longer necessary to collect the scraped data from your spider; you can now return explicit dictionaries instead.
Classic version

import scrapy

class MyItem(scrapy.Item):
    url = scrapy.Field()

class MySpider(scrapy.Spider):
    def parse(self, response):
        return MyItem(url=response.url)
New version

class MySpider(scrapy.Spider):
    def parse(self, response):
        return {'url': response.url}
Per-spider settings (GSoC 2014)¶
The last Google Summer of Code project accomplished an important redesign of the mechanism used for populating settings, introducing explicit priorities to override any given setting. As an extension of that goal, we included a new level of priority for settings that act exclusively for a single spider, allowing them to redefine project settings.
Start using it by defining a custom_settings class variable in your spider:

class MySpider(scrapy.Spider):
    custom_settings = {
        "DOWNLOAD_DELAY": 5.0,
        "RETRY_ENABLED": False,
    }
Read more about settings population: Settings
Python Logging¶
Scrapy 1.0 has moved away from Twisted logging to use Python’s built-in logging as the default logging system. We’re maintaining backward compatibility for most of the old custom interface for calling logging functions, but you’ll get warnings to switch to the Python logging API entirely.
Old version
from scrapy import log
log.msg('MESSAGE', log.INFO)
New version
import logging
logging.info('MESSAGE')
Logging with spiders remains the same, but on top of the log() method you’ll have access to a custom logger created for the spider to issue log events:
class MySpider(scrapy.Spider):
    def parse(self, response):
        self.logger.info('Response received')
Read more in the logging documentation: Logging
Crawler API refactoring (GSoC 2014)¶
Another milestone for last Google Summer of Code was a refactoring of the internal API, seeking a simpler and easier usage. Check new core interface in: Core API
A common situation where you will face these changes is while running Scrapy from scripts. Here’s a quick example of how to run a Spider manually with the new API:
from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(MySpider)
process.start()
Bear in mind this feature is still under development and its API may change until it reaches a stable status.
See more examples for scripts running Scrapy: Common Practices
Module Relocations¶
There’s been a large rearrangement of modules to improve the general
structure of Scrapy. The main changes were separating various subpackages into
new projects and dissolving both scrapy.contrib and scrapy.contrib_exp
into top-level packages. Backward compatibility was kept for internal
relocations; importing deprecated modules triggers warnings indicating
their new location.
Full list of relocations¶
Outsourced packages
Note
These extensions went through some minor changes, e.g. some setting names were changed. Please check the documentation in each new repository to get familiar with the new usage.
| Old location | New location |
|---|---|
| scrapy.commands.deploy | scrapyd-client (See other alternatives here: Deploying Spiders) |
| scrapy.contrib.djangoitem | scrapy-djangoitem |
| scrapy.webservice | scrapy-jsonrpc |
scrapy.contrib_exp and scrapy.contrib dissolutions
| Old location | New location |
|---|---|
| scrapy.contrib_exp.downloadermiddleware.decompression | scrapy.downloadermiddlewares.decompression |
| scrapy.contrib_exp.iterators | scrapy.utils.iterators |
| scrapy.contrib.downloadermiddleware | scrapy.downloadermiddlewares |
| scrapy.contrib.exporter | scrapy.exporters |
| scrapy.contrib.linkextractors | scrapy.linkextractors |
| scrapy.contrib.loader | scrapy.loader |
| scrapy.contrib.loader.processor | scrapy.loader.processors |
| scrapy.contrib.pipeline | scrapy.pipelines |
| scrapy.contrib.spidermiddleware | scrapy.spidermiddlewares |
| scrapy.contrib.spiders | scrapy.spiders |
| scrapy.contrib (remaining extensions) | scrapy.extensions.* |
Plural renames and Modules unification
| Old location | New location |
|---|---|
| scrapy.command | scrapy.commands |
| scrapy.dupefilter | scrapy.dupefilters |
| scrapy.linkextractor | scrapy.linkextractors |
| scrapy.spider | scrapy.spiders |
| scrapy.squeue | scrapy.squeues |
| scrapy.statscol | scrapy.statscollectors |
| scrapy.utils.decorator | scrapy.utils.decorators |
Class renames
| Old location | New location |
|---|---|
| scrapy.spidermanager.SpiderManager | scrapy.spiderloader.SpiderLoader |
Settings renames
| Old location | New location |
|---|---|
| SPIDER_MANAGER_CLASS | SPIDER_LOADER_CLASS |
Changelog¶
New Features and Enhancements
Python logging (issue 1060, issue 1235, issue 1236, issue 1240, issue 1259, issue 1278, issue 1286)
FEED_EXPORT_FIELDS option (issue 1159, issue 1224)
Dns cache size and timeout options (issue 1132)
support namespace prefix in xmliter_lxml (issue 963)
Reactor threadpool max size setting (issue 1123)
Allow spiders to return dicts. (issue 1081)
Add Response.urljoin() helper (issue 1086); see the sketch after this list
look in ~/.config/scrapy.cfg for user config (issue 1098)
handle TLS SNI (issue 1101)
SelectorList extract first (issue 624, issue 1145)
Added JmesSelect (issue 1016)
add gzip compression to filesystem http cache backend (issue 1020)
CSS support in link extractors (issue 983)
httpcache dont_cache meta #19 #689 (issue 821)
add signal to be sent when request is dropped by the scheduler (issue 961)
avoid downloading large responses (issue 946)
Allow to specify the quotechar in CSVFeedSpider (issue 882)
Add referer to “Spider error processing” log message (issue 795)
process robots.txt once (issue 896)
GSoC Per-spider settings (issue 854)
Add project name validation (issue 817)
GSoC API cleanup (issue 816, issue 1128, issue 1147, issue 1148, issue 1156, issue 1185, issue 1187, issue 1258, issue 1268, issue 1276, issue 1285, issue 1284)
Be more responsive with IO operations (issue 1074 and issue 1075)
Do leveldb compaction for httpcache on closing (issue 1297)
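For the Response.urljoin() helper above, a minimal usage sketch (the spider and URLs are illustrative):

import scrapy

class LinksSpider(scrapy.Spider):
    name = 'links'
    start_urls = ['http://example.com/']

    def parse(self, response):
        # urljoin() resolves hrefs relative to response.url
        for href in response.xpath('//a/@href').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse)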
Deprecations and Removals
Deprecate htmlparser link extractor (issue 1205)
remove deprecated code from FeedExporter (issue 1155)
a leftover for 0.15 compatibility (issue 925)
drop support for CONCURRENT_REQUESTS_PER_SPIDER (issue 895)
Drop old engine code (issue 911)
Deprecate SgmlLinkExtractor (issue 777)
Relocations
Move exporters/__init__.py to exporters.py (issue 1242)
Move base classes to their packages (issue 1218, issue 1233)
Module relocation (issue 1181, issue 1210)
rename SpiderManager to SpiderLoader (issue 1166)
Remove djangoitem (issue 1177)
remove scrapy deploy command (issue 1102)
dissolve contrib_exp (issue 1134)
Deleted bin folder from root, fixes #913 (issue 914)
Remove jsonrpc based webservice (issue 859)
Move Test cases under project root dir (issue 827, issue 841)
Fix backward incompatibility for relocated paths in settings (issue 1267)
Documentation
CrawlerProcess documentation (issue 1190)
Favoring web scraping over screen scraping in the descriptions (issue 1188)
Some improvements for Scrapy tutorial (issue 1180)
Documenting Files Pipeline together with Images Pipeline (issue 1150)
deployment docs tweaks (issue 1164)
Added deployment section covering scrapyd-deploy and shub (issue 1124)
Adding more settings to project template (issue 1073)
some improvements to overview page (issue 1106)
Updated link in docs/topics/architecture.rst (issue 647)
DOC reorder topics (issue 1022)
updating list of Request.meta special keys (issue 1071)
DOC document download_timeout (issue 898)
DOC simplify extension docs (issue 893)
Leaks docs (issue 894)
DOC document from_crawler method for item pipelines (issue 904)
Spider_error doesn’t support deferreds (issue 1292)
Corrections & Sphinx related fixes (issue 1220, issue 1219, issue 1196, issue 1172, issue 1171, issue 1169, issue 1160, issue 1154, issue 1127, issue 1112, issue 1105, issue 1041, issue 1082, issue 1033, issue 944, issue 866, issue 864, issue 796, issue 1260, issue 1271, issue 1293, issue 1298)
Bugfixes
Item multi inheritance fix (issue 353, issue 1228)
ItemLoader.load_item: iterate over copy of fields (issue 722)
Fix Unhandled error in Deferred (RobotsTxtMiddleware) (issue 1131, issue 1197)
Force to read DOWNLOAD_TIMEOUT as int (issue 954)
scrapy.utils.misc.load_object should print full traceback (issue 902)
Fix bug for “.local” host name (issue 878)
Fix for Enabled extensions, middlewares, pipelines info not printed anymore (issue 879)
fix dont_merge_cookies bad behaviour when set to false on meta (issue 846)
Python 3 In Progress Support
disable scrapy.telnet if twisted.conch is not available (issue 1161)
fix Python 3 syntax errors in ajaxcrawl.py (issue 1162)
more python3 compatibility changes for urllib (issue 1121)
assertItemsEqual was renamed to assertCountEqual in Python 3. (issue 1070)
Import unittest.mock if available. (issue 1066)
updated deprecated cgi.parse_qsl to use six’s parse_qsl (issue 909)
Prevent Python 3 port regressions (issue 830)
PY3: use MutableMapping for python 3 (issue 810)
PY3: use six.BytesIO and six.moves.cStringIO (issue 803)
PY3: fix xmlrpclib and email imports (issue 801)
PY3: use six for robotparser and urlparse (issue 800)
PY3: use six.iterkeys, six.iteritems, and tempfile (issue 799)
PY3: fix has_key and use six.moves.configparser (issue 798)
PY3: use six.moves.cPickle (issue 797)
PY3 make it possible to run some tests in Python3 (issue 776)
Tests
remove unnecessary lines from py3-ignores (issue 1243)
Fix remaining warnings from pytest while collecting tests (issue 1206)
Add docs build to travis (issue 1234)
TST don’t collect tests from deprecated modules. (issue 1165)
install service_identity package in tests to prevent warnings (issue 1168)
Fix deprecated settings API in tests (issue 1152)
Add test for webclient with POST method and no body given (issue 1089)
py3-ignores.txt supports comments (issue 1044)
modernize some of the asserts (issue 835)
selector.__repr__ test (issue 779)
Code refactoring
CSVFeedSpider cleanup: use iterate_spider_output (issue 1079)
remove unnecessary check from scrapy.utils.spider.iter_spider_output (issue 1078)
Pydispatch pep8 (issue 992)
Removed unused ‘load=False’ parameter from walk_modules() (issue 871)
For consistency, use job_dir helper in SpiderState extension (issue 805)
rename “sflo” local variables to less cryptic “log_observer” (issue 775)
Scrapy 0.24.6 (2015-04-20)¶
encode invalid xpath with unicode_escape under PY2 (commit 07cb3e5)
fix IPython shell scope issue and load IPython user config (commit 2c8e573)
Fix small typo in the docs (commit d694019)
Fix small typo (commit f92fa83)
Converted sel.xpath() calls to response.xpath() in Extracting the data (commit c2c6d15)
Scrapy 0.24.5 (2015-02-25)¶
Support new _getEndpoint Agent signatures on Twisted 15.0.0 (commit 540b9bc)
DOC a couple more references are fixed (commit b4c454b)
DOC fix a reference (commit e3c1260)
t.i.b.ThreadedResolver is now a new-style class (commit 9e13f42)
S3DownloadHandler: fix auth for requests with quoted paths/query params (commit cdb9a0b)
fixed the variable types in mailsender documentation (commit bb3a848)
Reset items_scraped instead of item_count (commit edb07a4)
Tentative attention message about what document to read for contributions (commit 7ee6f7a)
mitmproxy 0.10.1 needs netlib 0.10.1 too (commit 874fcdd)
pin mitmproxy 0.10.1 as >0.11 does not work with tests (commit c6b21f0)
Test the parse command locally instead of against an external url (commit c3a6628)
Patches Twisted issue while closing the connection pool on HTTPDownloadHandler (commit d0bf957)
Updates documentation on dynamic item classes. (commit eeb589a)
Merge pull request #943 from Lazar-T/patch-3 (commit 5fdab02)
typo (commit b0ae199)
pywin32 is required by Twisted. closes #937 (commit 5cb0cfb)
Update install.rst (commit 781286b)
Merge pull request #928 from Lazar-T/patch-1 (commit b415d04)
comma instead of fullstop (commit 627b9ba)
Merge pull request #885 from jsma/patch-1 (commit de909ad)
Update request-response.rst (commit 3f3263d)
SgmlLinkExtractor - fix for parsing <area> tag with Unicode present (commit 49b40f0)
Scrapy 0.24.4 (2014-08-09)¶
pem file is used by mockserver and required by scrapy bench (commit 5eddc68)
scrapy bench needs scrapy.tests* (commit d6cb999)
Scrapy 0.24.3 (2014-08-09)¶
no need to waste travis-ci time on py3 for 0.24 (commit 8e080c1)
Update installation docs (commit 1d0c096)
There is a trove classifier for Scrapy framework! (commit 4c701d7)
update other places where w3lib version is mentioned (commit d109c13)
Update w3lib requirement to 1.8.0 (commit 39d2ce5)
Use w3lib.html.replace_entities() (remove_entities() is deprecated) (commit 180d3ad)
set zip_safe=False (commit a51ee8b)
do not ship tests package (commit ee3b371)
scrapy.bat is not needed anymore (commit c3861cf)
Modernize setup.py (commit 362e322)
headers can not handle non-string values (commit 94a5c65)
fix ftp test cases (commit a274a7f)
The sum up of travis-ci builds are taking like 50min to complete (commit ae1e2cc)
Update shell.rst typo (commit e49c96a)
removes weird indentation in the shell results (commit 1ca489d)
improved explanations, clarified blog post as source, added link for XPath string functions in the spec (commit 65c8f05)
renamed UserTimeoutError and ServerTimeouterror #583 (commit 037f6ab)
adding some xpath tips to selectors docs (commit 2d103e0)
fix tests to account for https://github.com/scrapy/w3lib/pull/23 (commit f8d366a)
get_func_args maximum recursion fix #728 (commit 81344ea)
Updated input/output processor example according to #560. (commit f7c4ea8)
Fixed Python syntax in tutorial. (commit db59ed9)
Add test case for tunneling proxy (commit f090260)
Bugfix for leaking Proxy-Authorization header to remote host when using tunneling (commit d8793af)
Extract links from XHTML documents with MIME-Type “application/xml” (commit ed1f376)
Merge pull request #793 from roysc/patch-1 (commit 91a1106)
Fix typo in commands.rst (commit 743e1e2)
better testcase for settings.overrides.setdefault (commit e22daaf)
Using CRLF as line marker according to http 1.1 definition (commit 5ec430b)
Scrapy 0.24.2 (2014-07-08)¶
Use a mutable mapping to proxy deprecated settings.overrides and settings.defaults attribute (commit e5e8133)
there is not support for python3 yet (commit 3cd6146)
Update python compatible version set to Debian packages (commit fa5d76b)
DOC fix formatting in release notes (commit c6a9e20)
Scrapy 0.24.1 (2014-06-27)¶
Fix deprecated CrawlerSettings and increase backward compatibility with .defaults attribute (commit 8e3f20a)
Scrapy 0.24.0 (2014-06-26)¶
Enhancements¶
Add new lxml based LinkExtractor to replace unmaintained SgmlLinkExtractor (issue 559, issue 761, issue 763)
Cleanup settings API - part of per-spider settings GSoC project (issue 737)
Add UTF8 encoding header to templates (issue 688, issue 762)
Telnet console now binds to 127.0.0.1 by default (issue 699)
Update Debian/Ubuntu install instructions (issue 509, issue 549)
Disable smart strings in lxml XPath evaluations (issue 535)
Restore filesystem based cache as default for http cache middleware (issue 541, issue 500, issue 571)
Expose current crawler in Scrapy shell (issue 557)
Improve testsuite comparing CSV and XML exporters (issue 570)
New offsite/filtered and offsite/domains stats (issue 566)
Support process_links as generator in CrawlSpider (issue 555)
Verbose logging and new stats counters for DupeFilter (issue 553)
Add a mimetype parameter to MailSender.send() (issue 602)
Generalize file pipeline log messages (issue 622)
Replace unencodeable codepoints with html entities in SGMLLinkExtractor (issue 565)
Converted SEP documents to rst format (issue 629, issue 630, issue 638, issue 632, issue 636, issue 640, issue 635, issue 634, issue 639, issue 637, issue 631, issue 633, issue 641, issue 642)
Tests and docs for clickdata’s nr index in FormRequest (issue 646, issue 645)
Allow to disable a downloader handler just like any other component (issue 650)
Log when a request is discarded after too many redirections (issue 654)
Log error responses if they are not handled by spider callbacks (issue 612, issue 656)
Add content-type check to http compression mw (issue 193, issue 660)
Run pypy tests using latest pypy from ppa (issue 674)
Run test suite using pytest instead of trial (issue 679)
Build docs and check for dead links in tox environment (issue 687)
Make scrapy.version_info a tuple of integers (issue 681, issue 692)
Infer exporter’s output format from filename extensions (issue 546, issue 659, issue 760)
Support case-insensitive domains in url_is_from_any_domain() (issue 693)
Remove pep8 warnings in project and spider templates (issue 698)
Tests and docs for request_fingerprint function (issue 597)
Update SEP-19 for GSoC project per-spider settings (issue 705)
Set exit code to non-zero when contracts fail (issue 727)
Add a setting to control what class is instantiated as Downloader component (issue 738)
Pass response in item_dropped signal (issue 724)
Improve scrapy check contracts command (issue 733, issue 752)
Document spider.closed() shortcut (issue 719)
Document request_scheduled signal (issue 746)
Add a note about reporting security issues (issue 697)
Add LevelDB http cache storage backend (issue 626, issue 500)
Sort spider list output of scrapy list command (issue 742)
Multiple documentation enhancements and fixes (issue 575, issue 587, issue 590, issue 596, issue 610, issue 617, issue 618, issue 627, issue 613, issue 643, issue 654, issue 675, issue 663, issue 711, issue 714)
Bugfixes¶
Encode unicode URL value when creating Links in RegexLinkExtractor (issue 561)
Ignore None values in ItemLoader processors (issue 556)
Fix link text when there is an inner tag in SGMLLinkExtractor and HtmlParserLinkExtractor (issue 485, issue 574)
Fix wrong checks on subclassing of deprecated classes (issue 581, issue 584)
Handle errors caused by inspect.stack() failures (issue 582)
Fix a reference to nonexistent engine attribute (issue 593, issue 594)
Fix dynamic itemclass example usage of type() (issue 603)
Use lucasdemarchi/codespell to fix typos (issue 628)
Fix default value of attrs argument in SgmlLinkExtractor to be tuple (issue 661)
Fix XXE flaw in sitemap reader (issue 676)
Fix engine to support filtered start requests (issue 707)
Fix offsite middleware case on urls with no hostnames (issue 745)
Testsuite doesn’t require PIL anymore (issue 585)
Scrapy 0.22.2 (released 2014-02-14)¶
fix a reference to unexistent engine.slots. closes #593 (commit 13c099a)
downloaderMW doc typo (spiderMW doc copy remnant) (commit 8ae11bf)
Correct typos (commit 1346037)
Scrapy 0.22.1 (released 2014-02-08)¶
localhost666 can resolve under certain circumstances (commit 2ec2279)
test inspect.stack failure (commit cc3eda3)
Handle cases when inspect.stack() fails (commit 8cb44f9)
Fix wrong checks on subclassing of deprecated classes. closes #581 (commit 46d98d6)
Docs: 4-space indent for final spider example (commit 13846de)
Fix HtmlParserLinkExtractor and tests after #485 merge (commit 368a946)
BaseSgmlLinkExtractor: Fixed the missing space when the link has an inner tag (commit b566388)
BaseSgmlLinkExtractor: Added unit test of a link with an inner tag (commit c1cb418)
BaseSgmlLinkExtractor: Fixed unknown_endtag() so that it only set current_link=None when the end tag match the opening tag (commit 7e4d627)
Fix tests for Travis-CI build (commit 76c7e20)
replace unencodeable codepoints with html entities. fixes #562 and #285 (commit 5f87b17)
RegexLinkExtractor: encode URL unicode value when creating Links (commit d0ee545)
Updated the tutorial crawl output with latest output. (commit 8da65de)
Updated shell docs with the crawler reference and fixed the actual shell output. (commit 875b9ab)
PEP8 minor edits. (commit f89efaf)
Expose current crawler in the Scrapy shell. (commit 5349cec)
Unused re import and PEP8 minor edits. (commit 387f414)
Ignore None’s values when using the ItemLoader. (commit 0632546)
DOC Fixed HTTPCACHE_STORAGE typo in the default value which is now Filesystem instead Dbm. (commit cde9a8c)
show Ubuntu setup instructions as literal code (commit fb5c9c5)
Update Ubuntu installation instructions (commit 70fb105)
Merge pull request #550 from stray-leone/patch-1 (commit 6f70b6a)
modify the version of Scrapy Ubuntu package (commit 725900d)
fix 0.22.0 release date (commit af0219a)
fix typos in news.rst and remove (not released yet) header (commit b7f58f4)
Scrapy 0.22.0 (released 2014-01-17)¶
Enhancements¶
[Backward incompatible] Switched HTTPCacheMiddleware backend to filesystem (issue 541). To restore the old backend, set HTTPCACHE_STORAGE to scrapy.contrib.httpcache.DbmCacheStorage; see the sketch after this list
Proxy https:// urls using CONNECT method (issue 392, issue 397)
Add a middleware to crawl ajax crawlable pages as defined by google (issue 343)
Rename scrapy.spider.BaseSpider to scrapy.spider.Spider (issue 510, issue 519)
Selectors register EXSLT namespaces by default (issue 472)
Unify item loaders similar to selectors renaming (issue 461)
Make RFPDupeFilter class easily subclassable (issue 533)
Improve test coverage and forthcoming Python 3 support (issue 525)
Promote startup info on settings and middleware to INFO level (issue 520)
Support partials in get_func_args util (issue 506, issue 504)
Allow running individual tests via tox (issue 503)
Update extensions ignored by link extractors (issue 498)
Add middleware methods to get files/images/thumbs paths (issue 490)
Improve offsite middleware tests (issue 478)
Add a way to skip default Referer header set by RefererMiddleware (issue 475)
Do not send x-gzip in default Accept-Encoding header (issue 469)
Support defining http error handling using settings (issue 466)
Use modern python idioms wherever legacy code was found (issue 497)
Improve and correct documentation (issue 527, issue 524, issue 521, issue 517, issue 512, issue 505, issue 502, issue 489, issue 465, issue 460, issue 425, issue 536)
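As referenced in the first entry of this list, a minimal settings.py sketch restoring the previous DBM cache backend:

# settings.py
HTTPCACHE_ENABLED = True
HTTPCACHE_STORAGE = 'scrapy.contrib.httpcache.DbmCacheStorage'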
Fixes¶
Update Selector class imports in CrawlSpider template (issue 484)
Fix nonexistent reference to engine.slots (issue 464)
Do not try to call body_as_unicode() on a non-TextResponse instance (issue 462)
Warn when subclassing XPathItemLoader; previously it only warned on instantiation (issue 523)
Warn when subclassing XPathSelector; previously it only warned on instantiation (issue 537)
Multiple fixes to memory stats (issue 531, issue 530, issue 529)
Fix overriding url in FormRequest.from_response() (issue 507)
Fix tests runner under pip 1.5 (issue 513)
Fix logging error when spider name is unicode (issue 479)
Scrapy 0.20.2 (released 2013-12-09)¶
Update CrawlSpider Template with Selector changes (commit 6d1457d)
fix method name in tutorial. closes GH-480 (commit b4fc359)
Scrapy 0.20.1 (released 2013-11-28)¶
include_package_data is required to build wheels from published sources (commit 5ba1ad5)
process_parallel was leaking the failures on its internal deferreds. closes #458 (commit 419a780)
Scrapy 0.20.0 (released 2013-11-08)¶
Enhancements¶
New Selector’s API including CSS selectors (issue 395 and issue 426)
Request/Response url/body attributes are now immutable (modifying them had been deprecated for a long time)
ITEM_PIPELINES is now defined as a dict (instead of a list); see the sketch at the end of this list
Sitemap spider can fetch alternate URLs (issue 360)
Selector.remove_namespaces() now removes namespaces from element’s attributes (issue 416)
Paved the road for Python 3.3+ (issue 435, issue 436, issue 431, issue 452)
New item exporter using native python types with nesting support (issue 366)
Tune HTTP1.1 pool size so it matches concurrency defined by settings (commit b43b5f575)
scrapy.mail.MailSender now can connect over TLS or upgrade using STARTTLS (issue 327)
New FilesPipeline with functionality factored out from ImagesPipeline (issue 370, issue 409)
Recommend Pillow instead of PIL for image handling (issue 317)
Added Debian packages for Ubuntu Quantal and Raring (commit 86230c0)
Mock server (used for tests) can listen for HTTPS requests (issue 410)
Remove multi spider support from multiple core components (issue 422, issue 421, issue 420, issue 419, issue 423, issue 418)
Travis-CI now tests Scrapy changes against development versions of w3lib and queuelib python packages.
Add pypy 2.1 to continuous integration tests (commit ecfa7431)
Pylinted, pep8 and removed old-style exceptions from source (issue 430, issue 432)
Use importlib for parametric imports (issue 445)
Handle a regression introduced in Python 2.7.5 that affects XmlItemExporter (issue 372)
Bugfix crawling shutdown on SIGINT (issue 450)
Do not submit reset type inputs in FormRequest.from_response (commit b326b87)
Do not silence download errors when request errback raises an exception (commit 684cfc0)
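For the ITEM_PIPELINES change above, a minimal sketch of the new dict form (the pipeline paths are hypothetical); the values are orders in the 0-1000 range that determine the processing sequence:

# settings.py
ITEM_PIPELINES = {
    'myproject.pipelines.ValidationPipeline': 300,
    'myproject.pipelines.StoragePipeline': 800,
}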
Bugfixes¶
Fix tests under Django 1.6 (commit b6bed44c)
Lots of bugfixes to the retry middleware under disconnections using the HTTP 1.1 download handler
Fix inconsistencies among Twisted releases (issue 406)
Fix invalid variable name in setup.py (issue 429)
Fix tutorial references (issue 387)
Improve request-response docs (issue 391)
Improve best practices docs (issue 399, issue 400, issue 401, issue 402)
Improve django integration docs (issue 404)
Document bindaddress request meta (commit 37c24e01d7)
Improve Request class documentation (issue 226)
Other¶
Dropped Python 2.6 support (issue 448)
Add cssselect python package as install dependency
Drop libxml2 and multiple selector backend support; lxml is required from now on.
Minimum Twisted version increased to 10.0.0, dropped Twisted 8.0 support.
Running the test suite now requires the mock python library (issue 390)
Thanks¶
Thanks to everyone who contributed to this release!
List of contributors sorted by number of commits:
69 Daniel Graña <dangra@...>
37 Pablo Hoffman <pablo@...>
13 Mikhail Korobov <kmike84@...>
9 Alex Cepoi <alex.cepoi@...>
9 alexanderlukanin13 <alexander.lukanin.13@...>
8 Rolando Espinoza La fuente <darkrho@...>
8 Lukasz Biedrycki <lukasz.biedrycki@...>
6 Nicolas Ramirez <nramirez.uy@...>
3 Paul Tremberth <paul.tremberth@...>
2 Martin Olveyra <molveyra@...>
2 Stefan <misc@...>
2 Rolando Espinoza <darkrho@...>
2 Loren Davie <loren@...>
2 irgmedeiros <irgmedeiros@...>
1 Stefan Koch <taikano@...>
1 Stefan <cct@...>
1 scraperdragon <dragon@...>
1 Kumara Tharmalingam <ktharmal@...>
1 Francesco Piccinno <stack.box@...>
1 Marcos Campal <duendex@...>
1 Dragon Dave <dragon@...>
1 Capi Etheriel <barraponto@...>
1 cacovsky <amarquesferraz@...>
1 Berend Iwema <berend@...>
Scrapy 0.18.4 (released 2013-10-10)¶
IPython refuses to update the namespace. fix #396 (commit 3d32c4f)
Fix AlreadyCalledError replacing a request in shell command. closes #407 (commit b1d8919)
Fix start_requests laziness and early hangs (commit 89faf52)
Scrapy 0.18.3 (released 2013-10-03)¶
fix regression on lazy evaluation of start requests (commit 12693a5)
forms: do not submit reset inputs (commit e429f63)
increase unittest timeouts to decrease travis false positive failures (commit 912202e)
backport master fixes to json exporter (commit cfc2d46)
Fix permission and set umask before generating sdist tarball (commit 06149e0)
Scrapy 0.18.2 (released 2013-09-03)¶
Backport scrapy check command fixes and backward compatible multi crawler process (issue 339)
Scrapy 0.18.1 (released 2013-08-27)¶
remove extra import added by cherry picked changes (commit d20304e)
fix crawling tests under twisted pre 11.0.0 (commit 1994f38)
py26 can not format zero length fields {} (commit abf756f)
test PotentialDataLoss errors on unbound responses (commit b15470d)
Treat responses without content-length or Transfer-Encoding as good responses (commit c4bf324)
do not include ResponseFailed if http11 handler is not enabled (commit 6cbe684)
New HTTP client wraps connection lost in ResponseFailed exception. fix #373 (commit 1a20bba)
limit travis-ci build matrix (commit 3b01bb8)
Merge pull request #375 from peterarenot/patch-1 (commit fa766d7)
Fixed so it refers to the correct folder (commit 3283809)
added Quantal & Raring to support Ubuntu releases (commit 1411923)
fix retry middleware which didn’t retry certain connection errors after the upgrade to http1 client, closes GH-373 (commit bb35ed0)
fix XmlItemExporter in Python 2.7.4 and 2.7.5 (commit de3e451)
minor updates to 0.18 release notes (commit c45e5f1)
fix contributors list format (commit 0b60031)
Scrapy 0.18.0 (released 2013-08-09)¶
Lots of improvements to the testsuite run using Tox, including a way to test on pypi
Handle GET parameters for AJAX crawlable urls (commit 3fe2a32)
Use lxml recover option to parse sitemaps (issue 347)
Bugfix cookie merging by hostname and not by netloc (issue 352)
Support disabling HttpCompressionMiddleware using a flag setting (issue 359)
Support xml namespaces using iternodes parser in XMLFeedSpider (issue 12)
Support dont_cache request meta flag (issue 19)
Bugfix scrapy.utils.gz.gunzip broken by changes in python 2.7.4 (commit 4dc76e)
Bugfix url encoding on SgmlLinkExtractor (issue 24)
Bugfix TakeFirst processor shouldn’t discard zero (0) value (issue 59)
Support nested items in xml exporter (issue 66)
Improve cookies handling performance (issue 77)
Log dupe filtered requests once (issue 105)
Split redirection middleware into status and meta based middlewares (issue 78)
Use HTTP1.1 as default downloader handler (issue 109 and issue 318)
Support xpath form selection on FormRequest.from_response (issue 185)
Bugfix unicode decoding error on SgmlLinkExtractor (issue 199)
Bugfix signal dispatching on pypy interpreter (issue 205)
Improve request delay and concurrency handling (issue 206)
Add RFC2616 cache policy to HttpCacheMiddleware (issue 212)
Allow customization of messages logged by engine (issue 214)
Multiple improvements to DjangoItem (issue 217, issue 218, issue 221)
Extend Scrapy commands using setuptools entry points (issue 260)
Allow spider allowed_domains value to be set/tuple (issue 261)
Support settings.getdict (issue 269)
Simplify internal scrapy.core.scraper slot handling (issue 271)
Added Item.copy (issue 290)
Collect idle downloader slots (issue 297)
Add ftp:// scheme downloader handler (issue 329)
Added downloader benchmark webserver and spider tools (see Benchmarking)
Moved persistent (on disk) queues to a separate project (queuelib) which Scrapy now depends on
Add Scrapy commands using external libraries (issue 260)
Added --pdb option to scrapy command line tool
Added XPathSelector.remove_namespaces, which allows removing all namespaces from XML documents for convenience (to work with namespace-less XPaths). Documented in Selectors.
Several improvements to spider contracts
New default middleware named MetaRefreshMiddleware that handles meta-refresh html tag redirections; MetaRefreshMiddleware and RedirectMiddleware have different priorities to address #62
added from_crawler method to spiders
added system tests with mock server
more improvements to macOS compatibility (thanks Alex Cepoi)
several more cleanups to singletons and multi-spider support (thanks Nicolas Ramirez)
support custom download slots
added --spider option to “shell” command.
log overridden settings when Scrapy starts
Thanks to everyone who contributed to this release. Here is a list of contributors sorted by number of commits:
130 Pablo Hoffman <pablo@...>
97 Daniel Graña <dangra@...>
20 Nicolás Ramírez <nramirez.uy@...>
13 Mikhail Korobov <kmike84@...>
12 Pedro Faustino <pedrobandim@...>
11 Steven Almeroth <sroth77@...>
5 Rolando Espinoza La fuente <darkrho@...>
4 Michal Danilak <mimino.coder@...>
4 Alex Cepoi <alex.cepoi@...>
4 Alexandr N Zamaraev (aka tonal) <tonal@...>
3 paul <paul.tremberth@...>
3 Martin Olveyra <molveyra@...>
3 Jordi Llonch <llonchj@...>
3 arijitchakraborty <myself.arijit@...>
2 Shane Evans <shane.evans@...>
2 joehillen <joehillen@...>
2 Hart <HartSimha@...>
2 Dan <ellisd23@...>
1 Zuhao Wan <wanzuhao@...>
1 whodatninja <blake@...>
1 vkrest <v.krestiannykov@...>
1 tpeng <pengtaoo@...>
1 Tom Mortimer-Jones <tom@...>
1 Rocio Aramberri <roschegel@...>
1 Pedro <pedro@...>
1 notsobad <wangxiaohugg@...>
1 Natan L <kuyanatan.nlao@...>
1 Mark Grey <mark.grey@...>
1 Luan <luanpab@...>
1 Libor Nenadál <libor.nenadal@...>
1 Juan M Uys <opyate@...>
1 Jonas Brunsgaard <jonas.brunsgaard@...>
1 Ilya Baryshev <baryshev@...>
1 Hasnain Lakhani <m.hasnain.lakhani@...>
1 Emanuel Schorsch <emschorsch@...>
1 Chris Tilden <chris.tilden@...>
1 Capi Etheriel <barraponto@...>
1 cacovsky <amarquesferraz@...>
1 Berend Iwema <berend@...>
Scrapy 0.16.5 (released 2013-05-30)¶
obey request method when Scrapy deploy is redirected to a new endpoint (commit 8c4fcee)
fix inaccurate downloader middleware documentation. refs #280 (commit 40667cb)
doc: remove links to diveintopython.org, which is no longer available. closes #246 (commit bd58bfa)
Find form nodes in invalid html5 documents (commit e3d6945)
Fix typo labeling attrs type bool instead of list (commit a274276)
Scrapy 0.16.4 (released 2013-01-23)¶
fixes spelling errors in documentation (commit 6d2b3aa)
add doc about disabling an extension. refs #132 (commit c90de33)
Fixed error message formatting. log.err() doesn’t support cool formatting and when error occurred, the message was: “ERROR: Error processing %(item)s” (commit c16150c)
lint and improve images pipeline error logging (commit 56b45fc)
fixed doc typos (commit 243be84)
add documentation topics: Broad Crawls & Common Practices (commit 1fbb715)
fix bug in Scrapy parse command when spider is not specified explicitly. closes #209 (commit c72e682)
Update docs/topics/commands.rst (commit 28eac7a)
Scrapy 0.16.3 (released 2012-12-07)¶
Remove concurrency limitation when using download delays and still ensure inter-request delays are enforced (commit 487b9b5)
add error details when image pipeline fails (commit 8232569)
improve macOS compatibility (commit 8dcf8aa)
setup.py: use README.rst to populate long_description (commit 7b5310d)
doc: removed obsolete references to ClientForm (commit 80f9bb6)
correct docs for default storage backend (commit 2aa491b)
doc: removed broken proxyhub link from FAQ (commit bdf61c4)
Fixed docs typo in SpiderOpenCloseLogging example (commit 7184094)
Scrapy 0.16.2 (released 2012-11-09)¶
Scrapy contracts: python2.6 compat (commit a4a9199)
Scrapy contracts verbose option (commit ec41673)
proper unittest-like output for Scrapy contracts (commit 86635e4)
added open_in_browser to debugging doc (commit c9b690d)
removed reference to global Scrapy stats from settings doc (commit dd55067)
Fix SpiderState bug in Windows platforms (commit 58998f4)
Scrapy 0.16.1 (released 2012-10-26)¶
fixed LogStats extension, which got broken after a wrong merge before the 0.16 release (commit 8c780fd)
better backward compatibility for scrapy.conf.settings (commit 3403089)
extended documentation on how to access crawler stats from extensions (commit c4da0b5)
removed .hgtags (no longer needed now that Scrapy uses git) (commit d52c188)
fix dashes under rst headers (commit fa4f7f9)
set release date for 0.16.0 in news (commit e292246)
Scrapy 0.16.0 (released 2012-10-18)¶
Scrapy changes:
added Spiders Contracts, a mechanism for testing spiders in a formal/reproducible way
added options -o and -t to the runspider command
documented AutoThrottle extension and added it to extensions installed by default. You still need to enable it with AUTOTHROTTLE_ENABLED
major Stats Collection refactoring: removed separation of global/per-spider stats, removed stats-related signals (stats_spider_opened, etc). Stats are much simpler now; backward compatibility is kept on the Stats Collector API and signals.
added process_start_requests() method to spider middlewares
dropped Signals singleton. Signals should now be accessed through the Crawler.signals attribute. See the signals documentation for more info.
dropped Stats Collector singleton. Stats can now be accessed through the Crawler.stats attribute. See the stats collection documentation for more info.
documented Core API
lxml is now the default selectors backend instead of libxml2
ported FormRequest.from_response() to use lxml instead of ClientForm
removed modules: scrapy.xlib.BeautifulSoup and scrapy.xlib.ClientForm
SitemapSpider: added support for sitemap urls ending in .xml and .xml.gz, even if they advertise a wrong content type (commit 10ed28b)
StackTraceDump extension: also dump trackref live references (commit fe2ce93)
nested items now fully supported in JSON and JSONLines exporters
added cookiejar Request meta key to support multiple cookie sessions per spider; see the sketch after this list
decoupled encoding detection code to w3lib.encoding, and ported Scrapy code to use that module
dropped support for Python 2.5. See https://blog.scrapinghub.com/2012/02/27/scrapy-0-15-dropping-support-for-python-2-5/
dropped support for Twisted 2.5
added REFERER_ENABLED setting, to control referer middleware
changed default user agent to: Scrapy/VERSION (+http://scrapy.org)
removed (undocumented) HTMLImageLinkExtractor class from scrapy.contrib.linkextractors.image
removed per-spider settings (to be replaced by instantiating multiple crawler objects)
USER_AGENT spider attribute will no longer work, use user_agent attribute instead
DOWNLOAD_TIMEOUT spider attribute will no longer work, use download_timeout attribute instead
removed ENCODING_ALIASES setting, as encoding auto-detection has been moved to the w3lib library
promoted DjangoItem to main contrib
LogFormatter methods now return dicts (instead of strings) to support lazy formatting (issue 164, commit dcef7b0)
downloader handlers (DOWNLOAD_HANDLERS setting) now receive settings as the first argument of the __init__ method
replaced memory usage accounting with (more portable) resource module, removed scrapy.utils.memory module
removed signal: scrapy.mail.mail_sent
removed TRACK_REFS setting, now trackrefs is always enabled
DBM is now the default storage backend for HTTP cache middleware
the number of log messages (per level) is now tracked through Scrapy stats (stat name: log_count/LEVEL)
the number of received responses is now tracked through Scrapy stats (stat name: response_received_count)
removed scrapy.log.started attribute
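A minimal sketch of the cookiejar Request meta key mentioned above, keeping one independent cookie session per account (the spider and URLs are illustrative):

import scrapy

class SessionsSpider(scrapy.Spider):
    name = 'sessions'

    def start_requests(self):
        for i in range(3):
            # each distinct 'cookiejar' value holds a separate cookie session
            yield scrapy.Request('http://example.com/login',
                                 meta={'cookiejar': i},
                                 dont_filter=True)

    def parse(self, response):
        # pass the same cookiejar along so follow-up requests reuse the session
        yield scrapy.Request('http://example.com/account',
                             meta={'cookiejar': response.meta['cookiejar']},
                             dont_filter=True,
                             callback=self.parse_account)

    def parse_account(self, response):
        self.logger.info('Session %s: %d bytes',
                         response.meta['cookiejar'], len(response.body))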
Scrapy 0.14.4¶
added precise to supported Ubuntu distros (commit b7e46df)
fixed bug in json-rpc webservice reported in https://groups.google.com/forum/#!topic/scrapy-users/qgVBmFybNAQ/discussion. also removed no longer supported ‘run’ command from extras/scrapy-ws.py (commit 340fbdb)
meta tag attributes for content-type http equiv can be in any order. #123 (commit 0cb68af)
replace “import Image” by more standard “from PIL import Image”. closes #88 (commit 4d17048)
return trial status as bin/runtests.sh exit value. #118 (commit b7b2e7f)
Scrapy 0.14.3¶
forgot to include pydispatch license. #118 (commit fd85f9c)
include egg files used by testsuite in source distribution. #118 (commit c897793)
update docstring in project template to avoid confusion with genspider command, which may be considered as an advanced feature. refs #107 (commit 2548dcc)
added note to docs/topics/firebug.rst about google directory being shut down (commit 668e352)
don’t discard slot when empty, just save in another dict in order to recycle if needed again. (commit 8e9f607)
do not fail handling unicode xpaths in libxml2 backed selectors (commit b830e95)
fixed minor mistake in Request objects documentation (commit bf3c9ee)
fixed minor defect in link extractors documentation (commit ba14f38)
removed some obsolete remaining code related to sqlite support in Scrapy (commit 0665175)
Scrapy 0.14.2¶
move buffer pointing to start of file before computing checksum. refs #92 (commit 6a5bef2)
Compute image checksum before persisting images. closes #92 (commit 9817df1)
remove leaking references in cached failures (commit 673a120)
fixed bug in MemoryUsage extension: get_engine_status() takes exactly 1 argument (0 given) (commit 11133e9)
fixed struct.error on http compression middleware. closes #87 (commit 1423140)
ajax crawling wasn’t expanding for unicode urls (commit 0de3fb4)
Catch start_requests iterator errors. refs #83 (commit 454a21d)
Speed-up libxml2 XPathSelector (commit 2fbd662)
updated versioning doc according to recent changes (commit 0a070f5)
scrapyd: fixed documentation link (commit 2b4e4c3)
extras/makedeb.py: no longer obtaining version from git (commit caffe0e)
Scrapy 0.14.1¶
extras/makedeb.py: no longer obtaining version from git (commit caffe0e)
bumped version to 0.14.1 (commit 6cb9e1c)
fixed reference to tutorial directory (commit 4b86bd6)
doc: removed duplicated callback argument from Request.replace() (commit 1aeccdd)
fixed formatting of scrapyd doc (commit 8bf19e6)
Dump stacks for all running threads and fix engine status dumped by StackTraceDump extension (commit 14a8e6e)
added comment about why we disable ssl on boto images upload (commit 5223575)
SSL handshaking hangs when doing too many parallel connections to S3 (commit 63d583d)
change tutorial to follow changes on dmoz site (commit bcb3198)
Avoid _disconnectedDeferred AttributeError exception in Twisted>=11.1.0 (commit 98f3f87)
allow spider to set autothrottle max concurrency (commit 175a4b5)
Scrapy 0.14¶
New features and settings¶
Support for AJAX crawlable urls
New persistent scheduler that stores requests on disk, allowing crawls to be suspended and resumed (r2737)
added -o option to scrapy crawl, a shortcut for dumping scraped items into a file (or standard output using -)
Added support for passing custom settings to Scrapyd schedule.json api (r2779, r2783)
New ChunkedTransferMiddleware (enabled by default) to support chunked transfer encoding (r2769)
Add boto 2.0 support for S3 downloader handler (r2763)
In request errbacks, offending requests are now received in failure.request attribute (r2738)
- Big downloader refactoring to support per domain/ip concurrency limits (r2732)
  CONCURRENT_REQUESTS_PER_SPIDER setting has been deprecated and replaced by CONCURRENT_REQUESTS, CONCURRENT_REQUESTS_PER_DOMAIN and CONCURRENT_REQUESTS_PER_IP
  check the documentation for more details
Added builtin caching DNS resolver (r2728)
Moved Amazon AWS-related components/extensions (SQS spider queue, SimpleDB stats collector) to a separate project: scaws (https://github.com/scrapinghub/scaws) (r2706, r2714)
Moved spider queues to scrapyd: scrapy.spiderqueue -> scrapyd.spiderqueue (r2708)
Moved sqlite utils to scrapyd: scrapy.utils.sqlite -> scrapyd.sqlite (r2781)
Real support for returning iterators on start_requests() method. The iterator is now consumed during the crawl when the spider is getting idle (r2704)
Added REDIRECT_ENABLED setting to quickly enable/disable the redirect middleware (r2697)
Added RETRY_ENABLED setting to quickly enable/disable the retry middleware (r2694)
Added CloseSpider exception to manually close spiders (r2691) (see the sketch after this list)
Improved encoding detection by adding support for HTML5 meta charset declaration (r2690)
Refactored close spider behavior to wait for all downloads to finish and be processed by spiders, before closing the spider (r2688)
Added SitemapSpider (see documentation in Spiders page) (r2658)
Added LogStats extension for periodically logging basic stats (like crawled pages and scraped items) (r2657)
Make handling of gzipped responses more robust (#319, r2643). Now Scrapy will try and decompress as much as possible from a gzipped response, instead of failing with an IOError.
Simplified MemoryDebugger extension to use stats for dumping memory debugging info (r2639)
Added new command to edit spiders: scrapy edit (r2636) and -e flag to genspider command that uses it (r2653)
Changed default representation of items to pretty-printed dicts (r2631). This improves default logging by making logs more readable in the default case, for both Scraped and Dropped lines.
Added spider_error signal (r2628)
Added COOKIES_ENABLED setting (r2625)
Stats are now dumped to Scrapy log (default value of STATS_DUMP setting has been changed to True). This is to make Scrapy users more aware of Scrapy stats and the data that is collected there.
Added support for dynamically adjusting download delay and maximum concurrent requests (r2599)
Added new DBM HTTP cache storage backend (r2576)
Added listjobs.json API to Scrapyd (r2571)
CsvItemExporter: added join_multivalued parameter (r2578)
Added namespace support to xmliter_lxml (r2552)
Improved cookies middleware by making COOKIES_DEBUG nicer and documenting it (r2579)
Several improvements to Scrapyd and Link extractors
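The CloseSpider exception mentioned above can be raised from a callback to stop a crawl early; a minimal sketch, using modern import paths (spider name and the closing condition are illustrative)::

    from scrapy.exceptions import CloseSpider
    from scrapy.spiders import Spider

    class BudgetSpider(Spider):
        name = "budget"  # illustrative
        start_urls = ["http://example.com"]

        def parse(self, response):
            if b"maintenance" in response.body:
                # shuts the spider down gracefully; the reason string is
                # reported in the log and in the finish stats
                raise CloseSpider("site under maintenance")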
Code rearranged and removed¶
- Merged item passed and item scraped concepts, as they have often proved confusing in the past. This means: (r2630)
original item_scraped signal was removed
original item_passed signal was renamed to item_scraped
old log lines Scraped Item... were removed
old log lines Passed Item... were renamed to Scraped Item... lines and downgraded to DEBUG level
Removed unused function: scrapy.utils.request.request_info() (r2577)
Removed googledir project from examples/googledir. There's now a new example project called dirbot available on GitHub: https://github.com/scrapy/dirbot
Removed support for default field values in Scrapy items (r2616)
Removed experimental crawlspider v2 (r2632)
Removed scheduler middleware to simplify architecture. Duplicates filter is now done in the scheduler itself, using the same dupe filtering class as before (DUPEFILTER_CLASS setting) (r2640)
Removed support for passing urls to scrapy crawl command (use scrapy parse instead) (r2704)
Removed deprecated Execution Queue (r2704)
Removed (undocumented) spider context extension (from scrapy.contrib.spidercontext) (r2780)
removed CONCURRENT_SPIDERS setting (use scrapyd maxproc instead) (r2789)
Renamed attributes of core components: downloader.sites -> downloader.slots, scraper.sites -> scraper.slots (r2717, r2718)
Renamed setting CLOSESPIDER_ITEMPASSED to CLOSESPIDER_ITEMCOUNT (r2655). Backward compatibility kept.
Scrapy 0.12¶
The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.
New features and improvements¶
Passed item is now sent in the item argument of the item_passed signal (#273)
Added verbose option to scrapy version command, useful for bug reports (#298)
HTTP cache now stored by default in the project data dir (#279)
Added project data storage directory (#276, #277)
Documented file structure of Scrapy projects (see command-line tool doc)
New lxml backend for XPath selectors (#147)
Per-spider settings (#245)
Support exit codes to signal errors in Scrapy commands (#248)
Added -c argument to scrapy shell command
Made libxml2 optional (#260)
New deploy command (#261)
Added CLOSESPIDER_PAGECOUNT setting (#253)
Added CLOSESPIDER_ERRORCOUNT setting (#254)
Scrapyd changes¶
Scrapyd now uses one process per spider
It stores one log file per spider run, and rotates them, keeping the latest 5 logs per spider (by default)
A minimal web ui was added, available at http://localhost:6800 by default
There is now a scrapy server command to start a Scrapyd server for the current project
Changes to settings¶
added HTTPCACHE_ENABLED setting (False by default) to enable HTTP cache middleware (see the sketch below)
changed HTTPCACHE_EXPIRATION_SECS semantics: now zero means "never expire".
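A minimal settings sketch combining the two settings above (values are illustrative)::

    # settings.py
    HTTPCACHE_ENABLED = True       # off by default; enables the HTTP cache middleware
    HTTPCACHE_EXPIRATION_SECS = 0  # zero now means "never expire"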
Deprecated/obsoleted functionality¶
Deprecated runserver command in favor of server command which starts a Scrapyd server. See also: Scrapyd changes
Deprecated queue command in favor of using Scrapyd schedule.json API. See also: Scrapyd changes
Removed the LxmlItemLoader (experimental contrib which never graduated to main contrib)
Scrapy 0.10¶
The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.
New features and improvements¶
New Scrapy service called scrapyd for deploying Scrapy crawlers in production (#218) (documentation available)
Simplified Images pipeline usage which no longer requires subclassing your own images pipeline (#217)
Scrapy shell now shows the Scrapy log by default (#206)
Refactored execution queue in a common base code and pluggable backends called “spider queues” (#220)
New persistent spider queue (based on SQLite) (#198), available by default, which allows starting Scrapy in server mode and then scheduling spiders to run.
Added documentation for Scrapy command-line tool and all its available sub-commands. (documentation available)
Feed exporters with pluggable backends (#197) (documentation available)
Deferred signals (#193)
Added two new methods to item pipelines, open_spider() and close_spider(), with deferred support (#195)
Support for overriding default request headers per spider (#181)
Replaced default Spider Manager with one with similar functionality but not depending on Twisted Plugins (#186)
Split Debian package into two packages - the library and the service (#187)
Scrapy log refactoring (#188)
New extension for keeping persistent spider contexts among different runs (#203)
Added dont_redirect request.meta key for avoiding redirects (#233)
Added dont_retry request.meta key for avoiding retries (#234) (see the sketch below)
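Both meta keys are per-request opt-outs honored by the redirect and retry middlewares; a minimal sketch, using modern import paths (spider name and URL are illustrative)::

    from scrapy import Request
    from scrapy.spiders import Spider

    class OneShotSpider(Spider):
        name = "oneshot"  # illustrative

        def start_requests(self):
            # this request follows no redirects and is never retried
            yield Request(
                "http://example.com/ping",
                meta={"dont_redirect": True, "dont_retry": True},
            )

        def parse(self, response):
            self.logger.info("got %s for %s", response.status, response.url)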
Command-line tool changes¶
New scrapy command which replaces the old scrapy-ctl.py (#199)
  there is only one global scrapy command now, instead of one scrapy-ctl.py per project
  Added scrapy.bat script for running more conveniently from Windows
Added bash completion to command-line tool (#210)
Renamed command start to runserver (#209)
API changes¶
url and body attributes of Request objects are now read-only (#230)
Request.copy() and Request.replace() now also copy their callback and errback attributes (#231)
Removed UrlFilterMiddleware from scrapy.contrib (already disabled by default)
Offsite middleware doesn't filter out any request coming from a spider that doesn't have an allowed_domains attribute (#225)
Removed Spider Manager load() method. Now spiders are loaded in the __init__ method itself.
- Changes to Scrapy Manager (now called "Crawler"):
  scrapy.core.manager.ScrapyManager class renamed to scrapy.crawler.Crawler
  scrapy.core.manager.scrapymanager singleton moved to scrapy.project.crawler
Moved module: scrapy.contrib.spidermanager to scrapy.spidermanager
Spider Manager singleton moved from scrapy.spider.spiders to the spiders attribute of the scrapy.project.crawler singleton.
- moved Stats Collector classes: (#204)
  scrapy.stats.collector.StatsCollector to scrapy.statscol.StatsCollector
  scrapy.stats.collector.SimpledbStatsCollector to scrapy.contrib.statscol.SimpledbStatsCollector
default per-command settings are now specified in the default_settings attribute of command object class (#201)
- changed arguments of Item pipeline process_item() method from (spider, item) to (item, spider); backward compatibility kept (with deprecation warning) - see the sketch after this list
- moved scrapy.core.signals module to scrapy.signals; backward compatibility kept (with deprecation warning)
- moved scrapy.core.exceptions module to scrapy.exceptions; backward compatibility kept (with deprecation warning)
added handles_request() class method to BaseSpider
dropped scrapy.log.exc() function (use scrapy.log.err() instead)
dropped component argument of scrapy.log.msg() function
dropped scrapy.log.log_level attribute
Added from_settings() class methods to Spider Manager, and Item Pipeline Manager
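For reference, the post-change pipeline signature puts the item first; a minimal sketch (pipeline name and fields are illustrative)::

    class PricePipeline:
        # new argument order: process_item(item, spider); the old
        # (spider, item) order kept working with a deprecation warning
        def process_item(self, item, spider):
            item["price_eur"] = round(item["price_cents"] / 100.0, 2)
            return item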
Changes to settings¶
Added HTTPCACHE_IGNORE_SCHEMES setting to ignore certain schemes on HttpCacheMiddleware (#225)
Added SPIDER_QUEUE_CLASS setting which defines the spider queue to use (#220)
Added KEEP_ALIVE setting (#220)
Removed SERVICE_QUEUE setting (#220)
Removed COMMANDS_SETTINGS_MODULE setting (#201)
Renamed REQUEST_HANDLERS to DOWNLOAD_HANDLERS and made download handlers classes (instead of functions)
Scrapy 0.9¶
The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.
New features and improvements¶
Added SMTP-AUTH support to scrapy.mail
New settings added: MAIL_USER, MAIL_PASS (r2065 | #149) (see the sketch after this list)
Added new scrapy-ctl view command, to view a URL in the browser as seen by Scrapy (r2039)
Added web service for controlling Scrapy process (this also deprecates the web console) (r2053 | #167)
Support for running Scrapy as a service, for production systems (r1988, r2054, r2055, r2056, r2057 | #168)
Added wrapper induction library (documentation only available in source code for now). (r2011)
Simplified and improved response encoding support (r1961, r1969)
Added LOG_ENCODING setting (r1956, documentation available)
Added RANDOMIZE_DOWNLOAD_DELAY setting (enabled by default) (r1923, doc available)
MailSender is no longer IO-blocking (r1955 | #146)
Linkextractors and new Crawlspider now handle relative base tag urls (r1960 | #148)
Several improvements to Item Loaders and processors (r2022, r2023, r2024, r2025, r2026, r2027, r2028, r2029, r2030)
Added support for adding variables to telnet console (r2047 | #165)
Support for requests without callbacks (r2050 | #166)
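A minimal settings sketch for SMTP-AUTH with scrapy.mail; all values are illustrative, and MAIL_HOST is shown only for context::

    # settings.py
    MAIL_HOST = "mail.example.com"  # SMTP server (shown for context)
    MAIL_USER = "reports"           # new in this release: SMTP-AUTH user
    MAIL_PASS = "s3cret"            # new in this release: SMTP-AUTH password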
API changes¶
Change Spider.domain_name to Spider.name (SEP-012, r1975)
Response.encoding is now the detected encoding (r1961)
HttpErrorMiddleware now returns None or raises an exception (r2006 | #157)
Added ExecutionQueue for feeding spiders to scrape (r2034)
Removed ExecutionEngine singleton (r2039)
Ported S3ImagesStore (images pipeline) to use boto and threads (r2033)
Moved module: scrapy.management.telnet to scrapy.telnet (r2047)
Changes to default settings¶
Changed default SCHEDULER_ORDER to DFO (r1939)
Scrapy 0.8¶
The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.
New features¶
Added DEFAULT_RESPONSE_ENCODING setting (r1809)
Added dont_click argument to FormRequest.from_response() method (r1813, r1816)
Added clickdata argument to FormRequest.from_response() method (r1802, r1803) (see the sketch after this list)
Added support for HTTP proxies (HttpProxyMiddleware) (r1781, r1785)
Offsite spider middleware now logs messages when filtering out requests (r1841)
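Both FormRequest.from_response() arguments appear in this minimal sketch, using modern import paths (form fields and control name are illustrative)::

    from scrapy import FormRequest
    from scrapy.spiders import Spider

    class LoginSpider(Spider):
        name = "login"  # illustrative
        start_urls = ["http://example.com/login"]

        def parse(self, response):
            # clickdata picks which submit control to simulate;
            # dont_click=True would submit without clicking any control
            yield FormRequest.from_response(
                response,
                formdata={"user": "jane", "pass": "s3cret"},
                clickdata={"name": "login_button"},
                callback=self.after_login,
            )

        def after_login(self, response):
            self.logger.info("logged in: %s", response.status)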
Backward-incompatible changes¶
Changed scrapy.utils.response.get_meta_refresh() signature (r1804)
Removed deprecated scrapy.item.ScrapedItem class - use scrapy.item.Item instead (r1838)
Removed deprecated scrapy.xpath module - use scrapy.selector instead (r1836)
Removed deprecated core.signals.domain_open signal - use core.signals.domain_opened instead (r1822)
log.msg() now receives a spider argument (r1822). The old domain argument has been deprecated and will be removed in 0.9. For spiders, you should always use the spider argument and pass spider references. If you really want to pass a string, use the component argument instead.
Changed core signals domain_opened, domain_closed, domain_idle
- Changed Item pipeline to use spiders instead of domains
The domain argument of process_item() item pipeline method was changed to spider, the new signature is: process_item(spider, item) (r1827 | #105)
To quickly port your code (to work with Scrapy 0.8) just use spider.domain_name where you previously used domain.
- Changed Stats API to use spiders instead of domains (r1849 | #113)
StatsCollector was changed to receive spider references (instead of domains) in its methods (set_value, inc_value, etc).
added StatsCollector.iter_spider_stats() method
removed StatsCollector.list_domains() method
Also, Stats signals were renamed and now pass around spider references (instead of domains). Here's a summary of the changes:
To quickly port your code (to work with Scrapy 0.8) just use spider.domain_name where you previously used domain. spider_stats contains exactly the same data as domain_stats.
CloseDomain extension moved to scrapy.contrib.closespider.CloseSpider (r1833)
- Its settings were also renamed:
  CLOSEDOMAIN_TIMEOUT to CLOSESPIDER_TIMEOUT
  CLOSEDOMAIN_ITEMCOUNT to CLOSESPIDER_ITEMCOUNT
Removed deprecated SCRAPYSETTINGS_MODULE environment variable - use SCRAPY_SETTINGS_MODULE instead (r1840)
Renamed setting: REQUESTS_PER_DOMAIN to CONCURRENT_REQUESTS_PER_SPIDER (r1830, r1844)
Renamed setting: CONCURRENT_DOMAINS to CONCURRENT_SPIDERS (r1830)
Refactored HTTP Cache middleware
HTTP Cache middleware has been heavily refactored, retaining the same functionality except for the domain sectorization which was removed. (r1843)
Renamed exception: DontCloseDomain to DontCloseSpider (r1859 | #120)
Renamed extension: DelayedCloseDomain to SpiderCloseDelay (r1861 | #121)
Removed obsolete scrapy.utils.markup.remove_escape_chars function - use scrapy.utils.markup.replace_escape_chars instead (r1865)
Scrapy 0.7¶
First release of Scrapy.