Frequently Asked Questions¶
How does Scrapy compare to BeautifulSoup or lxml?¶
BeautifulSoup and lxml are libraries for parsing HTML and XML. Scrapy is an application framework for writing web spiders that crawl web sites and extract data from them.
Scrapy provides a built-in mechanism for extracting data (called selectors) but you can easily use BeautifulSoup (or lxml) instead, if you feel more comfortable working with them. After all, they’re just parsing libraries which can be imported and used from any Python code.
In other words, comparing BeautifulSoup (or lxml) to Scrapy is like comparing jinja2 to Django.
Can I use Scrapy with BeautifulSoup?¶
Yes, you can.
As mentioned above, BeautifulSoup can be used for parsing HTML responses in Scrapy callbacks. You just have to feed the response’s body into a BeautifulSoup object and extract whatever data you need from it.
Here’s an example spider using the BeautifulSoup API, with lxml as the HTML parser:
from bs4 import BeautifulSoup
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = ("http://www.example.com/",)

    def parse(self, response):
        # use lxml to get decent HTML parsing speed
        soup = BeautifulSoup(response.text, "lxml")
        yield {"url": response.url, "title": soup.h1.string}
Note
BeautifulSoup supports several HTML/XML parsers. See BeautifulSoup’s official documentation on which ones are available.
Did Scrapy “steal” X from Django?¶
Probably, but we don’t like that word. We think Django is a great open source project and an example to follow, so we’ve used it as an inspiration for Scrapy.
We believe that, if something is already done well, there’s no need to reinvent it. This concept, besides being one of the foundations for open source and free software, not only applies to software but also to documentation, procedures, policies, etc. So, instead of going through each problem ourselves, we choose to copy ideas from those projects that have already solved them properly, and focus on the real problems we need to solve.
We’d be proud if Scrapy serves as an inspiration for other projects. Feel free to steal from us!
Does Scrapy work with HTTP proxies?¶
Yes. Support for HTTP proxies is provided (since Scrapy 0.8) through the HTTP Proxy downloader middleware. See HttpProxyMiddleware.
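As a quick illustration, HttpProxyMiddleware picks the proxy up from the request’s proxy meta key (it also honours the standard *_proxy environment variables). A minimal sketch, with a placeholder proxy address:
import scrapy


class ProxyExampleSpider(scrapy.Spider):
    name = "proxy_example"

    def start_requests(self):
        # Route this request through a specific proxy (placeholder address);
        # HttpProxyMiddleware reads it from request.meta["proxy"].
        yield scrapy.Request(
            "http://www.example.com/",
            meta={"proxy": "http://proxy.example.com:8080"},
        )

    def parse(self, response):
        yield {"url": response.url, "status": response.status}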
How can I scrape an item with attributes in different pages?¶
How can I simulate a user login in my spider?¶
See Using FormRequest.from_response() to simulate a user login.
Does Scrapy crawl in breadth-first or depth-first order?¶
By default, Scrapy uses a LIFO queue for storing pending requests, which basically means that it crawls in DFO (depth-first) order. This order is more convenient in most cases.
If you do want to crawl in true BFO (breadth-first) order, you can do it by setting the following settings:
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = "scrapy.squeues.PickleFifoDiskQueue"
SCHEDULER_MEMORY_QUEUE = "scrapy.squeues.FifoMemoryQueue"
While pending requests are below the configured values of CONCURRENT_REQUESTS, CONCURRENT_REQUESTS_PER_DOMAIN or CONCURRENT_REQUESTS_PER_IP, those requests are sent concurrently. As a result, the first few requests of a crawl rarely follow the desired order. Lowering those settings to 1 enforces the desired order, but it significantly slows down the crawl as a whole.
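For example, a settings.py sketch for strict breadth-first order, assuming you accept the much slower, fully sequential crawl that a concurrency of 1 implies:
# settings.py: breadth-first crawling, with concurrency lowered so requests
# are actually sent in that order (this slows down the crawl considerably)
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = "scrapy.squeues.PickleFifoDiskQueue"
SCHEDULER_MEMORY_QUEUE = "scrapy.squeues.FifoMemoryQueue"
CONCURRENT_REQUESTS = 1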
My Scrapy crawler has memory leaks. What can I do?¶
See Debugging memory leaks. Also, Python has a built-in memory leak issue, which is described in Leaks without leaks.
How can I make Scrapy consume less memory?¶
See previous question.
How can I prevent memory errors due to many allowed domains?¶
If you have a spider with a long list of allowed_domains (e.g. 50,000+), consider replacing the default OffsiteMiddleware downloader middleware with a custom downloader middleware that requires less memory. For example, as sketched after the note below:
- If your domain names are similar enough, use your own regular expression instead of joining the strings in allowed_domains into a complex regular expression.
- If you can meet the installation requirements, use pyre2 instead of Python’s re to compile your URL-filtering regular expression. See issue 1908.
See also other suggestions at StackOverflow.
Note
Remember to disable scrapy.downloadermiddlewares.offsite.OffsiteMiddleware when you enable your custom implementation:
DOWNLOADER_MIDDLEWARES = {
    "scrapy.downloadermiddlewares.offsite.OffsiteMiddleware": None,
    "myproject.middlewares.CustomOffsiteMiddleware": 50,
}
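A minimal sketch of what such a custom middleware could look like, assuming your allowed hosts all match a single pattern; the module path, class name and regular expression below are hypothetical and only meant to match the configuration above:
# myproject/middlewares.py (hypothetical module, for illustration only)
import re
from urllib.parse import urlparse

from scrapy.exceptions import IgnoreRequest


class CustomOffsiteMiddleware:
    # One compact regular expression instead of 50,000+ joined domain names,
    # e.g. hosts such as shop-123.example.com
    ALLOWED_HOST_RE = re.compile(r"(^|\.)shop-\d+\.example\.com$")

    def process_request(self, request, spider):
        hostname = urlparse(request.url).hostname or ""
        if not self.ALLOWED_HOST_RE.search(hostname):
            # Dropping the request here keeps it from ever being downloaded
            raise IgnoreRequest(f"Filtered offsite request to {hostname!r}")
        return None  # let the request continue through the middleware chain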
Can I use Basic HTTP Authentication in my spiders?¶
Yes, see HttpAuthMiddleware.
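For reference, HttpAuthMiddleware reads the credentials from spider attributes. A small sketch, with placeholder credentials and domain:
import scrapy


class IntranetSpider(scrapy.Spider):
    name = "intranet"
    # Credentials used by HttpAuthMiddleware (placeholders)
    http_user = "someuser"
    http_pass = "somepass"
    http_auth_domain = "intranet.example.com"  # limit auth to this domain
    start_urls = ["http://intranet.example.com/"]

    def parse(self, response):
        yield {"url": response.url}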
Why does Scrapy download pages in English instead of my native language?¶
Try changing the default Accept-Language request header by overriding the DEFAULT_REQUEST_HEADERS setting.
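For instance, a settings.py sketch that asks for Brazilian Portuguese content (the Accept line mirrors Scrapy’s default; the language value is just an example):
# settings.py: override the default Accept-Language header
DEFAULT_REQUEST_HEADERS = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "pt-BR,pt;q=0.9",
}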
Where can I find some example Scrapy projects?¶
See Examples.
Can I run a spider without creating a project?¶
Yes. You can use the runspider command. For example, if you have a spider written in a my_spider.py file you can run it with:
scrapy runspider my_spider.py
See the runspider command for more info.
I get “Filtered offsite request” messages. How can I fix them?¶
Those messages (logged with DEBUG level) don’t necessarily mean there is a problem, so you may not need to fix them.
Those messages are thrown by OffsiteMiddleware, which is a downloader middleware (enabled by default) whose purpose is to filter out requests to domains outside the ones covered by the spider.
What is the recommended way to deploy a Scrapy crawler in production?¶
See Deploying Spiders.
Can I use JSON for large exports?¶
It’ll depend on how large your output is. See this warning in the JsonItemExporter documentation.
Can I return (Twisted) deferreds from signal handlers?¶
Some signals support returning deferreds from their handlers, others don’t. See the Built-in signals reference to know which ones.
What does the response status code 999 mean?¶
999 is a custom response status code used by Yahoo sites to throttle requests. Try slowing down the crawling speed by using a download delay of 2 (or higher) in your spider:
from scrapy.spiders import CrawlSpider


class MySpider(CrawlSpider):
    name = "myspider"
    download_delay = 2

    # [ ... rest of the spider code ... ]
Or by setting a global download delay in your project with the DOWNLOAD_DELAY setting.
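For example, in your project’s settings.py:
# settings.py: wait at least 2 seconds between requests to the same website
DOWNLOAD_DELAY = 2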
Can I call pdb.set_trace() from my spiders to debug them?¶
Yes, but you can also use the Scrapy shell, which allows you to quickly analyze (and even modify) the response being processed by your spider, which is, quite often, more useful than plain old pdb.set_trace().
For more info see Invoking the shell from spiders to inspect responses.
Simplest way to dump all my scraped items into a JSON/CSV/XML file?¶
To dump into a JSON file:
scrapy crawl myspider -O items.json
To dump into a CSV file:
scrapy crawl myspider -O items.csv
To dump into an XML file:
scrapy crawl myspider -O items.xml
For more information see Feed exports.
What’s this huge cryptic __VIEWSTATE parameter used in some forms?¶
The __VIEWSTATE parameter is used in sites built with ASP.NET/VB.NET. For more info on how it works see this page. Also, here’s an example spider which scrapes one of these sites.
What’s the best way to parse big XML/CSV data feeds?¶
Parsing big feeds with XPath selectors can be problematic since they need to build the DOM of the entire feed in memory, and this can be quite slow and consume a lot of memory.
In order to avoid parsing the entire feed at once in memory, you can use the xmliter_lxml() and csviter() functions. In fact, this is what XMLFeedSpider uses.
- scrapy.utils.iterators.xmliter_lxml(obj: Union[Response, str, bytes], nodename: str, namespace: Optional[str] = None, prefix: str = 'x') → Generator[Selector, Any, None]
- scrapy.utils.iterators.csviter(obj: Union[Response, str, bytes], delimiter: Optional[str] = None, headers: Optional[List[str]] = None, encoding: Optional[str] = None, quotechar: Optional[str] = None) → Generator[Dict[str, str], Any, None]
  Returns an iterator of dictionaries from the given CSV object.
  obj can be:
  - a Response object
  - a unicode string
  - a string encoded as utf-8
  delimiter is the character used to separate fields on the given obj.
  headers is an iterable that, when provided, offers the keys for the returned dictionaries; if not, the first row is used.
  quotechar is the character used to enclose fields on the given obj.
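As a hedged usage sketch, here is csviter() applied to a tiny in-memory CSV feed; in a real spider you would typically pass the Response object instead of raw bytes:
from scrapy.utils.iterators import csviter

# Tiny in-memory feed standing in for a downloaded Response body
feed = b"id,name\n1,foo\n2,bar\n"

for row in csviter(feed, delimiter=",", encoding="utf-8"):
    # Each row is a dict keyed by the header row, e.g. {"id": "1", "name": "foo"}
    print(row)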
How can I instruct a spider to stop itself?¶
Raise the CloseSpider exception from a callback. For more info see: CloseSpider.
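A minimal sketch, assuming (hypothetically) that you want to stop the crawl as soon as the site starts answering with HTTP 403:
import scrapy
from scrapy.exceptions import CloseSpider


class StopOnBanSpider(scrapy.Spider):
    name = "stop_on_ban"
    start_urls = ["http://www.example.com/"]
    # Let 403 responses reach the callback instead of being filtered out
    handle_httpstatus_list = [403]

    def parse(self, response):
        if response.status == 403:
            # The reason string shows up as the spider's finish reason
            raise CloseSpider("received_403_response")
        yield {"url": response.url}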
How can I prevent my Scrapy bot from getting banned?¶
Should I use spider arguments or settings to configure my spider?¶
Both spider arguments and settings can be used to configure your spider. There is no strict rule that mandates using one or the other, but settings are better suited for parameters that, once set, don’t change much, while spider arguments are meant to change more often, even on each spider run, and are sometimes required for the spider to run at all (for example, to set the start URL of a spider).
To illustrate with an example, suppose you have a spider that needs to log into a site to scrape data, and you only want to scrape data from a certain section of the site (which varies each time). In that case, the credentials to log in would be settings, while the URL of the section to scrape would be a spider argument.
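A hedged sketch of that split, with hypothetical names: the section URL arrives as a spider argument (e.g. scrapy crawl section -a section_url=http://www.example.com/news), while the credentials live in the project settings:
import scrapy


class SectionSpider(scrapy.Spider):
    name = "section"

    def __init__(self, section_url=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Spider argument: changes on every run
        self.start_urls = [section_url] if section_url else []

    def parse(self, response):
        # Settings: stable configuration such as credentials (hypothetical names)
        username = self.settings.get("SITE_USERNAME")
        password = self.settings.get("SITE_PASSWORD")
        yield {"url": response.url, "user": username}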
I’m scraping an XML document and my XPath selector doesn’t return any items¶
You may need to remove namespaces. See Removing namespaces.
How to split an item into multiple items in an item pipeline?¶
Item pipelines cannot yield multiple items per input item. Create a spider middleware instead, and use its process_spider_output() method for this purpose. For example:
from copy import deepcopy

from itemadapter import is_item, ItemAdapter


class MultiplyItemsMiddleware:
    def process_spider_output(self, response, result, spider):
        for item in result:
            if is_item(item):
                adapter = ItemAdapter(item)
                # assumes each item carries a "multiply_by" field
                for _ in range(adapter["multiply_by"]):
                    yield deepcopy(item)
Does Scrapy support IPv6 addresses?¶
Yes, by setting DNS_RESOLVER to scrapy.resolver.CachingHostnameResolver. Note that by doing so, you lose the ability to set a specific timeout for DNS requests (the value of the DNS_TIMEOUT setting is ignored).
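In settings.py that is a single line, with the DNS_TIMEOUT caveat noted above:
# settings.py: enable IPv6-capable DNS resolution; DNS_TIMEOUT is ignored
DNS_RESOLVER = "scrapy.resolver.CachingHostnameResolver"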
How to deal with <class 'ValueError'>: filedescriptor out of range in select() exceptions?¶
This issue has been reported to appear when running broad crawls on macOS, where the default Twisted reactor is twisted.internet.selectreactor.SelectReactor. Switching to a different reactor is possible by using the TWISTED_REACTOR setting.
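For example, the asyncio-based reactor is one commonly used alternative to SelectReactor:
# settings.py: switch away from the select()-based default reactor
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"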
How can I cancel the download of a given response?¶
In some situations, it might be useful to stop the download of a certain response. For instance, sometimes you can determine whether or not you need the full contents of a response by inspecting its headers or the first bytes of its body. In that case, you could save resources by attaching a handler to the bytes_received or headers_received signals and raising a StopDownload exception. Please refer to the Stopping the download of a Response topic for additional information and examples.
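A minimal sketch of that pattern, connecting a bytes_received handler that stops each download after the first chunk; fail=False lets the partial response still reach the callback:
import scrapy
from scrapy.exceptions import StopDownload


class PreviewSpider(scrapy.Spider):
    name = "preview"
    start_urls = ["http://www.example.com/"]

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(
            spider.on_bytes_received, signal=scrapy.signals.bytes_received
        )
        return spider

    def on_bytes_received(self, data, request, spider):
        # Stop after the first received chunk; the truncated response is
        # still passed to the callback because fail=False
        raise StopDownload(fail=False)

    def parse(self, response):
        yield {"url": response.url, "partial_length": len(response.body)}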
How can I make a blank request?¶
from scrapy import Request
blank_request = Request("data:,")
In this case, the URL is set to a data URI scheme. Data URLs allow you to include data in-line in web pages as if they were external resources. The “data:” scheme with an empty content (“,”) essentially creates a request to a data URL without any specific content.
Running runspider I get error: No spider found in file: <filename>¶
This may happen if your Scrapy project has a spider module with a name that conflicts with the name of one of the Python standard library modules, such as csv.py or os.py, or any Python package that you have installed. See issue 2680.