Frequently Asked Questions¶
How does Scrapy compare to BeautifulSoup or lxml?¶
Scrapy provides a built-in mechanism for extracting data (called selectors) but you can easily use BeautifulSoup (or lxml) instead, if you feel more comfortable working with them. After all, they’re just parsing libraries which can be imported and used from any Python code.
Does Scrapy work with Python 3.0?¶
No, and there are no plans to port Scrapy to Python 3.0 yet. At the moment, Scrapy works with Python 2.5, 2.6 and 2.7.
Did Scrapy “steal” X from Django?¶
Probably, but we don’t like that word. We think Django is a great open source project and an example to follow, so we’ve used it as an inspiration for Scrapy.
We believe that, if something is already done well, there’s no need to reinvent it. This concept, besides being one of the foundations for open source and free software, not only applies to software but also to documentation, procedures, policies, etc. So, instead of going through each problem ourselves, we choose to copy ideas from those projects that have already solved them properly, and focus on the real problems we need to solve.
We’d be proud if Scrapy serves as an inspiration for other projects. Feel free to steal from us!
Does Scrapy work with HTTP proxies?¶
Yes. Support for HTTP proxies is provided (since Scrapy 0.8) through the HTTP Proxy downloader middleware.
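As a sketch of how the middleware is typically used (not shown on this page): it honors the standard proxy environment variables, and a proxy can also be set per request through the request metadata. The localhost:8080 address below is a hypothetical proxy, not a real endpoint.

```python
import os

# The HTTP Proxy downloader middleware picks up the standard proxy
# environment variables (http_proxy, https_proxy) when the crawler starts.
# 'http://localhost:8080' is a hypothetical proxy address.
os.environ['http_proxy'] = 'http://localhost:8080'

# Alternatively, a proxy can be chosen per request via the 'proxy' key
# of the request metadata dict that the middleware reads:
request_meta = {'proxy': 'http://localhost:8080'}
```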
Scrapy crashes with: ImportError: No module named win32api¶
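The answer is missing above. This ImportError typically appears on Windows because Twisted (which Scrapy depends on) needs the pywin32 package; installing it is the usual fix (an assumption on my part, since the original answer is not shown here):

```shell
# Assumption: the missing win32api module comes from the pywin32 package.
pip install pywin32
```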
How can I simulate a user login in my spider?¶
Can I crawl in breadth-first order instead of depth-first order?¶
Yes, there’s a setting for that:
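The setting's name is elided above. For reference, later Scrapy versions achieve breadth-first order with the following settings (an assumption drawn from newer documentation, not from this page) in the project's settings.py:

```python
# Assumption (newer Scrapy API): prioritize shallow requests and use
# FIFO queues so the crawl proceeds breadth-first.
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'
```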
My Scrapy crawler has memory leaks. What can I do?¶
Also, Python has a builtin memory leak issue which is described in Leaks without leaks.
How can I make Scrapy consume less memory?¶
See previous question.
Why does Scrapy download pages in English instead of my native language?¶
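The answer is not shown above. The usual cause is the Accept-Language request header, which can be overridden in the project settings (a sketch; 'es' is a placeholder language code):

```python
# In settings.py: ask servers for content in a given language.
# 'es' (Spanish) is a placeholder value here.
DEFAULT_REQUEST_HEADERS = {
    'Accept-Language': 'es',
}
```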
Where can I find some example code using Scrapy?¶
Also, there’s a site for sharing code snippets (spiders, middlewares, extensions) called Scrapy snippets.
Finally, you can find some example code for performing not-so-trivial tasks in the Scrapy Recipes wiki page.
Can I run a spider without creating a project?¶
Yes. You can use the runspider command. For example, if you have a spider written in a my_spider.py file you can run it with:
scrapy runspider my_spider.py
See the runspider command for more info.
I get “Filtered offsite request” messages. How can I fix them?¶
Those messages (logged with DEBUG level) don’t necessarily mean there is a problem, so you may not need to fix them.
Those messages are thrown by the Offsite Spider Middleware, which is a spider middleware (enabled by default) whose purpose is to filter out requests to domains outside the ones covered by the spider.
For more info, see the Offsite Spider Middleware documentation.
What is the recommended way to deploy a Scrapy crawler in production?¶
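The answer is elided above; historically the Scrapy docs pointed to Scrapyd for this. A sketch of that route, assuming the scrapyd and scrapyd-client packages (the exact workflow is an assumption, not taken from this page):

```shell
# Assumption: deploying via Scrapyd with the scrapyd-client tools.
pip install scrapyd scrapyd-client
scrapyd           # start the Scrapyd server
scrapyd-deploy    # run from the project directory to deploy the spiders
```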
Can I use JSON for large exports?¶
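The answer is elided above. A common caveat is that a single JSON document has to be parsed as a whole by consumers, so for large exports the line-oriented JSON Lines format is often preferred. A sketch in this page's own --set style (the jsonlines format name is assumed from later Scrapy docs):

```shell
# Assumption: using the line-oriented 'jsonlines' feed format for large exports.
scrapy crawl myspider --set FEED_URI=items.jl --set FEED_FORMAT=jsonlines
```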
Can I return (Twisted) deferreds from signal handlers?¶
Some signals support returning deferreds from their handlers, others don’t. See the Built-in signals reference to know which ones.
What does the response status code 999 mean?¶
999 is a custom response status code used by Yahoo sites to throttle requests.
Try slowing down the crawling speed by using a download delay of 2 (or higher) in your spider:
class MySpider(CrawlSpider):
    name = 'myspider'
    download_delay = 2
    # [ ... rest of the spider code ... ]
Or by setting a global download delay in your project with the DOWNLOAD_DELAY setting.
Can I call pdb.set_trace() from my spiders to debug them?¶
Yes, but you can also use the Scrapy shell, which allows you to quickly analyze (and even modify) the response being processed by your spider, which is, quite often, more useful than plain old pdb.set_trace().
For more info see Invoking the shell from spiders to inspect responses.
Simplest way to dump all my scraped items into a JSON/CSV/XML file?¶
To dump into a JSON file:
scrapy crawl myspider --set FEED_URI=items.json --set FEED_FORMAT=json
To dump into a CSV file:
scrapy crawl myspider --set FEED_URI=items.csv --set FEED_FORMAT=csv
To dump into an XML file:
scrapy crawl myspider --set FEED_URI=items.xml --set FEED_FORMAT=xml
For more information see Feed exports.