When you’re scraping web pages, the most common task you need to perform is to extract data from the HTML source. There are several libraries available to achieve this:
- BeautifulSoup is a very popular screen scraping library among Python programmers. It constructs a Python object based on the structure of the HTML code and deals with bad markup reasonably well, but it has one drawback: it's slow.
- lxml is an XML parsing library (which also parses HTML) with a pythonic API based on ElementTree. (lxml itself is not part of the Python standard library.) A brief comparison sketch of both follows.
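For comparison, here is roughly how extracting a page title looks with each library. This is a minimal sketch, assuming the page HTML is already in a string named html_body (the variable name is ours, and the imports reflect the library versions current at the time of writing):

from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3 import
import lxml.html

# BeautifulSoup builds a tolerant object tree from the markup, but is slow
soup = BeautifulSoup(html_body)
title = soup.find('title').string

# lxml exposes an ElementTree-like API with full XPath support
doc = lxml.html.fromstring(html_body)
title = doc.xpath('//title/text()')[0]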
Scrapy comes with its own mechanism for extracting data. They’re called XPath selectors (or just “selectors”, for short) because they “select” certain parts of the HTML document specified by XPath expressions.
XPath is a language for selecting nodes in XML documents, which can also be used with HTML.
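If you're new to XPath, here are a few illustrative expressions, written against the sample page used later on this page (the annotations after # are ours):

//title                 # every <title> element in the document
//base/@href            # the href attribute of the <base> element
//div[@id='images']/a   # <a> elements that are direct children of that <div>
//a/text()              # the text nodes directly inside every <a>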
This page explains how selectors work and describes their API, which is very small and simple, unlike the lxml API, which is much bigger because the lxml library can be used for many other tasks besides selecting markup documents.
For a complete reference of the selectors API see the XPath selector reference.
There are two types of selectors bundled with Scrapy: HtmlXPathSelector (for HTML) and XmlXPathSelector (for XML).
Both share the same selector API, and are constructed with a Response object as their first parameter. This is the Response they’re going to be “selecting”.
hxs = HtmlXPathSelector(response)  # an HTML selector
xxs = XmlXPathSelector(response)   # an XML selector
To explain how to use the selectors, we'll use the Scrapy shell (which provides interactive testing) and an example page located on the Scrapy documentation server:
Here’s its HTML code:
<html>
 <head>
  <base href='http://example.com/' />
  <title>Example website</title>
 </head>
 <body>
  <div id='images'>
   <a href='image1.html'>Name: My image 1 <br /><img src='image1_thumb.jpg' /></a>
   <a href='image2.html'>Name: My image 2 <br /><img src='image2_thumb.jpg' /></a>
   <a href='image3.html'>Name: My image 3 <br /><img src='image3_thumb.jpg' /></a>
   <a href='image4.html'>Name: My image 4 <br /><img src='image4_thumb.jpg' /></a>
   <a href='image5.html'>Name: My image 5 <br /><img src='image5_thumb.jpg' /></a>
  </div>
 </body>
</html>
First, let’s open the shell:
scrapy shell http://doc.scrapy.org/en/latest/_static/selectors-sample1.html
Then, after the shell loads, you’ll have some selectors already instantiated and ready to use.
Since we’re dealing with HTML, we’ll be using the HtmlXPathSelector object which is found, by default, in the hxs shell variable.
So, by looking at the HTML code of that page, let’s construct an XPath (using an HTML selector) for selecting the text inside the title tag:
>>> hxs.select('//title/text()')
[<HtmlXPathSelector (text) xpath=//title/text()>]
As you can see, the select() method returns an XPathSelectorList, which is a list of new selectors. This API can be used to quickly extract nested data.
To actually extract the textual data, you must call the selector extract() method, as follows:
>>> hxs.select('//title/text()').extract()
[u'Example website']
Now we’re going to get the base URL and some image links:
>>> hxs.select('//base/@href').extract()
[u'http://example.com/']

>>> hxs.select('//a[contains(@href, "image")]/@href').extract()
[u'image1.html', u'image2.html', u'image3.html', u'image4.html', u'image5.html']

>>> hxs.select('//a[contains(@href, "image")]/img/@src').extract()
[u'image1_thumb.jpg', u'image2_thumb.jpg', u'image3_thumb.jpg', u'image4_thumb.jpg', u'image5_thumb.jpg']
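Because the page declares a <base href>, the extracted links are relative. As a side note, here is a minimal sketch of resolving them to absolute URLs with the standard library's urlparse module (this combination is our own, not part of the selector API):

>>> import urlparse
>>> base = hxs.select('//base/@href').extract()[0]
>>> [urlparse.urljoin(base, href)
...  for href in hxs.select('//a[contains(@href, "image")]/@href').extract()]
[u'http://example.com/image1.html', u'http://example.com/image2.html',
 u'http://example.com/image3.html', u'http://example.com/image4.html',
 u'http://example.com/image5.html']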
Selectors also have a re() method for extracting data using regular expressions. However, unlike using the select() method, the re() method does not return a list of XPathSelector objects, so you can’t construct nested .re() calls.
Here's an example used to extract image names from the HTML code above:
>>> hxs.select('//a[contains(@href, "image")]/text()').re(r'Name:\s*(.*)')
[u'My image 1', u'My image 2', u'My image 3', u'My image 4', u'My image 5']
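Although .re() calls can't be nested, you can still narrow the context with select() first and apply re() at the end; a quick sketch on the same page:

>>> for link in hxs.select('//a[contains(@href, "image")]'):
...     print link.select('text()').re(r'Name:\s*(.*)')
[u'My image 1']
[u'My image 2']
[u'My image 3']
[u'My image 4']
[u'My image 5']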
The select() selector method returns a list of selectors, so you can call select() on those selectors too. Here's an example:
>>> links = hxs.select('//a[contains(@href, "image")]')
>>> links.extract()
[u'<a href="image1.html">Name: My image 1 <br><img src="image1_thumb.jpg"></a>',
 u'<a href="image2.html">Name: My image 2 <br><img src="image2_thumb.jpg"></a>',
 u'<a href="image3.html">Name: My image 3 <br><img src="image3_thumb.jpg"></a>',
 u'<a href="image4.html">Name: My image 4 <br><img src="image4_thumb.jpg"></a>',
 u'<a href="image5.html">Name: My image 5 <br><img src="image5_thumb.jpg"></a>']

>>> for index, link in enumerate(links):
...     args = (index, link.select('@href').extract(), link.select('img/@src').extract())
...     print 'Link number %d points to url %s and image %s' % args
Link number 0 points to url [u'image1.html'] and image [u'image1_thumb.jpg']
Link number 1 points to url [u'image2.html'] and image [u'image2_thumb.jpg']
Link number 2 points to url [u'image3.html'] and image [u'image3_thumb.jpg']
Link number 3 points to url [u'image4.html'] and image [u'image4_thumb.jpg']
Link number 4 points to url [u'image5.html'] and image [u'image5_thumb.jpg']
Keep in mind that if you are nesting XPathSelectors and use an XPath that starts with /, that XPath will be absolute to the document and not relative to the XPathSelector you’re calling it from.
For example, suppose you want to extract all <p> elements inside <div> elements. First, you would get all <div> elements:
>>> divs = hxs.select('//div')
At first, you may be tempted to use the following approach, which is wrong, as it actually extracts all <p> elements from the document, not only those inside <div> elements:
>>> for p in divs.select('//p'):  # this is wrong - gets all <p> from the whole document
...     print p.extract()
This is the proper way to do it (note the dot prefixing the .//p XPath):
>>> for p in divs.select('.//p'):  # extracts all <p> inside
...     print p.extract()
Another common case would be to extract all direct <p> children:
>>> for p in divs.select('p'):
...     print p.extract()
For more details about relative XPaths see the Location Paths section in the XPath specification.
There are two types of selectors bundled with Scrapy: HtmlXPathSelector and XmlXPathSelector. Both implement the same XPathSelector interface; the only difference is that one is used to process HTML data and the other XML data.
An XPathSelector object is a wrapper over a Response object, used to select certain parts of its content.

XPathSelector(response)
    response is a Response object that will be used for selecting and extracting data.

select(xpath)
    Apply the given XPath and return an XPathSelectorList of new selectors with the results.
    xpath is a string containing the XPath expression to apply.
re(regex)
    Apply the given regex and return a list of unicode strings with the matches.
    regex can be either a compiled regular expression or a string, which will be compiled to a regular expression using re.compile(regex).
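As an illustration of the two accepted forms, the following calls are equivalent (reusing the example page from above; the pattern name name_re is ours):

>>> import re
>>> name_re = re.compile(r'Name:\s*(.*)')
>>> hxs.select('//a[contains(@href, "image")]/text()').re(name_re)       # compiled pattern
[u'My image 1', u'My image 2', u'My image 3', u'My image 4', u'My image 5']
>>> hxs.select('//a[contains(@href, "image")]/text()').re(r'Name:\s*(.*)')  # plain string
[u'My image 1', u'My image 2', u'My image 3', u'My image 4', u'My image 5']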
register_namespace(prefix, uri)
    Register the given namespace to be used in this XPathSelector. Without registering namespaces you can't select or extract data from non-standard namespaces. See the examples below.

remove_namespaces()
    Remove all namespaces, allowing you to traverse the document using namespace-less XPaths. See the example below.
The XPathSelectorList class is a subclass of the built-in list class, which provides a few additional methods.
select(xpath)
    xpath is the same argument as the one in XPathSelector.select().

re(regex)
    regex is the same argument as the one in XPathSelector.re().
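In practice this means a whole XPathSelectorList can be queried at once: select() and re() are applied to each selector in the list and the results are combined. A short sketch using the example page from above:

>>> divs = hxs.select('//div')                      # an XPathSelectorList
>>> divs.select('.//a/@href').extract()             # select() applied to every div
[u'image1.html', u'image2.html', u'image3.html', u'image4.html', u'image5.html']
>>> divs.select('.//a/text()').re(r'Name:\s*(.*)')  # re() applied likewise
[u'My image 1', u'My image 2', u'My image 3', u'My image 4', u'My image 5']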
Here are some HtmlXPathSelector examples, using the following selector instantiated from an HTML response:

x = HtmlXPathSelector(html_response)
Extract the text of all <h1> elements from an HTML response body, returning a list of unicode strings:
x.select("//h1").extract() # this includes the h1 tag x.select("//h1/text()").extract() # this excludes the h1 tag
Iterate over all <p> tags and print their class attribute:
for node in x.select("//p"):
    print node.select("@class")
Extract textual data from all <p> tags without entities, as a list of unicode strings:
x.select("//p/text()").extract_unquoted() # the following line is wrong. extract_unquoted() should only be used # with textual XPathSelectors x.select("//p").extract_unquoted() # it may work but output is unpredictable
Here are some XmlXPathSelector examples, using the following selector instantiated from an XML response:

x = XmlXPathSelector(xml_response)
Extract all prices from a Google Base XML feed which requires registering a namespace:
x.register_namespace("g", "http://base.google.com/ns/1.0")
x.select("//g:price").extract()
When dealing with scraping projects, it is often quite convenient to get rid of namespaces altogether and just work with element names, writing simpler, more convenient XPaths. You can use the XPathSelector.remove_namespaces() method for that.
Let's show an example that illustrates this with the GitHub blog Atom feed.
First, we open the shell with the URL we want to scrape:
$ scrapy shell https://github.com/blog.atom
Once in the shell we can try selecting all <link> objects and see that it doesn’t work (because the Atom XML namespace is obfuscating those nodes):
>>> xxs.select("//link") 
But once we call the XPathSelector.remove_namespaces() method, all nodes can be accessed directly by their names:
>>> xxs.remove_namespaces()
>>> xxs.select("//link")
[<XmlXPathSelector xpath='//link' data=u'<link xmlns="http://www.w3.org/2005/Atom'>,
 <XmlXPathSelector xpath='//link' data=u'<link xmlns="http://www.w3.org/2005/Atom'>,
 ...
If you wonder why the namespace removal procedure isn't always called by default, instead of having to call it manually, this is because of two reasons which, in order of relevance, are:
- Removing namespaces requires iterating over and modifying all nodes in the document, which is a reasonably expensive operation to perform by default for all documents crawled by Scrapy.
- There could be cases where using namespaces is actually required, in case some element names clash between namespaces. These cases are very rare though.
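If you do need to keep the namespaces (for instance, when element names clash), you can register them explicitly instead, as described above. A sketch for the same Atom feed in a fresh shell session (the atom prefix is our own choice; the URI is the standard Atom namespace):

>>> xxs.register_namespace("atom", "http://www.w3.org/2005/Atom")
>>> xxs.select("//atom:link")
[<XmlXPathSelector xpath='//atom:link' data=u'<link xmlns="http://www.w3.org/2005/Atom'>,
 ...]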