Web Scraping 101

Python is a popular programming language used by companies like Google, Facebook, Amazon, and Microsoft. It is used for a wide variety of tasks: building websites with Django, web scraping, data analysis, machine learning, and natural language processing. Web scraping (also known as web mining) is the process of collecting information from a network resource. This post is an introduction to the different aspects and techniques involved in a web scraping system. Most of the concepts are language-agnostic, but the examples here use Python.

XPath is a technology that uses path expressions to select nodes or node-sets in an XML document (or, in our case, an HTML document). Even though XPath is not a programming language in itself, it allows you to write expressions that can directly access a specific HTML element without having to walk through the entire HTML tree.

It looks like the perfect tool for web scraping right? At ScrapingBee we love XPath!

In our previous article about web scraping with Python, we talked a little bit about XPath expressions. Now it's time to dig a bit deeper into this subject.

Why learn XPath

  • Knowing how to use basic XPath expressions is a must-have skill when extracting data from a web page.
  • It's more powerful than CSS selectors
  • It allows you to navigate the DOM in any direction
  • Can match text inside HTML elements

Entire books have been written on XPath, and I don't pretend to explain everything in depth. This is an introduction to XPath, and we will see through real examples how you can use it for your web scraping needs.

But first, let's talk a little about the DOM

Document Object Model

I am going to assume you already know HTML, so this is just a small reminder.

As you already know, a web page is a document containing text within tags that add meaning to the document by describing elements like titles, paragraphs, lists, and links.

Let's look at a basic HTML page to understand what the Document Object Model is.
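
The original example markup isn't preserved in this copy of the post; a minimal page along these lines (echoing the section, p, details, button, ul and li elements discussed below) gives the idea:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>What is the DOM?</title>
  </head>
  <body>
    <h1>DOM 101</h1>
    <section>
      <p>Websites can be scraped quite easily.</p>
      <details>
        <ul>
          <li>Item one</li>
          <li>Item two</li>
        </ul>
      </details>
      <button>Click me!</button>
    </section>
  </body>
</html>
```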


This HTML code is essentially HTML content nested inside other HTML content. The HTML hierarchy can be viewed as a tree, and we can already see this hierarchy through the indentation in the code.

When your web browser parses this code, it will create a tree which is an object representation of the HTML document. It is called the Document Object Model.

Below is the internal tree structure inside the Google Chrome inspector:


On the left we can see the HTML tree, and on the right we have the JavaScript object representing the currently selected element (in this case, the <p> tag), with all its attributes.

The important thing to remember is that the DOM you see in your browser when you right-click and inspect can be quite different from the actual HTML that was sent. Maybe some JavaScript code was executed and dynamically changed the DOM! For example, when you scroll through your Twitter feed, your browser sends a request to fetch new tweets, and some JavaScript code dynamically adds those new tweets to the DOM.

XPath Syntax

First, let's look at some XPath vocabulary:

• In XPath terminology, as with HTML, there are different types of nodes: root nodes, element nodes, attribute nodes, and so-called atomic values, which is a synonym for text nodes in an HTML document.

• Each element node has one parent. In the example above, the section element is the parent of p, details and button.

• Element nodes can have any number of children. In our example, the li elements are all children of the ul element.

• Siblings are nodes that have the same parent. p, details and button are siblings.

• Ancestors: a node's parent, its parent's parent, and so on.

• Descendants: a node's children, their children, and so on.

There are different types of expressions to select a node in an HTML document. Here are the most important ones:

XPath Expression        Description
nodename                Selects all nodes with this node name (the simplest expression)
/                       Selects from the root node (useful for writing absolute paths)
//                      Selects matching nodes anywhere in the document, no matter where they are
.                       Selects the current node
..                      Selects the parent of the current node
@                       Selects an attribute
*                       Matches any element node
@*                      Matches any attribute node

You can also use predicates to find a node that contains a specific value. Predicates are always in square brackets: [predicate]

Here are some examples:

XPath Expression            Description
//li[last()]                Selects the last li element
//li[3]                     Selects the third li element (the index starts at 1)
//div[@class='product']     Selects all div elements that have the class attribute with the value product

Now we will see some examples of XPath expressions. We can test XPath expressions inside the Chrome DevTools, so it is time to fire up Chrome.

To do so, right-click on the web page, select Inspect, then press Cmd+F on a Mac or Ctrl+F on other systems. You can then enter an XPath expression, and the matches will be highlighted in the DevTools.


Tip

In the DevTools, you can right-click on any DOM node and copy its full XPath expression, which you can later simplify.

XPath with Python

There are many Python packages that allow you to use XPath expressions to select HTML elements, like lxml, Scrapy, or Selenium. In these examples, we are going to use Selenium with Chrome in headless mode. You can look at this article to set up your environment: Scraping Single Page Application with Python
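
As a rough sketch of that setup, assuming Selenium 4 and a chromedriver available on your PATH (the URL below is just a placeholder):

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

# Start Chrome in headless mode (no visible browser window).
options = Options()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)

# Load a page and select an element with an XPath expression.
driver.get("https://example.com")  # placeholder URL
heading = driver.find_element(By.XPATH, "//h1")
print(heading.text)

driver.quit()
```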

E-commerce product data extraction

In this example, we are going to see how to extract E-commerce product data from Ebay.com with XPath expressions.
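
The exact snippet from the original post isn't preserved here, but a sketch of the idea looks like this. The element IDs (itemTitle, prcIsum, vi-itm-cond) and the product URL are hypothetical placeholders; eBay's real markup changes and needs inspecting in the DevTools. It assumes the `driver` created above.

```python
from selenium.webdriver.common.by import By

# Placeholder product URL; use a real eBay listing when trying this out.
driver.get("https://www.ebay.com/itm/123456789")

# Three XPath expressions, each matching an element by a (hypothetical) ID.
title     = driver.find_element(By.XPATH, "//h1[@id='itemTitle']").text
price     = driver.find_element(By.XPATH, "//span[@id='prcIsum']").text
condition = driver.find_element(By.XPATH, "//div[@id='vi-itm-cond']").text

print(title, price, condition)
```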


In these three XPath expressions, we are using // as an axis, meaning we are selecting nodes anywhere in the HTML tree. Then we are using a predicate [predicate] to match on specific IDs. IDs are supposed to be unique, so it's not a problem to do this.

But when you select an element by its class name, it's better to use a relative path, because the class name can be used anywhere in the DOM; the more specific you are, the better. Not only that, but when the website changes (and it will), your code will be much more resilient to those changes.

Automagically authenticate to a website

When you have to perform the same action on different websites, or extract the same type of information, we can be a little smarter with our XPath expressions and create generic ones, rather than a specific XPath for each website.

In order to explain this, we're going to make a “generic” authentication function that will take a Login URL, a username and password, and try to authenticate on the target website.

To auto-magically log into a website with your scrapers, the idea is:

  • GET /loginPage

  • Select the first <input type='password'> tag

  • Select the first <input> before it that is not hidden

  • Set the value attribute for both inputs

  • Select the enclosing form and click on the submit button.

Most login forms will have an <input type='password'> tag. So we can select this password input with a simple: //input[@type='password']

Once we have this password input, we can use a relative path to select the username/email input. It will generally be the first preceding input that isn't hidden: .//preceding::input[not(@type='hidden')]

It's really important to exclude hidden inputs, because most of the time you will have at least one hidden CSRF token input. CSRF stands for Cross-Site Request Forgery. The token is generated by the server and is required in every form submission / POST request. Almost every website uses this mechanism to prevent CSRF attacks.

Now we need to select the enclosing form from one of the inputs:

.//ancestor::form

And with the form, we can select the submit input/button:

.//*[@type='submit']

Here is an example of such a function:
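
The original snippet isn't reproduced in this copy; a minimal sketch of such a function, assuming Selenium 4 and the headless Chrome driver from earlier, might look like this:

```python
from selenium.webdriver.common.by import By

def autologin(driver, login_url, username, password):
    """Try to log into an arbitrary website using the XPath expressions above."""
    driver.get(login_url)

    # Most login forms have exactly one password input.
    password_input = driver.find_element(By.XPATH, "//input[@type='password']")

    # The username/email field is usually the closest preceding visible input,
    # so take the last match on the preceding axis (matches come in document order).
    username_input = password_input.find_elements(
        By.XPATH, ".//preceding::input[not(@type='hidden')]"
    )[-1]

    username_input.send_keys(username)
    password_input.send_keys(password)

    # Find the enclosing form, then its submit button/input, and click it.
    form = password_input.find_element(By.XPATH, ".//ancestor::form")
    form.find_element(By.XPATH, ".//*[@type='submit']").click()

# Hypothetical usage:
# autologin(driver, "https://example.com/login", "me@example.com", "s3cret")
```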

Of course, it is far from perfect; it won't work everywhere, but you get the idea.

Conclusion

XPath is very powerful when it comes to selecting HTML elements on a page, and often more powerful than CSS selectors.

One of the most difficult tasks when writing XPath expressions is not the expression itself, but being precise enough to select the right element, while also being resilient enough to survive DOM changes.

At ScrapingBee, depending on our needs, we use XPath expressions or CSS selectors for our ready-made APIs. We will discuss the differences between the two in another blog post!

I hope you enjoyed this article. If you're interested in CSS selectors, check out this BeautifulSoup tutorial.

Happy Scraping!

Web Scraping 201: Finding the API

February 15, 2015 // scraping, python, data, tutorial

This is part of a series of posts I have written about web scraping with Python.

  1. Web Scraping 101 with Python, which covers the basics of using Python for web scraping.
  2. Web Scraping 201: Finding the API, which covers when sites load data client-side with Javascript.
  3. Asynchronous Scraping with Python, showing how to use multithreading to speed things up.
  4. Scraping Pages Behind Login Forms, which shows how to log into sites using Python.

Update: Sorry folks, it looks like the NBA doesn't make shot log data accessible anymore. The same principles of this post still apply, but the particular example used is no longer functional. I do not intend to rewrite this post.

Previously, I explained how to scrape a page where the data is rendered server-side. However, the increasing popularity of Javascript frameworks such as AngularJS coupled with RESTful APIs means that fewer sites are generated server-side and are instead being rendered client-side.

In this post, I’ll give a brief overview of the differences between the two and show how to find the underlying API, allowing you to get the data you’re looking for.

Server-side vs client-side

Imagine we have a database of sports statistics and would like to build a web application on top of it (e.g. something like Basketball Reference).

If we build our web app using a server-side framework like Django [1], something akin to the following happens each time a user visits a page.

  1. User’s browser sends a request to the server hosting our application.
  2. Our server processes the request, checking to make sure the URL requested exists (amongst other things).
  3. If the requested URL does not exist, send an error back to the user’s browser and direct them to a 404 page.
  4. If the requested URL does exist, execute some code on the server which gets data from our database. Let’s say the user wants to see John Wall’s game-by-game stats for the 2014-15 NBA season. In this case, our Django/Python code queries the database and receives the data.
  5. Our Django/Python code injects the data into our application’s templates to complete the HTML for the page.
  6. Finally, the server sends the HTML to the user’s browser (a response to their request) and the page is displayed.

To illustrate the last step, go to John Wall’s game log and view the page source. Ctrl+f or Cmd+f and search for “2014-10-29”. This is the first row of the game-by-game stats table. We know the page was created server-side because the data is present in the page source.

However, if the web application is built with a client-side framework like Angular, the process is slightly different. In this case, the server still sends the static content (the HTML, CSS, and Javascript), but the HTML is only a template - it doesn’t hold any data. Separately, the Javascript in the server response fetches the data from an API and uses it to create the page client-side.

To illustrate, view the source of John Wall's shot log page on NBA.com - there's no data to scrape! See for yourself: Ctrl+f or Cmd+f for "Was @". Despite there being many instances of it in the shot log table, none are found in the page source.

If you’re thinking “Oh crap, I can’t scrape this data,” well, you’re in luck! Applications using an API are often easier to scrape - you just need to know how to find the API. Which means I should probably tell you how to do that.

Finding the API

With a client-side app, your browser is doing much of the work. And because your browser is what’s rendering the HTML, we can use it to see where the data is coming from using its built-in developer tools.

To illustrate, I’ll be using Chrome, but Firefox should be more or less the same (Internet Explorer users … you should switch to Chrome or Firefox and not look back).

To open Chrome’s Developer Tools, go to View -> Developer -> Developer Tools. In Firefox, it’s Tools -> Web Developer -> Toggle Tools. We’ll be using the Network tab, so click on that one. It should be empty.

Now, go to the page that has your data. In this case, it’s John Wall’s shot logs. If you’re already on the page, hit refresh. Your Network tab should look similar to this:

Next, click on the XHR filter. XHR is short for XMLHttpRequest - this is the type of request used to fetch XML or JSON data. You should see a couple entries in this table (screenshot below). One of them is the API request that returns the data you’re looking for (in this case, John Wall’s shots).

At this point, you’ll need to explore a bit to determine which request is the one you want. For our example, the one starting with “playerdashptshotlog” sounds promising. Let’s click on it and view it in the Preview tab. Things should now look like this:

Bingo! That’s the API endpoint. We can use the Preview tab to explore the response.

You should see a couple of objects:

  1. The resource name - playerdashptshotlog.
  2. The parameters (you might need to expand the resource section). These are the request parameters that were passed to the API. You can think of them like the WHERE clause of a SQL query. This request has parameters of Season=2014-15 and PlayerID=202322 (amongst others). Change the parameters in the URL and you’ll get different data (more on that in a bit).
  3. The result sets. This is self-explanatory.
  4. Within the result sets, you’ll find the headers and row set. Each object in the row set is essentially the result of a database query, while the headers tell you the column order. We can see that the first item in each row corresponds to the Game_ID, while the second is the Matchup.

Now, go to the Headers tab, grab the request URL, and open it in a new browser tab - you'll see the data we're looking for (example below). Note that I'm using JSONView, which nicely formats JSON in your browser.

To grab this data, we can use something like Python’s requests. Here’s an example:
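
The original snippet isn't preserved here; a minimal sketch with requests looks like the following. The endpoint and the Season/PlayerID parameters come from the request URL found above; any other required parameters would need to be copied from that URL as well, and, per the update at the top, the NBA no longer serves this particular data.

```python
import requests

# Request URL copied from the Headers tab in the Network panel.
url = "http://stats.nba.com/stats/playerdashptshotlog"

params = {
    "Season": "2014-15",   # change to "2013-14" for the previous season
    "PlayerID": "202322",  # John Wall; 201935 is James Harden
    # ...plus whatever other parameters appear in the copied request URL
}

response = requests.get(url, params=params)
data = response.json()

# The response mirrors what the Preview tab showed: column headers plus a row set.
result = data["resultSets"][0]
headers = result["headers"]  # column names, e.g. Game_ID, Matchup, ...
rows = result["rowSet"]      # one list per shot

print(headers)
print(rows[0])
```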

That’s it. Now you have the data and can get to work.

Note that passing different parameter values to the API yields different results. For instance, change the Season parameter to 2013-14 - now you have John Wall’s shots for the 2013-14 season. Change the PlayerID to 201935 - now you have James Harden’s shots.

Additionally, different APIs return different types of data. Some might send XML; others, JSON. Some might store the results in an array of arrays; others, an array of maps or dictionaries. Some might not return the column headers at all. Things vary between sites.

Ever had a situation where you couldn't find the data you were looking for in the page source? Well, now you know how to find it.

Was there something I missed? Have questions? Let me know.

[1] Really this can be any server-side framework - Ruby on Rails, PHP’s Drupal or CodeIgniter, etc.


