Python Requests Web Scraping





This post will walk through how to use the requests_html package to scrape options data from a JavaScript-rendered webpage. requests_html serves as an alternative to Selenium and PhantomJS, and provides a clear syntax similar to the awesome requests package. The code we’ll walk through is packaged into functions in the options module in the yahoo_fin package, but this article will show how to write the code from scratch using requests_html so that you can use the same idea to scrape other JavaScript-rendered webpages.

Requests is a well-known library to most Python developers and a fundamental tool for fetching raw HTML. With Python’s requests library (pip install requests), we get a web page by calling get on its URL. The response r contains many things, but r.content gives us the HTML. Once we have the HTML, we can parse it for the data we’re interested in analyzing.
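A minimal sketch (the URL is illustrative):

```python
import requests

# fetch a page and keep its raw HTML
r = requests.get("https://example.com")
html_content = r.content  # the page's HTML, as bytes
```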

Note:

requests_html requires Python 3.6+. If you don’t have requests_html installed, you can download it using pip:
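```
pip install requests-html
```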

Motivation

Let’s say we want to scrape options data for a particular stock. As an example, let’s look at Netflix (since it’s well known). If we go to Netflix’s options page on Yahoo Finance (https://finance.yahoo.com/quote/NFLX/options), we can see the option chain information for the earliest upcoming options expiration date.

On this webpage there’s a drop-down box allowing us to view data by other expiration dates. What if we want to get all the possible choices – i.e. all the possible expiration dates?


We can try using requests with BeautifulSoup, but that won’t work quite the way we want. To demonstrate, let’s try it and see what happens.
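A sketch of that attempt, using the NFLX options page above:

```python
import requests
from bs4 import BeautifulSoup

response = requests.get("https://finance.yahoo.com/quote/NFLX/options")
soup = BeautifulSoup(response.content, "html.parser")

# look for the <option> tags that hold the expiration dates
option_tags = soup.find_all("option")
print(option_tags)  # [] -- an empty list
```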

Running the above code shows us that option_tags is an empty list. This is because there are no option tags found in the HTML we scraped from the webpage above. However, if we look at the source via a web browser, we can see that there are, indeed, option tags:
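```html
<!-- illustrative of what the browser's source view shows; the actual values will differ -->
<option value="1620345600">May 7, 2021</option>
<option value="1620950400">May 14, 2021</option>
```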

Why the disconnect? The reason we see option tags when looking at the source code in a browser is that the browser executes JavaScript code that renders the HTML; i.e., it modifies the HTML of the page dynamically to let a user select one of the possible expiration dates. If we just scrape the raw HTML, the JavaScript is never executed, and thus we never see the tags containing the expiration dates. This brings us to requests_html.

Using requests_html to render JavaScript

Now, let’s use requests_html to run the JavaScript code in order to render the HTML we’re looking for.

Similar to the requests package, we can use a session object to get the webpage we need, storing the result in a response variable, resp. If you print out resp, you should see Response [200], which means the connection to the webpage was successful (otherwise you’ll see a different message).
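A sketch of that setup (the URL is the NFLX options page from earlier):

```python
from requests_html import HTMLSession

session = HTMLSession()
resp = session.get("https://finance.yahoo.com/quote/NFLX/options")
print(resp)  # <Response [200]> if the connection succeeded
```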

Running resp.html will give us an object that lets us print out, search through, and run several functions on the webpage’s HTML. To simulate running the JavaScript code, we use the render method on the resp.html object. Note how we don’t need to set a variable equal to this rendered result; i.e., running the below code:
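```python
# render the page's JavaScript in place (downloads Chromium the first time it runs)
resp.html.render()
```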

stores the updated HTML as an attribute of resp.html. Specifically, we can access the rendered HTML like this:
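```python
rendered_html = resp.html.html  # the rendered HTML, as a string
```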

So now resp.html.html contains the rendered HTML, including the option tags we need. From here, we can parse the expiration dates out of these tags using the find method.
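A sketch of that step (the dates shown are illustrative):

```python
# find every <option> tag in the rendered HTML
option_tags = resp.html.find("option")
expiration_dates = [tag.text for tag in option_tags]
print(expiration_dates)  # e.g. ['May 7, 2021', 'May 14, 2021', ...]
```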

Similarly, if we wanted to search for other HTML tags, we could just pass those into the find method, e.g. anchor (a), paragraph (p), or header tags (h1, h2, h3, etc.).


Alternatively, we could also use BeautifulSoup on the rendered HTML (see below). However, the awesome point here is that we can create the connection to this webpage, render its JavaScript, and parse out the resultant HTML all in one package!
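A sketch of that alternative:

```python
from bs4 import BeautifulSoup

# parse the rendered (post-JavaScript) HTML with BeautifulSoup
soup = BeautifulSoup(resp.html.html, "html.parser")
option_tags = soup.find_all("option")
```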

Lastly, we could scrape this particular webpage directly with yahoo_fin, which provides functions that wrap around requests_html specifically for Yahoo Finance’s website.
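For example, using the options module mentioned at the start of this post:

```python
from yahoo_fin import options

# one call wraps the session / render / parse steps above
expiration_dates = options.get_expiration_dates("nflx")
```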

Scraping options data for each expiration date

Once we have the expiration dates, we can proceed to scrape the data associated with each date. In this particular case, the pattern of the URL for each expiration date’s data requires the date to be converted to Unix timestamp format. This can be done using the pandas package.
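A sketch of that conversion (the date and resulting URL shape are illustrative):

```python
import pandas as pd

date = "May 7, 2021"
unix_ts = int(pd.Timestamp(date).timestamp())  # 1620345600

# each expiration date's data lives at a URL of roughly this shape
url = f"https://finance.yahoo.com/quote/NFLX/options?date={unix_ts}"
```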


Similarly, we could scrape this data using yahoo_fin. In this case, we just input the ticker symbol, NFLX, and an associated expiration date into either get_calls or get_puts to obtain the calls and puts data, respectively.

Note: here we don’t need to convert each date to a Unix timestamp as these functions will figure that out automatically from the input dates.
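A sketch (the expiration date is illustrative):

```python
from yahoo_fin.options import get_calls, get_puts

calls = get_calls("nflx", "May 7, 2021")  # calls data
puts = get_puts("nflx", "May 7, 2021")    # puts data
```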

That’s it for this post! To learn more about requests-html, check out my web scraping course on Udemy here!


To see the official documentation for requests_html, click here.

Related

I’ve recently had to perform some web scraping from a site that required a login. It wasn’t as straightforward as I expected, so I’ve decided to write a tutorial for it.

For this tutorial we will scrape a list of projects from our bitbucket account.

The code from this tutorial can be found on my Github.

We will perform the following steps:

  1. Extract the details that we need for the login
  2. Perform login to the site
  3. Scrape the required data

For this tutorial, I’ve used the following packages (can be found in the requirements.txt):
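```
# assumed contents, based on the imports used in this tutorial
requests
lxml
```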

Open the login page

Go to the following page: “bitbucket.org/account/signin”. You will see the login form (log out first in case you’re already logged in).

Check the details that we need to extract in order to log in

In this section we will build a dictionary that will hold our details for performing login:

  1. Right click on the “Username or email” field and select “inspect element”. We will use the value of the “name” attribute for this input, which is “username”. “username” will be the key and our user name / email will be the value (on other sites this might be “email”, “user_name”, “login”, etc.).
  2. Right click on the “Password” field and select “inspect element”. In the script we will need to use the value of the “name” attribute for this input, which is “password”. “password” will be the key in the dictionary and our password will be the value (on other sites this might be “user_password”, “login_password”, “pwd”, etc.).
  3. In the page source, search for a hidden input tag called “csrfmiddlewaretoken”. “csrfmiddlewaretoken” will be the key and value will be the hidden input value (on other sites this might be a hidden input with the name “csrf_token”, “authentication_token”, etc.). For example “Vy00PE3Ra6aISwKBrPn72SFml00IcUV8”.

We will end up with a dict that will look like this:
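```python
# placeholder values; the csrfmiddlewaretoken comes from the hidden input on the page
payload = {
    "username": "<USERNAME OR EMAIL>",
    "password": "<PASSWORD>",
    "csrfmiddlewaretoken": "Vy00PE3Ra6aISwKBrPn72SFml00IcUV8"
}
```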


Keep in mind that this is the specific case for this site. While this login form is simple, other sites might require us to check the request log of the browser and find the relevant keys and values that we should use for the login step.

For this script we will only need to import the following:
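```python
import requests
from lxml import html
```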

First, we would like to create our session object. This object will allow us to persist the login session across all our requests.
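In code:

```python
# a session object persists cookies (and therefore the login) across requests
session_requests = requests.session()
```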


Second, we would like to extract the CSRF token from the web page; this token is used during login. For this example we are using lxml and xpath, though we could have used regular expressions or any other method to extract this data.
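A sketch of that extraction (the login URL and xpath are based on the page described above):

```python
login_url = "https://bitbucket.org/account/signin/"
result = session_requests.get(login_url)

# parse the page and pull the value of the hidden csrfmiddlewaretoken input
tree = html.fromstring(result.text)
authenticity_token = tree.xpath("//input[@name='csrfmiddlewaretoken']/@value")[0]
```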

** More about xpath and lxml can be found here.

Next, we would like to perform the login phase. In this phase, we send a POST request to the login URL, using the payload that we created in the previous step as the data. We also use a header for the request and add a referer key to it, pointing at the same URL.
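Putting that together:

```python
# swap the placeholder token for the one we just extracted, then POST
payload["csrfmiddlewaretoken"] = authenticity_token
result = session_requests.post(
    login_url,
    data=payload,
    headers=dict(referer=login_url)
)
```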


Now that we were able to log in successfully, we will perform the actual scraping from the Bitbucket dashboard page.

In order to test this, let’s scrape the list of projects from the Bitbucket dashboard page. Again, we will use xpath to find the target elements and print out the results. If everything went OK, the output should be the list of projects in your Bitbucket account.
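A sketch, with the dashboard URL and xpath as assumptions (inspect the page to confirm them):

```python
url = "https://bitbucket.org/dashboard/overview"
result = session_requests.get(url, headers=dict(referer=url))

# pull the project names out of the dashboard HTML
tree = html.fromstring(result.content)
bucket_names = tree.xpath("//span[@class='repo-name']/text()")
print(bucket_names)
```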

You can also validate the results by checking the status code returned from each request. It won’t always tell you that the login phase was successful, but it can be used as an indicator.

for example:
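```python
result.ok           # True when the status code is < 400
result.status_code  # e.g. 200
```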

That’s it.

Full code sample can be found on Github.