Our article is an excerpt from the book Web Scraping with Python, written by Richard Lawson. This book contains step-by-step tutorials on how to leverage Python programming techniques for ethical web scraping.

The amount of data available on the web is consistently growing, both in quantity and in form. Businesses require this data to make decisions, particularly with the explosive growth of machine learning tools, which require large amounts of data for training. Much of this data is available via Application Programming Interfaces (APIs), but at the same time a lot of valuable data is still only available through the process of web scraping.

Python is the programming language of choice for many who build scraping systems. It is an easy-to-use language with a rich ecosystem of tools for other tasks. In this article, we will focus on the fundamentals of setting up a scraping environment and performing basic requests for data with several tools of the trade.

Setting up a Python development environment

If you have not used Python before, it is important to have a working development environment. The recipes in this book are all in Python: a mix of interactive examples, but primarily scripts to be interpreted by the Python interpreter. This recipe will show you how to set up an isolated development environment with virtualenv and manage project dependencies with pip. We will also get the code for the book and install it into the Python virtual environment.

Getting ready

We will be using Python 3.x exclusively, and specifically 3.6.1 in my case. Mac and Linux systems normally have Python version 2 installed, and Windows systems do not ship with Python at all, so it is likely that in either case Python 3 will need to be installed. You can find references for Python installers at www.python.org. You can check Python's version with python --version.
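On the environment used for these recipes, that check prints the following (your output will reflect whichever version you installed):

~ $ python --version
Python 3.6.1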


pip comes installed with Python 3.x, so we will omit instructions on its installation. Additionally, all command line examples in this book are run on a Mac. For Linux users the commands should be identical. On Windows, there are alternate commands (like dir instead of ls), but these alternatives will not be covered.

How to do it

We will be installing a number of packages with pip. These packages are installed into a Python environment, and version conflicts with other packages are common, so a good practice for following along with the recipes in the book is to create a new virtual Python environment where the packages we use will work properly together.

Virtual Python environments are managed with the virtualenv tool. This can be installed with the following command:

~ $ pip install virtualenv
Collecting virtualenv
  Using cached virtualenv-15.1.0-py2.py3-none-any.whl
Installing collected packages: virtualenv
Successfully installed virtualenv-15.1.0

Now we can use virtualenv. But before that, let's briefly look at pip. This command installs Python packages from PyPI, a package repository with tens of thousands of packages. We just saw pip's install subcommand, which ensures a package is installed. We can also see all currently installed packages with pip list:

~ $ pip list
alabaster (0.7.9)
amqp (1.4.9)
anaconda-client (1.6.0)
anaconda-navigator (1.5.3)
anaconda-project (0.4.1)
aniso8601 (1.3.0)

Packages can also be uninstalled using pip uninstall followed by the package name. I’ll leave it to you to give it a try.
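The general shape of that command is shown below; package-name is just a placeholder here, and pip will prompt for confirmation before actually removing anything:

~ $ pip uninstall package-name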

Now back to virtualenv. Using virtualenv is very simple. Let's use it to create an environment and install the code from GitHub. Let's walk through the steps:

  1. Create a directory to represent the project and enter the directory:
~ $ mkdir pywscb
~ $ cd pywscb
  2. Initialize a virtual environment folder named env:
pywscb $ virtualenv env
Using base prefix '/Users/michaelheydt/anaconda'
New python executable in /Users/michaelheydt/pywscb/env/bin/python
copying /Users/michaelheydt/anaconda/bin/python =>
/Users/michaelheydt/pywscb/env/bin/python
copying /Users/michaelheydt/anaconda/bin/../lib/libpython3.6m.dylib =>
/Users/michaelheydt/pywscb/env/lib/libpython3.6m.dylib
Installing setuptools, pip, wheel...done.
  3. This creates an env folder. Let's take a look at what was installed:
pywscb $ ls -la env
total 8
drwxr-xr-x 6 michaelheydt staff 204 Jan 18 15:38 .
drwxr-xr-x 3 michaelheydt staff 102 Jan 18 15:35 ..
drwxr-xr-x 16 michaelheydt staff 544 Jan 18 15:38 bin
drwxr-xr-x 3 michaelheydt staff 102 Jan 18 15:35 include
drwxr-xr-x 4 michaelheydt staff 136 Jan 18 15:38 lib
-rw-r--r-- 1 michaelheydt staff 60 Jan 18 15:38 pip-selfcheck.json
  4. Now we activate the virtual environment. This command uses the contents of the env folder to configure Python; after this, all Python activity is relative to this virtual environment.
pywscb $ source env/bin/activate
(env) pywscb $
  5. We can check that python is indeed using this virtual environment with the following command:
(env) pywscb $ which python
/Users/michaelheydt/pywscb/env/bin/python
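As an additional check (my own, beyond what the book shows), you can ask Python itself where its installation root is; inside an activated virtualenv, sys.prefix points at the env folder:

(env) pywscb $ python -c "import sys; print(sys.prefix)"
/Users/michaelheydt/pywscb/env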

With our virtual environment created, let's clone the book's sample code and take a look at its structure.

(env) pywscb $ git clone https://github.com/PacktBooks/PythonWebScrapingCookbook.git
Cloning into 'PythonWebScrapingCookbook'...
remote: Counting objects: 420, done.
remote: Compressing objects: 100% (316/316), done.
remote: Total 420 (delta 164), reused 344 (delta 88), pack-reused 0
Receiving objects: 100% (420/420), 1.15 MiB | 250.00 KiB/s, done.
Resolving deltas: 100% (164/164), done.
Checking connectivity... done.

This created a PythonWebScrapingCookbook directory.

(env) pywscb $ ls -l
total 0
drwxr-xr-x 9 michaelheydt staff 306 Jan 18 16:21 PythonWebScrapingCookbook
drwxr-xr-x 6 michaelheydt staff 204 Jan 18 15:38 env

Let’s change into it and examine the content.

(env) PythonWebScrapingCookbook $ ls -l
total 0
drwxr-xr-x 15 michaelheydt staff 510 Jan 18 16:21 py
drwxr-xr-x 14 michaelheydt staff 476 Jan 18 16:21 www

There are two directories. Most of the Python code is in the py directory. www contains some web content that we will use from time to time, served by a local web server. Let's look at the contents of the py directory:

(env) py $ ls -l
total 0
drwxr-xr-x 9 michaelheydt staff 306 Jan 18 16:21 01
drwxr-xr-x 25 michaelheydt staff 850 Jan 18 16:21 03
drwxr-xr-x 21 michaelheydt staff 714 Jan 18 16:21 04
drwxr-xr-x 10 michaelheydt staff 340 Jan 18 16:21 05
drwxr-xr-x 14 michaelheydt staff 476 Jan 18 16:21 06
drwxr-xr-x 25 michaelheydt staff 850 Jan 18 16:21 07
drwxr-xr-x 14 michaelheydt staff 476 Jan 18 16:21 08
drwxr-xr-x 7 michaelheydt staff 238 Jan 18 16:21 09
drwxr-xr-x 7 michaelheydt staff 238 Jan 18 16:21 10
drwxr-xr-x 9 michaelheydt staff 306 Jan 18 16:21 11
drwxr-xr-x 8 michaelheydt staff 272 Jan 18 16:21 modules

Code for each chapter is in the numbered folder matching the chapter (there is no code for chapter 2 as it is all interactive Python).

Note that there is a modules folder. Some of the recipes throughout the book use code in those modules. Make sure that your Python path points to this folder. On Mac and Linux you can set this in your .bash_profile file (on Windows, use the environment variables dialog):

export PYTHONPATH="/users/michaelheydt/dropbox/packt/books/pywebscrcookbook/code/py/modules"
export PYTHONPATH
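To verify that the variable took effect, you can open a new shell (or source .bash_profile) and check sys.path from Python. This quick check is my own addition, not a step from the book; if the export worked, the modules path will show up in the printed list:

py $ python -c "import sys; print([p for p in sys.path if p.endswith('modules')])"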

The contents of each folder generally follow a numbering scheme matching the sequence of the recipes in the chapter. The following is the contents of the chapter 6 folder:

(env) py $ ls -la 06
total 96
drwxr-xr-x 14 michaelheydt staff 476 Jan 18 16:21 .
drwxr-xr-x 14 michaelheydt staff 476 Jan 18 16:26 ..
-rw-r--r-- 1 michaelheydt staff 902 Jan 18 16:21 01_scrapy_retry.py
-rw-r--r-- 1 michaelheydt staff 656 Jan 18 16:21 02_scrapy_redirects.py
-rw-r--r-- 1 michaelheydt staff 1129 Jan 18 16:21 03_scrapy_pagination.py
-rw-r--r-- 1 michaelheydt staff 488 Jan 18 16:21 04_press_and_wait.py
-rw-r--r-- 1 michaelheydt staff 580 Jan 18 16:21 05_allowed_domains.py
-rw-r--r-- 1 michaelheydt staff 826 Jan 18 16:21 06_scrapy_continuous.py
-rw-r--r-- 1 michaelheydt staff 704 Jan 18 16:21 07_scrape_continuous_twitter.py
-rw-r--r-- 1 michaelheydt staff 1409 Jan 18 16:21 08_limit_depth.py
-rw-r--r-- 1 michaelheydt staff 526 Jan 18 16:21 09_limit_length.py
-rw-r--r-- 1 michaelheydt staff 1537 Jan 18 16:21 10_forms_auth.py
-rw-r--r-- 1 michaelheydt staff 597 Jan 18 16:21 11_file_cache.py
-rw-r--r-- 1 michaelheydt staff 1279 Jan 18 16:21 12_parse_differently_based_on_rules.py

In the recipes I'll state that we'll be using the script in <chapter directory>/<filename>.

Now, just to be complete: if you want to get out of the Python virtual environment, you can exit using the following command:

(env) py $ deactivate
py $

And by checking which python again, we can see it has switched back:

py $ which python
/Users/michaelheydt/anaconda/bin/python

Scraping Python.org with Requests and Beautiful Soup

In this recipe we will install Requests and Beautiful Soup and scrape some content from www.python.org. We'll get some basic familiarity with both libraries here, and we'll come back to them in subsequent chapters to dive deeper into each.

Getting ready

In this recipe, we will scrape the upcoming Python events from https://www.python.org/events/python-events/. The following is an example of the Python.org events page (it changes frequently, so your experience will differ):


We will need to ensure that Requests and Beautiful Soup are installed. We can do that with the following:

pywscb $ pip install requests
Downloading/unpacking requests
  Downloading requests-2.18.4-py2.py3-none-any.whl (88kB): 88kB downloaded
Downloading/unpacking certifi>=2017.4.17 (from requests)
  Downloading certifi-2018.1.18-py2.py3-none-any.whl (151kB): 151kB downloaded
Downloading/unpacking idna>=2.5,<2.7 (from requests)
Downloading/unpacking chardet>=3.0.2,<3.1.0 (from requests)
Downloading/unpacking urllib3>=1.21.1,<1.23 (from requests)
...
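Beautiful Soup is installed the same way; on PyPI the package is published as beautifulsoup4 (the shorter bs4 name also works as a thin wrapper around it):

pywscb $ pip install beautifulsoup4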

How to do it

Now let's go and learn to scrape a couple of events. For this recipe we will start by using interactive Python.

  1. Start it with the ipython command:
$ ipython
Python 3.6.1 |Anaconda custom (x86_64)| (default, Mar 22 2017, 19:25:17)
Type "copyright", "credits" or "license" for more information.
IPython 5.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]:
  2. Next we import Requests:
In [1]: import requests
  3. We now use Requests to make a GET HTTP request for the URL https://www.python.org/events/python-events/:
In [2]: url = 'https://www.python.org/events/python-events/'
In [3]: req = requests.get(url)
  4. That downloaded the page content, but it is stored in our requests object, req. We can retrieve the content using the .text property. This prints the first 200 characters:
In [4]: req.text[:200]
Out[4]: '\n\n\n...'

What comes back is the start of the page's raw HTML, which we can now parse.
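To round out the recipe, here is a minimal sketch of pulling the individual events out of that HTML with Beautiful Soup. The selectors used (a <ul class="list-recent-events"> list, an <h3> title, and <time> and <span class="event-location"> elements inside each item) are assumptions based on the structure of the events page at the time of writing; since the page changes frequently, you may need to adjust them:

from bs4 import BeautifulSoup
import requests

# Fetch the events page, exactly as in the steps above.
url = 'https://www.python.org/events/python-events/'
req = requests.get(url)

# Parse the raw HTML. html.parser is built into Python;
# lxml can be used instead if it is installed.
soup = BeautifulSoup(req.text, 'html.parser')

# Assumption: upcoming events are <li> items inside a
# <ul class="list-recent-events"> element.
events = soup.find('ul', {'class': 'list-recent-events'}).findAll('li')

for event in events:
    # Assumption: each item holds an <h3> with a link naming the event,
    # plus <time> and <span class="event-location"> tags.
    name = event.find('h3').find('a').text
    time_tag = event.find('time')
    when = time_tag.text if time_tag is not None else 'unknown'
    location_tag = event.find('span', {'class': 'event-location'})
    where = location_tag.text if location_tag is not None else 'unknown'
    print(name, '|', when, '|', where)

Running this prints one line per upcoming event with its name, date, and location. We'll return to both libraries in subsequent chapters and handle pages like this more robustly.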
