Simple Web Scraping with Python and BeautifulSoup
By Mathi Maheswaran. Posted on March 20, 2021, in Python.
One of the awesome things about Python is how relatively simple it is to do pretty complex and impressive tasks. A great example of this is web scraping. The internet gives us a massive source of data, but that data is often unstructured and not ready for analysis. For example, if you want to analyze the weather for one year, you first have to collect a year's worth of data.
This is an article about web scraping with Python. In it we will look at the basics of web scraping using popular libraries such as requests and Beautiful Soup. We will cover:
- What is web scraping?
- What are the legal issues around web scraping?
- Using CSS selectors to target data on a web-page
- Getting product data from a demo book site
- Storing scraped data in CSV and JSON formats
What is Web Scraping?
Some websites contain a large amount of valuable data. Web scraping means extracting data from websites, usually in an automated fashion using a bot or web crawler. The kinds of data available are as wide-ranging as the internet itself. Common tasks include:
- scraping stock prices to inform investment decisions
- automatically downloading files hosted on websites
- scraping data about company contacts
- scraping data from a store locator to create a list of business locations
- scraping product data from sites like Amazon or eBay
- scraping sports stats for betting
- collecting data to generate leads
- collating data available from multiple sources
Legality of Web Scraping
There has been some confusion in the past about the legality of scraping data from public websites. This has been cleared up somewhat recently (I’m writing in July 2020) by a court case where the US Court of Appeals denied LinkedIn’s requests to prevent HiQ, an analytics company, from scraping its data.
The decision was a historic moment in the data privacy and data regulation era. It showed that any data that is publicly available and not copyrighted is potentially fair game for web crawlers.
However, proceed with caution. You should always honour the terms and conditions of a site that you wish to scrape data from, as well as the contents of its robots.txt file. You also need to ensure that any data you scrape is used in a legal way. For example, you should consider copyright issues and data protection laws such as GDPR. Also, be aware that the court decision could be reversed and other laws may apply. This article is not intended to provide legal advice, so please do your own research on this topic. One place to start is Quora, where there are some good and detailed questions and answers on the subject.
One way you can avoid any potential legal snags while learning how to use Python to scrape websites for data is to use sites which either welcome or tolerate your activity. One great place to start is toscrape.com – a web scraping sandbox which we will use in this article.
An example of Web Scraping in Python
You will need to install two common scraping libraries to use the following code. This can be done using
pip install requests
pip install beautifulsoup4
in a command prompt. For details on how to install packages in Python, check out Installing Python Packages with Pip.
The requests library handles connecting to and fetching data from your target web-page, while beautifulsoup4 enables you to parse and extract the parts of that data you are interested in.
Let’s look at an example:
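The original code listing is not reproduced here, so below is a minimal sketch of the kind of program described in this article. The helper name get_book_data and the embedded sample markup are assumptions of mine; the class names (product_pod, product_price, price_color) match those used on books.toscrape.com.

```python
from bs4 import BeautifulSoup

# A tiny sample of the markup used on books.toscrape.com, so the
# parsing logic below can run without a network connection.
SAMPLE_HTML = """
<article class="product_pod">
  <h3><a>A Light in the Attic</a></h3>
  <div class="product_price"><p class="price_color">£51.77</p></div>
</article>
"""


def get_book_data(html):
    """Return a list of (title, price) tuples from a listing page."""
    soup = BeautifulSoup(html, "html.parser")
    data = []
    for item in soup.find_all(class_="product_pod"):
        # The h3 element holds the (possibly truncated) book title.
        title = item.h3.text
        # The price lives in <p class="price_color"> inside
        # <div class="product_price">; strip the pound sign and
        # convert the remaining text to a float.
        price = float(
            item.find("div", class_="product_price")
            .find("p", class_="price_color")
            .text.strip("£")
        )
        data.append((title, price))
    return data


if __name__ == "__main__":
    # To scrape the live site instead, use:
    #   import requests
    #   html = requests.get("http://books.toscrape.com/").text
    data = get_book_data(SAMPLE_HTML)
    print("### RESULTS ###")
    for item in data:
        print(item)
```

To work on the live site, simply pass the fetched page text to get_book_data instead of the sample markup.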
So how does the code work?
In order to be able to do web scraping with Python, you will need a basic understanding of HTML and CSS. This is so you understand the territory you are working in. You don’t need to be an expert but you do need to know how to navigate the elements on a web-page using an inspector such as chrome dev tools. If you don’t have this basic knowledge, you can go off and get it (w3schools is a great place to start), or if you are feeling brave, just try and follow along and pick up what you need as you go along.
To see what is happening in the code above, navigate to http://books.toscrape.com/. Place your cursor over a book price, right-click your mouse and select "inspect" (that's the option on Chrome – it may be something slightly different, like "inspect element", in other browsers). When you do this, a new area will appear showing you the HTML which created the page. You should take particular note of the "class" attributes of the elements you wish to target.
In our code, we use Beautiful Soup's find_all() method with a class filter. This uses the class attribute and returns a list of elements with the class product_pod – one element per book on the page.
Then, for each of these elements, we extract the title and the price. The first step is fairly straightforward and just selects the text of the h3 element for the current product. The next step does lots of things, and could be split into separate lines. Basically, it finds the p tag with class price_color within the div tag with class product_price, extracts the text, strips out the pound sign and finally converts it to a float. This last step is not strictly necessary, as we will be storing our data in text format, but I've included it in case you need an actual numeric data type in your own projects.
Storing Scraped Data in CSV Format
CSV (comma-separated values) is a very common and useful file format for storing data. It is lightweight and does not require a database.
Add a store_as_csv function above the if __name__ == '__main__': line, and just before the print('### RESULTS ###') line, add this call:
store_as_csv(data, headings=['title', 'price'])
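The store_as_csv function itself is not shown in this copy of the article; a minimal sketch using Python's built-in csv module might look like this (the output filename books.csv is an assumption of mine):

```python
import csv


def store_as_csv(data, headings=None):
    """Write a list of (title, price) rows to books.csv."""
    with open("books.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if headings:
            writer.writerow(headings)  # optional header row
        writer.writerows(data)
```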
When you run the code now, a file will be created containing your book data in CSV format. Pretty neat, huh?
Storing Scraped Data in JSON Format
Another very common format for storing data is JSON (JavaScript Object Notation).
Add a store_as_json function above the if __name__ == '__main__': line, and place the call
store_as_json(data)
just above the print('### RESULTS ###') line.
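As with the CSV helper, the store_as_json function is not shown here; a minimal sketch using the standard library json module might be (again, the filename books.json is an assumption):

```python
import json


def store_as_json(data):
    """Write the scraped data to books.json as a JSON array."""
    with open("books.json", "w", encoding="utf-8") as f:
        json.dump(data, f, indent=4)
```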
So there you have it – you now know how to scrape data from a web-page, and it didn’t take many lines of Python code to achieve!
Full Code Listing for Python Web Scraping Example
Here’s the full listing of our program for your convenience.
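The full listing was not preserved in this copy, so here is a sketch that combines everything described above. The helper names and output filenames are my own choices; the try/except is an addition so the script degrades gracefully when offline.

```python
import csv
import json

import requests
from bs4 import BeautifulSoup


def get_book_data(html):
    """Extract (title, price) pairs from a books.toscrape.com listing page."""
    soup = BeautifulSoup(html, "html.parser")
    data = []
    for item in soup.find_all(class_="product_pod"):
        title = item.h3.text
        price = float(
            item.find("div", class_="product_price")
            .find("p", class_="price_color")
            .text.strip("£")
        )
        data.append((title, price))
    return data


def store_as_csv(data, headings=None):
    """Write the scraped data to books.csv."""
    with open("books.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if headings:
            writer.writerow(headings)
        writer.writerows(data)


def store_as_json(data):
    """Write the scraped data to books.json."""
    with open("books.json", "w", encoding="utf-8") as f:
        json.dump(data, f, indent=4)


if __name__ == "__main__":
    try:
        html = requests.get("http://books.toscrape.com/", timeout=10).text
    except requests.RequestException:
        html = ""  # no network: nothing to scrape
    data = get_book_data(html)
    store_as_csv(data, headings=["title", "price"])
    store_as_json(data)
    print("### RESULTS ###")
    for item in data:
        print(item)
```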
One final note. We have used requests and beautifulsoup for our scraping, and a lot of the existing code on the internet in articles and repositories uses those libraries. However, there is a newer library which performs the task of both of these put together, and has some additional functionality which you may find useful later on. This newer library is requests-html, and it is well worth looking at once you have a basic understanding of what you are trying to achieve with web scraping. Another library which is often used for more advanced projects spanning multiple pages is scrapy, but that is a more complex beast altogether, for a later article.
Working through the contents of this article will give you a firm grounding in the basics of web scraping in Python. I hope you find it helpful!