Reddit Web Scraping Python


Scrapy is one of the most accessible tools that you can use to scrape and also spider a website with effortless ease.

Today, let's see how we can scrape Reddit to get new posts from a subreddit like r/programming.

First, we need to install scrapy if you haven't already.

Once installed, go ahead and create a project by invoking the startproject command.


This will output a success message and create a folder structure like this...
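A freshly generated Scrapy project typically looks like this (the exact layout can vary slightly between Scrapy versions):

```
scrapingproject/
    scrapy.cfg            # deploy configuration
    scrapingproject/      # the project's Python module
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/          # folder where our spiders will live
            __init__.py
```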

Now cd into scrapingproject. Since the project folder contains an inner module folder of the same name, you will need to do it twice, like this.
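Assuming the default layout created above:

```shell
cd scrapingproject
cd scrapingproject
```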

Now we need a spider to crawl the programming subreddit, so we use the genspider command to tell Scrapy to create one for us. We call the spider ourfirstbot and pass it the URL of the subreddit.
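The command looks like this:

```shell
scrapy genspider ourfirstbot www.reddit.com/r/programming/
```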

This should return successfully, with a message confirming that the spider ourfirstbot was created.

Great. Now open the file ourfirstbot.py in the spiders folder... it should look like this...

Let's examine this code before we proceed...

The allowed_domains array restricts all further crawling to the domains specified here.

start_urls is the list of URLs to crawl... for us, in this example, we only need one URL.

The def parse(self, response): function is called by Scrapy after every successful URL crawl. This is where we write the code to extract the data we want.

We now need to find the CSS selectors of the elements we want to extract data from. Go to the URL https://www.reddit.com/r/programming/, right click on the title of one of the posts, and click on Inspect. This will open the Google Chrome Inspector, like below...

You can see that the CSS class name of the title is _eYtD2XCVieq6emjKBH3m, so we are going to ask Scrapy to get us the text property of this class, like this.

Similarly, we try and find the class names of the votes element and the number of comments element (note that the class names might change by the time you run this code).

If you are unfamiliar with CSS selectors, you can refer to this page in the Scrapy documentation: https://docs.scrapy.org/en/latest/topics/selectors.html

We now use the zip function to pair up the items at the same index in the three lists, so each post's title, votes, and comment count can be treated as a single record. Here is how it looks.

And now let's run this with the crawl command.
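From the project folder:

```shell
scrapy crawl ourfirstbot
```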

And Bingo... you get the results as a stream of scraped items in the console output.

Now let's export the extracted data to a CSV file. All you have to do is provide the export file name, like this.
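Scrapy infers the output format from the file extension passed to -o:

```shell
scrapy crawl ourfirstbot -o data.csv
```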

or if you want the data in the JSON format...
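The same command with a .json extension produces JSON instead:

```shell
scrapy crawl ourfirstbot -o data.json
```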

Scaling Scrapy

The example above is fine for small-scale web crawling projects. But if you try to scrape large quantities of data at high speeds from websites like Reddit, you will find that sooner or later your access gets restricted. Reddit can tell you are a bot, so one of the things you can do is run the crawler impersonating a web browser. This is done by passing a user agent string to the Reddit web server so it doesn't block you.

Like this...
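One way is to override Scrapy's USER_AGENT setting on the command line; any string copied from a real, current browser works (the one below is just an example):

```shell
scrapy crawl ourfirstbot -s USER_AGENT='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36'
```

You can also set USER_AGENT once in the project's settings.py instead of passing it on every run.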

In more advanced implementations you will even need to rotate this string, so Reddit can't tell it's the same browser! Welcome to web scraping.


If we get a little more advanced, you will realise that Reddit can simply block your IP, ignoring all your other tricks. This is a bummer, and this is where most web crawling projects fail.


Investing in a private rotating proxy service like Proxies API can, most of the time, make the difference between a successful, headache-free web scraping project that gets the job done consistently and one that never really works.

Plus, with the running offer of 1000 free API calls, you have almost nothing to lose by using our rotating proxy and comparing notes. It takes only one line of integration, so it's hardly disruptive.

Our rotating proxy server Proxies API provides a simple API that can solve all IP blocking problems instantly:

  - millions of high speed rotating proxies located all over the world
  - automatic IP rotation
  - automatic User-Agent string rotation (which simulates requests from different, valid web browsers and web browser versions)
  - automatic CAPTCHA solving technology

With these, hundreds of our customers have solved the headache of IP blocks with a simple API.

The whole thing can be accessed by a simple API like below in any programming language.
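For example, with curl (the endpoint and auth_key parameter follow Proxies API's documented pattern; the key is a placeholder):

```shell
curl "http://api.proxiesapi.com/?auth_key=YOUR_API_KEY&url=https://www.reddit.com/r/programming/"
```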

We have a running offer of 1000 API calls completely free. Register and get your free API Key here.

Once you have an API_KEY from Proxies API, you just have to change your code to this...
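A sketch of that change, assuming the endpoint format above and a placeholder key. The target URL is percent-encoded so it survives as a query parameter:

```python
from urllib.parse import quote

API_KEY = 'YOUR_API_KEY'  # placeholder; use the key from your dashboard
target = 'https://www.reddit.com/r/programming/'

# Route the request through the rotating proxy endpoint
# instead of hitting Reddit directly.
start_urls = [
    'http://api.proxiesapi.com/?auth_key=' + API_KEY + '&url=' + quote(target, safe='')
]
print(start_urls[0])
```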

We have changed only one line, in the start_urls array, and that makes sure we never have to worry about IP rotation, user agent string rotation, or even rate limits again.
