Nokogiri Web Scraping


Nokogiri (鋸) makes it easy and painless to work with XML and HTML from Ruby. It provides a sensible, easy-to-understand API for reading, writing, modifying, and querying documents. It is fast and standards-compliant by relying on native parsers like libxml2 (C) and xerces (Java).

Nokogiri gives us a structured representation of raw HTML from which we can start extracting data. In the examples below we also use Byebug, which sets a debugger that lets us interact with variables as we go. Web scraping is the practice of parsing a site's HTML (or even its DOM) and extracting meaningful data from it. The older term "screen scraping" can still mean reading data off any kind of screen, but today it almost always refers to the web, which means that in practice we are really talking about HTML. We are going to use Nokogiri, a Ruby gem (nokogiri is also the Japanese word for a saw), to help us parse the HTML (or XML).

Guiding Principles¶

Some guiding principles Nokogiri tries to follow:

  • be secure-by-default by treating all documents as untrusted by default
  • be a thin-as-reasonable layer on top of the underlying parsers, and don't attempt to fix behavioral differences between the parsers

Features Overview¶

  • DOM Parser for XML and HTML4
  • SAX Parser for XML and HTML4
  • Push Parser for XML and HTML4
  • Document search via XPath 1.0
  • Document search via CSS3 selectors, with some jQuery-like extensions
  • XSD Schema validation
  • XSLT transformation
  • 'Builder' DSL for XML and HTML documents


Support, Getting Help, and Reporting Issues¶

All official documentation is posted at nokogiri.org (the source for which lives in its own GitHub repository, and we welcome contributions).

Consider subscribing to Tidelift which provides license assurances and timely security notifications for your open source dependencies, including Nokogiri. Tidelift subscriptions also help the Nokogiri maintainers fund our automated testing which in turn allows us to ship releases, bugfixes, and security updates more often.


Your first stops for learning more about Nokogiri should be:

  • An excellent community-maintained Cheat Sheet

Ask For Help¶

There are a few ways to ask exploratory questions:

  • Ask in the Ruby Discord chat server
  • Post to the Nokogiri mailing list
  • Open an issue on the GitHub issue tracker using the 'Help Request' template

Please do not mail the maintainers at their personal addresses.

Report A Bug¶

The Nokogiri bug tracker is the GitHub issue tracker at github.com/sparklemotion/nokogiri/issues.

Please use the 'Bug Report' or 'Installation Difficulties' templates.

Security and Vulnerability Reporting¶

Please report vulnerabilities privately rather than through the public issue tracker.

Full information and a description of our security policy is in SECURITY.md.

Semantic Versioning Policy¶

Nokogiri follows Semantic Versioning (since 2017 or so).


We bump Major.Minor.Patch versions following this guidance:

Major: (we've never done this)

  • Significant backwards-incompatible changes to the public API that would require rewriting existing application code.
  • Some examples of backwards-incompatible changes we might someday consider for a Major release are collected in the project's roadmap.

Minor:

  • Features and bugfixes.
  • Updating packaged libraries for non-security-related reasons.
  • Dropping support for EOLed Ruby versions. Some folks find this objectionable, but SemVer says this is OK if the public API hasn't changed.
  • Backwards-incompatible changes to internal or private methods and constants. These are detailed in the 'Changes' section of each changelog entry.

Patch:

  • Bugfixes.
  • Security updates.
  • Updating packaged libraries for security-related reasons.


Requirements¶

  • Ruby >= 2.5
  • JRuby >= 9.2

Native Gems: Faster, more reliable installation¶

'Native gems' contain pre-compiled libraries for a specific machine architecture. On supported platforms, this removes the need to compile the C extension and the packaged libraries, or for system dependencies to exist. The result is a much faster and more reliable installation, which, as you probably know, has historically been the biggest headache for Nokogiri users.

Supported Platforms¶

As of v1.11.0, Nokogiri ships pre-compiled, 'native' gems for the following platforms:

  • Linux: x86-linux and x86_64-linux (req: glibc >= 2.17), including musl platforms like Alpine
  • Darwin/MacOS: x86_64-darwin and arm64-darwin
  • Windows: x86-mingw32 and x64-mingw32
  • Java: any platform running JRuby 9.2 or higher

To determine whether your system supports one of these gems, look at the output of bundle platform or ruby -e 'puts Gem::Platform.local.to_s'.

If you're on a supported platform, either gem install or bundle install should install a native gem without any additional action on your part. This installation should only take a few seconds, and your output should look something like:
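The exact gem version and platform string will differ on your machine; the following transcript is illustrative, not literal expected output:

```
$ gem install nokogiri
Fetching nokogiri-1.13.10-x86_64-linux.gem
Successfully installed nokogiri-1.13.10-x86_64-linux
1 gem installed
```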

Other Installation Options¶

Because Nokogiri is a C extension, it requires that you have a C compiler toolchain, Ruby development header files, and some system dependencies installed.

The following may work for you if you have an appropriately-configured system:
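For example, installing from source with RubyGems, which compiles the C extension against your system toolchain:

```
gem install nokogiri
```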

If you have any issues, please visit Installing Nokogiri for more complete instructions and troubleshooting.

How To Use Nokogiri¶

Nokogiri is a large library, and so it's challenging to briefly summarize it. We've tried to provide long, real-world examples at Tutorials.

Parsing and Querying¶

Here is example usage for parsing and querying a document:


Strings are always stored as UTF-8 internally. Methods that return text values will always return UTF-8 encoded strings. Methods that return a string containing markup (like to_xml, to_html and inner_html) will return a string encoded like the source document.


Some documents declare one encoding, but actually use a different one. In these cases, which encoding should the parser choose?

Data is just a stream of bytes. Humans add meaning to that stream. Any particular set of bytes could be valid characters in multiple encodings, so detecting encoding with 100% accuracy is not possible. libxml2 does its best, but it can't be right all the time.

If you want Nokogiri to handle the document encoding properly, your best bet is to explicitly set the encoding. Here is an example of explicitly setting the encoding to EUC-JP on the parser:


Technical Overview¶

Guiding Principles¶

As noted above, two guiding principles of the software are:

  • be secure-by-default by treating all documents as untrusted by default
  • be a thin-as-reasonable layer on top of the underlying parsers, and don't attempt to fix behavioral differences between the parsers

Notably, despite all parsers being standards-compliant, there are behavioral inconsistencies between the parsers used in the CRuby and JRuby implementations, and Nokogiri does not and should not attempt to remove these inconsistencies. Instead, we surface these differences in the test suite when they are important/semantic; or we intentionally write tests to depend only on the important/semantic bits (omitting whitespace from regex matchers on results, for example).

CRuby¶

The Ruby (a.k.a., CRuby, MRI, YARV) implementation is a C extension that depends on libxml2 and libxslt (which in turn depend on zlib and possibly libiconv).

These dependencies are met by default by Nokogiri's packaged versions of the libxml2 and libxslt source code, but a configuration option --use-system-libraries is provided to allow specification of alternative library locations. See Installing Nokogiri for full documentation.
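For example, one way to pass that option through gem install is shown below; when you use it, locating working system copies of libxml2 and libxslt is your responsibility:

```
gem install nokogiri -- --use-system-libraries
```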


We provide native gems by pre-compiling libxml2 and libxslt (and potentially zlib and libiconv) and packaging them into the gem file. In this case, no compilation is necessary at installation time, which leads to faster and more reliable installation.

See LICENSE-DEPENDENCIES.md for more information on which dependencies are provided in which native and source gems.

JRuby¶

The Java (a.k.a. JRuby) implementation is a Java extension that depends primarily on Xerces and NekoHTML for parsing, though additional dependencies are on isorelax, nekodtd, jing, serializer, xalan-j, and xml-apis.

These dependencies are provided by pre-compiled jar files packaged in the java platform gem.

See LICENSE-DEPENDENCIES.md for more information on which dependencies are provided in which native and source gems.

Contributing¶

See CONTRIBUTING.md for an intro guide to developing Nokogiri.

Code of Conduct¶

We've adopted the Contributor Covenant code of conduct, which you can read in full in CODE_OF_CONDUCT.md.

License¶

This project is licensed under the terms of the MIT license.

The full license text is available in LICENSE.md.



Some additional libraries may be distributed with your version of Nokogiri. Please see LICENSE-DEPENDENCIES.md for a discussion of the variations as well as the licenses thereof.

Authors¶

  • Mike Dalessio
  • Aaron Patterson
  • Yoko Harada
  • Akinori MUSHA
  • John Shahid
  • Karol Bucek
  • Sam Ruby
  • Craig Barnes
  • Stephen Checkoway
  • Lars Kanis
  • Sergio Arbeo
  • Timothy Elliott
  • Nobuyoshi Nakada

Nokogiri Web Scraping Tutorial

Scraping websites can help you get valuable data, but it is often not easy. You will most likely run into challenges such as:

  • Creating requests: you will need to learn how to code and use a library to create the HTTP requests that browsers make behind the scenes.
  • Setting the correct request headers: if you don't set headers such as the language and encoding, a server may return a 403 error instead of the HTML that you want.
  • Throttling: a website may only allow a certain number of requests in a certain amount of time, to make sure you don't bog down its server.
  • Getting your IP banned: sometimes a website will try to prevent you from crawling and ban your IP so you can't make requests.

We are going to show you how DataHen can handle all these difficult parts of scraping and make it easy for you to get the data you want.

If you prefer to skip this tutorial, you can clone this script directly here.

For this tutorial we are going to show you how to use DataHen to easily scrape information about television products from two different categories on Amazon: "LED & LCD TVs" and "OLED TVs." Specifically, we are going to scrape the following Amazon television data: name, price, ASIN, seller, category, rating, number of reviews, product availability, and description.

We are going to assume you have Ruby 2.5.3 and the Nokogiri gem installed. If not, see the official Ruby installation instructions first. Once Ruby is installed, make sure RubyGems is also installed, and then run the following to install Nokogiri:
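Assuming RubyGems is available on your PATH, the install command is:

```
gem install nokogiri
```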

First let’s set up a new DataHen scraper. Install the DataHen Ruby gem with the following command:
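The DataHen CLI ships as a Ruby gem; assuming the gem is named datahen (check DataHen's own documentation for the current name), the command would be:

```
gem install datahen
```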

You should see something similar to the following output after running this command:

Now that we have the DataHen gem installed we need to create our DataHen environment variable token. This will make it so our token is sent with every DataHen request. Run the following command:
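Assuming the token is read from an environment variable named DATAHEN_TOKEN (both the variable name and the placeholder value are assumptions; check DataHen's docs for the exact name), the command looks like:

```shell
# Replace your_token_here with the token from your DataHen account page.
export DATAHEN_TOKEN=your_token_here
```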

We are now ready to create a scraper. Let’s create an empty directory first, and name it ‘amazon-tvs’:
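A plain mkdir does the job:

```shell
mkdir amazon-tvs
```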


Next let’s go into the directory and initialize it as a Git repository:

Now that our setup is finished, let's move on to creating the seeders in Part II.