...

Web Scraping Google Without Getting Blocked

Google holds information that has become the backbone of many businesses, whether directly or indirectly: almost every business depends on Google in some way. Access to this enormous amount of data can give a company leverage over its competitors in a particular field and an upper hand in its market.

But who collects this data? Web scrapers do. Web scrapers extract data from websites, usually in HTML format. So, whether you are building a Google scraper for a large company or for your own project, follow these ten tips to avoid getting blocked while scraping Google. The same tips apply when extracting data from other websites.

IP Rotation

Using the same IP address for every request is one of the easiest ways to get caught by Google’s anti-scraping mechanism. You can solve this problem by rotating your IPs, i.e., using a different IP for each request. If you want to scrape millions of Google pages, you need a large pool of proxies to rotate through on every request. Instead of buying proxies, you can use one of the various Google Search APIs on the market that run on proxy clusters of millions of IPs, which helps keep your own IP from being blocked by Google.

Google uses an advanced anti-bot mechanism to block bots from scraping its pages, so you have to use residential or mobile proxies to avoid being blocked. Data center proxies get blocked far more easily than residential ones, so they are not a reliable long-term option. Serpdog’s Google Search API solves all these problems for developers and lets them scrape millions of Google pages without hindrance.
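For illustration, here is a minimal Node.js sketch of round-robin proxy rotation with axios. The proxy addresses are placeholders you would replace with your own pool or a provider’s endpoints, and the search URL is just an example.

```javascript
// Minimal sketch: rotate through a proxy pool so that no two
// consecutive requests leave from the same IP.
// The proxy addresses below are placeholders, not real proxies.
const axios = require("axios");

const proxies = [
  { host: "203.0.113.10", port: 8080 },
  { host: "203.0.113.11", port: 8080 },
  { host: "203.0.113.12", port: 8080 },
];

let counter = 0;

async function fetchWithRotatingProxy(url) {
  const proxy = proxies[counter++ % proxies.length]; // round-robin pick
  const response = await axios.get(url, { proxy });
  return response.data;
}

fetchWithRotatingProxy("https://www.google.com/search?q=coffee")
  .then((html) => console.log("Fetched", html.length, "bytes"))
  .catch((err) => console.error("Request failed:", err.message));
```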

User Agents

The User-Agent is a request header that identifies the application, operating system, vendor, and version of the requesting client. If your User-Agent does not belong to a major browser, or if it is not set at all, Google may block you or restrict you from seeing its content.

Using the same User-Agent for every request can also get your IP blocked in no time. The solution is pretty simple: before scraping Google, collect a set of legitimate User-Agent strings and rotate through them so that your web crawler looks like a real user. You can also get a list of User-Agents from an NPM library such as fake user agents.

You can check your own User-Agent by searching “What is my User-Agent” on Google or by visiting http://www.whatsmyuseragent.com/.
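Here is a minimal sketch of User-Agent rotation in Node.js with axios. The three UA strings are only examples; in practice you would maintain a larger, regularly refreshed list (for instance from an NPM package).

```javascript
// Minimal sketch: pick a random User-Agent string for each request
// so the traffic does not all present the same browser fingerprint.
const axios = require("axios");

const userAgents = [
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
  "Mozilla/5.0 (X11; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
];

function randomUserAgent() {
  return userAgents[Math.floor(Math.random() * userAgents.length)];
}

async function fetchPage(url) {
  const response = await axios.get(url, {
    headers: { "User-Agent": randomUserAgent() },
  });
  return response.data;
}

fetchPage("https://www.google.com/search?q=coffee")
  .then((html) => console.log("Fetched", html.length, "bytes"))
  .catch((err) => console.error("Request failed:", err.message));
```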

HTTP Header Referrer

The Referer header (note that the actual HTTP header name is spelled with a single “r”) is an HTTP request header that tells a website which site you are coming from. It can be a great header to set if you want to bypass anti-scraping mechanisms.
“Referer”: “https://www.google.com/”
It makes the bot look more like a natural user. If you are scraping Google.com and set the referer to google.com, you are telling the website, “Hey, I just came from visiting google.com,” which portrays your bot as an organic user in front of Google.
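As a small illustration, this is how you might attach the header with axios; the target URL and User-Agent are just example values.

```javascript
// Minimal sketch: send a Referer header (the HTTP spec spells it
// with one "r") alongside the usual User-Agent.
const axios = require("axios");

async function fetchWithReferer(url) {
  const response = await axios.get(url, {
    headers: {
      Referer: "https://www.google.com/",
      "User-Agent":
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    },
  });
  return response.data;
}

fetchWithReferer("https://www.google.com/search?q=coffee")
  .then((html) => console.log("Fetched", html.length, "bytes"))
  .catch((err) => console.error("Request failed:", err.message));
```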

Make Scraping Slower

Google’s anti-bot mechanism can easily detect the speed at which your bot scrapes the website, since bots work at an inhuman pace. Overloading a site with requests might even crash it, which benefits no one.

So, adding random delays, for example making your bot sleep for 2–6 seconds between requests, helps your scraper look like an organic user. Capping the request rate, or keeping requests within a defined limit, also ensures that the website is not overwhelmed by mass requests and that you can keep scraping.

Another way to scrape in an organized manner is to put the extraction on a schedule, but with some variation. For example, if you have been starting your scraper at 1:00 a.m. for a few days, shift the start time by roughly 30 minutes earlier or later, and then make your requests at a steady, balanced pace. This variation can help you get past Google’s anti-bot mechanism.
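Here is a minimal sketch of the random-delay idea in Node.js; the 2–6 second window mirrors the example above, and the URL list is hypothetical.

```javascript
// Minimal sketch: wait a random 2-6 seconds between requests so the
// traffic pattern looks less machine-like.
const axios = require("axios");

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function scrapeAll(urls) {
  const results = [];
  for (const url of urls) {
    const response = await axios.get(url);
    results.push(response.data);
    await sleep(2000 + Math.random() * 4000); // 2000-6000 ms pause
  }
  return results;
}

// Example usage with placeholder URLs.
scrapeAll([
  "https://www.google.com/search?q=coffee",
  "https://www.google.com/search?q=tea",
]).then((pages) => console.log("Scraped", pages.length, "pages"));
```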

Headless Browser

Google displays content based on your User-Agent: if you present a recent browser version, it serves advanced featured snippets. These snippets rely heavily on JavaScript, and a simple HTTP request cannot return content rendered by JS. Google can also inspect web fonts, extensions, and browser cookies to check whether a visitor is an organic user. This is where the headless browser comes into play.

To scrape content rendered by JavaScript, you can use tools like Puppeteer or Selenium, which can run in headless mode and extract the dynamic content of a page. Their main disadvantage is that they are very CPU-intensive and can sometimes crash under load. Alternatively, you can use services like Browsercloud, which run the browser on their servers instead of increasing the load on yours.
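Here is a minimal Puppeteer sketch that loads a page in headless mode and reads the HTML after JavaScript has rendered; the search URL and User-Agent string are just examples.

```javascript
// Minimal sketch: open a headless Chromium, load a search page, and
// read the HTML after client-side rendering has finished.
const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Present a mainstream browser User-Agent.
  await page.setUserAgent(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
  );

  await page.goto("https://www.google.com/search?q=coffee", {
    waitUntil: "networkidle2", // wait until network activity settles
  });

  const html = await page.content(); // HTML after JS has rendered
  console.log("Rendered HTML length:", html.length);

  await browser.close();
})();
```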

Scraping Data From Google Cache

Google also keeps a cached copy of many websites. You can try scraping these cached pages: since your requests go to the cached copy rather than the live site, this can help you avoid getting blocked. Simply put “http://webcache.googleusercontent.com/search?q=cache:” at the beginning and the target Google URL after “cache:”, for example “https://www.google.com/search?q=footabll&gl=us&tbm=isch” becomes “https://webcache.googleusercontent.com/search?q=cache:https://www.google.com/search?q=footabll&gl=us&tbm=isch”.

This is not a complete solution, as it returns only part of the data and keeps you from accessing the live website directly. You should also keep in mind that this technique is only useful for non-sensitive data that does not change frequently.
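A minimal sketch of building and requesting a cache URL in Node.js, using the example URL from above:

```javascript
// Minimal sketch: prefix the target URL with Google's webcache
// endpoint and fetch the cached copy instead of the live page.
const axios = require("axios");

function cacheUrl(targetUrl) {
  return "http://webcache.googleusercontent.com/search?q=cache:" + targetUrl;
}

axios
  .get(cacheUrl("https://www.google.com/search?q=footabll&gl=us&tbm=isch"))
  .then((res) => console.log("Cached page length:", res.data.length))
  .catch((err) => console.error("Request failed:", err.message));
```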

Change Your Scraping Pattern

You should vary the way you scrape, because sticking to a single pattern will eventually get your scraper blocked. Bots are usually designed to crawl data in one fixed pattern, and after a while that pattern stops looking like an organic user because the site’s anti-bot mechanism notices it. Humans, on the other hand, do not perform the same task in exactly the same way over and over; just think about how you browse yourself.

One solution is to mix in random clicks on HTML elements, random scrolling, and other random activity that makes your bot look like an organic user.
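As a rough illustration, this Puppeteer sketch mixes random scrolling and mouse movement into a session; the distances, pauses, and search URL are arbitrary example values.

```javascript
// Minimal sketch: add some randomized scrolling and cursor movement
// so the session's activity pattern is less uniform.
const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto("https://www.google.com/search?q=coffee");

  // Scroll down by a random amount a few times, pausing in between.
  for (let i = 0; i < 3; i++) {
    await page.evaluate(() => {
      window.scrollBy(0, 200 + Math.random() * 400);
    });
    await new Promise((r) => setTimeout(r, 1000 + Math.random() * 2000));
  }

  // Move the mouse to a random position, like a user idly browsing.
  await page.mouse.move(100 + Math.random() * 500, 100 + Math.random() * 300);

  await browser.close();
})();
```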

Avoid Scraping Images

Images are heavy objects, so downloading them consumes extra bandwidth compared to other elements. They are also frequently loaded via JavaScript, which is one of the reasons they load slowly. Scraping images can therefore slow down your entire data extraction process.
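If you use Puppeteer, one common way to skip images entirely is request interception, sketched below; the search URL is just an example.

```javascript
// Minimal sketch: abort image and media requests via request
// interception, saving bandwidth and speeding up page loads.
const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.setRequestInterception(true);
  page.on("request", (request) => {
    // Drop images and media; let everything else through.
    if (["image", "media"].includes(request.resourceType())) {
      request.abort();
    } else {
      request.continue();
    }
  });

  await page.goto("https://www.google.com/search?q=coffee");
  console.log("HTML length without images:", (await page.content()).length);

  await browser.close();
})();
```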

Change in HTML Tags

One of the main things to watch while scraping Google is changes to the HTML tags of the elements you parse. Google keeps changing its markup to improve the user experience and the quality of its search results, which can break your scraper so that it no longer delivers the data you expect.

One solution is to run a test every 24 hours that checks whether your parser still returns the exact data you expect. If the data comes back correctly, all is well; if it comes back empty, have your server email you an alert. Another, often better, option is to run a test against your API and print the extracted results, so you can see the data yourself and figure out which tags are no longer returning the expected values.
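A minimal sketch of such a health check in Node.js with axios and cheerio; the selectors here are placeholders rather than Google’s real markup, and the alert is just a console message where you would plug in your own email notification.

```javascript
// Minimal sketch: fetch one known page every 24 hours, run the same
// CSS selectors the scraper uses, and warn if any come back empty.
const axios = require("axios");
const cheerio = require("cheerio");

const SELECTORS = {
  title: "h3",       // placeholder selector for result titles
  link: "a[href]",   // placeholder selector for result links
};

async function testParser() {
  const { data: html } = await axios.get(
    "https://www.google.com/search?q=coffee",
    { headers: { "User-Agent": "Mozilla/5.0" } }
  );
  const $ = cheerio.load(html);

  for (const [name, selector] of Object.entries(SELECTORS)) {
    if ($(selector).length === 0) {
      // In a real setup, send yourself an email alert here.
      console.error(`Selector "${name}" returned no elements; the markup may have changed.`);
    }
  }
}

testParser();
setInterval(testParser, 24 * 60 * 60 * 1000); // repeat every 24 hours
```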

But maintaining your scraper every day can become heavy work, since you have to keep updating your parser as CSS selectors change. To avoid this, you can use Serpdog’s Google Search API: it maintains the parser for you, you don’t have to dig through complex HTML for tags, and you get ready-made structured JSON data.

Captcha Solving

Captchas are designed to distinguish organic users from automated bots. They present challenges, such as image-selection grids or puzzle tests, that a human can usually pass but a computer finds difficult in most cases. Scraping at a large scale can get your scraper blocked, and your bot will start seeing captchas instead of search results. You can then turn to captcha-solving services, but they are not ideal for scraping because they are slow and expensive. A better solution is to spread out your requests: with a lower, steadier request rate there is less chance of being blocked, and your bot will rarely run into captchas at all.
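One simple way to spread requests out is to back off as soon as Google starts pushing back, for example on an HTTP 429 (too many requests) response. The sketch below shows a basic exponential back-off with axios; the initial delay and retry count are arbitrary example values.

```javascript
// Minimal sketch: retry with exponentially growing pauses when the
// server signals rate limiting, instead of hammering it until
// captchas appear.
const axios = require("axios");

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function fetchWithBackoff(url, maxRetries = 3) {
  let delay = 5000; // start with a 5-second pause
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const { data } = await axios.get(url);
      return data;
    } catch (err) {
      // 429 means "too many requests": wait, then try again more slowly.
      if (err.response && err.response.status === 429 && attempt < maxRetries) {
        await sleep(delay);
        delay *= 2; // exponential back-off
      } else {
        throw err;
      }
    }
  }
}

fetchWithBackoff("https://www.google.com/search?q=coffee")
  .then((html) => console.log("Fetched", html.length, "bytes"))
  .catch((err) => console.error("Gave up:", err.message));
```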

Summary

In this tutorial, we covered ten tips for scraping Google without getting blocked. I hope that after reading this article, you feel comfortable adding some of these techniques to your own scraper. Feel free to message me if I missed something.

Please share it on social media if you like the blog.

Follow me on Twitter. Thanks for reading!

Additional Resources

  1. Web Scraping Google With Node JS — A Complete Guide
  2. Scrape Google Maps Reviews
  3. Web Scraping Google News Results
  4. Scrape Google Organic Search Results
  5. Scrape Google Shopping Results