Web Scraping Explained: Why Proxies Are Needed for Scraping


Web scraping is essentially the process of extracting data from websites. The work of extracting data from a website is carried out by a piece of code called a “scraper”.

According to a report on Upwork, the scraper first sends a “GET” request to a specific website, then parses the HTML document it receives in response.
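The two steps above can be sketched with Python's standard library. The HTML here is a small, made-up product page inlined as a string, so the sketch runs without the network request a real scraper would make:

```python
from html.parser import HTMLParser

# In a real scraper this HTML would come from an HTTP GET request
# (e.g. urllib.request.urlopen); it is inlined here for illustration.
html_doc = """
<html><body>
  <h1>Products</h1>
  <a href="/item/1">Widget</a>
  <a href="/item/2">Gadget</a>
</body></html>
"""

class LinkScraper(HTMLParser):
    """Collects every href found in <a> tags while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

scraper = LinkScraper()
scraper.feed(html_doc)
print(scraper.links)  # -> ['/item/1', '/item/2']
```

Real projects usually reach for libraries like `requests` and `BeautifulSoup` instead, but the GET-then-parse shape stays the same.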

Web scraping and crawling aren’t necessarily illegal by themselves. But the practice is a grey area, to say the least. Depending on who you ask, web scraping is either loved or loathed.

The legality of scraping basically boils down to the purpose for which you use it. You could safely scrape or crawl your own website, for example, without any problems.

The scraped data is extracted and saved to a local file on your computer or to a database in table (spreadsheet) format. Scraped data can include the following:

  • Text
  • Product listings
  • Videos
  • Images
  • Contact information, like phone numbers and email addresses


Legitimate Business Use of Web Scraping


Web scraping is used by many digital businesses that rely on data harvesting. Lawful uses include search engine bots that crawl websites, analyze their content and then rank them.

Market research companies are also known to use scrapers to pull data from social media and forums for a variety of reasons, including for sentiment analysis.

A court also recently ruled in favor of hiQ Labs, a San Francisco-based startup, which scraped publicly available LinkedIn profiles to offer its clients what is touted as “a crystal ball that helps you determine skills gaps or turnover risks months ahead of time.” The judge in this case said that it is legal to scrape publicly available data from LinkedIn, despite the professional social network’s protests that this violated user privacy.

In 2001, another judge ruled in favor of scraping after a travel agency sued a competitor who had “scraped” its prices from its website to help the rival set its own prices. The judge said that while this scraping was not welcomed by the travel agency’s owner, it was not sufficient to make it “unauthorized access” for the purpose of federal hacking laws.

But, it’s also worth noting that in 2009 Facebook won one of the first copyright suits against a web scraper who’d scraped and “made unauthorized copies of the Facebook website.” The judge’s ruling in favor of Facebook means that you can run into trouble when you scrape someone else’s website and disregard their Terms of Service (ToS) as it happened in this case.

Still, the ruling against scraping in the Facebook lawsuit raised many questions of its own, including questions about copyright laws, “fair use” doctrine and user privacy in the tech-driven world we are living in today.



Using Web Scraping in Your Business


If you have ever deployed a web scraper in production, you will have noticed the rate limits imposed by websites. These limits are often enforced by blocking the scraper’s IP address, cutting off its access to the target website’s resources.

Any developer facing these issues has two options: either slow the tool down or distribute its requests across multiple IP addresses.

The first option is rarely viable, since it slows down any project in production. The second option, spreading requests across multiple IPs, is made possible by proxy servers.

Do you need your scraper to run at full speed without any issues?

Here's everything there is to know about proxies for scraping projects. The focus will be on private proxies (proxies maintained and sold by various companies).


Proxies and What They Do in Scraping Projects



Proxies are servers that handle their users’ traffic while acting as intermediaries between those users and the rest of the web. It might sound confusing, but it is actually straightforward.

A proxy server’s sole job is to hide (mask) its user’s IP address and present the server’s own IP address to the websites being accessed. It does this by receiving the user’s traffic and passing it on through the server.

In this way, any website accessed through a proxy will see only the proxy server's IP address.
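As a minimal sketch with Python’s standard library, routing a scraper’s traffic through a proxy looks like this. The proxy address is a placeholder from the TEST-NET range, not a real server:

```python
import urllib.request

# Register a (placeholder) proxy for both HTTP and HTTPS traffic.
proxy = urllib.request.ProxyHandler({
    "http": "http://203.0.113.10:8080",
    "https": "http://203.0.113.10:8080",
})
opener = urllib.request.build_opener(proxy)

# Any request made through `opener` is forwarded by the proxy, so the
# target site sees 203.0.113.10 instead of the scraper's own IP.
# response = opener.open("http://example.com")  # actual network call, omitted here
print(proxy.proxies["http"])  # -> http://203.0.113.10:8080
```

Third-party HTTP clients expose the same idea differently (e.g. a `proxies` argument), but the principle is identical: the client talks to the proxy, and the proxy talks to the website.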


Why You Need Proxies for Scraping


As mentioned above, the main reason to use proxies is to hide your real IP address.

Here are three reasons why proxies are needed for scraping.


i). They mask the IP address of your scraper - this is especially useful when you need to access geo-specific content from another country. For example, to see Amazon's offers and prices for Florida while living in Canada, you would use a private proxy located in Miami. Amazon then sees your requests as originating from Florida, not Canada.

ii). Proxy usage helps you avoid IP bans/blocks - when scraping, you always risk having your IP address blocked because of rate limits. With private proxies, you sidestep this issue by using different IP addresses. For example, by rotating your proxies constantly, every request you send reaches the website through a different proxy IP. You won't have to worry about blocks, because no proxy IP address is used for two consecutive requests.

iii). Bypass limits and restrictions with proxies - certain websites restrict access to users from a particular city or state, while others limit the content displayed to users from a specific area (for example, US publications limiting content for European users because of GDPR). In these cases, proxies are used to get around the limits and restrictions and extract (scrape) the unadulterated data.
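The rotation described in point ii) can be sketched as a simple round-robin over a proxy pool. The addresses below are placeholders, and a real scraper would pass each assigned proxy to its HTTP client:

```python
import itertools

# Hypothetical pool of proxy endpoints (placeholder addresses).
proxy_pool = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]

# cycle() yields the pool endlessly, so each request takes the next
# proxy in round-robin order and consecutive requests never share an IP.
rotation = itertools.cycle(proxy_pool)
assignments = [next(rotation) for _ in range(5)]
print(assignments)  # the 4th request wraps back to the first proxy
```

Commercial rotating-proxy services do this server-side, handing you a single endpoint that swaps the outgoing IP on every request.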



Proxy Types Used for Scraping


There are several types of private proxies:

  1. Datacenter proxies - with servers and IPs residing in large data centers
  2. Residential proxies - with IPs rented from residential users
  3. Mobile proxies - with IP addresses from mobile carriers (Verizon, AT&T, etc.)

Your choice of proxies for scraping should depend on your project requirements.

However, as a rule of thumb, if your scraping process does not require logging in to an account to access web resources, the best proxies for your project are the cheapest ones you can find.

And which option is cheapest depends on how many IPs you need. If you need fewer than 1,000 proxies, datacenter proxies - priced per IP - are the more reasonable choice.

On the other hand, if you need thousands of proxies at once, then residential proxies - with usage-based pricing - are the cheaper option in the long run.
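The break-even logic can be illustrated with back-of-the-envelope arithmetic. The prices below are assumptions made up for illustration only; real pricing varies widely by provider:

```python
# Hypothetical prices, chosen only to illustrate the comparison.
DATACENTER_PRICE_PER_IP = 2.00    # USD per IP per month (assumed)
RESIDENTIAL_PRICE_PER_GB = 12.50  # USD per GB of traffic (assumed)

def monthly_cost(num_ips, traffic_gb):
    """Return (datacenter_cost, residential_cost) for one month of scraping."""
    datacenter = num_ips * DATACENTER_PRICE_PER_IP   # priced per IP
    residential = traffic_gb * RESIDENTIAL_PRICE_PER_GB  # priced per GB used
    return datacenter, residential

# Small pool: per-IP pricing wins; huge pool: usage-based pricing wins.
print(monthly_cost(100, 40))   # -> (200.0, 500.0): datacenter cheaper
print(monthly_cost(5000, 40))  # -> (10000.0, 500.0): residential cheaper
```

The crossover point depends entirely on your IP count and traffic volume, which is why the project requirements should drive the choice.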

So, the bottom line: almost any proxy service will work for a scraping project. Your primary focus should be on pricing and your budget.


Picking the Best Proxies for a Project


With hundreds of proxy services available today and a large number of proxy types (SEO proxies, mobile proxies, residential or rotating ones), it can be challenging to find the most suitable service for a project.

A starting point to look for proxies is resources like BestProxyProviders.com, which has reviewed several proxy services and picked the best ones for different uses.

Chris Roark is addicted to anything tech-related. He writes about innovation, automation, marketing and technology. But here, Chris writes mainly about private proxies.