Data extraction using API scraping and its main challenges

Kristinelpetrosyan
3 min read · Mar 14, 2021


As I move forward in my job search, I have to deal with a variety of projects, and most of them require collecting initial data. I've done a few projects that involve API scraping, and it is a common task.

API scraping comes with many challenges, so we will focus on the major ones first and walk through the fundamental concepts behind them.

Challenges:

1. Rate Limiting

One of the major challenges of API scraping is rate limiting. For just about any API, you will probably run into one of these two types of rate limiting:

DDOS protection

Almost every production API will block your IP address if you start hitting the API with something like 1,000 requests per second. This means your API scraper will be prohibited from accessing the API, potentially indefinitely. This is meant to prevent DDOS attacks, which can disrupt service for other API consumers. Unfortunately, it's quite easy to trigger these protections inadvertently if you're not careful, especially if you are running your scraper across multiple clustered servers.
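One simple safeguard is to throttle your own client so it never approaches those limits. Here is a minimal sketch in Python using the requests library; the one-request-per-second budget is an assumption you should tune to the API you are scraping:

```python
import time
import requests

MIN_INTERVAL = 1.0  # assumed budget: at most one request per second

def fetch_all(urls):
    """Fetch each URL while keeping requests MIN_INTERVAL seconds apart."""
    results = []
    last_request = 0.0
    for url in urls:
        # Sleep just long enough to respect the assumed request budget.
        wait = MIN_INTERVAL - (time.monotonic() - last_request)
        if wait > 0:
            time.sleep(wait)
        last_request = time.monotonic()
        results.append(requests.get(url, timeout=10))
    return results
```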

Standard Rate Limiting and Throttling

Most APIs will limit how many requests you can make within a certain timeframe, keyed either to your IP address or to your API key. These throttling limits may vary across different endpoints within a single service.
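Many APIs also advertise their limits in response headers. The header names below (Retry-After, X-RateLimit-Remaining, X-RateLimit-Reset) are common conventions but not universal, so treat this as a sketch and check your API's documentation:

```python
import time
import requests

def get_with_rate_limit(url):
    """GET a URL, pausing when the server signals we have hit the limit."""
    resp = requests.get(url, timeout=10)
    if resp.status_code == 429:
        # Honor Retry-After if present; otherwise back off for 60 seconds.
        time.sleep(int(resp.headers.get("Retry-After", 60)))
        resp = requests.get(url, timeout=10)
    if resp.headers.get("X-RateLimit-Remaining") == "0":
        reset = resp.headers.get("X-RateLimit-Reset")  # often a Unix timestamp
        if reset:
            time.sleep(max(0, int(reset) - time.time()))
    return resp
```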

2. Error Handling

Some common examples of errors include:

  • Rate limiting: Even if you are careful, a rate-limiting error can still occur. You'll need a strategy to retry the API request later, once the rate limiting has subsided (see the retry sketch after this list).
  • Not Found: Different APIs return “not found” error responses in different ways. Some throw a 404 while others may return a 200 but contain an error message in the API response. Your application might not care if something is not found, but it’s still important to consider that this type of error may happen.
  • Other errors: You may want to report every error that happens without crashing your app.
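Putting those three cases together, here is a hedged retry sketch using requests. The retry count, backoff base, and the "error message inside a 200" payload shape are all assumptions for illustration:

```python
import logging
import time
import requests

log = logging.getLogger("scraper")

def fetch_with_retries(url, max_retries=3, backoff=2.0):
    """GET a URL, retrying on rate limits and reporting other errors."""
    for attempt in range(max_retries + 1):
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException as exc:
            log.error("Request to %s failed: %s", url, exc)  # report, don't crash
            return None
        if resp.status_code == 429:
            # Rate limited: wait with exponential backoff, then retry.
            time.sleep(backoff * (2 ** attempt))
            continue
        if resp.status_code == 404:
            log.info("Not found: %s", url)  # may be fine for the caller
            return None
        if resp.ok and "json" in resp.headers.get("Content-Type", ""):
            body = resp.json()
            # Some APIs return 200 with an error payload (assumed shape).
            if isinstance(body, dict) and body.get("error"):
                log.warning("API-level error for %s: %s", url, body["error"])
                return None
        return resp
    log.error("Gave up on %s after %d retries", url, max_retries)
    return None
```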

3. Pagination

Pagination is a common challenge with very large result sets. Most APIs paginate their responses once a result set grows to hundreds of records/items. Generally, there are two methods of pagination that an API will use (a sketch of each follows the list):

  1. Cursor: Each response includes a pointer to the next batch, usually the ID of the last record or item returned, which you pass back on the next request.
  2. Page number: Standard pagination. You keep passing page numbers sequentially until there are no more results.
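Here is a minimal sketch of both patterns. The parameter and field names (cursor, next_cursor, page, items) are placeholders; real APIs name these differently:

```python
import requests

def fetch_by_cursor(url):
    """Follow cursor-based pagination (field names are assumptions)."""
    items, cursor = [], None
    while True:
        params = {"cursor": cursor} if cursor else {}
        page = requests.get(url, params=params, timeout=10).json()
        items.extend(page["items"])
        cursor = page.get("next_cursor")  # pointer to the next batch
        if not cursor:
            return items

def fetch_by_page_number(url):
    """Pass page numbers sequentially until a page comes back empty."""
    items, page_num = [], 1
    while True:
        page = requests.get(url, params={"page": page_num}, timeout=10).json()
        if not page["items"]:
            return items
        items.extend(page["items"])
        page_num += 1
```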

4. Concurrency

If the result set is quite large, you probably want some sort of concurrency and parallel processing. That is, you will want to make multiple API requests at the same time. However, you don't want to make too many concurrent requests, or you will trigger the rate limiting and DDOS protections described above.
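A bounded thread pool is a simple way to get concurrency while capping how many requests are in flight at once. Here is a sketch using Python's standard library; the worker count of 5 is an assumption meant to stay under typical limits:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import requests

MAX_WORKERS = 5  # assumed: small enough to stay under the API's limits

def fetch_concurrently(urls):
    """Fetch URLs in parallel with a bounded pool of worker threads."""
    results = {}
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        futures = {pool.submit(requests.get, u, timeout=10): u for u in urls}
        for future in as_completed(futures):
            # .result() re-raises any exception from the worker thread.
            results[futures[future]] = future.result()
    return results
```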

5. Logging and Debugging

There is a lot that can go wrong when scraping an API, so you will need an effective logging and debugging strategy. The api-toolkit, for example, includes a progress bar to indicate what's going on at any point during API scraping.
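If you are not using api-toolkit, you can get similar visibility with Python's standard logging module plus a third-party progress bar such as tqdm. A minimal sketch:

```python
import logging
import requests
from tqdm import tqdm  # third-party progress-bar library

logging.basicConfig(
    filename="scrape.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("scraper")

def scrape(urls):
    """Fetch each URL, logging failures and showing overall progress."""
    results = []
    for url in tqdm(urls, desc="Scraping"):
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            results.append(resp.json())
        except requests.RequestException as exc:
            log.error("Failed %s: %s", url, exc)
    log.info("Fetched %d of %d URLs", len(results), len(urls))
    return results
```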

I will describe these in more detail in the next article. I hope this was helpful, and if you have any questions, please contact me @kristinelpetrosyan.

Written by Kristinelpetrosyan

Data Science student @Flatiron-School
