API rate limit

This exercise explains what an API rate limit is and how to set one for your API.

Objective

Set a rate limit for your API to prevent spamming or brute force attacks.

Description

For this exercise, use the provided demo data and create a simple GET route handler that returns the list of products, with a rate limit set for the API.

Rate limiting protects your APIs from overuse by limiting how often each user can call the API, guarding against both inadvertent and malicious overuse. Without rate limiting, any user can send requests as often as they like, which can lead to “spikes” of requests that starve other consumers. Once rate limiting is enabled, each user is limited to a fixed number of requests per second.

Rate limiting is especially important for public APIs, where you want to maintain a good quality of service for every consumer even when some users take more than their fair share. You may also want to rate limit APIs that serve sensitive data, because this limits the amount of data exposed if an attacker gains access in some unforeseen event.

There are several rate-limiting algorithms and this article talks in detail about them: https://konghq.com/blog/how-to-design-a-scalable-rate-limiting-algorithm/

How to set up API rate limits?

There are a few different ways to set up API rate limits. One common way is to use a “leaky bucket” algorithm: incoming requests join a queue (the bucket) that holds a fixed maximum number of requests, and the queue drains (“leaks”) at a constant rate. Requests that arrive while the bucket is full are rejected. This approach smooths out bursts of traffic and helps to ensure that API services remain responsive.
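The leaky-bucket idea can be sketched in a few lines. This is an illustrative toy, not a production library: the class name, parameter names, and the idea of passing a timestamp in explicitly (to keep the logic testable) are all my own choices.

```javascript
// Minimal leaky-bucket sketch (illustrative only): the bucket holds
// at most `capacity` queued requests and drains ("leaks") at
// `leakRate` requests per second; arrivals that find the bucket
// full are rejected.
class LeakyBucket {
  constructor(capacity, leakRate, now = Date.now()) {
    this.capacity = capacity;
    this.leakRate = leakRate; // requests drained per second
    this.level = 0;           // requests currently queued in the bucket
    this.lastLeak = now;
  }

  tryAdd(now = Date.now()) {
    // Drain the bucket according to the time elapsed since the last call.
    const elapsedSec = (now - this.lastLeak) / 1000;
    this.level = Math.max(0, this.level - elapsedSec * this.leakRate);
    this.lastLeak = now;
    if (this.level + 1 <= this.capacity) {
      this.level += 1;
      return true;  // request accepted into the queue
    }
    return false;   // bucket full: reject (e.g. with a 429)
  }
}
```

For example, a bucket with capacity 2 draining 1 request per second accepts two simultaneous requests, rejects a third, and accepts another once a second has passed and one request has leaked out.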

Another common way to set up API rate limits is to use a “token bucket” algorithm. Each client has a bucket holding a maximum number of tokens; each request consumes a token, and new tokens are added at a fixed rate. This approach helps to ensure that API services can remain responsive even during sustained periods of high traffic. Whichever approach you choose, API rate limits can help you manage API traffic and keep your API service available and responsive.
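The token-bucket approach can be sketched similarly. Again, this is an illustrative toy rather than any library's implementation; the names and the optional timestamp parameter are assumptions of mine.

```javascript
// Minimal token-bucket sketch (illustrative only): the bucket holds
// up to `capacity` tokens, refilled at `refillRate` tokens per
// second; each request spends one token or is rejected.
class TokenBucket {
  constructor(capacity, refillRate, now = Date.now()) {
    this.capacity = capacity;     // maximum tokens the bucket holds
    this.refillRate = refillRate; // tokens added per second
    this.tokens = capacity;       // start with a full bucket
    this.lastRefill = now;
  }

  tryRemoveToken(now = Date.now()) {
    // Top up the bucket based on how much time has passed.
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillRate);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;  // request allowed
    }
    return false;   // bucket empty: reject (e.g. with a 429)
  }
}

// A bucket of 3 tokens refilling 1 token/second allows 3 quick
// requests, then rejects further ones until tokens trickle back in.
const bucket = new TokenBucket(3, 1);
const results = [1, 2, 3, 4].map(() => bucket.tryRemoveToken());
console.log(results); // first three allowed, fourth rejected
```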

Scenario:

The items API allows external developers to use the items resource to retrieve a list of the menu items on offer. Since this is a public-facing API, it needs a protection layer to guard against malicious attacks or misuse. One such protection layer is a rate limit. The objective of this exercise is to set a rate limit for the items API.

Acceptance criteria

  • Use this data as the list of menu items
  • Create a simple API that returns the list of menu items (GET request)
  • Set the following rate limit for this API:
      • Limit each IP to 11 requests per second.
      • If more than 11 requests are made in a second, the API should return a 429 (Too Many Requests) error.

Hints

  • Use the package express-rate-limit

Solution

What is an API rate limit?

API rate limits are put in place to control the amount of traffic that a server has to handle. They are commonly used to prevent overloading a server with too many requests, which can slow down the system or cause it to crash. API rate limits usually work by limiting the number of requests that can be made within a certain period of time. For example, a rate limit might allow 100 requests per minute; if more than 100 requests are made within that minute, the extra requests are throttled or blocked. Rate limits are often implemented by web applications to protect themselves from denial-of-service attacks, which overload a system by making too many requests.
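The “fixed window” counting described above can be sketched as follows. This is an illustrative toy, not what express-rate-limit does internally; the function and parameter names are mine, and a timestamp can be passed in explicitly to keep the logic testable.

```javascript
// Minimal fixed-window counter sketch (illustrative only): count
// requests per client in the current window and reset the count
// when a new window starts. Allows up to `limit` requests per
// `windowMs` milliseconds.
function makeFixedWindowLimiter(limit, windowMs) {
  const windows = new Map(); // client id -> { windowStart, count }
  return function isAllowed(clientId, now = Date.now()) {
    const w = windows.get(clientId);
    if (!w || now - w.windowStart >= windowMs) {
      // First request of a fresh window.
      windows.set(clientId, { windowStart: now, count: 1 });
      return true;
    }
    if (w.count < limit) {
      w.count += 1;
      return true;
    }
    return false; // over the limit: respond with 429
  };
}

// Allow 100 requests per minute, as in the example above.
const isAllowed = makeFixedWindowLimiter(100, 60 * 1000);
```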

Install and require the express-rate-limit package

npm i express-rate-limit
const rateLimit = require("express-rate-limit");

The code snippet below sets the parameters for our rate limiter and assigns it to a constant, “limiter”, to be used as middleware in our route.

const limiter = rateLimit({
  windowMs: 1000, // 1 second
  max: 11, // limit each IP to 11 requests per windowMs
});

max: Maximum number of connections during windowMs milliseconds before sending a 429 response.

windowMs: Timeframe for which requests are checked/remembered. Also used in the Retry-After header when the limit is reached.
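Beyond windowMs and max, express-rate-limit accepts further options for customizing the 429 response. The snippet below is a configuration sketch using options from the package's documentation; the variable name and the message text are my own choices.

```javascript
// A limiter with some optional express-rate-limit settings
// (a configuration sketch; option names per the package docs):
const limiterWithOptions = rateLimit({
  windowMs: 1000,        // 1 second
  max: 11,               // limit each IP to 11 requests per windowMs
  statusCode: 429,       // status sent when the limit is exceeded (the default)
  message: "Too many requests, please try again later.", // 429 response body
  standardHeaders: true, // send RateLimit-* headers with limit info
  legacyHeaders: false,  // disable the older X-RateLimit-* headers
});
```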

// "limiter" runs as middleware before the route handler;
// "app" is the Express app and "products" is the demo data list.
app.get("/", limiter, (req, res) => {
  res.send(products);
});

The limiter is used as middleware, and when the base route of this Express app is hit, it responds with the list of products.

This ensures that a user from a given IP cannot make more than 11 requests per second, adding a layer of protection against misuse of our public API.
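Putting the pieces together, a minimal runnable sketch of the whole exercise might look like the following. The products array here is a hypothetical placeholder for the exercise's demo data, and the port number is an assumption.

```javascript
const express = require("express");
const rateLimit = require("express-rate-limit");

const app = express();

// Hypothetical placeholder for the exercise's demo data.
const products = [
  { id: 1, name: "Espresso", price: 2.5 },
  { id: 2, name: "Latte", price: 3.5 },
];

// Limit each IP to 11 requests per second; excess requests get a 429.
const limiter = rateLimit({
  windowMs: 1000, // 1 second
  max: 11,
});

app.get("/", limiter, (req, res) => {
  res.send(products);
});

app.listen(3000, () => console.log("Listening on port 3000"));
```

Run it with node, then hit http://localhost:3000/ repeatedly; after the 11th request within a second, the server responds with a 429 until the window resets.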