What is Rate Limiting?
In order to control the use of the Riot Games API, we set limits on how many times endpoints can be accessed within a given time period. These limits are put in place to minimize abuse, to maintain a high level of stability, and to protect the underlying systems that back the API from being overloaded. The underlying systems are the same systems that power League of Legends, so if they are overloaded, player experience suffers, and our first priority is to protect that experience.
Rate Limiting Types
There are two types of limits used in the API infrastructure. The first type of limit is enforced on a per API key basis and is called a user rate limit. User rate limits are enforced per region. Every call made to any Riot Games API endpoint in a given region counts against the user rate limit for that key in that region, except where noted on the API Reference page. For example, calls to the static data API do not count against the user rate limit. If you have not already done so, please visit the API Keys page for more information.
The second type of limit is enforced on a per service basis and is called a service rate limit. Service rate limits are also enforced per region. Every call made to any endpoint for a given Riot Games API service in a given region counts against the service rate limit for that service in that region. When service rate limits apply, we will document them, including which endpoints are part of the rate limited service.
These two limits enforced by the API infrastructure are not the only gateways to the data provided. Some of the underlying services that back certain endpoints may also implement their own rate limits, independently of the API infrastructure.
HTTP Headers and Response Codes
If a call exceeds the user or service rate limit for a given period of time, then subsequent calls made to limited endpoints will return a 429 "Rate limit exceeded" HTTP response until the rate limit expires.
Along with the response code, several headers are included in the response that provide more information.
| Header | Description | When Included |
| --- | --- | --- |
| X-Rate-Limit-Type | The rate limit type, either "method", "service", or "application". "method" indicates you have exceeded the individual limit for that method. "application" indicates you have exceeded the total rate limit for your application. "service" is returned if the underlying platform service is rate limiting its connections from the Riot API layer, regardless of your own rate limits. | Included in any 429 response. |
| Retry-After | The remaining number of seconds before the rate limit resets. Applies to both user and service rate limits. | Included in any 429 response where the rate limit was enforced by the API infrastructure. Not included in any 429 response where the rate limit was enforced by the underlying service to which the request was proxied. |
| X-App-Rate-Limit-Count | The number of calls that have been made during each application rate limit window. See the X-App-Rate-Limit-Count Header section below for more information. | Included in the response for all API calls that enforce an application rate limit. |
| X-Method-Rate-Limit-Count | The number of calls to a specific method that have been made during each method rate limit window. See the X-Method-Rate-Limit-Count Header section below for more information. | Included in the response for all API calls that enforce a method rate limit. |
This header is included in the response for all API calls that enforce an application rate limit. For example, calls to the lol-static-data-v1.2 endpoints will not include this header because calls to that API are not rate limited. The X-App-Rate-Limit-Count header contains a comma-separated list of each of the overall rate limits associated with your API key (most keys have two: a 10 second rate limit and a 10 minute rate limit). For each of these rate limits we display two pieces of information separated by a colon: the number of calls you've made, followed by the time window of that particular rate limit in seconds.
For example, let's say you have the following rate limits:
- 100 calls per 1 second
- 1,000 calls per 10 seconds
- 60,000 calls per 10 minutes (600 seconds)
- 360,000 calls per 1 hour (3600 seconds)
When you make your first call, the X-App-Rate-Limit-Count header will return the following: `X-App-Rate-Limit-Count: 1:1,1:10,1:600,1:3600`
If you make a second call 3 seconds later, the X-App-Rate-Limit-Count header will return the following: `X-App-Rate-Limit-Count: 1:1,2:10,2:600,2:3600`
Notice how the first rate limit has reset because its time window is 1 second and how the second call still counts toward the other three rate limits.
Here's another example: `X-App-Rate-Limit-Count: 450:10,2000:600`
The X-App-Rate-Limit-Count header in this example shows that 450 API calls were made within the 10 second rate limit window and 2000 API calls were made within the 10 minute (600 second) rate limit window.
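Since the header is just a comma-separated list of colon-separated pairs, parsing it is straightforward. Below is a minimal sketch of a parser; the function name is our own, not part of any official library.

```python
def parse_rate_limit_count(header_value):
    """Parse an X-App-Rate-Limit-Count header such as "450:10,2000:600"
    into a list of (calls_made, window_seconds) tuples."""
    counts = []
    for entry in header_value.split(","):
        calls, window = entry.split(":")
        counts.append((int(calls), int(window)))
    return counts

# The example above: 450 calls in the 10-second window,
# 2000 calls in the 10-minute (600-second) window.
print(parse_rate_limit_count("450:10,2000:600"))  # [(450, 10), (2000, 600)]
```

Your application can compare each parsed count against its known limits to decide whether it is safe to make another call.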
The X-Method-Rate-Limit-Count header is functionally identical to the X-App-Rate-Limit-Count header, except that instead of reflecting the overall rate limits associated with your API key, it reflects the per-endpoint rate limits for your API key. Each endpoint in the API can, and likely will, have a different rate limit. The default rate limit for each method can be found in the table below; these rate limits may be overridden on a per-API-key basis.
| Endpoint | Default Rate Limit |
| --- | --- |
| All Endpoints | 2000 requests / second |
Respect Rate Limits in Your Code
You will want to make sure you're using your rate limit efficiently, which means designing your code to function under the rate limit (e.g., your code will only ever make a maximum of X calls per Y seconds, depending on your rate limit). There are many ways to achieve this goal, but one mechanism is to create a queue where you add all the calls you need to make and a certain number get executed within a certain time frame.
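The queue mechanism described above can be sketched as a sliding-window limiter: before each request, the application waits until fewer than the maximum number of calls fall inside the current window. This is an illustrative sketch, not an official client; the class name and limits are our own.

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Block until a call is allowed under a max_calls-per-window_seconds limit."""

    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.timestamps = deque()  # times of recent calls, oldest first

    def acquire(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_calls:
            # Sleep until the oldest call leaves the window.
            time.sleep(self.window - (now - self.timestamps[0]))
        self.timestamps.append(time.monotonic())

# Example: at most 10 calls every 10 seconds.
limiter = SlidingWindowLimiter(10, 10)
# Call limiter.acquire() immediately before each API request.
```

Calling `acquire()` before every request guarantees the application never exceeds the configured limit, regardless of how bursty its workload is.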
When you do get back a 429 response, you can also use the headers listed in the previous section to ensure that you back off and prevent further rate limit violations. If the X-Rate-Limit-Type is "application", then you have exceeded your user rate limit and should make no further calls to any Riot Games API endpoint in that region for the number of seconds specified by the Retry-After header. If the X-Rate-Limit-Type is "service", then the service rate limit has been exceeded. In this case, you should make no further calls to the same endpoint in the same region for the number of seconds specified by the Retry-After header.
If the rate limit was enforced by the underlying service to which the request was proxied, the above headers will not be included. In that case, your code cannot use the same mechanism to handle these responses. Instead, your code would simply need to back off for a reasonable amount of time (e.g., 1 second) before retrying the same request.
Tips to Avoid being Rate Limited
One of the worst experiences from the player's perspective is trying to use an awesome application that doesn't work. Regardless of what awesome experience you have created, the player will expect your application to function properly. That's why we recommend taking extreme care when crafting your code. Please note that some features you might want to provide are simply not feasible under a rate limit, especially those that demand perfectly fresh results. It is not required that you implement the tips provided below to obtain an approved production key. These tips are intended as best practices to improve your application's efficiency and avoid hitting your rate limit.
In addition to defensive programming, caching most, if not all, of the requests that your application makes will improve its performance. A local cache is especially helpful when many players request the same data over a short period of time (e.g., a pro player's recent games). In general, you should store API responses in your application or on your site if you expect a lot of use. For example, don't call the API every time a page on your website is loaded. Instead, load your page from a locally cached version of the API data. You can keep this cache updated by infrequent calls to the API that store the results in the cache.
While there is more than one solution to the problem of caching, we are frequently asked to explain how it works. We encourage you to seek out and read tutorials or primers on caching before deciding on the solution that works best for you, but we provide an example here of how one could implement a local cache.
One way to build a local cache is for the application to store each response it gets back from an API call in a local data store and assign a time after which this data becomes invalid or expires. For example, if you are storing match history and you want it to be updated frequently, 30 minutes is a reasonable expiry time, since a game lasts about 30 minutes on average. If you are storing information that doesn't change often, such as profile icons and summoner names, you could use a much longer expiry time, such as 24 hours. For information that only changes once per patch, such as game assets and resources returned by the static data API, you could make your expiry even longer. When your application requests information, it would:
- First check the local cache to see if the requested data is cached.
- If it isn't in cache, fetch the data from the API and store it.
- If it is in cache, check whether the data is still valid (hasn't expired).
- If the data has expired, make an API call to refresh the information in cache.
Note that for some use cases, the application might be better served by having a background thread that refreshes expired data in cache, rather than checking it synchronously when a user makes a request to access data.
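The expiry scheme above can be sketched as a small time-to-live cache. This is a minimal illustration, not a production cache; the class name, keys, and TTL values are our own, chosen to mirror the examples in the text.

```python
import time

class TTLCache:
    """Store API responses with a per-entry expiry time."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None          # not cached: caller should fetch from the API
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]
            return None          # expired: caller should refresh from the API
        return value

    def put(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

cache = TTLCache()
# ~30 minutes for frequently-updated match history.
cache.put("match-history:123", {"games": []}, ttl_seconds=30 * 60)
# 24 hours for slow-changing data like summoner names.
cache.put("summoner-name:123", "SomePlayer", ttl_seconds=24 * 60 * 60)
```

A `get()` returning `None` signals "fetch from the API and `put()` the result", which maps directly onto the lookup flow described above.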
Prioritize Active Users
If your site keeps track of many players (for example, fetching their recent games or current statistics), consider only requesting data for players who have recently visited or signed into your site, or players that get looked up more frequently by your users.
Adapt to Results
If your application frequently queries for a set of players, you can introspect on the data to determine which players' data changes frequently and which change rarely. If some subset of players haven't had any change in their data for long periods of time, consider querying for their data less often or not at all. By using a back-off, you can stay up to date on their data without wasting cycles requesting data that very rarely changes.
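One simple form of that back-off is exponential: double a player's polling interval each time their data comes back unchanged, and reset it as soon as something changes. The interval bounds below are assumptions for illustration, not recommended values.

```python
MIN_INTERVAL = 60       # seconds between checks for active players (assumed)
MAX_INTERVAL = 86400    # cap the interval at one day for dormant players

def next_interval(current_interval, data_changed):
    """Exponential back-off: poll less often while a player's data is
    unchanged, and reset to the minimum interval when it changes."""
    if data_changed:
        return MIN_INTERVAL
    return min(current_interval * 2, MAX_INTERVAL)
```

A dormant player's polling interval quickly grows toward the cap, so almost all of your rate limit budget goes to players whose data is actually changing.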
We ask that you honor the rate limit. If you or your application abuses the rate limits, we will blacklist it. If your application is blacklisted, all calls to the API with its API key will return a 403 response code, even if you regenerate the key. Blacklisting is enforced in phases. The first few times your application is blacklisted, it will be blacklisted only temporarily, but for a longer period on each occurrence. After enough violations, the blacklisting becomes permanent. If your application has been blacklisted and you think there has been an error, you can submit an application note to address the issue, including the following information:
- Explain why you think your application was blacklisted.
- Describe in detail how you have fixed the problem that you think caused you to be blacklisted.