What is Rate Limiting?
In order to control the use of the Riot Games API, we set limits on how many times endpoints can be accessed within a given time period. These limits are put in place to minimize abuse, to maintain a high level of stability, and to protect the underlying systems that back the API from being overloaded. The underlying systems are the same systems that power League of Legends, so if they are overloaded, player experience suffers, and our first priority is to protect that experience.
Rate Limiting Types
There are three types of limits used in the API infrastructure - app, method and service rate limits.
The first type of limit is enforced on a per API key basis and is called an app rate limit. App rate limits are enforced per region. Every call made to any Riot Games API endpoint in a given region counts against the app rate limit for that key in that region, except where noted on the API Reference page. For example, calls to the static data API do not count against the app rate limit. If you have not already done so, please visit the API Keys page for more information.
The second type of limit is enforced on a per endpoint (or "method") basis for a given API key and is called a method rate limit. Method rate limits are enforced per region. Every call made to any Riot Games API endpoint in a given region counts against the method rate limit for the given method and API key in that region.
The third type of limit is enforced on a per service basis and is called a service rate limit. Service rate limits are also enforced per region. Every call
made to any endpoint for a given Riot Games API service in a given region counts against the service rate limit for that service in that region. When service
rate limits apply, we will document them, including which endpoints are part of the rate limited service.
The limits enforced by the API infrastructure are not the only ones that gate access to the data. Some of the underlying services behind certain endpoints may also implement their own rate limits, independently of the API infrastructure. In these cases, you will still get a 429 error response, but there will be no X-Rate-Limit-Type header included in the response. That header is included only when the rate limiting is enforced by the API edge infrastructure.
HTTP Headers and Response Codes
If a call exceeds the app, method, or service rate limit for a given period of time, then subsequent calls made to limited endpoints will return a 429 "Rate limit exceeded" HTTP response until the rate limit expires.
In addition to the response code, some additional headers will be included in the response that provide more information.
| Header | Value | Notes |
| --- | --- | --- |
| X-Rate-Limit-Type | The rate limit type, either "method", "service", or "application". | Included in any 429 response where the limit was enforced by the API infrastructure. "method" indicates you have exceeded the individual limit for that method. "application" indicates you have exceeded the total rate limit for your application. "service" is returned if the underlying platform service is rate limiting its connections from the Riot API layer, regardless of your API key's application or method rate limits. |
| Retry-After | The remaining number of seconds before the rate limit resets. Applies to application, method, and service rate limits. | Included in any 429 response where the rate limit was enforced by the API infrastructure. Not included in any 429 response where the rate limit was enforced by the underlying service to which the request was proxied. |
| X-App-Rate-Limit | The application rate limits currently being applied to your API key. | Included in the response for all API calls that enforce an application rate limit. See the Application Rate Limit Headers section below for more information. |
| X-Method-Rate-Limit | The method rate limits currently being applied to your API key. | Included in the response for all API calls that enforce a method rate limit. See the Method Rate Limit Headers section below for more information. |
| X-App-Rate-Limit-Count | The number of calls that have been made during each application rate limit window. | Included in the response for all API calls that enforce an application rate limit. See the Application Rate Limit Headers section below for more information. |
| X-Method-Rate-Limit-Count | The number of calls to a specific method that have been made during each rate limit window. | Included in the response for all API calls that enforce a method rate limit. See the Method Rate Limit Headers section below for more information. |
Application Rate Limit Headers
The X-App-Rate-Limit and X-App-Rate-Limit-Count headers will be included in the response for all API calls that enforce an application rate limit. For example, calls to the lol-static-data-v1.2 endpoints will not include these headers in the response because calls to that API are not rate limited.
The X-App-Rate-Limit header contains a comma-separated list of each of the overall rate limits associated with your API key (most keys have two: a 10 second rate limit and a 10 minute rate limit). For each rate limit bucket, the value includes the number of calls your API key is allowed to make during that bucket and the duration of the bucket in seconds, separated by a colon.
The X-App-Rate-Limit-Count header is very similar. The only difference is that for each rate limit bucket, the value includes the number of calls your API key has already made during that bucket, instead of the number of calls it is allowed to make.
For example, let's say you have the following rate limits:
- 100 calls per 1 second
- 1,000 calls per 10 seconds
- 60,000 calls per 10 minutes (600 seconds)
- 360,000 calls per 1 hour (3,600 seconds)
When you make your first call, the X-App-Rate-Limit and X-App-Rate-Limit-Count headers will contain the following.
```
X-App-Rate-Limit: 100:1,1000:10,60000:600,360000:3600
X-App-Rate-Limit-Count: 1:1,1:10,1:600,1:3600
```
If you make a second call (3 seconds later) the X-App-Rate-Limit-Count header will return the following.
```
X-App-Rate-Limit: 100:1,1000:10,60000:600,360000:3600
X-App-Rate-Limit-Count: 1:1,2:10,2:600,2:3600
```
Notice how the first rate limit has reset because its time window is 1 second and how the second call still counts toward the other three rate limits.
Here's another example.
```
X-App-Rate-Limit: 1000:10,60000:600
X-App-Rate-Limit-Count: 450:10,2000:600
```
The X-App-Rate-Limit header in this example shows this API key has a rate limit of 1,000 calls per 10 seconds and 60,000 calls per 600 seconds. The X-App-Rate-Limit-Count header shows that 450 API calls were made within the 10 second rate limit window and 2,000 API calls were made within the 10 minute (600 second) rate limit window.
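The header format above is straightforward to parse. As an illustrative sketch (the function name below is our own invention, not part of any official library), each header value splits on commas into buckets and on colons into a count and a window duration:

```python
def parse_rate_limit_headers(limit_header, count_header):
    """Parse X-App-Rate-Limit / X-App-Rate-Limit-Count header values.

    Each header is a comma-separated list of "value:window_seconds" pairs.
    Returns a list of (calls_made, calls_allowed, window_seconds) tuples,
    one per rate limit bucket.
    """
    def parse(header):
        return [tuple(int(part) for part in bucket.split(":"))
                for bucket in header.split(",")]

    limits = parse(limit_header)   # [(allowed, window), ...]
    counts = parse(count_header)   # [(made, window), ...]
    buckets = []
    for (allowed, window), (made, made_window) in zip(limits, counts):
        # Both headers list buckets in the same order.
        assert window == made_window
        buckets.append((made, allowed, window))
    return buckets

# Using the example headers from above:
parse_rate_limit_headers("1000:10,60000:600", "450:10,2000:600")
# → [(450, 1000, 10), (2000, 60000, 600)]
```

Your application can compare each `calls_made` against its `calls_allowed` to decide whether it is safe to issue another request.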
Method Rate Limit Headers
The X-Method-Rate-Limit and X-Method-Rate-Limit-Count headers are functionally identical to the X-App-Rate-Limit and X-App-Rate-Limit-Count headers, except that instead of reflecting the overall rate limits associated with your API key, they reflect the per endpoint rate limits for your API key. Each endpoint in the API can, and likely will, have a different rate limit. The default rate limit for each method can be found in the table below; these rate limits may be overridden on a per API key basis.
| Method | Default Rate Limit |
| --- | --- |
|  | 500 requests / 10 seconds |
|  | 1,000 requests / 10 seconds |
| All other endpoints | 20,000 requests / 10 seconds |
Respect Rate Limits in Your Code
You will want to make sure you're using your rate limit efficiently, which means designing your code to function under the rate limit (e.g., your code only ever makes a maximum of X calls per Y seconds, depending on your rate limit). There are many ways to achieve this goal, but one mechanism is to create a queue where you add all the calls you need to make and a certain number are executed within a certain time frame. Note that if you use multiple threads that all make calls to the API simultaneously, you will need to design your application so that your API key's rate limits are respected across threads, to avoid blacklisting. If you have questions about how to design or implement your application's rate limiting properly, please hop onto our forums or into our Discord server; there are plenty of friendly folks happy to help out.
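The queue idea above can be sketched as a small sliding-window limiter that every thread acquires before making a request. This is an illustrative example, not an official client; the class name and parameters are our own:

```python
import collections
import threading
import time

class RateLimiter:
    """Thread-safe sliding-window limiter: at most max_calls per period seconds."""

    def __init__(self, max_calls, period):
        self.max_calls = max_calls
        self.period = period
        self.calls = collections.deque()   # timestamps of recent calls
        self.lock = threading.Lock()

    def acquire(self):
        """Block until a call is permitted, then record it."""
        while True:
            with self.lock:
                now = time.monotonic()
                # Drop timestamps that have left the window.
                while self.calls and now - self.calls[0] >= self.period:
                    self.calls.popleft()
                if len(self.calls) < self.max_calls:
                    self.calls.append(now)
                    return
                # Oldest call determines when a slot frees up.
                wait = self.period - (now - self.calls[0])
            time.sleep(wait)

# Share ONE limiter instance across all threads making API calls,
# sized to your key's limits (e.g., 10 calls per second here).
limiter = RateLimiter(max_calls=10, period=1.0)
# limiter.acquire()  # call this before every API request
```

Because the deque and the check happen under one lock, the limit holds even when many threads call `acquire()` concurrently.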
When you do get back a 429 response, you can also use the headers listed in the previous section to ensure that you back off and prevent further rate limit violations. If the X-Rate-Limit-Type is "application", then you have exceeded your application rate limit and should make no further calls to any Riot Games API endpoint for the number of seconds specified by the Retry-After header. If the X-Rate-Limit-Type is "method", then you have exceeded the rate limit for that method and should make no further calls to that method for the number of seconds specified by the Retry-After header. If the X-Rate-Limit-Type is "service", then the service rate limit has been exceeded, and you should make no further calls to the same endpoint in the same region for the number of seconds specified by the Retry-After header.
If the rate limit was enforced by the underlying service to which the request was proxied, the above headers will not be included. In that case, your code cannot use the same mechanism to handle these responses. Instead, your code would simply need to back off for a reasonable amount of time (e.g., 1 second) before retrying the same request.
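Both cases can be handled in one retry loop. The sketch below (our own illustration, not an official client; the `send` callable stands in for whatever HTTP call your application makes) honors Retry-After when the edge infrastructure supplies it, and falls back to a fixed delay when the 429 came from the underlying service:

```python
import time

def request_with_backoff(send, max_attempts=5, fallback_delay=1.0, sleep=time.sleep):
    """Call send() until it returns a non-429 response or attempts run out.

    send: zero-argument callable returning an object with .status_code
          and .headers (e.g., a lambda wrapping your HTTP client's GET).
    """
    for _ in range(max_attempts):
        response = send()
        if response.status_code != 429:
            return response
        if "X-Rate-Limit-Type" in response.headers:
            # Edge-enforced limit: Retry-After says how long to wait.
            delay = float(response.headers.get("Retry-After", fallback_delay))
        else:
            # Underlying-service limit: no headers, back off a fixed amount.
            delay = fallback_delay
        sleep(delay)
    raise RuntimeError("rate limited on every attempt")
```

Passing `sleep` in as a parameter keeps the function easy to test without real waits.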
Tips to Avoid being Rate Limited
One of the worst experiences from the player's perspective is trying to use an awesome application that doesn't work. Regardless of what awesome experience you have created, the player will expect your application to function properly. That's why we recommend taking extreme care when crafting your code. Please note that some features you might want to provide are impossible with a rate limit, especially when it comes to the freshness of results. It is not required that you implement the tips provided below to obtain an approved production key. These tips are intended as best practices to improve your application's efficiency and avoid hitting your rate limit.
In addition to defensive programming, caching most, if not all, of the requests that your application makes will improve its performance. A local cache is especially helpful when many players request the same data over a short period of time (e.g., a pro player's recent games). In general, you should store API responses in your application or on your site if you expect a lot of use. For example, don't call the API every time a page on your website is loaded. Instead, load your page from a locally cached version of the API data. You can keep this cache updated by infrequent calls to the API that store the results in the cache.
While there is more than one solution to the problem of caching, we are frequently asked to explain how it works. We encourage you to seek out and read tutorials or primers on caching before deciding on the solution that works best for you, but we provide an example here of how one could implement a local cache.
One way to build a local cache is for the application to store each response it gets back from an API call in a local data store and assign a time after which this data becomes invalid or expires. For example, if you are storing match history and you want it to be updated frequently, 30 minutes is a reasonable expiry time, since a game lasts about 30 minutes on average. If you are storing information that doesn't change often, such as profile icons and summoner names, you could use a much longer expiry time, such as 24 hours. For information that only changes once per patch, such as game assets and resources returned by the static data API, you could make your expiry even longer.

When your application requests information, it would first check your local cache to see if you have the requested data cached. If it isn't in cache, your application would fetch the data from the API and store it. If it is in cache, then your application would check if the data is still valid (hasn't expired). If the data has expired, your application would make an API call to refresh the information in cache. Note that for some use cases, the application might be better served by having a background thread that refreshes expired data in cache, rather than checking it synchronously when a user makes a request to access data.
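The cache-with-expiry flow described above can be sketched in a few lines. This is an illustrative example only; the class, the `fetch` callable, and the helper function are our own inventions standing in for your data store and your actual API call:

```python
import time

class TTLCache:
    """Minimal local cache where each entry expires after a per-entry TTL."""

    def __init__(self):
        self._store = {}   # key -> (value, expiry_timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]   # expired: treat as a cache miss
            return None
        return value

    def set(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

def get_match_history(cache, summoner_id, fetch):
    """Return cached match history, refreshing from the API when expired.

    fetch is a caller-supplied function that actually calls the API
    (hypothetical here).
    """
    data = cache.get(("matches", summoner_id))
    if data is None:
        data = fetch(summoner_id)
        # 30 minutes, matching the average game length mentioned above.
        cache.set(("matches", summoner_id), data, ttl=30 * 60)
    return data
```

Longer-lived data (summoner names, static assets) would simply use a larger `ttl` in the same structure.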
Prioritize Active Users
If your site keeps track of many players (for example, fetching their recent games or current statistics), consider only requesting data for players who have recently visited or signed into your site, or players that get looked up more frequently by your users.
Adapt to Results
If your application frequently queries for a set of players, you can introspect on the data to determine which players' data changes frequently and which change rarely. If some subset of players haven't had any change in their data for long periods of time, consider querying for their data less often or not at all. By using a back-off you can keep up to date on their data, but not waste cycles requesting data that very rarely changes.
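One simple way to realize this back-off, sketched here as an illustration (the function and its parameters are our own, not an official recommendation): reset a player's refresh interval to a base value whenever their data changes, and double it each time a refresh finds nothing new, up to a cap.

```python
def next_refresh_interval(current_interval, data_changed,
                          base=60.0, max_interval=24 * 3600.0):
    """Adaptive polling: reset to the base interval on a change,
    double the interval on no change, capped at max_interval (seconds)."""
    if data_changed:
        return base
    return min(current_interval * 2, max_interval)

# An active player stays at the base interval; an inactive one
# quickly backs off toward the 24-hour cap, saving rate limit budget.
```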
We ask that you honor the rate limit. If you or your application abuses the rate limits we will blacklist it. If your application is blacklisted, all calls to the API with its API key will return a 403 response code, even if you regenerate the key. Blacklisting is enforced in phases. The first few times your application is blacklisted, it will be blacklisted only temporarily, but for a larger time period on each occurrence. After enough violations, the blacklisting will be permanent. If your application has been blacklisted and you think there has been an error you can submit an application note to address the issue, including the following information:
- Explain why you think your application was blacklisted.
- Describe in detail how you have fixed the problem that you think caused you to be blacklisted.