Last tested: Oct 7, 2019
We only rate limit endpoints whose functionality could be abused (deliberately or accidentally) to send spam or to execute a denial-of-service attack on a third party. For example, the scheduled_plan_run_once endpoint is rate limited to 10 calls per second. If your app runs into this limit, the API returns an HTTP status of 429 (Too Many Requests). All you need to do to avoid this is to have your script take a nap between calls to the scheduled_plan_run_once API. A sleep() or delay() of half a second (500 milliseconds) between calls is ample.
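As a sketch of that pacing approach, here's a small, hypothetical Python helper (the helper name and structure are ours, not part of any Looker SDK) that sleeps between successive calls; in a real script, each callable would wrap your scheduled_plan_run_once request:

```python
import time

def run_with_throttle(calls, delay_seconds=0.5):
    """Invoke each zero-argument callable in order, sleeping between
    calls so we stay well under a 10-calls-per-second rate limit.

    `calls` is a list of functions taking no arguments; each one would
    wrap a single API request in a real script."""
    results = []
    for i, call in enumerate(calls):
        if i > 0:
            # Pause before every call after the first one.
            time.sleep(delay_seconds)
        results.append(call())
    return results
```

With the default 500 ms delay, a batch of calls runs at roughly 2 per second, comfortably under the 10-per-second limit.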
There are some peculiarities with rate limiting in a clustered environment. The rate limit may not trip at exactly the prescribed threshold, and enforcement can be a little more lax on a cluster than on a single instance. The rate limit doesn't need to be exact; it's only there to prevent malicious abuse of the system.
Additionally, every node has a hard-coded active thread limit of 200, and it is possible to overwhelm an instance, or even a whole cluster, with more threads than it can handle. This can significantly slow down an instance or even cause it to become unresponsive.
If you want to rate limit API calls beyond the endpoints where Looker has predefined limits (for example, to prevent your users from abusing the API or overwhelming the instance), you can set up a load balancer or rate-limiting proxy on your network to cap API request rates.
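As one hedged illustration of such a proxy, here is a minimal nginx sketch using its standard limit_req directives. The zone name, burst value, and backend address are placeholder assumptions; adapt them to your own network and Looker API host/port:

```
# Hypothetical nginx reverse-proxy sketch: cap each client IP at
# 10 API requests per second, answering 429 when the limit is hit.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 443 ssl;

    location /api/ {
        # Allow short bursts of up to 20 queued requests.
        limit_req zone=api_limit burst=20;
        # Match Looker's behavior by returning 429 rather than the
        # nginx default of 503.
        limit_req_status 429;
        # Placeholder backend; point this at your Looker API host.
        proxy_pass https://your-looker-instance;
    }
}
```

Client scripts then see the same 429 status described above and can back off accordingly.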