View the original community article here
Last tested: Jun 24, 2020
There could be a few reasons for this. First off, table calcs operate over the entire result set. So say one of the calcs makes a list out of a column: that's a 5,000-entry list in the simplest case, or 50,000 in the 20+ minute case. If you then try to take the max of that, the calc is evaluated once per row, so every row rebuilds and scans the full list and the work grows quickly with the row count. A table calc doesn't have to look crazy to be gnarly in terms of processing time.
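To make the cost concrete, here's a minimal Python sketch of the pattern described above. This is purely illustrative and not Looker's actual engine: it just models a calc like "max of a list built from a column" being re-evaluated for every row, which is why the cost blows up on large result sets.

```python
import time

def table_calc_per_row(column):
    """Illustrative only: mimic a table calc that is evaluated once per row,
    where each evaluation rebuilds the full column as a list and takes its max."""
    results = []
    for _ in column:                  # one evaluation per row in the result set
        full_list = list(column)      # materialize the whole column again
        results.append(max(full_list))  # scan the whole list again
    return results

# Rough timing: the per-row rebuild makes total work grow with (rows x rows)
for n in (1_000, 5_000):
    col = list(range(n))
    start = time.perf_counter()
    table_calc_per_row(col)
    print(f"{n} rows took {time.perf_counter() - start:.3f}s")
```

Going from 1,000 to 5,000 rows multiplies the runtime far more than 5x, which matches the observation that a 50,000-row download can take 20+ minutes while a small one feels instant.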
Second, there is also a global limit of 3 table calcs at a time that one Looker instance (or node in a cluster) will process on the backend. So very long intervals might include time spent queued, waiting for one of those table calc worker threads to free up.
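The queuing effect can be sketched with a toy worker-pool model. Again this is an assumption-laden illustration, not Looker internals: a semaphore stands in for the 3-slot backend limit, and extra "queries" simply wait for a slot.

```python
import threading
import time

TABLE_CALC_SLOTS = threading.Semaphore(3)  # stand-in for the global limit of 3

def run_table_calc(query_id, work_seconds, wait_times):
    """Record how long each 'query' sits queued before a slot frees up."""
    queued_at = time.perf_counter()
    with TABLE_CALC_SLOTS:                 # queries 4+ block here
        wait_times[query_id] = time.perf_counter() - queued_at
        time.sleep(work_seconds)           # stand-in for the calculation itself

wait_times = {}
threads = [threading.Thread(target=run_table_calc, args=(i, 0.2, wait_times))
           for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

for query_id in sorted(wait_times):
    print(f"query {query_id}: queued {wait_times[query_id]:.2f}s before running")
```

With 6 concurrent queries and 3 slots, roughly half the queries spend about as long queued as the calc itself takes to run, which is why wall-clock times reported by users can far exceed the actual processing time.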
So what next? The less drastic, though less feasible, option is to change your users' querying habits: move table calc logic into the LookML or the database, or be wary of how many queries containing table calcs they are firing off or downloading at once. The more drastic but perhaps more scalable option is clustering, since that global limit of 3 is then multiplied by the number of nodes in the cluster.