SELECT statements executing during database CPU utilization peaks - projects table
After investigating spikes of over 70% CPU usage on the database here:
Link to reports/investigations
we found this statement being executed thousands of times:
Query |
---|
`SELECT "projects".* FROM "projects" WHERE "projects"."id" = $1 LIMIT $2` |

QueryIds: -1204356417117018042, 73367110635711796
This query is being executed between 500 and 1,500 times per second.

Could we create or improve caching for these tables? Could we have a unified cache for this query, refreshing it whenever we execute updates against it?
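A minimal sketch of what such a cache could look like: a read-through cache with write-time invalidation, so repeated lookups of the same project id skip the database and an update drops the stale entry. This is an illustration only, not GitLab's actual caching layer; `ProjectCache` is a hypothetical name, and a plain `Hash` stands in for a real store such as Redis or `Rails.cache`.

```ruby
# Hypothetical read-through cache for project rows. A Hash stands in
# for a shared store (Redis, Memcached, Rails.cache, ...).
class ProjectCache
  def initialize(store = {})
    @store = store
  end

  # Read-through: return the cached row if present; otherwise run the
  # supplied block (e.g. the hot SELECT above) and cache its result.
  def fetch(id)
    @store.fetch(id) { @store[id] = yield }
  end

  # Call after an UPDATE/DELETE on projects so the next read reloads.
  def invalidate(id)
    @store.delete(id)
  end
end

db_hits = 0
cache = ProjectCache.new
loader = lambda do |id|
  db_hits += 1
  { id: id, name: "project-#{id}" } # stands in for SELECT ... WHERE id = $1
end

3.times { cache.fetch(278964) { loader.call(278964) } }
puts db_hits # only the first fetch reaches the "database"

cache.invalidate(278964) # after an update, drop the stale entry
cache.fetch(278964) { loader.call(278964) }
puts db_hits
```

With 500-1,500 identical lookups per second, even a short-lived cache like this would absorb most of the load; the hard part is the invalidation path, which has to run on every write to the cached rows.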
Metrics
Collected from https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/snippets/2047044
Metric | Measurement |
---|---|
Total number of calls | 747,631 (377.44/sec, 1.00/call, 3.52% of total) |
% of Total time | 2.25% |
TPS | |
Duration | 33,175.60 ms (16.749 ms/sec, 0.044 ms/call) |
Edited by Thong Kuah