fix: Account for different `method`s in Vertex AI usage
The current metric fails for some time windows because multiple `method` values become ambiguous in the binary `/` operation when combined with the `ignoring (method)` clause in our current metric (see gitlab-com/gl-infra/scalability#2515 (comment 1557686088)).
I added the `ignoring (method)` bit in !6146 (merged), and as far as I can recall my logic was: "this label is in the numerator but not in the denominator, so for this to work I'll remove it". That makes the binary operator work as long as there are no metrics for the same `base_model` but different `method`s, which is true in most cases, as we mostly use the API. What I'm assuming happens is that someone makes a prediction request via the UI, and that breaks the metric. Instead of `ignoring`, we should add up the usage from all methods.
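A minimal sketch of the two shapes of the query (metric and label names here are illustrative, not the actual recording rules):

```promql
# Broken: with ignoring (method), several left-hand series
# (one per method) can match the same right-hand series for a
# base_model, and the one-to-one match becomes ambiguous.
sum by (base_model, method) (rate(vertex_ai_tokens_used_total[5m]))
  / ignoring (method)
sum by (base_model) (vertex_ai_token_limit)

# Proposed fix: aggregate the numerator across methods first, so
# both sides share the same label set and no ignoring is needed.
sum by (base_model) (rate(vertex_ai_tokens_used_total[5m]))
  /
sum by (base_model) (vertex_ai_token_limit)
```

Summing the numerator `by (base_model)` folds all `method` values (API, UI, etc.) into one series per model, which is what we want here anyway: total usage per model, regardless of how the request was made.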