Log LLM stop reason and tokens from the LLM response
## What does this MR do and why?
- This MR adds logging of the stop reason and token counts from the LLM response
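As a rough illustration of the change (not the MR's actual implementation), the sketch below extracts the stop reason and token usage from an Anthropic-style response hash and emits them as a structured log entry. The method name `log_llm_response_metrics` and the response shape are assumptions; the field names mirror the log sample in the Testing section.

```ruby
require 'logger'
require 'json'

# Hypothetical sketch: pull stop reason and token usage out of an
# Anthropic-style response hash and log them as a JSON entry.
def log_llm_response_metrics(logger, response, merge_request_id:)
  metrics = {
    message: 'LLM response metrics',
    merge_request_id: merge_request_id,
    response_id: response['id'],
    stop_reason: response['stop_reason'],
    input_tokens: response.dig('usage', 'input_tokens'),
    output_tokens: response.dig('usage', 'output_tokens')
  }
  # Emit the metrics as a single structured line, as in application_json.log.
  logger.info(metrics.to_json)
  metrics
end
```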
## How to set up and validate locally
Prerequisites:
- Make sure the AI features are enabled locally using this guide
- Assign `GitLabDuo` as a reviewer, either by commenting `/assign_reviewer @GitLabDuo` or by selecting `GitLabDuo` from the reviewers dropdown.
Validate:
- Go to `log/application_json.log` and search for `LLM response metrics`
- Make sure all the required fields are correctly logged
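The validation steps above can be done from the command line. This is a quick sketch, assuming you run it from the GitLab checkout root where `log/application_json.log` lives:

```shell
# Show the most recent "LLM response metrics" entry in the JSON log.
grep 'LLM response metrics' log/application_json.log | tail -n 1
```

From there you can eyeball the line (or pipe it through a JSON pretty-printer) to confirm `stop_reason`, `input_tokens`, and `output_tokens` are present.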
## Testing
- Logged in `log/application_json.log`:
```json
{
  "severity": "INFO",
  "time": "2025-04-25T15:33:40.777Z",
  "correlation_id": "01JSPSA5JFY2618TXZT3D5V490",
  "meta": {
    "caller_id": "Llm::CompletionWorker",
    "feature_category": "ai_abstraction_layer",
    "organization_id": 1,
    "remote_ip": "::1",
    "http_router_rule_action": "classify",
    "http_router_rule_type": "SESSION_PREFIX",
    "user": "root",
    "user_id": 1,
    "client_id": "user/1",
    "root_caller_id": "GraphqlController#execute"
  },
  "message": "LLM response metrics",
  "merge_request_id": 159,
  "response_id": "msg_01KvPgsfswtnfmKfjRnjQiZB",
  "stop_reason": "end_turn",
  "input_tokens": 4087,
  "output_tokens": 442
}
```
## MR acceptance checklist
Evaluate this MR against the MR acceptance checklist. It helps you analyze changes to reduce risks in quality, performance, reliability, security, and maintainability.
Related to #537623 (closed)