Fix LLM AI client not returning an HTTP response with 204 responses
What does this MR do and why?
Previously, `Llm::AiGateway::Client#request` returned `nil` if the AI Gateway returned a 204 No Content response. This made it impossible to discern whether the request was successful or whether the server returned a 5xx error.

This happened because `run_retry_with_exponential_backoff` returned `nil` if the response body was blank. To fix this, return the HTTParty response even in this case.
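A minimal sketch of the change, assuming a simplified shape for the retry helper (only the method name, the blank-body behavior, and the HTTParty response come from this MR; everything else is illustrative):

```ruby
# Hypothetical, simplified shape of the helper in the client.
def run_retry_with_exponential_backoff(&block)
  response = retry_with_exponential_backoff(&block)

  # Before the fix (illustrative): a blank body, as returned with
  # 204 No Content, produced nil, so callers could not tell a
  # successful empty response from a failed request:
  #
  #   return if response.body.blank?
  #
  # After the fix: return the HTTParty response unconditionally,
  # so callers can inspect response.code (204 vs. a 5xx error).
  response
end
```

With the response always returned, callers can branch on `response.code` or `response.success?` instead of checking for `nil`.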
References
None. I discovered this in development.
MR acceptance checklist
Please evaluate this MR against the MR acceptance checklist. It helps you analyze changes to reduce risks in quality, performance, reliability, security, and maintainability.
Merge request reports
Activity
assigned to @stanhu
added pipeline::tier-1 label
changed milestone to %17.6
2 Warnings

Potentially Non-Compliant AI Logging Detected

This merge request contains AI logging that may not comply with GitLab's AI data usage policies. Please ensure proper warnings are included and review the AI logging documentation. To resolve this:

- Ensure you're not logging sensitive or personal information.
- Consider whether the logging should be gated behind the `expanded_ai_logging` feature flag; this means using the `log_conditional_info` method (see the sketch below).

For more information, see: https://docs.gitlab.com/ee/user/gitlab_duo/data_usage.html
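For illustration, a hedged sketch of that gating (this MR doesn't show the signature of `log_conditional_info`; the logger constructor and the arguments below are assumptions):

```ruby
# Hypothetical sketch: log_conditional_info is assumed to emit the
# payload only when the expanded_ai_logging feature flag is enabled
# for the user, so AI inputs/outputs aren't logged by default.
logger = Gitlab::Llm::Logger.build

logger.log_conditional_info(
  user,
  message: 'Received response from Vertex AI' # assumed payload
)
```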
The following files contain potentially non-compliant AI logging:

- ee/lib/gitlab/llm/vertex_ai/client.rb

This MR changes code in ee/, but its Changelog commit is missing the `EE: true` trailer. Consider adding it to your Changelog commits (an example follows below).
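A Changelog commit with the trailers might look like this (`Changelog: fixed` and `EE: true` follow GitLab's commit-trailer convention; the body text is illustrative):

```
Fix LLM AI client not returning a HTTP response with 204 responses

Return the HTTParty response from run_retry_with_exponential_backoff
even when the response body is blank, so callers can distinguish a
204 No Content from a 5xx error.

Changelog: fixed
EE: true
```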
Reviewer roulette
| Category | Reviewer | Maintainer |
| -------- | -------- | ---------- |
| backend | @micthomas (UTC-5, 3 hours ahead of author) | @alejandro (UTC-5, 3 hours ahead of author) |
Please refer to the documentation page for guidance on how you can benefit from the Reviewer Roulette, or use the GitLab Review Workload Dashboard to find other available reviewers.
If needed, you can retry the danger-review job that generated this comment.

Generated by Danger

Edited by Ghost User

added bug::functional label and removed backend label
requested review from @mksionek
- Resolved by Gosia Ksionek
added 1 commit
- 13a83af3 - Fix LLM AI client not returning a HTTP response with 204 responses
A deleted user added backend label
removed review request for @mksionek
added 1 commit
- 44029c92 - Fix LLM AI client not returning a HTTP response with 204 responses
added 1 commit
- 1fb48fb0 - Fix LLM AI client not returning a HTTP response with 204 responses
- Resolved by Gosia Ksionek
- Resolved by Gosia Ksionek
- Resolved by Gosia Ksionek
requested review from @mksionek, @dbiryukov, and @maddievn