Pass token limitation info to frontend (Explain code)
What does this MR do and why?
Currently, the token limit values (totalModelTokenLimit, maxResponseTokens) for the Explain Code feature are duplicated on both the frontend and the backend.
In this MR we pass the token limit values down from the backend to the frontend, so that the backend becomes the single place where these values are maintained.
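The change above can be sketched roughly as follows. This is a hedged illustration, not the actual GitLab code: the helper name (`explain_code_app_data`), the constant names, and the concrete limit values are all assumptions for the sake of the example.

```ruby
# Hypothetical backend-side sketch: expose the token limits as data the
# frontend can consume, instead of hardcoding the same constants in JS.
# The values 4096 / 1000 are illustrative, not the real limits.
TOTAL_MODEL_TOKEN_LIMIT = 4096
MAX_RESPONSE_TOKENS = 1000

# Returns the payload the frontend app would receive (e.g. via data attributes).
def explain_code_app_data
  {
    total_model_token_limit: TOTAL_MODEL_TOKEN_LIMIT,
    max_response_tokens: MAX_RESPONSE_TOKENS
  }
end
```

With this shape, the frontend reads the limits from the payload rather than maintaining its own copy of the constants.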
Screenshots or screen recordings
(No visual changes are expected)
How to set up and validate locally
- Enable the following feature flags:
  - explain_code
  - explain_code_snippet
  - openai_experimentation
  - ai_experimentation_api
  - explain_code_chat
- Add an OpenAI API key by following the steps in !116364 (merged)
- Verify that the Explain Code chat works as it did before.
MR acceptance checklist
This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.
- I have evaluated the MR acceptance checklist for this MR.
Related to #408673 (closed)
Edited by Jacques Erasmus