feat: support gpt-5 for self-hosted model
## What does this merge request do and why?
Self-hosted GitLab instances using AI Gateway encounter 500 errors when attempting to use openai/gpt-5 or openai/o3 models due to LiteLLM compatibility issues with newer OpenAI model requirements.
There are a few issues with the following configuration of the gpt-5 model:
- LiteLLM bug fix for O-series and GPT-5 parameter handling: the OpenAI gpt-5 series does not support the `max_tokens` parameter or temperature values other than `1` #
  - This fix addresses the `max_tokens` vs `max_completion_tokens` parameter issue and the temperature constraints. Fixed in !3800 (merged)
- Updated `chat/react/gpt/1.0.0.yml` to use `temperature: 1.0` for the GPT ReAct Chat agent
  - There was no family for the `model_configuration` check to use the base temperature `0.0` as a parameter, which prevented using the `gpt-5` models.
- Removed the unsupported `stop` parameter that may cause issues with newer models

```python
>>> from litellm import get_supported_openai_params
>>>
>>> def show_gpt5_params(model: str = "gpt-5") -> None:
...     """Display supported OpenAI parameters for GPT-5 models."""
...     params = get_supported_openai_params(model=model)
...
...     print(f"\nSupported parameters for {model}:")
...     print("-" * 60)
...     for param in sorted(params):
...         print(f"  • {param}")
...     print(f"\nTotal: {len(params)} parameters")
...
>>> # Usage
>>> show_gpt5_params("gpt-5")

Supported parameters for gpt-5:
------------------------------------------------------------
  • audio
  • extra_headers
  • function_call
  • functions
  • logit_bias
  • max_completion_tokens
  • max_retries
  • max_tokens
  • modalities
  • n
  • parallel_tool_calls
  • prediction
  • reasoning_effort
  • response_format
  • safety_identifier
  • seed
  • service_tier
  • stream
  • stream_options
  • temperature
  • tool_choice
  • tools
  • user
  • web_search_options

Total: 24 parameters
```
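The three parameter fixes described above can be sketched as a small pre-flight filter. This is a hypothetical helper (`sanitize_gpt5_params` is not the MR's actual implementation), assuming the constraints stated in this MR: rename `max_tokens` to `max_completion_tokens`, pin `temperature` to `1.0`, and drop the unsupported `stop` parameter.

```python
def sanitize_gpt5_params(params: dict) -> dict:
    """Hypothetical sketch of the parameter fixes described above.

    - `max_tokens` is renamed to `max_completion_tokens`
    - `temperature` is pinned to 1.0 (gpt-5 rejects other values)
    - the unsupported `stop` parameter is dropped
    """
    cleaned = dict(params)

    # gpt-5 expects max_completion_tokens instead of max_tokens
    if "max_tokens" in cleaned:
        cleaned["max_completion_tokens"] = cleaned.pop("max_tokens")

    # gpt-5 only accepts the default temperature of 1
    if "temperature" in cleaned:
        cleaned["temperature"] = 1.0

    # stop is not supported by newer models
    cleaned.pop("stop", None)
    return cleaned


if __name__ == "__main__":
    raw = {"max_tokens": 512, "temperature": 0.0, "stop": ["\n"], "stream": True}
    print(sanitize_gpt5_params(raw))
    # → {'temperature': 1.0, 'stream': True, 'max_completion_tokens': 512}
```

A filter like this would run before the request is handed to LiteLLM, so callers can keep passing the older parameter names.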
## How to set up and validate locally
- Set up a self-managed GitLab instance with AI Gateway
- Navigate to `GitLab Duo Self-Hosted` in the Duo admin dashboard
- Set up a self-hosted model pointing at `https://api.openai.com/v1` with model identifier `openai/gpt-5` or `openai/o3`, and include a valid OpenAI API key
- Click `Test connection`
- Change the model identifier to `openai/gpt-4o`. `Test connection` will work successfully
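For a manual check outside the admin UI, the request that `Test connection` performs can be approximated as below. This is a hedged sketch: the exact payload AI Gateway sends is not shown in this MR, and `build_test_payload` is a hypothetical helper; the field names follow the public OpenAI Chat Completions API, which accepts `max_completion_tokens` for newer models.

```python
import json


def build_test_payload(model: str) -> dict:
    """Hypothetical sketch of a minimal connection-test request body.

    gpt-5 / o-series models need `max_completion_tokens` and the default
    temperature; older models such as gpt-4o still accept `max_tokens`.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
    }
    if model.startswith(("gpt-5", "o3")):
        body["max_completion_tokens"] = 16  # max_tokens would be rejected
    else:
        body["max_tokens"] = 16
        body["temperature"] = 0.0
    return body


if __name__ == "__main__":
    # POST either payload to https://api.openai.com/v1/chat/completions
    # with a valid API key to reproduce the success/failure cases above
    print(json.dumps(build_test_payload("gpt-5"), indent=2))
    print(json.dumps(build_test_payload("gpt-4o"), indent=2))
```

Before this MR, the gateway built the gpt-4o-style payload for every model, which is why `openai/gpt-5` and `openai/o3` returned 500 errors while `openai/gpt-4o` succeeded.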
| Model Setup | Request |
|---|---|
| (screenshot not captured) | (screenshot not captured) |
| (screenshot not captured) | (screenshot not captured) |
## Merge request checklist
- [ ] Tests added for new functionality. If not, please raise an issue to follow up.
- [ ] Documentation added/updated, if needed.
- [ ] If this change requires executor implementation: verified that issues/MRs exist for both Go executor and Node executor, or confirmed that changes are backward-compatible and don't break existing executor functionality.
Edited by Nathan Weinshenker



