fix: allow fallback to `base`, with overridden `model_class_provider`
## What does this merge request do and why?
First step towards #1523 (closed)
This change fixes a bug where the AI system would fail when users selected a different AI model than what was originally configured in the `base` prompt template. `base` prompts are set up to use `anthropic` as the provider, but if a user wanted to use a different model (like GPT-5 from OpenAI), the system would try to use the wrong provider and fail. This applies only when there is no `gpt_5`-specific prompt for that feature, so the system falls back to the `base` prompt.
Internally, the root cause is that when falling back to `base`, `model_class_provider: anthropic` continues to be used, while the model params are set to an OpenAI model. So, essentially, the API call is made to `https://api.anthropic.com/v1/messages` (because `model_class_provider: anthropic` means the Anthropic factory is used), but the model identifier is set to, say, `gpt-5-2025-08-07`, which is a non-existent Anthropic model, so it errors out:
```
HTTP Request: POST https://api.anthropic.com/v1/messages?beta=true "HTTP/1.1 404 Not Found" [httpx] correlation_id=01K6CVFYAEB9896JC2JDN1ZAKR gitlab_global_user_id='ui/A6IWVVUv68Ch3UG1bk76xfMiee6Thm9TTcuwlpxs=' workflow_id=74
2025-09-30T08:23:16.073878Z [error ] Error code: 404 - {'type': 'error', 'error': {'type': 'not_found_error', 'message': 'model: gpt-5-2025-08-07'}, 'request_id': 'req_011CTeGCrg9A3KZJfj48AWdC'}
```
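The failure mode above can be sketched in a few lines of Python. All names here are illustrative stand-ins for the real gateway classes, not the actual implementation:

```python
# Minimal sketch of the bug: the client factory is chosen from the
# *prompt's* model_class_provider, while the model identifier comes
# from the *user's* selection, and the two can disagree.

def anthropic_client(model: str):
    # Stand-in for a client that would POST to
    # https://api.anthropic.com/v1/messages
    return ("anthropic", model)

def openai_client(model: str):
    return ("openai", model)

FACTORIES = {"anthropic": anthropic_client, "openai": openai_client}

def build_client(prompt_config: dict, model_params: dict):
    # Bug: provider taken from the prompt template, model from the user.
    factory = FACTORIES[prompt_config["model_class_provider"]]
    return factory(model_params["model"])

# The base prompt declares anthropic, but the user selected an OpenAI model:
provider, model = build_client(
    {"model_class_provider": "anthropic"},
    {"model": "gpt-5-2025-08-07"},
)
# The Anthropic endpoint is called with a non-Anthropic model id -> 404.
```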
The fix allows the system to look up the correct provider for any model from a central configuration file, so when someone selects GPT-5 but a `gpt_5` prompt isn't available for that feature, it automatically knows to use OpenAI's provider regardless of what the original prompt specified. This makes model selection more flexible and prevents errors when users switch between different AI models.
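The lookup described above can be sketched as follows. The registry contents and the `resolve_provider` helper are hypothetical; the real mapping lives in the gateway's model configuration (`models.yml`):

```python
# Sketch of the fix: prefer the provider registered for the selected
# model in a central registry (akin to models.yml); only fall back to
# the provider declared by the prompt template when the model is unknown.

MODELS = {  # illustrative registry entries
    "gpt-5-2025-08-07": {"provider": "openai"},
    "claude-sonnet-4": {"provider": "anthropic"},
}

def resolve_provider(model: str, prompt_provider: str) -> str:
    entry = MODELS.get(model)
    return entry["provider"] if entry else prompt_provider

# GPT-5 resolves to openai even though the base prompt says anthropic:
assert resolve_provider("gpt-5-2025-08-07", "anthropic") == "openai"
```

With this in place, the fallback to the `base` prompt no longer forces the `base` prompt's provider onto a model from a different vendor.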
In this MR, I am specifying `model_class_provider` only for OpenAI models, but this can be extended across all models specified in `models.yml`, after which `model_class_provider` can be cleaned up from all prompt files. This can come in follow-up MRs, where we should:

- assign `model_class_provider` to all models within `models.yml`
- remove `model_class_provider` from all prompt files
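The follow-up direction could look roughly like this in `models.yml` (the keys shown are illustrative; the real schema may differ):

```yaml
# Hypothetical models.yml entries with model_class_provider assigned
# per model, so prompt files no longer need to declare it themselves.
models:
  - name: gpt-5-2025-08-07
    model_class_provider: openai
  - name: claude-sonnet-4
    model_class_provider: anthropic
```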
## How to set up and validate locally
Numbered steps to set up and validate the change are strongly suggested.
## Merge request checklist
- [ ] Tests added for new functionality. If not, please raise an issue to follow up.
- [ ] Documentation added/updated, if needed.
- [ ] If this change requires executor implementation: verified that issues/MRs exist for both Go executor and Node executor or confirmed that changes are backward-compatible and don't break existing executor functionality.