LLM model use transparency

At times, it can be difficult to tell which model generated a response. Since a fallback model can apparently still be triggered, the model that actually responds may differ from the one selected in the settings. During testing, it was hard to distinguish which model actually ran.

Suggestion: Add a tag to the chat preview that shows which model generated a given message, ideally indicating whether a fallback model was triggered (a rough sketch follows the example below).

Example:

  • Successful message: GPT-4o

  • Fallback message: Fallback: GPT-4o-mini
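
A minimal sketch of how this could work, assuming hypothetical field names (`model`, `requestedModel`) that are illustrative only, not the app's actual message schema:

```typescript
// Hypothetical message metadata; field names are assumptions, not the real schema.
interface ChatMessage {
  id: string;
  text: string;
  model: string;          // model that actually generated the response
  requestedModel: string; // model selected in the settings
}

// Badge text for the chat preview: if the generating model differs
// from the requested one, a fallback occurred.
function modelBadge(msg: ChatMessage): string {
  return msg.model === msg.requestedModel
    ? msg.model                   // e.g. "GPT-4o"
    : `Fallback: ${msg.model}`;   // e.g. "Fallback: GPT-4o-mini"
}

// Example usage:
const ok: ChatMessage = { id: "1", text: "…", model: "GPT-4o", requestedModel: "GPT-4o" };
const fb: ChatMessage = { id: "2", text: "…", model: "GPT-4o-mini", requestedModel: "GPT-4o" };
console.log(modelBadge(ok)); // "GPT-4o"
console.log(modelBadge(fb)); // "Fallback: GPT-4o-mini"
```

Comparing the generating model against the requested one (rather than a separate boolean flag) would also make the badge self-explanatory when the fallback chain has more than one step.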