Fix max token length for fake models
## What does this merge request do and why?
A recent change introduced an enforced MAX_MODEL_LEN setting, which broke the USE_FAKE_MODELS setting: the value defaults to something very small (1), making it impossible to send a normal prompt.

This MR raises the default max length to 2048 on the base class instead, which should be generous enough to work with a variety of prompts.
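As a rough illustration of the approach (class and method names here are hypothetical, not the actual diff), the idea is that the base model class carries a generous default maximum length, so fake models used for testing inherit a workable limit instead of an unusably small one:

```python
# Illustrative sketch only; BaseModel, FakeModel, and validate_prompt
# are hypothetical names standing in for the real classes in the MR.

DEFAULT_MAX_MODEL_LEN = 2048  # generous default set on the base class


class BaseModel:
    # Subclasses may override this; fake models simply inherit it.
    max_model_len = DEFAULT_MAX_MODEL_LEN

    def validate_prompt(self, prompt: str) -> None:
        # Enforce the configured maximum. Token counting is simplified
        # to whitespace splitting for this sketch.
        tokens = prompt.split()
        if len(tokens) > self.max_model_len:
            raise ValueError(
                f"prompt has {len(tokens)} tokens, "
                f"exceeding max_model_len={self.max_model_len}"
            )


class FakeModel(BaseModel):
    """Stand-in model used when USE_FAKE_MODELS is enabled."""

    def generate(self, prompt: str) -> str:
        self.validate_prompt(prompt)
        return "fake response"


model = FakeModel()
# With a 2048-token default, a normal prompt is no longer rejected.
print(model.generate("a normal-sized prompt now fits"))
```

With the old default of 1, the same `validate_prompt` check would reject any prompt longer than a single token, which is why fake models were unusable.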
## How to set up and validate locally
Numbered steps to set up and validate the change are strongly suggested.
## Merge request checklist
- [ ] Tests added for new functionality. If not, please raise an issue to follow up.
- [ ] Documentation added/updated, if needed.
Edited by Matthias Käppler