Train LLM from apps
Description
Relates to #1558 (closed) and #1560 (closed). However, instead of deploying a pre-trained model for a single task, it would be better to let users upload documents and store them persistently in the containers. That way users can train their "own" model with their own data and keep it, instead of sending the documents along with every single request.
To upload documents (.pdf, .txt, .md, etc.), users could for example send them to ai/train together with a name for the model:
```yaml
url: { container: ai/train }
body:
  object.from:
    file: { prop: file }
    modelName: customLLM
```
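On the server side, the training container would need to persist the uploaded documents per model name so they accumulate across requests. A minimal sketch of what such an ai/train handler could look like, assuming a FastAPI-based Python container and a simple per-model directory layout (both are assumptions, not the platform's actual implementation):

```python
# Hypothetical sketch of the ai/train endpoint: accept an uploaded document
# plus a model name and persist the document inside the container, so later
# training/indexing runs can pick it up. FastAPI and the on-disk layout are
# assumptions.
from pathlib import Path

from fastapi import FastAPI, File, Form, UploadFile

app = FastAPI()
STORAGE_ROOT = Path("/data/custom-models")  # assumed persistent volume


@app.post("/ai/train")
async def train(file: UploadFile = File(...), modelName: str = Form(...)):
    # One directory per custom model; documents accumulate across requests.
    model_dir = STORAGE_ROOT / modelName / "documents"
    model_dir.mkdir(parents=True, exist_ok=True)

    target = model_dir / file.filename
    target.write_bytes(await file.read())

    # A real implementation would now index, embed, or fine-tune on the new
    # document; here we only acknowledge the upload.
    return {"model": modelName, "stored": file.filename}
```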
The model can then be used through the already existing request structure by specifying the model name:
```yaml
url: { container: ai/llm }
body:
  object.from:
    input: { prop: message }
    model: customLLM
    .....
```
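Inside the container, the existing ai/llm endpoint would only need to honour the optional model field. A sketch of one possible approach, again assuming FastAPI and the directory layout from above; the naive context injection and the call_base_llm helper are placeholders for whatever the container actually does:

```python
# Sketch: if "model" names a stored custom model, pull in the documents
# uploaded via ai/train as extra context before calling the base LLM.
# The retrieval strategy and call_base_llm are assumptions.
from pathlib import Path

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
STORAGE_ROOT = Path("/data/custom-models")  # same assumed volume as ai/train


class LlmRequest(BaseModel):
    input: str
    model: str | None = None  # e.g. "customLLM"; omit to use the base model


def call_base_llm(prompt: str) -> str:
    """Placeholder for the container's existing LLM call."""
    return f"(base model answer to: {prompt[:80]})"


@app.post("/ai/llm")
def llm(req: LlmRequest):
    prompt = req.input
    if req.model:
        doc_dir = STORAGE_ROOT / req.model / "documents"
        if not doc_dir.is_dir():
            raise HTTPException(status_code=404, detail=f"unknown model {req.model}")
        # Naive context injection: prepend the stored documents to the prompt.
        # A real implementation would use embeddings or fine-tuned weights.
        context = "\n".join(p.read_text(errors="ignore") for p in doc_dir.iterdir())
        prompt = f"{context}\n\n{req.input}"
    return {"model": req.model or "base", "output": call_base_llm(prompt)}
```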
There should also be an endpoint through which users can delete their own models:
```yaml
url: { container: ai/delete }
body:
  model: customLLM
```
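The corresponding handler only has to remove everything stored for that model name. A minimal sketch under the same assumptions (FastAPI, per-model directories):

```python
# Sketch of the proposed ai/delete endpoint: remove all data stored for the
# given model name. FastAPI and the directory layout are assumptions.
import shutil
from pathlib import Path

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
STORAGE_ROOT = Path("/data/custom-models")


class DeleteRequest(BaseModel):
    model: str  # e.g. "customLLM"


@app.post("/ai/delete")
def delete_model(req: DeleteRequest):
    model_dir = STORAGE_ROOT / req.model
    if not model_dir.is_dir():
        raise HTTPException(status_code=404, detail=f"unknown model {req.model}")
    shutil.rmtree(model_dir)  # drops the documents and any derived artefacts
    return {"deleted": req.model}
```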
Editing the data of a custom model gets more complicated when there is no GUI, especially since a separate request would have to be sent for each task. Therefore I am not including it as a required feature.