Decide on a new annotation design for the publishing process
Problem to solve
Annotations included in the Python file are redundant: the user has to write the same content as in the parameter declaration in the code. We must find a better way to save the annotations. The new method should allow:
- Multiple entry points, so that more than one Python script can be executable in a repository. #1101 (closed)
- Easy error detection for the user: currently, mistakes only show up when the experiment runs, which is very inefficient.
- Easy verification for the backend: the backend can read the parameters and store them in a simple way that is also readable for the users.
Intended users
All users.
User experience goal
- A very easy publishing process (from the user experience perspective).
- Time saved by avoiding errors in the parameter creation process.
- Saving the parameters must be reliable but also fast (MLReef must provide a clear way to write the annotations, and that way should also be fast for the user to debug).
Proposal for Technical Solution
Here are some ideas:
- UI-assisted parameter creation before publishing: the parameters are saved within the experiment and sent directly to the backend. This means the parameters are only modifiable through the UI.
- A config JSON file that the user can write by hand or generate with an assistant (it can be the same assistant described above). This file would contain the parameters, and it also gives us the opportunity to store other data about the execution of the experiment; a hypothetical layout is sketched after this list.
- Keep annotations as they are now (using Python decorators), adding a new mlreef library dependency to remove the ugly empty functions we currently use to work around the errors that come with the decorators. We could also change the annotation syntax so that the Python parser ignores it; that way the decorators do not interfere with the user's code. See the decorator sketch after this list.
- Keep both options: annotations in the file (reusing the part of the code that declares the parameters) and a config file. If the user decides to go with annotations in the file, the backend will parse them and create the config file automatically from the contents of the annotations.
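As a rough illustration of the config-file option, here is a minimal sketch of what such a file might contain. Every field name (entry_points, parameters, and so on) is an assumption chosen for illustration, not an agreed schema:

```json
{
  "entry_points": [
    {
      "script": "train.py",
      "parameters": [
        { "name": "images_path", "type": "STR", "required": true },
        { "name": "epochs", "type": "INTEGER", "required": false, "default": 10 }
      ]
    },
    {
      "script": "evaluate.py",
      "parameters": [
        { "name": "model_path", "type": "STR", "required": true }
      ]
    }
  ]
}
```

A file like this could list several entry points (covering the multiple entry-points requirement) and could be validated before publishing, which would also give the user early error detection.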
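To make the decorator option more concrete, here is a minimal sketch, assuming a hypothetical `parameter` decorator and registry; the names and signatures below are assumptions for illustration, not the current mlreef library API. The idea is that one annotation could feed both the metadata the backend reads and the script's own CLI, removing the redundancy described in the problem statement:

```python
# Hypothetical sketch only: this inline "parameter" decorator stands in for a
# possible mlreef library helper; it is not the real API.
import argparse

_REGISTRY = []  # parameter annotations collected for the backend to read


def parameter(name, param_type, required=True, default=None):
    """Register a parameter annotation and return the function unchanged."""
    def decorator(func):
        _REGISTRY.append({
            "name": name,
            "type": param_type,
            "required": required,
            "default": default,
        })
        return func
    return decorator


@parameter("epochs", "INTEGER", required=False, default=10)
@parameter("images_path", "STR")
def train(images_path, epochs=10):
    print(f"training on {images_path} for {epochs} epochs")


if __name__ == "__main__":
    # The same registry builds the CLI, so the user does not have to repeat
    # the declaration in argparse (the redundancy described above).
    cli = argparse.ArgumentParser()
    for p in _REGISTRY:
        cli.add_argument(f"--{p['name']}", required=p["required"], default=p["default"])
    args = cli.parse_args()
    train(args.images_path, int(args.epochs))
```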
Note: we need to remove the mandatory parameters input-path and output-path. Users may not want to add any parameter at all, and we are forcing them to add arguments that were not present in their original code, which makes it harder to adapt the code for publishing.
I think we should instead have a base directory and an output directory that belong to the experiment, not to the code itself.
Permissions and Security
Documentation
Availability, Testing & Test Cases
What does success look like, and how can we measure that?
Additional Notes
What is the type of buyer?
Is this a cross-stage feature?
Links / references
/cc @si-ge-st