Design: UX Vision Wireframes (Iteration 2)

Overview

This issue covers the second iteration of designs towards the Model Registry demo outlined in Comprehensive UX Vision (&13) and MLOps: Model Registry (&12).

This process is meant to be blue-sky and iterative, incorporating the feedback captured in the sections below:

Additional Key Needs/Explorations

  • Structural linking to CI/CD to recognize models within that structure
  • Candidate and/or version comparison
  • Integration with Analyze > Model Experiments
  • Integration with Custom Models

Feedback (from Iteration 1)

  • Programmatic flow
    • Jupyter notebook - you create a version or a model, or train a candidate and pull it in
  • UI users don't necessarily think about models vs. versions
    • If we go to HuggingFace or other registries, the only understanding of a model is the literal file.
    • Versioning happens in the background - it's only visible in Git
  • Creation
    • Within typical workflows, a model isn't created via a registry; it's the other way around
      • A registry comes into play AFTER a data scientist has already been experimenting within an environment (Jupyter) and wants to understand more about the model:
        • What it's used for
        • How to version
        • Where and how and what
        • Deployment
        • Inference strategy
        • Retrain
        • Monitor
  • Previous: Model = epic, version/candidate = child issue. This is not correct
    • New: Model = Repo, Version = Git tag, Candidates = merge request
      • Candidates are independent of versions
    • We have an opportunity to re-define that workflow in a way that makes more sense to our users
    • Reality: The 'Epic' is more related to work within the Jupyter notebook (or similar environment) OR in a pipeline
      • How do we 'pull' a model from a pipeline within GitLab?
        • Would we import it from a project? Are there specific code segments? Is this only doable as the outcome of an experiment? (action-led)
  • Potential new model:
    • A model is purely the model card (mostly informative)
    • Versions are locked, 'final' outcomes of validated and experimented candidates
      • Packaged and done - immutable
    • Candidates are the 'sandbox'
      • Can be deployed, experimented with, trained, etc.
  • Permissions (can remove)
    • These will only be inherited from the group - no object-level permissions (at least for now)
  • Artifacts browser
    • The model card's artifact view can be purely informational (if needed at all)
      • Most likely, all versions will have the same artifacts, with different sizes
        • ex - model.tar will always be model.tar
      • No need for functionality (except maybe download ONLY) on the model card
  • Locking
    • Mechanism for locking model versions
      • When a candidate becomes a version - so changes can't be made to that outcome?
        • EX - governance would have a process that once a candidate is a version, it's no longer editable
  • Restructure model cards
    • Issues imply a list of actions to take - that's not necessarily true for the model card
      • Sections such as activity, performance, and runs (testing and experimentation) should be grouped more intuitively, not necessarily in the way issues/epics are handled
  • Deployment and runs
    • What do these look like? How do they happen? What's the expected behavior here?
      • These should be fleshed out more in this iteration to gain technical insight into how they work
  • More to come!
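The "potential new model" and locking ideas above can be sketched as a minimal data model. This is a hypothetical illustration only, not a proposed implementation; all class names, field names, and the `promote` helper are assumptions made for the sketch:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ModelVersion:
    """A locked, 'final' outcome of a validated candidate.

    frozen=True makes instances immutable, mirroring the idea that once a
    candidate becomes a version, changes can't be made to that outcome
    (analogous to a Git tag).
    """
    tag: str
    artifacts: tuple  # e.g. ("model.tar",) - same names across versions, different sizes

@dataclass
class Candidate:
    """The mutable 'sandbox': can be deployed, experimented with, trained, etc.

    Candidates are independent of versions (analogous to a merge request).
    """
    name: str
    metrics: dict = field(default_factory=dict)

    def promote(self, tag: str) -> "ModelVersion":
        # Governance step: promoting a candidate produces an immutable version.
        return ModelVersion(tag=tag, artifacts=("model.tar",))

@dataclass
class Model:
    """Purely the model card, mostly informative (analogous to a repo)."""
    name: str
    card: str = ""
    versions: list = field(default_factory=list)    # locked outcomes
    candidates: list = field(default_factory=list)  # independent sandbox work

# Usage: a candidate is trained (e.g. in a notebook), then promoted to a version.
model = Model(name="churn-predictor", card="Predicts customer churn.")
candidate = Candidate(name="experiment-42", metrics={"auc": 0.91})
model.candidates.append(candidate)
model.versions.append(candidate.promote("v1.0.0"))
```

The frozen dataclass is one way to express the locking requirement in code: any attempt to edit a promoted version raises an error, so immutability is enforced by the structure itself rather than by process.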

Resources

Edited by Graham Bachelder