Solution validation for mass-integrations
What's this issue all about? (Background and context)
The Ecosystem team is working to solve the problem of Instance/Group owners having to manually manage an integration's settings across 1,000+ projects.
Ecosystem is working to bring project-level integrations up to the Group and Instance levels. This update will allow Instance/Group owners to set up and manage an integration's settings from one place, and those settings will then propagate down to the individual project integrations, ideally keeping the desired integration settings in sync across groups and/or projects.
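For a sense of the manual work this replaces, the sketch below shows roughly what keeping one integration's settings in sync across a group's projects looks like today: list every project, then push the same settings to each one through the project-level API. This is a minimal illustration only, not part of the proposed feature; the GitLab URL, group ID, token, and Slack webhook are placeholders, and endpoint details may vary between GitLab versions.

```python
# Minimal sketch of the current manual approach (placeholders: GitLab URL,
# token, group ID, webhook). Applying one integration's settings across a
# group today means touching every project individually via the
# project-level Services API.
import requests

GITLAB = "https://gitlab.example.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<access-token>"}

def group_projects(group_id):
    """Yield every project in the group, following pagination."""
    page = 1
    while True:
        resp = requests.get(
            f"{GITLAB}/groups/{group_id}/projects",
            headers=HEADERS,
            params={"page": page, "per_page": 100, "include_subgroups": True},
        )
        resp.raise_for_status()
        projects = resp.json()
        if not projects:
            return
        yield from projects
        page += 1

# Push the same Slack settings to each project, one API call at a time.
for project in group_projects(42):
    requests.put(
        f"{GITLAB}/projects/{project['id']}/services/slack",
        headers=HEADERS,
        data={"webhook": "https://hooks.slack.com/services/XXX", "channel": "#ci"},
    ).raise_for_status()
```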
Relevant links:
- Design related issue: gitlab#208008 (closed)
- Feature epic: &2137
Currently, a feature called Service Templates aims to solve this problem but has the following limitations:
- Only works on self-managed instances (no Group level)
- Does not support real-time management of settings
What are the overarching goals for the research?
- To understand how users manage their existing project level integrations.
- To validate the design decisions that have been made for the MVC-1 and ideal state of the mass-integration feature.
What hypotheses and/or assumptions do you have?
- Based on comments like this one (&2137 (comment 283004588)), this feature has been requested for some time.
- Every organization will most likely use the new Instance/Group level integrations in different ways depending on their size, goals, and infrastructure setup.
What research questions are you trying to answer?
These questions are not meant to be blockers but will improve our understanding of our Instance/Group owners' needs and raise the confidence level of our designs.
- Persona validation: Understand who in the organization:
- Sets up integrations on a mass level
- Once an integration is set up, is there someone else who manages that integration (or integrations)?
- How does an Instance/Group owner communicate integration settings changes to a project owner?
- User story validation: We have a list of user stories within the epic; this will be a chance to validate those stories and understand their priorities. We may also learn about new user stories.
- How are users currently using Service Templates?
- Will they understand the new way of managing integrations?
- Are there any potential friction points that will make them hesitant to upgrade to the new way of managing integrations?
- How do they deactivate Service Templates and why?
What persona, persona segment, or customer type experiences the problem most acutely?
Specifically, those who are currently leveraging Service Templates and/or those managing a large number of projects with integrations.
What business decisions will be made based on this information?
- This feature is already in development, and this research is focused on improving our current design proposal.
What, if any, relevant prior research already exists?
Nothing relevant to Service Templates, but we found the following issues related to general integrations and the Jira integration:
- System Usability Scale (SUS) Survey - Q3 2019 - Optimizing GitLab - "Integrations" Responses
- SUS Survey - Q1 FY2021 - Optimizing GitLab - Integrations
Who will be leading the research?
What timescales do you have in mind for the research?
13.1
Relevant links (script, prototype, notes, etc.)
- Product Designer: Create a prototype. (Prototype A, Prototype B, Prototype C)
- Product Designer: Create the screening survey in Qualtrics.
- Product Designer: Open a Recruiting request issue. Assign it to the relevant Research Coordinator. (#877 (closed))
- Product Designer: Draft the usability testing script.
- UX Researcher: Review the usability testing script and provide feedback.
- Product Designer: Invite the UX Research calendar and any other interested parties to the usability testing sessions.
- Product Designer: Conduct one usability testing session. Amend the script if necessary.
- Product Designer: Conduct the remaining usability testing sessions.
- Product Designer: Open an Incentives request. Assign it to the relevant Research Coordinator.
- Research Coordinator: Pay users.
- Product Manager and Product Designer: Synthesize the data and identify trends, resulting in findings.
- UX Researcher: Review findings and provide feedback, if needed.
- Product Designer: Document insights in Dovetail.
- UX Researcher: Sense check the documented findings.
- Product Designer: Update the Solution validation research issue. Link to Dovetail findings. Unmark as confidential if applicable. Close issue.
Post research
Methodology
We spoke with a total of 7 participants over the course of 2 weeks.
One of the 7 participants was a Backend Engineering Manager at GitLab.
Company size:
- 2 participants = 11 - 100 people
- 1 participant = 101 - 500 people
- 2 participants = 501 - 1000 people
- 1 participant = 1001 - 10,000 people
GitLab package: 5 out of 7 participants were on a self-hosted instance; 2 participants did not know what package their organization was using.
Number of projects in their largest group:
- 1 participant = 1 - 10 projects
- 2 participants = 50 - 100 projects
- 3 participants = 10 - 20 projects
All conversations took place over Zoom, and participants were asked to share their screens while walking through the different prototypes.
Prototypes that were tested:
Note: After the first 3 sessions, we iterated on the script and introduced Prototype B (v2), which includes a tabbed layout; Prototype A was no longer shown, since it wasn't adding any new learnings.
Learnings
The following insights emerged from this study:
- Contextual microcopy and correct nomenclature can make or break a feature
- The table showing the projects with overriding settings was preferred in a separate view
- Mass-integration type functionality could be useful in other areas of GitLab
- Surface "who" over "when"
- Modals should be used as a confirmation, not as a place to make potentially irreversible decisions

See the Insights view for more details.
Areas to focus on
Based on the number of highlights per tag found in the Charts view, the top 3 areas to focus on are:
- Unclear copy - Participants either didn't understand what a feature did or didn't know what would happen next in a flow, due to missing microcopy or unclear terminology
- UI feedback - Feedback specific to the flow and the components being used
- Feature request - Once participants understood the purpose of a given feature, this was feedback on how they thought it could be improved