PV: Understanding the needs/goals/workflow of Code Quality Users through External Interviews

What did we learn?

More information may be found in this Google Sheet (GitLab only).

Overarching Goals and Insights

  • Goal: Identify key workflows and processes of our Code Quality users so that we can direct our future efforts towards the most impactful features.
    • Insight: Code quality is often a natural stopping point in the workflow because engineers have to test their commits and often have an MR that is reviewed. However, buy-in for code quality is mixed across organizations, and the general sentiment is that the more automated the tooling and the more verbose the logging, the better. Developers want to spend as little time on code quality as possible, even when they know its value.
  • Goal: Understand the extent to which development teams already use linters or other analysis tools to achieve their code quality goals.
    • Insight: Linters and other code quality tools are common in organizations that are more DevOps mature. Most teams run the tools on every commit, while half of those teams allow some or all of the rules to fail. It is common for code quality tools to be applied at the namespace level: every project has the tool enabled by default, but it can be disabled when needed.
  • Goal: Understand the need, if any, to bring Code Quality reporting out of the project view and into a higher level (groups/namespaces).
    • Insight: There is a strong use case for bringing the reporting of code quality tools to the namespace level. While this was not commonly listed as a pain point or challenge, most participants said that their organization uses the tool at a namespace or at least multi-project level.

What's this issue all about? (Background and context)

The Code Quality feature has existed since at least milestone %13.0, first falling under Verify. The feature was moved to Secure because of similarities to security-focused static code analysis, which is covered by SAST. Code Quality has two components: Scanning and Reporting. As Secure has worked on Code Quality during this time, a number of problems have arisen with similar themes around the lack of flexibility of the feature and the engine's scan time. These problems are represented in the category direction for Code Quality through the planned removal of the Docker-in-Docker requirement, as well as the removal of the entire CodeClimate engine that is currently used for code scanning.

However, while Secure's experience has grown during this time, our understanding of the users' needs has not. UX Research has not been conducted on Code Quality since mid-2021. With the major developments planned in the category direction, now is the most impactful time to evaluate our users' needs, motivations, and primary goals for Code Quality.

We don't want to work off of guesses or mimic competing solutions—we want to make sure we're building the right system to deliver value to customers within the unique context of the GitLab platform.

What are the overarching goals for the research?

  • Identify key workflows and processes of our Code Quality users so that we can direct our future efforts towards the most impactful features.
  • Understand the extent to which development teams already use linters or other analysis tools to achieve their code quality goals.
  • Understand the need, if any, to bring Code Quality reporting out of the project view and into a higher level (groups/namespaces).

What hypotheses and/or assumptions do you have?

  • Development teams already use tools like eslint, pep8, or go lint to achieve basic code quality checking.
  • Teams would benefit from automated processing of the outputs of their existing linters.
  • Organizations have quality goals that require analysis across projects and teams; they may use quality checks as part of a compliance program.
  • Customers want to treat security and quality findings differently; critical security vulnerabilities could have serious risk to a business, while critical quality findings could present more limited risk.
  • Quality and security findings are "owned" by different teams within customer organizations, but development teams are tasked with responding to each.
  • What are the goals of Code Quality users?
    • Reduce tech debt?
    • Make code reviews easier and faster?
    • Avoid security bugs?
    • Force development teams to meet organization-wide quality standards?
    • Prioritize investment in developer education?

What research questions are you trying to answer?

  • Goals
    • What are the goals of Code Quality users?
    • What are the goals related to reporting code quality outputs?
      • Do those goals ever change? (depending on the scope of work/team involved/etc)
  • Needs
    • What do users need from Code Quality to achieve their goal?
    • How can Code Quality change to better fulfill their needs?
    • How often might a user need Code Quality to operate at a group or namespace level as opposed to just a project level?
  • Workflow
    • What are the most important steps within Code Quality that users take when attempting to complete their goal?
    • What are the pain points they experience while attempting to complete their goal?
    • What other tools are they using, and how?
      • Do individuals or teams within their organization use linters or other static analysis tools?

What persona, persona segment, or customer type experiences the problem most acutely?

User Personas:

Buyer personas:

What business decisions will be made based on this information?

This research will inform how we design the next generation of code quality scanning, including product design/packaging and technical architecture. We hope our findings will clarify on which axes we should interact with existing developer tooling, and on which axes we should pursue unique value.

For example:

  • Should we make it easier to process the outputs of language-specific linters or analyzers? (A rough sketch of what this could look like follows this list.)
  • Do our customers already use linters or other tools, as we expect they do?
  • Should we invest in language-independent measures like complexity or maintainability, aside from the types of checks offered by linters or other tools?
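To make the first question above concrete: one way to "process the outputs of language-specific linters" is to translate a linter's native report into the CodeClimate-style JSON that a Code Quality report artifact consumes. The sketch below is purely illustrative, not a design proposal; it assumes ESLint's `--format json` output and the minimal fields the Code Quality report format documents (description, check_name, fingerprint, severity, location), and the script name is hypothetical.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: convert `eslint --format json` output into
CodeClimate-style JSON for a GitLab Code Quality report artifact."""
import hashlib
import json
import sys

# ESLint severities: 1 = warning, 2 = error. This mapping to Code Quality
# severity levels is an assumption, not a settled design decision.
SEVERITY_MAP = {1: "minor", 2: "major"}


def eslint_to_codequality(eslint_results):
    issues = []
    for result in eslint_results:
        for message in result.get("messages", []):
            # A stable fingerprint lets findings be de-duplicated across pipelines.
            raw = "{}:{}:{}".format(
                result["filePath"], message.get("ruleId"), message.get("message")
            )
            issues.append(
                {
                    "description": message.get("message", ""),
                    "check_name": message.get("ruleId") or "eslint",
                    "severity": SEVERITY_MAP.get(message.get("severity"), "info"),
                    "fingerprint": hashlib.sha1(raw.encode("utf-8")).hexdigest(),
                    "location": {
                        # ESLint emits absolute paths; a real converter would make
                        # them relative to the repository root.
                        "path": result["filePath"],
                        "lines": {"begin": message.get("line", 1)},
                    },
                }
            )
    return issues


if __name__ == "__main__":
    # e.g. eslint . --format json | python eslint_to_codequality.py > gl-code-quality-report.json
    json.dump(eslint_to_codequality(json.load(sys.stdin)), sys.stdout, indent=2)
```

If participants already run linters like this in CI, the research can probe whether they would prefer GitLab to ship such converters, accept linter-native formats directly, or keep a standalone scanning engine.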

@connorgilbert plans to create an Opportunity Canvas (Lite) incorporating the findings.

What, if any, relevant prior research already exists?

What timescales do you have in mind for the research?

Ideally we would be able to begin work on the new scanning system in FY23-Q2. It would be great to get a sense of the direction (for example, should we process linter outputs?) as early as possible so that we can proceed toward an MVC; more sophisticated or subtle questions that don't affect an MVC can wait longer.

Who will be leading the research?

@connorgilbert will be the final voice for the goals of the research, and @moliver28 will be the lead for research design.

@moliver28 will be the primary DRI for the actual research (depending on the methodology), but both parties will be heavily involved in the hands-on research.

Relevant links (opportunity canvas, discussion guide, notes, etc.)

TODO Checklist
