Commit 81838de2 authored by Ben Leduc-Mills, committed by Adam Smolinski

Remove ux-research-training, move to ux-research

parent f4a14ded
@@ -592,7 +592,7 @@ sites/handbook/source/handbook/marketing/corporate-marketing/corporate-communica
 
^[UX Research]
/sites/handbook/source/handbook/product/ux/ux-research/ @clenneville @asmolinski2 @laurenevans @alasch @david
-/sites/handbook/source/handbook/product/ux/ux-research-training/ @clenneville @asmolinski2 @laurenevans @alasch @david
+/sites/handbook/source/handbook/product/ux/ux-research/ @clenneville @asmolinski2 @laurenevans @alasch @david
/sites/handbook/source/handbook/product/ux/ux-research-coordination/ @clenneville @asmolinski2 @laurenevans @alasch @david
/sites/handbook/source/handbook/product/ux/persona-creation/ @clenneville @asmolinski2 @laurenevans @alasch @david
/sites/handbook/source/handbook/product/ux/qualtrics/ @clenneville @asmolinski2 @laurenevans @alasch @david
...
@@ -163,7 +163,7 @@
embed: v2
- name: Usability benchmarking overall score by stage
base_path: "/handbook/product/ux/performance-indicators/"
-definition: This PI tracks the overall stage score for <a href="https://about.gitlab.com/handbook/product/ux/ux-research-training/usability-benchmarking/">usability benchmarking</a> studies performed across stage groups as they change over time. The tasks and workflows that comprise each benchmarking study are derived from JTBD for one or more target personas typical for the stage running the study. The overall score for each study takes into account the performance of each task that was tested, through metrics like completion rate, severity, and customer effort score (CES). The scale is 0-100, where 90-100 is ‘Great’, 80-89 is ‘Good’, 70-79 is ‘Fair’, 69 and below is ‘Poor’.
+definition: This PI tracks the overall stage score for <a href="https://about.gitlab.com/handbook/product/ux/ux-research/usability-benchmarking/">usability benchmarking</a> studies performed across stage groups as they change over time. The tasks and workflows that comprise each benchmarking study are derived from JTBD for one or more target personas typical for the stage running the study. The overall score for each study takes into account the performance of each task that was tested, through metrics like completion rate, severity, and customer effort score (CES). The scale is 0-100, where 90-100 is ‘Great’, 80-89 is ‘Good’, 70-79 is ‘Fair’, 69 and below is ‘Poor’.
target: 5% increase in overall score from previous benchmarking, maintaining an overall score above 79/100.
org: UX Department
is_key: false
...
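The 0-100 scale in the PI definition above maps cleanly to its four rating bands. A minimal sketch of that mapping, for illustration only (this helper is not part of the handbook's tooling):

```python
def benchmarking_rating(score: float) -> str:
    """Map an overall usability-benchmarking score (0-100) to its band:
    90-100 is 'Great', 80-89 'Good', 70-79 'Fair', 69 and below 'Poor'."""
    if score >= 90:
        return "Great"
    if score >= 80:
        return "Good"
    if score >= 70:
        return "Fair"
    return "Poor"
```

For example, the PI target of maintaining an overall score above 79/100 corresponds to staying in the "Good" band or better.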
@@ -1014,7 +1014,7 @@
- sources:
- /handbook/engineering/ux/ux-researcher-training
- /handbook/engineering/ux/ux-researcher-training/
-target: /handbook/product/ux/ux-research-training/
+target: /handbook/product/ux/ux-researcher-training/
comp_op: '='
- sources:
- /job-families/legal
@@ -4357,7 +4357,7 @@
- sources:
- /handbook/engineering/ux/ux-research-training/strategic-research-at-gitlab/cross-stage-research-program.html
- /handbook/engineering/ux/ux-research-training/strategic-research-at-gitlab/devops-platform-research-program.html
-target: /handbook/product/ux/ux-research-training/strategic-research-at-gitlab/gitlab-adoption-research-program.html
+target: /handbook/product/ux/ux-research/strategic-research-at-gitlab/gitlab-adoption-research-program.html
comp_op: '='
- sources:
- /handbook/hiring/resource-guide
...
@@ -131,9 +131,9 @@ The growth of a world class product is built from a well maintained backlog. Pro
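The redirect entries changed above follow a `sources`/`target`/`comp_op` shape. A sketch of how such an entry might be evaluated; the dict layout mirrors the YAML, but the matching logic (and reading `comp_op: '='` as an exact path comparison) is an assumption for illustration, not the site's actual implementation:

```python
def resolve_redirect(path, redirects):
    """Return the redirect target for `path`, or None if no rule matches."""
    for rule in redirects:
        # Assumed semantics: comp_op '=' means an exact path comparison.
        if rule.get("comp_op") == "=" and path in rule["sources"]:
            return rule["target"]
    return None

# Mirrors the corrected entry from the hunk above.
redirects = [
    {
        "sources": [
            "/handbook/engineering/ux/ux-researcher-training",
            "/handbook/engineering/ux/ux-researcher-training/",
        ],
        "target": "/handbook/product/ux/ux-researcher-training/",
        "comp_op": "=",
    },
]
```

Under this reading, both source spellings (with and without the trailing slash) resolve to the single corrected target, which is why the commit fixes the `target` line rather than the `sources`.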
 
#### Description
 
-To ensure the right solutions are delivered, the team must start their work with a [validated problem](/handbook/product/ux/ux-research-training/problem-validation-and-methods). This can take [many forms](/handbook/product/ux/ux-research-training/problem-validation-and-methods/#foundational-research-methods) and be achieved through Product Manager and UX Researcher collaboration.
+To ensure the right solutions are delivered, the team must start their work with a [validated problem](/handbook/product/ux/ux-research/problem-validation-and-methods). This can take [many forms](/handbook/product/ux/ux-research/problem-validation-and-methods/#foundational-research-methods) and be achieved through Product Manager and UX Researcher collaboration.
 
-If the problem is documented and well-understood, it may be possible to quickly move through this phase by documenting the known data about the user problem. A documented problem can be categorized as a pre-existing experience from feedback directly from users or an issue that has user engagement confirming that the problem is experienced by multiple users. A well-understood problem can be one that has a series of documented qualitative research from customer interviews led by the Product Manager, triangulating [different sensing mechanisms](/handbook/product/product-processes/#sensing-mechanisms) confirming the problem, or using known data. Some examples of known data include [User Requested Issues](https://app.periscopedata.com/app/gitlab/480786/User-Requested-Issues) or pre-existing [`Actionable Insights`](/handbook/product/ux/ux-research-training/research-insights/#how-to-document-actionable-insights) from prior research. To document that a problem is well-understood, link the known data and any customer calls to the relevant issues and epics.
+If the problem is documented and well-understood, it may be possible to quickly move through this phase by documenting the known data about the user problem. A documented problem can be categorized as a pre-existing experience from feedback directly from users or an issue that has user engagement confirming that the problem is experienced by multiple users. A well-understood problem can be one that has a series of documented qualitative research from customer interviews led by the Product Manager, triangulating [different sensing mechanisms](/handbook/product/product-processes/#sensing-mechanisms) confirming the problem, or using known data. Some examples of known data include [User Requested Issues](https://app.periscopedata.com/app/gitlab/480786/User-Requested-Issues) or pre-existing [`Actionable Insights`](/handbook/product/ux/ux-research/research-insights/#how-to-document-actionable-insights) from prior research. To document that a problem is well-understood, link the known data and any customer calls to the relevant issues and epics.
 
If the problem is nuanced or not yet well understood, then it will likely take longer to validate with users properly. This phase's primary outcome is a clear understanding of the problem, along with a simple and clear way to communicate the problem to various stakeholders. Although optional, it is recommended to use an [Opportunity Canvas](/handbook/product/product-processes/#opportunity-canvas) as a tool that helps individuals better understand a problem and communicate it to various stakeholders. An Opportunity Canvas can also be used to recommend the creation of a new category, including asking for new resourcing.
 
@@ -141,7 +141,7 @@ If the problem is nuanced or not yet well understood, then it will likely take l
 
| Outcomes | Activities | DRI |
|----------|------------|-----|
-| <i class="fab fa-gitlab fa-fw" style="color:rgb(252,109,38); font-size:1.25em" aria-hidden="true"></i> **Thorough understanding of the problem**: The team understands the problem, who it affects, when and why, and how solving the problem maps to business needs and product strategy. | - Create an issue using the [Problem Validation Template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Problem%20Validation.md) in the GitLab project. <br/>- Complete an [Opportunity Canvas](/handbook/product/product-processes/#opportunity-canvas). <br/>- Schedule a review of the opportunity canvas for feedback. <br/>- Create an issue using the [Problem Validation Research Template](https://gitlab.com/gitlab-org/ux-research/-/blob/master/.gitlab/issue_templates/Problem%20validation.md) in the UX Research project and work with the UX Researcher to execute the research study. <br/>- Validate your problem with users using any of the [proposed methods](/handbook/product/ux/ux-research-training/problem-validation-and-methods/) and [document your findings in Dovetail](/handbook/product/ux/dovetail/). | Product Manager |
+| <i class="fab fa-gitlab fa-fw" style="color:rgb(252,109,38); font-size:1.25em" aria-hidden="true"></i> **Thorough understanding of the problem**: The team understands the problem, who it affects, when and why, and how solving the problem maps to business needs and product strategy. | - Create an issue using the [Problem Validation Template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Problem%20Validation.md) in the GitLab project. <br/>- Complete an [Opportunity Canvas](/handbook/product/product-processes/#opportunity-canvas). <br/>- Schedule a review of the opportunity canvas for feedback. <br/>- Create an issue using the [Problem Validation Research Template](https://gitlab.com/gitlab-org/ux-research/-/blob/master/.gitlab/issue_templates/Problem%20validation.md) in the UX Research project and work with the UX Researcher to execute the research study. <br/>- Validate your problem with users using any of the [proposed methods](/handbook/product/ux/ux-research/problem-validation-and-methods/) and [document your findings in Dovetail](/handbook/product/ux/dovetail/). | Product Manager |
| <i class="fab fa-gitlab fa-fw" style="color:rgb(252,109,38); font-size:1.25em" aria-hidden="true"></i> **Update issue/epic description**: A well understood and clearly articulated customer problem is added to the issue, and will lead to successful and efficient design and development phases. | - Ensure your issue is up-to-date with the latest understanding of the problem. <br/>- Understand and document (in the issue) the goals that people want to accomplish using the [Jobs to be Done (JTBD)](/handbook/product/ux/jobs-to-be-done/) framework. <br/>- Conduct [continuous interviews](/handbook/product/product-processes/continuous-interviewing/) with customers on a regular cadence to stay up-to-date on the problems that users face. <br/>- Leverage your [opportunity canvas](/handbook/product/product-processes/#opportunity-canvas) to communicate the problem to your stable counterparts and group stakeholders. Consider scheduling a review to gather feedback and communicate the findings to Product and UX leadership. | Product Manager |
| Initiate the Dogfooding Process: When validating problems, it's important to gather feedback from [internal customers](/handbook/product/product-processes/#internal-customer-dris), in addition to the broader community. Capturing internal customer feedback early on in the product development flow helps ensure their needs are considered as the feature matures, accelerating key [Dogfooding](/handbook/product/product-processes/#dogfood-everything) outcomes. Driving internal usage of features consistently [leads to greater customer adoption](https://about.gitlab.com/blog/2020/04/16/geo-is-available-on-staging-for-gitlab-com/) and is required for [viable maturity](https://about.gitlab.com/direction/maturity/). | - PMs are strongly encouraged to open [Dogfooding issues](/handbook/product/product-processes/#dogfooding-process) during the validation phase and capture internal customer feedback to help inform initial and/or future iterations of a feature. | Product Manager |
 
@@ -169,7 +169,7 @@ To start the Design phase, the Product Designer or Product Manager applies the `
 
| Outcomes | Activities | DRI |
|----------|------------|-----|
-|<i class="fab fa-gitlab fa-fw" style="color:rgb(252,109,38); font-size:1.25em" aria-hidden="true"></i> **Proposed solution(s) identified and documented**: The Product Designer works with the Product Manager and Engineering team to explore solutions and identifies the approach(es) that strike the best balance of user experience, customer value, business value, and development cost. | The Product Designer optionally applies the `workflow::ready for design` label to the issue, signaling that the design backlog of upcoming issues is prioritized. <br/> <br/> **Diverge**: explore multiple different approaches as a team. Example activities: <br/>- [Think Big](/handbook/product/ux/thinkbig/) session. <br/>- Internal interviews (be sure to [document findings in Dovetail](/handbook/product/ux/dovetail/)). <br/> - Creating [user flows](https://careerfoundry.com/en/blog/ux-design/what-are-user-flows/) or [journey maps](https://uxplanet.org/a-beginners-guide-to-user-journey-mapping-bd914f4c517c). <br/><br/> **Converge**: identify a small set of options to validate. Example activities:<br/> - [Think Small](https://about.gitlab.com/handbook/product/ux/thinkbig/#think-small) session with the team.<br/> - Design reviews with the team.<br/> - Low-fidelity design ideas. <br/> - Update the issue/epic description with the proposed solution. Add a Figma design file link or attach the design to [GitLab's Design Management](https://docs.gitlab.com/ee/user/project/issues/design_management.html) to communicate the solution idea. Read to understand [what tool to use](/handbook/product/ux/product-designer/#deliver). <br/> - Validate the approach with help from stakeholders. Run user validation using any of the [proposed methods](/handbook/product/ux/ux-research-training/solution-validation-and-methods/) and [document your findings in Dovetail](/handbook/product/ux/dovetail/) and the appropriate GitLab issue. <br/> - Draw inspiration from competitive and adjacent offerings. | Product Designer |
+|<i class="fab fa-gitlab fa-fw" style="color:rgb(252,109,38); font-size:1.25em" aria-hidden="true"></i> **Proposed solution(s) identified and documented**: The Product Designer works with the Product Manager and Engineering team to explore solutions and identifies the approach(es) that strike the best balance of user experience, customer value, business value, and development cost. | The Product Designer optionally applies the `workflow::ready for design` label to the issue, signaling that the design backlog of upcoming issues is prioritized. <br/> <br/> **Diverge**: explore multiple different approaches as a team. Example activities: <br/>- [Think Big](/handbook/product/ux/thinkbig/) session. <br/>- Internal interviews (be sure to [document findings in Dovetail](/handbook/product/ux/dovetail/)). <br/> - Creating [user flows](https://careerfoundry.com/en/blog/ux-design/what-are-user-flows/) or [journey maps](https://uxplanet.org/a-beginners-guide-to-user-journey-mapping-bd914f4c517c). <br/><br/> **Converge**: identify a small set of options to validate. Example activities:<br/> - [Think Small](https://about.gitlab.com/handbook/product/ux/thinkbig/#think-small) session with the team.<br/> - Design reviews with the team.<br/> - Low-fidelity design ideas. <br/> - Update the issue/epic description with the proposed solution. Add a Figma design file link or attach the design to [GitLab's Design Management](https://docs.gitlab.com/ee/user/project/issues/design_management.html) to communicate the solution idea. Read to understand [what tool to use](/handbook/product/ux/product-designer/#deliver). <br/> - Validate the approach with help from stakeholders. Run user validation using any of the [proposed methods](/handbook/product/ux/ux-research/solution-validation-and-methods/) and [document your findings in Dovetail](/handbook/product/ux/dovetail/) and the appropriate GitLab issue. <br/> - Draw inspiration from competitive and adjacent offerings. | Product Designer |
|<i class="fab fa-gitlab fa-fw" style="color:rgb(252,109,38); font-size:1.25em" aria-hidden="true"></i> **Shared understanding in the team of the proposed solution**: The Product Designer leads the broader team through a review of the proposed solution(s). | - Review the proposed solution as a team so that everyone has a chance to contribute, ask questions, raise concerns, and suggest alternatives. <br/>- Review the proposed solution with leadership. | Product Designer |
|<i class="fab fa-gitlab fa-fw" style="color:rgb(252,109,38); font-size:1.25em" aria-hidden="true"></i> **Confidence in the technical feasibility**: It's important that Engineering understands the technical feasibility of the solution(s) to avoid rework or significant changes when we start the build phase. | - Discuss the technical implications with Engineering to ensure that what is being proposed is possible within the desired timeframe. When sharing design work, use both Figma's collaboration tools and GitLab's design management features. Read to understand [what tool to use](/handbook/product/ux/product-designer/#deliver). <br/>- Engage engineering peers early and often through Slack messages, pings on issues or by scheduling sessions to discuss the proposal.<br>- If the solution is large and complex, consider scheduling a [spike](/handbook/product/product-processes/#spikes) to mitigate risks and uncover the optimal iteration path. | Product Designer |
|<i class="fab fa-gitlab fa-fw" style="color:rgb(252,109,38); font-size:1.25em" aria-hidden="true"></i> **Updated issues/epic descriptions**: The Product Manager and Product Designer ensure issues and epics are up-to-date. | - Ensure issues and epics are up-to-date, so we can continue our work efficiently and asynchronously. <br/>- [Experiment definition](/handbook/engineering/development/growth/#experiment-definition-standards). | Product Manager |
@@ -198,7 +198,7 @@ To start the Solution Validation phase, the Product Designer or Product Manager
 
| Outcomes | Activities | DRI |
|----------|------------|-----|
-|<i class="fab fa-gitlab fa-fw" style="color:rgb(252,109,38); font-size:1.25em" aria-hidden="true"></i> **High confidence in the proposed solution**: Confidence that the jobs to be done outlined within the problem statement can be fulfilled by the proposed solution. | - Gather feedback from relevant stakeholders. <br>- Follow [solution validation guidance](/handbook/product/ux/ux-research-training/solution-validation-and-methods/) to gather feedback. | Product Designer |
+|<i class="fab fa-gitlab fa-fw" style="color:rgb(252,109,38); font-size:1.25em" aria-hidden="true"></i> **High confidence in the proposed solution**: Confidence that the jobs to be done outlined within the problem statement can be fulfilled by the proposed solution. | - Gather feedback from relevant stakeholders. <br>- Follow [solution validation guidance](/handbook/product/ux/ux-research/solution-validation-and-methods/) to gather feedback. | Product Designer |
|<i class="fab fa-gitlab fa-fw" style="color:rgb(252,109,38); font-size:1.25em" aria-hidden="true"></i> **Documented Solution Validation Learnings**: The results of the solution validation are communicated to and understood by team members. | - Document solution validation findings as [insights in Dovetail](/handbook/product/ux/dovetail/). <br>- Update the [opportunity canvas](/handbook/product/product-processes/#opportunity-canvas) (if used) with relevant insights. <br>- Update the issue or epic description to contain or link to the findings. | Product Designer |
 
## Build track
...
@@ -297,7 +297,7 @@ This content is divided into five key competencies for Product Managers.
- [Julie Zhuo: How to Work with Designers](https://medium.com/the-year-of-the-looking-glass/how-to-work-with-designers-6c975dede146#.q68swu2de)
- [Jess Eddy: What do designers really want from product managers?](https://uxdesign.cc/what-do-designers-really-want-from-product-managers-9c0e14993a8)
- [Jared Spool: Using the Kano Model to Build Delightful UX](https://youtu.be/ewpz2gR_oJQ) (45 min video)
-- [GitLab Kano Model and Feature Prioritization Survey](https://about.gitlab.com/handbook/product/ux/ux-research-training/kano-model/)
+- [GitLab Kano Model and Feature Prioritization Survey](https://about.gitlab.com/handbook/product/ux/ux-research/kano-model/)
 
##### Deeper dive
 
...
@@ -226,9 +226,9 @@ Customer interviewing is essential to clearly defining the user's needs, problem
 
| **Level** | **Demonstrates Competency by...** | **Assessment** |
| ----- | ----------------------------- | ---------- |
-| **PM** | Adept at qualitative customer interviewing. Uses templates and already available resources for discussion guides. 50% or greater reliance on UX research for interviewing. Capable of deriving key insights and patterns from customer interviews, and using that input to clarify problem statements. Potentially misses cross-stage or outside primary persona or use cases. Capable of completing the guidelines referenced in the [Validation Track](/handbook/product-development-flow/#validation-track) handbook page. Low to moderate confidence in conducting and moderating [user interviews](/handbook/product/ux/ux-research-training/problem-validation-single-stage-group/#for-user-interviews). Aware of [RICE](https://www.productplan.com/glossary/rice-scoring-model/) as a priority setting tool and can apply the framework assisted. | [Customer Interview Assessment - Individual Contributors](https://forms.gle/zMyvXPE8EeSvjbVg9) |
-| **Sr. PM** | Skilled at qualitative customer interviewing. Actively improves existing resources and templates. 30% or less reliance on UX research for interviewing. Excellent at deriving key insights and patterns from customer interviews, and using that input to clarify problem statements. Independently identifies and brings in cross-stage representation during interviews with non-primary personas or use cases.<br>Skilled at applying and executing against the [Validation Track](/handbook/product-development-flow/#validation-track) in the handbook. Moderate to high confidence in conducting or moderating [user interviews](/handbook/product/ux/ux-research-training/problem-validation-single-stage-group/#for-user-interviews) independently. Capable of applying [RICE](https://www.productplan.com/glossary/rice-scoring-model/) as a priority setting tool unassisted. | [Customer Interview Assessment - Individual Contributors](https://forms.gle/zMyvXPE8EeSvjbVg9) |
-| **Principal PM/ Group Manager PM** | Highly skilled at qualitative customer interviewing. Coaches and continuously seeks feedback for existing resources on interviewing. Minimal reliance on UX research for interviewing and leverages UX research for consultation of research strategy. Contributes to goal setting and OKR development across the team. Seeks opportunities for cross-stage collaboration and validation from ancillary use cases or personas. Iterates and engages with the [Validation Track](/handbook/product-development-flow/#validation-track) in the handbook as new learnings arise. Typically conducts or moderates [user interviews](/handbook/product/ux/ux-research-training/problem-validation-single-stage-group/#for-user-interviews) independently. Capable of applying [RICE](https://www.productplan.com/glossary/rice-scoring-model/) as a priority setting tool unassisted. | [Customer Interview Assessment - Individual Contributors](https://forms.gle/zMyvXPE8EeSvjbVg9)<br>Customer Interview Assessment - People Leaders - Coming Soon |
+| **PM** | Adept at qualitative customer interviewing. Uses templates and already available resources for discussion guides. 50% or greater reliance on UX research for interviewing. Capable of deriving key insights and patterns from customer interviews, and using that input to clarify problem statements. Potentially misses cross-stage or outside primary persona or use cases. Capable of completing the guidelines referenced in the [Validation Track](/handbook/product-development-flow/#validation-track) handbook page. Low to moderate confidence in conducting and moderating [user interviews](/handbook/product/ux/ux-research/problem-validation-single-stage-group/#for-user-interviews). Aware of [RICE](https://www.productplan.com/glossary/rice-scoring-model/) as a priority setting tool and can apply the framework assisted. | [Customer Interview Assessment - Individual Contributors](https://forms.gle/zMyvXPE8EeSvjbVg9) |
+| **Sr. PM** | Skilled at qualitative customer interviewing. Actively improves existing resources and templates. 30% or less reliance on UX research for interviewing. Excellent at deriving key insights and patterns from customer interviews, and using that input to clarify problem statements. Independently identifies and brings in cross-stage representation during interviews with non-primary personas or use cases.<br>Skilled at applying and executing against the [Validation Track](/handbook/product-development-flow/#validation-track) in the handbook. Moderate to high confidence in conducting or moderating [user interviews](/handbook/product/ux/ux-research/problem-validation-single-stage-group/#for-user-interviews) independently. Capable of applying [RICE](https://www.productplan.com/glossary/rice-scoring-model/) as a priority setting tool unassisted. | [Customer Interview Assessment - Individual Contributors](https://forms.gle/zMyvXPE8EeSvjbVg9) |
+| **Principal PM/ Group Manager PM** | Highly skilled at qualitative customer interviewing. Coaches and continuously seeks feedback for existing resources on interviewing. Minimal reliance on UX research for interviewing and leverages UX research for consultation of research strategy. Contributes to goal setting and OKR development across the team. Seeks opportunities for cross-stage collaboration and validation from ancillary use cases or personas. Iterates and engages with the [Validation Track](/handbook/product-development-flow/#validation-track) in the handbook as new learnings arise. Typically conducts or moderates [user interviews](/handbook/product/ux/ux-research/problem-validation-single-stage-group/#for-user-interviews) independently. Capable of applying [RICE](https://www.productplan.com/glossary/rice-scoring-model/) as a priority setting tool unassisted. | [Customer Interview Assessment - Individual Contributors](https://forms.gle/zMyvXPE8EeSvjbVg9)<br>Customer Interview Assessment - People Leaders - Coming Soon |
| **Director PM** | Ensures consistent execution of validation track skills across product groups. Seeks feedback and continuous refinement of validation processes. Measures and evaluates validation track performance on SMAU to ensure the process is delivering results for the business. | Customer Interview Assessment - People Leaders - Coming Soon |
| **Senior Director PM** | In addition to upholding director requirements, senior directors work to ensure the validation track appropriately includes external teams like UX, UX research, Design, or Engineering, as necessary, developing awareness of and driving collaboration on the track within R&D. | Customer Interview Assessment - People Leaders - Coming Soon |
| **Vice President PM** | In addition to the senior director requirements, vice presidents proactively inform the validation process, goals, and frameworks with context from the global company and external stakeholders, like investors. | Customer Interview Assessment - People Leaders - Coming Soon |
...
@@ -16,7 +16,7 @@ Continuous interviews are open to all GitLab team members. The PM should notify
 
Prior to starting the practice of continuous interviewing, the PM should develop their own interview script relevant to their product area. Product managers can consult with UX research or the examples in this [project](https://gitlab.com/gitlab-com/user-interviews/-/issues) to create the script.
 
-When speaking with the customer, the PM should refer to these [interviewing tips](/handbook/product/ux/ux-research-training/facilitating-user-interviews/#tips-for-interviewing) to help make these conversations a successful experience for the user and the Product team.
+When speaking with the customer, the PM should refer to these [interviewing tips](/handbook/product/ux/ux-research/facilitating-user-interviews/#tips-for-interviewing) to help make these conversations a successful experience for the user and the Product team.
 
With the customer's consent, the interviews are recorded and added to [Dovetail](/handbook/product/ux/dovetail/) where the notes and transcript are [tagged for future reference](/handbook/product/ux/dovetail/#tagging-data-in-dovetail).
 
@@ -37,7 +37,7 @@ The following is a non-exhaustive list of approaches to finding customers open t
* Continuously communicate to the customer success team that we are open to customer calls, but make it clear that these are customer interviews, not sales calls. Use the #customer-success Slack channel or open a [CSM project issue](https://gitlab.com/gitlab-com/customer-success/tam/-/issues/new?issueable_template=Product%20Engagement)
* Attend monthly Customer Success Managers (CSM) meetings and make requests for continuous interviews within their Google Doc agenda
* Set up coffee chats with CSMs to discuss continuous interviewing requests
* Additional resources for [participant recruitment through UX Research](/handbook/engineering/ux/ux-research-training/recruiting-participants/)
-* Additional resources for [participant recruitment through UX Research](/handbook/engineering/ux/ux-research-training/recruiting-participants/)
+* Additional resources for [participant recruitment through UX Research](/handbook/engineering/ux/ux-research/recruiting-participants/)
 
#### Tips for leading continuous interviews
 
...
@@ -2390,7 +2390,7 @@ Each quarter we want to reach out to [PNPS](/handbook/product/performance-indica
1. Draft an email that you'll send to users. Example copy is below. You can re-phrase things as you wish but make sure you still cover the same points as the example.
1. **BE ON TIME TO YOUR CALL**. Better yet, be 2 minutes early. Be ready to coach people through getting Zoom to work properly. Make sure everyone on the call introduces themselves.
1. If people have agreed to recording, still ask them once again if it's OK if you record before turning it on. Obviously do not record people that did not give consent.
-1. See our training materials on [facilitating user interviews](/handbook/product/ux/ux-research-training/facilitating-user-interviews/)
+1. See our training materials on [facilitating user interviews](/handbook/product/ux/ux-research/facilitating-user-interviews/)
 
**Example email copy**:
 
...
@@ -15,7 +15,7 @@ description: >-
 
## Intro and Goal
 
-The Category Maturity (CM) Scorecard is a [Summative Evaluation](https://www.nngroup.com/articles/formative-vs-summative-evaluations/) that takes into account the entire experience as defined by a Job to be Done (JTBD), instead of individual improvement(s), which are often measured through [Usability Testing](/handbook/product/ux/ux-research-training/usability-testing/) (i.e. Solution Validation). This specialized process provides data to help us grade the [maturity of our product](/direction/maturity/).
+The Category Maturity (CM) Scorecard is a [Summative Evaluation](https://www.nngroup.com/articles/formative-vs-summative-evaluations/) that takes into account the entire experience as defined by a Job to be Done (JTBD), instead of individual improvement(s), which are often measured through [Usability Testing](/handbook/product/ux/ux-research/usability-testing/) (i.e. Solution Validation). This specialized process provides data to help us grade the [maturity of our product](/direction/maturity/).
 
The goal of this process is to produce data as objectively as possible given time and resource constraints. For this reason, the process is more rigorous than other UX research methods, and it focuses more on measures and less on thoughts and verbal feedback.
 
......@@ -90,7 +90,7 @@ Once you know what scenario(s) you’ll put your participants through, it’s im
 
It’s important to thoroughly plan how a participant will complete your scenario(s), especially if you answered "yes" to any of the questions above. Involve technical counterparts early in the process if you have any uncertainty about how to enable users to go through your desired flow(s).
 
If you want help creating a pristine test environment, be sure to reach out to the [Demo Systems](/handbook/customer-success/demo-systems/) group on the #demo-systems Slack channel. They can create a demo environment for users and help build any particular parameters needed for your testing environment. Be aware that setting up a test environment for a research study can be time-consuming and difficult. Alternatively, you can utilize the [UX Cloud Sandbox](/handbook/product/ux/ux-research-training/ux-cloud-sandbox/).
If you want help creating a pristine test environment, be sure to reach out to the [Demo Systems](/handbook/customer-success/demo-systems/) group on the #demo-systems Slack channel. They can create a demo environment for users and help build any particular parameters needed for your testing environment. Be aware that setting up a test environment for a research study can be time-consuming and difficult. Alternatively, you can utilize the [UX Cloud Sandbox](/handbook/product/ux/ux-research/ux-cloud-sandbox/).
 
If your JTBD interacts with other stage groups’ areas, reach out to them to ensure their part of our product will support your scenario(s).
 
@@ -256,7 +256,7 @@ The goal for analyzing Category Maturity Scorecard data is to establish a baseli
 
**To analyze:** Use the [Google Sheet](https://docs.google.com/spreadsheets/d/1w3GZNc11PSZ9sN_2II5SI3fwK4tH9LLSb2bci_o2mWg/copy) to aid in calculating the CM Scorecard score, per scenario. Additionally, look for themes behind the reason why participants scored the way they did.
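As a rough illustration only (the linked Google Sheet defines the actual calculation, which is not reproduced here), the per-scenario roll-up can be sketched as a plain average of 0-100 task scores, mapped to the grade bands used for these studies (90-100 'Great', 80-89 'Good', 70-79 'Fair', 69 and below 'Poor'):

```python
def scenario_score(task_scores):
    """Roll up per-task scores (each 0-100) into one scenario score.

    Simplified sketch: a plain average. The handbook's Google Sheet
    may weight completion rate, severity, and CES differently.
    """
    return sum(task_scores) / len(task_scores)


def grade(score):
    """Map a 0-100 score to the grade bands used for these studies."""
    if score >= 90:
        return "Great"
    if score >= 80:
        return "Good"
    if score >= 70:
        return "Fair"
    return "Poor"


print(grade(scenario_score([82, 95, 88])))  # prints "Good"
```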
 
**To document:** Document and highlight areas for improvement via issues, utilizing the [‘Actionable Insight’ scoped labels](/handbook/product/ux/ux-research-training/research-insights/#how-to-document-actionable-insights), to make further improvements to the experience.
**To document:** Document and highlight areas for improvement via issues, utilizing the [‘Actionable Insight’ scoped labels](/handbook/product/ux/ux-research/research-insights/#how-to-document-actionable-insights), to make further improvements to the experience.
 
Read the UX Research team’s guide for [documenting insights in Dovetail](/handbook/product/ux/dovetail/#the-ux-research-teams-guide-to-documenting-insights-in-dovetail).
 
@@ -10,7 +10,7 @@ description: "The GitLab UX Research team's guide to documenting insights in Dov
- TOC
{:toc .hidden-md .hidden-lg}
 
The UX Research team uses [Dovetail](https://dovetailapp.com/) to document all the research insights discovered through GitLab’s UX research program. Research insights can be gathered through methods such as user [interviews](/handbook/product/ux/ux-research-training/facilitating-user-interviews/), [usability testing](/handbook/product/ux/ux-research-training/usability-testing/), surveys, card sorts, tree tests, customer conversations, and more.
The UX Research team uses [Dovetail](https://dovetailapp.com/) to document all the research insights discovered through GitLab’s UX research program. Research insights can be gathered through methods such as user [interviews](/handbook/product/ux/ux-research/facilitating-user-interviews/), [usability testing](/handbook/product/ux/ux-research/usability-testing/), surveys, card sorts, tree tests, customer conversations, and more.
 
#### Why do we document research in Dovetail?
The UX Research team has [always faced challenges](https://about.gitlab.com/blog/2019/07/10/building-a-ux-research-insights-repository/) in finding the best way to create research reports that are easy to digest and access. When sharing research insights via PDFs, Google Docs, and even GitLab issues themselves, it was difficult to track and share study findings. Additionally, since we are often asked to readily recall information we've learned in prior studies, it can be tedious to read through old reports, look through pages of interview notes, or rewatch video recordings to find the information we need. This problem compounds as we continuously produce research reports and the body of findings keeps growing.
@@ -62,7 +62,7 @@ The following video demonstrates how to use the import feature and how to struct
 
Dovetail helps you identify patterns and themes that emerge across your research data and turn those into insight statements. Once you have imported all your raw data, you are ready to start highlighting and tagging content. Think of a highlight as anything interesting that you heard or observed during a research session (for example: a user's pain point or motivation). Tag highlights with the feature/area of GitLab to which the highlight relates (for example, ‘Merge Requests’) and the persona (for example, ‘Sasha: Software Developer’) who made the comment, if possible.
 
A bit like [affinity mapping](https://en.wikipedia.org/wiki/Affinity_diagram), tags in Dovetail help you identify and keep track of patterns that emerge across your research data. A single highlight can have one or many tags associated with it. More help can be found on our [Analyzing and synthesizing user data ](/handbook/product/ux/ux-research-training/analyzing-research-data/)handbook page.
A bit like [affinity mapping](https://en.wikipedia.org/wiki/Affinity_diagram), tags in Dovetail help you identify and keep track of patterns that emerge across your research data. A single highlight can have one or many tags associated with it. More help can be found on our [Analyzing and synthesizing user data ](/handbook/product/ux/ux-research/analyzing-research-data/)handbook page.
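As an illustration of that one-to-many relationship (the highlight texts and tag names below are hypothetical examples, not Dovetail's API), grouping highlights by tag looks like:

```python
from collections import defaultdict

# Hypothetical highlights, each carrying one or many tags.
highlights = [
    {"text": "Couldn't find the merge button", "tags": ["Merge Requests", "unsure what to do"]},
    {"text": "Loves the pipeline graph", "tags": ["CI/CD"]},
    {"text": "Expected MRs under the project sidebar", "tags": ["Merge Requests", "down the wrong path"]},
]

# Affinity-map style grouping: collect every highlight under each of its tags.
by_tag = defaultdict(list)
for highlight in highlights:
    for tag in highlight["tags"]:
        by_tag[tag].append(highlight["text"])

# "Merge Requests" now holds two highlights; each generic tag holds one.
```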
 
#### Enable Global Tags in your Project
 
@@ -113,11 +113,11 @@ Gitlab Global Tags are organized into six categories:
| Category | Details | Example |
| ------------- | ------------- | ---------------- |
| User Action | These tags can be used to indicate what a user did while using the UI. | A user might be `unsure what to do` when given a task, and then proceeds to go `down the wrong path`.|
| User Feedback | Useful to describe what a user said during their research session.<br>Also a set of tags with generic options (A through D) which can be used in [Solution Validation](/handbook/product/ux/ux-research-training/solution-validation-and-methods/).<br>| A user might have a `feature request` when talking about a functionality they want in their workflow.<br><br>Or, the user could `Prefer Option B` in a design evaluation. |
| User Emotion | These tags are related to the user’s attitude towards the UI, like in a [usability test](/handbook/product/ux/ux-research-training/usability-testing/) or walkthrough. | A user could feel `overwhelmed` when being presented with a UI. |
| Workflow | Can be used to track the user’s actions in their workflow specifically.<br><br>Also a subset of tags with generic task numbers (1-10), which can be used for [usability tests](/handbook/product/ux/ux-research-training/usability-testing/) or [UX Scorecards](/handbook/product/ux/ux-scorecards/#option-b-perform-a-formative-evaluation).| You could use the `unsuccessful end task` tag if a user finished a series of tasks, but missed a key deliverable.<br><br>Or, you may want to keep track of when a user finished `task 1` and `task 2`.|
| Personas | Each tag relates to one of our [user personas](/handbook/product/personas/#list-of-user-personas), or characteristics of those personas.<br><br>Use these tags when looking for jobs or features that correspond to certain personas.| When a user configures a static scanner, they could be `Sam, Security Analyst`.<br><br>Or, if you are performing [foundational research](/handbook/product/ux/ux-research-training/foundational-research/) on the users’ organization, you could use `enterprise` or `start-up` tags. |
| JTBD | Most of the tags relate to the various stages in [mapping jobs](/handbook/product/ux/jobs-to-be-done/mapping-jobs-to-be-done/#how-to-create-a-job-map).<br><br>Can be used when performing foundational jobs research such as [contextual inquiries](/handbook/product/ux/ux-research-training/problem-validation-and-methods). | When conducting a contextual inquiry, a user may talk about monitoring their pipeline, which could be tagged with `Step in Job - monitor`.|
| User Feedback | Useful to describe what a user said during their research session.<br>Also a set of tags with generic options (A through D) which can be used in [Solution Validation](/handbook/product/ux/ux-research/solution-validation-and-methods/).<br>| A user might have a `feature request` when talking about a functionality they want in their workflow.<br><br>Or, the user could `Prefer Option B` in a design evaluation. |
| User Emotion | These tags are related to the user’s attitude towards the UI, like in a [usability test](/handbook/product/ux/ux-research/usability-testing/) or walkthrough. | A user could feel `overwhelmed` when being presented with a UI. |
| Workflow | Can be used to track the user’s actions in their workflow specifically.<br><br>Also a subset of tags with generic task numbers (1-10), which can be used for [usability tests](/handbook/product/ux/ux-research/usability-testing/) or [UX Scorecards](/handbook/product/ux/ux-scorecards/#option-b-perform-a-formative-evaluation).| You could use the `unsuccessful end task` tag if a user finished a series of tasks, but missed a key deliverable.<br><br>Or, you may want to keep track of when a user finished `task 1` and `task 2`.|
| Personas | Each tag relates to one of our [user personas](/handbook/product/personas/#list-of-user-personas), or characteristics of those personas.<br><br>Use these tags when looking for jobs or features that correspond to certain personas.| When a user configures a static scanner, they could be `Sam, Security Analyst`.<br><br>Or, if you are performing [foundational research](/handbook/product/ux/ux-research/foundational-research/) on the users’ organization, you could use `enterprise` or `start-up` tags. |
| JTBD | Most of the tags relate to the various stages in [mapping jobs](/handbook/product/ux/jobs-to-be-done/mapping-jobs-to-be-done/#how-to-create-a-job-map).<br><br>Can be used when performing foundational jobs research such as [contextual inquiries](/handbook/product/ux/ux-research/problem-validation-and-methods). | When conducting a contextual inquiry, a user may talk about monitoring their pipeline, which could be tagged with `Step in Job - monitor`.|
 
#### Global tag definitions
 
@@ -12,7 +12,7 @@ description: "The UX Department works alongside the community, Product Managemen
 
The UX Department works alongside the community, Product Managers (PMs), Frontend engineers (FE), Backend engineers (BE), and the Brand team. PMs are responsible for kicking off initiatives and setting the product direction. PMs define the "what" and "why" for feature-related issues by gathering customer and user feedback, and they give GitLab team members and the wider community space to suggest and create.
 
UX should assist in driving the [product vision](/direction/) early in the process. We inform the vision by conducting [foundational research](/handbook/product/ux/ux-research-training/foundational-research/) and facilitating discussions with community members, customers, PM, FE, and BE. We are **elevated** above just the transactional workflow and **generative** in creating work, rather than just executing tasks. We align with the Brand team's direction for GitLab and incorporate brand standards and "brand moments" into the product when appropriate.
UX should assist in driving the [product vision](/direction/) early in the process. We inform the vision by conducting [foundational research](/handbook/product/ux/ux-research/foundational-research/) and facilitating discussions with community members, customers, PM, FE, and BE. We are **elevated** above just the transactional workflow and **generative** in creating work, rather than just executing tasks. We align with the Brand team's direction for GitLab and incorporate brand standards and "brand moments" into the product when appropriate.
 
## UX Workflows
 
@@ -22,7 +22,7 @@ At GitLab, we have our own flavor of JTBD and use it throughout the design proce
- Evaluate existing experiences
- Assess category maturity
 
JTBD come directly from research and customer conversations with those people who do the tasks/jobs we need to design for. [Problem validation](/handbook/product/ux/ux-research-training/problem-validation-and-methods/#what-is-problem-validation) is one of the most effective ways to confidently inform the writing of a JTBD.
JTBD come directly from research and customer conversations with those people who do the tasks/jobs we need to design for. [Problem validation](/handbook/product/ux/ux-research/problem-validation-and-methods/#what-is-problem-validation) is one of the most effective ways to confidently inform the writing of a JTBD.
 
To learn more about our JTBD philosophy, see the [JTBD deep dive](/handbook/product/ux/jobs-to-be-done/deep-dive/) and [How to create a Job Map](https://about.gitlab.com/handbook/product/ux/jobs-to-be-done/mapping-jobs-to-be-done/).
 
@@ -86,7 +86,7 @@ Often, we find there are many JTBD for one category. We are striving to have 2-3
## Quick methods to increase confidence
 
- Reference previous research and industry standards.
- Conduct [generative problem validation research](/handbook/product/ux/ux-research-training/problem-validation-and-methods/#when-to-use-problem-validation) using broad questions. For example, ask questions like, "tell me what you do as a software engineer."
- Conduct [generative problem validation research](/handbook/product/ux/ux-research/problem-validation-and-methods/#when-to-use-problem-validation) using broad questions. For example, ask questions like, "tell me what you do as a software engineer."
- Run abbreviated 30-minute job interviews with a minimum of 5 participants (direct questions). For example, ask questions based on the JTBD such as, "tell me about the last time you made an architectural decision. What went well? What didn't go so well?" Document your interview using the [JTBD Interview Note template](https://docs.google.com/spreadsheets/d/e/2PACX-1vSX5b57MKfLFl59TfiN61rWNkm2Qctb8cVy40JUGsF6FyEcy3jhPBUxY-4D3exXxqXPwwBkcSOb0HT8/pub?output=xlsx).
 
## JTBD, user stories, and tasks
@@ -23,7 +23,7 @@ Every quarter, we include a question in the SUS survey that asks whether respond
1. Draft an email that you'll send to respondents. Example copy is below.
1. **BE ON TIME TO YOUR CALL**. Make sure everyone on the call introduces themselves.
1. If respondents have agreed to recording, still ask them once again if it's OK if you record before turning it on.
1. See our training materials on [facilitating user interviews](/handbook/product/ux/ux-research-training/facilitating-user-interviews/)
1. See our training materials on [facilitating user interviews](/handbook/product/ux/ux-research/facilitating-user-interviews/)
 
**Example email copy**:
 
@@ -78,7 +78,7 @@ To conduct these interviews:
1. Use the [User Persona - Internal Stakeholder Script [Google Docs Template]](https://docs.google.com/document/d/1ZsQXPeg2dZNPvoh2O5wRobIB4EtBWuNsNLyVpIs2raM/copy) and fill in the title and details with your persona.
2. Link your Google Sheet in your issue.
2. Use the [GitLab team page](https://about.gitlab.com/company/team/), various department team pages (like the [Engineering page](https://about.gitlab.com/handbook/engineering/#engineering-departments-sub-departments--teams)), or put a message in a team's Slack channel (usually found near the bottom of a team page) to recruit internal participants for feedback.
1. Use a [Calendly link](/handbook/product/ux/ux-research-training/recruiting-participants/#create-a-calendly-event) to schedule the 30 minute interview sessions.
1. Use a [Calendly link](/handbook/product/ux/ux-research/recruiting-participants/#create-a-calendly-event) to schedule the 30 minute interview sessions.
3. During each interview session:
1. If you need to take notes, or have an additional note taker, use the [user interview notes template](https://docs.google.com/spreadsheets/d/1hnIqg-fnCYW2XKHR8RBsO3cYLSMEZy2xUKmbiUluAY0/copy).
4. Summarize the interview data question by question to look for trends. Use the [Persona - Screener Questions template](https://docs.google.com/document/d/1Vm3InbGl7u1O2Q8s2TzFMEZnsnug51_qJvq2wGwsqiM/edit?usp=sharing) to help guide what to look for.
@@ -92,7 +92,7 @@ These moderated interviews should take approximately 1 hour. Recruit 8-10 partic
 
 
#### A) Recruit & schedule participants
To kick off participant recruitment, refer to the [UX handbook guide on how to recruit and schedule participants](/handbook/product/ux/ux-research-training/recruiting-participants/). Open a recruitment issue and use the screener derived from your previous research in [Step 2](#step-2-internal-interviews). Use the handbook page on [writing screener questions](/handbook/product/ux/ux-research-training/write-effective-screener/) for help, if needed.
To kick off participant recruitment, refer to the [UX handbook guide on how to recruit and schedule participants](/handbook/product/ux/ux-research/recruiting-participants/). Open a recruitment issue and use the screener derived from your previous research in [Step 2](#step-2-internal-interviews). Use the handbook page on [writing screener questions](/handbook/product/ux/ux-research/write-effective-screener/) for help, if needed.
 
#### B) Create Interview Script
Use the summarized information created from [Step 1](#step-1-meet-with-stakeholders) and [Step 2](#step-2-internal-interviews) to craft the script for the external interviews. The first section of the script should focus on the top jobs of the persona, and the motivations and frustrations of those jobs.
@@ -106,7 +106,7 @@ If you are unfamiliar on [Gitlab's JTBD](/handbook/product/ux/jobs-to-be-done/),
Later sections of the script may vary depending on what your past research indicates as important to the persona. You will most likely want to ask questions about the key tools your persona uses, any areas of GitLab they use or have trouble with, or other teams whose work is essential to your persona.
 
#### C) Conduct interviews
After enough participants have been recruited and scheduled, it is time to conduct the interviews! To prepare for these interviews, be sure to understand [GitLab's handbook page on how to facilitate a user interview](/handbook/product/ux/ux-research-training/facilitating-user-interviews/). More help can also be found in our [discussion guide template](https://docs.google.com/document/d/1ERpTsQs7vcKKHLFZ5qoTukUFFA1sdazaFzknsX0Ju5Q/edit).
After enough participants have been recruited and scheduled, it is time to conduct the interviews! To prepare for these interviews, be sure to understand [GitLab's handbook page on how to facilitate a user interview](/handbook/product/ux/ux-research/facilitating-user-interviews/). More help can also be found in our [discussion guide template](https://docs.google.com/document/d/1ERpTsQs7vcKKHLFZ5qoTukUFFA1sdazaFzknsX0Ju5Q/edit).
 
To help streamline data intake during interviews, you may use this [Persona Interview template (Google Form)](https://docs.google.com/forms/d/1o6Hn7fmnHQFW8hMZf7_HL3VHT02VI3rlUzhbsy-sTXg/copy) as a script and tool to fill in participant responses.
 
@@ -124,7 +124,7 @@ To conduct these interviews:
 
### Step 4: Synthesize & Compare Results
#### A) Synthesize the user data
If you are using the [Google Form template](https://docs.google.com/forms/d/1LkBbPoYY5TtmcgQxh0T9wo3Gy34yMX0CeRHQs_w1Xhw/copy) as described above, then the data synthesis will be a similar process for each round of research. If you need more help codifying the results, this handbook article on [data synthesis](/handbook/product/ux/ux-research-training/analyzing-research-data/) may help.
If you are using the [Google Form template](https://docs.google.com/forms/d/1LkBbPoYY5TtmcgQxh0T9wo3Gy34yMX0CeRHQs_w1Xhw/copy) as described above, then the data synthesis will be a similar process for each round of research. If you need more help codifying the results, this handbook article on [data synthesis](/handbook/product/ux/ux-research/analyzing-research-data/) may help.
 
For all open-ended questions, you first have to transform the qualitative data into something easier to understand. Do this by clustering responses into themes and tallying the counts of all themes. An example of this can be found in the table below:
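The clustering-and-tallying step just described can be sketched in a few lines (the themes below are hypothetical examples, not real study data):

```python
from collections import Counter

# Hypothetical open-ended responses already coded into themes.
coded_responses = [
    "slow pipelines", "unclear permissions", "slow pipelines",
    "too many notifications", "slow pipelines", "unclear permissions",
]

# Tally the counts of all themes, most frequent first.
theme_counts = Counter(coded_responses)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```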
 
@@ -56,7 +56,7 @@ Product Designer Tools
* [UX Issue Triage](/handbook/engineering/quality/issue-triage/#ux)
* [Heuristics](/handbook/product/ux/heuristics/)
* [Competitor Evaluation](https://gitlab.com/gitlab-org/competitor-evaluations) (GitLab Team Member access only)
* [UX Cloud Sandbox](/handbook/product/ux/ux-research-training/ux-cloud-sandbox/)
* [UX Cloud Sandbox](/handbook/product/ux/ux-research/ux-cloud-sandbox/)
 
Are you a new GitLab Product Designer? If so, welcome! Make sure you see the [Product Designer Workflow](/handbook/product/ux/product-designer/) handbook page that will help you get started.
 
@@ -246,7 +246,7 @@ As the design is completed (progressing from lo-fi to hi-fi), assets should be u
2. [Ideate and iterate](https://about.gitlab.com/handbook/product/ux/product-designer/#ideate-and-iterate) low-fidelity wireframes that incorporate all of the requirements of the theme, holistically addressing the JTBD, needs and use-cases.
- Collaborate with your counterparts early and often.
- Adjust your designs after [soliciting feedback](https://about.gitlab.com/handbook/product/ux/product-designer/#design-reviews) from your counterparts and the UX team as needed.
3. In order to increase your confidence in your design direction it's always recommended that you validate your low-fidelity wireframe design with the [solution validation](https://about.gitlab.com/handbook/product/ux/ux-research-training/solution-validation-and-methods/) method that is right for your project. After all, this is the design you'll be working from in future milestones to come, so it is best to ensure, while in this low-fidelity state, that it is usable and meets your user's goals and needs. Test it while it is cheap!
3. In order to increase your confidence in your design direction it's always recommended that you validate your low-fidelity wireframe design with the [solution validation](https://about.gitlab.com/handbook/product/ux/ux-research/solution-validation-and-methods/) method that is right for your project. After all, this is the design you'll be working from in future milestones to come, so it is best to ensure, while in this low-fidelity state, that it is usable and meets your user's goals and needs. Test it while it is cheap!
- Adjust your designs as necessary based on this user feedback solidifying your low-fidelity wireframe design direction.
4. Work with your counterparts to [break down your low-fidelity wireframe design into appropriate MVC(s)](https://about.gitlab.com/handbook/product/ux/product-designer/#refine-mvc)
- Create an issue(s) to track this MVC work.
@@ -144,19 +144,19 @@ Part of the role of product designers is to lead and facilitate idea generation
 
- Run a sync (such as a [ThinkBig!](/handbook/product/ux/thinkbig/) session), async, or combination workshop to generate ideas. Define a scope and invite participants from product, engineering, UX research, and other areas for best results.
- Reach out to [sales](/handbook/sales/), [customer success](/handbook/customer-success/) or [marketing](/handbook/marketing/corporate-marketing/) counterparts for a new perspective. You can also invite these counterparts as optional attendees to your regular meetings.
- Prioritize a round of [problem validation research](/handbook/product/ux/ux-research-training/problem-validation-and-methods/) together with Product Managers and UX research. Talk to customers about their experiences building software in a very open-ended way, see what keeps them up at night, what slows them down, and what impedes their productivity.
- Prioritize a round of [problem validation research](/handbook/product/ux/ux-research/problem-validation-and-methods/) together with Product Managers and UX research. Talk to customers about their experiences building software in a very open-ended way, see what keeps them up at night, what slows them down, and what impedes their productivity.
- Discover unknown pain points:
  - [Dovetail](/handbook/product/ux/dovetail/) is used to analyze data and collaborate on insights, and serves as our current research repository.
- [Chorus.ai](https://www.chorus.ai/) is a tool used by sales reps that records and transcribes sales calls. You can search calls by keyword to narrow in on what you listen to.
- [Zendesk](https://gitlab.zendesk.com/agent/) is also a source of information around existing problems, although it can be a bit harder to parse through the tickets, as they aren't necessarily categorized in a way that is optimal for UX.
 
[Access instructions for Dovetail, Zendesk and Chorus.ai](/handbook/product/ux/ux-research-training/ux-research-resources/#how-to-find-existing-research)
[Access instructions for Dovetail, Zendesk and Chorus.ai](/handbook/product/ux/ux-research/ux-research-resources/#how-to-find-existing-research)
 
- The [community forum](https://forum.gitlab.com/c/questions-and-answers/) is a support option for free users.
 
#### Understand the space
 
- Investigate whether there is existing UX Research in the [UX Research Archive](https://gitlab.com/gitlab-org/uxr_insights), [Dovetail](https://dovetailapp.com/), or other data that could help inform your decisions and measure results. If there isn't existing UX Research, contact your [UX Researcher](/handbook/product/ux/ux-research-training/how-uxr-team-operates/) to conduct (or guide you and your Product Manager in conducting) research for the problem.
- Investigate whether there is existing UX Research in the [UX Research Archive](https://gitlab.com/gitlab-org/uxr_insights), [Dovetail](https://dovetailapp.com/), or other data that could help inform your decisions and measure results. If there isn't existing UX Research, contact your [UX Researcher](/handbook/product/ux/ux-research/how-uxr-team-operates/) to conduct (or guide you and your Product Manager in conducting) research for the problem.
- Consider conducting competitive analysis to inform your work. Look for terminology, functionality, and UX conventions that can help refine success criteria. Only stray from industry conventions with strategic intent, such as capitalizing on [disruptive innovation](https://www.economist.com/the-economist-explains/2015/01/25/what-disruptive-innovation-means) opportunities. We want users to migrate from other tools to ours, and they’re more likely to be successful and satisfied when our products use conventions that are familiar based on other tools they've used.
- Consider creating user flows or journey maps to help ensure you've considered the entire workflow and also to help communicate that workflow to your team.
 
@@ -216,7 +216,7 @@ When applying iterative design, you should consider the longer-term strategy or
- Ask for feedback from other Product Designers in [Design Reviews](#design-reviews) to help improve your work. At minimum, you'll get objective feedback and new ideas that lead to better solutions. You might also get context you didn’t know you were missing, such as GitLab-specific or industry-standard design conventions.
- Collaborate with your group's Technical Writer when the work involves substantial UI text, such as user-assistance or links back to documentation. For details on how to collaborate, see the [UI text Planning and authoring](/handbook/product/ux/technical-writing/workflow/#ui-text) section of the Technical Writing handbook. Additionally, involve your technical writer in the [review process](#technical-writer-ui-text-reviews) for smaller copy changes, such as UI elements labels.
- For a significant UX change, like a new workflow or feature, include your Product Design Manager in feedback sessions, as they might have input into the overall direction of the design or knowledge about initiatives on other teams that might impact your own work.
- If the team does not have a high level of confidence in a direction, there are multiple design solutions, or the direction is a significant risk, [validate](/handbook/product-development-flow/#validation-phase-4-solution-validation) your proposed solution with customers/users by leveraging [UX research methods](/handbook/product/ux/ux-research-training/solution-validation-and-methods/). If the team has a high level of confidence in a direction or design solution and the risk is low, it's fine to gather feedback from customers only after releasing the MVC.
- If the team does not have a high level of confidence in a direction, there are multiple design solutions, or the direction is a significant risk, [validate](/handbook/product-development-flow/#validation-phase-4-solution-validation) your proposed solution with customers/users by leveraging [UX research methods](/handbook/product/ux/ux-research/solution-validation-and-methods/). If the team has a high level of confidence in a direction or design solution and the risk is low, it's fine to gather feedback from customers only after releasing the MVC.
- Use the [design and UI changes checklist](https://docs.gitlab.com/ee/development/contributing/design.html#checklist) to help you think through how your design will read, look, and behave.
 
#### GitLab Design Talks: Iteration
@@ -301,8 +301,8 @@ If the content that should go in the drawer doesn’t exist yet:
 
UX Researchers work closely with Product Managers and Product Designers to ensure research projects are focused and provide answers to design questions.
 
- Ensure you follow the [process to request research](/handbook/product/ux/ux-research-training/how-uxr-team-operates/#how-to-request-research) (even if you are conducting the research yourself)
- Ensure you follow the process to [document research findings](/handbook/product/ux/ux-research-training/documenting-research-findings/)
- Ensure you follow the [process to request research](/handbook/product/ux/ux-research/how-uxr-team-operates/#how-to-request-research) (even if you are conducting the research yourself)
- Ensure you follow the process to [document research findings](/handbook/product/ux/ux-research/documenting-research-findings/)
 
 
### Refine MVC
@@ -135,7 +135,7 @@ GitLab uses labels to categorize, prioritize, and track work. The following is a
* [**UX scorecard-rec** label](https://gitlab.com/groups/gitlab-org/-/issues?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=UX%20scorecard-rec): Indicates this issue is a recommendation that was a result of a UX scorecard review. It's OK if the issue was created prior to the scorecard being done; it can still be pulled into the set of recommendations.
* [**CM scorecard** label](https://gitlab.com/groups/gitlab-org/-/issues?sort=created_date&state=opened&label_name[]=CM+scorecard): Indicates the primary issue or epic for the [CM Scorecard](/handbook/product/ux/category-maturity-scorecards/). It is used to easily find current work and track efforts.
* [**cm-scorecard-rec** label](https://gitlab.com/groups/gitlab-org/-/issues?sort=created_date&state=opened&label_name[]=cm-scorecard-rec): Indicates this issue is a recommendation that was a result of a CM Scorecard.
* [Actionable Insights](/handbook/engineering/ux/ux-research-training/research-insights/#how-to-document-actionable-insights) document learnings from research that need to be acted on.
* [Actionable Insights](/handbook/engineering/ux/ux-research/research-insights/#how-to-document-actionable-insights) document learnings from research that need to be acted on.
* [Actionable Insight::Exploration needed](https://gitlab.com/groups/gitlab-org/-/issues/?sort=updated_desc&state=opened&label_name%5B%5D=Actionable%20Insight%3A%3AExploration%20needed): A research insight derived from a UX research study that requires further exploration.
* [Actionable Insight::Product change](https://gitlab.com/groups/gitlab-org/-/issues/?sort=updated_desc&state=opened&label_name%5B%5D=Actionable%20Insight%3A%3AProduct%20change): A research insight derived from a UX research study and requires a change to the product experience.
 
......
......@@ -48,5 +48,5 @@ One aspect to consider with First Look participants is that they signed up to pr
 
### How can I use First Look for my research?
 
If you want to use participants from the First Look panel, please submit a [recruitment request issue](https://about.gitlab.com/handbook/product/ux/ux-research-training/recruiting-participants/#sts=Open%20a%20recruitment%20request%20issue) in the UX Research project and contact the UX Research Operations Coordinator within the issue to get started. The UX Research Operations Coordinator will determine whether First Look or another recruiting method best suits the needs of the study.
If you want to use participants from the First Look panel, please submit a [recruitment request issue](https://about.gitlab.com/handbook/product/ux/ux-research/recruiting-participants/#sts=Open%20a%20recruitment%20request%20issue) in the UX Research project and contact the UX Research Operations Coordinator within the issue to get started. The UX Research Operations Coordinator will determine whether First Look or another recruiting method best suits the needs of the study.
 
......@@ -110,7 +110,7 @@ In order to get people scheduled for interviews, we generally want to keep the f
1. **Respondent.io** is a participant recruitment service that is a good choice for studies aimed at software professionals who may or may not be GitLab users. This has been a good source to find security professionals and some other harder-to-reach participants. Respondent is also a great choice for when you need users quickly.
- Respondent.io is one of the fastest options; depending on criteria, participants typically start qualifying within a few hours.
 
1. **UserTesting.com** is an [unmoderated usability testing platform](https://about.gitlab.com/handbook/product/ux/ux-research-training/unmoderated-testing/). We can use their panel to recruit users for usability tests through their platform.
1. **UserTesting.com** is an [unmoderated usability testing platform](https://about.gitlab.com/handbook/product/ux/ux-research/unmoderated-testing/). We can use their panel to recruit users for usability tests through their platform.
- Participants typically start qualifying within a few hours depending on criteria.
 
1. **Social outreach.** Social posts may go out on GitLab's brand channels. The UX Research Operations Coordinator uses the Social Request template in the Corporate Marketing project to request this type of post. This is a good choice for studies primarily aimed at GitLab users. Product Managers, Product Designers, and other teammates are highly encouraged to help promote their studies through their networks.
......