The Heuristic Buddy UX Scorecards are a twist on our UX Scorecard process. They are specifically designed to help identify usability and learnability improvements, and they are completed by a designer who does not work within the product area(s) where the job can be completed. Learn more about UX Scorecards
The initial preparation is completed by the Group Product Designer. Once the preparation is complete, they hand the issue over to the Heuristic Buddy, who conducts the evaluation and then hands it back to the Group Product Designer to add any recommendations. Read through the steps below for details.
Group Product Designer (Expert)
Add this issue to the stage group epic for the corresponding UX scorecards. Verify that the "UX scorecard" label is applied.
After working with your PM to identify a top job, write it using the Job to be Done (JTBD) format: When [situation], I want to [motivation], so I can [expected outcome]. Review it with your manager to ensure your JTBD is written at the appropriate level. Remember, a JTBD is not a user story: it should not directly reference a solution, and it should be tool agnostic.
Make note of which personas might be performing the job, and link to them from this issue's description. Keeping personas in mind allows us to make the best decisions to address specific problems and pain points. Note: Do not include a persona in your JTBD format, as multiple types of users may complete the same job.
If your JTBD spans more than one stage group, that’s great! Review your JTBD with a designer from that stage group for accuracy. Note: This stage group's designer cannot be your Heuristic Buddy.
Consider whether you need to include additional scenarios related to onboarding.
Ping your Heuristic Buddy and let them know it's ready for them to conduct the evaluation.
Work with your Heuristic Buddy to ensure they'll evaluate GitLab in an environment setup appropriate to a new user attempting to complete the JTBD you've selected.
Heuristic Buddy (Evaluator)
Review the current experience, noting where you expect a user's high and low points to be based on our UX Heuristics. Using an experience map, such as the one found in this template, capture the screens and jot down observations. Ideally, use our scoring template.
During the evaluation, strive to wear the hat of the persona relevant to the JTBD, and try to see the UI from their perspective as if they were a new user.
This is easy to forget as you progress through your evaluation, so it's recommended to put a reminder somewhere in your view, such as a sticky note on your monitor that says "You're a new user!"
Use the Grading Rubric to provide an overall measurement that becomes the Benchmark score for the experience (one grade per JTBD), and add it to this issue's description. Document the score in the UX Scorecard Spreadsheet.
Once testing is complete, create a walkthrough video that documents what you experienced when completing the job in GitLab. Begin the video with a contextual introduction including:
Your role and stage group
How you conducted the heuristic evaluation
A short introduction describing the JTBD and the purpose of the UX scorecard (i.e., you're performing the evaluation in partnership with {stage group} and {product designer})
This is not a "how-to" video, but instead should help build empathy for users by clearly showing areas of potential frustration and confusion. (You can point out where the experience is positive, too.)
At the end of the video, make sure to include narration of the Benchmark Score. Examples here and here.
The walkthrough video shouldn't take you long to create. Don't worry about it being polished or perfect, it's more important to be informative.
Post your video to the GitLab Unfiltered YouTube channel, and link to it from this issue's description.
Amelia Bauerly changed title from "UX Scorecard - On-call schedules" to "UX Scorecard - Monitor: FY FY22-Q3 - Creating an on-call schedule"
Amelia Bauerly changed the description
Amelia Bauerly marked the checklist item "Add this issue to the stage group epic for the corresponding UX scorecards…" as completed
Amelia Bauerly changed title from "UX Scorecard - Monitor: FY FY22-Q3 - Creating an on-call schedule" to "UX Scorecard - Monitor: FY22-Q3 - Creating an on-call schedule"
Amelia Bauerly marked the checklist item "Make note of which personas might be performing the job…" as completed
Amelia Bauerly marked the checklist item "If your JTBD spans more than one stage group…" as completed
Amelia Bauerly marked the checklist item "After working with your PM to identify a top job…" as completed
Amelia Bauerly marked the checklist item "Consider whether you need to include additional scenarios related to onboarding." as completed
We've recently released two new features, on-call schedules and escalation policies. This seems like a great opportunity to test out the jobs related to those new features.
JTBD for on-call schedules: I want to set up on-call schedules to handle my alerts so I can be confident that, if an alert is firing, the appropriate people will be paged.
JTBD for escalation policies: I want to set up escalation policies for my on-call schedules so I can be confident that, even if the first person paged to review an alert isn't available, someone else will be notified.
The job for escalation policies is actually ranked higher by our users than the job for on-call schedules, but you can't create an escalation policy without first creating an on-call schedule. So, I think we'd want to test the experience for creating an on-call schedule first, and then look at testing the experience for creating an escalation policy afterwards.
@abellucci - wanted to loop you in here! As part of our UX OKRs, a designer from another stage group (@kcomoli) will be creating a UX Scorecard for one of our JTBDs. Since we've recently released on-call schedules and escalation policies, I thought this might be a good chance to set a "base score" for one of those new experiences. I'm guessing we should start with on-call schedules, since you can't really create an escalation policy without a schedule. But, are you okay with that? If you have any concerns with this, just let me know :)
Awesome, thanks, @kcomoli! I think you could just create a new project from scratch and then see if you can figure out how to create an on-call schedule. Or you can use a project you already have to see how it would work in a project with more members. But, is there any additional context I can provide before you get started that would be helpful?
@ameliabauerly Got it. I just have one question: on-call schedules and escalation policies are both Premium features, so should I start the JTBD as a paid or free user?
That's a really terrific question, @kcomoli! I realize that not only is this a Premium feature, but you also have to be a maintainer to set up a schedule.
Given the personas and the job we're evaluating, I suppose starting as a paid user with the proper permissions (maintainer) would make sense as, otherwise, you won't actually be able to evaluate the job. But, if you're able to also check out how this would look for free users/people who don't have the correct permissions, I think that would be terrific.
Sidebar: I actually just tried out creating a schedule on this GitLab project (where I don't have the proper permissions for creating a schedule), and it strikes me that the experience could most definitely be improved. So, there will be ample fodder for feedback on this experience.
I actually just tried out creating a schedule on this GitLab project (where I don't have proper permissions)
@ameliabauerly That's an interesting one too! But I agree it's secondary. I'll focus mainly on paid user with maintainer permission.
If I have time, I think it makes sense to at least cover a paid user with inadequate permissions. I think a free maintainer is important too, but by design we are removing most of the Premium/Ultimate features from the UI for free users. This will most likely result in a dead end (which is still a finding).
I'll focus mainly on paid user with maintainer permission.
This all sounds great, @kcomoli. Really glad you are going to be looking through this workflow with fresh eyes, and that you are considering the different perspectives of paid/free users. Very excited to hear about all of the things we can improve!
@ameliabauerly @abellucci I just went through the JTBD and added both the experience map and the score to the issue description. Overall it was a pleasant experience, no struggles; great job here. I also evaluated the JTBD from a non-maintainer and free-user perspective but did not include these variants in the heuristic scoring.
I'll follow up with a walkthrough video to document the findings. In the meantime, I'm keen to get your thoughts on the experience maps.
Thanks so much for doing this, @kcomoli, this is amazing! A couple of thoughts:
You mentioned importing existing on-call schedules from wherever they are currently being managed. Though we don't currently support that, that's definitely something worth exploring. That would certainly reduce the friction of moving over into GitLab's on-call schedule management features.
Over-zealous error messages - oh, this is something we've been struggling with! So, we wanted to let people know about errors before submitting the form (so they had a chance to fix them before submitting) but we're left with error messages that are too easily triggered. We were trying to not show the messages until after the user tabs past the required field but they still trigger too quickly. We have an existing issue to fix some of the escalation policy errors but we should probably go back and see if we can further improve the schedule ones, too. Right now, you can't even submit the form until the errors are corrected. But, I suppose we could change the behaviors so that the form can be submitted, even with errors, and then show which fields are wrong? Neither approach feels like it's without drawbacks so it's a tricky one!
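The timing tradeoff described above (show errors early so users can fix them, but don't punish users mid-typing) can be sketched as a tiny decision function. This is purely an illustrative sketch, not actual GitLab code; the names `shouldShowError`, `wasBlurred`, and `submitAttempted` are hypothetical:

```javascript
// Sketch of the "only flag a field after blur or a submit attempt" pattern.
// A still-focused, half-typed field stays quiet; once the user tabs away
// or tries to submit, any remaining invalid state surfaces as an error.
function shouldShowError(fieldState) {
  const { isValid, wasBlurred, submitAttempted } = fieldState;
  if (isValid) return false;            // never flag a valid field
  return wasBlurred || submitAttempted; // flag only after blur or submit
}

// Typing in a still-focused, incomplete field shows no error yet:
console.log(shouldShowError({ isValid: false, wasBlurred: false, submitAttempted: false })); // false
// After the user tabs away, the error appears:
console.log(shouldShowError({ isValid: false, wasBlurred: true, submitAttempted: false })); // true
```

Under this approach the form could also remain submittable, with `submitAttempted` flipping to `true` on the first click so every remaining invalid field lights up at once.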
With the schedules/escalation policies on separate pages thing - I agree, it's awkward! The reason we separated it out was primarily because there can be a ton of schedules and policies, so it felt like too much to have on one page. But, I've also been thinking about the tab idea - maybe these pages could work as tabs?
The disappearing tip alert - we had the problem with a bunch of alerts stacking so we dismissed the tip alert after the rotation was added. I can see that this means we also lose the link to the escalation policy page though. Maybe you're right in that we should have permanent text linking to the escalation policy page? I wasn't sure because it's only really helpful the first time the user sets everything up. But I guess maybe we should just leave the tip alert there, even if there are stacked alerts, until the user chooses to dismiss it?
I love your suggestion to figure out some way of testing the escalation policy once it's all set up. That's a great idea! It's also weird that there was no success message that showed up when your rule was created. There should be!
Do you think it's worth hiding the button for creating on-call schedules for people who don't have proper permissions? Or, do you think it's useful for them to be able to see what they could do, as a way of getting a glimpse into what the feature is all about?
You mentioned doing some kind of a trial of our on-call schedule features, or some way of otherwise promoting them on the premium feature pages for people who want to learn more. I think this is such a terrific idea!! How would we pursue something like that? Is that something we could work on with the Growth team?
Anyway, this is just terrific, and it gives us so much to think about. I really appreciate all of this very thoughtful work!
So I guess the only remaining steps here are the walkthrough video and for me to go through and make recommendation issues for how to improve the items you highlighted?
@ameliabauerly Thank you for your feedback! Some thoughts below:
But, I suppose we could change the behaviors so that the form can be submitted, even with errors, and then show which fields are wrong? Neither approach feels like it's without drawbacks so it's a tricky one!
I agree it's tricky. I personally err on the side of letting people submit first, then displaying errors. But displaying the errors before submit works fine too. I think it's a minor pain point. The most important thing is to have alerts with actionable messaging, which is done well in this flow.
But, I've also been thinking about the tab idea - maybe these pages could work as tabs?
Tabs would be a good iteration, I think. Another thing I had in mind going through the flow is that escalation policies could be metadata of the on-call calendar, since they are tied together. I imagined they could be displayed in the popover or inline in the schedule (e.g., below). Not sure how relevant that would be.
But I guess maybe we should just leave the tip alert there, even if there are stacked alerts, until the user chooses to dismiss it?
That could work too. Another potential solution is to use the content of the alert as a permanent paragraph below the page title. Either way, I think both work and are easy fixes.
It's also weird that there was no success message that showed up when your rule was created. There should be!
Do you think this was a bug? Happy to change the evaluation if this is not the normal behaviour. To me this is a pain point as it can erode the "so I can be confident that..." part of the JTBD. It really influenced the grading.
Do you think it's worth hiding the button for creating on-call schedules for people who don't have proper permissions? Or, do you think it's useful for them to be able to see what they could do, as a way of getting a glimpse into what the feature is all about?
I would lean more toward swapping it based on their permissions. So if they don't have permission, the main action should be to request it. That said, we don't have a permission request system in the app right now. All we do is surface the maintainer/owner name when a higher permission level or an upgrade is required (see example below).
You mentioned doing some kind of a trial of our on-call schedule features, or some way of otherwise promoting them on the premium feature pages for people who want to learn more. I think this is such a terrific idea!! How would we pursue something like that? Is that something we could work on with the Growth team?
If it's in-app promotion, Growth can help. We would need to define whether this would be a value driver first, then set up an experiment to surface some of this feature and see if it drives trials. If it's updating the pricing page, I think the Digital Experience team could have some insights on why they chose some features over others in the tiers.
Thanks for all of these terrific ideas, @kcomoli. This is all great!
Another thing I had in mind going through the flow is that escalation policies could be metadata of the on-call calendar since they are tied.
That's a great suggestion! A schedule could be used in multiple escalation policies so it won't necessarily be a 1-1 correlation but I think noting some place that "this schedule is used in x policy" (or whatever) could be really useful. I suppose we could also surface that it's not currently being used in an escalation policy, as well, which might be another way of linking the on-call schedule page to the escalation policy page, should we not proceed with the tab idea. Anyway, I think that's a great suggestion!
Do you think this was a bug? Happy to change the evaluation if this is not the normal behaviour. To me this is a pain point as it can erode the "so I can be confident that..." part of the JTBD. It really influenced the grading.
I think it's just a detail we forgot to add in for escalation policies, to be honest! We just released escalation policies at the end of 14.1, so the whole feature is very new, and we're still spotting things that need to be cleaned up! I think we just missed that it wasn't there in doing our testing on the MRs. There should be one, though, just like we have for on-call schedules. I'm glad you caught that we forgot to add it for escalation policies!
We would need to define if this would be a value driver first, then setup an experiment to surface some of this feature and see if it drives trials. If it's updating the pricing page, I think the digital experience team could have some insights on why they chose some features over others in the tiers.
Gotcha, okay! This is where Monitor often has trouble, because we're not necessarily seen as a value driver (yet!), so it can be hard for us to make a case for promoting our features over others. I will at least try to start this conversation, though, to see if we can push to get some of the things we're building surfaced in some of these other ways.
@abellucci - also wanted to add a bit of context about the score! A B- translates to an experience that is right between minimal and complete, but that already falls into the complete bucket:
This is actually awesome news, then, because I think the on-call schedule category is still at Minimal. It means we've made great strides in our On-Call Schedule Management and Escalation Policy MVCs.
@abellucci - as part of this process, I have to go through and make follow-up issues capturing the recommendations. I'll do that when Kevin is finished, and I'll ping you on the issues for scheduling :)
I think it's just a detail we forgot to add in for escalation policies, to be honest! We just released escalation policies at the end of 14.1, so the whole feature is very new, and we're still spotting things that need to be cleaned up! I think we just missed that it wasn't there in doing our testing on the MRs. There should be one, though, just like we have for on-call schedules. I'm glad you caught that we forgot to add it for escalation policies!
@ameliabauerly Got it! I kept it as part of the evaluation.
I just added the walkthrough video and slide deck to the issue description.
Have internal or external users accomplish the JTBD. Record this session and document their experience. Note that 3-5 users are preferred, as this provides valuable insights and removes subjectivity. Make sure to avoid setting up a task-based usability study. The goal is to provide the participant context (the JTBD) and listen and watch how they attempt to complete the job. What we learn may differ from participant to participant.
@kcomoli - I feel like we got the things we needed from this step. You accomplished the JTBD and documented your experience using the murals. Your murals were so well done that it was the equivalent to watching a video, because I understood what you saw/thought at each step in the process, which I think was the point of this step.
I also just watched your walkthrough video! I think it serves both as a great summary of your scoring conclusions and as documentation of your experience tackling the JTBD. So I don't think we need an additional video at this point, at least not in my opinion. I think it would be okay to consider this one:
I just added the walkthrough video and slide deck to the issue description.
Amazing, thank you! I don't seem to be able to see the slides though. I think the access might need to be updated?
@abellucci - I've gone through and created a bunch of follow-up issues from Kevin's excellent walkthrough video and comments. Should I create an epic for all of these issues? Maybe a UX improvements for on-call schedule management epic or something similar? Let me know what you think.
@ameliabauerly - Thanks for creating the follow-up issues from Kevin's work! Let's capture them under one epic with the Category:On-call Schedule Management label. Can you mention me in the epic so I can take a look and review once they are ready?
@ameliabauerly @abellucci Thank you for the feedback along the way. I enjoyed going through this flow. I especially loved the fact that everything happened in the UI. In my opinion, we generally rely too much on documentation for general setup. Here, I did not open the documentation once; it's rare enough to be worth highlighting. Kudos to you and the team!
Kevin Comoli changed the description
Kevin Comoli changed the description
Kevin Comoli marked the checklist item "Review the current experience…" as completed
Kevin Comoli marked the checklist item "Use the Grading Rubric…" as completed
Kevin Comoli marked the checklist item "Review the current experience…" as incomplete
Kevin Comoli marked the checklist item "Use the Grading Rubric…" as incomplete
Kevin Comoli marked the checklist item "Post your video to the GitLab Unfiltered YouTube channel…" as completed
Kevin Comoli marked the checklist item "Once testing is complete, create a walkthrough video…" as completed
Kevin Comoli marked the checklist item "Use the Grading Rubric…" as completed
Kevin Comoli changed the description
Kevin Comoli marked the checklist item "Review the current experience…" as completed
Amelia Bauerly marked the checklist item "Create a recommendation issue for this JTBD…" as completed
Amelia Bauerly marked the checklist item "Have internal or external users accomplish the JTBD…" as completed
Amelia Bauerly marked the checklist item "Work with your Heuristic Buddy…" as completed
Amelia Bauerly marked the checklist item "Review the current experience…" as completed
Amelia Bauerly marked the checklist item "Use the Grading Rubric…" as completed
Amelia Bauerly marked the checklist item "Once testing is complete, create a walkthrough video…" as completed
Amelia Bauerly marked the checklist item "Post your video to the GitLab Unfiltered YouTube channel…" as completed
Amelia Bauerly marked the checklist item "Once the evaluation has been completed, ping the Stage Group Product Designer…" as completed