The full a11y report being downloadable is great, but a developer like Sasha needs context within the Merge Request about the accessibility impact of the current change. That is not easily deciphered from the report today. We need to close that gap for Sasha so they can resolve new accessibility issues their change has introduced as early in the development cycle as possible.
Presley (Product Designer) - who can review the a11y impact of the MR and quickly provide suggestions for improvements.
Further details
The goal of this feature is to solve the problem that a full accessibility report contains too much information that isn't actionable. Sasha and Presley only want to make sure the change in the Merge Request is not degrading the accessibility of the project. If the change does degrade accessibility, they want actionable information on how to fix it. This may happen as part of initial development OR in a design review prior to merging the code.
This is part of the work that moves ~"Category:Accessibility Testing" to a "viable" maturity.
Goals
This iterates on the a11y MVC by bringing the difference between the latest scan and the last scan of /master into the context of the MR. The outcome(s) for this are as follows:
When I build code, I want to see the changes to a11y in the context of the MR, so that features I release can be used by everyone.
Proposal
Building on the MVC that made a .json report artifact available, and following the existing Code Quality pattern of comparing the new branch's report to the base branch's, display the changes between the two in a widget on the Merge Request page.
A link to the full report (or downloadable artifact) from the widget should be provided.
The widget should be minimized on page load but with an indicator that improvements/degradations have been found.
Permissions and Security
Documentation
Update existing documentation for setup for MR widget and any caveats for comparison (artifact expiration, need for two, etc.) to appear.
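To illustrate the "need for two" caveat: the comparison only works when both the source branch pipeline and the target branch pipeline have produced an accessibility report artifact that has not yet expired. A hypothetical job definition sketching this (the job name, script invocation, and file paths are assumptions, not the shipped template):

```yaml
a11y:
  stage: test
  script:
    # Hypothetical invocation; the actual MVC job may differ.
    - pa11y-ci --json > gl-accessibility.json
  artifacts:
    # The comparison needs this artifact on BOTH branches,
    # so expire_in must outlive the review cycle.
    expire_in: 7 days
    reports:
      accessibility: gl-accessibility.json
```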
Testing
@zeffmorgan - can you take a look at your convenience?
What does success look like, and how can we measure that?
Acceptance Criteria
MR Widget is displayed when a11y job is included.
Improvements and degradations have their own visual indicators.
If the diff cannot be generated, display a tooltip similar to the code quality pattern.
Provide a link to the information from the pa11y report about the issue found.
Track usage of the feature
Provide a count of how many times the MR widget is displayed to be queried in Periscope
Provide a count of how many times the MR Widget is expanded to be queried in Periscope
What is the type of buyer?
Individual Contributor.
Links / references
This page may contain information related to upcoming products, features and functionality.
It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes.
Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc.
@jheimbuck_gl - this is a neat issue. I would like to hear more about:
Why is this GitLab Premium? Is it because directors care about a11y testing in general and will actually be the ones consuming this?
Where will this interaction with Sasha really occur? Is it in the UI and where? Is it in the CLI? If so, what is the first step to getting the interaction between the MR and a11y results?
Where else could these results be useful? As in, are there items in other stages - like the Design group that would find Web Accessibility results interesting?
Can you elaborate more on what this will accomplish from a use case view? Are we saving time for the developer? "Shifting Left" the devops cycle? Increasing Quality? Delivering better compliance as a part of improved time to market?
Why is this GitLab Premium? Is it because directors care about a11y testing in general and will actually be the ones consuming this?
I was thinking visual review tools, which is in Premium, was a requirement, but it's not; review apps is, and that's in Core. I'll revise.
Where will this interaction with Sasha really occur? Is it in the UI and where? Is it in the CLI? If so, what is the first step to getting the interaction between the MR and a11y results?
In this iteration it is in the MR as a widget much like code quality. I'll get the design links added to help tell that story.
Where else could these results be useful? As in, are there items in other stages - like the Design group that would find Web Accessibility results interesting?
Designers would very likely find this interesting, maybe @cdybenko would be interested as well.
Can you elaborate more on what this will accomplish from a use case view? Are we saving time for the developer? "Shifting Left" the devops cycle? Increasing Quality? Delivering better compliance as a part of improved time to market?
Shifting left. The use case is more well defined in the MVC, but I'll copy it over here and call out the case of "I want to see how my changes impacted a11y before merge, so I can fix them without deploying the full app and testing manually".
From a Design Management perspective, I was thinking of taking the Design Tab to Core, and then the Premium upgrade would be things like visual review and post-handoff communication where the designs are present for inspection. Some of the paid options in this space are in this competitive analysis doc: https://docs.google.com/document/d/1sW6Wxmb_RPlHKi06phfqiV_aAZ4TUbNZtY_EnJGvMVU/edit
I'm still open to it being Core, but it would be a neat carrot as well if we could nail the visual inspection of designs against the review app.
I think this aligns with that vision @cdybenko. More features targeted at Managers/directors for both visual review tools and a11y are likely to be in Starter or above.
Not sure if this is existing copy for other widgets, but "for the source branch only" seems ambiguous to me. My first read was that only the source branch was analyzed and 10 issues were found on it.
Something like "no new issues" or "0 issues introduced in the changes" might be better. Something that explicitly states that we're talking about a diff.
@drewcimino's suggestions sound applicable. "No new issues introduced in the changes" is clear to me. Are we only talking about the most recent submitted change(s)? That would be the only other part I'm curious about -- if we should clarify it's only referring to the most recent proposed change(s).
Thanks for circling back here @jj-ramirez! Reading again, it occurred to me that "issues" could potentially be confusing since we use the terms issues/merge requests so frequently in other contexts. The suggestion as is still makes perfect sense and shouldn't be confusing to users, however we could consider using "problems" or "discrepancies" instead (although "discrepancies" sounds a little too formal). Just a thought!
@jheimbuck_gl JSON a11y artifacts are currently ~"workflow::In dev" #118856 (closed), but this will also require some backend to be able to compare the reports and come up with diff data.
@iamricecake and @morefice I started looking at TestReportsComparerEntity as a model for "AccessibilityReportsComparerEntity", and was thinking it's generic enough that we might be able to consider the exposed fields there a general API for compared reports. Same with TestSuiteComparerEntity; it's sort of a "count of bad things, count of new bad things" (positive or negative). WDYT of using the same classes versus a near-copy?
I think it would be more explicit to make a separate report for it. This will make the code very clear about its original intent. Sometimes duplication is necessary for better readability/simplicity.
What are the pros and cons to create a different one for our new accessibility reports?
I would like us to take simple decisions, and later if we find too much duplication we can abstract it at a higher level. Thoughts?
@jheimbuck_gl @erushton I've split off the backend work required to support this widget into a separate issue, with a plan for implementation and testing with acceptance criteria. I've marked that as Deliverable since the proposal and scope both seem very reasonable to me.
@mfluharty @shampton I think this issue can be marked as Deliverable if that backend issue looks like it'll have everything you need for this, which will become the frontend and feature issue.
The status property would be used for the icon on the left of each issue. I'm not sure if we should separate out the message into different parts based on the design. Maybe you understand more of what the backend will be generating than I do.
@shampton I think we'll use the same structure as TestSuiteComparerEntity; in this ecosystem I think an a11y scan is equivalent to a test suite, and would come back in a TestReport alongside rspec, junit, etc. (They wouldn't actually be side-by-side by default, but structurally they'd be at the same level and it would be possible to configure them that way.)
TestSuiteComparerEntity uses new_, existing_, and resolved_ prefixes, which I think we would keep. But rather than failures and errors, use errors, warnings, and notes.
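A minimal sketch of how such a comparison could bucket issues into the new_/existing_/resolved_ groups, assuming each scan is an array of issue hashes and that issue identity is the tuple of code, message, and selector (the function name and identity key are illustrative assumptions, not the actual entity implementation):

```ruby
# Sketch: bucket head-scan issues against base-scan issues into the
# new_/existing_/resolved_ groups used by the comparer entities.
# Issue identity here is (code, message, selector) -- an assumption.
def compare_a11y_scans(base_issues, head_issues)
  key = ->(issue) { issue.values_at(:code, :message, :selector) }
  base_keys = base_issues.map(&key)
  head_keys = head_issues.map(&key)

  {
    new_errors: head_issues.reject { |i| base_keys.include?(key.call(i)) },
    existing_errors: head_issues.select { |i| base_keys.include?(key.call(i)) },
    resolved_errors: base_issues.reject { |i| head_keys.include?(key.call(i)) }
  }
end
```

The same bucketing would apply per severity (errors, warnings, notes) in a real comparer.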
@drewcimino Yeah, I know the information is wrong. I was just putting stuff in there as a placeholder. Sorry for the confusion! I just wanted to make sure the structure was what you were expecting to send over.
@shampton I was chatting with @morefice and he suggested that we might be able to ship a smaller MVC by doing the comparison in the frontend, the way we do with Code Quality. Considering the eventual/possible size of the a11y artifacts this might not be the best long-term solution, but would it allow the widget to ship faster?
I can't remember if we discussed doing it this way and had a strong reason to not, initially.
I can't remember what our discussion on this was either. I don't know if the size of these artifacts is going to be a major problem on an already slow MR page. Maybe @jheimbuck_gl has some better insight here.
@morefice and I have the same concern about artifact size, but if it's quick and not a lot of work to unwind later, we could always set up the service on the backend and switch the widget to that in the next release.
In the meantime, I'm trying to get an idea of how complicated the parsing will be on the backend. I will keep the schema as close as possible to the current one we have today.
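As a rough sense of the parsing involved, assuming the pa11y-style JSON shape shown in the examples later in this thread (an array of issue objects with code, type, and message fields; the function name is a hypothetical, not a confirmed schema or API):

```ruby
require 'json'

# Sketch: parse a pa11y-style report and tally issues by type
# (error / warning / notice). The JSON shape is assumed from the
# examples in this thread, not a confirmed schema.
def summarize_a11y_report(json)
  JSON.parse(json)
      .group_by { |issue| issue['type'] }
      .transform_values(&:count)
end
```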
@drewcimino @morefice @jheimbuck_gl I'm working on parsing on the frontend and I have some questions on how these issues should be combined. If two issues have the code WCAG2AA.Principle1.Guideline1_4.1_4_3.G18.Fail, they can be combined to say "found two errors of the following type: WCAG2AA.Principle1.Guideline1_4.1_4_3.G18.Fail". However, the messages may differ. For example, one issue with that code might say:
This element has insufficient contrast at this conformance level. Expected a contrast ratio of at least 4.5:1, but text in this element has a contrast ratio of 3.84:1. Recommendation: change text colour to #767676.
and another with the same code might say:
This element has insufficient contrast at this conformance level. Expected a contrast ratio of at least 4.5:1, but text in this element has a contrast ratio of 2.82:1. Recommendation: change background to #d1470c."
If the messages are different, should we keep them separate? I'd imagine so, but I just want to make sure. The examples in the design are truncated to only show This element has insufficient contrast at this conformance level. so I wasn't sure if we just truncate all of the messages or if we show the whole thing. If we truncate them, then I guess we can just combine the messages if the truncated messages are equal.
Looking at the design, it suggests we should combine them. This would probably add some extra work for our MVC; maybe we can keep it simple and not merge them yet?
Thanks @jheimbuck_gl. Just to clarify, are we separating all issues even if they have the same code and message, or are we only separating issues that have different messages and combining those that have the same code and message?
@shampton I can't recall off the top of my head if we're presenting a count anywhere. To avoid confusion, let's just separate everything for now and we can iterate on this after.
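For reference, the combining approach discussed above could look roughly like this, grouping on the pair of code and (optionally truncated) message; the function name and truncation length are placeholders, not values from the design:

```ruby
# Sketch: combine issues that share both code and message.
# TRUNCATE_AT is a placeholder, not a value from the design.
TRUNCATE_AT = 80

def combine_issues(issues, truncate: false)
  issues.group_by do |issue|
    message = issue[:message]
    message = message[0, TRUNCATE_AT] if truncate
    [issue[:code], message]
  end.map do |(code, message), group|
    { code: code, message: message, count: group.size }
  end
end
```

With `truncate: true`, issues whose messages only differ past the cutoff would merge, matching the truncated presentation in the design mockups.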
Talking with @drewcimino, we realized we need to add the base path and head path for the accessibility reports in the app/serializers/merge_request_widget_entity.rb file similar to how the code quality report is done in the EE version of that file. This will probably need to be a separate MR.
@morefice as I was working on updating the frontend to use the backend API endpoint, I realized I don't know how the schema is going to look with the URLs included. Will it look like this?:
```ruby
{
  :status => "failed",
  :new_errors => {
    "https://gitlab.com" => [
      {
        :code => "WCAG2AA.Principle4.Guideline4_1.4_1_2.H91.A.NoContent",
        :type => "error",
        :type_code => 1,
        :message => "Anchor element found with a valid href attribute, but no link content has been supplied.",
        :context => "<a class=\"social-icon\" target=\"_blank\" href=\"https://gitlab.com\" rel=\"nofollow noopener noreferrer\">\n <svg xmlns=\"http://www...</a>",
        :selector => "html > body > header > div > nav > a",
        :runner => "htmlcs",
        :runner_extras => {}
      }
    ]
  },
  :resolved_errors => [],
  :existing_errors => {
    "https://gitlab.com" => [
      {
        :code => "WCAG2AA.Principle4.Guideline4_1.4_1_2.H91.A.NoContent",
        :type => "error",
        :type_code => 1,
        :message => "Anchor element found with a valid href attribute, but no link content has been supplied.",
        :context => "<a class=\"site-title active\" rel=\"author\" href=\"/\">\n <svg version=\"1.0\" xml...</a>",
        :selector => "html > body > header > div > nav > a",
        :runner => "htmlcs",
        :runner_extras => {}
      }
    ]
  },
  :summary => { :total => 1, :resolved => 0, :errored => 2 }
}
```
So the accessibility report includes the URLs, but they will not be exposed to the API yet for the sake of simplicity.
We can add them later when required, as we iterate on this feature with a new design.
JSON schema
```ruby
{
  :status => "failed",
  :new_errors => [
    {
      :code => "WCAG2AA.Principle4.Guideline4_1.4_1_2.H91.A.NoContent",
      :type => "error",
      :type_code => 1,
      :message => "Anchor element found with a valid href attribute, but no link content has been supplied.",
      :context => "\n ",
      :selector => "html > body > header > div > nav > a",
      :runner => "htmlcs",
      :runner_extras => {}
    }
  ],
  :resolved_errors => [],
  :existing_errors => [
    {
      :code => "WCAG2AA.Principle4.Guideline4_1.4_1_2.H91.A.NoContent",
      :type => "error",
      :type_code => 1,
      :message => "Anchor element found with a valid href attribute, but no link content has been supplied.",
      :context => "\n ",
      :selector => "html > body > header > div > nav > a",
      :runner => "htmlcs",
      :runner_extras => {}
    }
  ],
  :summary => { :total => 1, :resolved => 0, :errored => 1 }
}
```
I have an MR in maintainer review that finishes up the frontend work when the comparison is done on the client side. It is behind a feature flag, but wouldn't render anyway since the base_path and head_path for the accessibility report are not exposed currently.
I have another MR in review that switches the comparison to the backend endpoint. This will still be behind a feature flag, and also won't work until we actually have the endpoint path passed to the frontend. So once the backend work is completed we can follow up to make sure the integration is seamless and make any changes if necessary.
@shampton what's going to be the LAST MR that needs to be merged before this is live? That's the one I'd like to reference in the release post MR, so when it's merged we know to merge the release post content. Thanks!
@jheimbuck_gl - do you mean create an issue to remove the feature flag? The feature flag is already disabled and hiding the widget. I just want to clarify before creating the issue.
Sorry, I wasn't clear @shampton. An issue to flip the flag to whatever state is needed to make this feature show up, after the last MR is merged and closed. We can do another issue to remove it and always have this on.
@jheimbuck_gl @morefice I realized we need to expose the accessibility endpoint path so that the frontend has it. I'll create an MR now and ping @morefice to review. Should be really small.