Job inputs – manual QA testing
Job inputs are in the final stages before we release the feature in %18.10 – docs are being finalized and we're preparing to record a demo for the release post.
To help us meet our quality expectations, we want to spend some time manually giving the new feature a test drive from a more neutral perspective – someone who has not actively worked on the feature or documentation. The goal here is to find rough edges or confusing aspects that users might otherwise discover and report after release.
This is an approach we haven't really tried before, but in a world where testing/QA is owned by product teams themselves, I'm hoping that this will help us ship more confidently. As such, there's not a lot of guidance for how to do this task yet, and it may depend a fair bit on the feature in question.
Basic steps that make sense to me:
- Ensure you're able to use the functionality (enable relevant FFs / use new enough version of GitLab/Runner)
- Read through the documentation
- Follow documentation to test documented usage patterns
- Extend documented usage patterns with potential misuse or misguided expectations, e.g.
  - Set an empty value for something that shouldn't be empty
  - Use variables to see if they're expanded or not
  - Use the feature in an included file
  - Test limits – a lot of inputs, very large values for an input, etc.
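As a starting point for those misuse probes, here is a hypothetical test configuration. It borrows the existing `spec:inputs` syntax for CI/CD inputs – the exact syntax for the new job-inputs feature in %18.10 may well differ, so treat the names and structure as illustrative assumptions, not the documented interface:

```yaml
# inputs-spec.yml – hypothetical included file declaring inputs.
# Syntax based on the existing `spec:inputs` mechanism; adjust to
# whatever the job-inputs docs actually specify.
spec:
  inputs:
    environment:
      default: staging
    replicas:
      type: number
---
deploy:
  script:
    - echo "Deploying to $[[ inputs.environment ]] with $[[ inputs.replicas ]] replicas"
```

The calling pipeline can then deliberately push on the edges listed above:

```yaml
# .gitlab-ci.yml – exercising the misuse cases from the list:
include:
  - local: inputs-spec.yml
    inputs:
      environment: ""                  # empty value – rejected, or silently accepted?
      replicas: $CI_PIPELINE_IID       # variable – expanded before validation, or passed literally?
```

Each probe has a question attached (does validation catch it, and is the error message understandable?), which is exactly the kind of rough edge this exercise should surface.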
Maybe also ask Duo about potential ways that users might misunderstand or (unintentionally) abuse the functionality based on how it is documented.
If we find rough edges, we can polish the experience or documentation a bit further to avoid support tickets or bug reports. If we don't find anything – great. That's not a guarantee that someone else won't – in that case we can learn what to test/look for in the future.