This is a follow-up to an issue created earlier to support a minimal e2e test. The scope of that earlier issue was to have a working e2e spec that can be run locally, but not in CI or a long-lived sandbox environment.
The scope of this ticket is to investigate and implement the changes needed to support an e2e spec in a CI/sandbox environment.
This effort breaks down into the following steps (see the setup sketch after the list):
Install an ingress controller in the k8s cluster
Set up the GitLab agent in the k8s cluster
Install the Workspaces proxy in the k8s cluster
Change the current e2e spec to perform all of the prerequisites
Verify that, once the workspace is created, clicking the workspace link loads the VSCode Web IDE and the terminal is usable
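For reference, a minimal sketch of the cluster setup for the first three items could look something like the following. This is an illustration rather than the final CI script: it assumes Helm is available, that the standard `ingress-nginx` and GitLab agent charts are acceptable choices for the sandbox cluster, and that the `<agent-token>`, `<kas-address>`, and `<workspaces-proxy-chart>` placeholders are filled in from the agent registration step and the Workspaces proxy documentation.

```shell
# 1. Ingress controller (ingress-nginx, assumed choice for the sandbox cluster)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# 2. GitLab agent for Kubernetes (token and KAS address come from the agent
#    registration step in the GitLab UI; the values here are placeholders)
helm repo add gitlab https://charts.gitlab.io
helm install gitlab-agent gitlab/gitlab-agent \
  --namespace gitlab-agent --create-namespace \
  --set config.token=<agent-token> \
  --set config.kasAddress=<kas-address>

# 3. GitLab Workspaces proxy -- installed via its Helm chart; the chart source
#    and required values (TLS certificates, signing key, etc.) should be taken
#    from the Workspaces proxy documentation rather than this sketch.
helm install gitlab-workspaces-proxy <workspaces-proxy-chart> \
  --namespace gitlab-workspaces --create-namespace
```

The e2e spec would then only need to assume a cluster with these components already in place, rather than provisioning them itself on every run.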
My understanding is that this issue will be handed over to you soon. Just to highlight: you may also want to take a look at the work done in an earlier issue to support a local e2e spec (it is disabled in CI, however).
Do let me know if I can fill in any context / background info that you may find useful.
Nivetha Prabakaran changed title from Support end-to-end QA test in CI of remote dev architecture spike to Support end-to-end QA test in CI for Workspaces
I seem to remember that the conclusion was that they should not, because the QA team members are not fully dedicated to the Remote Development team, and therefore their work should not be included and calculated as part of our team's velocity.
We can look for the discussion, but I think that was the conclusion.
However, it should be noted that the issues that @shekharpatnaik owns ARE managed as part of our workflow, even though he is not 100% dedicated to our team. And also @ealcantara splits his effort with Category:Web IDE.
So, from that perspective, our velocity is already inherently inaccurate due to them not being 100% dedicated to the team (although we can evolve our process in the future to account for this).
But I also believe they are in a different situation because their assigned features/issues are directly related to the product, require awareness and discussion of the full team during IPMs, and may be blockers to other issues. Thus they warrant being handled as part of our standard remote development workflow.
In any case, we need to decide how to handle QA issues, especially in light of our new automations.
PROPOSAL 1:
Any issues that QA owns and is working on should have the QA label assigned, in addition to the Category:Remote Development label.
All of the remote development workflow bot automation rules should be updated to exclude any issues which have the QA label. I.e., it will not touch them or make any automated updates or comments related to milestones, iterations, labels, etc.
The benefit of this proposal is that we are not imposing our workflow on the QA team members, i.e. by asking them, as in this case, to submit an issue refinement and to manage iteration/milestone/etc.
One slight downside of this approach is that these QA issues will then not show up on our iteration planning board, because they will not have any ~rd-workflow::* label.
PROPOSAL 2:
Alternately, we could require that the QA issues be included and managed according to our remote development team workflow.
The downside of this, as noted above, is that it places requirements on QA team members to follow our process, which may not provide value to them in their workflow.
One other benefit of PROPOSAL 1 is that we can use this same process precedent (including automation rules) for other types of issues that we don't necessarily want to see on our iteration planning board, such as the security issues that may linger open for many weeks.
We might even want to introduce a new generic ~rd-workflow::ignore state to be used for this purpose, which would allow us to write the automation rules in a generic way. And we could also even show this column on the iteration board if we wanted to have visibility of them there, but outside the standard process flow.
Are you happy with PROPOSAL 1, i.e. we add the QA label to any issues you are working on and exclude them from the Category:Remote Development automation? I agree with @cwoolley-gitlab that this is the best path forward.
> We might even want to introduce a new generic ~rd-workflow::ignore state to be used for this purpose, which would allow us to write the automation rules in a generic way. And we could also even show this column on the iteration board if we wanted to have visibility of them there, but outside the standard process flow.
This actually seems like the best choice; let's just mark these as ~rd-workflow::ignore and our automation can ignore those items.
> But I also believe they are in a different situation because their assigned features/issues are directly related to the product, require awareness and discussion of the full team during IPMs, and may be blockers to other issues.
This is a great point @cwoolley-gitlab and that's the tricky distinction.
I also support a ~rd-workflow::ignore label that we use for automation, as it resolves most of my other concerns about the meta issues that get assigned to me as well.
@cwoolley-gitlab @oregand PROPOSAL 1 seems great: we can add the QA label and also the ~rd-workflow::ignore label so the issue can appear on the board if needed but does not affect the iteration.
Thank you, Chad, for this proposal; it would really make it easier for the QA team members to be excluded from the refinement process.