2022-05-20 04:54:03 +0000 SEVERE: http://gitlab-ee-a33e99dc.test/gitlab-qa-sandbox-group/qa-test-2022-05-20-04-22-46-5cd6d64fe3e91525/web-only-pipeline-cfbf4592118e103a/-/pipelines - Failed to load resource: the server responded with a status of 422 (Unprocessable Entity)
```
3) Verify Run pipeline with web only rule can trigger pipeline
   Failure/Error: Page::Project::Pipeline::New.perform(&:click_run_pipeline_button)

   QA::Page::Validatable::PageValidationError:
     pipeline_header did not appear on QA::Page::Project::Pipeline::Show as expected
```
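For context, the error comes from the page-object validation layer rather than from an explicit assertion in the spec. Below is a minimal sketch of how the pieces fit together, assuming the standard GitLab QA `view`/`element` page-object DSL; the view path and exact declarations are illustrative rather than copied from the real page object:

```ruby
# Illustrative sketch (not the actual source): how the failing step relates to
# the validation error. Class and element names come from the error message;
# the view path is an assumption.
module QA
  module Page
    module Project
      module Pipeline
        class Show < Page::Base
          view 'app/assets/javascripts/pipelines/components/header_component.vue' do
            # `required: true` elements are verified when the page object is
            # used after navigation.
            element :pipeline_header, required: true
          end
        end
      end
    end
  end
end

# The spec clicks "Run pipeline" on the new-pipeline page...
QA::Page::Project::Pipeline::New.perform(&:click_run_pipeline_button)
# ...and the Show page is then validated. Because the POST to /-/pipelines was
# rejected with 422, the pipeline page never rendered, the :pipeline_header
# element was absent, and QA::Page::Validatable::PageValidationError was raised.
```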
It looks like a ~"failure::flaky-test" as it hasn't come up frequently. I'll take a look at this in the next milestone as it will take time to investigate as well.
@richard.chong - is this ready to move over into ~"workflow::ready for development"? If not, could we either get it into that state or move it to the next milestone by the end of the week?
Thanks @jheimbuck_gl - @richard.chong - since we are 1/2 way into %15.3, does this require any backend assistance here too? I'm just trying to better understand if this is something that should move out based on capacity. Thanks!
@marknuzzo Don't need backend assistance for this one at the moment. I need to investigate the root cause first and that's when I'll know what needs to be done to fix it. I will be spending time this week looking into this.
I've been running this test against staging and wasn't able to reproduce this, so I don't think it is still an issue at the moment. Looking at gitlab-org/quality/testcases#2021 (closed), there is no recent record of this failure either. I think we are okay to keep monitoring for now until it comes up again. I will downgrade the priority here to ~"priority::3" and continue monitoring. Once we hit the 3-month mark, I'll close this off if there are still no new instances of this issue.
Looks like this has something to do with the nightly environment specifically. There was a new occurrence a week ago, but it is still only on nightly. Continuing to investigate.
Found this error, which is pretty close to the time of the failure. It looks like there are occasional blips where the internal API becomes unreachable, so the tests subsequently fail.
{"severity":"ERROR","time":"2022-05-20T04:04:04.280Z","correlation_id":"a9e3a002e3b07f542ed3280303b1d921","exception.class":"Gitlab::Git::PreReceiveError","exception.message":"Internal API unreachable","exception.backtrace":["lib/gitlab/gitaly_client/operation_service.rb:391:in `user_commit_files'","lib/gitlab/git/repository.rb:917:in `block in multi_action'","lib/gitlab/git/wraps_gitaly_errors.rb:7:in `wrapped_gitaly_errors'","lib/gitlab/git/repository.rb:916:in `multi_action'","app/models/repository.rb:847:in `block in multi_action'","app/models/repository.rb:830:in `with_cache_hooks'","app/models/repository.rb:847:in `multi_action'","app/models/repository.rb:805:in `create_file'","app/services/files/create_service.rb:16:in `create_transformed_commit'","app/services/files/create_service.rb:10:in `create_commit!'","app/services/commits/create_service.rb:30:in `execute'","app/services/projects/create_service.rb:182:in `create_readme'","app/services/projects/create_service.rb:130:in `after_create_actions'","ee/app/services/ee/projects/create_service.rb:62:in `after_create_actions'","app/services/projects/create_service.rb:76:in `block in execute'","lib/gitlab/application_context.rb:103:in `block in use'","lib/gitlab/application_context.rb:103:in `use'","lib/gitlab/application_context.rb:48:in `with_context'","app/services/projects/create_service.rb:75:in `execute'","ee/app/services/ee/projects/create_service.rb:31:in `execute'","app/services/concerns/measurable.rb:35:in `execute'","lib/gitlab/database_importers/self_monitoring/project/create_service.rb:59:in `create_project'","app/models/concerns/stepable.rb:14:in `call'","app/models/concerns/stepable.rb:14:in `block in execute_steps'","app/models/concerns/stepable.rb:13:in `each'","app/models/concerns/stepable.rb:13:in `inject'","app/models/concerns/stepable.rb:13:in `execute_steps'","lib/gitlab/database_importers/self_monitoring/project/create_service.rb:27:in `execute'","(eval):3:in `block (2 levels) in run_file'","lib/gitlab/database/load_balancing/connection_proxy.rb:119:in `block in write_using_load_balancer'","lib/gitlab/database/load_balancing/load_balancer.rb:112:in `block in read_write'","lib/gitlab/database/load_balancing/load_balancer.rb:172:in `retry_with_backoff'","lib/gitlab/database/load_balancing/load_balancer.rb:110:in `read_write'","lib/gitlab/database/load_balancing/connection_proxy.rb:118:in `write_using_load_balancer'","lib/gitlab/database/load_balancing/connection_proxy.rb:70:in `transaction'","lib/gitlab/database.rb:290:in `block in transaction'","lib/gitlab/database.rb:289:in `transaction'","lib/tasks/gitlab/db.rake:104:in `block (3 levels) in \u003ctop (required)\u003e'"],"user.username":null,"tags.program":"web","tags.locale":"en","tags.feature_category":null,"tags.correlation_id":"a9e3a002e3b07f542ed3280303b1d921"}
I don't think this has to do with the feature or the spec being broken, and it will likely continue to come up occasionally. I'll downgrade this to ~"priority::4" and move it to the %Backlog. I'll keep this open in case there's a spike in occurrences.