Autoscaler Executor with Windows images using Hyper-V isolation: net/http: timeout awaiting response headers
Summary
Every job that triggers the creation of a new instance via the autoscaler executor, backed by an Azure virtual machine scale set and using a Windows image that runs in Hyper-V isolation, fails with a "net/http: timeout awaiting response headers" error.
I also use Windows images with process isolation and have not seen the problem there, though it is possible the two things are unrelated.
As mentioned under "Possible fixes" below, the solution for me would be either increasing the timeouts or making them configurable.
Steps to reproduce
I am not sure how reproducible this is outside my environment. For me it is enough to use a Hyper-V isolated Windows image in an Azure virtual machine scale set and start any job.
Actual behavior
The job times out and fails.
Expected behavior
The job runs through and succeeds.
Relevant logs and/or screenshots
Job log:
Dialing instance 1...
Instance 1 connected
Using Docker executor with image xxxxx ...
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image xxxxx ...
Using docker image xxxxx for xxxxx with digest xxxxx@sha256:xxxxx ...
Preparing environment
Running on RUNNER-xxxx via vm-prod-gitlab-runner-DS1-v2...
Getting source from Git repository
Skipping Git repository setup
Skipping Git checkout
Skipping Git submodules setup
Executing "step_script" stage of the job script
Using docker image sha256:xxxx for xxxx with digest xxxx@sha256:xxxx ...
Cleaning up project directory and file based variables
ERROR: Job failed (system failure): Post "http://internal.tunnel.invalid/v1.43/containers/7697666f3890479374beb57699a4cd8196000bffc22222fbaa0fef5121604fb7/start": net/http: timeout awaiting response headers (exec.go:77:120s)
Used GitLab Runner version
Running with gitlab-runner 17.3.0
Possible fixes
I already build my own GitLab Runner due to #38014, so I increased all the timeouts in options.go and the problem did not occur again.
Making the timeouts configurable would be the better fix; simply increasing the defaults would also work, at least for me.
