The scheduler failed to assign job to the runner

Please follow the troubleshooting guide if you run into this issue.

Troubleshooting: CI Job Scheduler Failures

When the error "The scheduler failed to assign job to the runner" appears, CI/CD pipelines fail without executing any jobs. This guide helps you diagnose and resolve the issue.

Symptoms

  • Pipeline jobs show "The scheduler failed to assign job to the runner" error
  • Jobs never get assigned to runners despite runners being available
  • Pipeline status changes from running to failed without executing
  • Logs show repeated messages: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:xxxxx

Root Causes and Solutions

1. Missing Signing Keys (Most Common)

Cause: Missing ci_jwt_signing_key or ci_job_token_signing_key in the database, typically after incomplete initial setup with external databases.

Solution:

  1. Check if signing keys exist (via the Rails runner, which handles GitLab's encrypted settings storage transparently):

    sudo gitlab-rails runner -e production 'puts ApplicationSetting.current.ci_jwt_signing_key.present?; puts ApplicationSetting.current.ci_job_token_signing_key.present?'
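
    Alternatively, the same check can be made at the database level from sudo gitlab-rails dbconsole. The query below is a sketch: the column names assume the encrypted_ prefix that GitLab's attr_encrypted storage uses for these settings.

    SELECT encrypted_ci_jwt_signing_key IS NOT NULL AS jwt_key_set,
           encrypted_ci_job_token_signing_key IS NOT NULL AS job_token_key_set
    FROM application_settings;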

  2. If keys are missing, generate and set them. Both settings hold keys used to sign RS256 JWTs, so each must be an RSA private key in PEM form; raw random bytes will not produce a usable signing key:

    sudo gitlab-rails runner -e production 'ApplicationSetting.current.update!(ci_jwt_signing_key: OpenSSL::PKey::RSA.new(2048).to_pem, ci_job_token_signing_key: OpenSSL::PKey::RSA.new(2048).to_pem)'

  3. Restart GitLab:

    sudo gitlab-ctl restart

Prevention: Run gitlab:setup during initial installation, especially with external databases. Caution: this task seeds the database and destroys any existing data, so it is only appropriate on a fresh install:

sudo gitlab-rake gitlab:setup

2. Incomplete Database Migrations

Cause: Database migrations or background migrations haven't completed after an upgrade.

Solution:

  1. Check migration status:

    sudo gitlab-rake db:migrate:status

  2. Check background migrations:

    sudo gitlab-rake gitlab:background_migrations:status

  3. If migrations are pending, wait for them to complete or manually run:

    sudo gitlab-rake db:migrate

3. Gitaly or Network Issues

Cause: Slow or failed calls to Gitaly service, or network connectivity problems between GitLab and runners.

Solution:

  1. Check Gitaly connectivity:

    sudo gitlab-ctl status gitaly

  2. Review logs for Gitaly errors:

    sudo gitlab-ctl tail gitaly
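
    To isolate likely failures in the stream, filter it with grep (a plain shell sketch):

    sudo gitlab-ctl tail gitaly | grep -iE 'error|fail'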

  3. Verify runner can reach GitLab:

    • From runner machine: curl -v https://your-gitlab-instance
    • Check firewall rules between runner and GitLab
    • Verify DNS resolution
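
A minimal sequence covering these checks from the runner host (your-gitlab-instance is a placeholder for your real hostname):

    curl -v https://your-gitlab-instance    # TLS handshake and HTTP reachability
    nslookup your-gitlab-instance           # DNS resolution
    sudo gitlab-runner verify               # confirms registered runners can still authenticate against GitLab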

4. Monitoring/Prometheus Configuration Issues

Cause: Alertmanager or Prometheus misconfiguration causing system instability.

Solution:

  1. Check for alertmanager errors in logs:

    sudo gitlab-ctl tail alertmanager

  2. If monitoring is causing issues, temporarily disable it:

    # In /etc/gitlab/gitlab.rb:
    prometheus_monitoring['enable'] = false
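
    If only Alertmanager is misbehaving, it can be disabled on its own; alertmanager['enable'] is the corresponding Omnibus setting:

    # In /etc/gitlab/gitlab.rb — leaves Prometheus itself enabled
    alertmanager['enable'] = false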

  3. Reconfigure GitLab:

    sudo gitlab-ctl reconfigure

5. Stale Exclusive Leases

Cause: Exclusive leases stuck in Redis from crashed processes.
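
To confirm stale leases before clearing anything, you can list the lease keys directly in Redis. This is a read-only sketch: it assumes the Omnibus gitlab-redis-cli wrapper and the gitlab:exclusive_lease: key prefix that the clear task below operates on.

    # SCAN is non-blocking, so this is safe on a live instance
    sudo gitlab-redis-cli --scan --pattern 'gitlab:exclusive_lease:*'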

Solution:

  1. Clear exclusive leases:

    sudo gitlab-rake gitlab:exclusive_lease:clear

  2. Restart GitLab:

    sudo gitlab-ctl restart

Diagnostic Steps

If the above solutions don't work, gather diagnostic information:

  1. Check all migrations are complete:

    sudo gitlab-rake db:migrate:status
    sudo gitlab-rake gitlab:background_migrations:status
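
    To surface only unapplied migrations, filter the status output ("down" in the first column is how db:migrate:status marks a pending migration):

    sudo gitlab-rake db:migrate:status | awk '$1 == "down"'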

  2. Verify runner registration:

    • Go to Admin > Runners
    • Confirm runners show as "online"
    • Check runner logs for connection errors
  3. Review application logs:

    sudo gitlab-ctl tail gitlab-rails
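
    To watch specifically for the lease failures shown under Symptoms while reproducing the problem, filter the stream:

    sudo gitlab-ctl tail gitlab-rails | grep -i 'exclusive lease'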

  4. Generate a diagnostic report for support; the gitlabsos tool (https://gitlab.com/gitlab-com/support/toolbox/gitlabsos) collects logs and system state into a single archive.

  5. Run GitLab health check:

    sudo gitlab-rake gitlab:check
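
    If you plan to attach the output to an issue or support ticket, the SANITIZE flag masks project and namespace names (the application check output at the end of this report was generated this way):

    sudo gitlab-rake gitlab:check SANITIZE=true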

When to Contact Support

If you've completed all steps above and the issue persists, collect:

  1. GitLab version and installation type (Omnibus, Docker, source)

  2. Runner version and configuration

  3. Output from gitlab-rake gitlab:env:info

  4. Output from gitlab-rake gitlab:check

  5. Recent logs from gitlab-ctl tail (last 100 lines)
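
    For a fixed snapshot rather than a live stream, read the Rails log file directly (the path below assumes an Omnibus install; adjust for source installs):

    sudo tail -n 100 /var/log/gitlab/gitlab-rails/production.log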

  6. GitLab SOS report (if available), generated with the gitlabsos tool linked in Diagnostic Steps above

Then contact GitLab Support with this information.

Summary

Running a CI pipeline does not work; I'm trying with Auto DevOps. I've tried running gitlab-rake gitlab:exclusive_lease:clear as suggested in #331033 (closed), but it does not change anything. I've also reported this on the forum: https://forum.gitlab.com/t/possible-bug-running-auto-devops-the-scheduler-failed-to-assign-job-to-the-runner-please-try-again-or-contact-system-administrator/81644

This GitLab installation has been upgraded continuously since GitLab 8 and was migrated from another machine. Everything appears to work except pipelines.

The problem has been reproduced on 15.6.6, 15.6.8, and 15.7.8. It may be present in earlier versions too, but those were not tested. It may not be a bug, but I cannot find any documentation about the problem or its possible resolution.

I also checked https://stackoverflow.com/questions/64607412/the-scheduler-failed-to-assign-job-to-the-runner-please-try-again-or-contact-sy, but since this is Auto DevOps, there is no configuration or variables involved.

What is the current bug behavior?

The pipeline does not work.

What is the expected correct behavior?

The pipeline should work, or at least provide some information to help resolve the problem.

Relevant logs and/or screenshots


2023-03-09T13:33:28.078Z: {:message=>"Enqueuing hooks for Pipeline 87: running", :class=>"Ci::Pipeline", :pipeline_id=>87, :project_id=>49, :pipeline_status=>"running"}
2023-03-09T13:33:29.424Z: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:87. There must be another instance already in execution.
2023-03-09T13:33:29.580Z: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:87. There must be another instance already in execution.
2023-03-09T13:33:29.593Z: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:87. There must be another instance already in execution.
2023-03-09T13:33:29.611Z: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:87. There must be another instance already in execution.
2023-03-09T13:33:29.627Z: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:87. There must be another instance already in execution.
2023-03-09T13:33:29.915Z: {:message=>"Enqueuing hooks for Pipeline 87: failed", :class=>"Ci::Pipeline", :pipeline_id=>87, :project_id=>49, :pipeline_status=>"failed"}
2023-03-09T13:36:34.948Z: ActiveRecord connection established
2023-03-09T13:36:36.018Z: {:message=>"Syncing dynamic postgres partitions"}
2023-03-09T13:36:36.019Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.035Z: {:message=>"Checking state of dynamic postgres partitions", :table_name=>"audit_events", :connection_name=>"main"}
2023-03-09T13:36:36.072Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.075Z: {:message=>"Checking state of dynamic postgres partitions", :table_name=>"web_hook_logs", :connection_name=>"main"}
2023-03-09T13:36:36.093Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.096Z: {:message=>"Checking state of dynamic postgres partitions", :table_name=>"loose_foreign_keys_deleted_records", :connection_name=>"main"}
2023-03-09T13:36:36.127Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.130Z: {:message=>"Checking state of dynamic postgres partitions", :table_name=>"batched_background_migration_job_transition_logs", :connection_name=>"main"}
2023-03-09T13:36:36.149Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.152Z: {:message=>"Checking state of dynamic postgres partitions", :table_name=>"incident_management_pending_alert_escalations", :connection_name=>"main"}
2023-03-09T13:36:36.173Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.176Z: {:message=>"Checking state of dynamic postgres partitions", :table_name=>"incident_management_pending_issue_escalations", :connection_name=>"main"}
2023-03-09T13:36:36.197Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.199Z: {:message=>"Checking state of dynamic postgres partitions", :table_name=>"verification_codes", :connection_name=>"main"}
2023-03-09T13:36:36.220Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.220Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.220Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.220Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.220Z: {:message=>"Switched database connection", :connection_name=>"main"}
2023-03-09T13:36:36.220Z: {:message=>"Finished sync of dynamic postgres partitions"}
2023-03-09T13:36:41.429Z: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:87. There must be another instance already in execution.
2023-03-09T13:36:41.480Z: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:87. There must be another instance already in execution.
2023-03-09T13:36:41.509Z: {:message=>"Enqueuing hooks for Pipeline 87: running", :class=>"Ci::Pipeline", :pipeline_id=>87, :project_id=>49, :pipeline_status=>"running"}
2023-03-09T13:36:44.382Z: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:87. There must be another instance already in execution.
2023-03-09T13:36:44.399Z: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:87. There must be another instance already in execution.
2023-03-09T13:36:44.415Z: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:87. There must be another instance already in execution.
2023-03-09T13:36:44.432Z: Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:87. There must be another instance already in execution.
2023-03-09T13:36:44.592Z: {:message=>"Enqueuing hooks for Pipeline 87: failed", :class=>"Ci::Pipeline", :pipeline_id=>87, :project_id=>49, :pipeline_status=>"failed"}

Results of GitLab environment info

Expand for output related to GitLab environment info

~:/var/log/gitlab# gitlab-rake gitlab:env:info
Attention: used pure ruby version of MurmurHash3

System information
System:         Debian 11
Current User:   gitlab
Using RVM:      no
Ruby Version:   2.7.4p191
Gem Version:    3.2.5
Bundler Version:2.2.5
Rake Version:   13.0.3
Redis Version:  6.0.16
Sidekiq Version:6.5.7
Go Version:     unknown

GitLab information
Version:        15.7.8
Revision:       Unknown
Directory:      /usr/share/gitlab
DB Adapter:     PostgreSQL
DB Version:     13.9
URL:            https://localhost
HTTP Clone URL: https://localhost/some-group/some-project.git
SSH Clone URL:  gitlab@localhost:some-group/some-project.git
Using LDAP:     no
Using Omniauth: no

GitLab Shell
Version:        14.14.0
Repository storages:
- default:      unix:/run/gitlab/sockets/private/gitaly.socket
GitLab Shell path:              /usr/share/gitlab-shell

Results of GitLab application Check

Expand for output related to the GitLab application check

~:/var/log/gitlab# gitlab-rake gitlab:check SANITIZE=true
Attention: used pure ruby version of MurmurHash3
Checking GitLab subtasks ...

Checking GitLab Shell ...

GitLab Shell: ...
GitLab Shell version >= 14.14.0 ? ... OK (14.14.0)
Running /usr/share/gitlab-shell/bin/gitlab-shell-check
Internal API available: OK
Redis available via internal API: OK
gitlab-shell self-check successful

Checking GitLab Shell ... Finished

Checking Gitaly ...

Gitaly: ... default ... OK

Checking Gitaly ... Finished

Checking Sidekiq ...

Sidekiq: ... Running? ... yes
Number of Sidekiq processes (cluster/worker) ... 1/1

Checking Sidekiq ... Finished

Checking Incoming Email ...

Incoming Email: ... Reply by email is disabled in config/gitlab.yml

Checking Incoming Email ... Finished

Checking LDAP ...

LDAP: ... LDAP is disabled in config/gitlab.yml

Checking LDAP ... Finished

Checking GitLab App ...

Database config exists? ... yes
All migrations up? ... yes
Database contains orphaned GroupMembers? ... no
GitLab config exists? ... yes
GitLab config up to date? ... yes
Cable config exists? ... yes
Resque config exists? ... yes
Log directory writable? ... yes
Tmp directory writable? ... yes
Uploads directory exists? ... yes
Uploads directory has correct permissions? ... yes
Uploads directory tmp has correct permissions? ... yes
Systemd unit files or init script exist? ... yes
Systemd unit files or init script up-to-date? ... yes

Projects have namespace: ...
E / EVo ... yes
W / EV ... yes
E / M ... yes
E / C ... yes
E / B ... yes
W / EVa ... yes
E / EVp ... yes
R / S ... yes
R / CP ... yes
D / DCP ... yes
F / B ... yes
N / H ... yes
Y / P ... yes
T / Y ... yes
T / kc ... yes
T / B ... yes
T / A ... yes
T / C ... yes
T / cu ... yes
T / de ... yes
T / Ep ... yes
T / Ne ... yes
T / Pe ... yes
T / Se ... yes
T / ol ... yes
Y / Ac ... yes
T / se ... yes
T / Mi ... yes
T / El ... yes
Y / am ... yes
Y / li ... yes
R / CG ... yes
V / Od ... yes
T / Sp ... yes
V / si ... yes
V / en ... yes
N / DCP ... yes
V / se ... yes
D / Sc ... yes
V / t ... yes
A / WP ... yes
U / P ... yes
GitLab Instance Administrators / GitLab self monitoring ... yes
Y / b ... yes
Y / P ... yes
R / R ... yes
A / F ... yes

Redis version >= 6.0.0? ... yes
Ruby version >= 2.7.2 ? ... yes (2.7.4)
Git user has default SSH configuration? ... yes
Active users: ... 7
Is authorized keys file accessible? ... yes
GitLab configured to store new projects in hashed storage? ... yes
All projects are in hashed storage? ... yes

Checking GitLab App ... Finished

Checking GitLab subtasks ... Finished
