Execute scheduled triggers - tooltip in frontend is incorrect
## Summary

The configured schedule for `pipeline_schedule_worker` (the "Execute scheduled triggers" setting) is applied only in Sidekiq, not in the frontend tooltip. The tooltip always behaves as if pipelines will be checked on the default `19 * * * *` schedule (minute 19 of every hour).
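To make the mismatch concrete, here is a small sketch (plain Python, with a made-up timestamp) comparing the next run under the default `19 * * * *` schedule, which the tooltip assumes, against the overridden `* * * * *` schedule that Sidekiq actually uses:

```python
from datetime import datetime, timedelta

def next_run_minute_19(now):
    """Next occurrence of the default `19 * * * *` schedule (minute 19 of every hour)."""
    candidate = now.replace(minute=19, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(hours=1)
    return candidate

def next_run_every_minute(now):
    """Next occurrence of the overridden `* * * * *` schedule (top of the next minute)."""
    return now.replace(second=0, microsecond=0) + timedelta(minutes=1)

now = datetime(2019, 2, 20, 10, 30, 0)          # hypothetical current time
print(next_run_minute_19(now))    # 2019-02-20 11:19:00 -> what the tooltip assumes
print(next_run_every_minute(now)) # 2019-02-20 10:31:00 -> when Sidekiq actually checks
```

So with the cron override in place, the tooltip can report a next check up to an hour away while Sidekiq actually polls every minute.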
## Steps to reproduce

If you deploy this chart with the following Sidekiq configuration, you will see the fault:

```yaml
gitlab:
  sidekiq:
    cron_jobs:
      pipeline_schedule_worker:
        cron: "* * * * *"
```
## Configuration used

`values.yml` used to deploy the chart (we use `envsubst` to replace the environment variables):
```yaml
global:
  # The GitLab version used in the default image tag for the charts can be changed using the global.gitlabVersion key.
  # gitlabVersion:
  application:
    create: false
  hosts:
    # The base domain. GitLab and Registry will be exposed on the subdomain of this setting. This defaults to example.com,
    # but is not used for hosts that have their name property configured. See the gitlab.name, minio.name, and registry.name sections below.
    domain: ${BASE_DOMAIN}
    externalIP:
    gitlab:
      name: ${GITLAB_DOMAIN}
      https: true
    registry:
      name: ${GITLAB_REGISTRY_DOMAIN}
      https: true
    minio:
      name: ${MINIO_DOMAIN}
      https: true
  ingress:
    enable: true
    # If true, relies on certmanager-issuer.email being set.
    configureCertmanager: true
    tls:
      enabled: true
    annotations:
      annotation-key: '"nginx\.ingress\.kubernetes\.io/enable-access-log"=true'
  psql:
    host: ${DB_HOSTNAME}
    port: ${db_port}
    username: ${db_username}
    database: ${db_name}
    password:
      # TODO requires create a secret in kubernetes before the gitlab deployment
      # DONE
      # Example: kubectl create secret generic gitlab-psql --from-literal=psql-password=mysupersecretpassword --dry-run -o yaml | kubectl apply -f -
      secret: gitlab-psql
      key: psql-password
  redis:
    password: {}
  # registry:
  #   bucket: orange-x.gitlab-eks.registry
  #   certificate: {}
  #   httpSecret: {}
  #   replicas: 1
  #   ingress:
  #     enabled: true
  #     tls:
  #       enabled: true
  #   storage:
  #     # TODO
  #     # DONE
  #     # kubectl create secret generic registry-storage --from-file=config=registry-storage.yaml
  #     secret: registry-storage
  #     key: config
  gitaly:
    internal:
      names:
        - default
        - secondary
        - third
  minio:
    enabled: false
  appConfig:
    enableUsagePing: true
    enableImpersonation: true
    defaultCanCreateGroup: true
    usernameChangingEnabled: false
    defaultProjectsFeatures:
      issues: true
      mergeRequests: true
      wiki: true
      snippets: true
      builds: true
    # https://gitlab.com/charts/gitlab/blob/master/doc/advanced/external-object-storage/index.md#backups
    backups:
      bucket: ${backup_bucket_name}
      tmpBucket: ${backup_tmp_bucket_name}
    shell:
      authToken: {}
      hostKeys: {}
    omniauth:
      enabled: false
    ldap: {}
    # servers:
    #   # 'main' is the GitLab 'provider ID' of this LDAP server
    #   main:
    #     label: 'LDAP'
    #     host: gitlab-ldap-openldap
    #     active_directory: false
    #     port: 389
    #     uid: uid
    #     method: 'plain'
    #     bind_dn: 'cn=admin,dc=example,dc=com'
    #     base: 'dc=example,dc=com'
    #     user_filter: ''
    #     group_base: 'ou=groups,dc=example,dc=com'
    #     # kubectl create secret generic ldap-credentials --from-literal=ldap-password=mysupersecretpassword --dry-run -o yaml | kubectl apply -f -
    #     password: admin
    #     #secret: ldap-credentials
    #     #key: ldap-password
    lfs:
      bucket: ${lfs_bucket_name}
      # kubectl create secret generic lfs-storage --from-file=config=rails.s3.yaml
      connection:
        secret: lfs-storage
        key: config
    artifacts:
      bucket: ${artifacts_bucket_name}
      # kubectl create secret generic artifacts-storage --from-file=config=rails.s3.yaml
      connection:
        secret: artifacts-storage
        key: config
    uploads:
      bucket: ${uploads_bucket_name}
      # kubectl create secret generic uploads-storage --from-file=config=rails.s3.yaml
      connection:
        secret: uploads-storage
        key: config
    packages:
      bucket: ${packages_bucket_name}
      # kubectl create secret generic packages-storage --from-file=config=rails.s3.yaml
      connection:
        secret: packages-storage
        key: config
    pseudonymizer:
      bucket: ${pseudonymizer_bucket_name}
      # kubectl create secret generic pseudonymizer-storage --from-file=config=rails.s3.yaml
      connection:
        secret: pseudonymizer-storage
        key: config
postgresql:
  install: false
redis:
  enabled: true
redis-ha: # Does not work
  enabled: false
# https://gitlab.com/charts/gitlab/blob/master/doc/advanced/external-object-storage/index.md#backups
# kubectl create secret generic s3cmd-config --from-file=config=s3cmd.config
gitlab:
  task-runner:
    backups:
      objectStorage:
        config:
          secret: s3cmd-config
          key: config
  sidekiq:
    cron_jobs:
      stuck_ci_jobs_worker:
        cron: "0 * * * *"
      pipeline_schedule_worker:
        cron: "* * * * *"
      expire_build_artifacts_worker:
        cron: "50 * * * *"
gitlab-runner:
  # https://gitlab.com/gitlab-org/gitlab-runner/issues/3807
  image: gitlab/gitlab-runner:alpine-v11.6.1
  install: true
  logLevel: debug
  rbac:
    create: true
  runners:
    locked: false
    cache:
      cacheType: s3
      s3BucketName: ${cache_bucket_name}
      cacheShared: true
      s3BucketLocation: ${AWS_REGION}
      # Example: kubectl create secret generic s3access --from-literal=accesskey="AKIA" --from-literal=secretkey="" --dry-run -o yaml | kubectl apply -f -
      secretName: s3-runner-cache-credentials
      s3CachePath: gitlab-runner
      s3CacheInsecure: false
      s3ServerAddress: s3-${AWS_REGION}.amazonaws.com
certmanager-issuer:
  email: devops@example.com
registry:
  enabled: true
  bucket: ${registry_bucket_name}
  certificate: {}
  httpSecret: {}
  replicas: 1
  ingress:
    enabled: true
    tls:
      enabled: true
  delete:
    enabled: false
  storage:
    # TODO
    # DONE
    # kubectl create secret generic registry-storage --from-file=config=registry-storage.yaml
    secret: registry-storage
    key: config
```
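The `${...}` placeholders above are filled in by `envsubst` from the environment before the chart is deployed. As a rough illustration of that substitution step, here is a sketch using Python's `string.Template` as a stand-in for `envsubst`, with a made-up `BASE_DOMAIN` value:

```python
import os
from string import Template

# Stand-in for the `envsubst` step: replace ${VAR} placeholders with
# values taken from the environment. BASE_DOMAIN here is a made-up example.
os.environ["BASE_DOMAIN"] = "example.com"

raw = "domain: ${BASE_DOMAIN}"
rendered = Template(raw).substitute(os.environ)
print(rendered)  # domain: example.com
```

The rendered file (`gitlab/rendered/helm-values.yaml` below) is what actually gets passed to Helm.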
## Command to deploy

```shell
helm upgrade --install mygitlab gitlab/gitlab --values gitlab/rendered/helm-values.yaml --version 1.5.3
```
## Current behavior

We can see that the Sidekiq configuration is applied correctly and that a pipeline scheduled to run every two minutes runs on time, but the frontend tooltip shows the default Sidekiq schedule.
## Expected behavior

We expect the frontend (last screenshot) to correctly show the time at which the next pipeline will run.
## Versions

- Chart: 1.5.3
- Platform:
  - Cloud: AWS EKS
- Kubernetes (`kubectl version`):
  - Client: v1.13.3
  - Server: v1.13.3
- Helm (`helm version`):
  - Client: v2.12.1
  - Server: v2.12.3
## Relevant logs

N/A