GitLab Chart issues (https://gitlab.com/gitlab-org/charts/gitlab/-/issues)

---

# Unknown Blob error using container registry
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4003 · Pedro Henrique Florio Mendes · 2024-03-28
### Summary
When I try to push an image to the container registry of my self-managed GitLab instance, I get an "unknown blob" or "blob upload unknown" error. When I open the instance and look for the image I just pushed, I can see it there, but with no tags associated with it.
### Steps to reproduce
- Install GitLab using Helm in an AKS cluster
- Don't alter the initial values of the Container Registry configuration
- Tag a Docker image
- Try to push the image
### What is the current *bug* behavior?
`docker push` results in "blob upload unknown", and the image has no tags on it.
### What is the expected *correct* behavior?
`docker push` results in "pushed", and the image has one tag on it.
### Relevant logs and/or screenshots
Docker push results:
```
docker push registry.***.com.br/***/test-user/hello-world
Using default tag: latest
The push refers to repository [registry.***.com.br/***/test-user/hello-world]
e07ee1baac5f: Pushing [==================================================>] 14.85kB
blob upload unknown
```
Pushed image
![image](/uploads/deb16bfa73dca83403a6b816cf27fcdb/image.png)
Registry pod logs
```
{"auth_user_name":"","code":"NAME_UNKNOWN","correlation_id":"***","detail":"map[name:pedromendesinspira/test-user/testeflib]","error":"name unknown: repository name not known to registry","go_version":"go1.17.6","level":"error","msg":"repository name not known to registry","root_repo":"pedromendesinspira","time":"2022-03-15T23:57:52.680Z","vars_name":"pedromendesinspira/test-user/testeflib","version":"v3.27.1-gitlab"}
```
```
{"auth_user_name":"pedromendesinspira","code":"BLOB_UPLOAD_UNKNOWN","correlation_id":"***","detail":"blob upload unknown","error":"blob upload unknown: blob upload unknown to registry","go_version":"go1.17.6","level":"error","msg":"blob upload unknown to registry","root_repo":"pedromendesinspira","time":"2022-03-15T21:55:52.601Z","vars_name":"pedromendesinspira/test-user/testeflib","vars_uuid":"1918744f-9aae-4339-9a8e-375d4a15c0c9","version":"v3.27.1-gitlab"}
```

---

# Add ability to set annotations to registry migrations job
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5398 · Wei K Huang · 2024-03-27
## Summary
The GitLab registry migrations job at https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/charts/registry/templates/migrations-job.yaml needs the ability to set annotations, like the standard migrations job at https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/charts/gitlab/charts/migrations/templates/_jobspec.yaml?ref_type=heads#L13-18 already supports. This is necessary for tools such as Helm and Argo CD to be able to remove the old job before creating the new one.
For example (these should be settable by the user of the chart instead of hard-coded):
```yaml
"helm.sh/hook": "pre-install"
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
```
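A sketch of how the chart could expose this (hypothetical value names, mirroring how other chart components expose `annotations`):

```yaml
registry:
  migrations:
    # hypothetical key: passed through verbatim to the Job's metadata.annotations
    annotations:
      "helm.sh/hook": "pre-install"
      "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
```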
Related issues:
- https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3021
- https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3832
- https://gitlab.com/gitlab-org/charts/gitlab/-/issues/734

---

# Deployment in a complete IPv6 only environment
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/2778 · izazahamed-babji · 2024-03-27

## Summary
Deploying the GitLab Helm chart in an IPv6-only environment, we are not able to access the sign-in page, and certain endpoints of the webservice pods return a "404 Not Found" HTML page when accessed over IPv6. We changed the binding IPs to IPv6, rebuilt the image, and tried again; the endpoints still return the same error and the webpage does not load.
![image](/uploads/34faf5010f26d2b9e3c701bb43002ff0/image.png)
## Steps to reproduce
Deployment in a complete IPv6 only environment.
Dockerfile of gitlab-webservice:
```
FROM registry.gitlab.com/gitlab-org/build/cng/gitlab-webservice-ee:v13.12.1
RUN sed -i "s+0.0.0.0+[::]+g" /srv/gitlab/config/puma.rb
CMD /scripts/process-wrapper
HEALTHCHECK --interval=30s --timeout=30s --retries=5 \
CMD /scripts/healthcheck
```
Made the following changes in scripts/start-workhorse and rebuilt the image:
```
#!/bin/bash
set -e
touch /var/log/gitlab/workhorse.log
# pre-create known good tmpdir
export TMPDIR=/tmp/gitlab
mkdir -p -m 3770 $TMPDIR
export GITLAB_WORKHORSE_LOG_FILE=${GITLAB_WORKHORSE_LOG_FILE:-stdout}
export GITLAB_WORKHORSE_LOG_FORMAT=${GITLAB_WORKHORSE_LOG_FORMAT:-json}
if [[ "${GITLAB_WORKHORSE_PROM_LISTEN_ADDR}" =~ ^.+:[0-9][0-9]{0,4}$ ]]; then
export PROM_LISTEN_ADDR="-prometheusListenAddr ${GITLAB_WORKHORSE_PROM_LISTEN_ADDR}"
fi
gitlab-workhorse \
  -logFile ${GITLAB_WORKHORSE_LOG_FILE} \
  -logFormat ${GITLAB_WORKHORSE_LOG_FORMAT} \
  ${GITLAB_WORKHORSE_EXTRA_ARGS} \
  -listenAddr [::]:${GITLAB_WORKHORSE_LISTEN_PORT:-8181} \
  -documentRoot "/srv/gitlab/public" \
  -secretPath "/etc/gitlab/gitlab-workhorse/secret" \
  -config "/srv/gitlab/config/workhorse-config.toml" \
  ${PROM_LISTEN_ADDR} \
  2>&1
```
## Current behavior
```
kubectl exec -it gitlab-webservice-default-7c96fb68d4-69bks -- bash
git@gitlab-webservice-default-7c96fb68d4-69bks:/$ curl 127.0.0.1:8080/-/readiness
{"status":"ok","master_check":[{"status":"ok"}]}
git@gitlab-webservice-default-7c96fb68d4-69bks:/$ curl 127.0.0.1:8080/-/liveness
{"status":"ok"}
git@gitlab-webservice-default-7c96fb68d4-69bks:/$ curl [::1]:8080/-/readiness
(a big 404 Not Found HTML page)
git@gitlab-webservice-default-7c96fb68d4-69bks:/$ curl [::1]:8080/-/liveness
(a big 404 Not Found HTML page)
```
## Expected behavior
```
git@gitlab-webservice-default-7c96fb68d4-69bks:/$ curl [::1]:8080/-/readiness
{"status":"ok","master_check":[{"status":"ok"}]}
git@gitlab-webservice-default-7c96fb68d4-69bks:/$ curl [::1]:8080/-/liveness
{"status":"ok"}
```
Similar behaviour at /-/metrics endpoint.
Does GitLab support IPv6-only deployment?!

---

# Replace usages of `.Release.Revision` to support deployment workflows based on `helm template`
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3832 · Patrick Hobusch · 2024-03-23

## Summary
As the title states, I suggest replacing all usages of `.Release.Revision` to support workflows where the output of `helm template` is used, immediately or later, to perform a deployment.
Currently, `.Release.Revision` is used in the following templates:
```
charts/certmanager-issuer/templates/_helpers.tpl
27:{{- printf "%s-%d" $name .Release.Revision | trunc 63 | trimSuffix "-" -}}
charts/gitlab/charts/migrations/templates/_helpers.tpl
12:{{- printf "%s-%d" $name .Release.Revision | trunc 63 | trimSuffix "-" -}}
charts/minio/templates/_helpers.tpl
38:{{- printf "%s-create-buckets-%d" $name .Release.Revision -}}
charts/registry/templates/_helpers.tpl
173:{{- printf "%s-migrations-%d" $name .Release.Revision | trunc 63 | trimSuffix "-" -}}
templates/_helpers.tpl
577:{{- printf "%s-%d-%s" $name .Release.Revision $rand | trunc 63 | trimSuffix "-" -}}
```
Unfortunately, `helm template` does not support overriding `.Release.Revision`, so `helm template` always evaluates this value to 1 (see also https://github.com/helm/helm/issues/10690). This causes problems during deployment, e.g. of the GitLab "migrations" job, where this value is intended to be used to "update" the job (see also https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/charts/gitlab/charts/migrations/templates/_helpers.tpl).
A more reliable way to deal with these job updates might be to work with a checksum or a date value. This would support workflows based on `helm template`. Many people believe that every Helm chart should support such a workflow, and replacing the old behavior with a new one should not affect users.
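The checksum idea could be sketched like this (hypothetical helper name; the chart's actual helpers differ): derive the job-name suffix from a digest of the rendered values instead of `.Release.Revision`, so the name changes whenever the configuration changes, even under `helm template`:

```
{{/* hypothetical: name the migrations job by a values digest, not .Release.Revision */}}
{{- define "gitlab.migrations.jobname" -}}
{{- $name := printf "%s-migrations" .Release.Name -}}
{{- $suffix := toYaml .Values | sha256sum | trunc 8 -}}
{{- printf "%s-%s" $name $suffix | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```

`sha256sum`, `trunc`, and `trimSuffix` are standard Sprig/Helm template functions, so this evaluates identically under `helm install`, `helm upgrade`, and `helm template`.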
By the way, we are not using such a `helm template` based workflow directly, but indirectly, by using Helmfile and Helmfile's "ad-hoc dependency" feature. For more input, have a look at my issue there: https://github.com/helmfile/helmfile/issues/430
## Steps to reproduce
not needed
## Configuration used
not needed
## Current behavior
Deployment fails because trying to update the same job.
## Expected behavior
Allow updates of jobs without "knowing" the current revision.
## Versions
- Chart: 6.4.2
- Platform:
- Cloud: AKS
- Kubernetes: (`kubectl version`)
- Client: 1.21.4
- Server: 1.22.6
- Helm: (`helm version`)
- Client: 3.9.0
- Server:
## Relevant logs
```
Error: UPGRADE FAILED: cannot patch "gitlab-migrations-1" with kind Job: Job.batch "gitlab-migrations-1" is invalid: spec.template: Invalid value: core.PodTemplateSpec(...)
```
Milestone: 16.11

---

# Unable to deploy with no self-signed certs
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4834 · Nicolas Gourdon · 2024-03-22
## Summary
While deploying in a new namespace, pods wait for the wildcard self-signed cert secret, which is never generated.
There has been [an update on the condition](https://gitlab.com/gitlab-org/charts/gitlab/-/blame/master/templates/shared-secrets/self-signed-cert-job.yml#L4) to create the self-signed cert job, but [the condition for mounting the secret](https://gitlab.com/gitlab-org/charts/gitlab/-/blame/master/templates/_certificates.tpl#L71) stayed the same.
## Steps to reproduce
Deploy in a new namespace with a configuration that does not define `global.ingress.tls` but does define `gitlab.webservice.ingress.tls.secretName` (the same applies to other components).
## Configuration used
```yaml
global:
  ingress:
    enabled: true
    configureCertmanager: false
gitlab:
  webservice:
    ingress:
      enabled: true
      tls:
        secretName: gitlab-ingress-tls
certmanager:
  install: false
nginx-ingress:
  enabled: false
```
## Current behavior
The wildcard self-signed cert is mounted on pods but not created by the self-signed cert job.
## Expected behavior
The wildcard self-signed cert should not be mounted on pods.
## Versions
- Chart: 6.11.9
- Platform:
- Self-hosted: Rancher RKE
- Kubernetes: (`kubectl version`)
- Client: 1.25
- Server: 1.24
- Helm: (`helm version`)
- Client: v3.11.2

---

# customCAs not getting mounted to containers, gitlab-runner in a crashloop with TLS failure trying to register the runner
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5385 · leland knite · 2024-03-20
## Summary
According to [this document](https://docs.gitlab.com/charts/charts/globals#custom-certificate-authorities), the certificate-related Helm chart values need to be specified under the `global` level; however, based on the [raw values file](https://gitlab.com/gitlab-org/charts/gitlab/raw/master/values.yaml) it looks like they should be placed at the same level as `global`. In any case, whether I specify them at one location, the other, or both, the custom CA is not getting mounted on the GitLab runner, resulting in a crashloop.
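For reference, a minimal sketch of the placement as documented in the linked globals page (the secret name `ca-bundle` is taken from the reporter's configuration):

```yaml
global:
  certificates:
    customCAs:
      - secret: ca-bundle
        keys:
          - ca.crt
```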
## Steps to reproduce
A new cleanly deployed gitlab using the gitlab helm chart.
## Configuration used
```yaml
gitlab:
  # disable gitlab-provided addons we already have
  certmanager:
    install: false
  nginx-ingress:
    enabled: false
  prometheus:
    install: false
  # required for deployment to work with argocd
  upgradeCheck:
    enabled: false
  global:
    # edition: ce
    hosts:
      domain: non.k.home.net
      https: true
    ingress:
      class: nginx
      annotations:
        cert-manager.io/cluster-issuer: vault-issuer
      configureCertmanager: "false"
    certificates:
      customCAs:
        - secret: ca-bundle
          keys:
            - ca.crt
  certificates:
    customCAs:
      - secret: ca-bundle
        keys:
          - ca.crt
  certmanager-issuer:
    email: work@around.com
  gitlab-runner:
    enabled: true
```
## Current behavior
gitlab-runner is in a crash loop because of a TLS failure; to resolve it, the on-prem cert needs to be mounted within the pod. I have a secret with the cert in it, and have specified it as instructed in the documentation.
## Expected behavior
Expecting the on-prem cert to be mounted in the gitlab-runner pod, allowing it to finish initialization, register, and not crash.
## Versions
- Chart: 7.9.2
- Platform:
- Self-hosted: Tanzu
- Kubernetes:
- Client: v1.28.4
- Server: v1.26.5
- Helm:
- Client: v3.13.2+g2a2fb3b
## Relevant logs
```
Merging configuration from template file "/configmaps/config.template.toml"
WARNING: Support for registration tokens and runner parameters in the 'register' command has been deprecated in GitLab Runner 15.6 and will be replaced with support for authentication tokens. For more information, see https://docs.gitlab.com/ee/ci/runners/new_creation_workflow
ERROR: Registering runner... failed runner=qgmcpk0r status=couldn't execute POST against https://gitlab.non.k.home.net/api/v4/runners: Post "https://gitlab.non.k.home.net/api/v4/runners": tls: failed to verify certificate: x509: certificate signed by unknown authority
PANIC: Failed to register the runner.
Quit
```

---

# Support for Server-Side Backups (Gitaly direct to Object Storage)
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5062 · Michael G. Gerhart · 2024-03-18
## Summary
[GitLab 16.5 introduced the Server-Side backup strategy](https://about.gitlab.com/releases/2023/10/22/gitlab-16-5-released/#back-up-and-restore-repository-data-in-the-cloud), however upon reading the documentation, it seems support for this is missing from the GitLab Helm Chart for 16.5.0.
Omnibus configuration docs:
https://docs.gitlab.com/ee/administration/gitaly/configure_gitaly.html#configure-s3-storage
Server-Side Backup Epic: https://gitlab.com/groups/gitlab-org/-/epics/10826
Adding this functionality to the Kubernetes-based deployment would be a huge win and would alleviate a lot of the stability issues around backing up large Kubernetes-based installations.
## Current behavior
1. The Gitaly config toml template is missing a configurable `[backup]` section: https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/charts/gitlab/charts/gitaly/templates/_configmap_spec.yaml?ref_type=heads#L14
2. I didn't spend a ton of time looking, but I would guess that Gitaly would need to be configured to support IRSA (this is how we have the rest of our GitLab services set up to interact with AWS S3).
## Expected behavior
The Gitaly subchart should configure the [backup] section in Gitaly's config.toml, and the configuration should be configurable by the parent chart values.
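A sketch of what the chart values might look like (hypothetical key names; the underlying Gitaly setting for server-side backups is `go_cloud_url` per the linked Gitaly docs):

```yaml
gitlab:
  gitaly:
    backup:
      # hypothetical chart key, intended to render into config.toml as:
      #   [backup]
      #   go_cloud_url = "s3://my-gitaly-backups?region=us-east-1"
      goCloudUrl: "s3://my-gitaly-backups?region=us-east-1"
```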
## Versions
- Chart: v7.5.0
- Platform:
- Cloud: EKS
- Kubernetes: 1.25.12
- Client: 1.25.12
- Server: 1.25.12
## Acceptance criteria
(copied from https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5062#note_1619263205)
We need to translate the expectations into the chart, and to the CNG container (toolbox)
- [ ] Enable configuration of `backup` section in `gitlab/gitaly` chart's `config.toml.tpl` (including tests)
- [ ] Update the CNG content for `toolbox`, such that `backup-utility` can pass along the variable to the underlying Rake command ([create](https://gitlab.com/gitlab-org/build/CNG/-/blob/master/gitlab-toolbox/scripts/bin/backup-utility#L243) and [restore](https://gitlab.com/gitlab-org/build/CNG/-/blob/master/gitlab-toolbox/scripts/bin/backup-utility#L337))
- [ ] Document how to configure the appropriate secrets, configuration, and backup in a secure fashion

Milestone: 16.11

---

# GitLab exporter does not connect redis via sentinels when using redis sentinels
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3813 · Ismael Posada Trobo · 2024-03-08

## Summary
A potential bug in the `gitlab-exporter` component (fyi: https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/charts/gitlab/charts/gitlab-exporter), deployed together with the GitLab Helm chart, prevents this component from connecting to Redis via Sentinels. It always connects to Redis via the service, regardless of whether Redis is used with or without Sentinel.
## Steps to reproduce
Just check the `gitlab-exporter` logs:
```bash
kubectl logs gitlab-exporter-xxx
```
## Current behavior
`gitlab-exporter` does not connect to Redis via Sentinel when Sentinel is used, the way `sidekiq` or any other component does.
## Expected behavior
`gitlab-exporter` should connect to Redis via Sentinels (when Sentinels are used), and otherwise default to Redis.
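For context, this is roughly how the chart is pointed at Sentinels (a sketch; the hostnames are placeholders, and `host` is the Sentinel master name rather than a real host):

```yaml
global:
  redis:
    host: gitlab-redis  # Sentinel master name
    sentinels:
      - host: sentinel-1.example.com
        port: 26379
      - host: sentinel-2.example.com
        port: 26379
```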
## Versions
- Chart: `6.2.5`
- Platform:
- Self-hosted: `Kubernetes`
## Relevant logs
n.b: `gitlab-redis` is the master name for sentinel.
```bash
# kubectl logs gitlab-exporter-xxx
E, [2022-08-26T06:43:17.001442 #1] ERROR -- : Error connecting to the Redis: Error connecting to Redis on gitlab-redis:6379 (SocketError)
10.100.52.105 - - [26/Aug/2022:06:43:16 UTC] "GET /metrics HTTP/1.1" 200 6596 - -> /metrics
```
## Actionable analysis
See https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3813#note_1782066338
## Actionable
- [ ] GitLab Exporter does not currently support the ability to collect metrics behind Sentinels, and needs to be extended to support this.
- https://gitlab.com/gitlab-org/ruby/gems/gitlab-exporter/-/issues/33+
- TBD MR link
- [ ] GitLab Helm chart needs to support configuration of GitLab Exporter with Redis Sentinels for the Sidekiq probes.
- TBD MR link
- [ ] GitLab Helm chart should support enable / disable of probes
- Feature parity to Omnibus GitLab, via `probe_sidekiq`
- TBD MR link
- [ ] Omnibus GitLab needs to support configuration of GitLab Exporter with Redis Sentinels for the Sidekiq probes.
- TBD MR link

---

# Cloud native backup does not support deployments in AWS with IMDSv1 disabled
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/2787 · Ben Prescott_ · 2024-03-07
## Summary
Customer raised a ticket to troubleshoot cloud-native backups that weren't working. [GitLab team members can read more in the ticket](https://gitlab.zendesk.com/agent/tickets/218143).
The reason is that they've disabled IMDSv1, and `s3cmd` doesn't seem to support IMDSv2. [AWS EKS best practice advises disabling IMDSv1 for nodes and pods](https://docs.aws.amazon.com/eks/latest/userguide/best-practices-security.html).
Potential fix: use Rails for Helm backups - https://gitlab.com/gitlab-org/charts/gitlab/-/issues/1127
---
This is one of a number of issues around GitLab support for deployments with only IMDSv2 enabled. For more information see the description and comments in: https://gitlab.com/gitlab-org/gitlab/-/issues/334160
---
### What is IMDSv2
- Instance Metadata Service Version 2
- IMDS is the AWS API that's available at `169.254.169.254`
- One use case is obtaining credentials in an environment that uses IAM.
### s3cmd
Looking at the `s3cmd` code, it makes an HTTP connection to `169.254.169.254`, and then:
```
request('GET', "/latest/meta-data/iam/security-credentials/")
```
This returns a JSON payload, from which `AccessKeyId`, `SecretAccessKey`, and `Token` can be extracted.
However, it looks like `s3cmd` is only using IMDSv1, because [the steps documented for using IMDSv2 are](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html#instance-metadata-v2-how-it-works):
- obtain a session token with: `PUT "http://169.254.169.254/latest/api/token"`
- include that token in `GET` requests to the instance metadata service
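Concretely, the documented IMDSv2 flow looks like this (illustrative commands, only meaningful from inside an EC2 instance; the TTL value is an example):

```
# step 1: obtain a session token
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# step 2: include the token in subsequent GET requests
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
```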
## Steps to reproduce
- Deploy GitLab to EKS
- Disable IMDSv1
- Using IAM, attempt to back up GitLab
## Current behavior
Backups don't work with IMDSv2 only
## Expected behavior
Customers can disable IMDSv1 per AWS recommendations and backups still work
## Versions
GitLab 13.11.5-ee
- Chart: 4.11.5
- Platform:
- Cloud: EKS
---

# GitLab Exporter: allow configuration of Redis Sentinels for Sidekiq probes
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5376 · Jason Plum · 2024-03-04

## Summary
Once GitLab Exporter supports the use of Redis Sentinels in the Sidekiq probes, we need to ensure that the Helm chart supports the configuration of the Redis Sentinels.
## Steps to reproduce
See https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3813+
## Configuration used
Chart is configured with Redis Sentinels
## Current behavior
Configuration is not rendered
## Expected behavior
Configuration is rendered, per support after https://gitlab.com/gitlab-org/ruby/gems/gitlab-exporter/-/issues/33 is completed.
## Versions
- Chart: All
## Relevant logs
```
Error connecting to the Redis: Error connecting to Redis on gitlab-redis:6379 (SocketError)
```

---

# GitLab Exporter: Enable control of various exporter items
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5375 · Jason Plum · 2024-03-04

## Summary
GitLab Helm chart should support enable / disable of probes.
The GitLab Exporter currently does not support the use of Redis Sentinels for the Sidekiq probes. We need to be able to enable / disable this. This is step one, while we await https://gitlab.com/gitlab-org/ruby/gems/gitlab-exporter/-/issues/33+ to be resolved.
To do this, we can look to feature parity with Omnibus GitLab for configuring probe groups. See the [`Prometheus Gitlab exporter` section of `gitlab.rb.template`](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-config-template/gitlab.rb.template#L2495-2540).
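A sketch of what such a toggle might look like in chart values (hypothetical key names, mirroring Omnibus's `probe_sidekiq`-style settings):

```yaml
gitlab:
  gitlab-exporter:
    # hypothetical: per-probe-group enable/disable flags
    probes:
      sidekiq: false        # disable the Sidekiq probe group
      elasticsearch: false  # this probe is not currently available
```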
## Steps to reproduce
See https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3813+
## Configuration used
Chart is configured to consume Redis with Sentinels.
## Current behavior
All supported probes are configured, at all times. The exporter fills the logs with `Error connecting to the Redis: Error connecting to Redis on gitlab-redis:6379 (SocketError)`.
## Expected behavior
Various probe classes of GitLab Exporter can be enabled or disabled. Notice is given via `NOTES.txt` if these need to be disabled automatically (such as Sentinels not supported)
## Versions
- Chart: All
## Relevant logs
See https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3813+
## Acceptance criteria
- [ ] :warning: `probe_sidekiq`
- [ ] :grey_question: `probe_elasticsearch` (currently, this probe not available)
- [ ] :shrug: others?

---

# Registry certificate secret key is not overridden
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5301 · Sébastien GLON · 2024-03-02
## Summary
Following the example provided here: https://docs.gitlab.com/charts/charts/registry/#certificate
Registry certificate secret key is not overridden via values.
We need to use cert-manager to generate the authentication certificate, but the crt file name needs to be customized.
## Steps to reproduce
Set Helm values like this:
```yaml
global:
  registry:
    enable: true
    certificate:
      secret: registry-letsencrypt
      key: tls.crt
```
Deploy the Helm chart.
## Current behavior
Deployment fails with an error on the registry pods:
```
MountVolume.SetUp failed for volume "registry-secrets" : references non-existent secret key: registry-auth.crt
```
The same error appears on the sidekiq pods:
```
MountVolume.SetUp failed for volume "init-sidekiq-secrets" : references non-existent secret key: registry-auth.key
```
and toolbox:
```
MountVolume.SetUp failed for volume "init-toolbox-secrets" : references non-existent secret key: registry-auth.key
```
## Expected behavior
The registry deployment should use the provided secret key.
## Versions
- Chart: 7.8.1
- Platform:
- Cloud: onprem
- Self-hosted: kube
- Kubernetes: (`kubectl version`)
- Client: 1.27
- Server: 1.22.2
- Helm: (`helm version`)
- Client: 3.12
Milestone: Next 1-3 releases

---

# Nested Redis instance password secrets appear to be ignored with the global secret being used instead
Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5167 · Grant Young · 2024-03-02

## Summary
While testing an environment with two Redis instances (for Cache and Persistent), AUTH errors were seen when trying to connect to the Cache instance: `WRONGPASS invalid username-password pair or user is disabled`
In this setup, Persistent acts as the default and Cache serves the `cache` data, as per its namesake. This is configured via the global settings for all classes plus a nested `cache` config, as shown below.
Looking at the Webservice and Sidekiq deployment YAML, though, the cache config appears to be ignored, with the global secret name inserted instead:
```yaml
- secret:
    name: gitlab-redis-persistent-password # <----
    items:
      - key: password
        path: redis/cache-password
- secret:
    name: gitlab-redis-persistent-password
    items:
      - key: password
        path: redis/redis-password
```
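For comparison, the expected rendering would reference the cache secret for the cache password (derived directly from the configuration given below):

```yaml
- secret:
    name: gitlab-redis-cache-password
    items:
      - key: password
        path: redis/cache-password
- secret:
    name: gitlab-redis-persistent-password
    items:
      - key: password
        path: redis/redis-password
```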
Additionally, the `dependencies` container didn't pick this up, only checking the Persistent Redis instance and not the separate Cache instance:
```
Begin parsing .erb templates from /var/opt/gitlab/templates
Writing /srv/gitlab/config/cable.yml
Writing /srv/gitlab/config/database.yml
Writing /srv/gitlab/config/gitlab.yml
Writing /srv/gitlab/config/redis.cache.yml
Writing /srv/gitlab/config/resque.yml
Begin parsing .tpl templates from /var/opt/gitlab/templates
Copying other config files found in /var/opt/gitlab/templates to /srv/gitlab/config
Copying smtp_settings.rb into /srv/gitlab/config
Checking: resque.yml, cable.yml
+ SUCCESS connecting to 'redis://10.43.26.119:6379' from resque.yml, through 10.43.26.119
+ SUCCESS connecting to 'redis://10.43.26.119:6379' from cable.yml, through 10.43.26.119
Checking: main
Database Schema - main (gitlabhq_production) - current: 20231115151449, codebase: 20231115151449
```
It's unknown when this specifically started. In our test environments we've been using the same password for both, so this went unnoticed on our end. It was only recently, when we tried to set up via GCP Memorystore, which sets up passwords separately, that this was seen. Anecdotally, GCP Memorystore has worked in the past, so this appears to be a regression from at least this year.
Finally, this may extend to [other Redis classes](https://docs.gitlab.com/charts/charts/globals.html#multiple-redis-support) and is probably worth checking as well.
## Steps to reproduce
Configure a Charts setup to point to two separate Redis instances, one global and one cache, *with* different passwords for each.
## Configuration used
```yaml
redis:
  auth:
    key: password
    secret: gitlab-redis-persistent-password
  cache:
    auth:
      key: password
      secret: gitlab-redis-cache-password
    host: <redacted>
    port: "6379"
    scheme: redis
  host: <redacted>
  port: "6379"
  scheme: redis
```
## Current behavior
Redis Cache password is being configured to the global secret.
## Expected behavior
For the Redis Cache password to be configured to the correct secret.
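For reference, the rendered projected-volume entry for the cache password path would then be expected to reference the cache secret. A sketch, based on the deployment snippet earlier in this report:

```yaml
- secret:
    name: gitlab-redis-cache-password   # the secret from global.redis.cache.auth
    items:
      - key: password
        path: redis/cache-password
```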
## Versions
- Chart: 7.6.1
- Platform:
- Cloud: GKE (also being confirmed on AWS)
- Kubernetes: (`kubectl version`)
- Client: `v1.28.4`
- Server: `v1.27.3-gke.100`
- Helm: (`helm version`)
- `version.BuildInfo{Version:"v3.13.2", GitCommit:"2a2fb3b98829f1e0be6fb18af2f6599e0f4e8243", GitTreeState:"clean", GoVersion:"go1.21.4"}`
## Relevant logs
<details><summary>Webservice dependencies pod</summary>
```
Begin parsing .erb templates from /var/opt/gitlab/templates
Writing /srv/gitlab/config/cable.yml
Writing /srv/gitlab/config/database.yml
Writing /srv/gitlab/config/gitlab.yml
Writing /srv/gitlab/config/redis.cache.yml
Writing /srv/gitlab/config/resque.yml
Begin parsing .tpl templates from /var/opt/gitlab/templates
Copying other config files found in /var/opt/gitlab/templates to /srv/gitlab/config
Copying smtp_settings.rb into /srv/gitlab/config
Checking: resque.yml, cable.yml
+ SUCCESS connecting to 'redis://<redis_persistent_ip>:6379' from resque.yml, through <redis_persistent_ip>
+ SUCCESS connecting to 'redis://<redis_persistent_ip>:6379' from cable.yml, through <redis_persistent_ip>
Checking: main
Database Schema - main (gitlabhq_production) - current: 20231115151449, codebase: 20231115151449
```
</details>
<details><summary>Webservice pod</summary>
```
{"component": "gitlab","subcomponent":"exceptions_json","severity":"ERROR","time":"2023-12-12T14:57:49.005Z","correlation_id":"bdf478a8-5132-45ab-96d3-aa9d492399ef","exception.class":"Redis::CommandError","exception.message":"WRONGPASS invalid username-password pair or user is disabled.","exception.backtrace":["lib/gitlab/instrumentation/redis_interceptor.rb:10:in `block in call'","lib/gitlab/instrumentation/redis_interceptor.rb:42:in `instrument_call'","lib/gitlab/instrumentation/redis_interceptor.rb:9:in `call'","config/initializers/zz_metrics.rb:45:in `connect'","lib/gitlab/instrumentation/redis_interceptor.rb:10:in `block in call'","lib/gitlab/instrumentation/redis_interceptor.rb:42:in `instrument_call'","lib/gitlab/instrumentation/redis_interceptor.rb:9:in `call'","lib/feature.rb:259:in `block in current_feature_value'","lib/feature.rb:274:in `with_feature'","lib/feature.rb:255:in `current_feature_value'","lib/feature.rb:102:in `enabled?'","lib/feature.rb:115:in `disabled?'","lib/gitlab/lograge/custom_options.rb:39:in `call'","lib/gitlab/metrics/elasticsearch_rack_middleware.rb:16:in `call'","lib/gitlab/middleware/memory_report.rb:13:in `call'","lib/gitlab/middleware/speedscope.rb:13:in `call'","lib/gitlab/database/load_balancing/rack_middleware.rb:23:in `call'","lib/gitlab/middleware/rails_queue_duration.rb:33:in `call'","lib/gitlab/etag_caching/middleware.rb:21:in `call'","lib/gitlab/metrics/rack_middleware.rb:16:in `block in call'","lib/gitlab/metrics/web_transaction.rb:46:in `run'","lib/gitlab/metrics/rack_middleware.rb:16:in `call'","lib/gitlab/middleware/go.rb:20:in `call'","lib/gitlab/middleware/query_analyzer.rb:11:in `block in call'","lib/gitlab/database/query_analyzer.rb:37:in `within'","lib/gitlab/middleware/query_analyzer.rb:11:in `call'","lib/gitlab/middleware/multipart.rb:173:in `call'","lib/gitlab/middleware/read_only/controller.rb:50:in `call'","lib/gitlab/middleware/read_only.rb:18:in `call'","lib/gitlab/middleware/same_site_cookies.rb:27:in 
`call'","lib/gitlab/middleware/path_traversal_check.rb:48:in `call'","lib/gitlab/middleware/handle_malformed_strings.rb:21:in `call'","lib/gitlab/middleware/basic_health_check.rb:25:in `call'","lib/gitlab/middleware/handle_ip_spoof_attack_error.rb:25:in `call'","lib/gitlab/middleware/request_context.rb:15:in `call'","lib/gitlab/middleware/webhook_recursion_detection.rb:15:in `call'","config/initializers/fix_local_cache_middleware.rb:11:in `call'","lib/gitlab/middleware/compressed_json.rb:44:in `call'","lib/gitlab/middleware/rack_multipart_tempfile_factory.rb:19:in `call'","lib/gitlab/middleware/sidekiq_web_static.rb:20:in `call'","lib/gitlab/metrics/requests_rack_middleware.rb:79:in `call'","lib/gitlab/middleware/release_env.rb:13:in `call'"],"user.username":null,"tags.program":"web","tags.locale":"en","tags.feature_category":null,"tags.correlation_id":"bdf478a8-5132-45ab-96d3-aa9d492399ef","extra.storage":"feature_flag"}
```
</details>

Milestone: Next 1-3 releases

---

# Using consolidated object storage and Pages globals does not work

- Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5373
- Updated: 2024-03-02
- Author: Jessie Lee
## Summary
Using Pages together with consolidated object storage in the Helm chart leads to a broken Rails configuration.
## Steps to reproduce
1. Configure consolidated object storage and the Pages bucket separately.
2. Install GitLab.
3. Receive the following error:
```
/srv/gitlab/config/initializers/pages_storage_check.rb:13:in `<main>': Please enable at least one of the two Pages storage strategy (local_store or object
```
## Configuration used
```yaml
global:
  minio:
    enabled: false
  ## https://docs.gitlab.com/charts/charts/globals#configure-appconfig-settings
  ## Rails based portions of this chart share many settings
  appConfig:
    ## https://docs.gitlab.com/charts/charts/globals#lfs-artifacts-uploads-packages-external-mr-diffs-and-dependency-proxy
    object_store:
      enabled: true
    lfs:
      bucket: dev-environment-redacted-gitlab-git-lfs
    artifacts:
      bucket: dev-environment-redacted-gitlab-artifacts
    uploads:
      bucket: dev-environment-redacted-gitlab-uploads
    packages:
      bucket: dev-environment-redacted-gitlab-packages
    backups:
      bucket: dev-environment-redacted-gitlab-backups
      tmpBucket: dev-environment-redacted-gitlab-backups-tmp
  ## End of global.appConfig
  ## https://docs.gitlab.com/charts/charts/globals#configure-registry-settings
  registry:
    bucket: dev-environment-redacted-gitlab-registry
  ## https://docs.gitlab.com/charts/charts/globals#configure-gitlab-pages
  pages:
    enabled: true
    objectStore:
      enabled: true
      bucket: dev-environment-redacted-gitlab-pages
```
## Current behavior
error:
```
/srv/gitlab/config/initializers/pages_storage_check.rb:13:in `<main>': Please enable at least one of the two Pages storage strategy (local_store or object
```
## Expected behavior
A working GitLab installation with Pages object storage configured.
## Versions
- Chart: 7.9.1
- Platform:
- Cloud: GKE
- Kubernetes:
- Client: 1.24+
- Server: 1.24+
- Helm:
- Client: v3.12.1
## Relevant logs
/cc @WarheadsSE @brad

---

# [registry] Add PersistentVolumeClaim to registry helm chart to allow persistence with an existing PersistentVolume

- Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5362
- Updated: 2024-02-29
- Author: susy belle

## Summary
I want to use filesystem storage for the registry in my GitLab Helm installation. (The reason is that there is a problem with using the registry with MinIO storage; I will create a separate issue for it.) For me, using filesystem storage in my current Kubernetes installation would be a good workaround, but currently the GitLab container registry Helm chart does not work with filesystem storage.
I created a secret (below is helm template):
```
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-registry-filesystem
  namespace: gitlab
type: Opaque
data:
  config: {{ print "filesystem:" "\n" "  rootdirectory: /var/lib/registry" | b64enc }}
```
```
$ kubectl get -o yaml secret/gitlab-registry-filesystem | yq .data.config | base64 -d
filesystem:
  rootdirectory: /var/lib/registry/
```
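As a quick local sanity check, independent of the cluster, the encoded value can be round-tripped through `base64` to confirm the secret payload is the YAML fragment the registry expects:

```shell
# Encode the registry storage config roughly as the Helm template would,
# then decode it again to confirm the round trip is lossless.
config='filesystem:
  rootdirectory: /var/lib/registry'

encoded=$(printf '%s' "$config" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)

printf '%s\n' "$decoded"
```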
and my `gitlab.values.yaml` contains:
```
registry:
  enabled: true
  hpa:
    minReplicas: 1
    maxReplicas: 1
  storage:
    secret: gitlab-registry-filesystem
    key: config
```
but the registry deployment log contains messages:
```
... "detail":"filesystem: mkdir /var/lib/registry: permission denied" ... "go_version":"go1.21.7", ... "version":"v3.88.1-gitlab" ...
```
`kubectl exec deployment.apps/gitlab-xxx-registry -- mkdir /var/lib/registry` gives error:
```
mkdir: cannot create directory ‘/var/lib/registry’: Permission denied
```
I created a persistent volume:
```
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    app: gitlab-registry
  name: gitlab-persistentvolume-registry
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 12Gi
  hostPath:
    path: /opt/volume/gitlab-registry/
    type: ""
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
```
But the GitLab container registry Helm chart offers no option, such as a PersistentVolumeClaim or `extraVolumeMounts`, to attach this volume to the registry deployment.
## Current behavior
No way to attach an existing persistent volume to the GitLab container registry.
## Expected behavior
Expected a commonly used approach such as `extraVolumeMounts` for attaching volumes to the registry deployment.
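For illustration only, a hypothetical values sketch of what such an option could look like. Neither `extraVolumes` nor `extraVolumeMounts` exists in the registry chart today; the names and shape below are assumptions modeled on similar options in other charts:

```yaml
registry:
  # hypothetical: not currently supported by the chart
  extraVolumes:
    - name: registry-data
      persistentVolumeClaim:
        claimName: gitlab-registry-pvc   # assumed PVC bound to the PV above
  extraVolumeMounts:
    - name: registry-data
      mountPath: /var/lib/registry
```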
## Versions
- Chart: gitlab-7.9.1
- Platform:
- Self-hosted: k0s
- Kubernetes: (`kubectl version`)
- Client: v1.29.1
- Server: v1.29.1+k0s
- Helm: (`helm version`)
- Client: v3.14.0

---

# CNG: Use Distroless base image

- Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/1741
- Updated: 2024-02-28
- Author: Hossein Pursultani

Currently we release two sets of CNG images: standard images based on Debian Stretch (the Slim version), and UBI images based on UBI RHEL8. The UBI images go through security scanning; however, the standard images are lagging behind.
[Distroless](https://github.com/GoogleContainerTools/distroless) "images contain only application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution". Not only are they very small in size, but they also have clear security advantages.
Distroless is based on Debian. The process of building a Distroless image is in line with our separation of build stages and layering of artifacts.
Of note, Red Hat certification has a hard requirement that the image be `FROM ubi` or [some variant](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/building_running_and_managing_containers/index) (`ubi`, `ubi-minimal`, `ubi-micro`).
/cc @gitlab-org/distribution

Milestone: Backlog

---

# remote backup s3 helm

- Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/2552
- Updated: 2024-02-08
- Author: Marc Streeter

There is documentation for how to set up remote backup with S3 using `/etc/gitlab/gitlab.rb` here:
https://docs.gitlab.com/ee/raketasks/backup_restore.html#using-amazon-s3
I'm looking for how to express this with helm `values.yml` somewhere around here:
https://docs.gitlab.com/charts/charts/globals.html

---

# NGINX sometimes does not take ownership of Ingress objects

- Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4997
- Updated: 2024-02-08
- Author: Mitchell Nielsen

## Summary
We're noticing more frequently that NGINX sometimes does not take ownership of related Ingress objects.
### Symptoms
* Ingresses don't have an associated IP address.
* Tests fail because they can't resolve the domain.
* External DNS doesn't create a record for a domain.
### Workarounds
Almost every time, the workaround is to delete the NGINX Pod. When a new one starts, it successfully takes ownership of the Ingress and gives it the IP address.
## Related issues
Related to https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4866
As well as:
* https://github.com/kubernetes/ingress-nginx/issues/9932
* https://github.com/kubernetes/ingress-nginx/issues/9438
* https://github.com/kubernetes/ingress-nginx/issues/9412

Milestone: Backlog

---

# success of certificates initContainer needs to take into account components of /scripts/bundle-certificates failing

- Issue: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/2792
- Updated: 2024-02-02
- Author: Ben Prescott_
## Summary
Rails elements of a customer's Helm deployment of GitLab were not trusting their corporate CA because `update-ca-certificates` in `/scripts/bundle-certificates` was not completing its job: no certificate hashes were created.
Customer requested
- we add error handling so failure of `update-ca-certificates` is noisy and visible
- we detect that no hashes have been created
More details in:
- A customer issue ([ticket](https://gitlab.zendesk.com/agent/tickets/216740) for GitLab team members)
- Discussion in related issue https://gitlab.com/gitlab-org/charts/gitlab/-/issues/2774#note_608458355
## Steps to reproduce
In general, the script won't detect and handle `update-ca-certificates` failing.
- script modified so the binary is missing:
```
/bin # sh -x /scripts/bundle-certificates
+ ls -1 /etc/ssl/certs/
+ wc -l
+ '[' 0 -gt 0 ]
+ update-ca-certificatez
/scripts/bundle-certificates: line 12: update-ca-certificatez: not found
+ readlink -f '/etc/ssl/certs/*.pem'
+ origin='/etc/ssl/certs/*.pem'
+ originPath=/etc/ssl/certs
+ '[' /etc/ssl/certs '!=' /etc/ssl/certs ]
+ readlink -f '/etc/ssl/certs/*.crt'
+ origin='/etc/ssl/certs/*.crt'
+ originPath=/etc/ssl/certs
+ '[' /etc/ssl/certs '!=' /etc/ssl/certs ]
/bin # echo $?
0
```
- non-executable binary:
```
/bin # sh -x /scripts/bundle-certificates
+ wc -l
+ ls -1 /etc/ssl/certs/
+ '[' 0 -gt 0 ]
+ update-ca-certificates
/scripts/bundle-certificates: line 12: update-ca-certificates: Permission denied
+ readlink -f '/etc/ssl/certs/*.pem'
+ origin='/etc/ssl/certs/*.pem'
+ originPath=/etc/ssl/certs
+ '[' /etc/ssl/certs '!=' /etc/ssl/certs ]
+ readlink -f '/etc/ssl/certs/*.crt'
+ origin='/etc/ssl/certs/*.crt'
+ originPath=/etc/ssl/certs
+ '[' /etc/ssl/certs '!=' /etc/ssl/certs ]
/bin # echo $?
0
```
- executes, but fails (binary replaced with a script)
```
/bin # sh -x /scripts/bundle-certificates
+ ls -1 /etc/ssl/certs/
+ wc -l
+ '[' 0 -gt 0 ]
+ update-ca-certificates
oh no, I failed horribly
+ echo 'the return code was 1'
the return code was 1
+ readlink -f '/etc/ssl/certs/*.pem'
+ origin='/etc/ssl/certs/*.pem'
+ originPath=/etc/ssl/certs
+ '[' /etc/ssl/certs '!=' /etc/ssl/certs ]
+ readlink -f '/etc/ssl/certs/*.crt'
+ origin='/etc/ssl/certs/*.crt'
+ originPath=/etc/ssl/certs
+ '[' /etc/ssl/certs '!=' /etc/ssl/certs ]
/bin # echo $?
0
```
## Configuration used
n/a
## Current behavior
It looks likely that [update-ca-certificates](https://gitlab.com/gitlab-org/build/CNG/-/blob/14-0-stable/alpine-certificates/scripts/bundle-certificates#L12) was segmentation faulting, owing to security software in the customer's environment.
The script does not detect `update-ca-certificates` failing, nor that it created no hashes.
It appears that Go doesn't require the hashes, so GitLab trusts the CA to some extent (Workhorse, for example), which complicates diagnosing the issue.
## Expected behavior
- detect failure of `update-ca-certificates` and ensure the script terminates, e.g.:
```
update-ca-certificates || exit 1
```
- measure whether certificate hashes are present.
```
if update-ca-certificates ; then
  if [ 0 -eq "$(ls /etc/ssl/certs/*.0 | grep -c ssl/certs)" ] ; then
    echo "no certificate hashes created"
    exit 1
  fi
else
  echo "update-ca-certificates failed"
  exit 1
fi
```
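The hash check can also be exercised on its own. A small standalone sketch (the directory is a parameter so it can be tested outside the container; `update-ca-certificates` itself is not invoked, and the function name is an assumption):

```shell
# Check for OpenSSL hash symlinks (*.0) in a certs directory and fail
# loudly when none are present -- the condition the initContainer
# should detect after running update-ca-certificates.
check_cert_hashes() {
  dir="$1"
  # If the glob matches nothing, it stays literal and the file test fails.
  set -- "$dir"/*.0
  if [ ! -e "$1" ]; then
    echo "no certificate hashes created in $dir" >&2
    return 1
  fi
  return 0
}
```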
## Versions
Up to and including chart 5.0.0 (GL14)
## Relevant logs
```
# kubectl logs gitlab-webservice-default-pod -c certificates
Segmentation fault
```https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4352Add support for multiple backup configurations2024-01-31T22:18:01ZNicholas StauderAdd support for multiple backup configurations## Summary
I would like to configure separate daily and monthly backups through the Gitlab Helm chart, but currently there is only support for one configuration.
## Configuration used
```yaml
backups:
cron:
enabled: true...## Summary
I would like to configure separate daily and monthly backups through the Gitlab Helm chart, but currently there is only support for one configuration.
## Configuration used
```yaml
backups:
  cron:
    enabled: true
    concurrencyPolicy: Replace
    persistence:
      enabled: true
      accessMode: 'ReadWriteOnce'
      size: '750Gi'
    resources:
      requests:
        cpu: '1000m'
        memory: '1024M'
    schedule: '0 1 * * *'
    extraArgs: '--skip registry --skip uploads --skip artifacts --skip lfs --skip packages --skip external_diffs --skip terraform_state --skip ci_secure_files'
  objectStorage:
    config:
      secret:
      key: config
      gcpProject:
    backend: gcs
```
## Current behavior
A single backup configuration works, but a second configuration is not supported. I bypass this by exporting the CronJob to JSON, replacing the config details, and applying it with kubectl.
## Expected behavior
Multiple backup cron configurations supported.
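For illustration only, a hypothetical values shape for such a feature. The `crons` list does not exist in the chart today; every name below is an assumption, not a proposal the chart has accepted:

```yaml
backups:
  crons:                        # hypothetical list, not a real chart option
    - name: daily
      schedule: '0 1 * * *'
      extraArgs: '--skip registry --skip uploads'
    - name: monthly
      schedule: '0 2 1 * *'
  objectStorage:
    backend: gcs
```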
## Versions
- Chart: 6.9.2
- Platform:
- Cloud: GKE
- Kubernetes: (`kubectl version`)
- Client: 1.25.2
- Server: 1.24.10-gke.1200
- Helm: (`helm version`)
- Client: 3.11.0
- Server: