GitLab Chart issues — https://gitlab.com/gitlab-org/charts/gitlab/-/issues

# Adding backup cronjob specific labels
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5025 (João Cardoso, 2023-10-03)
<!--
NOTICE: This Issue tracker is for the GitLab Helm chart, not the GitLab Rails application.
Support: Please do not raise support issues for GitLab.com on this tracker. See https://about.gitlab.com/support/
-->
## Summary
It would be nice to have the possibility of adding labels specific to the backup cronjob.
At the moment, only the standard/common labels that are also picked up by the other pods in the deployment can be added (see [backup-job.yaml](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/charts/gitlab/charts/toolbox/templates/backup-job.yaml?ref_type=heads#L33)).
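As a sketch, a dedicated values knob for this could look like the following; the `labels` key under `backups.cron` is purely illustrative and does not exist in the chart today:

```yaml
# Hypothetical values.yaml override; the per-cronjob `labels` key is illustrative.
gitlab:
  toolbox:
    backups:
      cron:
        enabled: true
        labels:
          elastic-index: gitlab-backups
```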
This would help, for example, in exporting the job's logs to Elastic by allowing the addition of the `elastic-index` label.

# Make spec/features/backups_spec.rb more robust against GKE autoscaler
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5002 (João Alexandre Cunha, 2024-02-09)
## Issue
While investigating https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5001, which complained about the toolbox pod not being found, we identified that the pod was indeed created at some point and then rescheduled because the GCP autoscaler scaled down the nodes.
Since the spec knew the exact name of the pod to look for, and we found that pod name by checking pods with `--field-selector=status.phase=Running`, we were indeed affected by the autoscaler rather than by the pod never being present.
### Relevant logs
```shell
Failures:
1) Restoring a backup Backups Should be able to backup an identical tar
Failure/Error: expect(status.success?).to be(true), "Error backing up instance: #{stdout}"
Error backing up instance: Error from server (NotFound): pods "gke125-production-bz1tbp-toolbox-75df9cd6d5-dg28r" not found
# ./spec/features/backups_spec.rb:102:in `block (3 levels) in <top (required)>'
```
## Proposal
[From our discussion](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5001#note_1563829073) in the upstream issue:
We'd like to block the autoscaler from scaling down these pods, which our tests depend on, by patching our deployments with `cluster-autoscaler.kubernetes.io/safe-to-evict: false` before the tests run.
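As a sketch, such an override could look like this in values (assuming the toolbox chart wires an `annotations` key through to the pod template; the exact key placement may differ):

```yaml
# Hypothetical override; whether `annotations` lands on the pod template
# depends on the chart's wiring.
gitlab:
  toolbox:
    annotations:
      cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
```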
NOTE: This is currently hard-coded into the Toolbox Deployment's `.spec.template.metadata.annotations`.

# Provide example values file for group SAML configuration
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/5000 (Mitchell Nielsen, 2023-09-19)

## Summary
The following discussion from !3235 should be addressed:
- [ ] @mnielsen started a [discussion](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/3235#note_1464461017): (+2 comments)
> **suggestion(non-blocking)**: I see there's a brief example for `group SAML` a bit lower in this file, but perhaps it'd be helpful to create a file under `examples/` that shows more complete samples of how to use this configuration. What do you think?

# [CI] Add nginx/external-dns controller checks
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4983 (Dmytro Makovey, 2023-09-11)
## Summary
We've been observing occasional instances of `nginx` and `external-dns` "hangs", where new Ingress objects are not properly processed, resulting in missing TLS certificates, DNS entries, etc.
## Steps to reproduce
Observe CI pipelines for a while until failure
## Suggested solution
Add checks for proper Ingress object processing (DNS entries, external IP assignment, etc.) prior to chart checks and, if necessary, restart the relevant pods.

# Identify strategy for testing multi-arch images in arm64
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4978 (Nailia Iskhakova, 2024-02-28)

## Summary
With upcoming multi-arch images for CNG https://gitlab.com/groups/gitlab-org/-/epics/10938+, the issue is to review whether arm64 verification for multi-arch images is needed in CI: for example, switching one of the existing CI clusters to the `arm64` architecture, or having a scheduled pipeline if CI review is switched to vCluster.
Known limitation in GKE - [taint on arm64 nodes](https://cloud.google.com/kubernetes-engine/docs/how-to/prepare-arm-workloads-for-deployment#overview).
## Context
- https://gitlab.com/groups/gitlab-org/-/epics/10938+
- https://gitlab.com/gitlab-org/build/CNG/-/issues/522+

# Add support and test for Sentinel requirepass
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4975 (Stan Hu, 2023-09-22)

In https://gitlab.com/gitlab-org/gitlab/-/issues/235938, we added support in Omnibus GitLab to use Sentinel `requirepass`.
This may not be fully supported in the GitLab Helm Chart at the moment. We may need to add config support in various places.
For example:
1. Workhorse: https://gitlab.com/gitlab-org/gitlab/-/issues/422820
2. MailRoom: https://github.com/redis-rb/redis-client/pull/137 added support in redis-client v0.17.0 for passing in a separate Sentinel password.
We may need to:
1. Spin up a Redis cluster with and without a Sentinel `requirepass` set. I believe GitLab.com runs a Redis cluster with a Redis password, but no Sentinel password.
2. Test all services work with this configuration.
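For illustration, a configuration sketch with a hypothetical separate Sentinel password; the `sentinelAuth` key and all names are illustrative, not confirmed chart values:

```yaml
global:
  redis:
    host: gitlab-redis
    auth:
      enabled: true
      secret: gitlab-redis-secret
      key: redis-password
    sentinels:
      - host: sentinel-1.example.com
        port: 26379
    # Hypothetical: a separate Sentinel password, mirroring Omnibus requirepass support.
    sentinelAuth:
      enabled: true
      secret: gitlab-redis-sentinel-secret
      key: sentinel-password
```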
Related issues:
* https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4965

# Ability to use smtp:username as a secret
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4962 (Leandro Lamaison, 2023-10-12)
## Summary
We need to use an AWS access key ID in the SMTP user field to connect to an AWS Simple Email Service endpoint.
This is detected by Gitleaks as a critical issue.
In 7.3.0 the way the SMTP password is provided was enhanced, but the SMTP username is still not handled as a secret:
https://gitlab.com/gitlab-org/charts/gitlab/-/commit/ca343d8c83bf8483002775a04e9dd5cc4cf9fb39
## Steps to reproduce
Description:
Running Gitleaks in a pipeline flags `smtp.user_name` as a critical vulnerability when it contains an AWS access key ID (which should be stored as a secret instead).
## Configuration used
From values file:
```yaml
smtp:
  enabled: true
  address: email-smtp.eu-central-1.amazonaws.com
  port: 587
  user_name: <AWS Access key ID>
  password:
    secret: gitlab-smtp-password
    key: password
  domain: "gitlab.xyz.aws.xyz.net"
  authentication: login
  starttls_auto: true
  openssl_verify_mode: peer
  pool: false
```
## Current behavior
The AWS Access key ID is detected as a Critical Vulnerability
## Expected behavior
The GitLab Helm chart allows the `smtp.user_name` value to be provided as a Kubernetes Secret.
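A sketch of the desired shape, mirroring how `password` already references a Secret; the `user_name.secret`/`user_name.key` keys are illustrative and do not exist in the chart yet:

```yaml
smtp:
  enabled: true
  address: email-smtp.eu-central-1.amazonaws.com
  # Hypothetical: user_name sourced from a Secret instead of plain text.
  user_name:
    secret: gitlab-smtp-username
    key: username
  password:
    secret: gitlab-smtp-password
    key: password
```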
## Versions
- Chart: (v7.3.0)
- Platform:
- Cloud: ( EKS )
- Self-hosted: (-)
- Kubernetes: (`kubectl version`)
- Client: 1.24
- Server: 1.24
- Helm: (`helm version`)
- Client: -
- Server: -
## Relevant logs
```
AWS Access Token secret has been found in commit
Project: / devops / cluster-management
File: clusters/services-shared/addons/gitlab.yaml:80
Identifiers: Gitleaks rule ID AWS
Severity: Critical
Tool: Secret Detection
Scanner Provider: Gitleaks
```

# Investigate pipeline failures due to kubectl rollout status timeouts
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4957 (Andrew Patterson, 2023-08-28)

We are getting occasional pipeline failures due to deployment rollout timeouts. See
- gitlab-org/charts/gitlab#4952
- gitlab-org/charts/gitlab#4954
- gitlab-org/charts/gitlab#4955
Increasing the timeout in `spec/gitlab_test_helper.rb:wait_for_rollout` from `120s` to `240s` may help prevent future occurrences, but we should also add more logging around the deployment to help us determine the root cause.

# Ensure HA workloads don't run on same node
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4956 (Ben Bodenmiller, 2023-09-12, Milestone: Next 1-3 releases)

Following https://docs.gitlab.com/ee/administration/reference_architectures/3k_users.html#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative with the current chart and https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/examples/ref/3k.yaml, it appears that user-facing, time-sensitive GitLab components (e.g. GitLab webservice) can run on the same node, which can cause brief outages if that node fails or is unexpectedly replaced. Can the chart be updated to prevent this? Any workarounds in the meantime?

# Add support for pages to consolidated object storage config
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4950 (Ben Bodenmiller, 2023-08-23)

## Summary
[Chart consolidated object storage config](https://docs.gitlab.com/charts/charts/globals#consolidated-object-storage) does not support Pages while [upstream does](https://docs.gitlab.com/ee/administration/object_storage.html#configure-each-object-type-to-define-its-own-storage-connection-storage-specific-form). Support should be added to the chart, as consolidated object storage has many advantages per https://docs.gitlab.com/ee/administration/object_storage.html#configure-a-single-storage-connection-for-all-object-types-consolidated-form.

# Add appProtocol option with uppercase support to nginx-ingress-controller service
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4937 (Vojtech Grosser, 2023-08-17)

## Summary
For GKE with Gateway API, the nginx-ingress-controller service has to have `spec.ports.*.appProtocol` defined. Additionally, for GKE the value needs to be uppercase.
I found https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2705, where this was discussed. With Kubernetes 1.19 out of support, I am hoping this can be revisited.
## Steps to reproduce
Helm install
## Current behavior
The service is created without `appProtocol:`
```yaml
apiVersion: v1
kind: Service
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
      nodePort: 00000
    - name: https
      protocol: TCP
      port: 443
      targetPort: https
      nodePort: 00000
    - name: gitlab-shell
      protocol: TCP
      port: 22
      targetPort: gitlab-shell
      nodePort: 00000
  selector:
    app: nginx-ingress
    component: controller
    release: gitlab
  clusterIP: 00.00.00.00
  clusterIPs:
    - 00.00.00.00
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Local
  healthCheckNodePort: 00000
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster
```
---
## Expected behavior
The service is created with `appProtocol`, with an option to make its value uppercase for GKE:
```yaml
apiVersion: v1
kind: Service
spec:
  ports:
    - name: http
      protocol: TCP
      appProtocol: HTTP
      port: 80
      targetPort: http
      nodePort: 00000
    - name: https
      protocol: TCP
      appProtocol: HTTPS
      port: 443
      targetPort: https
      nodePort: 00000
    - name: gitlab-shell
      protocol: TCP
      port: 22
      targetPort: gitlab-shell
      nodePort: 00000
  selector:
    app: nginx-ingress
    component: controller
    release: gitlab
  clusterIP: 00.00.00.00
  clusterIPs:
    - 00.00.00.00
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Local
  healthCheckNodePort: 00000
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster
```
Something like:
```yaml
nginx-ingress:
  controller:
    service:
      appProtocol: true
      appProtocolInUpperCase: true
```
---
## Versions
- Chart: latest (7.2.x)
- Platform:
- Cloud: (GKE | AKS | EKS | ?)
- Self-hosted: (OpenShift | Minikube | Rancher RKE | ?)
- Kubernetes: (`kubectl version`)
- Client:
- Server:
- Helm: (`helm version`)
- Client:
- Server:

Milestone: Next 1-3 releases

# RSpec: implement gomplate testing patterns throughout
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4905 (Jason Plum, 2023-09-05)

## Summary
Following #3366, we can now test gomplate content directly as rendered. We should evaluate each RSpec test of gomplate content to determine whether it needs to assert on the rendered result or simply on the presence of a string within the gomplate content.
## Description
Currently, there is a mix of direct `YAML.safe_load()` of gomplate content, and direct `to contain()` string comparison. We should evaluate each instance of any test of gomplate, determining if it should be an evaluation of the final content as YAML, or a specific match to the content of the gomplate template itself (via string comparators).
Any instance which should be a YAML evaluation should be translated to make use of `RuntimeTemplate.gomplate` from the recently added (!3289) `spec/runtime_template_helper.rb`.
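The difference between the two styles can be sketched in plain Ruby (`RuntimeTemplate.gomplate` itself is a chart helper and is not reproduced here):

```ruby
require 'yaml'

# Stand-in for gomplate output after rendering: plain YAML text.
rendered = <<~YAML
  production:
    gitlab:
      host: example.com
YAML

# Style 1: string comparison against the rendered content.
# Brittle: whitespace or key-ordering changes break it.
string_match = rendered.include?('host: example.com')

# Style 2: parse the rendered result and assert on the structure.
config = YAML.safe_load(rendered)
structural_match = config.dig('production', 'gitlab', 'host') == 'example.com'
```

The structural check survives formatting changes in the rendered output, which is the main motivation for migrating to `RuntimeTemplate.gomplate` where the final YAML is what matters.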
## Process
- Identify all instances (listing of all RSpec files doing this)
- Determine appropriate check for each instance within each file
- Replace those as necessary, as a single RSpec file per MR to keep reviews small
- Replace any prior implementations of gomplate rendering
## Acceptance Criteria
- [ ] All uses of RSpec on gomplate content have been evaluated for replacement
- [ ] All instances of such gomplate have been updated to make use of `RuntimeTemplate.gomplate` as appropriate.
- [ ] Any instances of prior implementation are removed (!3098, !3151)

# RSpec: implement ERB testing patterns throughout
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4904 (Jason Plum, 2023-08-30)

## Summary
Following #4123, we can now test ERB content directly as rendered. We should evaluate each RSpec test of ERB content to determine whether it needs to assert on the rendered result or simply on the presence of a string within the ERB content.
## Description
Currently, there is a mix of direct `YAML.safe_load()` of ERB content, and direct `to contain()` string comparison. We should evaluate each instance of any test of ERB, determining if it should be an evaluation of the final content as YAML, or a specific match to the content of the ERB template itself (via string comparators).
Any instance which should be a YAML evaluation should be translated to make use of `RuntimeTemplate.erb` from the recently added (!3289) `spec/runtime_template_helper.rb`.
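As a minimal illustration of rendering ERB and asserting on the parsed structure (the chart's `RuntimeTemplate.erb` helper wraps this same idea):

```ruby
require 'erb'
require 'yaml'

# Minimal ERB template standing in for a chart-rendered config file.
template = <<~TEMPLATE
  production:
    gitlab:
      host: <%= host %>
TEMPLATE

host = 'example.com'
rendered = ERB.new(template).result(binding)

# Assert on the parsed structure rather than on the raw ERB string.
config = YAML.safe_load(rendered)
```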
## Process
- Identify all instances (listing of all RSpec files doing this)
- Determine appropriate check for each instance within each file
- Replace those as necessary, as a single RSpec file per MR to keep reviews small
## Acceptance Criteria
- [ ] All uses of RSpec on ERB content have been evaluated for replacement
- [ ] All instances of such ERB have been updated to make use of `RuntimeTemplate.erb` as appropriate.

# add node selector for registry chart migrations-job.yaml
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4902 (Nguyen Trong, 2023-07-28)
## Summary
(Summarize the bug encountered, as concisely as possible)
## Steps to reproduce
(Please provide the steps to reproduce the issue)
## Configuration used
(Please provide a _sanitized_ version of the configuration used wrapped in a code block (```yaml))
```yaml
(Paste sanitized configuration here)
```
## Current behavior
(What you're experiencing happening)
## Expected behavior
(What you're expecting to happen)
## Versions
- Chart: (tagged version | branch | hash `git rev-parse HEAD`)
- Platform:
- Cloud: (GKE | AKS | EKS | ?)
- Self-hosted: (OpenShift | Minikube | Rancher RKE | ?)
- Kubernetes: (`kubectl version`)
- Client:
- Server:
- Helm: (`helm version`)
- Client:
- Server:
## Relevant logs
(Please provide any relevant log snippets you have collected, using code blocks (```) to format)

# [Spike] Consider triggering downstream Operator pipeline to validate impact of Charts changes
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4900 (Mitchell Nielsen, 2023-07-27)

## Summary
CNG has a [downstream Charts trigger CI job](https://gitlab.com/gitlab-org/build/CNG/-/blob/c4ed59a3386c077b09dcacf9b0364c0e65030914/.gitlab/ci/images.gitlab-ci.yml#L17-49)
that runs a Charts pipeline with the images built as a result of the changes in a CNG MR.
This helps us evaluate the impact of changes to the CNG on the Charts project.
https://gitlab.com/gitlab-org/distribution/team-tasks/-/issues/1331 was opened to address a similar need
for the Operator, which depends on the Charts similar to the way the Charts depends on the CNG.
We could create a similar trigger job that will spawn an Operator pipeline that tests the changes provided
in a Charts MR.
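A sketch of what such a trigger job could look like in `.gitlab-ci.yml`; the project path, variable name, and rules are illustrative, modeled loosely on the CNG→Charts job:

```yaml
# Hypothetical downstream trigger job.
trigger-operator-pipeline:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  variables:
    # Hypothetical variable the Operator pipeline would consume.
    CHART_REF: "$CI_COMMIT_SHA"
  trigger:
    project: gitlab-org/cloud-native/gitlab-operator
    strategy: depend
```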
## Acceptance criteria
- [ ] Feasibility of the CI job is evaluated (at a high level, what changes are required to make this work?)
- [ ] Follow-up issue(s) created

# gitlab-shell health check produces nonsense errors
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4899 (Elliot Courant, 2023-07-27)
## Summary
Right now gitlab-shell is deployed with readiness and liveness probes that look like this:
#### Liveness
The liveness probe executes a health check script which simply looks for the PID file that sshd produces, and checks to see if said PID is actively running.
```yaml
livenessProbe:
  exec:
    command:
      - /scripts/healthcheck
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 3
```
#### Readiness
The readiness check however, looks to see if the TCP port sshd listens on is active.
```yaml
readinessProbe:
  failureThreshold: 2
  initialDelaySeconds: 10
  periodSeconds: 5
  successThreshold: 1
  tcpSocket:
    port: 22
  timeoutSeconds: 3
```
## Steps to reproduce
Deploy `gitlab-shell` with the readiness probe enabled.
## Configuration used
I don't have anything special on my gitlab-shell values in helm besides changing the port (which is not reflected in the yaml above):
```yaml
## doc/charts/globals.md#configure-gitlab-shell-settings
shell:
  port: 2223
  authToken: { }
  # secret:
  # key:
  hostKeys: { }
  # secret:
```
The resulting Kube yaml from this is:
```yaml
# Source: gitlab/charts/gitlab/charts/gitlab/charts/gitlab-shell/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-gitlab-shell
  namespace: elliot-gitlab
  labels:
    app: gitlab-shell
    chart: gitlab-shell-7.1.2
    release: gitlab
    heritage: Helm
  annotations:
    app.gitlab.com/app: ""
    app.gitlab.com/env: ""
spec:
  selector:
    matchLabels:
      app: gitlab-shell
      release: gitlab
  template:
    metadata:
      labels:
        app: gitlab-shell
        chart: gitlab-shell-7.1.2
        release: gitlab
        heritage: Helm
      annotations:
        checksum/config: d208b4e7ec82e26f0b2ed6679e719b249f9f8038fc9c56711fc2e866d72eded7
        checksum/config-sshd: fa8dbe79bd486f3edb6056d1c714ea628319a09aa77887dade5ac24c82ab9012
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    spec:
      initContainers:
        - name: certificates
          image: registry.gitlab.com/gitlab-org/build/cng/alpine-certificates:20191127-r2
          env:
          volumeMounts:
            - name: etc-ssl-certs
              mountPath: /etc/ssl/certs
              readOnly: false
            - name: etc-pki-ca-trust-extracted-pem
              mountPath: /etc/pki/ca-trust/extracted/pem
              readOnly: false
            - name: custom-ca-certificates
              mountPath: /usr/local/share/ca-certificates
              readOnly: true
          resources:
            requests:
              cpu: 50m
        - name: configure
          command: ['sh', '/config/configure']
          image: "registry.gitlab.com/gitlab-org/cloud-native/mirror/images/busybox:latest"
          env:
          volumeMounts:
            - name: shell-config
              mountPath: /config
              readOnly: true
            - name: shell-init-secrets
              mountPath: /init-config
              readOnly: true
            - name: shell-secrets
              mountPath: /init-secrets
              readOnly: false
          resources:
            requests:
              cpu: 50m
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                topologyKey: "kubernetes.io/hostname"
                labelSelector:
                  matchLabels:
                    app: gitlab-shell
                    release: gitlab
      automountServiceAccountToken: false
      containers:
        - name: gitlab-shell
          image: "registry.gitlab.com/gitlab-org/build/cng/gitlab-shell:v14.23.0"
          securityContext:
            runAsUser: 1000
          ports:
            - containerPort: 2223
              name: ssh
          env:
            - name: GITALY_FEATURE_DEFAULT_ON
              value: "1"
            - name: CONFIG_TEMPLATE_DIRECTORY
              value: '/etc/gitlab-shell'
            - name: CONFIG_DIRECTORY
              value: '/srv/gitlab-shell'
            - name: KEYS_DIRECTORY
              value: '/etc/gitlab-secrets/ssh'
            - name: SSH_DAEMON
              value: "openssh"
          volumeMounts:
            - name: shell-config
              mountPath: '/etc/gitlab-shell'
            - name: shell-secrets
              mountPath: '/etc/gitlab-secrets'
              readOnly: true
            - name: shell-config
              mountPath: '/etc/krb5.conf'
              subPath: krb5.conf
              readOnly: true
            - name: sshd-config
              mountPath: /etc/ssh/sshd_config
              subPath: sshd_config
              readOnly: true
            - name: etc-ssl-certs
              mountPath: /etc/ssl/certs/
              readOnly: true
            - name: etc-pki-ca-trust-extracted-pem
              mountPath: /etc/pki/ca-trust/extracted/pem
              readOnly: true
          livenessProbe:
            exec:
              command:
                - /scripts/healthcheck
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 3
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            tcpSocket:
              port: 2223
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 3
            successThreshold: 1
            failureThreshold: 2
          resources:
            requests:
              cpu: 0
              memory: 6M
      terminationGracePeriodSeconds: 30
      volumes:
        - name: shell-config
          configMap:
            name: gitlab-gitlab-shell
        - name: sshd-config
          configMap:
            name: gitlab-gitlab-shell-sshd
        - name: shell-init-secrets
          projected:
            defaultMode: 0440
            sources:
              - secret:
                  name: "gitlab-gitlab-shell-host-keys"
              - secret:
                  name: "gitlab-gitlab-shell-secret"
                  items:
                    - key: "secret"
                      path: shell/.gitlab_shell_secret
        # Actual config dirs that will be used in the container
        - name: shell-secrets
          emptyDir:
            medium: "Memory"
        - name: etc-ssl-certs
          emptyDir:
            medium: "Memory"
        - name: etc-pki-ca-trust-extracted-pem
          emptyDir:
            medium: "Memory"
        - name: custom-ca-certificates
          projected:
            defaultMode: 0440
            sources:
              - secret:
                  name: gitlab-wildcard-tls-ca
      nodeSelector:
        kubernetes.io/arch: amd64
```
^ Generated using `helm template`
## Current behavior
As a result, every time the Kubernetes readiness probe checks whether the gitlab-shell pod is "ready", sshd logs the following message:
```
kex_exchange_identification: Connection closed by remote host
```
This ends up filling up the log output for the pod itself with this:
```
{"component": "gitlab-shell","subcomponent":"ssh","level":"unknown","time":"2023-07-27T18:08:55Z","message":"kex_exchange_identification: Connection closed by remote host\r"}
```
## Expected behavior
Once I understood the cause, I'm not terribly concerned, but at a glance it looks as though another service using GitLab is not working properly, or a client is experiencing odd failures.
I'd expect the readiness probe to be implemented such that the logs do not produce errors that actually require no action.
## Versions
- Chart: `7.1.2`
- Platform:
- Cloud: N/A
- Self-hosted: `1.27.3`
- Kubernetes: (`kubectl version`)
- Client: `Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.4", GitCommit:"fa3d7990104d7c1f16943a67f11b154b71f6a132", GitTreeState:"clean", BuildDate:"2023-07-19T12:20:54Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}`
- Server: `Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-14T09:47:40Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"}`
- Helm: (`helm version`)
- Client: `3.11.1`
- Server: N/A
## Relevant logs
(Provided above)
---
As a suggestion, a `/scripts/readinesscheck` that executes something like this:
```bash
/bin/bash -c 'ssh -T 127.0.0.1 -p 22 -o "StrictHostKeyChecking=no" || true' 2>&1 | grep Permission
```
This would actually validate that sshd is ready to serve traffic on that port, rather than just that the port is open. It would also not produce worrisome logs that don't need any action taken on them.

# Document ERB and gomplate test helpers
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4898 (Mitchell Nielsen, 2023-08-30)

## Summary
The following discussion from !3289 should be addressed:
- [ ] @mnielsen started a [discussion](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/3289#note_1489653864): (+1 comment)
> @WarheadsSE Looking at the `Documentation created/updated` checklist item, does it make sense to document this in [doc/development/rspec](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/doc/development/rspec.md)?
## Acceptance criteria
- [ ] `doc/development/rspec` documentation is updated with information and examples on how to use the new ERB and gomplate helpers

Milestone: Next 1-3 releases

# Don't add zoekt config to gitlab.yml when secrets not mounted
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4879 (Dylan Griffith, 2023-08-17)

The following discussion from !3184 should be addressed:
- [ ] @WarheadsSE started a [discussion](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/3184#note_1475227230): (+4 comments)
> :thinking: A large number of the `specs_without_cluster` failed, with Psych parser errors ... :mag:

# Ensure that object storage 'connection' key is specified when needed
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4844 (Mitchell Nielsen, 2023-06-27)

## Summary
Context: https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/issues/1259#note_1429304365
While we have [_checkConfig_object_storage.tpl](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/templates/_checkConfig_object_storage.tpl), we may have a configuration scenario that is not covered by these checks.
We need to ensure that the `connection` key is specified in all scenarios in which it is needed.
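For illustration, the consolidated form with a `connection` Secret looks roughly like this; Secret and bucket names are placeholders:

```yaml
global:
  appConfig:
    object_store:
      enabled: true
      # The check should catch configurations where this key is missing.
      connection:
        secret: gitlab-object-storage
        key: connection
      artifacts:
        bucket: gitlab-artifacts
```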
## Acceptance criteria
- [ ] Missing configuration scenario(s) identified
- [ ] checkConfig file is updated to include scenario(s)
- [ ] Specs are updated to test for scenario(s)

# Issues about custom certificate authorities
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/4822 (thor.j 정태균, 2023-07-26)
## Summary
Recently, we updated GitLab from v15.7.5 to v15.11.8.
In GitLab 15.11.8, the image that manages custom certificate authorities has been changed to a Debian-based image, as shown below.
```
registry.gitlab.com/gitlab-org/build/cng/alpine-certificates:20191127-r2 -> registry.gitlab.com/gitlab-org/build/cng/certificates:v15.11.8
```
We created a custom CA as a Kubernetes Secret resource to add it as a trusted certificate, and also set it in the `certificates` section of the Helm chart.
After finishing the setup, we confirmed that it was added as a trusted certificate to the `ca-certificates.crt` file under the `/etc/ssl/certs` directory of the Sidekiq pod.
However, we found that the custom CA we added was registered multiple times in that file.
## Steps to reproduce
1. gitlab helm chart update (v6.7.5 -> v6.11.8)
2. Add custom ca secret resource
3. Setting values in the certificate section in the values file
4. Check ca-certificate.crt file in sidekiq container
```
...
-----BEGIN CERTIFICATE-----
<my-custom-ca.crt content>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<my-custom-ca.crt content>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<my-custom-ca.crt content>
-----END CERTIFICATE-----
```
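One way to confirm the duplication programmatically; this sketch runs against an inline sample bundle (one certificate repeated three times) rather than the real `/etc/ssl/certs/ca-certificates.crt`:

```ruby
# Sketch: count duplicated certificate blocks in a CA bundle.
# The sample PEM content below is a placeholder standing in for the real bundle.
bundle = <<~PEM
  -----BEGIN CERTIFICATE-----
  MIIBsample
  -----END CERTIFICATE-----
  -----BEGIN CERTIFICATE-----
  MIIBsample
  -----END CERTIFICATE-----
  -----BEGIN CERTIFICATE-----
  MIIBsample
  -----END CERTIFICATE-----
PEM

# Extract each BEGIN..END block, then report blocks seen more than once.
blocks = bundle.scan(/-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----/m)
duplicated = blocks.tally.select { |_, count| count > 1 }
```

Any non-empty `duplicated` result indicates the same CA was appended to the bundle more than once.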
## Configuration used
- Custom CA secret resource
```yaml
# Secret Name: my-custom-ca
apiVersion: v1
kind: Secret
metadata:
  ...
type: Opaque
data:
  my-custom-ca.crt: ++++++++
```
- certificates section in values file
```yaml
global:
  certificates:
    image:
      repository: registry.gitlab.com/gitlab-org/build/cng/certificates
      # Default tag is `master`, overridable by `global.gitlabVersion`.
      tag: v15.11.8
      # pullPolicy: IfNotPresent
      # pullSecrets: []
    customCAs:
      - secret: my-custom-ca
```
## Current behavior
- The custom CA is registered multiple times in the file (`ca-certificates.crt`).
## Expected behavior
- The custom CA should be registered only once in the file (`ca-certificates.crt`).
## Versions
- Chart: v6.11.8
- Platform:
- Cloud: EKS
- Kubernetes: (`kubectl version`)
- Client: v1.22.2
- Server: v1.23.17-eks-c12679a
- Helm: (`helm version`)
- Client:
- Server:
## Relevant logs
(Please provide any relevant log snippets you have collected, using code blocks (```) to format)

Milestone: Next 4-6 releases