Commit c0495931 authored by Marcel Amirault, committed by Jason Plum

Clean up markdown in charts

Many fixes to Markdown in the charts docs, mostly end-of-line
whitespace and spacing above and below headers or lists.
Also fixes capitalization issues as they were found.
parent 7097c795
@@ -15,16 +15,16 @@ Follow the installation instructions for [Omnibus GitLab][]. When you perform th
## Configure Omnibus GitLab
Create a minimal `gitlab.rb` file to be placed at `/etc/gitlab/gitlab.rb`. Be very explicit about what is enabled on this node, using the contents below.
*Note*: This example is not intended to provide [PG HA](https://docs.gitlab.com/ee/administration/high_availability/database.html).
_**NOTE**: The values below should be replaced_
- `DB_USERNAME`: the default username is `gitlab`
- `DB_PASSWORD`: the unencoded password value
- `DB_ENCODED_PASSWORD`: the encoded value of `DB_PASSWORD`. It can be generated by replacing `DB_USERNAME` and `DB_PASSWORD` with real values in `echo -n 'DB_PASSWORDDB_USERNAME' | md5sum - | cut -d' ' -f1` (see the example below)
- `AUTH_CIDR_ADDRESS`: the CIDRs for MD5 authentication. These should be the smallest possible subnets of your cluster or its gateway. For Minikube, this value is `192.168.100.0/12`
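For example, the encoded value can be generated in a shell like this (the username and password shown are placeholders):

```shell
# Substitute your real values; the output is the value to use for DB_ENCODED_PASSWORD.
DB_USERNAME=gitlab
DB_PASSWORD='examplepassword'
echo -n "${DB_PASSWORD}${DB_USERNAME}" | md5sum - | cut -d' ' -f1
```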
```Ruby
# Change the address below if you do not want PG to listen on all available addresses
@@ -57,6 +57,7 @@ redis['enable'] = false
```
After creating `gitlab.rb`, reconfigure the package with `gitlab-ctl reconfigure`. Once the task has completed, check the running processes with `gitlab-ctl status`. The output should be similar to the following:
```
# gitlab-ctl status
run: logrotate: (pid 4856) 1859s; run: log: (pid 31262) 77460s
@@ -19,23 +19,23 @@ To use an external database with the `gitlab` chart, there are a few prerequisit
You need to set the following parameters:
- `postgresql.install`: Set to `false` to disable the embedded database.
- `global.psql.host`: Set to the hostname of the external database; it can be a domain or an IP address.
- `global.psql.password.secret`: The name of the [secret which contains the database password for the `gitlab` user](../../installation/secrets.md#postgresql-password).
- `global.psql.password.key`: The key within the secret which contains the password. The password should be the *unencoded* value.
Items below can be further customized if you are not using the defaults:
- `global.psql.port`: The port the database is available on, defaults to `5432`.
- `global.psql.database`: The name of the database.
- `global.psql.username`: The user with access to the database.
If you use a mutual TLS connection to the database:
- `global.psql.ssl.secret`: A secret containing the client certificate, key, and certificate authority.
- `global.psql.ssl.serverCA`: The key inside the secret referring to the certificate authority (CA).
- `global.psql.ssl.clientCertificate`: The key inside the secret referring to the client certificate.
- `global.psql.ssl.clientKey`: The key inside the secret referring to the client key.
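Such a secret can be created from PEM files with `kubectl`; the secret name and key names below are illustrative and must match the `serverCA`, `clientCertificate`, and `clientKey` settings you pass to the chart:

```shell
kubectl create secret generic gitlab-psql-ssl \
  --from-file=server-ca.pem=./server-ca.pem \
  --from-file=client-cert.pem=./client-cert.pem \
  --from-file=client-key.pem=./client-key.pem
```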
For example, pass these values via helm's `--set` flag while deploying:
@@ -65,6 +65,7 @@ gitaly['key_path'] = "path/to/key.pem"
```
After creating `gitlab.rb`, reconfigure the package with `gitlab-ctl reconfigure`. Once the task has completed, check the running processes with `gitlab-ctl status`. The output should be similar to the following:
```
# gitlab-ctl status
run: gitaly: (pid 30562) 77637s; run: log: (pid 30561) 77637s
@@ -11,17 +11,16 @@ Disable the `gitaly` chart and the Gitaly service it provides, and point the oth
You need to set the following parameters:
- `gitlab.gitaly.enabled`: Set to `false` to disable the included Gitaly chart.
- `global.gitaly.host`: Set to the hostname of the external Gitaly; it can be a domain or an IP address.
- `global.gitaly.authToken.secret`: The name of the [secret which contains the token for authentication][gitaly-secret].
- `global.gitaly.authToken.key`: The key within the secret which contains the token content.
- `gitlab.gitaly.shell.authToken.secret`: The name of the [secret which contains the secret for GitLab Shell][gitlab-shell-secret].
- `gitlab.gitaly.shell.authToken.key`: The key within the secret which contains the secret content.
Items below can be further customized if you are not using the defaults:
- `global.gitaly.port`: The port Gitaly is available on, defaults to `8075`.
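The secrets referenced above must exist before deploying. A minimal sketch, with illustrative secret and key names (see the linked secrets documentation for the canonical way to create them):

```shell
kubectl create secret generic gitaly-token --from-literal=token=<gitaly auth token>
kubectl create secret generic gitlab-shell-token --from-literal=secret=<gitlab-shell secret>
```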
For example, pass these values via helm's `--set` flag while deploying:
@@ -36,4 +35,4 @@ helm install . \
```
[gitaly-secret]: ../../installation/secrets.md#gitaly-secret
[gitlab-shell-secret]: ../../installation/secrets.md#gitlab-shell-secret
@@ -8,14 +8,14 @@ this guide will help.
## TCP services in the external ingress controller
The GitLab Shell component requires TCP traffic to pass through on
port 22 (by default; this can be changed). Ingress does not directly support TCP services, so some additional configuration is necessary. Your NGINX Ingress controller may have been [deployed directly](https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md) (that is, with a Kubernetes spec file) or through the [official Helm chart](https://github.com/helm/charts/tree/master/stable/nginx-ingress). The configuration of the TCP pass-through will differ depending on the deployment approach.
### Direct deployment
In a direct deployment, the NGINX Ingress controller handles configuring TCP services with a
`ConfigMap` (see the [docs](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md)).
Assuming your GitLab chart is deployed to the namespace `gitlab` and your Helm
release is named `mygitlab`, your `ConfigMap` should be something like this:
```
@@ -57,9 +57,9 @@ tcp:
The format for the value is the same as described above in the "Direct Deployment" section.
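For example, when the controller is installed from the official Helm chart, the value can be passed on the command line; the namespace, release, and service names below mirror the earlier example and are illustrative:

```shell
helm upgrade --install nginx-ingress stable/nginx-ingress \
  --set "tcp.22=gitlab/mygitlab-gitlab-shell:22"
```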
## Customize the GitLab ingress options
The NGINX Ingress controller uses an annotation to mark which ingress controller
will service a particular `Ingress` (see [docs](https://github.com/kubernetes/ingress-nginx#annotation-ingressclass)).
You can configure the ingress class to use with this chart using the
`global.ingress.class` setting. Make sure to set this in your helm options.
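For example (the class name `external-nginx` is a placeholder and should match the class your external controller is configured with):

```shell
--set global.ingress.class=external-nginx
```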
@@ -76,12 +76,13 @@ disable the ingress controller that is deployed by default with this chart:
```
## Custom certificate management
If you are using an external ingress controller, you may also be using an external cert-manager instance
or managing your certificates in some other custom manner. The full documentation around your TLS options is [here](https://gitlab.com/charts/gitlab/blob/master/doc/installation/tls.md);
however, for the purposes of this discussion, here are the two values that would need to be set to disable the cert-manager chart and tell
the GitLab component charts to NOT look for the built-in certificate resources:
```bash
--set certmanager.install=false
# IAM roles for AWS
The default configuration for external object storage in the charts is to use access and secret keys.
It is also possible to use IAM roles in combination with [kube2iam](https://github.com/jtblin/kube2iam) or [kiam](https://github.com/uswitch/kiam).
## IAM role
@@ -28,7 +28,7 @@ s3:
region: us-east-1
```
*Note*: If you provide the keypair, the IAM role will be ignored. See the [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default) for more details.
### LFS, Artifacts, Uploads, Packages, Pseudonymizer
@@ -61,4 +61,4 @@ The [s3cmd.config](./index.md#backups-storage-example) secret is to be created w
```
[default]
bucket_location = us-east-1
```
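A sketch of creating that secret from the file above; the secret name `storage-config` and key `config` follow the linked backups storage example, so verify them against your own chart values:

```shell
kubectl create secret generic storage-config --from-file=config=s3cmd.config
```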
# External object storage
GitLab relies on object storage for highly-available persistent data in Kubernetes.
By default, an S3-compatible storage solution named `minio` is deployed with the
chart, but for production quality deployments, we recommend using a hosted
object storage solution like Google Cloud Storage or AWS S3.
@@ -87,11 +87,13 @@ diffs, and pseudonymizer is done via the `global.appConfig.lfs`,
--set global.appConfig.pseudonymizer.connection.key=connection
```
NOTE: **Note:**
Currently a different bucket is needed for each; otherwise, performing a restore from backup will not function properly.
NOTE: **Note:**
Storing MR diffs on external storage is not enabled by default. So,
for the object storage settings for `externalDiffs` to take effect,
the `global.appConfig.externalDiffs.enabled` key should have a `true` value.
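In practice, that means passing something like the following alongside the connection settings shown above (the bucket name is illustrative):

```shell
--set global.appConfig.externalDiffs.enabled=true
--set global.appConfig.externalDiffs.bucket=gitlab-mr-diffs
```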
See the [charts/globals documentation on appConfig](../../charts/globals.md#configure-appconfig-settings) for full details.
@@ -31,9 +31,9 @@ can do one of three things:
- Install directly from the [Google Storage APIs](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl).
- Install with the appropriate package management system:
  - Linux: your package manager of choice, or Snap.
  - [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos)
  - [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-chocolatey-on-windows)
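As one concrete example of the options above, a direct binary download on Linux could look like this (the version is illustrative; pin one that matches your cluster):

```shell
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```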
### Installing Minikube
@@ -65,7 +65,7 @@ change according to the pieces being tested, and the requirements as listed:
NOTE: **Note**:
This is created in your home directory under `~/.minikube/machines/minikube/`.
- `--kubernetes-version string`: The Kubernetes version that the minikube VM will use (e.g., `v1.2.3`).
- `--registry-mirror stringSlice`: Registry mirrors to pass to the Docker daemon.
NOTE: **Note:**
@@ -167,7 +167,7 @@ self-signed certificates at this time, and as such, should be disabled by settin
When using the recommended 3 CPU and 8 GB of RAM, use
[`values-minikube.yaml`](https://gitlab.com/charts/gitlab/blob/master/examples/values-minikube.yaml)
as a base.
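If the Minikube VM is not already running with those resources, a start command might look like the following (the Kubernetes version is only a placeholder):

```shell
minikube start --cpus 3 --memory 8192 --kubernetes-version v1.13.0
```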
```shell
helm repo add gitlab https://charts.gitlab.io/
@@ -220,8 +220,8 @@ covering most operating systems.
### Logging in
You can access the GitLab instance by visiting the domain specified; [https://gitlab.192.168.99.100.nip.io](https://gitlab.192.168.99.100.nip.io) is used in these examples. If you manually created the secret for the initial root password, you can use that to sign in as the `root` user. If not, GitLab automatically created a random password for the `root` user. It can be extracted with the following command (replace `<name>` with the name of the release, which is `gitlab` if you used the command above):
```shell
kubectl get secret <name>-gitlab-initial-root-password -ojsonpath='{.data.password}' | base64 --decode ; echo
```
@@ -30,14 +30,14 @@ The script reads various parameters from environment variables, or command line
The table below describes all variables.
| Variable          | Description                                       | Default value    |
|-------------------|---------------------------------------------------|------------------|
| `REGION`          | The region where your cluster lives               | `us-east-2`      |
| `CLUSTER_NAME`    | The name of the cluster                           | `gitlab-cluster` |
| `CLUSTER_VERSION` | The version of your EKS cluster                   | `1.11`           |
| `NUM_NODES`       | The number of nodes required                      | `2`              |
| `MACHINE_TYPE`    | The type of nodes to deploy                       | `m5.xlarge`      |
| `SERVICE_ACCOUNT` | The service account name to use for Helm/Tiller   | `tiller`         |
Run the script, passing in your desired parameters. It can also work with the
default parameters.
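For instance, the variables from the table can be exported before running the script; the values shown here are the documented defaults:

```shell
export REGION=us-east-2
export CLUSTER_NAME=gitlab-cluster
export CLUSTER_VERSION=1.11
export NUM_NODES=2
export MACHINE_TYPE=m5.xlarge
export SERVICE_ACCOUNT=tiller
```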
@@ -53,6 +53,7 @@ The script can also be used to clean up the created EKS resources:
```
### Manual cluster creation
For the most up to date instructions, follow Amazon's
[EKS getting started guide](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html).
@@ -17,9 +17,9 @@ installations on OpenShift:
1. Requirement of `anyuid` scc:
   - Different components of GitLab, like Sidekiq, Unicorn, etc., use UID 1000 to run services.
   - The PostgreSQL chart runs the service as the `root` user.
   - [Issue #752](https://gitlab.com/charts/gitlab/issues/752) is open to further investigate fixing this.
1. If using `hostpath` volumes, the persistent volume directories on the host need to
   be given `0777` permissions, to grant all users access to the volumes.
@@ -34,56 +34,56 @@ installations on OpenShift:
to install and configure a cluster.
1. Run `oc cluster status` and confirm the cluster is running:
   ```bash
   oc cluster status
   ```
   The output should be similar to:

   ```
   Web console URL: https://gitlab.example.com:8443/console/

   Config is at host directory
   Volumes are at host directory
   Persistent volumes are at host directory /home/okduser/openshift/openshift.local.clusterup/openshift.local.pv
   Data will be discarded when cluster is destroyed
   ```
   Note the location of Persistent Volumes in the host machine (in the above example
   `/home/okduser/openshift/openshift.local.clusterup/openshift.local.pv`).
   The following command expects that path in the `PV_HOST_DIRECTORY` environment variable.
1. Modify the permissions of the PV directories (replace the path in the following
   command with the value from above):
   ```bash
   sudo chmod -R a+rwx ${PV_HOST_DIRECTORY}/*
   ```
1. Switch to the system administrator user:
   ```bash
   oc login -u system:admin
   ```
1. Add the `anyuid` scc to your namespace's default user:
   ```bash
   oc project ${YOUR_NAMESPACE}
   oc adm policy add-scc-to-user anyuid -z default -n ${YOUR_NAMESPACE}
   oc adm policy add-scc-to-user anyuid -z gitlab-runner -n ${YOUR_NAMESPACE}
   ```
   CAUTION: **Warning**:
   This setting will be applied across the specified namespace and will result
   in Docker images that do not explicitly specify a user running as `root`.
1. Create the service account and `rolebinding` for RBAC and [install Tiller](../tools.md#helm):
   ```bash
   kubectl create -f https://gitlab.com/charts/gitlab/raw/master/doc/installation/examples/rbac-config.yaml
   helm init --service-account tiller
   ```
If you want to enable Git over SSH, you need to take further steps. These steps can be taken either before
or after installation. The reason is that OpenShift [Routers](https://docs.okd.io/3.11/architecture/networking/routes.html#routers)
@@ -95,8 +95,6 @@ Shell. You can use [Service with External IP](https://docs.openshift.com/contain
to get SSH traffic into the cluster, but it requires more advanced configuration on both OpenShift and the nodes.
For further details, see [OpenShift manual](https://docs.openshift.com/container-platform/3.11/dev_guide/expose_service/expose_internal_ip_service.html).
## Next Steps
Continue with the [installation of the chart](../deployment.md) once you have
@@ -109,9 +107,9 @@ installation procedure:
the nginx-ingress service that is included in the charts. Pass the following
flag to the `helm install` command:
   ```bash
   --set nginx-ingress.enabled=false
   ```
1. [Use your own SSL certificates](../tls.md#option-2-use-your-own-wildcard-certificate)
@@ -119,20 +117,20 @@ installation procedure:
Shell service (you can assign multiple external IPs if need be). Use the following command argument
to pass one or more external IPs, as an array:
   ```bash
   --set gitlab.gitlab-shell.service.externalIPs='{x.x.x.x}'
   ```
   ```bash
   --set gitlab.gitlab-shell.service.externalIPs='{x.x.x.x,y.y.y.y}'
   ```
You may have to use an alternative port if the SSH port is already in use on your node, and you may
have to use a different domain name as well. You can use the following for this purpose:
   ```bash
   --set global.shell.port=222
   --set global.hosts.ssh=ssh.gitlab.example.com
   ```
Check out the [OpenShift examples](https://gitlab.com/charts/gitlab/tree/master/examples/openshift).
@@ -2,15 +2,15 @@
## Prerequisites
- A deployment using the Omnibus GitLab package needs to be running. Run `gitlab-ctl status`
  and confirm no services report a `down` state.
- The `/etc/gitlab/gitlab-secrets.json` file from the package-based installation.
- A Helm chart-based deployment running the same GitLab version as the
  Omnibus GitLab package-based installation.
- An object storage service which the Helm chart-based deployment is configured to
use. For production use, we recommend you use an [external object storage] and
have the login credentials to access it ready. If you are using the built-in
Minio service, [read the docs](minio.md) on how to grab the login credentials
@@ -20,65 +20,72 @@
1. Migrate existing files (uploads, artifacts, LFS objects) from the package-based
   installation to object storage.
1. Modify the `/etc/gitlab/gitlab.rb` file and configure object storage for
   [uploads](https://docs.gitlab.com/ee/administration/uploads.html#s3-compatible-connection-settings),
   [artifacts](https://docs.gitlab.com/ee/administration/job_artifacts.html#s3-compatible-connection-settings)
   and [LFS](https://docs.gitlab.com/ee/workflow/lfs/lfs_administration.html#s3-for-omnibus-installations).

   **Note:** This **must** be the same object storage service that the
   Helm chart-based deployment is connected to.

1. Run reconfigure to apply the changes:

   ```sh
   sudo gitlab-ctl reconfigure
   ```

1. Migrate existing artifacts to object storage:

   ```sh
   sudo gitlab-rake gitlab:artifacts:migrate
   sudo gitlab-rake gitlab:traces:migrate
   ```

1. Migrate existing LFS objects to object storage:

   ```sh
   sudo gitlab-rake gitlab:lfs:migrate
   ```

1. Migrate existing uploads to object storage:

   ```sh
   sudo gitlab-rake gitlab:uploads:migrate:all
   ```

   Docs: <https://docs.gitlab.com/ee/administration/raketasks/uploads/migrate.html#migrate-to-object-storage>

1. Visit the Omnibus GitLab package-based GitLab instance and make sure the
   uploads are available. For example, check that user, group, and project
   avatars are rendered fine, and that images and other files added to issues load
   correctly.

1. Move the uploaded files from their current location so that
   they won't end up in the backup tarball. The default locations are:

   - uploads: `/var/opt/gitlab/gitlab-rails/uploads/`
   - LFS: `/var/opt/gitlab/gitlab-rails/shared/lfs-objects`
   - artifacts: `/var/opt/gitlab/gitlab-rails/shared/artifacts`

   ```sh
   sudo mv /var/opt/gitlab/gitlab-rails/uploads{,.bak}
   sudo mv /var/opt/gitlab/gitlab-rails/shared/lfs-objects{,.bak}
   sudo mv /var/opt/gitlab/gitlab-rails/shared/artifacts{,.bak}
   ```

1. Run reconfigure to recreate empty directories in place, so the backup task
   won't fail:

   ```sh
   sudo gitlab-ctl reconfigure
   ```
1. [Create a backup tarball](https://docs.gitlab.com/ee/raketasks/backup_restore.html#creating-a-backup-of-the-gitlab-system):

   ```sh
   sudo gitlab-rake gitlab:backup:create
   ```

   The backup file will be stored in the `/var/opt/gitlab/backups` directory, unless
   [explicitly changed](https://docs.gitlab.com/omnibus/settings/backups.html#manually-manage-backup-directory)
   in `gitlab.rb`.
@@ -89,12 +96,13 @@
on how to restore the secrets from the package-based installation.
1. Restart all pods to make sure changes are applied
   ```
   kubectl delete pods -lrelease=<helm release name>
   ```
1. Visit the Helm chart-based deployment and confirm that the projects, groups, users, issues,
   etc. that existed in the Omnibus GitLab package-based installation are restored.
   Also verify that the uploaded files (avatars, files uploaded to issues, etc.)
   are loaded fine.
# [Migrating from an Omnibus GitLab package-based installation](index.md)
## Using built-in Minio service for object storage
@@ -10,38 +10,41 @@ look at the `gitlab.yml` file that is generated in sidekiq, unicorn and
task-runner pods. Follow these steps to grab it from the Sidekiq pod:
1. Find out the name of the Sidekiq pod:
   ```bash
   kubectl get pods -lapp=sidekiq
   ```
1. Grab the `gitlab.yml` file from the Sidekiq pod:
   ```bash
   kubectl exec <sidekiq pod name> -- cat /srv/gitlab/config/gitlab.yml
   ```

1. In the `gitlab.yml` file, there will be a section for uploads with details of the
   object storage connection, similar to the following:
   ```yaml
   uploads:
     enabled: true
     object_store:
       enabled: true
       remote_directory: gitlab-uploads
       direct_upload: true
       background_upload: false
       proxy_download: true
       connection:
         provider: AWS
         region: <S3 region>
         aws_access_key_id: "<access key>"
         aws_secret_access_key: "<secret access key>"
         host: <Minio host>
         endpoint: <Minio endpoint>
         path_style: true
   ```
1. Use this information to configure object storage in the `/etc/gitlab/gitlab.rb`
   file of the Omnibus GitLab package-based deployment, as detailed in the [docs](https://docs.gitlab.com/ee/administration/uploads.html#s3-compatible-connection-settings).
**Note:** For connecting to the Minio service from outside the cluster, the
Minio host URL alone is enough. Helm charts based installations are