Commit c3179820 authored by Achilleas Pipinellis 🚀, committed by Achilleas Pipinellis

Clean up the tools, cloud and preparation sections

Moved:

- doc/cloud -> doc/installation/cloud
- doc/helm/index.md -> doc/installation/tools.md
- doc/installation/version-mappings.md -> doc/index.md

Refactored:

- the main index page
- the installation page
- the tools page
- the installation/cloud/ pages
parent 0e7b4b83
......@@ -58,6 +58,7 @@ Customers who would like to get started quickly and easily should begin with thi
The gitlab chart is made of multiple subcharts. These charts provide individual components of the GitLab software.
Subcharts included are:
* [sidekiq](https://gitlab.com/charts/gitlab/tree/master/charts/gitlab/charts/sidekiq)
* [unicorn](https://gitlab.com/charts/gitlab/tree/master/charts/gitlab/charts/unicorn)
* [gitlab-shell](https://gitlab.com/charts/gitlab/tree/master/charts/gitlab/charts/gitlab-shell)
......
# Preparing EKS resources
For a fully functional GitLab instance, you will need a few resources before deployment of this chart.
1. An [EKS cluster](#creating-the-eks-cluster)
1. [Persistent volume settings](#persistent-volume-management)
1. [TLS certificates](#external-access-to-gitlab)
## Creating the EKS cluster
For the most up to date instructions, follow Amazon's [EKS getting started guide](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html).
Administrators may also want to consider the [new AWS Service Operator for Kubernetes](https://aws.amazon.com/blogs/opensource/aws-service-operator-kubernetes-available/)
to simplify this process.
> **Note:**
>
> Enabling the AWS Service Operator requires a method of managing roles within the cluster. The initial
> services handling that management task are provided by third party developers. Administrators should
> keep that in mind when planning for deployment.
## Persistent Volume Management
There are two methods to manage volume claims on Kubernetes:
1. Manually create a persistent volume
1. Automatic persistent volume creation through dynamic provisioning
Learn more in the [cluster storage](../installation/storage.md) documentation.
> **Special Consideration:**
>
> We currently recommend using manual provisioning of persistent volumes. Amazon EKS
> clusters default to spanning multiple zones. Dynamic provisioning, if not configured
> to use a storage class locked to a particular zone, leads to a scenario where pods may
> exist in a different zone from storage volumes and be unable to access data.
>
> Administrators who need to deploy in multiple zones should familiarize themselves
> with [how to set up cluster storage](../installation/storage.md) and review
> [Amazon's own documentation on storage classes](https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html)
> when defining their storage solution.
## External Access to GitLab
By default, GitLab will deploy an ingress which will create an associated Elastic Load Balancer. Since the DNS names of ELBs cannot be known ahead of time, it is difficult to utilize Let's Encrypt to automatically provision HTTPS certificates.
We recommend [using your own certificates](../installation/tls.md#option-2-use-your-own-wildcard-certificate), and then mapping your desired DNS name to the created ELB using a CNAME record.
> **NOTE:**
>
> For environments where internal load balancers are required,
> [Amazon's Elastic Load Balancers](https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html)
> require [special annotations](https://gitlab.com/charts/gitlab/blob/master/examples/eks_loadbalancer_annotations.yml).
# Next Steps
Continue with the [installation of the chart](../installation/index.md) once you have the cluster up and running, and have the static IP and DNS entry ready.
# Installing on Cloud based providers
* [Amazon EKS](eks.md)
* [Google Kubernetes Engine](gke.md)
* [OpenShift Origin](openshift.md)
# Installing GitLab on OKD (OpenShift Origin)
This document describes a basic outline of how to get GitLab up and running on an OKD instance using the official Helm charts.
> * **Note:** This guide has been tested only on OpenShift Origin 3.11.0. It is not guaranteed to work on other versions, or the SaaS offering of OpenShift, OpenShift Online.
> * If you face any problems installing or configuring GitLab by following this guide, please open an issue at our [issue tracker](https://gitlab.com/charts/gitlab/issues).
> * Feedback and Merge Requests to improve this document are welcome.
## Known issues
The following issues are known and expected to be applicable to GitLab installations on OpenShift
1. Requirement of `anyuid` scc.
* Different components of GitLab, like Sidekiq, unicorn, etc., use UID 1000 to run services.
* The PostgreSQL chart runs the service as the `root` user.
* #752 is open to investigate more on fixing this.
1. If using hostpath volumes, the persistent volume directories on the host need to be given `0777` permissions to grant all users access to the volumes.
1. Git operations over SSH are not supported by OpenShift's built-in router. #892 is open to investigate more on fixing this.
1. GitLab Registry is known not to work with OpenShift's built-in router. #893 is open to investigate more on fixing this.
1. Automatic issuing of SSL certificates from Let's Encrypt will not work with OpenShift router. We suggest [using your own certificates](../installation/tls.md#option-2-use-your-own-wildcard-certificate). #894 is open to investigate more on fixing this.
## Prerequisite steps
1. Refer to [official documentation](https://www.okd.io/download.html#oc-platforms), and install and configure a cluster.
1. Run `oc cluster status` and confirm the cluster is running.
```bash
$ oc cluster status
Web console URL: https://gitlab.example.com:8443/console/
Config is at host directory
Volumes are at host directory
Persistent volumes are at host directory /home/okduser/openshift/openshift.local.clusterup/openshift.local.pv
Data will be discarded when cluster is destroyed
```
Note the location of Persistent Volumes in the host machine (in the above example, that is `/home/okduser/openshift/openshift.local.clusterup/openshift.local.pv`).
The following command expects that path in the `PV_HOST_DIRECTORY` environment variable.
1. Modify permissions of PV directories (replace the path in the following command by the value from above)
```bash
sudo chmod -R a+rwx ${PV_HOST_DIRECTORY}/*
```
1. Switch to system administrator user
```bash
oc login -u system:admin
```
1. Add `anyuid` scc to the system user
```bash
oc adm policy add-scc-to-group anyuid system:authenticated
```
**`Warning`**: This setting will be applied across all namespaces and will result in Docker images that do not explicitly specify a USER running as the `root` user.
#895 is open to document different service accounts required and to describe adding scc to those service accounts only, so the impact can be limited.
1. [Create service account and rolebinding for RBAC and install Tiller](../helm/index.md)
```bash
kubectl create -f https://gitlab.com/charts/gitlab/raw/master/doc/helm/examples/rbac-config.yaml
helm init --service-account tiller
```
# Next Steps
1. Take note of the following changes from the normal chart installation procedure
1. We will be using OpenShift's built-in router, and hence need to disable the nginx-ingress service that is included in the charts. For that, pass the following flag to the `helm install` command
```bash
--set nginx-ingress.enabled=false
```
1. Since the built-in Registry is known not to work with OpenShift using the Helm charts, disable the registry service. For that, pass the following flag to the `helm install` command
```sh
--set registry.enabled=false
```
1. [Use your own SSL certificates](../installation/tls.md#option-2-use-your-own-wildcard-certificate)
1. Continue with the [installation of the chart](../installation/index.md)
......@@ -29,7 +29,7 @@ We will bump it for:
### Example release scenarios:
|Chart Version|GitLab Version|Release Scenario|
|-------------|--------------|----------------|
|`0.2.0`|`11.0.0`| GitLab 11 release, and Chart beta |
|`0.2.1`|`11.0.1`| GitLab patch release |
|`0.2.2`|`11.0.1`| Chart changes released |
......@@ -46,7 +46,6 @@ We will bump it for:
<sup>1</sup> If we have two chart versions that would both need to be upgraded to the same image version for a security release, we will just update the newer one; otherwise, automating release logic will be overly complicated. Users can work around this if needed by manually specifying the image version, or upgrading their chart.
### Future iteration
While we considered just using the GitLab version as our own, we are not yet in lockstep with GitLab releases to the point where we would make a breaking change in the chart and require GitLab to bump its version number to 12, for instance. For now we will move forward with a chart-specific version scheme, until the charts are stable enough that we are comfortable sharing the same version, and a chart update is a reasonable reason to bump GitLab's core version.
......
# Helm
This document is intended to provide an overview of working with [Helm][helm] for [Kubernetes][k8s-io].
## Supported versions
This chart is currently only tested and supported with Helm `v2`.
Helm `v1` is explicitly not supported. Helm `v3` may work, but it has not been tested and will not be for the time being.
## Helm is not stand-alone
To make use of Helm, you must have a [Kubernetes][k8s-io] cluster. Follow the
[dependencies documentation](../installation/tools.md) to ensure you can access
your cluster using `kubectl`.
Helm consists of two parts, the `helm` (client) and `tiller` (server) inside
Kubernetes.
NOTE: **Note**:
If you are not able to run tiller in your cluster, for example on OpenShift,
it's possible to use [tiller locally](#local-tiller) and avoid deploying it
into the cluster. This should only be used when tiller cannot be normally deployed.
# Getting Helm
NOTE: **Note**:
We support using Helm versions in the 2.x line with 2.9.0 being our minimum
supported version.
You can get Helm from the project's [releases page](https://github.com/kubernetes/helm/releases),
or follow other options under the official documentation of
[Installing Helm](https://docs.helm.sh/using_helm/#installing-helm).
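As a rough sketch of one install path (the `get` script below is the upstream convenience installer; verify it against the official instructions linked above):
```sh
# Fetch and run Helm's upstream installer script, then confirm the
# client version is in the supported range (>= 2.9.0):
curl -fsSL https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm version --client
```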
# Initialize Helm and Tiller
Tiller is deployed into the cluster and interacts with the Kubernetes API to
deploy your applications. If role based access control (RBAC) is enabled, Tiller
will need to be [granted permissions](#preparing-for-helm-with-rbac) to allow it
to talk to the Kubernetes API.
If RBAC is not enabled, skip to [initializing Helm](#initialize-helm).
If you are not sure whether RBAC is enabled in your cluster, or to learn more,
read through our [RBAC documentation](../installation/rbac.md).
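For a quick, non-authoritative signal, you can check whether the cluster serves the RBAC API group at all:
```sh
# If this prints rbac.authorization.k8s.io/v1 (or similar), RBAC is most
# likely enabled on the cluster:
kubectl api-versions | grep rbac.authorization.k8s.io
```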
## Preparing for Helm with RBAC
NOTE: **Note**:
Ensure you have `kubectl` installed and it's up to date. Older versions do not
have support for RBAC and will generate errors.
Helm's Tiller will need to be granted permissions to perform operations. These
instructions grant cluster wide permissions, however for more advanced deployments
[permissions can be restricted to a single namespace](https://docs.helm.sh/using_helm/#example-deploy-tiller-in-a-namespace-restricted-to-deploying-resources-only-in-that-namespace).
To grant access to the cluster, we will create a new `tiller` service account
and bind it to the `cluster-admin` role:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```
For ease of use, these instructions will utilize the
[sample YAML file](examples/rbac-config.yaml) in this repository. To apply the
configuration, we first need to connect to the cluster.
You can either use:
- [GKE cluster](#connect-to-gke-cluster)
- [EKS cluster](#connect-to-eks-cluster)
- [Local minikube cluster](#connect-to-local-minikube-cluster)
### Connect to GKE cluster
The command for connecting to the cluster can be obtained from the [Google Cloud Platform Console][gcp-k8s]
for each individual cluster.
Look for the **Connect** button in the clusters list page.
**Or**
Use the command below, filling in your cluster's information:
```sh
gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
```
### Connect to EKS cluster
For the most up to date instructions, follow the Amazon EKS documentation on
[connecting to a cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-configure-kubectl).
### Connect to local minikube cluster
If you are doing local development, you can use `minikube` as your
local cluster. If `kubectl cluster-info` is not showing `minikube` as the current
cluster, use `kubectl config use-context minikube` to switch to it.
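For example:
```sh
# Show the context kubectl currently points at, and switch to minikube
# if it is not the active one:
kubectl config current-context
kubectl config use-context minikube
```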
### Upload the RBAC config
#### Upload the RBAC config as an admin user (GKE)
For GKE, you need to grab the admin credentials:
```sh
gcloud container clusters describe <cluster-name> --zone <zone> --project <project-id> --format='value(masterAuth.password)'
```
This command will output the admin password. We need the password to authenticate with `kubectl` and create the role.
We will also create an admin user for this cluster. Use a name you prefer but
for this example we will include the cluster's name in it.
```sh
CLUSTER_NAME=name-of-cluster
kubectl config set-credentials $CLUSTER_NAME-admin-user --username=admin --password=xxxxxxxxxxxxxx
kubectl --user=$CLUSTER_NAME-admin-user create -f https://gitlab.com/charts/gitlab/raw/master/doc/helm/examples/rbac-config.yaml
```
#### Upload the RBAC config (other clusters)
For other clusters like Amazon EKS, you can directly upload the RBAC configuration:
```sh
kubectl create -f https://gitlab.com/charts/gitlab/raw/master/doc/helm/examples/rbac-config.yaml
```
## Initialize Helm
Deploy Helm Tiller with a service account:
```sh
helm init --service-account tiller
```
If your cluster
previously had Helm/Tiller installed, run the following to ensure that the deployed version of Tiller matches the local Helm version:
```sh
helm init --upgrade --service-account tiller
```
# Additional Information
The Distribution Team has a [training presentation for Helm Charts](https://docs.google.com/presentation/d/1CStgh5lbS-xOdKdi3P8N9twaw7ClkvyqFN3oZrM1SNw/present).
## Templates
Templating in Helm is done via golang's [text/template][] and [sprig][].
Some information on how all the inner workings behave (a local rendering example follows these links):
- [Functions and Pipelines][helm-func-pipeline]
- [Subcharts and Globals][helm-subchart-global]
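To experiment with how templates render, the Helm 2 client can render a chart locally without installing it. A minimal sketch, assuming a local checkout of the chart; the chart path, release name, and domain are placeholders:
```sh
# Render the chart's templates to stdout without deploying anything:
helm template ./gitlab --name gitlab --set global.hosts.domain=example.com
```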
## Tips and Tricks
The Helm repository has some additional information on developing with Helm in its
[tips and tricks section](https://github.com/kubernetes/helm/blob/master/docs/charts_tips_and_tricks.md).
[helm]: https://helm.sh
[helm-using]: https://docs.helm.sh/using_helm
[k8s-io]: https://kubernetes.io/
[gcp-k8s]: https://console.cloud.google.com/kubernetes/list
[text/template]: https://golang.org/pkg/text/template/
[sprig]: https://godoc.org/github.com/Masterminds/sprig
[helm-func-pipeline]: https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/functions_and_pipelines.md
[helm-subchart-global]: https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md
## Local tiller
_This is not recommended_
If you are not able to run Tiller in your cluster, this chart includes a script
that should allow you to use Helm without running Tiller in your cluster. The
script uses your personal Kubernetes credentials and configuration to apply
the chart. This method is not well supported, but should work.
To use the script, skip this entire section about initializing helm. Instead,
make sure you have Docker installed locally and run
`bin/localtiller-helm --client-only`. After that, you can substitute
`bin/localtiller-helm` anywhere these instructions direct you to run `helm`.
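A typical session might look like the following (the release name and chart reference are illustrative, not prescribed by the script):
```sh
# One-time client-only initialization via the wrapper script:
bin/localtiller-helm --client-only
# Afterwards, use the wrapper anywhere the docs say `helm`:
bin/localtiller-helm upgrade --install gitlab gitlab/gitlab
```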
......@@ -27,15 +27,47 @@ Some features of GitLab are not currently available using the Helm chart:
- [No in-cluster HA database](https://gitlab.com/charts/gitlab/issues/48)
- MySQL will not be supported, as support is [deprecated within GitLab](https://docs.gitlab.com/omnibus/settings/database.html#using-a-mysql-database-management-server-enterprise-edition-only)
## Requirements
## GitLab version mappings
In order to deploy GitLab on Kubernetes, the following are required:
The table below maps some of the key previous chart versions and GitLab versions.
1. `helm` and `kubectl` [installed on your computer](installation/tools.md).
1. A Kubernetes cluster, version 1.8 or higher. 6vCPU and 16GB of RAM is recommended.
- [Google GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster)
- [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
- [Microsoft AKS](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal)
| Chart version | GitLab version |
|---------------|----------------|
| 1.5.0 | 11.7.0 |
| 1.4.0 | 11.6.0 |
| 1.3.0 | 11.5.0 |
| 1.2.0 | 11.4.0 |
| 1.1.0 | 11.3.0 |
| 1.0.0 | 11.2.0 |
| 0.3.5 | 11.1.4 |
| 0.2.4 | 11.0.4 |
To see the full list of the `gitlab` chart versions and the GitLab version they
map to, issue the following command with [helm](installation/tools.md#helm):
```sh
helm repo add gitlab https://charts.gitlab.io/
helm search -l gitlab/gitlab
```
You will receive an output similar to:
```
NAME CHART VERSION APP VERSION
gitlab/gitlab 1.5.0 11.7.0
gitlab/gitlab 1.4.4 11.6.5
gitlab/gitlab 1.4.3 11.6.4
gitlab/gitlab 1.4.2 11.6.3
gitlab/gitlab 1.4.1 11.6.2
```
Read more about our [charts versioning](development/release.md) in our
development docs.
Make sure to also check the [releases documentation](releases/index.md) for
information on important releases, and see the
[changelog](https://gitlab.com/charts/gitlab/blob/master/CHANGELOG.md) for the
full details on any release.
## Installing GitLab using the Helm Chart
......@@ -51,6 +83,7 @@ Once your GitLab Chart is installed, configuration changes and chart updates
should be done using `helm upgrade`:
```sh
helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm upgrade --reuse-values gitlab gitlab/gitlab
```
......@@ -65,6 +98,11 @@ To uninstall the GitLab Chart, run the following:
helm delete gitlab
```
## Migrate from Omnibus GitLab to Kubernetes
To migrate your existing Omnibus GitLab instance to your Kubernetes cluster,
follow the [migration documentation](installation/migration/index.md).
## Advanced configuration
See [Advanced Configuration](advanced/index.md).
......@@ -76,6 +114,3 @@ See [Troubleshooting](troubleshooting/index.md).
## Misc
[Weekly demos preparation](preparation/index.md)
[kube-srv]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
[storageclass]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storageclasses
# Preparing EKS resources
For a fully functional GitLab instance, you will need a few resources before
deploying the `gitlab` chart.
## Creating the EKS cluster
For the most up to date instructions, follow Amazon's
[EKS getting started guide](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html).
Administrators may also want to consider the
[new AWS Service Operator for Kubernetes](https://aws.amazon.com/blogs/opensource/aws-service-operator-kubernetes-available/)
to simplify this process.
NOTE: **Note:**
Enabling the AWS Service Operator requires a method of managing roles within the cluster. The initial
services handling that management task are provided by third party developers. Administrators should
keep that in mind when planning for deployment.
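As a rough illustration of what cluster creation involves, a minimal cluster could also be stood up with `eksctl`, a third-party CLI that is not covered by the guide above (the cluster name, region, and node count below are placeholders):
```sh
# Sketch only: create a small EKS cluster with default settings.
eksctl create cluster --name gitlab-cluster --region us-east-1 --nodes 3
```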
## Persistent Volume Management
There are two methods to manage volume claims on Kubernetes:
- Manually create a persistent volume.
- Automatic persistent volume creation through dynamic provisioning.
We currently recommend using manual provisioning of persistent volumes. Amazon EKS
clusters default to spanning multiple zones. Dynamic provisioning, if not configured
to use a storage class locked to a particular zone, leads to a scenario where pods may
exist in a different zone from storage volumes and be unable to access data.
Administrators who need to deploy in multiple zones should familiarize themselves
with [how to set up cluster storage](../storage.md) and review
[Amazon's own documentation on storage classes](https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html)
when defining their storage solution.
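If dynamic provisioning must be used, one way to avoid the zone mismatch described above is a storage class pinned to a single zone. A minimal sketch, assuming the in-tree EBS provisioner; the class name and zone are placeholders to verify against Amazon's documentation:
```sh
# Create a StorageClass locked to one availability zone so volumes are
# provisioned where the pods run (name and zone are placeholders):
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-us-east-1a
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1a
EOF
```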
## External Access to GitLab
By default, GitLab will deploy an ingress which will create an associated
Elastic Load Balancer (ELB). Since the DNS names of the ELB cannot be known
ahead of time, it's difficult to utilize Let's Encrypt to automatically provision
HTTPS certificates.
We recommend [using your own certificates](../tls.md#option-2-use-your-own-wildcard-certificate),
and then mapping your desired DNS name to the created ELB using a CNAME record.
NOTE: **Note:**
For environments where internal load balancers are required,
[Amazon's Elastic Load Balancers](https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html)
require [special annotations](https://gitlab.com/charts/gitlab/blob/master/examples/eks_loadbalancer_annotations.yml).
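To find the generated ELB hostname to point your CNAME record at, something along these lines can help (the Service and namespace names are placeholders that depend on your release):
```sh
# The chart's ingress controller Service is exposed through the ELB; its
# generated hostname appears as the load balancer address:
kubectl get service gitlab-nginx-ingress-controller -n gitlab \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```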
## Next Steps
Continue with the [installation of the chart](../deployment.md) once you have
the cluster up and running, and the static IP and DNS entry ready.
# Preparing GKE resources
For a fully functional GitLab instance, you will need to create a few resources before deployment of this chart.
1. A [GKE cluster](#creating-the-gke-cluster) with an associated external IP
For a fully functional GitLab instance, you will need a few resources before
deploying the `gitlab` chart.
## Creating the GKE cluster
To make getting started easier, we have provided a script to [automate cluster creation](#scripted-cluster-creation-on-gke). Alternatively, a cluster can be [created manually](#manual-cluster-creation) as well.
To make getting started easier, a script is provided to automate the cluster
creation. Alternatively, a cluster can be created manually as well.
### Scripted cluster creation
We have created a [bootstrap script](https://gitlab.com/charts/gitlab/blob/master/scripts/gke_bootstrap_script.sh) to automate much of the setup process for users on GCP/GKE. It will:
A [bootstrap script](https://gitlab.com/charts/gitlab/blob/master/scripts/gke_bootstrap_script.sh)
has been created to automate much of the setup process for users on GCP/GKE.
The script will:
* Create a new GKE cluster
* Allow the cluster to modify DNS records
* Setup kubectl, and connect it to the cluster
* Initialize Helm and install Tiller
1. Create a new GKE cluster.
1. Allow the cluster to modify DNS records.
1. Setup `kubectl`, and connect it to the cluster.
1. Initialize Helm and install Tiller.
Google Cloud SDK is a dependency of this script, you will have to make sure it is [set up correctly](../helm/index.md#connect-to-the-cluster) in order for the script to work.
Google Cloud SDK is a dependency of this script, so make sure it's
[set up correctly](../tools.md#connect-to-the-cluster) in order for the script
to work.
The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean up respectively.
The script reads various parameters from environment variables and an argument
`up` or `down` for bootstrap and clean up respectively.
The table below describes all variables.
......@@ -28,7 +34,7 @@ The table below describes all variables.
| REGION | The region where your cluster lives | us-central1 |
| ZONE | The zone where your cluster instances lives | us-central1-a |
| CLUSTER_NAME | The name of the cluster | gitlab-cluster |
| CLUSTER_VERSION | The version of your GKE cluster | GKE default, check [GKE release notes][]. |
| CLUSTER_VERSION | The version of your GKE cluster | GKE default, check the [GKE release notes](https://cloud.google.com/kubernetes-engine/release-notes) |
| MACHINE_TYPE | The cluster instances' type | n1-standard-4 |
| NUM_NODES | The number of nodes required. | 2 |
| PROJECT | the id of your GCP project | No defaults, required to be set. |
......@@ -36,15 +42,14 @@ The table below describes all variables.
| PREEMPTIBLE | Cheaper, clusters live at *most* 24 hrs. No SLA on nodes/disks | false |
| USE_STATIC_IP | Create a static IP for Gitlab instead of an ephemeral IP with managed DNS | false |
[GKE release notes]: https://cloud.google.com/kubernetes-engine/release-notes
Run the script, passing in your desired parameters. (The script can work with default parameters except for `PROJECT` which is required.)
Run the script, passing in your desired parameters. It can work with the
default parameters except for `PROJECT`, which is required:
```bash
PROJECT=<gcloud project id> ./scripts/gke_bootstrap_script.sh up
```
The script can also be used to clean up the created GKE resources by running
The script can also be used to clean up the created GKE resources:
```bash
PROJECT=<gcloud project id> ./scripts/gke_bootstrap_script.sh down
......@@ -58,22 +63,23 @@ Two resources need to be created in GCP, a Kubernetes cluster and an external IP
#### Creating the Kubernetes cluster
To provision the Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster).
To provision the Kubernetes cluster manually, follow the
[GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster).
* We recommend a cluster with 8vCPU and 30gb of RAM.
* Make a note of the cluster's region, it will be needed in the next step.
- We recommend a cluster with 8vCPU and 30GB of RAM.
- Make a note of the cluster's region, it will be needed in the following step.
#### Creating the external IP
An external IP is required so that your cluster can be reachable. The external IP needs to be regional and in the same region as the cluster itself.
> A global IP or an IP outside the cluster's region will not work.
An external IP is required so that your cluster can be reachable. The external
IP needs to be regional and in the same region as the cluster itself. A global
IP or an IP outside the cluster's region will **not work**.
To create a static IP run the following gcloud command:
To create a static IP run:
`gcloud compute addresses create ${CLUSTER_NAME}-external-ip --region $REGION --project $PROJECT`
To get the address of the newly created IP run the following gcloud command:
To get the address of the newly created IP:
`gcloud compute addresses describe ${CLUSTER_NAME}-external-ip --region $REGION --project $PROJECT --format='value(address)'`
......@@ -82,10 +88,12 @@ We will use this IP to bind with a DNS name in the next section.
## DNS Entry
If you created your cluster manually or used the `USE_STATIC_IP` option with the scripted creation,
you'll need a public domain with an `A record` wild card DNS entry pointing to the IP we just created.
you'll need a public domain with a wildcard A record DNS entry pointing to the IP we just created.
Follow [This](https://cloud.google.com/dns/quickstart) to create the DNS entry.
Follow the [Google DNS quickstart guide](https://cloud.google.com/dns/quickstart)
to create the DNS entry.
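If the domain is managed with Google Cloud DNS, the wildcard record could be created along these lines (the zone name, domain, and IP address are placeholders):
```sh
# Sketch: point a wildcard A record at the static IP from the previous
# section (all values below are placeholders):
gcloud dns record-sets transaction start --zone=gitlab-zone
gcloud dns record-sets transaction add --zone=gitlab-zone \
  --name="*.example.com." --ttl=300 --type=A "203.0.113.10"
gcloud dns record-sets transaction execute --zone=gitlab-zone
```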
# Next Steps
## Next Steps
Continue with the [installation of the chart](../installation/index.md) once you have the cluster up and running, and have the static IP and DNS entry ready.
Continue with the [installation of the chart](../deployment.md) once you have
the cluster up and running, and the static IP and DNS entry ready.
# Installing on Cloud based providers
- [Amazon EKS](eks.md)
- [Google Kubernetes Engine](gke.md)
- [OpenShift Origin](openshift.md)
# Installing GitLab on OKD (OpenShift Origin)
This document describes a basic outline of how to get GitLab up and running on
an OKD instance using the official Helm charts.
NOTE: **Note:**
This guide has been tested only on OpenShift Origin 3.11.0 and is not guaranteed
to work on other versions, or the SaaS offering of OpenShift, OpenShift Online.
If you face any problems in installing or configuring GitLab by following this
guide, open issues at our [issue tracker](https://gitlab.com/charts/gitlab/issues).
Feedback and Merge Requests to improve this document are welcome.
## Known issues
The following issues are known and expected to be applicable to GitLab
installations on OpenShift:
1. Requirement of `anyuid` scc:
- Different components of GitLab, like Sidekiq, unicorn, etc., use UID 1000 to run services.
- The PostgreSQL chart runs the service as the `root` user.
- [Issue #752](https://gitlab.com/charts/gitlab/issues/752) is open to investigate more on fixing this.
1. If using `hostpath` volumes, the persistent volume directories on the host need to
be given `0777` permissions, granting all users access to the volumes.
1. Git operations over SSH are not supported by OpenShift's built-in router.
[Issue #892](https://gitlab.com/charts/gitlab/issues/892) is open to
investigate more on fixing this.
1. GitLab Registry is known not to work with OpenShift's built-in router.
[Issue #893](https://gitlab.com/charts/gitlab/issues/893) is open to
investigate more on fixing this.
1. Automatic issuing of SSL certificates from Let's Encrypt will not work with
OpenShift router. We suggest [using your own certificates](../tls.md#option-2-use-your-own-wildcard-certificate).
[Issue #894](https://gitlab.com/charts/gitlab/issues/894) is open to
investigate more on fixing this.
## Prerequisite steps
1. Refer to [official documentation](https://www.okd.io/download.html#oc-platforms)
to install and configure a cluster.
1. Run `oc cluster status` and confirm the cluster is running:
```bash
oc cluster status
```
The output should be similar to:
```
Web console URL: https://gitlab.example.com:8443/console/
Config is at host directory
Volumes are at host directory
Persistent volumes are at host directory /home/okduser/openshift/openshift.local.clusterup/openshift.local.pv
Data will be discarded when cluster is destroyed
```
Note the location of Persistent Volumes in the host machine (in the above example
`/home/okduser/openshift/openshift.local.clusterup/openshift.local.pv`).
The following command expects that path in the `PV_HOST_DIRECTORY` environment variable.
1. Modify the permissions of PV directories (replace the path in the following
command by the value from above):
```bash
sudo chmod -R a+rwx ${PV_HOST_DIRECTORY}/*
```
1. Switch to the system administrator user:
```bash
oc login -u system:admin
```
1. Add `anyuid` scc to the system user:
```bash
oc adm policy add-scc-to-group anyuid system:authenticated
```
CAUTION: **Warning**:
This setting will be applied across all namespaces and will result in Docker
images that do not explicitly specify a USER running as the `root` user.
[Issue #895](https://gitlab.com/charts/gitlab/issues/895) is open to
document different service accounts required and to describe adding scc to
those service accounts only, so the impact can be limited.
1. Create the service account and `rolebinding` for RBAC and [install Tiller](../tools.md#helm):
```bash
kubectl create -f https://gitlab.com/charts/gitlab/raw/master/doc/helm/examples/rbac-config.yaml
helm init --service-account tiller
```
## Next Steps
Continue with the [installation of the chart](../deployment.md) once you have
the cluster up and running, and the static IP and DNS entry ready.
Before doing so take note of the following changes from the normal chart
installation procedure:
1. We will be using OpenShift's built-in router, and hence need to disable
the nginx-ingress service that is included in the charts. Pass the following
flag to the `helm install` command:
```bash
--set nginx-ingress.enabled=false
```
1. Since the built-in Registry is known not to work with OpenShift using the Helm
charts, disable the registry service. Pass the following flag to the
`helm install` command (a combined sketch follows this list):
```sh
--set registry.enabled=false
```
1. [Use your own SSL certificates](../tls.md#option-2-use-your-own-wildcard-certificate)
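Putting the flags together, a combined install might look like the following (the release name and domain are illustrative, not prescribed by this guide):
```sh
# Sketch: install the chart using OpenShift's router and without the
# built-in Registry (domain and release name are placeholders):
helm upgrade --install gitlab gitlab/gitlab \
  --set global.hosts.domain=example.com \
  --set nginx-ingress.enabled=false \
  --set registry.enabled=false
```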
......@@ -40,7 +40,7 @@ static IP. For example if you choose `example.com` and you have a static IP
of `10.10.10.10`, then `gitlab.example.com`, `registry.example.com` and
`minio.example.com` (if using minio) should all resolve to `10.10.10.10`.
If you are using GKE, there is some documentation [here](../cloud/gke.md#creating-the-external-ip)