Rashmi Pati - Accenture - 05-02-2025
<Your Name> - <Your Company> - <Date>
Deploying GitLab Reference Architecture on AWS with GET
IMPORTANT: This workshop assumes that you are using GET 2.0.1 or newer.
Workshop pre-work
This should be completed before the SKO breakout session
If you are a GitLab team member, be sure to use GitLab Sandbox Cloud to create an AWS account for yourself.
GET uses Terraform to provision infrastructure and Ansible to manage the GitLab configuration. It is possible to use Terraform and Ansible installed locally; however, during this session, to avoid potential environmental issues, we are going to use the Toolkit's container image. To do so we need to prepare our environment by installing Docker and Git. Our suggestion is to deploy an EC2 instance in the region you have chosen for this workshop. The instance OS is only a suggestion, but the Docker installation steps may differ if you pick a different OS; check the official Docker documentation.
The instructions that follow are written for an amd64 Mac.
- Access the AWS Management Console
- Choose a region and add it to the next line (e.g. us-east-2):
  - AWS Region: us-east-2
- Create an AWS Access Key. Save the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY locally for later use.
- Choose a prefix (gitlab-YOURGITLABHANDLE-3k for example): gitlab-rashmi-3k
- Access EC2
  - Create a Key Pair
  - Allocate an Elastic IP and add it and the Allocation ID to the lines below:
    - Elastic IP: 18.117.58.131
    - Allocation ID: eipalloc-0a40c31aa29c999f1
  - Launch a c5.large instance with the Amazon Linux 2 AMI (HVM) 64-bit (x86), selecting the key pair created previously. Note: the costs of deploying GitLab in this workshop are the responsibility of the workshop student or the student's company.
  - Make sure you have a security group allowing SSH TCP 22 from 0.0.0.0/0
  - Access your instance by selecting its Instance ID from the Instances listing, then selecting Connect, and selecting Connect again under the EC2 Instance Connect tab
- Install Docker by running the following commands in the terminal you connected to:
  - sudo yum update -y
  - sudo yum install -y docker
  - sudo systemctl enable docker
  - sudo systemctl start docker
  - sudo usermod -aG docker ec2-user
- Run sudo yum install -y git
- On the same instance terminal run the next steps:
  - Run cd /home/ec2-user
  - Run git clone https://gitlab.com/gitlab-org/gitlab-environment-toolkit.git to clone GET
  - Run sudo docker pull registry.gitlab.com/gitlab-org/gitlab-environment-toolkit:latest to pull the Toolkit's image
    - If you get "Getting permission denied while trying to connect to the Docker daemon socket", restart the instance and ssh to it again; if the issue persists you can run sudo chmod 666 /var/run/docker.sock
  - Run cd gitlab-environment-toolkit to access the directory containing the keys folder
  - Run ssh-keygen -t rsa -b 2048 -f keys/id_rsa, leaving the passphrase empty, to complete the SSH key creation (these keys will be used on the GitLab instances)
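Before moving on to the workshop itself, a quick sanity check can save time later. The following is a minimal sketch (assuming the Amazon Linux 2 instance and paths used above) to confirm Docker, Git, the cloned toolkit, the pulled image, and the SSH key pair are all in place:

# Verify Docker is installed and the daemon is running
docker --version
sudo systemctl is-active docker

# Verify Git and the cloned toolkit
git --version
ls /home/ec2-user/gitlab-environment-toolkit

# Verify the Toolkit image was pulled
sudo docker image ls registry.gitlab.com/gitlab-org/gitlab-environment-toolkit

# Verify the SSH key pair created for the GitLab instances
ls -l /home/ec2-user/gitlab-environment-toolkit/keys/id_rsa /home/ec2-user/gitlab-environment-toolkit/keys/id_rsa.pub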
Workshop work
During the session
Terraform Configuration
- Configure GET to deploy the 3K reference architecture. Open a terminal, run cd /home/ec2-user/gitlab-environment-toolkit (cloned previously), then run the steps that follow:
  - Run mkdir terraform/environments/3k
  - Run cd terraform/environments/3k
  - Run touch variables.tf main.tf environment.tf
The directory structure should look like below:
gitlab-environment-toolkit
└── terraform
    └── environments
        └── 3k
            ├── variables.tf
            ├── main.tf
            └── environment.tf
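Equivalently, if you prefer a single command, a sketch like the following (run from the toolkit root, /home/ec2-user/gitlab-environment-toolkit) creates the same layout:

# Create the 3k Terraform environment directory and its three empty files
mkdir -p terraform/environments/3k \
  && touch terraform/environments/3k/{variables.tf,main.tf,environment.tf}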
Following the documentation we are going to edit the 3 files created in the previous step:
- In the variables.tf file, replace the variables accordingly:
  - Replace the prefix value
  - Replace the region value
  - Replace external_ip_allocation
  - Replace ssh_public_key_file with your key location (if the pre-work instructions were followed, the location is "../../../keys/id_rsa.pub")

Both the public key and the allocation ID were provisioned as part of the workshop pre-work (beginning of this tutorial), as was the prefix. This prefix variable is later going to be used in our Ansible configuration. Your variables.tf file should look like the following:
variable "prefix" {
default = "gitlab-afonseca-3k"
}
variable "region" {
default = "eu-central-1"
}
variable "ssh_public_key_file" {
default = "../../../keys/id_rsa.pub"
}
# This can be found in the Elastic IPs section
variable "external_ip_allocation" {
default = "eipalloc-08d875f994cb379bb"
}
We use local storage of the Terraform state for this workshop. Alternatively, you can store it in a previously provisioned bucket.
- The file main.tf should look like below:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
# Configure the AWS Provider
provider "aws" {
  region = var.region
}
- Edit the environment.tf file to configure Terraform with the target Reference Architecture. During this workshop, we are going to use the created network mode with 3 subnets. GET is quite flexible and the network configuration can be customized to your needs. For more information check the Advanced Network documentation.
- For the 3K Architecture this file should look as follows:
module "gitlab_ref_arch_aws" {
source = "../../modules/gitlab_ref_arch_aws"
prefix = var.prefix
ssh_public_key = file(var.ssh_public_key_file)
# using network mode create with 3 subnets
create_network = true
subnet_pub_count = 3
# External load balancer node
haproxy_external_node_count = 1
haproxy_external_instance_type = "c5.large"
# Redis
redis_node_count = 3
redis_instance_type = "m5.large"
# Consul + Sentinel
consul_node_count = 3
consul_instance_type = "c5.large"
# Postgres
postgres_node_count = 3
postgres_instance_type = "m5.large"
# Pgbouncer
pgbouncer_node_count = 3
pgbouncer_instance_type = "c5.large"
# External Load Balancer
haproxy_external_elastic_ip_allocation_ids = [var.external_ip_allocation]
# Internal Load balancer node
haproxy_internal_node_count = 1
haproxy_internal_instance_type = "c5.large"
# gitaly
gitaly_node_count = 3
gitaly_instance_type = "m5.xlarge"
# praefect
praefect_node_count = 3
praefect_instance_type = "c5.large"
# praefect postgres
praefect_postgres_node_count = 1
praefect_postgres_instance_type = "c5.large"
# nfs
gitlab_nfs_node_count = 1
gitlab_nfs_instance_type = "c5.xlarge"
#rails
gitlab_rails_node_count = 3
gitlab_rails_instance_type = "c5.2xlarge"
#grafana and prometheus
monitor_node_count = 1
monitor_instance_type = "c5.large"
#sidekiq
sidekiq_node_count = 4
sidekiq_instance_type = "m5.large"
}
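Before moving on, it can be worth a quick check that none of the example values shown in this guide are left in your files. A minimal sketch (the values below are the examples used above; substitute whatever you copied):

# Run from terraform/environments/3k; flags any example values still present
grep -nE "afonseca|eu-central-1|eipalloc-08d875f994cb379bb" variables.tf main.tf environment.tf \
  || echo "No example values left - prefix, region, and allocation ID are yours"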
Ansible configuration
Now let's configure Ansible, first creating the required directory structure and files.
- In a terminal, from the directory gitlab-environment-toolkit, execute the following steps:
  - Run mkdir -p ansible/environments/3k/inventory
  - Run cd ansible/environments/3k/inventory
  - Run touch 3k.aws_ec2.yml vars.yml
The directory structure should look like below:
gitlab-environment-toolkit
└── ansible
    └── environments
        └── 3k
            └── inventory
                ├── 3k.aws_ec2.yml
                └── vars.yml
- Now let's configure the AWS Dynamic Inventory plugin. Edit 3k.aws_ec2.yml to look similar to the following:
  - Replace region accordingly
  - Replace tag:gitlab_node_prefix: according to your prefix
  - Don't change ansible_host. The value public_ip_address is the correct value.
plugin: aws_ec2
regions:
  - eu-central-1
filters:
  tag:gitlab_node_prefix: gitlab-afonseca-3k # Same prefix set in Terraform
keyed_groups:
  - key: tags.gitlab_node_type
    separator: ''
  - key: tags.gitlab_node_level
    separator: ''
hostnames:
  # List host by name instead of the default public ip
  - tag:Name
compose:
  # Use the public IP address to connect to the host
  # (note: this does not modify inventory_hostname, which is set via I(hostnames))
  ansible_host: public_ip_address
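Later, once the infrastructure has been provisioned and you are inside the Toolkit container, you can optionally confirm that this dynamic inventory resolves your instances; a sketch of such a check:

# From /gitlab-environment-toolkit/ansible inside the Toolkit container,
# after Terraform has created the instances
ansible-inventory -i environments/3k/inventory --graph
# Expect groups derived from the node tags (e.g. gitlab_rails, gitaly, praefect)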
Now we need to provide the variables to the Ansible playbooks. For simplicity, we are going to use just one password across the application, but this can be configured per your needs. We also avoid hard-coding passwords in the playbooks by using environment variables through lookup('env', ...), which pulls the password for us. The GITLAB_PASSWORD variable will be defined in the next section.
- Set the variables accordingly to produce a vars.yml similar to the following:
  - Replace aws_region
  - Replace prefix
  - Replace external_url with the allocated Elastic IP
  - Uncomment and add your gitlab_license_file if it's available
all:
  vars:
    # Ansible Settings
    ansible_user: ubuntu
    ansible_ssh_private_key_file: "../keys/id_rsa"
    # Cloud Settings, available options: gcp, aws, azure
    cloud_provider: "aws"
    # AWS only settings
    aws_region: "eu-central-1"
    # General Settings
    prefix: "gitlab-afonseca-3k"
    external_url: "http://3.64.162.121"
    # gitlab_license_file: "../../../sensitive/GitLabBV.gitlab-license"
    # Component Settings
    patroni_remove_data_directory_on_rewind_failure: false
    patroni_remove_data_directory_on_diverged_timelines: false
    # Passwords / Secrets
    gitlab_root_password: "{{ lookup('env', 'GITLAB_PASSWORD') }}"
    grafana_password: "{{ lookup('env', 'GITLAB_PASSWORD') }}"
    postgres_password: "{{ lookup('env', 'GITLAB_PASSWORD') }}"
    patroni_password: "{{ lookup('env', 'GITLAB_PASSWORD') }}"
    consul_database_password: "{{ lookup('env', 'GITLAB_PASSWORD') }}"
    gitaly_token: "{{ lookup('env', 'GITLAB_PASSWORD') }}"
    pgbouncer_password: "{{ lookup('env', 'GITLAB_PASSWORD') }}"
    redis_password: "{{ lookup('env', 'GITLAB_PASSWORD') }}"
    praefect_external_token: "{{ lookup('env', 'GITLAB_PASSWORD') }}"
    praefect_internal_token: "{{ lookup('env', 'GITLAB_PASSWORD') }}"
    praefect_postgres_password: "{{ lookup('env', 'GITLAB_PASSWORD') }}"
Running Terraform and Ansible from Toolkit's Container
- Open a terminal and set the following environment variables for the Terraform module: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and GITLAB_PASSWORD, using something like 3XKq6Bu3QnEze2uW for the GitLab root password.
  - Run export AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
  - Run export AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
  - Run export GITLAB_PASSWORD=<GITLAB_PASSWORD>

To generate your own password, use a password generator with Include Symbols unchecked and Exclude Ambiguous Characters checked, to avoid issues with password policies across architecture components. These requirements generally work fine.
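If you prefer to generate the password from the shell instead of a web-based generator, a minimal sketch (assuming openssl is available on the instance) that produces a 16-character password with no symbols and no ambiguous 0/1 digits:

# Generate and export a 16-character alphanumeric GitLab root password
GITLAB_PASSWORD="$(openssl rand -base64 48 | tr -dc 'A-Za-z2-9' | head -c 16)"
echo "$GITLAB_PASSWORD"   # note it down; you will need it to log in as root
export GITLAB_PASSWORD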
- Configure Ansible output logging
  - Open a terminal and edit gitlab-environment-toolkit/ansible/ansible.cfg, adding the following lines under [defaults]:
log_path = /gitlab-environment-toolkit/ansible/environments/3k/inventory/ansible.log
display_args_to_stdout = True
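Because the 3k environment directory is mounted into the container (see the docker run command below), the same log file is also visible on the EC2 instance. While the playbooks run, you can follow progress from a second terminal, for example:

# On the EC2 instance (outside the container), follow the Ansible log live
tail -f /home/ec2-user/gitlab-environment-toolkit/ansible/environments/3k/inventory/ansible.log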
- Run a Toolkit Docker container in interactive mode, passing the environment variables and mounting the required volume folders.
  - First, take note of the full paths to the following folders and file:
    - gitlab-environment-toolkit/keys
    - gitlab-environment-toolkit/ansible/environments/3k
    - gitlab-environment-toolkit/ansible/ansible.cfg
    - gitlab-environment-toolkit/terraform/environments/3k
  - Replace the following markers with those full paths in the Docker command below:
    - <host_full_path_to_keys>
    - <host_full_path_to_ansible_environment_3k>
    - <host_full_path_to_ansible.cfg_file>
    - <host_path_to_terraform_environment_3k>
docker run -it \
-v <host_full_path_to_keys>:/gitlab-environment-toolkit/keys \
-v <host_full_path_to_ansible_environment_3k>:/gitlab-environment-toolkit/ansible/environments/3k \
-v <host_full_path_to_ansible.cfg_file>:/gitlab-environment-toolkit/ansible/ansible.cfg \
-v <host_path_to_terraform_environment_3k>:/gitlab-environment-toolkit/terraform/environments/3k \
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
-e GITLAB_PASSWORD=$GITLAB_PASSWORD \
registry.gitlab.com/gitlab-org/gitlab-environment-toolkit:latest
On my instance, the paths look like this:
docker run -it \
-v /home/ec2-user/gitlab-environment-toolkit/keys:/gitlab-environment-toolkit/keys \
-v /home/ec2-user/gitlab-environment-toolkit/ansible/environments/3k:/gitlab-environment-toolkit/ansible/environments/3k \
-v /home/ec2-user/gitlab-environment-toolkit/ansible/ansible.cfg:/gitlab-environment-toolkit/ansible/ansible.cfg \
-v /home/ec2-user/gitlab-environment-toolkit/terraform/environments/3k:/gitlab-environment-toolkit/terraform/environments/3k \
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
-e GITLAB_PASSWORD=$GITLAB_PASSWORD \
registry.gitlab.com/gitlab-org/gitlab-environment-toolkit:latest
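Once the container starts you land in a shell inside it. An optional sanity check (a sketch; the paths are the container-side mount targets from the command above) that the mounts and environment variables made it through:

# Inside the Toolkit container: confirm the mounted files are visible
ls /gitlab-environment-toolkit/keys
ls /gitlab-environment-toolkit/terraform/environments/3k
ls /gitlab-environment-toolkit/ansible/environments/3k/inventory

# Confirm the credentials and password were passed through (values masked)
env | grep -E 'AWS_ACCESS_KEY_ID|AWS_SECRET_ACCESS_KEY|GITLAB_PASSWORD' | sed 's/=.*/=<set>/'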
- Run the Terraform sequence to provision the required infrastructure from within the container (an optional post-apply check is sketched after this list):
  - Install Terraform in the container with mise install terraform -y
  - Run cd /gitlab-environment-toolkit/terraform/environments/3k
  - Run terraform init
  - Run terraform plan -out 3k.aws_ec2.tfplan
  - Run terraform apply 3k.aws_ec2.tfplan
    - If you apply without the saved plan file, enter 'yes' when prompted to perform the actions
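After the apply completes, an optional way to review what was provisioned before moving on to Ansible (the exact outputs depend on the GET module version):

# Still inside the container, in terraform/environments/3k
terraform output              # summary outputs for the provisioned environment
terraform state list | wc -l  # rough count of managed resources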
- Inside the container, let's run the Ansible scripts:
  - Run cd /gitlab-environment-toolkit/ansible
  - To validate the Ansible configuration run ansible all -m ping -i environments/3k/inventory --list-hosts
  - Run ansible-playbook -i environments/3k/inventory playbooks/all.yml
    - NOTE: This might take up to an hour to complete
  - Once the process is complete with no ERRORs, you may exit the container by typing exit (or via Ctrl+D)
Testing the GitLab Deployment
- Back in the instance terminal, let's ssh into a Rails node to check the GitLab status and the Puma logs.
  - Run something similar to ssh -i /home/ec2-user/gitlab-environment-toolkit/keys/id_rsa ubuntu@your-rails-node-public-dns
  - Run sudo gitlab-ctl status
  - Run sudo gitlab-ctl tail puma
- Access the GitLab application using http://<your_elastic_ip> (same as provided in external_url in vars.yml); a quick command-line check is sketched after this list.
  - Log in with user root and the password set in the variable GITLAB_PASSWORD
  - Create an issue in a new project, including the default README.md file
  - Create a merge request (MR) from the issue and change the project's README.md by adding the title Congratulations! You deployed GitLab HA with GET
  - Commit the change and merge the MR
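Before (or in addition to) opening a browser, a minimal sketch run from the workshop EC2 instance to confirm the external load balancer is answering (replace the placeholder with your Elastic IP):

# Expect an HTTP response from HAProxy, typically a redirect to the sign-in page
curl -sSI http://<your_elastic_ip> | head -n 5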
Post Screenshots (for certification only)
- Provide screenshots of your AWS console showing the various compute nodes that were provisioned during this workshop.
- Provide screenshots of the running instance of GitLab (include the URL bar with the IP address matching your Elastic IP from the initial steps) as a comment in this issue.
- Attach the GET (Ansible) logs as a comment to this issue. You can copy them locally to attach to the issue by running a command similar to scp -i "afonseca-kp-frankfurt.pem" ec2-user@your-public-dns:/path/to/ansible.log /local/path/to/ansible.log
- Return to LevelUp to provide a link to this issue and your user ID. We will administer the grade for this workshop in LevelUp.
Next Steps
Complete the workshop
NOTE: Please don't forget to destroy this environment using Terraform once you are done using it. You can do that from the container by running the steps below (a consolidated sketch follows this list):
- cd /gitlab-environment-toolkit/terraform/environments/3k
- terraform destroy
- To terminate the GET instance, in the AWS EC2 console right-click it, select Stop instance, and then Terminate instance.
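If you have already exited the Toolkit container, a sketch of the cleanup (re-enter the container with the same docker run command shown earlier, so the AWS credentials and the Terraform environment mount are available):

# Inside the Toolkit container
cd /gitlab-environment-toolkit/terraform/environments/3k
terraform destroy       # review the plan and type 'yes' to confirm
terraform state list    # should print nothing once everything is destroyed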
Continue to use cloud-managed services for stateful components
- Create an issue to follow how to replace and use cloud managed services like RDS and ElastiCache with GET.
That is everything for today. Thank you so much for your attendance!