Commit 7e50c8b1 authored by Hanif Smith-Watson's avatar Hanif Smith-Watson

Fix incorrect pages in sitemap

parent ee109cc0
......@@ -2823,7 +2823,7 @@ features:
- title: "Portfolio Management"
description: "Plan and track work at the project and portfolio level. Manage capacity and resources together with Portfolio Management."
link_description: "Learn more about Portfolio Management"
link: /solutions/portfolio-management/
link: /solutions/agile-delivery/
- unknown
gitlab_core: false
......@@ -253,7 +253,7 @@ categories:
gitlab_ultimate: true
# The below item is designated as "Portfolio Management" in features.yml
- feature_title: "Company-Wide Portfolio Management"
link: /solutions/portfolio-management/
link: /solutions/agile-delivery/
gitlab_core: false
gitlab_starter: false
gitlab_premium: false
......@@ -167,7 +167,7 @@ Related Reading:
**VP Shared Svcs** - Common services for IT (Project Mgt, Portfolio Mgt, Resource Mgt, perhaps QA, etc) - for enterprises with microservices model
1. Value Prop - Project & [Portfolio Mgmt](/solutions/portfolio-management/), Testing (QA)
1. Value Prop - Project & [Portfolio Mgmt](/solutions/agile-delivery/), Testing (QA)
1. Resources - future vision
......@@ -2,7 +2,7 @@
layout: handbook-page-toc
title: Product Development Flow
description: "This is the draft version of the Product Development Flow."
canonical_path: "/handbook/product-development-flow/"
canonical_path: "/handbook/product-development-flow/product-development-flow-draft.html"
{::options parse_block_html="true" /}
......@@ -2,7 +2,7 @@
layout: handbook-page-toc
title: Product Development Flow Success Metrics
description: "This page surfaces metrics related to the product development flow"
canonical_path: "/handbook/product-development-flow/success-metrics"
canonical_path: "/handbook/product-development-flow/success-metrics/"
{::options parse_block_html="true" /}
title: "The 11 Rules of GitLab Flow"
categories: insights
author: Sid Sijbrandij
author_twitter: sytses
image_title: '/images/blogimages/the-11-rules-of-gitlab-flow-cover.png'
description: "Doing Git Right: The 11 Rules of GitLab Flow"
twitter_image: '/images/tweets/the-11-rules-of-gitlab-flow.png'
Version management with [Git] is an improvement over methods used before Git in just about every way. However, many organizations end up with messy workflows, or overly complex ones. This is particularly a problem for organizations who have transitioned from another [version control system](/topics/version-control/).
In this post we're laying out 11 rules for the [GitLab Workflow][doc] to help simplify and clean it up. The major benefit of these rules (or so we hope) is that they simplify the process and produce a more efficient, cleaner outcome.
We think there's always room for improvement, and everything is a draft. As always, **everyone can contribute**! Feedback and opinions are very welcome.
<!-- more -->
<i class="fas fa-code-branch" aria-hidden="true"></i> **1. Use feature branches, no direct commits on master.**
{: .alert .alert-success .green}
If you're coming over from [SVN], for example, you'll be used to a trunk-based workflow. When using Git you should create a **branch** for whatever you’re working on, so that you end up doing a code review before you merge.
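In practice that looks like the sketch below (hypothetical repo, file, and branch names); in a real project the final merge happens through a reviewed merge request rather than a local `git merge`, but the shape is the same: **master** only moves via a merge of a feature branch.

```shell
set -e
# Minimal sketch: all work happens on a feature branch; master only
# moves via a (reviewed) merge. Names below are hypothetical.
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email "dev@example.com" && git config user.name "Dev"
echo "v1" > app.txt
git add app.txt && git commit -q -m "Initial commit"

git checkout -q -b add-user-avatars     # feature branch for the new work
echo "avatars" >> app.txt
git commit -q -a -m "Add user avatars to profiles"

git checkout -q master                  # master stays untouched until review
git merge -q --no-ff add-user-avatars -m "Merge branch 'add-user-avatars'"
git log --oneline -n 3
```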
<i class="fas fa-check-square-o" aria-hidden="true"></i> **2. Test all commits, not only ones on master.**
{: .alert .alert-success .green}
Some people set up their CI to only test what has been merged into **master**. This is too late; people should feel confident that **master** always has green tests. It doesn't make sense for people to have to test **master** before they start developing new features, for example. CI isn't expensive, so testing every commit is the sensible default.
<i class="fas fa-flask" aria-hidden="true"></i> **3. Run all the tests on all commits (if your tests run longer than 5 minutes have them run in parallel).**
{: .alert .alert-success .green}
If you're working on a feature branch and you add new commits, run tests then and there. If the tests are taking a long time, try running them in parallel. Do this server-side in merge requests, running the complete test suite. If you have a test suite for development and another that you only run for new versions, it's worthwhile to set up [parallel] tests and run them all.
<i class="fas fa-code" aria-hidden="true"></i> **4. Perform code reviews before merges into master, not afterwards.**
{: .alert .alert-success .green}
Don't review everything at the end of your week. Do it on the spot, because you'll be more likely to catch things that could cause problems, and others will also be working to come up with solutions.
<i class="fas fa-terminal" aria-hidden="true"></i> **5. Deployments are automatic, based on branches or tags.**
{: .alert .alert-success .green}
If you don't want to deploy **master** every time, you can create a **production branch**; but there’s no reason why you should use a script or log in somewhere to do it manually. Have everything automated, or a specific branch that triggers a [production deploy][environment].
<i class="fas fa-tags" aria-hidden="true"></i> **6. Tags are set by the user, not by CI.**
{: .alert .alert-success .green}
A user sets a **tag** and, based on that, the CI will perform an action. You shouldn’t have the CI change the repository. If you need very detailed metrics, you should have a server report detailing new versions.
<i class="fas fa-cloud-upload-alt" aria-hidden="true"></i> **7. Releases are based on tags.**
{: .alert .alert-success .green}
If you tag something, that creates a new release.
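As a sketch (with a hypothetical version number and a local bare repo standing in for the real remote): a human sets and pushes the tag, and the pushed tag is what CI reacts to when cutting the release.

```shell
set -e
# Sketch: the user sets the tag, not CI; pushing the tag is what
# triggers the release pipeline. Tag name is hypothetical.
remote=$(mktemp -d) && git init -q --bare "$remote"
work=$(mktemp -d) && cd "$work"
git init -q -b master
git config user.email "dev@example.com" && git config user.name "Dev"
git remote add origin "$remote"
echo "ready" > app.txt && git add app.txt && git commit -q -m "Prepare release"
git push -q origin master

git tag -a v2.5.0 -m "Release v2.5.0"   # created by a user, never by CI
git push -q origin v2.5.0               # CI keys the release off this tag
git ls-remote --tags origin
```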
<i class="fas fa-eye-slash" aria-hidden="true"></i> **8. Pushed commits are never rebased.**
{: .alert .alert-success .green}
If you push to a public branch you shouldn't rebase it, since rebasing makes it hard to follow what you're improving and what the test results were, and it breaks cherry-picking.
We sometimes sin against this rule ourselves when we ask a contributor to squash and rebase at the end of a review process to make something easier to revert.
But in general the guideline is: code should be clean, history should be realistic.
<i class="far fa-folder-open" aria-hidden="true"></i> **9. Everyone starts from master, and targets master.**
{: .alert .alert-success .green}
This means you don’t have any long branches. You check out **master**, build your feature, create your merge request, and target **master** again. You should do your complete review **before** you merge, and not have any intermediate stages.
<i class="fas fa-bug" aria-hidden="true"></i> **10. Fix bugs in master first and release branches second.**
{: .alert .alert-success .green}
If you find a bug, the **worst** thing you can do is fix it in the just-released version but not in **master**. To avoid this, always fix forward: fix the bug in **master**, then **cherry-pick** the fix into another patch-release branch.
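A minimal sketch of fixing forward (branch name and commit messages are hypothetical); `cherry-pick -x` records which **master** commit the patch came from:

```shell
set -e
# Sketch: the fix lands on master first, then is cherry-picked into
# the release branch. All names below are hypothetical.
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email "dev@example.com" && git config user.name "Dev"
echo "feature" > app.txt && git add app.txt && git commit -q -m "Release 12.9"
git branch 12-9-stable                  # the just-released patch branch

echo "bugfix" >> app.txt
git commit -q -a -m "Fix crash on empty input"
fix_sha=$(git rev-parse HEAD)           # the fix is on master...

git checkout -q 12-9-stable
git cherry-pick -x "$fix_sha"           # ...then copied to the release branch
```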
<i class="far fa-edit" aria-hidden="true"></i> **11. Commit messages reflect intent.**
{: .alert .alert-success .green}
You should not only say what you did, but also why you did it. It’s even more useful if you explain why you did this over any other options.
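For example (an entirely hypothetical change), a second `-m` gives the commit a body that records the why, and why this approach over the alternatives:

```shell
set -e
# Hypothetical change: the subject says what changed; the body says why,
# and why this option was chosen over others.
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email "dev@example.com" && git config user.name "Dev"
echo "cache = {}" > diff.rb && git add diff.rb
git commit -q -m "Cache blob lookups when rendering diffs" \
  -m "Rendering large diffs re-fetched the same blobs repeatedly. A
per-request cache removes the repeated fetches; a global LRU was rejected
because it would make memory use unbounded across requests."
git log -1 --format=%B
```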
Read more at the [GitLab Flow documentation][doc].
Follow [@GitLab] and stay tuned for the next post!
<!-- identifiers -->
[ce]: /images/blogimages/gitlab-ce-network.png
.green {
  color: rgb(60,118,61) !important;
}
.green i {
  color: rgb(226,67,41) !important;
}
......@@ -21,7 +21,7 @@ using the open source version of GitLab and the enterprise features of
Here are a few things that you should explore in a GitLab trial:
* Security ([SAST](, [DAST](, and [dependency scans](
* [Portfolio management](/solutions/portfolio-management/) and tracking epics and roadmaps
* [Portfolio management](/solutions/agile-delivery/) and tracking epics and roadmaps
* [Licence management](
* [Kubernetes](/solutions/kubernetes/) integration and management
* [LDAP]( integration
title: "The ideal DevOps team structure"
author: Chrissie Buchanan
author_gitlab: cbuchanan
author_twitter: gitlab
categories: insights
image_title: '/images/blogimages/devops-team-structure.jpg'
description: "Dev and Ops working together is a beautiful thing. What is the ideal structure for DevOps to thrive?"
tags: DevOps, agile
cta_button_text: 'Just commit'
cta_button_link: '/just-commit/application-modernization/'
twitter_text: "What is the ideal #DevOps team structure?"
postType: content marketing
The seamless collaboration between Development and IT operations is a beautiful thing. [DevOps](/topics/devops/) was designed to remove silos so that these teams could work together to build, test, and deploy software faster. But there’s a lot more to DevOps than just a philosophy and a catchy abbreviation – the structure component goes much deeper than that.
For those that may be unfamiliar with [Conway’s Law](, it goes something like this:
>“Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.”
So what is the ideal DevOps team structure? There are a few things to consider here. The only way to know if your existing structure works is if you _feel_ like it works – Dev and Ops are working together and business objectives are being met or exceeded. What that looks like for every company is a little bit different and it helps to analyze different models. By looking at the pros and cons of each, and considering Conway’s Law, you can find a better fit for your team’s unique needs.
Several factors come into play when it comes to team structure:
* **Existing silos**: Are there product sets/teams that work independently?
* **Technical leadership**: Who leads teams and what is their industry experience? Do Dev and Ops have the same goals or are they guided by the individual experience of their leaders?
* **IT Operations**: Has operations fully aligned with the goals of the business, or are they still just seen as configuring servers and assisting the development team with their agenda?
* **Knowledge gaps**: Is the organization equipped today with the skills and human resources to change the DevOps structure?
## Making sense of silos
[Matthew Skelton’s blog]( covers a number of different DevOps scenarios in great detail, but we’ll discuss just a few of the silos he mentions specifically and how they impact an organization.
### Dev and Ops are completely separate
Skelton refers to this as a classic “throw it over the wall” team structure and, as implied, it’s not the most effective DevOps strategy. Both teams work in their bubbles and lack visibility into the workflow of the other team. This complete separation lacks collaboration, visibility, and understanding – vital components of what effective DevOps _should_ be. What happens is essentially blame-shifting: "**_We_** don’t know what **_they_** are doing over there, **_we_** did our part and now it's up to **_them_** to complete it," and so on. Do you see the pattern?
Going back to Conway’s Law, this is an organization that clearly doesn’t communicate well, so they have created a structure that reflects this probably without realizing it. This is not great DevOps.
### DevOps middleman
In this team structure, there are still separate Dev and Ops teams, but there is now a “DevOps” team that sits between, as a facilitator of sorts. This is not necessarily a bad thing and Skelton stresses that this arrangement has some use cases. For example, if this is a temporary solution with the goal being to make Dev and Ops more cohesive in the future, it could be a good interim strategy.
### Ops stands alone
In this scenario, Dev and DevOps are melded together while Ops remains siloed. Organizations like this still see Ops as something that supports the initiatives for software development, not something with value in itself. Organizations like this suffer from basic operational mistakes and could be much more successful if they understand the value Ops brings to the table.
## The importance of leadership
Let’s assume that, in your organization, the dysfunction you experience is 100% confirmed by Conway’s Law and there will need to be a major shift in communication in order to improve your DevOps structure. How can you do that? The secret to overcoming the challenges of cultural change related to DevOps implementations can be found in the way leaders _lead_.
Organizational change initiatives are notoriously difficult: There has to be a company-wide buy-in and many departments will have to agree on a course of action. Change isn't easy even in the most ideal scenarios, let alone in organizations that aren't communicating well in the first place. Some of the biggest predictors of failure are:
* Resistance to change
* Low readiness for change
* Poor employee engagement
[Transformational leadership]( has a direct influence on how team members respond to DevOps changes in processes, technology, roles, and mindsets.
When it comes to defining specific roles and their functions, the team at Puppet [made these recommendations](
* **IT manager**: Builds trust with counterparts on other teams; creates a climate of learning and continuous improvement; delegates authority to team members
* **Dev manager**: Builds trust with Ops; brings Ops into the planning process early
* **Systems engineer**: Automates the things that are painful
* **Quality engineer**: Provides input into scale, performance, and on staging environments
* **Devs**: Plan deployments of new features with feedback from Ops and work with them on deployment processes
## Getting Ops involved
Operations is a discipline with its own methodologies. Just because modern cloud hosting makes it easier than ever to deploy servers without having to know one end of a SCSI cable from another doesn’t mean that everyone is an Ops master. What Ops brings to the SDLC is reliability, performance, and stability. Devs can help the production environment by using their skills to automate processes, and true DevOps plays to the strengths of each.
DevOps does not mean that developers manage production.
Dev and Ops can be in direct conflict with each other because both teams are incentivized in vastly different ways: Operations emphasizing availability, Development emphasizing feature delivery. Availability requires caution while caution is the very antithesis of speed, but both teams can learn from each other and benefit from their experience.
Ops is an ally, not a barrier, in the SDLC.
## Mind the gap(s)
What would you need today to create a more efficient DevOps team structure? Going back to Conway’s Law, it’s important to analyze how your team communicates now and think objectively about what should be better and what you would like to create. Tools can’t solve cultural problems.
Organizations have embraced new structures in order to achieve certain outcomes, and they understand the link between organizational structure and the software they create. For example, Netflix and Amazon [structure themselves around multiple small teams](, each one autonomous over a small part of the system. Teams with more monolithic codebases just can’t work this way, and in order to adopt the Netflix DevOps model, they would need to adopt a microservices architecture as well.
Microservices and containers enable a DevOps model that iterates quickly and offers more autonomy within certain groups. The architecture of the code environment has a large effect on how teams work together.
Since GitLab is a complete DevOps platform, delivered as a single application, our Dev teams are organized into stages (e.g. [Verify](/handbook/engineering/ops/verify/) group, [Create](/handbook/engineering/dev-backend/create/) group, etc.) because these would be separate products at any other company and require their own autonomy. We also have other functional DevOps groups besides "Dev" that manage other aspects of our product. We have an SRE team that manages uptime and reliability for, a [Quality department](/handbook/engineering/quality/), and a [Distribution team](/handbook/engineering/development/enablement/distribution/), just to name a few. The way that we make all these pieces fit together is through [our commitment to transparency](/blog/2017/03/14/buffer-and-gitlab-ceos-talk-transparency/) and our visibility through the entire SDLC. We’re also dedicated to [cloud native development](/blog/2017/11/30/containers-kubernetes-basics/) through containers and Kubernetes, which enables us to release faster.
A team structure that facilitates collaboration and visibility between the Dev and Ops teams, as well as tools that automate processes, are the hallmarks of an ideal [DevOps lifecycle](/stages-devops-lifecycle/auto-devops/). Keep in mind that good DevOps _doesn’t_ mean that everybody does everybody’s job. [Should developers do Ops](/blog/2019/06/05/modernize-your-ci-cd/)? Not necessarily.
In the beginning, many thought the goal of DevOps was to combine the Dev, QA, and Ops departments into a single team: Have everyone do everything and – boom – instant innovation. [These strategies, unsurprisingly, failed](
Specialists can add value, but a lack of cohesion between the Dev and Ops processes leads to unnecessary dysfunction over time. It should not be **_we_** and **_them_** – it should be **_us_**. An organization that communicates like this will inevitably build a structure that operates in much the same way. The ideal DevOps team structure is the one that lets teams work together effectively and removes the barriers between code and production.
Are you ready to build a better DevOps structure? [Just commit](/just-commit/application-modernization/).
Photo by [Zbysiu Rodak]( on [Unsplash](
{: .note}
title: "Why collaboration technology is critical for GitOps"
author: Sara Kassabian
author_gitlab: skassabian
author_twitter: sarakassabian
categories: company
image_title: '/images/blogimages/gitopsseries.png'
description: "How GitLab can be the single source of truth for infrastructure and deployment teams."
tags: git, inside GitLab
ee_cta: false
install_cta: false
cta_button_text: 'Watch: GitOps expert panel'
cta_button_link: '/why/gitops-infrastructure-automation/'
twitter_text: "How GitLab empowers the pillars of GitOps, collaboration, process, and version control."
featured: yes
postType: content marketing
merch_banner: merch_five
merch_sidebar: merch_five
_While there are plenty of DevOps tools that can fulfill some of the functions of GitOps, GitLab is the only tool that can take your application from idea to code to deployment all in one collaborative platform. GitLab strategic account leader Brad Downey shows users how we make GitOps work in a three-part blog and video series. In part one, we dig deeper into GitOps in terms of process and explain how GitLab was designed to fulfill these functions._
“GitOps” is the latest buzzword in the DevOps lexicon, and as with any new concept it has [many different interpretations]( At its core though, [GitOps](/topics/gitops/) refers to using a Git repository as the single source of truth for all the code that goes into building infrastructure and deploying applications.
Some level of automation is required to deploy the code to various clouds, and there are declarative tools to help expedite this process. For example, Terraform can be used to provision Kubernetes to any publicly available cloud, while Helm and Kubernetes can be used to deploy applications to those Kubernetes clusters.
# GitOps and GitLab
By using [a version control system](/topics/version-control/) such as Git as the single source of truth, engineers are able to update the underlying source code for their applications in a [continuous delivery](/topics/ci-cd/) format.
“The version control system ensures everything is recorded and visible and an audit trail keeps teams compliant,” says [Brad Downey](, strategic account leader at GitLab, in an [article for]( “GitOps will make it easy to revert problematic changes, becoming a single source of truth about what is happening in the system from both the software development and infrastructure perspective.”
GitLab is a single application for the entire [DevOps lifecycle](/topics/devops/) that is built on Git. In addition to being a vital tool for application development, GitLab is a collaboration platform that allows any and all stakeholders to weigh in on the code production process.
“Collaboration is key to this entire [GitOps] process,” says Brad. “Infrastructure teams, development teams, even management, project management, security, and business stakeholders, all need to collaborate together to produce this code in a fast and efficient manner.”
In the first video in our series, Brad shares an example of how GitLab has allowed teams to perform GitOps well before the process had its name.
<!-- blank line -->
<figure class="video_container">
<iframe src="" frameborder="0" allowfullscreen="true"> </iframe>
</figure>
<!-- blank line -->
# Using GitLab
## Planning a project with epics
Brad created an example epic called [Scale the Cloud]( to demonstrate the process behind scaling up a Kubernetes cluster in GitLab for this GitOps series.
Since GitOps is deployment centered on version control, the first step is to define the scope of the project and identify the stakeholders. Next, we share any other information that might be necessary to make the project happen, e.g., the code itself, changes to infrastructure as code, and which changes must be reviewed and eventually deployed to production.
After opening an [epic]( in the associated repository, you can add some of the goals and tasks in the description. An epic allows you to track issues across different projects and milestones. An [issue]( is the main medium for collaborating ideas and planning work in GitLab. Because GitLab is multi-cloud, Brad opens three separate issues for the demo that articulate what is required to deploy the Kubernetes cluster to each unique environment: [Azure (AKS)](, [Google (GKE)](, and [Amazon (EKS)](
## Fostering collaboration and transparency with GitLab
We can see at the epic level that the issue for [scaling inside the EKS cluster]( has already been completed. Clicking the [issue]( reveals that a merge request was created from the tasks outlined in the issue, and that the MR is already merged.
To see what exactly has changed between the original code and current code, click inside the MR.
“I can see all the tests that passed before I merged it, and after,” says Brad. “I could also see what was changed in the comment history. And, making a note that I approved and merged it myself.”
The issue for scaling to GKE is not yet completed. When Brad opens the merge request, we see that it is still a [Work in Progress (WIP)](, meaning nothing has been changed yet. There is a comment on the MR from Terraform, which shows that the node count needs to change from two nodes to five nodes to prepare the GKE environment for deployment. Since Brad is also the approver for this MR, he clicks Resolve the WIP Status to kick off the pipeline, and opts to delete the source branch to merge the updated node count.
“In terms of GitOps, it isn't just about the code, it's about the collaboration. And GitLab enables the collaboration, everybody to be working on the same page,” says Brad.
In order for GitLab to be an effective collaboration tool, it also needs to be transparent, which is why everyone in the organization can see an issue and its associated MR by default. The issue and MR can be assigned to a collaborator, or the collaborator can be tagged in the comments section to have it added to their [To Do list](
Navigating back to the Epic view, which is what stakeholders will often use to view project progress, we see that the deployment for scaling GKE to five nodes is underway.
“Everybody is able to work from the same system and understand where things are at,” says Brad. “Whether you're in infrastructure or whether you're in application development, all changes follow the same process of defining the body of work, assigning it to individuals, collaborating with teammates, and then deploying that code and using the Git repository as that single source of truth.”
_Read [part two](/topics/gitops/gitlab-enables-infrastructure-as-code/) of our GitOps series to see how infrastructure teams can use GitLab and Terraform to build dynamic infrastructure for applications._
<%= partial "includes/blog/blog-merch-banner" %>
title: "How infrastructure teams use GitLab and Terraform for GitOps"
author: Sara Kassabian
author_gitlab: skassabian
author_twitter: sarakassabian
categories: engineering
image_title: '/images/blogimages/gitopsseries.png'
description: "Read part two of our GitOps series to see how infrastructure teams can use GitLab and Terraform to build dynamic infrastructure for applications."
canonical_path: "/blog/2019/11/12/gitops-part-2/"
tags: git, CI/CD, inside GitLab
guest: true
ee_cta: false
install_cta: false
twitter_text: "How GitLab empowers the pillars of GitOps: Collaboration, process, and version control"
featured: yes
postType: content marketing
cta_button_text: 'Watch: GitOps expert panel'
cta_button_link: '/why/gitops-infrastructure-automation/'
- "/blog/2019/07/01/using-ansible-and-gitlab-as-infrastructure-for-code/"
- "/blog/2020/04/17/why-gitops-should-be-workflow-of-choice/"
- "/blog/2020/07/14/gitops-next-big-thing-automation/"
_While there are plenty of DevOps tools that can fulfill some of the functions of [GitOps](/solutions/gitops/), GitLab is the only tool that can take your application from idea to code to deployment all in one collaborative platform. GitLab strategic account leader Brad Downey shows users how we make GitOps work in a three-part series. In part two, Brad demonstrates how infrastructure teams can use GitLab and Terraform to deploy their infrastructure as code to the cloud. Learn how [GitLab powers GitOps processes in part one of our series](/blog/2019/11/04/gitlab-for-gitops-prt-1/)._
When multiple teams use a Git repository, such as GitLab, as the single source of truth for all infrastructure and application deployment code, they’re performing a good [GitOps](/topics/gitops/) procedure.
[Brad Downey](/company/team/#bdowney), strategic account leader at GitLab, demonstrates how infrastructure teams can collaborate on code in GitLab and then deploy their code to multiple cloud services using Terraform for automation.
“I'm going to walk you through how we create three different Kubernetes clusters in three different public clouds – all using a common process and collaborating with my team, all within GitLab,” says Brad in the demonstration embedded below.
<!-- blank line -->
<figure class="video_container">
<iframe src="" frameborder="0" allowfullscreen="true"> </iframe>
</figure>
<!-- blank line -->
# Building your infrastructure as code in GitLab
## Getting Started
Begin by logging into the group where the project lives within GitLab. Brad created [gitops-demo group]( for this blog series. The next step is to open the []( file, which shows the underlying structure of the gitops-demo group. There are a few individual projects and two subgroups: **[infrastructure](** and **[applications](**. This demo focuses on [infrastructure](, but we’ll be visiting the application deployment project in the third blog post in the series.
## Inside the infrastructure subgroup
There is a separate repository for each cloud: Azure, GCP, and AWS, and a repository for templates.
![Infrastructure subgroup](/images/blogimages/gitops_series_2019/gitops_infra.png){: .shadow}
While similar files can be found in all three cloud repositories, Brad opens the AWS repository in this demo. All of the files are written in Terraform to automate the deployment process, while a `.gitlab-ci.yml` file is also stored in the repository to provide instructions for automation.
### The backend file
We are using HashiCorp's new [Terraform Cloud Service]( as a remote location for our state file. This keeps the state file safe and in a central location so it can be accessed by any process. One advantage of using Terraform Cloud is that it can lock the state so only one job runs at a time, which prevents multiple jobs from making conflicting changes simultaneously. The code says that we're storing the state files in the [Terraform Cloud](, in an organization called `gitops-demo`, in a workspace called `aws`.
terraform {
  backend "remote" {
    hostname = ""
    organization = "gitops-demo"
    workspaces {
      name = "aws"
    }
  }
}
{: .language-ruby}
“This keeps our running state in the cloud provider, so anybody – well, anybody on my team, at least – can access this at any time,” says Brad.
### file
This is another Terraform file, one that leverages the EKS module to define the cluster.
module "eks" {
  source           = "terraform-aws-modules/eks/aws"
  cluster_name     = "gitops-demo-eks"
  subnets          = "${module.vpc.public_subnets}"
  write_kubeconfig = "false"

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }

  vpc_id = "${module.vpc.vpc_id}"

  worker_groups = [
    {
      instance_type = "m4.large"
      asg_max_size  = 5
      tags = [{
        key                 = "Terraform"
        value               = "true"
        propagate_at_launch = true
      }]
    },
  ]
}
{: .language-ruby}
We can define parameters such as the kind of subnets, the number of nodes, etc., in the EKS Terraform file.
### Define the GitLab admin
“I need to create a GitLab admin user on the Kubernetes cluster,” explains Brad. “I want that done [automatically as code and managed by Terraform]( So I leveraged the Kubernetes provider to do this.”
Since the code contained in this file is longer, we’re just including a link to the [gitlab-admin file]( rather than the full code excerpt.
### Register the cluster with GitLab
We just built a Kubernetes cluster! 🎉 Now, we must register the cluster with GitLab so we can deploy more code to the cluster in the future.
First we use the GitLab provider to create a group cluster named AWS cluster.
data "gitlab_group" "gitops-demo-apps" {
  full_path = "gitops-demo/apps"
}

provider "gitlab" {
  alias   = "use-pre-release-plugin"
  version = "v2.99.0"
}

resource "gitlab_group_cluster" "aws_cluster" {
  provider = "gitlab.use-pre-release-plugin"

  group  = "${}"
  name   = "${module.eks.cluster_id}"
  domain = ""

  environment_scope  = "eks/*"
  kubernetes_api_url = "${module.eks.cluster_endpoint}"
  kubernetes_token   = "${}"
  kubernetes_ca_cert = "${trimspace(base64decode(module.eks.cluster_certificate_authority_data))}"
}
{: .language-ruby}
The code contains the domain name, environment scope, and Kubernetes credentials.
“So after this runs, all of this will be deployed,” says Brad. “My cluster will be created in AWS and it will be automatically registered to my [gitops-demo/apps]( group.”
## Deploying our code using GitLab CI
## Terraform template
Return to the infrastructure group and open up the Templates folder. When looking at the [terraform.gitlab-ci.yml file](, we see how the CI works to deploy your infrastructure code to the cloud using Terraform.
Inside the CI file we see a few different stages: validate, plan, apply, and destroy.
We use HashiCorp's Terraform base image to run a few different tasks.
First, we initialize Terraform.
before_script:
  - terraform --version
  - terraform init
  - apk add --update curl
  - curl -o kubectl
  - install kubectl /usr/local/bin/ && rm kubectl
  - curl -o aws-iam-authenticator
  - install aws-iam-authenticator /usr/local/bin/ && rm aws-iam-authenticator
{: .language-ruby}
Next, we validate that everything is correct.
validate:
  stage: validate
  script:
    - terraform validate
    - terraform fmt -check=true
  only:
    - branches
{: .language-ruby}
We learned in the **[previous blog post](/blog/2019/11/04/gitlab-for-gitops-prt-1/)** that good GitOps workflow has us creating a [merge request]( for our changes.
merge review:
  stage: plan
  script:
    - terraform plan -out=$PLAN
    - echo \`\`\`diff > plan.txt
    - terraform show -no-color ${PLAN} | tee -a plan.txt
    - echo \`\`\` >> plan.txt
    - sed -i -e 's/ +/+/g' plan.txt
    - sed -i -e 's/ ~/~/g' plan.txt
    - sed -i -e 's/ -/-/g' plan.txt
    - MESSAGE=$(cat plan.txt)
    - >-
      --data-urlencode "body=${MESSAGE}"
  artifacts:
    name: plan
  only:
    - merge_requests
{: .language-ruby}
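The `sed` lines are the easiest part to puzzle over, so here is a sketch of what they do, using a made-up fragment of `terraform show` output (the real job pipes the actual plan into `plan.txt` first): they shift the `+`/`~`/`-` markers into column one so the ```` ```diff ```` fences in the MR comment colorize added, changed, and removed resources.

```shell
set -e
cd "$(mktemp -d)"
# Hypothetical sample of `terraform show` output; the real pipeline
# writes the actual plan into plan.txt before these sed edits run.
cat > plan.txt <<'EOF'
 + aws_eks_cluster.demo
 ~ module.vpc.tags
 - aws_iam_role.old
EOF
# Same edits as the CI job: move the +/~/- markers to column one so a
# ```diff code block renders them as added/changed/removed lines.
sed -i -e 's/ +/+/g' -e 's/ ~/~/g' -e 's/ -/-/g' plan.txt
cat plan.txt
```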
## The merge request
The [merge request (MR)]( is the most important step in GitOps. This is the process to review all changes and see the impact of those changes. The MR is also a collaboration tool. Team members can weigh in on the MR and stakeholders can approve your changes before the final merge into master.
In the MR we define what will happen when we run the infrastructure as code. After the MR is created, the Terraform plan is uploaded to the MR.
After all changes have been reviewed and approved, we click the `merge` button. This will merge the changes into the `master` branch. Once the code changes are merged into `master`, all the changes will be deployed into production.
And that’s how we follow good GitOps procedure to deploy infrastructure as code using Terraform for automation and GitLab as the single source of truth (and CI). In part three of our blog series, we’ll show application developers how to [deploy to any cloud service using GitLab](/blog/2019/11/06/gitlab-ci-cd-is-for-multi-cloud/).
_Want more infrastructure as code? Read on to learn [how GitLab works with Ansible to create infrastructure as code](/blog/2019/07/01/using-ansible-and-gitlab-as-infrastructure-for-code/)._