# Deployments
A deployment describes a desired state, and the deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
Below is the YAML we use to create the deployment. The required number of replicas is set here, but it can be changed later without having to re-deploy the deployment. The image we declare is the GitLab image we create of the application. `imagePullSecrets` refers to our Kubernetes secret, which we discuss in the [Secrets](/Kubernetes/Constructs#secrets) section.
```
apiVersion: apps/v1
# ...
spec:
# ...
```
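Only a fragment of that file is reproduced above. As an illustration, a comparable Deployment manifest, assuming the `kubemoviewebfront` name and `gitcreds` pull secret used elsewhere on this page (the image path and replica count are placeholders rather than our exact values), might look like this:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubemoviewebfront
spec:
  replicas: 2                     # placeholder; set to the number of pods required
  selector:
    matchLabels:
      app: kubemoviewebfront
  template:
    metadata:
      labels:
        app: kubemoviewebfront
    spec:
      containers:
        - name: kubemoviewebfront
          image: registry.gitlab.com/cnad/kubemoviewebfront:latest   # placeholder image path
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: gitcreds          # the Kubernetes secret holding our registry credentials
```
The replica count can later be changed in place, for example with `kubectl scale deploy kubemoviewebfront --replicas=3`, without re-applying the file.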
We can implement this using `kubectl apply`:
```
kubectl apply -f deploy.yaml
```
We can view our deployments and see if they are running using `kubectl get`:
```
kubectl get deploy
```
This returns all deployments; if we are only concerned with one deployment, we can run the following command:
```
kubectl get deploy kubemoviewebfront
```
If we want more information on a deployment, we can use `kubectl describe`:
```
kubectl describe deploy kubemoviewebfront
```
If we want to check a particular pod of our deployment, we can use `kubectl logs`:
```
kubectl logs {podname}
```
**Note**
In our project, services and deployments are included in the same YAML file. They are separated using three dashes (`---`) to indicate where one ends and the other begins. We do this to unclutter the repository and because deployments and services are very closely linked.
# Ingress
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. An Ingress connects to a service and exposes that service externally on a public IP address.
Annotations in an Ingress let Kubernetes know about the specific environment setup needed for the Ingress. Our web front end is an Angular project built into an Nginx container, so the *nginx.ingress.kube...* annotation tells the Nginx environment where URI traffic must be redirected. We also provide our Ingress with a host, which is a DNS address; our host name is an Azure virtual machine DNS name. The backend refers to a Kubernetes service we have created that is exposed on port 80.
```
apiVersion: networking.k8s.io/v1beta1
# ...
spec:
# ...
          servicePort: 80
```
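As an illustration, a comparable Ingress, assuming the `frontend-ingress` name and `kubemoviewebfront` service used on this page (the rewrite-target annotation and host name are placeholders for our real values), might look like this:
```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # placeholder annotation telling Nginx where URI traffic must be redirected
spec:
  rules:
    - host: kubemovie.australiaeast.cloudapp.azure.com   # placeholder Azure virtual machine DNS name
      http:
        paths:
          - path: /
            backend:
              serviceName: kubemoviewebfront
              servicePort: 80
```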
If we want to see our Ingress we can use `kubectl get`:
```
kubectl get ingress
```
If we want to view a particular Ingress:
```
kubectl get ingress frontend-ingress
```
If we want to view a description of the Ingress:
```
kubectl describe ingress frontend-ingress
```
# Jobs
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job is complete. Unlike Deployments, instead of a ready status we expect to see a completed or terminated status. Think of Jobs as existing to execute one command and then they are finished.
In our project, we found this useful for testing our controller's database connection. Rather than having a fully functional controller application, we built a test program that connected to our cluster, performed a simple selection and printed the results. The test program was best suited to a Job because we did not want the pod trying to select and print continuously; we just needed it to run once.
Below is the YAML we use to create a Job. As in the deployment, we use the image from the GitLab pipeline and we have declared the Kubernetes secret. The main differences from a deployment are that `restartPolicy` is set to `Never` and `backoffLimit` is set to 0, ensuring the Job runs only once. We declare a command to execute a run of the program in *command: ["dotnet", "postgrestest.dll"]*, so the program is triggered upon creation, much like hitting the run button in an IDE.
```
# ...
spec:
# ...
        - name: gitcreds
  backoffLimit: 0
```
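As an illustration, a comparable Job manifest, assuming the `kubemoviejob` name used below and the `gitcreds` pull secret (the image path is a placeholder), might look like this:
```
apiVersion: batch/v1
kind: Job
metadata:
  name: kubemoviejob
spec:
  template:
    spec:
      containers:
        - name: postgrestest
          image: registry.gitlab.com/cnad/postgrestest:latest   # placeholder image path
          command: ["dotnet", "postgrestest.dll"]                # run the test program once on creation
      restartPolicy: Never
      imagePullSecrets:
        - name: gitcreds
  backoffLimit: 0
```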
We can implement this using `kubectl apply`:
```
kubectl apply -f job.yaml
```
We can view our Jobs and see if they are running using `kubectl get`:
```
kubectl get jobs
```
This returns all Jobs; if we are only concerned with one Job, we can run:
```
kubectl get job kubemoviejob
```
If we want more information on the Job, we can use `kubectl logs`:
```
kubectl logs job/kubemoviejob
```
# Operators
An operator is used for tasks such as:
- restoring backups of the application's state
- choosing a leader for a disrupted application
- simulating failures to test cluster resilience
- publishing a Service to applications that don't support Kubernetes APIs to discover them
Many operators exist, covering a broad range of applications, not just SQL databases. We discovered multiple operators for PostgreSQL and Cassandra. Our criteria for selection were that the operator must be an open-source project with good community support. postgres-operator and CassKop met our criteria and were used going forward. These operators can be deployed manually by downloading their YAML files from their respective Git repositories, and you can edit and customise them. The standard deployment of these operators met our requirements, so no adjustments were made to them.
We deployed our operators using `kubectl apply`:
```
$ kubectl apply -k github.com/zalando/postgres-operator/manifests
```
This references an external link where the correct YAML files are acquired and used to create our PostgreSQL operator.
CassKop uses Helm to install the correct files needed to create an operator:
```
$ helm repo add orange-incubator https://orange-kubernetes-charts-incubator.stor
$ helm install casskop orange-incubator/cassandra-operator
```
Once these operators are successfully created, they will each have a pod and a service.
```
$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
...
$ kubectl get services
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
postgres-operator   ClusterIP   10.152.183.37   <none>        8080/TCP   19d
```
In the database connections we use the name of the operator service, *postgres-operator*, as the host address to connect to the cluster.
# Namespaces
Kubernetes provides namespaces, which act as different environments. You can create and name a namespace using a YAML file. The file below will create a namespace named **staging**:
```
{
  ...
}
```
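As an illustration, a minimal definition of the **staging** namespace in the standard Kubernetes JSON format (the `labels` block is an assumption) would be:
```
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "staging",
    "labels": {
      "name": "staging"
    }
  }
}
```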
Then we apply the file using `kubectl apply`:
```
kubectl apply -f namespace-staging.yaml
```
With the namespace created, we need to provide a context for it, giving it the Kubernetes cluster name and the user:
```
kubectl config set-context staging --namespace=staging --cluster=microk8s-cluster --user=admin
```
Once the context is set, we then need to move into our new environment:
```
kubectl config use-context staging
```
Everything we create from here will automatically be created in the `staging` namespace. If we want to target a specific namespace when we deploy or exec a command in Kubernetes, we simply add the namespace to the command, as in this example:
```
kubectl exec --namespace staging acid-minimal-cluster-0 su postgres bash -- ./createdb.sh
```
We can use `kubectl get` to see all the namespaces that exist:
```
kubectl get namespaces
```
If we wish to see the current namespace we are in, we use:
```
kubectl config current-context
```
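We can also list every context that has been configured, along with the cluster, user and namespace each one points at:
```
kubectl config get-contexts
```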
# Secrets
Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
We used secrets to create secure connections between our environment and both GitLab and Azure. Both were created in the same manner but with different details.
## Azure
Connecting to an Azure image registry requires two steps:
- create the credentials in the Azure portal
- create our Kubernetes secret using those credentials
### Azure portal (creating the credentials linking to the Azure image repository)
The credentials are created by navigating in the Azure portal to your container registry, which is named `imgregdocker`, and then using the Azure CLI with a bash-style command line (create an Azure storage unit first if needed).
Using the CLI, perform the following commands. First, set up the variables that later commands will use:
```
ACR_NAME=imgregdocker
SERVICE_PRINCIPAL_NAME=acr-service-principal-sr
ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
SP_PASSWD=$(az ad sp create-for-rbac --name http://$SERVICE_PRINCIPAL_NAME --scopes $ACR_REGISTRY_ID --role owner --query password --output tsv)
SP_APP_ID=$(az ad sp show --id http://$SERVICE_PRINCIPAL_NAME --query appId --output tsv)
```
Once created, echo them back so you can copy them, as we need to add them to the secret:
```
simon_read@Azure:~$ echo "Service principal ID: $SP_APP_ID"
Service principal ID: 13d4b666-8d7f-4726-8dea-23b86a8bfc
simon_read@Azure:~$ echo "Service principal password: $SP_PASSWD"
Service principal password: b4430591-5662-4e55-93c8-9ea8121d82
```
Now that the Azure container registry is set up, we create the Kubernetes secret.
### Kubernetes Secret creation
Create the secret with `kubectl create secret`:
```
kubectl create secret docker-registry regcred --docker-server=imgregdocker.azurecr.io --docker-username=13d4b666-8d7f-4726-8dea-23b86a8bfc --docker-password=b4430591-5662-4e55-93c8-9ea8121d82 --docker-email=simon.read@au.fujistu.com
```
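We can confirm the secret now exists (without printing its contents) using:
```
kubectl get secret regcred
```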
### Command explained
- *regcred* will become the name of our secret and is what we reference in YAML files as the `imagePullSecrets` entry
- *--docker-server=* is the public DNS name of our Azure image registry
- *--docker-username=* is the Service principal ID from Azure
- *--docker-password=* is the Service principal password from Azure
- *--docker-email=* is the email address of the Azure account
## Gitlab
Connecting to the GitLab image repository requires two steps:
- create a GitLab personal access token in GitLab
- use that token to create our secret in Kubernetes
### Gitlab personal access token creation
- In GitLab, head to your account settings by clicking your portrait in the top right-hand corner > Settings
- On the left-hand side there is a menu; select the Personal Access Tokens option
- Select all the scopes by ticking the boxes and click create personal access token
- GitLab will then display your personal access token; be sure to save it somewhere, as this is the only time you will see it. After this you will not be able to view the token again and will need to create another one if you lose it
### Kubernetes Secret creation
To create our secret we need to execute a command that contains our GitLab credentials along with our personal access token:
```
kubectl create secret docker-registry gitcreds --docker-server=registry.gitlab.com/cnad --docker-username=simonread00 --docker-password=vWxriePxiLvjCcZF5
```
### Command explained
- *gitcreds* will become the name of our secret and is what we reference in YAML files as the `imagePullSecrets` entry
- *--docker-server=* is the address in GitLab where our project is contained
- *--docker-username=* is our GitLab username
- *--docker-password=* is the GitLab personal access token
# Services
A service is an abstract way to expose an application running on a set of Pods as a network service. It connects to our deployment and exposes the pods on the ports specified. There are different service types you can choose from, and depending on your choice Kubernetes will provision the specified type. We use `LoadBalancer` because, when hosted in a cloud environment, it takes its settings from that environment, whereas with `NodePort` you must declare these settings yourself.
Below is the YAML we use to create services in our project. The name and app selector are important, as they link the service to our deployment. We have specified our target port as 80 and we use the TCP protocol.
```
apiVersion: v1
# ...
spec:
# ...
  type: LoadBalancer
```
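As an illustration, a comparable Service manifest, assuming the `kubemoviewebfront` name and app label used elsewhere on this page, might look like this:
```
apiVersion: v1
kind: Service
metadata:
  name: kubemoviewebfront
spec:
  selector:
    app: kubemoviewebfront      # links the service to the pods created by our deployment
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
```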
We can implement this using `kubectl apply`:
```
kubectl apply -f service.yaml
```
We can view our services and see if they are running using `kubectl get`:
```
kubectl get services
```
This returns all services; if we are only concerned with one service, we can run:
```
kubectl get services kubemoviewebfront
```
If we want more information on the service, we can use `kubectl describe`:
```
kubectl describe services kubemoviewebfront
```
**Note**
In our project, services and deployments are included in the same YAML file. They are separated using three dashes (`---`) to indicate where one ends and the other begins. We do this to unclutter the repository and because deployments and services are very closely linked.