Constructs (authored by Simon Read)
| **Construct** |
|--|
| [Deployments](#deployments) |
| [Ingress](#ingress) |
| [Jobs](#jobs) |
| [Namespaces](#namespaces) |
| [Operators](#operators) |
| [Secrets](#secrets) |
| [Services](#services) |
# Deployments
A deployment describes a desired state, and the deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
Below is the YAML we use to create the deployment. The number of replicas we require is set here, but it can be changed later without having to redeploy the deployment. The image we declare is the GitLab image we create of the application. imagePullSecrets refers to our Kubernetes secret, which we discuss in this section {insert link}
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubemoviewebfront
  labels: # Labels that will be applied to this resource
    app: kubemoviewebfront
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubemoviewebfront
  template:
    metadata:
      labels:
        app: kubemoviewebfront
    spec:
      containers:
      - name: kubemoviewebfront
        image: registry.gitlab.com/cnad/kubemovieangular:latest
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
      imagePullSecrets:
      - name: gitcreds
```
We can implement this using kubectl apply
```
kubectl apply -f deploy.yaml
```
We can view our deployments and check they are running using kubectl get
```
kubectl get deploy
```
This returns all deployments, however if we are just concerned with one deployment we can perform
```
kubectl get deploy kubemoviewebfront
```
If we want to view the logs of the deployment's pods then we can use kubectl logs
```
kubectl logs deploy/kubemoviewebfront
```
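As noted above, the replica count can be changed without redeploying the YAML. One way to do this is kubectl scale (the target count of 3 here is just an example):
```
kubectl scale deploy kubemoviewebfront --replicas=3
```
Running kubectl get deploy afterwards will show the updated replica count.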
### Note
In our project, services and deployments are included in the same YAML file. They are separated in the file using three dashes (---) to indicate where one ends and the other begins. We do this to unclutter the repository and because deployments and services are very closely linked.
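As a sketch of what the note above describes, a combined file looks like this (abbreviated; the full manifests are shown elsewhere on this page):
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubemoviewebfront
# ...deployment spec as above...
---
apiVersion: v1
kind: Service
metadata:
  name: kubemoviewebfront
# ...service spec as in the Services section...
```
kubectl apply treats each document between the --- separators as a separate resource.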
# Ingress
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. An ingress connects to a service and then exposes that service externally on a public IP.
Annotations in an ingress let Kubernetes know about specific environment setup the ingress needs. Our web front end is an Angular project that is built into an Nginx container, so *nginx.ingress.kubernetes.io/rewrite-target* tells the Nginx environment where URI traffic must be redirected to. We also provide our ingress with a host, which is a DNS address; our DNS host name is an Azure virtual machine DNS. The backend refers to a Kubernetes service we have created that is exposed on port 80.
```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: devk8subuntu.eastus.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubemoviewebfront
          servicePort: 80
```
If we want to see our ingress we can use kubectl get
```
kubectl get ingress
```
If we want to view a particular ingress
```
kubectl get ingress frontend-ingress
```
If you want to inspect the details and events of the ingress, use kubectl describe (an ingress has no pods of its own, so kubectl logs does not apply to it directly)
```
kubectl describe ingress frontend-ingress
```
# Jobs
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job is complete. Unlike deployments, instead of a ready status we expect to see a completed or terminated status. Think of Jobs as existing to execute one command, and then they're finished.
In our project we found this useful for testing our controller's database connection. Rather than having a fully functional controller application, we built a test program that connected to our cluster, performed a simple selection and printed the results. The test program was best suited as a job because we did not want the pod trying to select and print continuously; we just needed it to run once.
Below is the YAML we use to create a Job. As in the deployment, we use the image from the GitLab pipeline and we have declared the Kubernetes secret. The main differences from a deployment are the restartPolicy being set to Never and the backoffLimit being set to 0 to ensure the job runs only once. We declare a command to execute a run of the program in *command: ["dotnet", "postgrestest.dll"]*, so the program is triggered upon creation, like hitting the run button in an IDE.
```
apiVersion: batch/v1
kind: Job
metadata:
  name: kubemoviejob
spec:
  template:
    metadata:
      labels:
        app: kubemoviejob
    spec:
      containers:
      - name: kubemoviejob
        image: registry.gitlab.com/cnad/postgresstest:latest
        command: ["dotnet", "postgrestest.dll"]
      restartPolicy: Never
      imagePullSecrets:
      - name: gitcreds
  backoffLimit: 0
```
We can implement this using kubectl apply
```
kubectl apply -f job.yaml
```
We can view our jobs and check they are running using kubectl get
```
kubectl get jobs
```
This returns all jobs, however if we are just concerned with one job we can perform
```
kubectl get job kubemoviejob
```
If we want to view the job's logs then we can use kubectl logs
```
kubectl logs job/kubemoviejob
```
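If we want to block until the job finishes, for example in a CI script, kubectl wait can be used (the 60 second timeout here is just an example):
```
kubectl wait --for=condition=complete job/kubemoviejob --timeout=60s
```
The command exits successfully once the job reaches the complete condition, or with an error if the timeout is hit first.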
# Namespaces
Kubernetes provides namespaces, which act as different environments. You can create and name a namespace using a manifest file. The manifest below (written in JSON, which kubectl accepts alongside YAML) will create a namespace named staging
```
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "staging",
    "labels": {
      "name": "staging"
    }
  }
}
```
Then we apply the manifest using kubectl
```
kubectl apply -f namespace-staging.yaml
```
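Alternatively, the same namespace can be created imperatively, without a manifest file at all:
```
kubectl create namespace staging
```
The manifest approach is preferred in our project because it keeps the namespace definition in the repository alongside the other resources.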
With the namespace created we need to provide a context for it, giving it the Kubernetes cluster name and the user
```
kubectl config set-context staging --namespace=staging --cluster=microk8s-cluster --user=admin
```
Once the context is set we then need to switch into our new environment
```
kubectl config use-context staging
```
Inside this context, everything we create will automatically be created in the staging namespace. If we want to specifically target a namespace when we deploy or exec a command in Kubernetes, we simply add the namespace into the command, like in this example
```
kubectl exec --namespace staging acid-minimal-cluster-0 su postgres bash -- ./createdb.sh
```
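The same targeting works for any kubectl command; for example, listing the pods in the staging namespace (the -n flag is shorthand for --namespace):
```
kubectl get pods --namespace staging
kubectl get pods -n staging
```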
We can use kubectl get to see all the namespaces that exist
```
kubectl get namespaces
```
If we wish to see the current context (and therefore the namespace we are working in) we use
```
kubectl config current-context
```
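If we want the namespace itself rather than the context name, we can query the current context's configuration; this jsonpath expression is one common way to do it:
```
kubectl config view --minify --output 'jsonpath={..namespace}'
```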
# Secrets
Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
We used secrets to create a secure connection between our environments in GitLab and Azure. Both were created in the same manner but with different details.
## Azure
Connecting to an Azure image registry requires two steps: creating the credentials in the Azure portal, then creating our Kubernetes secret.
### Azure portal (creating the credentials linking to the Azure image repository)
This is created by navigating in the Azure portal to your container registry (named imgregdocker) and using the Azure CLI with a bash-style command line (create an Azure storage unit if needed). Using the CLI, perform the following commands.
First, set up variables for the later commands
```
ACR_NAME=imgregdocker
SERVICE_PRINCIPAL_NAME=acr-service-principal-sr
```
Create the username and password
```
ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
SP_PASSWD=$(az ad sp create-for-rbac --name http://$SERVICE_PRINCIPAL_NAME --scopes $ACR_REGISTRY_ID --role owner --query password --output tsv)
SP_APP_ID=$(az ad sp show --id http://$SERVICE_PRINCIPAL_NAME --query appId --output tsv)
```
Once created, echo them back so you can copy them, as we need to add them to the secret
```
simon_read@Azure:~$ echo "Service principal ID: $SP_APP_ID"
Service principal ID: 13d4b666-8d7f-4726-8dea-23b88f6a8bfc
simon_read@Azure:~$ echo "Service principal password: $SP_PASSWD"
Service principal password: b4430591-5662-4e55-93c8-9ea828121d82
```
Now that the Azure container registry is set up, we create the Kubernetes secret.
### Kubernetes Secret creation
Create the kubectl secret
```
kubectl create secret docker-registry regcred --docker-server=imgregdocker.azurecr.io --docker-username=13d4b666-8d7f-4726-8dea-23b88f6a8bfc --docker-password=b4430591-5662-4e55-93c8-9ea828121d82 --docker-email=simon.read@au.fujistu.com
```
### command explained
- *regcred* will become the name of our secret, and it is what we reference in YAMLs as the imagePullSecrets entry
- *--docker-server=* is the public DNS name of our Azure Image Registry
- *--docker-username=* is the Service principal ID from Azure
- *--docker-password=* is the Service principal password from Azure
- *--docker-email=* is the email address of the Azure account
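To make use of the regcred secret, it is referenced in a pod or deployment spec via imagePullSecrets, as in this fragment (the container name and image here are placeholders):
```
spec:
  containers:
  - name: my-app
    image: imgregdocker.azurecr.io/my-app:latest
  imagePullSecrets:
  - name: regcred
```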
## Gitlab
Connecting to the GitLab image repository requires two steps: creating a personal access token in GitLab, and then using that token to create our secret in Kubernetes.
### Gitlab personal access token creation
In GitLab, head to your account settings by clicking your portrait in the top right-hand corner > Settings.
On the left-hand side there is a menu; select the personal access tokens option.
Select all the scopes by ticking the boxes and click create personal access token.
GitLab will then display your personal access token. Be sure to save this somewhere, as this is the only time you will see it; after this you will not be able to view the token again, and you will need to create another one if you lose it.
### Kubernetes Secret creation
To create our secret we need to execute a command that contains our GitLab credentials with our personal access token
```
kubectl create secret docker-registry gitcreds --docker-server=registry.gitlab.com/cnad --docker-username=simonread00 --docker-password=vWxriePxiSwsLvjCcZF5
```
### command explained
- *gitcreds* will become the name of our secret, and it is what we reference in YAMLs as the imagePullSecrets entry
- *--docker-server=* is the address in Gitlab our project is contained in
- *--docker-username=* is the Gitlab username
- *--docker-password=* is the Gitlab personal access token
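We can confirm the secret was created, and inspect its (base64-encoded) contents, with kubectl get:
```
kubectl get secret gitcreds
kubectl get secret gitcreds --output=yaml
```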
# Services
A service is an abstract way to expose an application running on a set of Pods as a network service. It connects to our deployment and exposes it on the ports specified. There are different service types you can choose from, and depending on your choice Kubernetes will provision the specified type. We use LoadBalancer because, when hosted in a cloud environment, it uses the settings from that environment, whereas with NodePort you must declare these settings yourself.
Below is the YAML we use to create services in our project. The name and app selector are important, as they help link to our deployment. We have specified our target port as 80 and we use the TCP protocol.
```
apiVersion: v1
kind: Service
metadata:
  name: kubemoviewebfront
  labels:
    app: kubemoviewebfront
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: kubemoviewebfront
  type: LoadBalancer
```
We can implement this using kubectl apply
```
kubectl apply -f service.yaml
```
We can view our services and check they are running using kubectl get
```
kubectl get services
```
This returns all services, however if we are just concerned with one service we can perform
```
kubectl get services kubemoviewebfront
```
If we want to view the logs of the pods behind the service then we can use kubectl logs
```
kubectl logs service/kubemoviewebfront
```
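For comparison, a NodePort version of the same service would need the external port declared explicitly. A sketch of the spec (the nodePort value 30080 is just an example; it must fall in the cluster's NodePort range, 30000-32767 by default):
```
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30080
  selector:
    app: kubemoviewebfront
  type: NodePort
```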