[v2 Pre-release] feat: Canary Ingress [Take 2]
This MR recreates the previous MR, following @hfyngvason's suggestion.
Description
For the ~"Release::P1" %13.4 issue, we need to introduce Canary Ingress into Auto Deploy architecture. This special type of ingress is used for the advanced traffic routing.
This MR introduces the following changes to achieve the goal:
- Each namespace creates one Ingress for the stable track. (Unchanged.)
- Each namespace creates one Canary Ingress for the canary track.
- Each track has a dedicated service. Previously, one service was shared across tracks, and traffic routing was controlled by changing the replica count of the deployments. This mechanism is replaced by the Canary Ingress's weight control (see the sketch below the description). For the same reason, the `percentage` argument is removed.
- The `rollout` track is deleted, as it's covered by the `canary` track.
- For more details, please see the following diagrams.
This new auto-deploy-image requires a change on the template side. Please check gitlab-org/gitlab!39438 (merged) for the change.
The previous chart is not compatible with this chart, hence it's marked as a BREAKING CHANGE in the commit body.
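For illustration only: the canary Ingress that the chart renders differs from the stable Ingress mainly by the NGINX canary annotations (they are visible in the Manual QA section below). A minimal hand-written equivalent could look like the sketch below, assuming the NGINX ingress controller; the host, namespace, and service names are placeholders, not actual chart output.

```shell
# Hypothetical sketch of a canary Ingress equivalent to what the chart renders.
# Assumes the NGINX ingress controller; host, namespace variable, and service
# names are placeholders, not actual chart output.
kubectl apply -n "$KUBE_NAMESPACE" -f - <<'EOF'
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: production-canary-auto-deploy
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"            # mark this as a canary Ingress
    nginx.ingress.kubernetes.io/canary-weight: "50"       # route ~50% of traffic to the canary track
    nginx.ingress.kubernetes.io/canary-by-header: canary  # a 'canary: always' header forces the canary track
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: production-canary  # the canary track's dedicated Service
              servicePort: 5000
EOF
```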
V1 resource architecture

```mermaid
graph TD;
subgraph gl-managed-app
Z[Nginx Ingress]
end
Z[Nginx Ingress] --> A(Ingress);
Z[Nginx Ingress] --> B(Ingress);
subgraph stg namespace
B[Ingress] --> H(...);
end
subgraph prd namespace
A[Ingress] --> D(Service);
D[Service] --> E(Deployment:Pods:app:stable);
D[Service] --> F(Deployment:Pods:app:canary);
D[Service] --> I(Deployment:Pods:app:rollout);
E(Deployment:Pods:app:stable)---id1[(Pods:Postgres)]
F(Deployment:Pods:app:canary)---id1[(Pods:Postgres)]
I(Deployment:Pods:app:rollout)---id1[(Pods:Postgres)]
end
```

Proposal: V2 resource architecture

```mermaid
graph TD;
subgraph gl-managed-app
Z[Nginx Ingress]
end
Z[Nginx Ingress] --> A(Ingress);
Z[Nginx Ingress] --> B(Ingress);
Z[Nginx Ingress] --> |If canary=true|J(Canary Ingress);
subgraph stg namespace
B[Ingress] --> H(...);
end
subgraph prd namespace
subgraph stable track
A[Ingress] --> D[Service];
D[Service] --> E(Deployment:Pods:app:stable);
end
subgraph canary track
J(Canary Ingress) --> K[Service]
K[Service] --> F(Deployment:Pods:app:canary);
end
E(Deployment:Pods:app:stable)---id1[(Pods:Postgres)]
F(Deployment:Pods:app:canary)---id1[(Pods:Postgres)]
end
```

Related
Manual QA
Test Conditions
- Date: Sep 10th, 2020
- Auto DevOps project: https://gitlab.com/dosuken123/new-sentimentality
- .gitlab-ci.yml: https://gitlab.com/dosuken123/new-sentimentality/-/blob/master/.gitlab-ci.yml
Subject: Auto Deploy pipeline (auto-deploy-image)
Context: When no deployments exist in the production environment
Context: When deploying the stable track to the production environment
It: creates the production and production-postgresql releases =>

```
shinya@shinya-MS-7A34:~/workspace/auto-deploy-image/src/bin$ helm ls -n new-sentimentality-19561312-production
NAME                 	NAMESPACE                             	REVISION	UPDATED                                	STATUS  	CHART                       	APP VERSION
production           	new-sentimentality-19561312-production	1       	2020-09-10 07:46:00.306473602 +0000 UTC	deployed	auto-deploy-app-2.0.0-beta.2	           
production-postgresql	new-sentimentality-19561312-production	1       	2020-09-10 07:45:18.953107725 +0000 UTC	deployed	postgresql-8.2.1            	11.6.0
```

It: lets the user access the production environment =>

```
shinya@shinya-MS-7A34:~/workspace/auto-deploy-image/src/bin$ curl -s --insecure https://dosuken123-new-sentimentality.35.185.182.68.nip.io/ | grep '<p>'
    <p>Sep 10!</p>
```

It: shows two stable pods in the deploy board =>
Context: When deploying the canary track to the production environment
It: creates the production-canary release =>

```
shinya@shinya-MS-7A34:~/workspace/auto-deploy-image/src/bin$ helm ls -n new-sentimentality-19561312-production
NAME                 	NAMESPACE                             	REVISION	UPDATED                                	STATUS  	CHART                       	APP VERSION
production           	new-sentimentality-19561312-production	1       	2020-09-10 07:46:00.306473602 +0000 UTC	deployed	auto-deploy-app-2.0.0-beta.2	           
production-canary    	new-sentimentality-19561312-production	2       	2020-09-10 07:53:30.200913996 +0000 UTC	deployed	auto-deploy-app-2.0.0-beta.2	           
production-postgresql	new-sentimentality-19561312-production	2       	2020-09-10 07:52:36.535055204 +0000 UTC	deployed	postgresql-8.2.1            	11.6.0
```

It: sets annotations for the canary ingress =>

```
shinya@shinya-MS-7A34:~/workspace/auto-deploy-image/src/bin$ k describe ingress production-canary -n new-sentimentality-19561312-production
Name:             production-canary-auto-deploy
Namespace:        new-sentimentality-19561312-production
Address:          35.240.221.142
Default backend:  default-http-backend:80 (10.12.1.5:8080)
TLS:
  production-canary-auto-deploy-tls terminates le-19561312.35.185.182.68.nip.io,dosuken123-new-sentimentality.35.185.182.68.nip.io
Rules:
  Host                                                Path  Backends
  ----                                                ----  --------
  dosuken123-new-sentimentality.35.185.182.68.nip.io  
                                                      /   production-canary:5000 (10.12.0.46:5000,10.12.1.41:5000)
  le-19561312.35.185.182.68.nip.io                    
                                                      /   production-canary:5000 (10.12.0.46:5000,10.12.1.41:5000)
Annotations:                                          kubernetes.io/ingress.class: nginx
                                                      kubernetes.io/tls-acme: true
                                                      meta.helm.sh/release-name: production-canary
                                                      meta.helm.sh/release-namespace: new-sentimentality-19561312-production
                                                      nginx.ingress.kubernetes.io/canary: true
                                                      nginx.ingress.kubernetes.io/canary-by-header: canary
                                                      nginx.ingress.kubernetes.io/canary-weight: 50
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  CREATE  78s                nginx-ingress-controller  Ingress new-sentimentality-19561312-production/production-canary-auto-deploy
  Normal  UPDATE  39s (x2 over 50s)  nginx-ingress-controller  Ingress new-sentimentality-19561312-production/production-canary-auto-deploy
```

It: lets the user access the stable track with roughly 50% probability via canary weight control =>

```
shinya@shinya-MS-7A34:~/workspace/auto-deploy-image/src/bin$ for i in {1..100}; do curl -s --insecure https://dosuken123-new-sentimentality.35.185.182.68.nip.io/ | grep '<p>Sep 10!</p>'; done | wc -l
45
```

It: lets the user access the canary track with roughly 50% probability via canary weight control =>

```
shinya@shinya-MS-7A34:~/workspace/auto-deploy-image/src/bin$ for i in {1..100}; do curl -s --insecure https://dosuken123-new-sentimentality.35.185.182.68.nip.io/ | grep '<p>Sep 11!</p>'; done | wc -l
54
```

It: lets the user access the canary track 100% of the time when the canary header is specified =>

```
shinya@shinya-MS-7A34:~/workspace/auto-deploy-image/src/bin$ for i in {1..100}; do curl -H 'canary: always' -s --insecure https://dosuken123-new-sentimentality.35.185.182.68.nip.io/ | grep '<p>Sep 11!</p>'; done | wc -l
100
```

It: shows two canary pods in the deploy board =>
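Side note on the header test above, not part of the QA run: per the NGINX ingress controller's canary-by-header semantics, the header value never is the counterpart of always and forces the stable track:

```shell
# Forces the stable track regardless of the canary weight
# (NGINX canary-by-header semantics: 'always' -> canary, 'never' -> stable).
curl -H 'canary: never' -s --insecure https://dosuken123-new-sentimentality.35.185.182.68.nip.io/ | grep '<p>'
```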
Context: When promoting the canary track to the stable track
It: deletes the production-canary release =>

```
shinya@shinya-MS-7A34:~/workspace/auto-deploy-image/src/bin$ helm ls -n new-sentimentality-19561312-production
NAME                 	NAMESPACE                             	REVISION	UPDATED                                	STATUS  	CHART                       	APP VERSION
production           	new-sentimentality-19561312-production	2       	2020-09-10 07:58:42.468964104 +0000 UTC	deployed	auto-deploy-app-2.0.0-beta.2	           
production-postgresql	new-sentimentality-19561312-production	3       	2020-09-10 07:58:26.831192766 +0000 UTC	deployed	postgresql-8.2.1            	11.6.0
```

It: lets the user access the stable track 100% of the time =>

```
shinya@shinya-MS-7A34:~/workspace/auto-deploy-image/src/bin$ for i in {1..100}; do curl -s --insecure https://dosuken123-new-sentimentality.35.185.182.68.nip.io/ | grep '<p>Sep 11!</p>'; done | wc -l
100
```

It: shows two stable pods in the deploy board =>
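For context, not part of the QA output: promotion re-deploys the new version on the stable track and removes the canary release. With Helm 3, that cleanup step is conceptually equivalent to:

```shell
# Conceptual equivalent of the canary-release cleanup performed during promotion (Helm 3).
helm uninstall production-canary -n new-sentimentality-19561312-production
```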
Context: When the stable track already exists in production and was deployed with the v1 chart
Context: When deploying the stable track to the production environment
It: fails with a warning message asking the user to upgrade their deployment to the v2 chart =>
It failed because of a Helm 3 compatibility issue. This will be handled by helm-2to3, hence it's out of scope here. Without that issue, it should show a major version mismatch warning:

```
$ auto-deploy deploy
Error: release: not found
Release "production-postgresql" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Secret "production-postgresql" in namespace "new-sentimentality-19561312-production" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "production-postgresql"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "new-sentimentality-19561312-production"
```

https://gitlab.com/dosuken123/new-sentimentality/-/jobs/720751582
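As a sketch of the eventual fix, assuming the helm-2to3 plugin: the Helm 2 release metadata could be converted before re-deploying, which would avoid the ownership-metadata error above:

```shell
# Sketch: convert Helm v2 release metadata to Helm v3 before re-deploying.
# Assumes the helm-2to3 plugin (https://github.com/helm/helm-2to3) is available.
helm plugin install https://github.com/helm/helm-2to3
helm 2to3 convert production
helm 2to3 convert production-postgresql
```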


