Commit 0f7d20f5 authored by Justin Gauthier

Merge branch 'nginx-ingress' into 'master'

Nginx ingress

Moves from Traefik to Nginx-Ingress with Cert-Manager and starts adding support for OIDC.

See merge request !1
parents 94e44e93 620883c7
......@@ -17,7 +17,12 @@ Kubernetes Cluster configuration and services documentation, with example source
* [Install NFS-Client](docs/configuration/configuration.md#install-nfs-client)
* [Install MetalLB](docs/configuration/configuration.md#install-metallb)
* [Install Consul](docs/configuration/configuration.md#install-consul)
* ~~[Install Traefik](docs/configuration/configuration.md#install-traefik)~~
* [Install Nginx-Ingress](docs/configuration/configuration.md#install-nginx-ingress)
* [Install Cert-Manager](docs/configuration/configuration.md#install-cert-manager)
* [Create Production Issuer](docs/configuration/configuration.md#create-production-issuer)
* [Create Cloudflare API Key Secret](docs/configuration/configuration.md#create-cloudflare-api-key-secret)
* [Create Default Certificate](docs/configuration/configuration.md#create-default-certificate)
* [Create ingress for Consul and Dashboard](docs/configuration/configuration.md#create-ingress-for-consul-and-dashboard)
* [Next Steps](docs/configuration/configuration.md#next-steps)
......@@ -31,6 +36,7 @@ Kubernetes Cluster configuration and services documentation, with example source
* [Atlassian Confluence](docs/services/services.md#atlassian-confluence)
* [Nextcloud](docs/services/services.md#nextcloud)
* [Plex](docs/services/services.md#plex)
* [OAuth2-Proxy](docs/services/services.md#oauth2-proxy)
* [Next Steps](docs/services/services.md#next-steps)
* [Current Services](https://gitlab.com/just.insane/kubernetes/blob/master/docs/services.md#current-services-in-lab)
* [New Services](docs/services/services.md#new-services)
\ No newline at end of file
......@@ -6,13 +6,17 @@
1. Download Helm
* copy [rbac-config.yaml](../../src/configuration/rbac-config.yaml) into the working directory (a minimal example is sketched after this list)
2. Run `kubectl create -f rbac-config.yaml` to create a `tiller` service account with the `cluster-admin` role
3. Run `helm init --service-account tiller` to install Helm (Tiller) using the above service account
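For reference, a minimal `rbac-config.yaml` typically looks like the following; this is the standard Tiller service-account setup, and the file under `src/configuration/` may differ in details:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```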
## Install NFS-Client
1. Install NFS-Client to use an NFS share for persistent storage (see the example claim after this list)
* `helm install --name nfs-client stable/nfs-client-provisioner --set nfs.server=10.0.40.5 --set nfs.path=/mnt/Shared/kubernetes`
2. Get the status of nfs-client with `helm status nfs-client`
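Once the provisioner is running, workloads consume the NFS share through the StorageClass it creates. A minimal example claim, assuming the chart's default StorageClass name of `nfs-client` (adjust if it was overridden at install time):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  # "nfs-client" is the StorageClass the stable/nfs-client-provisioner
  # chart creates by default
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```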
## Install MetalLB
......@@ -27,42 +31,119 @@
## Install Consul
1. Clone the consul-helm repository found [here](https://github.com/hashicorp/consul-helm)
* `git clone https://github.com/hashicorp/consul-helm.git`
2. Change to the consul-helm directory.
* `cd consul-helm`
3. Check out the latest tag (we will be using v0.5.0)
* `git checkout v0.5.0`
4. Make any changes to [values.yaml](../../src/configuration/consul/values.yaml) as needed
* Currently set up to expose the UI on a NodePort; change this to ClusterIP when using an ingress (see the snippet after this list)
5. Perform a dry-run of the Consul installation
* `helm install --dry-run ./`
6. Install Consul by running
* `helm install -n consul ./`
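The UI exposure mentioned in step 4 is controlled by the `ui` block of the chart values; a rough sketch against consul-helm v0.5.0 (exact keys may differ slightly from the values.yaml in this repository):

```yaml
ui:
  enabled: true
  service:
    enabled: true
    # NodePort exposes the UI directly on the nodes; switch this to
    # ClusterIP once the Nginx ingress fronts consul.corp.justin-tech.com
    type: NodePort
```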
## ~~Install Traefik~~
NOTE: Deprecated in favour of [Nginx-Ingress](#install-nginx-ingress)
NOTE: Does not currently use Consul as a provider
1. Make any changes to [values.yaml](../../src/configuration/traefik/values.yaml)
2. Install Traefik
* `helm install stable/traefik --name traefik --namespace kube-system --values values.yaml`
Notes:
* Traefik helm chart does not support resolvers
* Traefik (lego) requires SOA records to issue domains
* The DNS servers have to point to an external resolver (e.g. 1.1.1.1) for *.corp.justin-tech.com wildcards (see [coredns configmap](../../src/configuration/coredns))
## Install Nginx-Ingress
1. Review documentation for [nginx-ingress](https://github.com/helm/charts/tree/master/stable/nginx-ingress)
2. Make any changes to [values.yaml](../../src/configuration/nginx-ingress/values.yaml)
3. Deploy the Helm Chart
* `helm install --name nginx-ingress -f values.yaml stable/nginx-ingress --namespace nginx-ingress`
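4. Optionally confirm the controller is running and its `LoadBalancer` service has picked up an address from MetalLB
* `kubectl get pods,svc -n nginx-ingress`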
## Install Cert-Manager
1. Review documentation for [cert-manager](https://github.com/jetstack/cert-manager/tree/master/deploy/charts/cert-manager)
2. Install the CustomResourceDefinition resources separately
* `kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml`
3. Create the namespace for cert-manager
* `kubectl create namespace cert-manager`
4. Label the cert-manager namespace to disable resource validation
* `kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true`
5. Add the Jetstack Helm repository
* `helm repo add jetstack https://charts.jetstack.io`
6. Update your local Helm chart repository cache
* `helm repo update`
7. Make any changes to [values.yaml](../../src/configuration/cert-manager/values.yaml)
8. Deploy the Helm Chart
* `helm install --name cert-manager --namespace cert-manager -f values.yaml jetstack/cert-manager`
9. [Verify the installation](https://docs.cert-manager.io/en/latest/getting-started/install.html#verifying-the-installation)
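* For example, apply the self-signed test Issuer/Certificate included with this change and check that the certificate becomes `Ready` with `kubectl describe certificate selfsigned-cert -n cert-manager-test`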
### Create Production Issuer
* ~~NOTE: Helm-Vault does not currently support the nested dictionary found in letsencrypt.yaml~~ This has been resolved in Helm-Vault v0.1.1
1. Apply [letsencrypt.yaml](../../src/configuration/cert-manager/letsencrypt.yaml)
* `kubectl apply -f letsencrypt.yaml`
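2. Optionally confirm the issuer registered with Let's Encrypt
* `kubectl describe clusterissuer letsencrypt-production`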
### Create Cloudflare API Key Secret
1. Apply [cloudflare-secret.yaml](../../src/configuration/cert-manager/cloudflare-secret.yaml)
* `kubectl apply -f cloudflare-secret.yaml`
### Create Default Certificate
1. Apply [certificate.yaml](../../src/configuration/cert-manager/certificate.yaml)
* `kubectl apply -f certificate.yaml`
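2. Issuance via the DNS-01 challenge can take a few minutes; check progress with
* `kubectl describe certificate corp-justin-tech-com -n nginx-ingress`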
### Create ingress for Consul and Dashboard
1. Apply ingress.yaml files ([Consul](../../src/configuration/consul/ingress.yaml) and [Dashboard](../../src/configuration/dashboard/ingress.yaml))
* `kubectl apply -f ingress.yaml`
2. The default locations are at [https://consul.corp.justin-tech.com](https://consul.corp.justin-tech.com) and [https://kubernetes.corp.justin-tech.com](https://kubernetes.corp.justin-tech.com)
## Next Steps
* ~~Look at setting up OAuth (SAML/OIDC) authentication for Kubernetes services (I suspect this may require Nginx Ingress)~~ (See #39 - implemented in [Oauth2-Proxy](../../src/services/oauth2-proxy))
* ~~Setup MetalLB for load balancing (would allow Traefik/Nginx to run on standard ports)~~ (See [Install MetalLB](#install-metallb))
* Configure Consul Connect to secure communications between services
\ No newline at end of file
......@@ -25,7 +25,7 @@ or the [upstream](https://github.com/prabhatsharma/apache-guacamole-helm-chart)
3. Create an [ingress](../../src/services/keycloak/ingress.yaml)
* `kubectl apply -f ingress.yaml`
4. Get the default password for the `keycloak` user.
......@@ -187,6 +187,10 @@ or the [upstream](https://github.com/prabhatsharma/apache-guacamole-helm-chart)
7. Connect to [http://localhost:5000/web](http://localhost:5000/web) to access and set up Plex
## OAuth2-Proxy
* NOTE: Documentation not yet completed
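In the meantime, the usual pattern is to protect an ingress by pointing Nginx-Ingress's external-auth annotations at the proxy; a minimal sketch, assuming a hypothetical `oauth2.corp.justin-tech.com` hostname for the proxy (the actual manifests live in [oauth2-proxy](../../src/services/oauth2-proxy)):

```yaml
metadata:
  annotations:
    # ask oauth2-proxy whether the request is already authenticated
    nginx.ingress.kubernetes.io/auth-url: "https://oauth2.corp.justin-tech.com/oauth2/auth"
    # where to send unauthenticated users to sign in
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.corp.justin-tech.com/oauth2/start?rd=$escaped_request_uri"
```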
## Next Steps
### Current Services (in lab)
......
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: corp-justin-tech-com
namespace: nginx-ingress
spec:
secretName: corp-justin-tech-com
issuerRef:
name: letsencrypt-production
kind: ClusterIssuer
commonName: 'justin-tech.com'
dnsNames:
- justin-tech.com
- "*.justin-tech.com"
- "*.corp.justin-tech.com"
keySize: 4096
keyAlgorithm: rsa
organization:
- "Justin-Tech"
acme:
config:
- dns01:
provider: cf-dns
domains:
- justin-tech.com
- "*.justin-tech.com"
- "*.corp.justin-tech.com"
\ No newline at end of file
apiVersion: v1
kind: Secret
metadata:
name: cloudflare-api-key
namespace: cert-manager
type: Opaque
data:
  api-key: changeme # must be base64 encoded; "echo -n <API_KEY> | base64" works on macOS and Linux
\ No newline at end of file
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt-production
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: changeme
privateKeySecretRef:
name: letsencrypt-production
dns01:
providers:
- name: cf-dns
cloudflare:
email: changeme
apiKeySecretRef:
name: cloudflare-api-key
key: api-key
\ No newline at end of file
apiVersion: v1
kind: Namespace
metadata:
name: cert-manager-test
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
name: test-selfsigned
namespace: cert-manager-test
spec:
selfSigned: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: selfsigned-cert
namespace: cert-manager-test
spec:
commonName: example.com
secretName: selfsigned-cert-tls
issuerRef:
name: test-selfsigned
# Default values for cert-manager.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
global:
## Reference to one or more secrets to be used when pulling images
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
imagePullSecrets: []
# - name: "image-pull-secret"
# Optional priority class to be used for the cert-manager pods
priorityClassName: ""
rbac:
create: true
leaderElection:
# Override the namespace used to store the ConfigMap for leader election
namespace: ""
replicaCount: 1
strategy: {}
# type: RollingUpdate
# rollingUpdate:
# maxSurge: 0
# maxUnavailable: 1
image:
repository: quay.io/jetstack/cert-manager-controller
tag: v0.7.0
pullPolicy: IfNotPresent
# Override the namespace used to store DNS provider credentials etc. for ClusterIssuer
# resources. By default, the same namespace as cert-manager is deployed within is
# used. This namespace will not be automatically created by the Helm chart.
clusterResourceNamespace: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
# Optional additional arguments
extraArgs: []
# Use this flag to set a namespace that cert-manager will use to store
# supporting resources required for each ClusterIssuer (default is kube-system)
# - --cluster-resource-namespace=kube-system
extraEnv: []
# - name: SOME_VAR
# value: 'some value'
resources: {}
# requests:
# cpu: 10m
# memory: 32Mi
# Pod Security Context
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
enabled: false
fsGroup: 1001
runAsUser: 1001
podAnnotations: {}
podLabels: {}
# Optional DNS settings, useful if you have a public and private DNS zone for
# the same domain on Route 53. What follows is an example of ensuring
# cert-manager can access an ingress or DNS TXT records at all times.
# NOTE: This requires Kubernetes 1.10 or `CustomPodDNS` feature gate enabled for
# the cluster to work.
podDnsPolicy: "None"
podDnsConfig:
nameservers:
- 1.1.1.1
- 8.8.8.8
nodeSelector: {}
ingressShim:
defaultIssuerName: "letsencrypt-production"
defaultIssuerKind: "ClusterIssuer"
# defaultACMEChallengeType: ""
# defaultACMEDNS01ChallengeProvider: ""
webhook:
enabled: true
cainjector:
enabled: true
# Use these variables to configure the HTTP_PROXY environment variables
# http_proxy: "http://proxy:8080"
# https_proxy: "https://proxy:8080"
# no_proxy: 127.0.0.1,localhost
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
# for example:
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: foo.bar.com/role
# operator: In
# values:
# - master
affinity: {}
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#toleration-v1-core
# for example:
# tolerations:
# - key: foo.bar.com/role
# operator: Equal
# value: master
# effect: NoSchedule
tolerations: []
\ No newline at end of file
......@@ -5,7 +5,15 @@ metadata:
app: consul-ui
name: consul-ui
namespace: default
annotations:
ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: "letsencrypt-production"
spec:
tls:
- hosts:
- consul.corp.justin-tech.com
rules:
- host: consul.corp.justin-tech.com
http:
......@@ -14,4 +22,4 @@ spec:
serviceName: consul-ui
servicePort: 80
status:
loadBalancer: {}
\ No newline at end of file
......@@ -5,7 +5,16 @@ metadata:
app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
annotations:
ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: "letsencrypt-production"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
tls:
- hosts:
- kubernetes.corp.justin-tech.com
rules:
- host: kubernetes.corp.justin-tech.com
http:
......
## nginx configuration
## Ref: https://github.com/kubernetes/ingress/blob/master/controllers/nginx/configuration.md
##
controller:
name: controller
image:
repository: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
tag: "0.23.0"
pullPolicy: IfNotPresent
# www-data -> uid 33
runAsUser: 33
config: {}
# Will add custom header to Nginx https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/customization/custom-headers
headers: {}
# Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),
# since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920
# is merged
hostNetwork: false
# Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'.
# By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller
# to keep resolving names inside the k8s network, use ClusterFirstWithHostNet.
dnsPolicy: ClusterFirst
## Use host ports 80 and 443
daemonset:
useHostPort: false
hostPorts:
http: 80
https: 443
## healthz endpoint
stats: 18080
## Required only if defaultBackend.enabled = false
## Must be <namespace>/<service_name>
##
defaultBackendService: ""
## Election ID to use for status update
##
electionID: ingress-controller-leader
## Name of the ingress class to route through this controller
##
ingressClass: nginx
# labels to add to the pod container metadata
podLabels: {}
# key: value
## Allows customization of the external service
## the ingress will be bound to via DNS
publishService:
enabled: false
## Allows overriding of the publish service to bind to
## Must be <namespace>/<service_name>
##
pathOverride: ""
## Limit the scope of the controller
##
scope:
enabled: false
namespace: "" # defaults to .Release.Namespace
## Additional command line arguments to pass to nginx-ingress-controller
## E.g. to specify the default SSL certificate you can use
## extraArgs:
## default-ssl-certificate: "<namespace>/<secret_name>"
extraArgs:
default-ssl-certificate: "nginx-ingress/corp-justin-tech-com"
## Additional environment variables to set
extraEnvs: []
# extraEnvs:
# - name: FOO
# valueFrom:
# secretKeyRef:
# key: FOO
# name: secret-resource
## DaemonSet or Deployment
##
kind: Deployment
# The update strategy to apply to the Deployment or DaemonSet
##
updateStrategy: {}
# rollingUpdate:
# maxUnavailable: 1
# type: RollingUpdate
# minReadySeconds to avoid killing pods before we are ready
##
minReadySeconds: 0
## Node tolerations for server scheduling to nodes with taints
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
##
tolerations: []
# - key: "key"
# operator: "Equal|Exists"
# value: "value"
# effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"
affinity: {}
## Node labels for controller pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Liveness and readiness probe values
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
port: 10254
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
port: 10254
## Annotations to be added to controller pods
##
podAnnotations: {}
replicaCount: 1
minAvailable: 1
resources: {}
# limits:
# cpu: 100m
# memory: 64Mi
# requests:
# cpu: 100m
# memory: 64Mi
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 11
targetCPUUtilizationPercentage: 50
targetMemoryUtilizationPercentage: 50
## Override NGINX template
customTemplate:
configMapName: ""
configMapKey: ""
service:
annotations: {}
labels: {}
clusterIP: ""
## List of IP addresses at which the controller services are available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
enableHttp: true
enableHttps: true
## Set external traffic policy to: "Local" to preserve source IP on
## providers supporting it
## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
externalTrafficPolicy: ""
healthCheckNodePort: 0
targetPorts:
http: http
https: https
type: LoadBalancer
# type: NodePort
# nodePorts:
# http: 32080
# https: 32443
nodePorts:
http: ""
https: ""
extraContainers: []
## Additional containers to be added to the controller pod.
## See https://github.com/lemonldap-ng-controller/lemonldap-ng-controller as example.
# - name: my-sidecar
# image: nginx:latest
# - name: lemonldap-ng-controller
# image: lemonldapng/lemonldap-ng-controller:0.2.0
# args:
# - /lemonldap-ng-controller
# - --alsologtostderr
# - --configmap=$(POD_NAMESPACE)/lemonldap-ng-configuration
# env:
# - name: POD_NAME
# valueFrom:
# fieldRef:
# fieldPath: metadata.name
# - name: POD_NAMESPACE
# valueFrom:
# fieldRef:
# fieldPath: metadata.namespace
# volumeMounts:
# - name: copy-portal-skins
# mountPath: /srv/var/lib/lemonldap-ng/portal/skins
extraVolumeMounts: []
## Additional volumeMounts to the controller main container.
# - name: copy-portal-skins
# mountPath: /var/lib/lemonldap-ng/portal/skins
extraVolumes: []
## Additional volumes to the controller pod.
# - name: copy-portal-skins
# emptyDir: {}
extraInitContainers: []
## Containers, which are run before the app containers are started.
# - name: init-myservice
# image: busybox
# command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
stats:
enabled: true
service:
annotations: {}
clusterIP: ""
## List of IP addresses at which the stats service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
servicePort: 18080
type: ClusterIP
## If controller.stats.enabled = true and controller.metrics.enabled = true, Prometheus metrics will be exported
##
metrics:
enabled: true
service:
annotations: {}
# prometheus.io/scrape: "true"
# prometheus.io/port: "10254"
clusterIP: ""
## List of IP addresses at which the stats-exporter service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
servicePort: 9913
type: ClusterIP
serviceMonitor:
enabled: false
additionalLabels: {}
namespace: ""
lifecycle: {}
priorityClassName: ""
## Rollback limit
##
revisionHistoryLimit: 10
## Default 404 backend
##
defaultBackend:
## If false, controller.defaultBackendService must be provided
##
enabled: true
name: default-backend
image:
repositor