Unverified Commit 2168dda3 authored by Justin Gauthier

update with test objects

parent 305e0aac
@@ -3,4 +3,9 @@ admin.conf
.DS_Store
*.dec
kubectl
working/
\ No newline at end of file
working/
*.pem
*.key
istio-*
consul-helm/
src/configuration/bank-vaults/vault-cluster.yaml
@@ -51,3 +51,7 @@ Kubernetes Cluster configuration and services documentation, with example source
* [Next Steps](docs/services/README.md#next-steps)
* [Current Services](docs/services/README.md#current-services-in-lab)
* [New Services](docs/services/README.md#new-services)
[redeployment](docs/configuration/redeployment.md) contains ideas for redeploying Kubernetes. This is being worked on in branch v2.
@@ -21,6 +21,7 @@
- [Create ingress for Consul and Dashboard](#create-ingress-for-consul-and-dashboard)
- [Install Prometheus-Operator](#install-prometheus-operator)
- [Install Weave-Scope](#install-weave-scope)
- [Bank-Vaults](#bank-vaults)
- [Next Steps](#next-steps)
<!-- /code_chunk_output -->
@@ -193,6 +194,37 @@
* `helm install weave-scope stable/weave-scope -f values.yaml --namespace monitoring`
## Bank-Vaults
[Walkthrough](https://medium.com/@jackalus/deploying-vault-with-etcd-backend-in-kubernetes-d89f9a0068bf)
1. Deploy the storage backend; we are using Consul
2. Add the BanzaiCloud Helm repo
3. Install the operator
4. `kubectl apply -f https://raw.githubusercontent.com/banzaicloud/bank-vaults/master/operator/deploy/rbac.yaml`
5. Apply `vault-cluster.yaml`
1. Once set up, configure LDAP/OIDC users, etc.
6. Apply `clusterrole.yaml`
7. Follow the steps under "Let's get in" in the [walkthrough](https://medium.com/@jackalus/deploying-vault-with-etcd-backend-in-kubernetes-d89f9a0068bf)
8. Deploy the webhook
1. `kubectl create ns vswh`
2. `helm install` the webhook chart with `values.yaml` (see the sketch below)
9. Set up test secrets
1. `vault kv put secret/accounts/aws AWS_SECRET_ACCESS_KEY=s3cr3t`
10. Deploy test app
1. `kubectl apply -f test-deployment.yaml`
**NOTE:** To get all metrics, StatsD seems to be required; direct Prometheus stats are limited.
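
The commands behind steps 2–3 and 8 above are sketched below, assuming the `banzaicloud-stable` chart repository and the `vault-operator` / `vault-secrets-webhook` chart names; check the walkthrough for current versions before running.

```sh
# Steps 2-3: add the Banzai Cloud repo and install the Vault operator
helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
helm repo update
kubectl create namespace vault-infra
helm install vault-operator banzaicloud-stable/vault-operator --namespace vault-infra

# Step 8: deploy the mutating webhook into its own namespace
kubectl create ns vswh
helm install vault-secrets-webhook banzaicloud-stable/vault-secrets-webhook \
  --namespace vswh -f values.yaml
```
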
## Next Steps
* ~~Look at setting up OAuth (SAML/OIDC) authentication for Kubernetes services (I suspect this may require Nginx Ingress)~~ (See #39 - implemented in [Oauth2-Proxy](../../src/services/oauth2-proxy))
......
# Redeployment
## After Installation
1. Install Helm
2. Install NFS-Client
3. Install MetalLB
4. Install Cert-Manager (long-lived certificates & Let's Encrypt; see the Helm sketch after this list)
5. Install [Prometheus-Operator](https://github.com/helm/charts/tree/master/stable/prometheus-operator)
1. Prometheus
2. Alertmanager
3. node-exporter
4. kube-state-metrics
5. Grafana
6. Scraping for the Kubernetes cluster
6. Install [Bank-Vaults](https://github.com/banzaicloud/bank-vaults)
1. Install [Consul](https://github.com/hashicorp/consul-helm)
1. Not using Connect
2. Not using MeshGateway
3. Purely a K/V store for Vault
7. Install [Elastic-Cloud-Operator](https://www.elastic.co/elasticsearch-kubernetes)
1. Elasticsearch
2. Kibana
8. Install [Logging-Operator](https://github.com/banzaicloud/logging-operator)
1. Fluentd
2. Fluentbit
**NOTE:** To get logging, you can add the config via [Helm](https://github.com/banzaicloud/logging-operator/blob/master/charts/nginx-logging-demo/templates/logging.yaml)
9. Install [Istio](https://istio.io/docs/setup/install/helm/) or [Istio-Operator](https://github.com/banzaicloud/istio-operator)
1. Install Tracing
1. [Jaeger-Operator](https://github.com/helm/charts/tree/master/stable/jaeger-operator)
2. Deploy all-in-one, as trace history is not needed
1. See the trace history issue with dependencies:
1. [Docs](https://www.jaegertracing.io/docs/1.14/operator/#elasticsearch-storage)
2. [Github Issue](https://github.com/jaegertracing/jaeger-operator/issues/294)
2. Install [Kiali](https://www.kiali.io/documentation/getting-started/#_install_kiali_latest)
3. Grafana via Prometheus-Operator
1. Do not automatically deploy the dashboards; import the JSON manually for the best experience
4. Prometheus via Prometheus-Operator
5. Fluentd via Logging-Operator
10. Install [Anchore Image Validator](https://github.com/banzaicloud/anchore-image-validator) - pod security
1. Install Anchore Engine
11. Install Keycloak (install stage might change depending on OIDC use in above items)
12. Install OAuth2-Proxy
1. Allows authenticating and authorizing ingress traffic into the environment
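
A rough Helm sketch of items 2 through 5 above, assuming Helm 3 and the chart repositories of the time (`stable`, `jetstack`); the NFS parameters and the `metallb-values.yaml` / `prometheus-values.yaml` files are placeholders, and chart names or required values may have changed.

```sh
# 1-4. Helm is assumed to be installed already; storage, load balancer, certificates
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add jetstack https://charts.jetstack.io
helm repo update
for ns in metallb-system cert-manager monitoring; do kubectl create namespace "$ns"; done

helm install nfs-client stable/nfs-client-provisioner \
  --set nfs.server=<nfs-server-ip> --set nfs.path=/export/kubernetes
helm install metallb stable/metallb --namespace metallb-system -f metallb-values.yaml
# Apply the cert-manager CRDs first (see its install docs), then:
helm install cert-manager jetstack/cert-manager --namespace cert-manager

# 5. Prometheus-Operator (Prometheus, Alertmanager, exporters, Grafana)
helm install prometheus stable/prometheus-operator --namespace monitoring -f prometheus-values.yaml
```
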
## Potential apps
1. Install Gitlab-CE
1. CI/CD environment
2. [Helm Chart Repository](https://tobiasmaier.info/posts/2018/03/13/hosting-helm-repo-on-gitlab-pages.html)
1. Alternatively, look at [ChartMuseum](https://github.com/helm/chartmuseum)
3. Docker Repository
2. [Sonarqube](https://github.com/banzaicloud/banzai-charts/tree/master/sonarqube)
3. [Chaoskube](https://github.com/helm/charts/tree/master/stable/chaoskube)
4. [ClamAV](https://github.com/helm/charts/tree/master/stable/clamav)
5. [Elastalert](https://github.com/helm/charts/tree/master/stable/elastalert)
6. [Falco](https://github.com/helm/charts/tree/master/stable/falco)
7. [helm-exporter](https://github.com/helm/charts/tree/master/stable/helm-exporter)
8. [Katafygio](https://github.com/helm/charts/tree/master/stable/katafygio)
9. [kube-hunter](https://github.com/aquasecurity/kube-hunter)
10. [Kubedb](https://github.com/kubedb/installer/tree/v0.13.0-rc.0/chart/kubedb)
11. [kuberhealthy](https://github.com/helm/charts/tree/master/stable/kuberhealthy)
12. [locust](https://github.com/helm/charts/tree/master/stable/locust)
13. [Minio](https://github.com/helm/charts/tree/master/stable/minio)
14. [openvpn](https://github.com/helm/charts/tree/master/stable/openvpn)
15. [Velero](https://github.com/helm/charts/tree/master/stable/velero)
\ No newline at end of file
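# Presumably the clusterrole.yaml applied in step 6 of the Bank-Vaults walkthrough:
# binds the vault ServiceAccount to the built-in system:auth-delegator ClusterRole
# so Vault's Kubernetes auth method can validate ServiceAccount tokens through the
# TokenReview API.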
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: token-review
namespace: vault-infra
subjects:
- kind: ServiceAccount
name: vault # Name is case sensitive
namespace: vault-infra
roleRef:
kind: ClusterRole #this must be Role or ClusterRole
name: system:auth-delegator # this must match the name of the Role or ClusterRole you wish to bind to
apiGroup: rbac.authorization.k8s.io
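# ServiceMonitor for the Prometheus Operator: scrapes Vault's /metrics endpoint in
# Prometheus format every 30s over plain HTTP, authenticating with the bearer token
# file referenced below. The release: prometheus label assumes the Prometheus
# Operator release is named "prometheus" so its serviceMonitorSelector matches.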
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: vault-prometheus
namespace: vault-infra
labels:
release: prometheus
spec:
selector:
matchLabels:
app: vault
endpoints:
- interval: 30s
path: /metrics
params:
format:
- prometheus
port: metrics
scheme: http
scrapeTimeout: 30s
bearerTokenFile: '/etc/prometheus/config_out/.vault-token'
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-secrets
spec:
replicas: 1
selector:
matchLabels:
app: hello-secrets
template:
metadata:
labels:
app: hello-secrets
annotations:
vault.security.banzaicloud.io/vault-addr: "https://vault.vault-infra:8200"
vault.security.banzaicloud.io/vault-tls-secret: "vault-tls"
spec:
serviceAccountName: default
containers:
- name: alpine
image: alpine
command: ["sh", "-c", "echo $AWS_SECRET_ACCESS_KEY && echo going to sleep... && sleep 10000"]
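# Env values prefixed with "vault:" are not stored in the manifest; the
# vault-secrets-webhook injects a vault-env wrapper that resolves them against
# Vault at container start (path#key against the KV v2 secret/data API).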
env:
- name: AWS_SECRET_ACCESS_KEY
value: "vault:secret/data/accounts/aws#AWS_SECRET_ACCESS_KEY"
\ No newline at end of file
# Default values for vault-secrets-webhook.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 2
debug: true
image:
repository: banzaicloud/vault-secrets-webhook
tag: 0.5.1
pullPolicy: IfNotPresent
imagePullSecrets: []
service:
name: vault-secrets-webhook
type: ClusterIP
externalPort: 443
internalPort: 8443
env:
VAULT_IMAGE: vault:latest
VAULT_ENV_IMAGE: banzaicloud/vault-env:latest
# VAULT_CAPATH: /vault/tls
# used when the pod that should get the secret injected does not specify
# an imagePullSecret
# DEFAULT_IMAGE_PULL_SECRET:
# DEFAULT_IMAGE_PULL_SECRET_NAMESPACE:
metrics:
enabled: true
port: 8443
serviceMonitor:
enabled: true
scheme: https
tlsConfig:
insecureSkipVerify: true
volumes: []
# - name: vault-tls
# secret:
# secretName: vault-tls
volumeMounts: []
# - name: vault-tls
# mountPath: /vault/tls
podAnnotations: {}
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
## Assign a PriorityClassName to pods if set
priorityClassName: ""
rbac:
enabled: true
psp:
enabled: false
# This can cause issues when used with Helm, so it is not enabled by default
configMapMutation: false
configmapFailurePolicy: Ignore
podsFailurePolicy: Ignore
secretsFailurePolicy: Ignore
namespaceSelector:
matchExpressions:
- key: name
operator: NotIn
values:
- kube-system
# matchLabels:
# vault-injection: enabled
podDisruptionBudget:
enabled: true
minAvailable: 1
\ No newline at end of file
@@ -2,9 +2,9 @@ apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
app: consul-ui
app: consul-consul-ui
name: consul-ui
namespace: default
namespace: consul
annotations:
ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/tls-acme: "true"
@@ -19,7 +19,7 @@ spec:
http:
paths:
- backend:
serviceName: consul-ui
servicePort: 80
serviceName: consul-consul-ui
servicePort: 8500
status:
loadBalancer: {}
\ No newline at end of file
# Available parameters and their default values for the Consul chart.
# Server, when enabled, configures a server cluster to run. This should
# be disabled if you plan on connecting to a Consul cluster external to
# the Kube cluster.
global:
# enabled is the master enabled switch. Setting this to true or false
# will enable or disable all the components within this chart by default.
# Each component can be overridden using the component-specific "enabled"
# value.
enabled: true
# Domain to register the Consul DNS server to listen for.
domain: consul
# Image is the name (and tag) of the Consul Docker image for clients and
# servers below. This can be overridden per component.
image: "consul:1.4.0"
# imageK8S is the name (and tag) of the consul-k8s Docker image that
# is used for functionality such as the catalog sync. This can be overridden
# per component below.
imageK8S: "hashicorp/consul-k8s:0.4.0"
# Datacenter is the name of the datacenter that the agents should register
# as. This shouldn't be changed once the Consul cluster is up and running
# since Consul doesn't support an automatic way to change this value
# currently: https://github.com/hashicorp/consul/issues/1858
datacenter: changeme
server:
enabled: "-"
image: null
replicas: 3
bootstrapExpect: 3 # Should be <= the replica count
# storage and storageClass are the settings for configuring stateful
# storage for the server pods. storage should be set to the disk size of
# the attached volume. storageClass is the class of storage which defaults
# to null (the Kube cluster will pick the default).
storage: 10Gi
storageClass: nfs-client
# connect will enable Connect on all the servers, initializing a CA
# for Connect-related connections. Other customizations can be done
# via the extraConfig setting.
connect: true
# Resource requests, limits, etc. for the server cluster placement. This
# should map directly to the value of the resources field for a PodSpec,
# formatted as a multi-line string. By default no direct resource request
# is made.
resources: null
# updatePartition is used to control a careful rolling update of Consul
# servers. This should be done particularly when changing the version
# of Consul. Please refer to the documentation for more information.
updatePartition: 0
# disruptionBudget enables the creation of a PodDisruptionBudget to
# prevent voluntary degrading of the Consul server cluster.
disruptionBudget:
enabled: true
# maxUnavailable will default to (n/2)-1 where n is the number of
# replicas. If you'd like a custom value, you can specify an override here.
maxUnavailable: null
# extraConfig is a raw string of extra configuration to set with the
# server. This should be JSON.
extraConfig: |
{}
# extraVolumes is a list of extra volumes to mount. These will be exposed
# to Consul in the path `/consul/userconfig/<name>/`. The value below is
# an array of objects, examples are shown below.
extraVolumes: []
# - type: secret (or "configMap")
# name: my-secret
# load: false # if true, will add to `-config-dir` to load by Consul
# Affinity Settings
# Commenting out the affinity variable, or setting it to empty, will allow
# deployment to single-node environments such as Minikube
affinity: |
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "consul.name" . }}
release: "{{ .Release.Name }}"
component: server
topologyKey: kubernetes.io/hostname
# Client, when enabled, configures Consul clients to run on every node
# within the Kube cluster. The current deployment model follows a traditional
# DC where a single agent is deployed per node.
client:
enabled: "-"
image: null
join: null
# grpc should be set to true if the gRPC listener should be enabled.
# This should be set to true if connectInject is enabled.
grpc: true
# Resource requests, limits, etc. for the client cluster placement. This
# should map directly to the value of the resources field for a PodSpec,
# formatted as a multi-line string. By default no direct resource request
# is made.
resources: null
# extraConfig is a raw string of extra configuration to set with the
# server. This should be JSON.
extraConfig: |
{}
# extraVolumes is a list of extra volumes to mount. These will be exposed
# to Consul in the path `/consul/userconfig/<name>/`. The value below is
# an array of objects, examples are shown below.
extraVolumes: []
# - type: secret (or "configMap")
# name: my-secret
# load: false # if true, will add to `-config-dir` to load by Consul
# Configuration for DNS configuration within the Kubernetes cluster.
# This creates a service that routes to all agents (client or server)
# for serving DNS requests. This DOES NOT automatically configure kube-dns
# today, so you must still manually configure a `stubDomain` with kube-dns
# for this to have any effect:
# https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configure-stub-domain-and-upstream-dns-servers
dns:
enabled: "-"
ui:
# True if you want to enable the Consul UI. The UI will run only
# on the server nodes. This makes UI access via the service below (if
# enabled) predictable rather than "any node" if you're running Consul
# clients as well.
enabled: true
# True if you want to create a Service entry for the Consul UI.
#
# serviceType can be used to control the type of service created. For
# example, setting this to "LoadBalancer" will create an external load
# balancer (for supported K8S installations) to access the UI.
service:
enabled: true
type: NodePort
# syncCatalog will run the catalog sync process to sync K8S with Consul
# services. This can run bidirectionally (default) or unidirectionally (Consul
# to K8S or K8S to Consul only).
#
# This process assumes that a Consul agent is available on the host IP.
# This is done automatically if clients are enabled. If clients are not
# enabled then set the node selection so that it chooses a node with a
# Consul agent.
syncCatalog:
# True if you want to enable the catalog sync. "-" for default.
enabled: true
image: null
default: true # true will sync by default, otherwise requires annotation
# toConsul and toK8S control whether syncing is enabled to Consul or K8S
# as a destination. If both of these are disabled, the sync will do nothing.
toConsul: true
toK8S: true
# k8sPrefix is the service prefix to prepend to services before registering
# with Kubernetes. For example "consul-" will register all services
# prepended with "consul-". (Consul -> Kubernetes sync)
k8sPrefix: null
# k8sTag is an optional tag that is applied to all of the Kubernetes services
# that are synced into Consul. If nothing is set, defaults to "k8s".
# (Kubernetes -> Consul sync)
k8sTag: null
# syncClusterIPServices syncs services of the ClusterIP type, which may
# or may not be broadly accessible depending on your Kubernetes cluster.
# Set this to false to skip syncing ClusterIP services.
syncClusterIPServices: true
# nodePortSyncType configures the type of syncing that happens for NodePort
# services. The valid options are: ExternalOnly, InternalOnly, ExternalFirst.
# - ExternalOnly will only use a node's ExternalIP address for the sync
# - InternalOnly uses the node's InternalIP address
# - ExternalFirst will preferentially use the node's ExternalIP address, but
# if it doesn't exist, it will use the node's InternalIP address instead.
nodePortSyncType: ExternalFirst
# ConnectInject will enable the automatic Connect sidecar injector.
connectInject:
enabled: true
image: null # image for consul-k8s that contains the injector
default: false # true will inject by default, otherwise requires annotation
# imageConsul and imageEnvoy can be set to Docker images for Consul and
# Envoy, respectively. If the Consul image is not specified, the global
# default will be used. If the Envoy image is not specified, an early
# version of Envoy will be used.
imageConsul: null
imageEnvoy: null
# namespaceSelector is the selector for restricting the webhook to only
# specific namespaces. This should be set to a multiline string.
namespaceSelector: null
# The certs section configures how the webhook TLS certs are configured.
# These are the TLS certs for the Kube apiserver communicating to the
# webhook. By default, the injector will generate and manage its own certs,
# but this requires the ability for the injector to update its own
# MutatingWebhookConfiguration. In a production environment, custom certs
# should probably be used. Configure the values below to enable this.
certs:
# secretName is the name of the secret that has the TLS certificate and
# private key to serve the injector webhook. If this is null, then the
# injector will default to its automatic management mode that will assign
# a service account to the injector to generate its own certificates.
secretName: null
# caBundle is a base64-encoded PEM-encoded certificate bundle for the
# CA that signed the TLS certificate that the webhook serves. This must
# be set if secretName is non-null.
caBundle: ""
# certName and keyName are the names of the files within the secret for
# the TLS cert and private key, respectively. These have reasonable
# defaults but can be customized if necessary.
certName: tls.crt
keyName: tls.key
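# Single-node Elasticsearch 7.2.0 quickstart for the Elastic Cloud on Kubernetes
# (ECK) operator; the nfs-client storageClass override is left commented out below.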
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
name: quickstart
spec:
version: 7.2.0
nodes:
- nodeCount: 1
config:
node.master: true
node.data: true
node.ingest: true
volumeClaimTemplates:
- metadata:
name: elasticsearch-data # note: elasticsearch-data must be the name of the Elasticsearch volume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
#storageClassName: "nfs-client"
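# Istio custom resource for the Banzai Cloud istio-operator: Istio 1.2.5 with mTLS
# disabled and automatic sidecar injection enabled for the listed namespaces.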
apiVersion: istio.banzaicloud.io/v1beta1
kind: Istio
metadata:
labels:
controller-tools.k8s.io: "1.0"
name: istio-sample
spec:
version: "1.2.5"
mtls: false
includeIPRanges: "*"
excludeIPRanges: ""
autoInjectionNamespaces:
- "default"
- "consul"
- "guacamole"
- "monitoring"
- "nginx-ingress"
- "oauth2"
- "vault-infra"
- "vswh"
- "weave"
controlPlaneSecurityEnabled: false
defaultResources:
requests:
cpu: 10m
sds:
enabled: false
pilot:
enabled: true
image: "docker.io/istio/pilot:1.2.5"
replicaCount: 1
minReplicas: 1
maxReplicas: 5
traceSampling: 1.0
resources:
requests:
cpu: 500m
memory: 2048Mi
citadel:
enabled: true
image: "docker.io/istio/citadel:1.2.5"
galley:
enabled: true
image: "docker.io/istio/galley:1.2.5"
replicaCount: 1
gateways:
enabled: true