Commit 75f6d335 authored by Piotr Szlenk

Features/multi network

parent 6ca187de
# Kubernetes with Calico networking and Cumulus VX as IP Fabric
![IP Fabric](images/ipfabric.png "IP Fabric with K8s")
This repository contains demos for various deployment scenarios for Kubernetes with Calico and IP Fabric based on Cumulus VX.
## Prerequisites
The first step is to deploy the IP Fabric, which is common to all scenarios. At a minimum, the IP Fabric provides IP reachability between the Kubernetes nodes. In the more advanced scenarios, the cluster peers with the IP Fabric over BGP to exchange pod and service IP ranges.
Then, as a second step, deploy one of the scenarios described below.
## High-level diagram of the setup
![IP Fabric](images/ipfabric-generic.png "IP Fabric with K8s")
## Deploying IP Fabric with Cumulus VX
### Prerequisites
1. Install the latest Vagrant engine
2. Install VirtualBox
3. 16 GB of RAM and 4 CPUs are recommended to run this demo
4. Install Python 2.7+ with virtualenv
## Bringing environment up
### Bringing IP Fabric up
1. Clone this repository and run the following commands from inside of it
2. Bring all Cumulus VX VMs up with the ```vagrant up leaf1 spine1 leaf2 spine2``` command
3. Configure the IP Fabric: ```sh fabric.deploy.sh```
## Kubernetes and IP Fabric - deployment scenarios
#### Deploying cluster
1. Bring the Kubernetes VMs up with the ```vagrant up k8s-master-l1-1 k8s-node-l1-1 k8s-node-l1-2 k8s-node-l2-1 k8s-node-l2-2``` command
2. Verify with ```vagrant status```
3. Prepare the K8s nodes and take note of the token and cert digest: ```sh vagrant.prepare_nodes.sh```
4. Join the K8s workers to the master: ```sh vagrant.join_nodes.sh <token> <cert digest>```
### Scenario 1 - Kubernetes nodes establish BGP peering with leaf switches, no overlay
In this scenario, each Kubernetes node establishes a single iBGP session with its leaf switch to exchange pod and service prefixes. Workload traffic traverses the IP Fabric as native IP traffic, with no overlay encapsulation.
#### High-level diagram
![IP Fabric with BGP peering](images/ipfabric-with-peering.png)
#### Deploying networking
1. Deploy Calico: ```sh vagrant.calico-with-peering.sh```
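The script applies calico-with-peering/calico.yaml and then pushes calico.nodes.yaml and calico.bgpconfig.yaml through the calicoctl pod; the contents of those BGP definition files are not part of this commit. As a rough, hypothetical sketch of the kind of Calico resources this scenario relies on (the leaf address below is an assumption, not a value from the repo), disabling the node-to-node mesh and peering each rack's nodes with its leaf switch might look like this:

```yaml
# Hypothetical sketch only - the real definitions live in
# calico-with-peering/calico.bgpconfig.yaml and calico-with-peering/calico.nodes.yaml.
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  # The leaf switches carry the routes, so Calico's full node-to-node mesh is not needed.
  nodeToNodeMeshEnabled: false
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: leaf1-rack1
spec:
  # Peer every node carrying the asnum=65101 label (applied by the deployment script)
  # with its local leaf switch over iBGP.
  nodeSelector: asnum == '65101'
  peerIP: 10.0.101.1        # assumed leaf1 address, not taken from the repo
  asNumber: 65101
```

The per-node AS numbers themselves would typically be set on the Calico Node resources, which is presumably what calico.nodes.yaml carries.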
### Scenario 2 - Kubernetes nodes without BGP peering, VXLAN encapsulation in the overlay
In this scenario, each Kubernetes node encapsulates workload traffic using VXLAN. No BGP peering with the fabric is used.
#### High-level diagram
#### Deploying networking
1. Deploy Calico: ```sh vagrant.calico-vxlan-overlay.sh```
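Here the fabric only needs reachability to the node addresses, since pod-to-pod traffic is wrapped in VXLAN by Calico. A minimal sketch of the kind of IPPool this mode relies on (the CIDR below is an assumption; the actual pool is configured by calico-vxlan-overlay/calico.yaml):

```yaml
# Hypothetical sketch - the actual pool is defined by calico-vxlan-overlay/calico.yaml.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16   # assumed pod CIDR
  vxlanMode: Always      # encapsulate all inter-node workload traffic in VXLAN
  ipipMode: Never
  natOutgoing: true
```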
## Testing and verification
### Deploying demo app
1. SSH to k8s-master-l1-1 node: ```vagrant ssh k8s-master-l1-1```
2. Execute: ```kubectl apply -f demo/namespace.yaml```
3. Execute: ```kubectl apply -f demo/deployment.yaml```
4. Execute: ```kubectl apply -f demo/service.yaml```
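The demo manifests live under demo/ and are not included in this diff. As a purely illustrative, hypothetical example of the kind of objects such a demo usually creates (the namespace, image, and ports below are made up, not the repo's actual values):

```yaml
# Hypothetical sketch only - see demo/namespace.yaml, demo/deployment.yaml
# and demo/service.yaml in the repository for the real definitions.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
  namespace: demo
spec:
  replicas: 4                 # enough pods to land on nodes behind both leaves
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.17
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-web
  namespace: demo
spec:
  selector:
    app: demo-web
  ports:
  - port: 80
    targetPort: 80
```

In the BGP-peering scenario, these pod and service addresses are exactly what gets advertised into the fabric, which is what the verification steps below inspect.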
### Checking BGP peering in IP Fabric
1. Execute: ```vagrant ssh leaf1 -c "sudo net show bgp sum"```
2. Execute: ```vagrant ssh leaf2 -c "sudo net show bgp sum"```
3. Execute: ```vagrant ssh spine1 -c "sudo net show bgp sum"```
4. Execute: ```vagrant ssh spine2 -c "sudo net show bgp sum"```
### Checking BGP prefixes in IP Fabric
1. Execute: ```vagrant ssh leaf1 -c "sudo net show bgp"```
2. Execute: ```vagrant ssh leaf2 -c "sudo net show bgp"```
3. Execute: ```vagrant ssh spine1 -c "sudo net show bgp"```
@@ -74,7 +74,8 @@ Vagrant.configure("2") do |config|
k8smaster1.vm.provision "file", source: "k8s-provisioning/init.k8s-nodes.sh", destination: "$HOME/k8s-provisioning/04_init.k8s-nodes.sh"
k8smaster1.vm.provision "file", source: "k8s-provisioning/kubeadm-init.sh", destination: "$HOME/k8s-provisioning/05_kubeadm-init.sh"
k8smaster1.vm.provision "file", source: "k8s-provisioning/labels.k8s-nodes.sh", destination: "$HOME/k8s-provisioning/06_labels.k8s-nodes.sh"
k8smaster1.vm.provision "file", source: "calico-with-peering/", destination: "$HOME/calico-with-peering"
k8smaster1.vm.provision "file", source: "calico-vxlan-overlay/", destination: "$HOME/calico-vxlan-overlay"
k8smaster1.vm.provision "file", source: "demo/", destination: "$HOME/demo"
end
# Calico Version v3.8.0
# https://docs.projectcalico.org/v3.8/releases#v3.8.0
# This manifest includes the following component versions:
# calico/ctl:v3.8.0
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calicoctl
  namespace: kube-system
---
apiVersion: v1
kind: Pod
metadata:
  name: calicoctl
  namespace: kube-system
spec:
  nodeSelector:
    beta.kubernetes.io/os: linux
  hostNetwork: true
  serviceAccountName: calicoctl
  containers:
  - name: calicoctl
    image: calico/ctl:v3.8.0
    command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
    env:
    - name: DATASTORE_TYPE
      value: kubernetes
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calicoctl
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - nodes
    verbs:
      - get
      - list
      - update
  - apiGroups: [""]
    resources:
      - pods
      - serviceaccounts
    verbs:
      - get
      - list
  - apiGroups: [""]
    resources:
      - pods/status
      - nodes/status
    verbs:
      - update
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgppeers
      - bgpconfigurations
      - clusterinformations
      - felixconfigurations
      - globalnetworkpolicies
      - globalnetworksets
      - ippools
      - networkpolicies
      - networksets
      - hostendpoints
      - ipamblocks
      - blockaffinities
      - ipamhandles
    verbs:
      - create
      - get
      - list
      - update
      - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calicoctl
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calicoctl
subjects:
- kind: ServiceAccount
  name: calicoctl
  namespace: kube-system
#!/bin/sh
vagrant ssh k8s-master-l1-1 -c "kubectl apply -f calico-vxlan-overlay/calicoctl.yaml"
vagrant ssh k8s-master-l1-1 -c "kubectl apply -f calico-vxlan-overlay/calico.yaml"
#sleep 120
#vagrant ssh k8s-master-l1-1 -c "kubectl exec -i -n kube-system calicoctl -- calicoctl apply -f - < calico-vxlan-overlay/calico.nodes.yaml"
#vagrant ssh k8s-master-l1-1 -c "kubectl exec -i -n kube-system calicoctl -- calicoctl apply -f - < calico-vxlan-overlay/calico.bgpconfig.yaml"
#alias calicoctl="kubectl exec -i -n kube-system calicoctl /calicoctl -- "
@@ -6,12 +6,12 @@ vagrant ssh k8s-master-l1-1 -c "kubectl label nodes k8s-node-l1-2 'asnum=65101
vagrant ssh k8s-master-l1-1 -c "kubectl label nodes k8s-node-l2-1 'asnum=65102'"
vagrant ssh k8s-master-l1-1 -c "kubectl label nodes k8s-node-l2-2 'asnum=65102'"
vagrant ssh k8s-master-l1-1 -c "kubectl apply -f calico/calicoctl.yaml"
vagrant ssh k8s-master-l1-1 -c "kubectl apply -f calico/calico.yaml"
vagrant ssh k8s-master-l1-1 -c "kubectl apply -f calico-with-peering/calicoctl.yaml"
vagrant ssh k8s-master-l1-1 -c "kubectl apply -f calico-with-peering/calico.yaml"
sleep 120
vagrant ssh k8s-master-l1-1 -c "kubectl exec -i -n kube-system calicoctl -- calicoctl apply -f - < calico/calico.nodes.yaml"
vagrant ssh k8s-master-l1-1 -c "kubectl exec -i -n kube-system calicoctl -- calicoctl apply -f - < calico/calico.bgpconfig.yaml"
vagrant ssh k8s-master-l1-1 -c "kubectl exec -i -n kube-system calicoctl -- calicoctl apply -f - < calico-with-peering/calico.nodes.yaml"
vagrant ssh k8s-master-l1-1 -c "kubectl exec -i -n kube-system calicoctl -- calicoctl apply -f - < calico-with-peering/calico.bgpconfig.yaml"
#alias calicoctl="kubectl exec -i -n kube-system calicoctl /calicoctl -- "