> "The cloud is just someone else's computer — but the security misconfigurations are all yours." — Common infosec saying
Learning Objectives
- Understand cloud computing security models and shared responsibility
- Perform reconnaissance and enumeration against cloud environments
- Identify and exploit IAM misconfigurations and privilege escalation paths
- Discover and test cloud storage misconfigurations
- Assess serverless and container security in cloud environments
- Use cloud-native security testing tools effectively
In This Chapter
- 29.1 Cloud Security Fundamentals
- 29.2 Cloud Reconnaissance and Enumeration
- 29.3 IAM Misconfigurations and Privilege Escalation
- 29.4 Storage Misconfigurations
- 29.5 Serverless and Container Security
- 29.6 Cloud-Native Security Testing Tools
- 29.7 AWS-Specific Deep Dive
- 29.8 Azure and GCP Security Testing
- 29.9 Cloud Attack Chains
- 29.10 Credential Hunting and Secret Discovery
- 29.11 Testing Methodology and Reporting
- 29.12 Applying to MedSecure: Cloud Security Testing Strategy
- 29.13 Cloud Forensics and Evidence Collection
- 29.14 Emerging Cloud Security Challenges
- 29.15 Cloud Security Testing Lab Setup
- Summary
Chapter 29: Cloud Security Testing
"The cloud is just someone else's computer — but the security misconfigurations are all yours." — Common infosec saying
The migration to cloud computing represents one of the most significant shifts in IT infrastructure history. Organizations of every size have moved critical workloads, sensitive data, and core business logic into cloud environments offered by Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and other providers. This migration has fundamentally altered the attack surface that ethical hackers must evaluate.
Cloud security testing is not simply traditional penetration testing transplanted into a virtual environment. The cloud introduces entirely new categories of vulnerabilities — misconfigured identity policies, exposed storage buckets, overly permissive serverless functions, and complex trust relationships between services — that have no direct analogue in on-premises infrastructure. The Capital One breach of 2019, which exposed over 100 million customer records through a server-side request forgery (SSRF) vulnerability chained with IAM misconfigurations, demonstrated just how catastrophic cloud-specific attacks can be. That single incident resulted in an $80 million fine and reshaped how the industry thinks about cloud security.
In this chapter, we will systematically explore how to test cloud environments for security weaknesses. We will cover the fundamental security models that govern cloud computing, walk through reconnaissance and enumeration techniques specific to cloud infrastructure, examine IAM misconfigurations and privilege escalation paths, probe storage services for exposure, assess serverless and container deployments, and leverage cloud-native security tools. Throughout, we will apply these techniques to our running examples — MedSecure's AWS-hosted healthcare platform and ShopStack's cloud infrastructure — while maintaining the ethical boundaries that define our profession.
Caution — Authorization Is Non-Negotiable
Cloud security testing requires explicit written authorization that specifically covers cloud resources. Most cloud providers have acceptable use policies and require notification or pre-approval for penetration testing. AWS, for example, permits testing of certain services without prior approval but prohibits testing of others. Azure and GCP have their own policies. Always verify provider-specific rules, obtain written scope documents that enumerate cloud accounts, regions, and services in scope, and never test resources you do not own or have explicit permission to test.
29.1 Cloud Security Fundamentals
Before we can test cloud security, we must understand the architectural and responsibility models that govern cloud environments. Cloud computing is not a monolith — it spans multiple service models, each with distinct security implications.
29.1.1 Cloud Service Models
The three primary cloud service models define where the provider's responsibility ends and the customer's begins:
Infrastructure as a Service (IaaS) provides virtualized computing resources — virtual machines, networks, and storage. AWS EC2, Azure Virtual Machines, and Google Compute Engine are canonical examples. In IaaS, the customer is responsible for everything from the operating system upward: patching, configuration, application security, and data protection. The provider secures the physical infrastructure, hypervisor, and network fabric.
Platform as a Service (PaaS) abstracts away the operating system and runtime environment, allowing customers to deploy applications without managing servers. AWS Elastic Beanstalk, Azure App Service, and Google App Engine exemplify PaaS. The provider handles OS patching and runtime security, while the customer secures application code, data, and access controls.
Software as a Service (SaaS) delivers fully managed applications. Microsoft 365, Salesforce, and Google Workspace are SaaS offerings. The provider manages nearly everything; the customer is primarily responsible for data classification, access management, and configuration settings.
29.1.2 The Shared Responsibility Model
Every major cloud provider publishes a shared responsibility model that delineates security obligations. AWS describes it as "security of the cloud" (provider responsibility) versus "security in the cloud" (customer responsibility). This distinction is critical for penetration testers because it defines what we can and cannot test.
Provider responsibilities typically include:
- Physical security of data centers
- Hardware and firmware integrity
- Network infrastructure
- Hypervisor security
- Managed service internals
Customer responsibilities typically include:
- Identity and access management (IAM)
- Data encryption (at rest and in transit)
- Network configuration (security groups, NACLs, VPC design)
- Application-level security
- Operating system patching (for IaaS)
- Logging and monitoring configuration
As ethical hackers, our testing focuses almost entirely on customer responsibilities. We are looking for mistakes in how organizations configure and use cloud services, not vulnerabilities in the cloud platforms themselves.
29.1.3 Multi-Cloud and Hybrid Complexity
Most enterprises today operate in multi-cloud or hybrid environments. MedSecure, for example, runs its primary patient portal on AWS but uses Azure Active Directory for identity management and maintains legacy on-premises systems that sync with cloud resources. This complexity creates seams — boundary areas where different security models meet — that attackers love to exploit.
Cross-cloud trust relationships, federated identity configurations, VPN tunnels between on-premises and cloud networks, and inconsistent security policies across providers all create opportunities for privilege escalation and lateral movement.
Blue Team Perspective — Cloud Security Posture Management
Defenders should implement Cloud Security Posture Management (CSPM) tools that continuously evaluate cloud configurations against security benchmarks like CIS Foundations Benchmarks. Tools like AWS Security Hub, Azure Security Center (now Defender for Cloud), and third-party solutions such as Prisma Cloud or Wiz provide automated configuration assessment. Understanding what these tools check — and what they miss — helps penetration testers focus on gaps that automated scanning overlooks.
29.2 Cloud Reconnaissance and Enumeration
Cloud reconnaissance differs significantly from traditional network reconnaissance. Instead of scanning IP ranges, we are often enumerating services, discovering cloud-specific endpoints, and mapping organizational cloud footprints through OSINT.
29.2.1 Identifying Cloud Presence
The first step in cloud security testing is determining which cloud providers an organization uses and what their cloud footprint looks like. Several techniques help:
DNS Analysis. Cloud-hosted services often have telltale DNS records. CNAME records pointing to *.amazonaws.com, *.azurewebsites.net, or *.googleapis.com immediately reveal cloud usage. Tools like dig, nslookup, and dnsenum can enumerate these records.
# Enumerate DNS records for the target domain
dig any medsecure-example.com
dig +short medsecure-example.com CNAME
nslookup -type=CNAME portal.medsecure-example.com
# Look for cloud-specific CNAME records
# Example output: portal.medsecure-example.com -> d1234567.cloudfront.net
# This reveals AWS CloudFront usage
SSL/TLS Certificate Analysis. Certificate Transparency (CT) logs reveal subdomains and can expose cloud-hosted services. Tools like crt.sh, Censys, and Certspotter are invaluable.
IP Range Analysis. Cloud providers publish their IP ranges. AWS publishes its ranges at https://ip-ranges.amazonaws.com/ip-ranges.json. Cross-referencing target IP addresses against these published ranges confirms cloud hosting and identifies specific regions and services.
HTTP Headers and Responses. Cloud services often include identifying headers. The Server header might reveal AmazonS3, Microsoft-Azure-Application-Gateway, or Google Frontend. Custom headers like x-amz-request-id (AWS), x-ms-request-id (Azure), or x-guploader-uploadid (GCP) confirm specific cloud services.
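These fingerprints can be folded into a small triage helper. A minimal sketch in Python; the header and Server-banner mappings are only the ones listed above, not an exhaustive catalog:

```python
# Map cloud-identifying HTTP response headers and Server banners to the
# provider they indicate. Only the fingerprints discussed above are included.
HEADER_FINGERPRINTS = {
    "x-amz-request-id": "AWS",
    "x-ms-request-id": "Azure",
    "x-guploader-uploadid": "GCP",
}

SERVER_FINGERPRINTS = {
    "amazons3": "AWS",
    "microsoft-azure-application-gateway": "Azure",
    "google frontend": "GCP",
}

def identify_provider(headers):
    """Return a provider guess from response headers, or None."""
    lowered = {k.lower(): v for k, v in headers.items()}
    for header, provider in HEADER_FINGERPRINTS.items():
        if header in lowered:
            return provider
    server = lowered.get("server", "").lower()
    for token, provider in SERVER_FINGERPRINTS.items():
        if token in server:
            return provider
    return None
```

Run against the headers captured for each discovered host, this quickly partitions an external attack surface by provider.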
29.2.2 S3 Bucket Discovery
Amazon S3 buckets are one of the most commonly targeted cloud resources. Discovering buckets belonging to a target organization is a critical reconnaissance step.
Naming Convention Patterns. Organizations often use predictable naming conventions for S3 buckets:
- companyname-backups
- companyname-logs
- companyname-dev
- companyname-staging
- companyname-assets
- companyname-data
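A short script can expand an organization name into a candidate wordlist using these conventions. The suffix list and separator variants below are illustrative assumptions; the output can be fed to tools like S3Scanner:

```python
# Generate candidate S3 bucket names from an organization name using the
# common suffix conventions listed above. Separator variants (dash, dot,
# none) are an assumption based on naming patterns seen in the wild.
COMMON_SUFFIXES = ["backups", "logs", "dev", "staging", "assets", "data"]

def bucket_candidates(org, suffixes=COMMON_SUFFIXES):
    org = org.lower().strip()
    names = [org]
    for suffix in suffixes:
        names.append(f"{org}-{suffix}")   # medsecure-backups
        names.append(f"{org}.{suffix}")   # dot-separated variant
        names.append(f"{org}{suffix}")    # no separator
    return names
```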
Automated Enumeration. Tools like cloud_enum, S3Scanner, BucketFinder, and lazys3 automate the process of discovering buckets through common naming patterns.
# Using cloud_enum for multi-cloud enumeration
python3 cloud_enum.py -k medsecure -k medsecure-health -k medsecure-portal
# Using S3Scanner
python3 s3scanner.py --buckets bucket-names.txt
# Manual verification of a discovered bucket
aws s3 ls s3://medsecure-backups --no-sign-request
Source Code and Configuration Analysis. Applications often reference cloud storage in their source code, configuration files, or JavaScript bundles. Searching for patterns like s3.amazonaws.com, blob.core.windows.net, or storage.googleapis.com in client-side code can reveal bucket names.
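A quick way to do this at scale is a regex pass over downloaded JavaScript bundles and configuration files. A heuristic sketch; the patterns cover only the three hostnames named above and will miss regional endpoint variants:

```python
import re

# Regexes for the cloud-storage hostnames mentioned above. Applied to
# client-side JavaScript or config files, matches reveal bucket and
# storage-account names worth probing.
STORAGE_PATTERNS = [
    re.compile(r"([a-z0-9.\-]+)\.s3[.\-][a-z0-9\-]*\.?amazonaws\.com"),
    re.compile(r"s3\.amazonaws\.com/([a-z0-9.\-]+)"),
    re.compile(r"([a-z0-9]+)\.blob\.core\.windows\.net"),
    re.compile(r"storage\.googleapis\.com/([a-z0-9.\-_]+)"),
]

def find_storage_refs(source):
    """Return the set of bucket/account names referenced in source text."""
    hits = set()
    for pattern in STORAGE_PATTERNS:
        for match in pattern.finditer(source.lower()):
            hits.add(match.group(1))
    return hits
```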
29.2.3 Azure and GCP Enumeration
Azure and GCP have their own enumeration targets:
Azure Enumeration:
- Blob storage: <account>.blob.core.windows.net
- Azure AD tenant discovery: OpenID configuration endpoints
- Azure App Services: <app>.azurewebsites.net
- Azure SQL: <server>.database.windows.net
# Enumerate Azure blob storage (cloud_enum checks all providers by default,
# so disable the others to focus on Azure)
python3 cloud_enum.py -k medsecure --disable-aws --disable-gcp
# Check for Azure AD tenant
curl https://login.microsoftonline.com/medsecure.com/.well-known/openid-configuration
GCP Enumeration:
- Cloud Storage: storage.googleapis.com/<bucket> or <bucket>.storage.googleapis.com
- Firebase databases: <project>.firebaseio.com
- App Engine: <project>.appspot.com
29.2.4 Cloud Metadata Services
Cloud instances run metadata services that provide configuration information, temporary credentials, and instance details. These endpoints are a high-value target:
- AWS: http://169.254.169.254/latest/meta-data/
- Azure: http://169.254.169.254/metadata/instance?api-version=2021-02-01 (requires the Metadata: true header)
- GCP: http://metadata.google.internal/computeMetadata/v1/ (requires the Metadata-Flavor: Google header)
SSRF vulnerabilities that can reach these metadata endpoints are particularly dangerous because they can leak IAM credentials, network configurations, and user data scripts that may contain secrets.
# If you discover an SSRF vulnerability, these are high-value targets:
# AWS IMDSv1 (no authentication required)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# AWS IMDSv2 (requires token — more secure)
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" \
-H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl "http://169.254.169.254/latest/meta-data/iam/security-credentials/" \
-H "X-aws-ec2-metadata-token: $TOKEN"
Blue Team Perspective — IMDSv2 and Metadata Protection
AWS introduced Instance Metadata Service version 2 (IMDSv2) specifically to mitigate SSRF-based credential theft. IMDSv2 requires a PUT request to obtain a session token before accessing metadata — a significant barrier for most SSRF exploits. Defenders should enforce IMDSv2 across all EC2 instances and disable IMDSv1. Azure and GCP have similar protections. During testing, check whether IMDSv1 is still enabled — its presence is itself a finding.
29.3 IAM Misconfigurations and Privilege Escalation
Identity and Access Management is the cornerstone of cloud security — and the most frequent source of catastrophic misconfigurations. IAM controls who can do what to which resources, and errors in IAM policies are the root cause of many cloud breaches.
29.3.1 AWS IAM Fundamentals for Testers
AWS IAM consists of several key components:
Users are individual identities with long-term credentials (access keys and passwords). Groups are collections of users that share permissions. Roles are identities assumed by services, applications, or users, providing temporary credentials. Policies are JSON documents that define permissions.
AWS IAM policies follow an evaluation logic: explicit deny always wins, then explicit allow is checked, and implicit deny is the default. Understanding this evaluation order is essential for identifying exploitable misconfigurations.
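This evaluation logic can be modeled in a few lines. A deliberately simplified sketch: it handles only exact actions and trailing-wildcard patterns, and ignores conditions, NotAction, and resource ARN matching:

```python
# Minimal model of IAM policy evaluation: an explicit Deny always wins,
# then an explicit Allow is checked, and no matching statement at all is
# an implicit deny. Simplified: no conditions, NotAction, or resources.
def matches(pattern, action):
    return pattern == "*" or pattern == action or (
        pattern.endswith("*") and action.startswith(pattern[:-1]))

def is_allowed(policies, action):
    decision = "implicit-deny"
    for policy in policies:
        for stmt in policy["Statement"]:
            actions = stmt["Action"]
            if isinstance(actions, str):
                actions = [actions]
            if any(matches(a, action) for a in actions):
                if stmt["Effect"] == "Deny":
                    return False          # explicit deny always wins
                decision = "allow"
    return decision == "allow"
```

Walking exploit candidates through a model like this helps confirm whether a discovered policy combination actually permits the action you intend to test.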
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
The policy above grants full S3 access to all buckets — a common over-permissioning mistake. During testing, we look for policies like this that grant broader access than necessary.
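Hunting for statements like this can be automated. A minimal checker, assuming the policy document is already parsed from JSON; it flags Allow statements that pair a wildcard action with Resource "*":

```python
# Flag over-permissive statements like the one above: Effect Allow with a
# wildcard action ("*" or "service:*") combined with Resource "*".
def overly_permissive_statements(policy):
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        wildcard_action = any(a == "*" or a.endswith(":*") for a in actions)
        if wildcard_action and "*" in resources:
            findings.append(stmt)
    return findings
```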
29.3.2 Enumerating IAM Permissions
When you obtain AWS credentials during a test (through exposed keys, SSRF, or provided by the client), the first step is understanding what those credentials can do.
# Determine the identity of the credentials
aws sts get-caller-identity
# Enumerate attached policies
aws iam list-attached-user-policies --user-name compromised-user
aws iam list-user-policies --user-name compromised-user
# Get policy details
aws iam get-policy-version --policy-arn arn:aws:iam::123456789012:policy/MyPolicy \
--version-id v1
# Enumerate group memberships and group policies
aws iam list-groups-for-user --user-name compromised-user
aws iam list-attached-group-policies --group-name developers
# If you have the role name, check what it can assume
aws iam list-role-policies --role-name my-ec2-role
Tools like enumerate-iam can automate permission discovery by brute-forcing API calls to determine which actions are allowed:
# Using enumerate-iam to discover permissions through brute force
python3 enumerate-iam.py --access-key AKIA... --secret-key wJalrXUt...
29.3.3 IAM Privilege Escalation Techniques
Rhino Security Labs documented over 20 AWS IAM privilege escalation techniques. These exploit legitimate IAM features to elevate permissions beyond what was intended. Key techniques include:
Creating a New Policy Version. If a user has iam:CreatePolicyVersion permission, they can create a new version of an existing policy with elevated permissions and set it as the default version.
# Create a new policy version with admin access
aws iam create-policy-version --policy-arn arn:aws:iam::123456789012:policy/MyPolicy \
--policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"*","Resource":"*"}]}' \
--set-as-default
Attaching a Policy to a User/Group/Role. If a user has iam:AttachUserPolicy, iam:AttachGroupPolicy, or iam:AttachRolePolicy, they can attach the AdministratorAccess managed policy to themselves.
Creating/Updating a Login Profile. If a user has iam:CreateLoginProfile or iam:UpdateLoginProfile, they can create or change console passwords for other users.
Passing a Role to a New Service. The iam:PassRole permission combined with service-specific permissions (like lambda:CreateFunction and lambda:InvokeFunction) allows creating a Lambda function with a high-privilege role, then invoking it to perform actions with those elevated privileges.
AssumeRole Chains. Complex environments may have chains of roles that can assume other roles. Mapping these trust relationships can reveal paths from low-privilege to high-privilege roles.
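Mapping these chains is a graph problem: principals are nodes, assumable roles are edges, and escalation paths are path searches. A sketch over an illustrative trust map (a real engagement would populate it from role trust policies retrieved with iam:GetRole):

```python
from collections import deque

# Breadth-first search over AssumeRole trust relationships. The trust map
# passed in is illustrative data: {principal: [roles it may assume]}.
def assume_path(trusts, start, target):
    """Return the shortest assume-role path from start to target, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in trusts.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Tools like Cartography (covered later in this chapter) build the same kind of graph automatically at account scale.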
# Using Pacu for automated privilege escalation
# First, import the compromised keys
pacu
> import_keys compromised-user
> run iam__enum_permissions
> run iam__privesc_scan
29.3.4 Azure and GCP IAM Testing
Azure uses a role-based access control (RBAC) model layered with Azure AD roles. Key testing targets include:
- Overly broad role assignments (Owner or Contributor at subscription level)
- Custom role definitions with excessive permissions
- Service principal credential exposure
- Managed identity misconfigurations
- Azure AD privileged roles (Global Administrator, Application Administrator)
GCP uses IAM policies bound at organization, folder, project, and resource levels. Testing focuses on:
- Primitive roles (Owner, Editor, Viewer) that grant broad access
- Service account key exposure
- Service account impersonation
- Cross-project access through shared service accounts
MedSecure Scenario — IAM Discovery
During the MedSecure engagement, the testing team discovered an exposed .env file on the staging server containing AWS access keys for a developer account. Using enumerate-iam, they found the account had iam:PassRole permission and lambda:* permissions. By creating a Lambda function with the MedSecure-AdminRole attached — a role intended only for emergency operations — they escalated to full administrative access across the AWS account, gaining access to patient data in RDS databases and S3 storage. This finding was rated Critical and led to a complete IAM restructuring.
29.4 Storage Misconfigurations
Cloud storage misconfigurations have been responsible for some of the largest data exposures in history. Misconfigured S3 buckets, Azure Blob containers, and GCP Cloud Storage buckets have leaked billions of records.
29.4.1 Amazon S3 Security Testing
S3 bucket security involves multiple layers:
Bucket Policies define who can access the bucket and its objects. A common misconfiguration is allowing public access:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicRead",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::medsecure-patient-records/*"
}
]
}
Access Control Lists (ACLs) provide legacy access control. The AllUsers and AuthenticatedUsers groups are particularly dangerous — AuthenticatedUsers means any AWS account, not just accounts in the target organization.
Block Public Access Settings are account-level and bucket-level controls that override policies and ACLs to prevent public access. Testing should verify these are enabled.
# Test for public access
aws s3 ls s3://target-bucket --no-sign-request
aws s3 cp s3://target-bucket/test.txt . --no-sign-request
# Check bucket ACL
aws s3api get-bucket-acl --bucket target-bucket
# Check bucket policy
aws s3api get-bucket-policy --bucket target-bucket
# Check block public access settings
aws s3api get-public-access-block --bucket target-bucket
# Check for versioning (may reveal deleted sensitive files)
aws s3api list-object-versions --bucket target-bucket
# Check for server-side encryption configuration
aws s3api get-bucket-encryption --bucket target-bucket
S3 Object-Level Permissions. Even if a bucket is not public, individual objects may have permissive ACLs. Testing should sample object-level permissions, especially for sensitive data.
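Sampling can be automated against get-object-acl output. A minimal classifier; the response shape follows the S3 API, and the two group URIs are the real AWS constants for AllUsers and AuthenticatedUsers:

```python
# Classify dangerous grants in an S3 object ACL (shape mirrors the
# get-object-acl response). AllUsers means anyone on the internet;
# AuthenticatedUsers means any AWS account whatsoever.
DANGEROUS_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers": "public",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers": "any-aws-account",
}

def risky_grants(acl):
    """Return (exposure, permission) pairs for dangerous grants."""
    findings = []
    for grant in acl.get("Grants", []):
        uri = grant.get("Grantee", {}).get("URI", "")
        if uri in DANGEROUS_GRANTEES:
            findings.append((DANGEROUS_GRANTEES[uri], grant.get("Permission")))
    return findings
```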
29.4.2 Azure Blob Storage Testing
Azure Blob Storage has similar misconfiguration risks:
# Check for public container access
curl "https://medsecuresa.blob.core.windows.net/patient-data?restype=container&comp=list"
# If the container allows anonymous list, all blobs will be enumerated
# Access levels: private, blob (anonymous read for blobs), container (anonymous read + list)
Azure Storage also supports Shared Access Signatures (SAS) tokens — time-limited URLs that grant access to specific resources. Testing should look for:
- SAS tokens in source code, configuration files, or URLs
- Overly permissive SAS tokens (full access, long expiration)
- Account-level SAS tokens (versus service or object-level)
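A heuristic scanner for leaked SAS URLs can flag the risky properties directly. A sketch: the regex targets the standard *.core.windows.net hostnames and the sp/se query parameters, and will not catch every token format:

```python
import re
from urllib.parse import urlparse, parse_qs

# Detect Azure SAS URLs leaked in text and flag risky properties: write or
# delete permissions (sp) and the expiry time (se). Heuristic sketch only.
SAS_URL = re.compile(
    r"https://[a-z0-9]+\.(?:blob|file|queue|table)\.core\.windows\.net/\S*[?&]sig=\S+")

def audit_sas(text):
    findings = []
    for match in SAS_URL.finditer(text):
        params = parse_qs(urlparse(match.group(0)).query)
        perms = params.get("sp", [""])[0]
        findings.append({
            "url": match.group(0),
            "permissions": perms,
            "expiry": params.get("se", [""])[0],
            "write_or_delete": any(c in perms for c in "wd"),
        })
    return findings
```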
29.4.3 GCP Cloud Storage Testing
GCP Cloud Storage uses IAM policies and ACLs:
# Check for public access using gsutil
gsutil ls gs://target-bucket
gsutil iam get gs://target-bucket
# Check for allUsers or allAuthenticatedUsers access
gsutil acl get gs://target-bucket
29.4.4 Beyond Read Access
Storage testing goes beyond checking for public read access. More dangerous misconfigurations include:
Write Access. If an attacker can write to a bucket that serves static content (like a website or application assets), they can inject malicious JavaScript, replace legitimate files with malware, or modify configuration files.
Delete Access. Write access often implies delete access, enabling data destruction or ransomware scenarios.
Logging Bucket Poisoning. If an attacker can write to a logging bucket, they may be able to inject false log entries or overwrite legitimate logs to cover tracks.
Blue Team Perspective — S3 Security Hardening
Defenders should implement a layered S3 security strategy: enable S3 Block Public Access at the account level, use bucket policies with explicit deny statements for sensitive data, enable default encryption with AWS KMS customer-managed keys, enable versioning and MFA Delete for critical buckets, configure S3 access logging, and use VPC endpoints to restrict S3 access to specific VPCs. AWS Access Analyzer for S3 can continuously monitor for public or cross-account access.
29.5 Serverless and Container Security
Serverless computing and container orchestration represent the next evolution of cloud deployment — and they introduce their own security challenges.
29.5.1 Serverless Security Testing
Serverless functions (AWS Lambda, Azure Functions, GCP Cloud Functions) execute code without managing servers. Security testing focuses on:
Function Configuration Review. Check for overly permissive execution roles, environment variables containing secrets, excessive timeout and memory settings (cost and abuse potential), and VPC configuration (or lack thereof).
# Enumerate Lambda functions
aws lambda list-functions --region us-east-1
# Get function configuration (including environment variables)
aws lambda get-function-configuration --function-name medsecure-patient-lookup
# Check the function's execution role
aws iam get-role --role-name medsecure-lambda-role
aws iam list-attached-role-policies --role-name medsecure-lambda-role
# Download function code for review
aws lambda get-function --function-name medsecure-patient-lookup \
--query 'Code.Location' --output text | xargs curl -o function.zip
Event Injection. Serverless functions are triggered by events — API Gateway requests, S3 uploads, SQS messages, DynamoDB streams. Testing should verify that input from these event sources is properly validated. A common vulnerability is assuming that because an event comes from a trusted AWS service, its content is safe.
# Example: Lambda function vulnerable to injection
# The function processes S3 upload events
def lambda_handler(event, context):
bucket = event['Records'][0]['s3']['bucket']['name']
key = event['Records'][0]['s3']['object']['key']
# Dangerous: using the key directly in a system command
import os
os.system(f"file /tmp/{key}") # Command injection via crafted filename
Cold Start and Shared Execution Environment. Lambda functions may share execution environments across invocations. Sensitive data written to /tmp during one invocation may persist for subsequent invocations — even from different users if the function is shared.
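This behavior is easy to demonstrate locally by invoking a handler twice in one process, mimicking two warm invocations that share an execution environment (an illustrative simulation, not a deployed Lambda):

```python
import os
import tempfile

# Simulate two warm invocations sharing one execution environment: data the
# first caller leaves in the temp directory is still readable by the next
# invocation, even if that invocation serves a different user.
SCRATCH = os.path.join(tempfile.gettempdir(), "patient_scratch.txt")

def handler(event, context=None):
    leftovers = None
    if os.path.exists(SCRATCH):               # previous invocation's residue
        with open(SCRATCH) as f:
            leftovers = f.read()
    with open(SCRATCH, "w") as f:             # this invocation's "temp" data
        f.write(event["data"])
    return leftovers

first = handler({"data": "ssn=123-45-6789"})
second = handler({"data": "harmless"})        # different caller, same sandbox
os.remove(SCRATCH)                            # clean up the demonstration
```

The second call returns the first caller's data, which is exactly the leakage pattern to look for when a shared function handles multiple tenants.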
29.5.2 Container Security in Cloud
Container orchestration services (Amazon ECS, EKS, Azure AKS, GCP GKE) are widespread. Security testing covers:
Container Image Security. Pull and analyze container images for vulnerabilities, embedded secrets, and unnecessary packages:
# List ECR repositories
aws ecr describe-repositories
# Pull and scan an image
aws ecr get-login-password --region us-east-1 | docker login --username AWS \
--password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/medsecure-api:latest
# Scan with Trivy
trivy image 123456789012.dkr.ecr.us-east-1.amazonaws.com/medsecure-api:latest
# Check for secrets in image layers
docker history --no-trunc 123456789012.dkr.ecr.us-east-1.amazonaws.com/medsecure-api:latest
Kubernetes-Specific Testing. If the target uses EKS, AKS, or GKE:
# Check for unauthenticated API server access
curl -k https://kubernetes-api-endpoint:6443/api/v1/namespaces
# If you have kubectl access, enumerate the cluster
kubectl auth can-i --list
kubectl get secrets --all-namespaces
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
# Check for pods running as privileged
kubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.containers[].securityContext.privileged==true) | .metadata.name'
# Check for service account token mounting
kubectl get pods -o json | jq '.items[] | select(.spec.automountServiceAccountToken!=false) | .metadata.name'
Container Escape. In a compromised container, test for escape to the underlying host:
- Check for privileged mode (--privileged)
- Check for mounted Docker socket (/var/run/docker.sock)
- Check for dangerous capabilities (SYS_ADMIN, SYS_PTRACE)
- Verify that the metadata service is accessible from within the container
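The first of these checks can be scripted from inside a compromised container. A read-only reconnaissance sketch; the privileged-mode heuristic (a near-full CapEff capability set) is an assumption, and exact bit counts vary by kernel version:

```python
import os

# Probe for container-escape indicators by reading local state only. This
# is reconnaissance, not an exploit; run on a plain host it simply reports
# that host's own state.
def escape_indicators():
    caps = ""
    try:
        with open("/proc/self/status") as f:   # absent outside Linux
            for line in f:
                if line.startswith("CapEff:"):
                    caps = line.split()[1]
    except OSError:
        pass
    cap_bits = bin(int(caps, 16)).count("1") if caps else 0
    return {
        "docker_socket": os.path.exists("/var/run/docker.sock"),
        "cap_eff": caps,
        # Heuristic: a near-full capability set suggests --privileged
        "looks_privileged": cap_bits >= 38,
    }
```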
29.5.3 Infrastructure as Code (IaC) Review
Modern cloud deployments are defined in code using tools like Terraform, CloudFormation, and ARM templates. Reviewing IaC for security misconfigurations before deployment is a proactive testing approach:
# Scan Terraform files with tfsec
tfsec ./terraform/
# Scan CloudFormation with cfn-nag
cfn_nag_scan --input-path ./cloudformation/medsecure-stack.yaml
# Scan with Checkov (supports multiple IaC frameworks)
checkov -d ./infrastructure/
Common IaC findings include security groups allowing 0.0.0.0/0 ingress, unencrypted databases and storage, disabled logging, and overly permissive IAM policies.
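A minimal version of what tools like tfsec and Checkov do for the first of these findings can be written directly. The rule shape below is an illustrative stand-in for parsed IaC (for example, rules extracted from a Terraform plan JSON), not the Terraform schema itself:

```python
# Flag security-group ingress rules open to the whole internet, the most
# common IaC finding named above. Rule dictionaries here are an assumed,
# simplified representation of parsed IaC output.
def open_ingress_rules(rules):
    """Return (from_port, to_port) for rules reachable from anywhere."""
    findings = []
    for rule in rules:
        if ("0.0.0.0/0" in rule.get("cidr_blocks", [])
                or "::/0" in rule.get("ipv6_cidr_blocks", [])):
            findings.append((rule.get("from_port"), rule.get("to_port")))
    return findings
```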
29.6 Cloud-Native Security Testing Tools
A robust cloud security testing toolkit includes both offensive tools and defensive assessment frameworks.
29.6.1 Pacu — AWS Exploitation Framework
Pacu, developed by Rhino Security Labs, is often described as the Metasploit of AWS. It provides a modular framework for AWS exploitation:
# Install and launch Pacu
pip3 install pacu
pacu
# Set up a session and import keys
Pacu> set_keys
# Enter the access key ID and secret access key
# Run reconnaissance modules
Pacu> run iam__enum_users_roles_policies_groups
Pacu> run iam__enum_permissions
Pacu> run s3__bucket_finder -d medsecure
Pacu> run ec2__enum
# Check for privilege escalation paths
Pacu> run iam__privesc_scan
# Exploit identified escalation paths
Pacu> run iam__privesc_scan --exploit
# Enumerate Lambda functions and their configurations
Pacu> run lambda__enum
Pacu maintains a database of findings and supports session management, making it ideal for organized engagements.
29.6.2 ScoutSuite — Multi-Cloud Auditing
ScoutSuite provides a comprehensive assessment of cloud environments against security best practices:
# Install ScoutSuite
pip3 install scoutsuite
# Run against AWS (using configured credentials)
scout aws --profile medsecure-test
# Run against Azure
scout azure --cli
# Run against GCP
scout gcp --project-id medsecure-project
# ScoutSuite generates an HTML report with findings categorized by severity
ScoutSuite checks hundreds of configuration items across IAM, networking, storage, logging, and compute services. Its reports are excellent for communicating findings to clients.
29.6.3 Prowler — AWS Security Assessment
Prowler is an AWS security assessment tool that checks against CIS benchmarks and additional best practices:
# Run Prowler (v3 syntax)
prowler aws
# Run checks for specific services
prowler aws --services iam
prowler aws --services s3
prowler aws --services cloudtrail
# Output in various formats
prowler aws -M csv json html
# Check specific regions
prowler aws -f us-east-1 eu-west-1
Prowler covers over 300 checks and maps findings to compliance frameworks (CIS, GDPR, HIPAA, PCI DSS) — particularly valuable for MedSecure's healthcare compliance requirements.
29.6.4 Additional Tools
CloudMapper by Duo Security creates network diagrams of AWS environments, identifying publicly exposed services and overly permissive security groups.
Cartography by Lyft maps relationships between cloud resources, revealing attack paths through the infrastructure graph.
WeirdAAL (AWS Attack Library) provides additional AWS-specific attack modules.
ROADtools and AADInternals are essential for Azure AD testing.
CloudSploit offers open-source cloud security scanning across multiple providers.
ShopStack Scenario — Cloud Assessment
The ShopStack cloud assessment began with ScoutSuite, which identified 47 findings across their AWS account. Critical findings included three S3 buckets with public read access (one containing customer order data), CloudTrail logging disabled in two regions, and 15 security groups allowing unrestricted SSH access. The team then used Pacu to demonstrate exploitation paths: compromised developer credentials (found in a public GitHub repository) led to Lambda function code download, which contained database credentials in environment variables, enabling full access to the customer database. The remediation roadmap prioritized IAM key rotation, S3 bucket lockdown, and implementing AWS Organizations with Service Control Policies.
29.7 AWS-Specific Deep Dive
Given AWS's dominance in cloud market share, a deeper exploration of AWS-specific attack techniques is warranted. Many of the concepts translate to Azure and GCP, but the specific services, APIs, and configurations differ.
29.7.1 EC2 Instance Security Testing
EC2 instances are virtual machines that form the backbone of many AWS deployments. Security testing should cover:
Security Groups. Security groups act as virtual firewalls for EC2 instances. Common misconfigurations include allowing SSH (port 22) or RDP (port 3389) from 0.0.0.0/0 (the entire internet), allowing all traffic from 0.0.0.0/0 on all ports, overly broad egress rules that allow data exfiltration, and security groups that reference other permissive security groups in a chain.
# Enumerate security groups
aws ec2 describe-security-groups --query \
'SecurityGroups[?IpPermissions[?IpRanges[?CidrIp==`0.0.0.0/0`]]].[GroupId,GroupName,IpPermissions]' \
--output json
# Find instances with public IP addresses
aws ec2 describe-instances --query \
'Reservations[].Instances[?PublicIpAddress!=null].[InstanceId,PublicIpAddress,SecurityGroups[].GroupId]' \
--output table
# Check for key pairs
aws ec2 describe-key-pairs
User Data Scripts. EC2 instances can be launched with user data scripts that run on first boot. These scripts frequently contain secrets:
# Retrieve user data for an instance (requires permissions)
aws ec2 describe-instance-attribute --instance-id i-1234567890abcdef0 \
--attribute userData --output text --query 'UserData.Value' | base64 --decode
# User data often contains:
# - Database connection strings with passwords
# - API keys for external services
# - Configuration management secrets (Chef, Puppet, Ansible)
# - SSH private keys
# - Application secrets
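Once decoded, user data can be triaged with secret-pattern matching. A heuristic sketch; the patterns below are illustrative and far less complete than dedicated scanners such as trufflehog or gitleaks:

```python
import re

# Scan decoded EC2 user data for the secret categories listed above.
# Illustrative heuristics only; real engagements use fuller rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "connection_string": re.compile(r"[a-z]+://[^\s:@/]+:[^\s@/]+@[^\s/]+", re.I),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?:password|passwd|secret)\s*[=:]\s*\S+", re.I),
}

def scan_user_data(text):
    """Return the sorted list of secret categories found in the text."""
    return sorted(name for name, rx in SECRET_PATTERNS.items() if rx.search(text))
```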
EBS Volumes. Elastic Block Store volumes can contain sensitive data. Unencrypted EBS volumes are a common finding, and old snapshots may contain data from previous deployments:
# List EBS volumes and check encryption status
aws ec2 describe-volumes --query \
'Volumes[?Encrypted==`false`].[VolumeId,Size,State,Attachments[0].InstanceId]' \
--output table
# List EBS snapshots (may reveal old/forgotten data)
aws ec2 describe-snapshots --owner-ids self --query \
'Snapshots[?Encrypted==`false`].[SnapshotId,VolumeSize,StartTime,Description]' \
--output table
# Check for public snapshots (shared with all AWS accounts)
aws ec2 describe-snapshots --restorable-by-user-ids all \
--owner-ids 123456789012 --output json
29.7.2 RDS and Database Security
Relational Database Service (RDS) instances require specific security testing:
# List RDS instances and check for public accessibility
aws rds describe-db-instances --query \
'DBInstances[].[DBInstanceIdentifier,Engine,PubliclyAccessible,StorageEncrypted,Endpoint.Address]' \
--output table
# Check for unencrypted databases
aws rds describe-db-instances --query \
'DBInstances[?StorageEncrypted==`false`].[DBInstanceIdentifier,Engine]' \
--output table
# List snapshots, then check each one's sharing attributes
aws rds describe-db-snapshots --query \
'DBSnapshots[].[DBSnapshotIdentifier,Engine,SnapshotType]' \
--output table
# A snapshot is public when its "restore" attribute includes the value "all"
aws rds describe-db-snapshot-attributes --db-snapshot-identifier mydb-snapshot
Common RDS security issues include publicly accessible instances (accessible from the internet without VPN), unencrypted storage and connections, default or weak database credentials, open security groups allowing database access from broad IP ranges, and automated backups stored without encryption.
29.7.3 API Gateway and Lambda Security Chain
API Gateway fronting Lambda functions is a common serverless architecture. The security testing chain includes:
API Gateway Configuration:
# List APIs
aws apigateway get-rest-apis
aws apigatewayv2 get-apis # For HTTP and WebSocket APIs
# Get API resources and methods
aws apigateway get-resources --rest-api-id abc123def
aws apigateway get-method --rest-api-id abc123def \
--resource-id xyz789 --http-method GET
# Check for API keys and usage plans
aws apigateway get-api-keys --include-values
aws apigateway get-usage-plans
# Look for custom authorizers
aws apigateway get-authorizers --rest-api-id abc123def
API Gateway Testing Points:
- Endpoints without authentication (missing authorizer)
- API key in headers (easily leaked in logs and browser history)
- Missing request validation (body, query string, headers)
- Overly permissive CORS configuration
- Missing rate limiting/throttling
- Stage variables containing secrets
- WAF rules that can be bypassed
Lambda-Specific Concerns:
# List Lambda functions with their configurations
aws lambda list-functions --query \
'Functions[].[FunctionName,Runtime,Handler,Environment.Variables]' \
--output json
# Check function policies (who can invoke the function)
aws lambda get-policy --function-name medsecure-api
# Check function URL configuration (direct invocation without API Gateway)
aws lambda list-function-url-configs --function-name medsecure-api
Lambda environment variables are a particularly rich target. Developers frequently store database credentials, API keys, encryption keys, and other secrets as environment variables because it is convenient. While AWS offers integration with Secrets Manager and SSM Parameter Store, many organizations take the simpler but less secure approach of direct environment variable storage.
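A quick way to triage environment variables at scale is to filter for secret-suggesting names. This sketch runs against a fabricated configuration file standing in for get-function-configuration output; the function name and variables are invented:

```shell
# Simulated get-function-configuration output; live use pipes the AWS CLI instead
cat > /tmp/lambda-config.json <<'EOF'
{"FunctionName": "orders-api", "Environment": {"Variables": {"DB_PASSWORD": "s3cr3t", "LOG_LEVEL": "info"}}}
EOF
# Flag variable names that suggest embedded secrets
grep -oE '"[A-Z_]*(PASSWORD|SECRET|TOKEN|KEY)[A-Z_]*"' /tmp/lambda-config.json
```

A name-based filter produces false negatives, of course; manual review of anything that looks like a connection string is still warranted.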
29.7.4 CloudTrail and Logging Evasion
Understanding how cloud activity is logged helps both attackers and defenders:
# Check CloudTrail configuration
aws cloudtrail describe-trails --query \
'trailList[].[Name,S3BucketName,IsMultiRegionTrail,LogFileValidationEnabled,IsLogging]' \
--output table
# Check if CloudTrail is actually logging
aws cloudtrail get-trail-status --name default-trail
# Check for event selectors (what types of events are logged)
aws cloudtrail get-event-selectors --trail-name default-trail
Logging Gaps to Test For:
- CloudTrail not enabled in all regions (attackers operate in unexpected regions)
- Data events not logged (S3 object-level access, Lambda invocations)
- Management events filtered (some API calls not recorded)
- Log file validation not enabled (logs can be tampered with)
- CloudTrail logs stored in an accessible S3 bucket (logs themselves become a target)
- No CloudWatch alarms on critical events (compromise goes undetected)
- GuardDuty not enabled (no automated threat detection)
29.7.5 Cross-Account Access and Trust
Many organizations use multiple AWS accounts (development, staging, production) with cross-account trust relationships. These relationships can create escalation paths:
# List roles that can be assumed from other accounts
aws iam list-roles --query \
'Roles[?AssumeRolePolicyDocument.Statement[?Principal.AWS!=`arn:aws:iam::CURRENT_ACCOUNT:root`]].[RoleName,AssumeRolePolicyDocument]' \
--output json
# Check for overly permissive trust policies
# Dangerous: Principal: "*" or Principal: "arn:aws:iam::*:root"
# These allow any AWS account to assume the role
Cross-account attack patterns include compromising a low-security development account and using trust relationships to pivot to production, exploiting roles that trust all accounts in an organization without additional conditions, and leveraging confused deputy vulnerabilities in third-party integrations.
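As a concrete illustration, the sketch below writes an intentionally dangerous trust policy to a file (the path and policy are invented for the example) and shows a quick wildcard-principal triage over harvested trust documents. A safer cross-account policy pins the principal to a specific account ARN and adds an sts:ExternalId condition to block confused deputy abuse:

```shell
# An intentionally dangerous trust policy: any AWS account may assume this role
cat > /tmp/trust.json <<'EOF'
{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"*"},"Action":"sts:AssumeRole"}]}
EOF
# Quick triage over collected AssumeRolePolicyDocuments: count wildcard principals
grep -c '"AWS":"\*"' /tmp/trust.json
```

In practice you would run the grep across every trust document dumped by the list-roles query above, then inspect each hit manually.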
Blue Team Perspective — AWS Security Best Practices Checklist
Defenders should implement a comprehensive AWS security baseline: enable CloudTrail in all regions with log file validation, enable GuardDuty for automated threat detection, enable AWS Config for configuration compliance monitoring, use AWS Organizations with Service Control Policies (SCPs) to enforce guardrails, implement VPC flow logs for network visibility, use AWS Security Hub for centralized security findings, enable IAM Access Analyzer for external access detection, and rotate all access keys on a regular schedule.
29.8 Azure and GCP Security Testing
While AWS dominates the cloud market, many organizations use Azure and GCP, often in combination. Each provider has unique security characteristics.
29.8.1 Azure-Specific Testing
Azure's security model centers around Azure Active Directory (Azure AD, now called Microsoft Entra ID) for identity management:
Azure AD Enumeration:
# Using Azure CLI
az login # Authenticate
az account list # List subscriptions
az ad user list # List Azure AD users
az ad group list # List groups
az ad sp list --all # List service principals
# Using ROADtools for comprehensive Azure AD enumeration
roadrecon auth -u user@target.com -p password
roadrecon gather # Collect Azure AD data
roadrecon gui # Launch interactive analysis interface
# Using AADInternals (PowerShell)
Import-Module AADInternals
Get-AADIntLoginInformation -UserName "user@target.com"
Get-AADIntTenantID -Domain "target.com"
Azure Storage Testing:
# List storage accounts
az storage account list --query '[].{Name:name,Kind:kind,HTTPSOnly:enableHttpsTrafficOnly}'
# Test for public blob access
az storage container list --account-name targetsa --output table
curl "https://targetsa.blob.core.windows.net/public-container?restype=container&comp=list"
# Check for shared access signatures in web traffic and source code
# SAS tokens look like: ?sv=2021-06-08&ss=b&srt=sco&sp=rwdlacitfx&se=2024-12-31...
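A SAS token is a bearer credential: anyone holding the URL receives the access it grants. A rough extraction pattern, demonstrated against a fabricated traffic log line:

```shell
# Simulated HAR/log line; in practice grep exported browser traffic and source trees
echo 'GET https://targetsa.blob.core.windows.net/backups?sv=2021-06-08&ss=b&sig=abc123' > /tmp/traffic.log
# Extract SAS-shaped query strings (version field through the signature)
grep -oE 'sv=[0-9-]+&[^" ]*sig=[A-Za-z0-9%+/=]+' /tmp/traffic.log
```

Any hit should be checked for expiry (the se= field) and permission scope (the sp= field) before being reported.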
Azure Resource Manager (ARM) Testing:
# List all resources in a subscription
az resource list --output table
# Check for resources without locks (can be deleted)
az lock list --resource-group medsecure-prod
# Review role assignments
az role assignment list --all --output table
# Check for custom role definitions with excessive permissions
az role definition list --custom-role-only true
29.8.2 GCP-Specific Testing
GCP's security model uses IAM policies at organization, folder, project, and resource levels:
# Using gcloud CLI
gcloud auth list # List authenticated accounts
gcloud projects list # List projects
gcloud iam service-accounts list # List service accounts
# Check IAM policy at project level
gcloud projects get-iam-policy PROJECT_ID
# List service account keys (potential credential exposure)
gcloud iam service-accounts keys list --iam-account SA_EMAIL
# Check for public resources
gsutil ls -la gs://bucket-name # Cloud Storage
gcloud compute instances list --filter="networkInterfaces[0].accessConfigs[0].natIP:*" # Public VMs
# Check for metadata server access
# GCP metadata requires Metadata-Flavor: Google header
curl -H "Metadata-Flavor: Google" \
"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
GCP-Specific Vulnerabilities:
- Service account key files stored in code repositories
- Overly permissive primitive roles (Owner, Editor) assigned at project level
- Default compute service account with Editor role (very permissive)
- Firebase databases with insecure rules allowing public read/write
- Cloud Functions with publicly accessible HTTP triggers without authentication
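Service account key files are easy to fingerprint because every key carries the same JSON fields. This sketch plants a fabricated key in a scratch directory and hunts for it the way you would scan a cloned repository:

```shell
# Simulated committed service-account key; real keys carry these exact JSON fields
mkdir -p /tmp/repo
cat > /tmp/repo/sa-key.json <<'EOF'
{"type": "service_account", "project_id": "demo-project", "private_key_id": "abc123", "client_email": "ci@demo-project.iam.gserviceaccount.com"}
EOF
# Hunt a checkout for committed keys by their telltale "type" field
grep -rl '"type": *"service_account"' /tmp/repo
```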
29.9 Cloud Attack Chains
Real-world cloud attacks rarely exploit a single misconfiguration. They typically involve chains of weaknesses that escalate from initial access to full compromise.
29.9.1 Anatomy of a Cloud Attack Chain
A typical cloud attack chain follows a pattern aligned with the MITRE ATT&CK Cloud Matrix:
1. Initial Access. The attacker gains a foothold through exposed credentials (found in GitHub repositories, configuration files, or via SSRF), phishing for cloud console access targeting cloud administrators, exploiting vulnerabilities in public-facing cloud-hosted applications, or compromising third-party integrations with cross-account trust.
2. Reconnaissance. With initial access established, the attacker maps the environment by enumerating the identity's permissions using enumerate-iam or Pacu, discovering all services and resources in the account, identifying network architecture (VPCs, subnets, peering connections), locating sensitive data stores, and mapping trust relationships between services, roles, and accounts.
3. Privilege Escalation. The attacker exploits IAM misconfigurations: creating new policy versions with administrator permissions, attaching administrative policies to the compromised identity, passing high-privilege roles to services under attacker control, assuming roles through trust chains, or modifying security group rules to expand network access.
4. Lateral Movement. With elevated privileges, the attacker expands access by assuming cross-account roles, traversing VPC peering connections, exploiting shared credentials across services, pivoting through container orchestration to additional workloads, and accessing databases or storage in connected environments.
5. Data Access and Exfiltration. The attacker accesses sensitive data through direct download from S3 or databases, creating snapshots shareable with attacker accounts, exfiltrating through authorized channels like S3 replication, accessing secrets in Secrets Manager or Parameter Store, and querying databases using discovered credentials.
6. Persistence. Sophisticated attackers establish persistent access by creating backdoor IAM users or roles, deploying scheduled Lambda functions for credential rotation, modifying logging to conceal activity, creating cross-account trust to attacker accounts, and embedding access in CI/CD pipeline configurations.
29.9.2 Example Attack Chain: SSRF to Account Compromise
This chain mirrors the Capital One breach pattern:
Step 1: Discover SSRF vulnerability in web application
Step 2: Use SSRF to query EC2 metadata service (IMDSv1)
Step 3: Retrieve temporary IAM credentials from the role
Step 4: Enumerate permissions — discover broad S3 access
Step 5: List all S3 buckets and identify sensitive data
Step 6: Download data and discover additional credentials in configuration files
Step 7: Use database credentials from S3 to access RDS instances
Step 8: Exfiltrate customer/patient data from production databases
Each step in this chain represents both an exploitation opportunity and a potential detection point. Defenders who implement IMDSv2, least-privilege IAM, S3 access logging, and database activity monitoring can break this chain at multiple points.
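Steps 2 and 3 can be rehearsed offline. The credentials JSON below is fabricated but mirrors the shape IMDSv1 returns; in a live chain the request is an SSRF-forced GET to the 169.254.169.254 metadata address (IMDSv2 blocks this by requiring a session token header):

```shell
# Simulated IMDSv1 credentials response (step 3 of the chain); the live request is
# an SSRF-forced GET to http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE
creds='{"AccessKeyId":"ASIAEXAMPLE","SecretAccessKey":"wJalrEXAMPLE","Token":"IQoJEXAMPLE","Expiration":"2026-01-01T00:00:00Z"}'
# Pull out the three values an attacker exports as AWS_* environment variables
echo "$creds" | grep -oE '"(AccessKeyId|SecretAccessKey|Token)":"[^"]*"'
```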
29.9.3 Example Attack Chain: CI/CD Pipeline Compromise
Step 1: Discover exposed Jenkins/GitLab instance via subdomain enumeration
Step 2: Authenticate with default credentials or exploit known CVE
Step 3: Extract AWS credentials from pipeline environment variables
Step 4: Validate credentials — discover ECR push and ECS update permissions
Step 5: Build and push malicious container image to ECR registry
Step 6: Update ECS service definition to use the malicious image
Step 7: Malicious container inherits the ECS task role with production access
Step 8: Access production databases, S3 buckets, and secrets via task role
29.9.4 Example Attack Chain: GitHub to Cloud Account
Step 1: Search GitHub for exposed AWS access keys using trufflehog or gitleaks
Step 2: Validate discovered credentials with aws sts get-caller-identity
Step 3: Enumerate permissions — developer has iam:PassRole and lambda:*
Step 4: Create Lambda function with high-privilege production role attached
Step 5: Invoke Lambda to enumerate production S3 buckets and RDS instances
Step 6: Download customer data and establish persistence via backdoor IAM user
ShopStack Scenario — Complete Attack Chain
The ShopStack assessment demonstrated a full attack chain from a developer's GitHub repository. The team found an AWS access key in a committed .env file. Though the developer had "rotated" the key, the old key remained active. Using Pacu, they escalated through iam:PassRole and lambda:CreateFunction, gaining production access that reached 2.3 million customer records in 47 minutes. This finding led to mandatory access key rotation, SCPs restricting iam:PassRole, and automated GitHub scanning with pre-commit hooks.
29.10 Credential Hunting and Secret Discovery
One of the most impactful activities during a cloud security assessment is hunting for exposed credentials and secrets. Cloud environments are particularly vulnerable to credential exposure because of the sheer number of secrets required to operate — API keys, database passwords, service account tokens, TLS certificates, OAuth client secrets, and cloud provider access keys. These secrets are frequently committed to source code repositories, baked into container images, stored in CI/CD pipeline configurations, or left in cloud service configuration files.
29.10.1 Where Credentials Hide
Source Code Repositories. Developers frequently commit credentials to Git repositories, either directly in application code or in configuration files like .env, application.yml, config.json, or docker-compose.yml. Even when developers "delete" a credential from a file, the original commit containing the secret remains in Git history unless the repository is explicitly cleaned with tools like BFG Repo Cleaner or git filter-branch.
# Search for AWS keys in a cloned repository
grep -rEn "AKIA[0-9A-Z]{16}" . # AWS Access Key ID pattern (-E enables the {16} quantifier)
grep -rn "aws_secret_access_key" .
grep -rn "aws_access_key_id" .
# Search Git history for deleted secrets
git log -p --all -S "AKIA" -- . | head -100
git log -p --all -S "password" -- "*.env" "*.yml" "*.json"
# Automated scanning with trufflehog
trufflehog git file://./repo --only-verified
trufflehog github --org target-organization
# Automated scanning with gitleaks
gitleaks detect --source ./repo --report-format json --report-path findings.json
gitleaks detect --source ./repo --log-opts="--all" # Include all branches and history
Container Images. Docker images frequently contain embedded secrets. Each layer of a Docker image is independently accessible, so even if a secret is deleted in a later layer, it persists in the earlier layer where it was added. Container registries (ECR, ACR, GCR) should be scanned for exposed secrets.
# Pull and analyze container images from ECR
aws ecr list-images --repository-name medsecure-app
aws ecr get-login-password | docker login --username AWS --password-stdin ACCOUNT.dkr.ecr.REGION.amazonaws.com
docker pull ACCOUNT.dkr.ecr.REGION.amazonaws.com/medsecure-app:latest
# Extract and scan each layer
docker save medsecure-app:latest -o image.tar
mkdir image_layers && tar -xf image.tar -C image_layers
# Each layer directory contains a layer.tar with filesystem changes
# Use dive for interactive layer analysis
dive ACCOUNT.dkr.ecr.REGION.amazonaws.com/medsecure-app:latest
CI/CD Pipeline Configurations. Jenkins, GitHub Actions, GitLab CI, and CircleCI configurations often reference secrets through environment variables. If these secrets are not stored in the platform's secrets manager (and instead hardcoded in pipeline definitions), they are trivially accessible to anyone with repository read access.
Cloud Service Configurations. AWS Systems Manager Parameter Store, AWS Secrets Manager, Azure Key Vault, and GCP Secret Manager are the intended secret storage services. However, organizations frequently store secrets in less secure locations: EC2 user data scripts, Lambda environment variables (visible in the AWS Console), CloudFormation templates, and Terraform state files (which store plaintext values for all resources, including secrets).
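Terraform state is worth calling out with an example. The fragment below is fabricated but mirrors how state records resource attributes; the password sits in plaintext regardless of how carefully it was handled in the .tf source:

```shell
# Simulated state fragment; terraform apply writes attributes like this in plaintext
cat > /tmp/terraform.tfstate <<'EOF'
{"resources":[{"type":"aws_db_instance","instances":[{"attributes":{"username":"admin","password":"Sup3rSecret!"}}]}]}
EOF
# State files belong in credential-hunting scope just like source code
grep -o '"password":"[^"]*"' /tmp/terraform.tfstate
```

This is why state files in S3 buckets, CI artifacts, and developer laptops are high-value targets during an assessment.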
29.10.2 Automated Credential Scanning
# Comprehensive credential scanning with trufflehog
# Scan a GitHub organization
trufflehog github --org medsecure --token ghp_YOURTOKEN
# Scan S3 buckets for credentials
trufflehog s3 --bucket medsecure-config-bucket
# Scan a CI/CD system
trufflehog circleci --token CIRCLE_TOKEN
# AWS-native credential discovery
# List all IAM access keys and their ages
aws iam generate-credential-report
aws iam get-credential-report --output text --query 'Content' | base64 -d > cred_report.csv
# Check for credentials in EC2 user data
for instance in $(aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --output text); do
echo "=== $instance ==="
aws ec2 describe-instance-attribute --instance-id $instance --attribute userData \
--query 'UserData.Value' --output text | base64 -d 2>/dev/null
done
# Check Lambda environment variables for secrets
for func in $(aws lambda list-functions --query 'Functions[].FunctionName' --output text); do
echo "=== $func ==="
aws lambda get-function-configuration --function-name $func \
--query 'Environment.Variables' --output json
done
# Check SSM parameters (some may not be SecureString)
aws ssm describe-parameters --query 'Parameters[?Type!=`SecureString`]'
29.10.3 Post-Exploitation with Discovered Credentials
When credentials are discovered during testing, the next steps depend on the credential type:
AWS Access Keys: Use aws sts get-caller-identity to determine the associated identity, then enumerate permissions with tools like enumerate-iam or Pacu's iam__enum_permissions module. Check for privilege escalation paths using the techniques from Section 29.3.
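Before spending API calls (and generating CloudTrail noise), a quick offline shape check helps triage candidate keys. The key below is AWS's documented placeholder, not a live credential:

```shell
# AKIAIOSFODNN7EXAMPLE is AWS's documented placeholder key ID
key="AKIAIOSFODNN7EXAMPLE"
# AKIA = long-term user key, ASIA = temporary STS key; both are 20 characters
echo "$key" | grep -Eq '^(AKIA|ASIA)[0-9A-Z]{16}$' && echo "plausible access key ID"
```

Anything that passes the shape check still needs live validation with aws sts get-caller-identity, as described above.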
Database Credentials: Attempt connection to RDS, DocumentDB, or other database services. Verify whether the credentials provide access to production data, and document the data exposure without exfiltrating actual sensitive records.
API Keys and Tokens: Determine the associated service, validate the token's scope and permissions, and document the access level provided.
Service Account Keys: For GCP service account JSON keys or Azure service principal credentials, authenticate and enumerate the permissions granted to the service account. These often have broader permissions than individual user accounts.
Blue Team Perspective — Credential Security
Defenders should implement a comprehensive secrets management strategy: use cloud-native secrets managers (AWS Secrets Manager, Azure Key Vault, GCP Secret Manager) for all application secrets, enable automated secret rotation, implement pre-commit hooks that scan for secrets before code is pushed (using tools like detect-secrets or gitleaks), use GitHub's secret scanning alerts for repositories, regularly audit IAM access key ages and disable unused keys, enforce IMDSv2 on all EC2 instances to prevent SSRF-based credential theft, and scan container images in CI/CD pipelines before pushing to registries. Organizations should also establish an incident response playbook specifically for credential exposure events.
29.11 Testing Methodology and Reporting
Cloud security testing requires a structured methodology adapted to the cloud context.
29.11.1 Pre-Engagement Considerations
Before testing begins:
- Verify cloud provider penetration testing policies
- Define scope by AWS account IDs, Azure subscription IDs, or GCP project IDs
- Specify regions in scope
- Identify services in scope versus out of scope
- Determine whether production environments are included
- Establish emergency contacts and rollback procedures
- Confirm whether credentials will be provided or must be discovered
29.11.2 Testing Phases
Phase 1: External Reconnaissance (No Credentials)
- Cloud footprint discovery
- S3/Blob/GCS bucket enumeration
- Public-facing service identification
- Credential exposure hunting (GitHub, Pastebin, etc.)
Phase 2: Authenticated Enumeration (With Credentials)
- IAM permission enumeration
- Service and resource inventory
- Network architecture mapping
- Configuration assessment (ScoutSuite/Prowler)
Phase 3: Exploitation and Escalation
- IAM privilege escalation
- Storage access testing
- Service-specific exploitation
- Cross-service attack chains
Phase 4: Post-Exploitation
- Data access assessment
- Persistence mechanism testing
- Lateral movement mapping
- Compliance impact evaluation
29.11.3 Cloud-Specific Reporting
Cloud security findings require context that traditional vulnerability reports may not include:
- The specific cloud service and configuration at fault
- The shared responsibility model context (is this a customer or provider issue?)
- The blast radius — what other resources could be compromised through this finding
- Remediation steps using cloud-native controls
- Compliance mapping (CIS, HIPAA, PCI DSS, SOC 2)
- Terraform/CloudFormation remediation code where applicable
29.12 Applying to MedSecure: Cloud Security Testing Strategy
MedSecure's AWS environment hosts a patient portal (Elastic Beanstalk), patient data (RDS and S3), medical device data ingestion (IoT Core and Lambda), and integration APIs (API Gateway and Lambda). Their cloud security testing plan must address:
- IAM Assessment: Enumerate all IAM users, roles, and policies. Identify over-permissive policies, unused credentials, and privilege escalation paths. Pay special attention to the IoT device roles and their access boundaries.
- Data Storage Audit: Test all S3 buckets for public access, cross-account access, and encryption status. Verify that patient data (PHI under HIPAA) is encrypted at rest with customer-managed KMS keys. Check RDS encryption and access controls.
- Network Architecture: Map VPC configurations, security groups, and NACLs. Test for unnecessary public exposure. Verify VPC flow logging is enabled. Check VPN configurations for on-premises connectivity.
- Serverless Security: Download and review all Lambda function code for injection vulnerabilities, hardcoded secrets, and excessive permissions. Test API Gateway configurations for authentication bypasses.
- Logging and Monitoring: Verify CloudTrail is enabled in all regions, S3 access logging is active, VPC flow logs are configured, and CloudWatch alarms exist for critical events.
- Compliance Validation: Run Prowler with HIPAA checks enabled to identify compliance gaps specific to healthcare data protection.
Lab Exercise — Setting Up a Cloud Testing Environment
For your home lab, create a free-tier AWS account dedicated to security testing. Deploy intentionally vulnerable environments like CloudGoat (Rhino Security Labs), flAWS.cloud (by Scott Piper), or DVCA (Damn Vulnerable Cloud Application). These provide safe, legal environments to practice cloud exploitation techniques. Never test against cloud resources you do not own.
29.13 Cloud Forensics and Evidence Collection
When cloud security testing reveals active compromise or when conducting purple team exercises, understanding cloud forensics capabilities is essential. Cloud forensics differs fundamentally from traditional disk forensics because infrastructure is ephemeral, shared, and distributed across regions.
29.13.1 AWS Forensic Evidence Sources
AWS provides several evidence sources that are critical for both offensive testing and incident investigation:
CloudTrail Logs record every API call made to AWS services, including the identity that made the call, the time, the source IP address, the request parameters, and the response. CloudTrail is the most important forensic evidence source in AWS.
# Search CloudTrail for specific API actions
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin \
--start-time 2026-01-01 --end-time 2026-02-27
# Search for IAM changes
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=CreateUser
# Search by user identity
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=Username,AttributeValue=compromised-user
# Advanced analysis with Athena (for S3-stored CloudTrail logs)
# Create an Athena table over CloudTrail logs for SQL-based querying
# This enables complex queries like:
# "Show all API calls from unusual source IPs in the last 30 days"
# "Identify all data exfiltration events (S3 GetObject) by unauthorized principals"
VPC Flow Logs capture network traffic metadata for all network interfaces in a VPC. While they do not capture packet contents, they reveal connection patterns, port scanning, lateral movement, and data exfiltration volumes.
S3 Access Logs record every request made to an S3 bucket, including anonymous requests. These logs reveal unauthorized access patterns that CloudTrail alone may miss because CloudTrail records API calls but S3 access logs capture direct object-level access.
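A fabricated record illustrates the triage; real records carry additional trailing fields. The requester field shows a dash for anonymous requests, which should never appear against a private bucket:

```shell
# One simulated S3 server-access-log record (owner bucket [time tz] ip requester reqid operation key)
echo 'da39a3ee medsecure-data [06/Feb/2026:00:00:38 +0000] 203.0.113.7 - 3E571EXAMPLE REST.GET.OBJECT patients.csv' > /tmp/s3access.log
# Requester "-" marks an anonymous request: print source IP, operation, and object key
awk '$6 == "-" {print $5, $8, $9}' /tmp/s3access.log
```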
GuardDuty Findings aggregate threat intelligence and anomaly detection across CloudTrail, VPC Flow Logs, and DNS logs. GuardDuty findings often provide the first indication of compromise — unusual API calls from new geographic locations, cryptocurrency mining detection, credential exfiltration patterns, and DNS queries to known command-and-control domains.
29.13.2 Forensic Acquisition Techniques
# Create a forensic snapshot of a compromised EC2 instance
# Step 1: Identify the instance and its volumes
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
--query 'Reservations[].Instances[].BlockDeviceMappings[]'
# Step 2: Create snapshots of all attached volumes
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
--description "Forensic snapshot - incident IR-2026-001" \
--tag-specifications 'ResourceType=snapshot,Tags=[{Key=Purpose,Value=Forensics},{Key=Incident,Value=IR-2026-001}]'
# Step 3: Share the snapshot with a forensic analysis account
aws ec2 modify-snapshot-attribute --snapshot-id snap-0123456789abcdef0 \
--attribute createVolumePermission \
--operation-type add --user-ids FORENSIC_ACCOUNT_ID
# Step 4: In the forensic account, create a volume and attach it
# to an analysis workstation for examination with standard forensic tools
# Step 5: Capture volatile data before instance termination
# Volatile state collection via SSM (if agent is installed); full memory imaging requires tools such as AVML or LiME
aws ssm send-command --instance-ids i-0123456789abcdef0 \
--document-name "AWS-RunShellScript" \
--parameters 'commands=["cat /proc/meminfo","netstat -tlnp","ps aux","cat /etc/shadow"]'
29.13.3 Anti-Forensics Awareness
Sophisticated attackers may attempt to cover their tracks in cloud environments:
- CloudTrail tampering: Disabling CloudTrail, deleting trail S3 buckets, or modifying trail configurations. Defenders should use CloudTrail log file validation and AWS Organizations trails that member accounts cannot modify.
- Log deletion: Removing CloudWatch log groups or S3 access log objects. Cross-account log archiving and S3 Object Lock prevent this.
- Credential rotation: After using stolen credentials, attackers may rotate them to invalidate incident responders' ability to track the compromised credentials.
- Resource deletion: Terminating compromised EC2 instances or deleting Lambda functions to destroy evidence. EBS snapshots and CloudTrail records persist even after resource deletion.
Blue Team Perspective — Forensic Readiness
Organizations should implement forensic readiness before incidents occur. Enable CloudTrail in all regions with log file validation enabled. Send all logs to a centralized, cross-account S3 bucket with Object Lock enabled (WORM — Write Once Read Many). Enable GuardDuty across all accounts. Configure VPC Flow Logs with maximum retention. Use AWS Organizations trails that cannot be disabled by member accounts. This preparation ensures that when a breach occurs — or when a penetration tester simulates one — the forensic evidence exists to trace the complete attack path.
29.14 Emerging Cloud Security Challenges
The cloud security landscape continues to evolve:
AI/ML Service Security. Cloud AI services (SageMaker, Azure ML, Vertex AI) introduce new attack surfaces: model poisoning through training data manipulation, inference API abuse, and model extraction attacks. SageMaker notebook instances, if misconfigured, can provide access to training data, model artifacts, and the IAM role attached to the notebook. Testing should include enumeration of ML-related resources, examination of notebook instance security configurations, and assessment of model endpoint authentication.
Cloud-Native Application Protection Platforms (CNAPP). The convergence of CSPM, CWPP (Cloud Workload Protection), and CIEM (Cloud Infrastructure Entitlement Management) into unified platforms reflects the increasing complexity of cloud security. For penetration testers, understanding what these platforms detect (and their blind spots) is essential for realistic testing. Many CNAPP tools excel at configuration assessment but struggle with runtime behavior analysis and custom attack chains.
Supply Chain Attacks via Cloud Services. Compromised container images in public registries, malicious Lambda layers, and typosquatting on cloud service names represent growing threats. The SolarWinds attack demonstrated how supply chain compromise can extend into cloud environments — attackers who compromised the Orion build system used SAML token forging to access Azure AD and Microsoft 365 environments. Testing should include assessment of third-party integrations, marketplace images, and shared Lambda layers.
Multi-Cloud Identity Federation. As organizations adopt multi-cloud strategies, federated identity configurations create complex trust relationships that are difficult to audit and easy to misconfigure. A misconfigured SAML federation between an on-premises identity provider and AWS can allow any authenticated internal user to assume highly privileged cloud roles. Testing should enumerate all identity federation trust relationships and verify that role assumption conditions are properly scoped.
Kubernetes in the Cloud. Managed Kubernetes services (EKS, AKS, GKE) add an additional layer of complexity. Misconfigured RBAC, exposed Kubernetes dashboards, privileged containers, and overly permissive pod security policies are common findings. The interaction between Kubernetes service accounts and cloud IAM roles (via IRSA in EKS, Workload Identity in GKE) creates additional privilege escalation paths that traditional tools may miss.
Data Residency and Sovereignty. Regulatory requirements increasingly mandate that data remain within specific geographic regions. Testing should verify that data replication, backup, and caching configurations do not inadvertently move regulated data across borders. For MedSecure, ensuring that patient data stays within authorized AWS regions is a HIPAA compliance requirement.
29.15 Cloud Security Testing Lab Setup
Setting up a proper cloud security testing lab environment is essential for developing skills without risking production systems.
29.15.1 Intentionally Vulnerable Cloud Environments
Several projects provide safe, legal practice environments:
CloudGoat (Rhino Security Labs) deploys intentionally vulnerable AWS environments with multiple scenarios covering IAM privilege escalation, SSRF exploitation, Lambda abuse, and EC2 compromise. Each scenario has a documented attack path and cleanup procedure.
```bash
# Install CloudGoat
git clone https://github.com/RhinoSecurityLabs/cloudgoat.git
cd cloudgoat
pip3 install -r requirements.txt

# Configure with your AWS profile
./cloudgoat.py config profile
./cloudgoat.py config whitelist --auto

# Deploy a scenario
./cloudgoat.py create iam_privesc_by_rollback

# After testing, clean up
./cloudgoat.py destroy iam_privesc_by_rollback
```
flAWS and flAWS2 (by Scott Piper) provide web-based challenges that teach AWS security through progressive difficulty levels. Available at flaws.cloud and flaws2.cloud, they cover S3 misconfiguration, metadata service exploitation, and identity federation attacks.
TerraGoat (by Bridgecrew) provides misconfigured Terraform templates that deploy vulnerable infrastructure across AWS, Azure, and GCP.
AWSGoat and AzureGoat (by INE) provide comprehensive vulnerable environments for their respective cloud platforms.
29.15.2 Safe Testing Practices
When setting up cloud testing labs, follow these practices:
- Use dedicated accounts. Never test in accounts that contain production workloads or real data. Create separate AWS accounts (within an Organization), Azure subscriptions, or GCP projects exclusively for security testing.
- Set budget alerts. Cloud testing can incur unexpected costs, especially if resources are not cleaned up. Set billing alerts at low thresholds ($10, $25, $50) to catch runaway costs.
- Use Service Control Policies. In AWS Organizations, use SCPs to restrict the testing account to specific regions and services, preventing accidental impact on other accounts.
- Clean up after every session. Vulnerable cloud resources left running are targets for real attackers. Always destroy test environments when finished. Use `terraform destroy` or CloudGoat's cleanup commands.
- Enable MFA on the root account. Even on testing accounts, protect the root account with MFA to prevent account takeover.
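The Service Control Policy practice above can be sketched as a region-restriction SCP. The region list is illustrative; `aws:RequestedRegion` is the AWS-documented condition key for this pattern, and global services such as IAM and STS are excluded because they are not region-scoped:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyOutsideAllowedRegions",
    "Effect": "Deny",
    "NotAction": ["iam:*", "organizations:*", "sts:*"],
    "Resource": "*",
    "Condition": {
      "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "us-west-2"]}
    }
  }]
}
```

Attach a policy like this to the testing account's organizational unit so that experiments cannot accidentally spin up resources in unmonitored regions.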
Summary
Cloud security testing is a specialized discipline that requires deep understanding of cloud service models, shared responsibility, and the unique attack surfaces that cloud environments present. In this chapter, we explored the fundamentals of cloud security architecture, walked through cloud-specific reconnaissance and enumeration techniques, examined the critical domain of IAM misconfigurations and privilege escalation, investigated storage misconfigurations that have caused major breaches, assessed serverless and container security, and learned to use cloud-native tools like Pacu, ScoutSuite, and Prowler.
The cloud is not inherently less secure than on-premises infrastructure — but it is differently secure. The same features that make cloud computing powerful (programmatic access, rapid provisioning, global scale) also create opportunities for misconfiguration at scale. A single overly permissive IAM policy can expose an entire organization's data. A single public S3 bucket can leak millions of records. The credential hunting techniques we covered demonstrate how secrets spread across repositories, container images, and CI/CD pipelines, creating exposure that traditional perimeter-based testing would never discover.
We also examined cloud forensics and evidence collection — understanding how to trace attacker activity through CloudTrail, VPC Flow Logs, and GuardDuty findings is essential not only for incident response but also for demonstrating the full impact of findings during penetration tests. The ability to show a complete attack narrative from initial access through privilege escalation to data exfiltration, documented with forensic evidence, makes cloud security reports compelling and actionable.
As ethical hackers, our role in cloud security testing is to find these misconfigurations before attackers do, demonstrate their impact through responsible exploitation, and provide actionable remediation guidance that leverages cloud-native controls. The cloud security landscape evolves rapidly — new services launch weekly, configurations change daily, and the attack surface shifts continuously. Maintaining current knowledge of cloud provider services, security features, and known attack patterns is an ongoing professional responsibility. In the next chapter, we will shift our focus to another rapidly growing attack surface: mobile applications.
Next: Chapter 30 — Mobile Application Security