
Learning Objectives

  • Apply NIST, SANS, and PICERL incident response frameworks to real-world scenarios
  • Conduct memory forensics using Volatility and related tools
  • Perform disk and file system forensic analysis while maintaining chain of custody
  • Analyze network traffic and logs for indicators of compromise
  • Understand malware analysis fundamentals including static and dynamic analysis
  • Integrate IR and forensics skills with ethical hacking knowledge

Chapter 37: Incident Response and Digital Forensics

"Incident response is not just about technology. It's about people, processes, and having a plan before you need one." -- NIST SP 800-61

"In digital forensics, every bit tells a story. Our job is to listen carefully enough to hear it." -- Harlan Carvey

On June 14, 2016, CrowdStrike published a report that would reverberate through geopolitics for years to come. The cybersecurity firm had been called in to investigate a breach at the Democratic National Committee (DNC), and their forensic analysis attributed the intrusion to two Russian intelligence groups: Fancy Bear (APT28, associated with the GRU) and Cozy Bear (APT29, associated with the SVR). The attribution was based on meticulous digital forensic analysis -- examining malware artifacts, command-and-control infrastructure, tactics and techniques, and operational patterns that matched known Russian intelligence operations. This investigation demonstrated the power and the weight of digital forensics: the ability to reconstruct events, identify adversaries, and provide evidence that can withstand scrutiny.

Incident response and digital forensics are two sides of the same coin. Incident response is the organizational process of preparing for, detecting, containing, eradicating, and recovering from security incidents. Digital forensics is the scientific discipline of collecting, preserving, analyzing, and presenting digital evidence. Together, they form the capability that allows organizations to understand what happened during a breach, limit the damage, recover operations, and prevent recurrence.

For ethical hackers, understanding incident response and forensics is essential for two reasons. First, it informs your offensive work: knowing how defenders investigate and respond helps you simulate realistic attacks and provide actionable recommendations. Second, many ethical hackers transition into or complement their work with incident response and forensic investigation. This chapter provides the comprehensive foundation you need for both.

37.1 Incident Response Frameworks

A structured approach to incident response ensures consistent, effective handling of security incidents regardless of their nature or severity.

37.1.1 NIST SP 800-61: Computer Security Incident Handling Guide

The National Institute of Standards and Technology's Special Publication 800-61 (Revision 2) provides the most widely referenced incident response framework. It defines four phases:

Phase 1: Preparation

Preparation is the continuous work of building incident response capability before incidents occur.

  • Incident Response Plan (IRP): Document that defines roles, responsibilities, communication procedures, and escalation criteria
  • Incident Response Team (IRT): Dedicated team or designated personnel trained for incident handling
  • Tools and resources: Jump kits, forensic workstations, network monitoring infrastructure, pre-authorized tools
  • Training and exercises: Regular tabletop exercises, technical drills, and lessons-learned reviews
  • Communication channels: Secure communication methods (out-of-band), contact lists, notification templates
  • Legal and compliance preparation: Pre-established relationships with legal counsel, law enforcement, regulatory contacts

Jump kit essentials:

| Category | Items |
|----------|-------|
| Hardware | Forensic laptop, write blockers, external drives, USB hubs, network tap |
| Software | Forensic boot media, Volatility, Autopsy, Wireshark, KAPE, FTK Imager |
| Documentation | Chain of custody forms, incident logs, evidence bags and labels |
| Network | Crossover cables, portable switch, wireless adapter, cellular hotspot |
| Reference | Checklists, playbooks, emergency contacts, credential vault |

Phase 2: Detection and Analysis

Detection involves identifying potential security incidents through multiple sources:

  • Security monitoring: SIEM alerts, EDR detections, IDS/IPS alerts
  • Log analysis: Authentication logs, application logs, network flow data
  • User reports: Help desk tickets, phishing reports, suspicious behavior reports
  • Threat intelligence: IOC matching, threat feed alerts, industry sharing
  • Automated detection: Behavioral analytics, anomaly detection, honey tokens

Analysis involves determining:

  1. Is this actually an incident? Distinguish true incidents from false positives
  2. What type of incident is it? Malware, unauthorized access, data breach, insider threat, denial of service
  3. What is the scope? How many systems are affected? What data is at risk?
  4. What is the severity? Based on impact to confidentiality, integrity, and availability
  5. What is the priority? Based on severity and business impact

Incident classification framework:

| Category | Description | Example |
|----------|-------------|---------|
| CAT 1 | Unauthorized access | Compromised admin credentials |
| CAT 2 | Denial of service | DDoS against public-facing services |
| CAT 3 | Malicious code | Ransomware deployment |
| CAT 4 | Improper usage | Policy violation, insider misuse |
| CAT 5 | Scans/Probes/Recon | Port scanning, vulnerability scanning |
| CAT 6 | Investigation | Potential incident under review |

Phase 3: Containment, Eradication, and Recovery

Containment limits the damage and prevents further spread:

  • Short-term containment: Immediate actions to stop active damage (isolating systems, blocking IPs, disabling accounts)
  • Long-term containment: Sustainable measures while preparing for eradication (network segmentation, enhanced monitoring, temporary workarounds)
  • Evidence preservation: Capture memory dumps, disk images, and log snapshots before containment changes the evidence

Eradication removes the threat from the environment:

  • Remove malware and attacker tools
  • Close exploited vulnerabilities
  • Reset compromised credentials
  • Remove unauthorized accounts and access
  • Patch or rebuild affected systems

Recovery restores normal operations:

  • Restore systems from clean backups or rebuild from known-good images
  • Implement additional monitoring for recurrence
  • Gradually restore services with validation at each step
  • Monitor for signs of re-compromise

Phase 4: Post-Incident Activity

Post-incident activity ensures the organization learns from every incident:

  • Lessons learned meeting: Within 1-2 weeks of incident closure. What happened? What went well? What could be improved?
  • Incident report: Formal documentation of the incident, response actions, and outcomes
  • Evidence retention: Archive evidence per legal and compliance requirements
  • Process improvements: Update plans, playbooks, and tools based on lessons learned
  • Metrics tracking: Track incident volume, MTTD, MTTR, and other KPIs

37.1.2 SANS PICERL Framework

The SANS Institute defines a six-phase incident response framework that provides more granular detail:

  1. Preparation: Same as NIST -- build capability before incidents occur
  2. Identification: Detect and validate potential incidents
  3. Containment: Limit the scope and impact of the incident
  4. Eradication: Remove the threat from the environment
  5. Recovery: Restore normal operations
  6. Lessons Learned: Document and improve

The primary difference from NIST is that SANS separates Containment, Eradication, and Recovery into distinct phases, emphasizing that each requires different skills and approaches.

37.1.3 Incident Response Playbooks

Playbooks provide step-by-step procedures for handling specific incident types:

Ransomware playbook (abbreviated):

DETECTION:
□ EDR alert for file encryption behavior
□ User reports of encrypted files or ransom note
□ Volume shadow copy deletion detected
□ Abnormal file system activity in monitoring

INITIAL RESPONSE (First 60 minutes):
□ Activate incident response team
□ Isolate affected systems from network (DO NOT power off)
□ Capture memory image before any other action
□ Document initial scope (systems, users, data affected)
□ Notify management and legal

CONTAINMENT:
□ Block C2 domains/IPs at firewall
□ Disable affected user accounts
□ Isolate affected network segments
□ Identify and block lateral movement paths
□ Deploy emergency EDR rules to detect encryption behavior

INVESTIGATION:
□ Identify initial access vector
□ Determine scope of encryption
□ Identify ransomware family
□ Check for data exfiltration (double extortion)
□ Timeline of adversary activity

ERADICATION:
□ Remove ransomware artifacts
□ Patch exploited vulnerabilities
□ Reset all potentially compromised credentials
□ Verify no persistence mechanisms remain

RECOVERY:
□ Restore from clean backups (verify backup integrity)
□ Rebuild systems that cannot be verified as clean
□ Implement enhanced monitoring
□ Gradual service restoration with validation

POST-INCIDENT:
□ Lessons learned meeting
□ Update playbooks and procedures
□ Report to regulatory authorities if required
□ Consider law enforcement notification

Blue Team Perspective: Playbooks should be living documents, updated after every incident and every exercise. Keep them in a location accessible during incidents (including offline copies -- your wiki might be encrypted by ransomware). Practice playbooks regularly through tabletop exercises and simulation drills.

37.1.4 Incident Response Metrics

Effective IR programs track key performance indicators:

  • Mean Time to Detect (MTTD): Time from incident start to detection
  • Mean Time to Respond (MTTR): Time from detection to containment
  • Mean Time to Recover (MTTRC): Time from containment to full recovery
  • Incidents by category: Distribution of incident types
  • Source of detection: How incidents are detected (monitoring, user report, external notification)
  • False positive rate: Percentage of alerts that are not actual incidents
  • Recurrence rate: Percentage of incidents that recur after initial remediation
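These metrics reduce to timestamp arithmetic over incident records. A minimal sketch in Python, assuming hypothetical records with `started`/`detected`/`contained` fields (the field names are ours, not from any particular ticketing system):

```python
# Hedged sketch: computing MTTD and MTTR from incident timestamps.
# Field names and sample data are illustrative.
from datetime import datetime
from statistics import mean

incidents = [
    {"started": "2026-02-01T02:00", "detected": "2026-02-01T08:00", "contained": "2026-02-01T11:00"},
    {"started": "2026-02-10T14:00", "detected": "2026-02-10T14:30", "contained": "2026-02-10T16:30"},
]

def hours_between(a: str, b: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps."""
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).total_seconds() / 3600

# MTTD: incident start -> detection; MTTR: detection -> containment
mttd = mean(hours_between(i["started"], i["detected"]) for i in incidents)
mttr = mean(hours_between(i["detected"], i["contained"]) for i in incidents)
print(f"MTTD: {mttd:.1f}h, MTTR: {mttr:.1f}h")
```

Tracking these values per quarter turns the KPI list above into a trend line you can act on.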

37.2 Digital Forensics Fundamentals

Digital forensics is the application of scientific methods to the identification, collection, analysis, and presentation of digital evidence.

37.2.1 Forensic Principles

The Locard Exchange Principle: "Every contact leaves a trace." In digital forensics, every action on a computer system leaves artifacts that can be discovered and analyzed.

Key forensic principles:

  1. Preserve evidence integrity: Never modify the original evidence. Work on forensic copies.
  2. Maintain chain of custody: Document who handled evidence, when, and what they did with it.
  3. Use forensically sound methods: Use write blockers, validated tools, and documented procedures.
  4. Document everything: Every action taken during the investigation should be documented.
  5. Be objective: Follow the evidence. Do not start with a conclusion and look for supporting evidence.

37.2.2 Evidence Types and Volatility

Digital evidence exists in a hierarchy of volatility -- some evidence is more ephemeral than others:

Order of volatility (most volatile first):

  1. CPU registers and cache -- Lost immediately when power is removed
  2. Memory (RAM) -- Contains running processes, network connections, encryption keys, malware in memory
  3. Network state -- Active connections, routing tables, ARP cache
  4. Running processes -- Process lists, open files, loaded modules
  5. Disk -- File systems, deleted files, slack space, swap files
  6. Removable media -- USB drives, external storage
  7. Logs and monitoring data -- SIEM data, network flow records, authentication logs
  8. Physical configuration -- Network topology, hardware configuration
  9. Archival media -- Backup tapes, offline storage

The Golden Rule: Always collect the most volatile evidence first.

37.2.3 Evidence Acquisition

Memory acquisition:

# Linux memory acquisition with LiME
sudo insmod lime-$(uname -r).ko "path=/evidence/memory.lime format=lime"

# Windows memory acquisition with WinPmem
winpmem_mini_x64.exe memory.raw

# Windows memory acquisition with Magnet RAM Capture
MagnetRAMCapture.exe

# FTK Imager can also capture memory on Windows

Disk acquisition:

# Linux disk imaging with dc3dd (enhanced dd)
dc3dd if=/dev/sda of=/evidence/disk_image.dd hash=sha256 log=/evidence/imaging.log

# Create forensic image with hash verification
sudo dd if=/dev/sda bs=4096 conv=noerror,sync | tee /evidence/disk.raw | sha256sum > /evidence/disk.sha256

# FTK Imager (Windows) - Create forensic image (E01 format)
# GUI-based tool that creates Expert Witness Format images with built-in hashing
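Whatever the imaging tool, verification is the same operation: re-hash the copy and compare it against the value recorded at acquisition. A Python sketch (paths and the recorded hash are placeholders):

```python
# Sketch: verifying a forensic image against its acquisition-time hash.
# File paths are placeholders for a real image and its recorded digest.
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a potentially huge image in chunks so it never loads fully into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(image_path: str, recorded_hash: str) -> bool:
    """Re-hash the image and compare with the value taken at acquisition."""
    return sha256_file(image_path) == recorded_hash.strip().lower()
```

Re-verifying before and after each analysis session documents that the working copy was never altered.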

Triage collection with KAPE:

# KAPE (Kroll Artifact Parser and Extractor)
# Collect specific forensic artifacts without full disk image
kape.exe --tsource C: --tdest C:\evidence\collection \
  --target KapeTriage \
  --vhdx evidence_collection \
  --vss

# Common KAPE targets:
# KapeTriage - Comprehensive triage collection
# RegistryHives - Windows Registry files
# EventLogs - Windows Event Logs
# WebBrowsers - Browser history, cookies, cache
# Antivirus - AV logs and quarantine

37.2.4 Chain of Custody

Chain of custody documentation ensures evidence integrity and admissibility:

Chain of custody form elements:

  • Case number and description
  • Evidence item number and description
  • Date and time of collection
  • Collector's name and affiliation
  • Collection method and tools used
  • Hash values (MD5 and SHA-256 at minimum)
  • Storage location
  • Transfer log: every person who handled the evidence, when, and why
  • Evidence seal numbers (if physical evidence)

Example chain of custody log:

CHAIN OF CUSTODY LOG
Case: IR-2026-0042
Item: MEM-001 - Memory dump from workstation WS-ACCT-015

Collected: 2026-02-27 14:23:00 UTC
Collected by: Jane Smith, IR Team
Tool: WinPmem v4.0
SHA-256: a1b2c3d4e5f6...

Transfer Log:
| Date/Time | From | To | Purpose |
|-----------|------|-----|---------|
| 2026-02-27 14:30 | J. Smith | Evidence Locker A3 | Initial storage |
| 2026-02-28 09:00 | Evidence Locker A3 | M. Johnson | Memory analysis |
| 2026-02-28 17:00 | M. Johnson | Evidence Locker A3 | Return after analysis |
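The same form elements can also be kept as structured data alongside the paper trail. An illustrative sketch; the `CustodyRecord` class and its field names are our invention, not a standard schema:

```python
# Hypothetical chain-of-custody record mirroring the form fields above.
from dataclasses import dataclass, field

@dataclass
class CustodyRecord:
    case_id: str
    item_id: str
    description: str
    collected_by: str
    collected_at: str          # UTC timestamp string
    tool: str
    sha256: str
    transfers: list = field(default_factory=list)  # (when, src, dst, purpose)

    def transfer(self, when: str, src: str, dst: str, purpose: str) -> None:
        """Append a transfer entry; by convention the log is append-only."""
        self.transfers.append((when, src, dst, purpose))

rec = CustodyRecord("IR-2026-0042", "MEM-001", "Memory dump from WS-ACCT-015",
                    "Jane Smith", "2026-02-27 14:23:00", "WinPmem v4.0", "a1b2c3d4...")
rec.transfer("2026-02-27 14:30", "J. Smith", "Evidence Locker A3", "Initial storage")
```

A structured record makes it trivial to export the transfer table for reports while the signed paper form remains the authoritative artifact.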

MedSecure Running Example: MedSecure's SOC detects unusual data access patterns on a workstation in the billing department. An account is querying patient records at 3 AM. The IR team is activated. Before any containment action, you capture a memory image from the running workstation using WinPmem, then collect triage artifacts using KAPE. Only after evidence preservation do you isolate the machine from the network. Your chain of custody documentation records every step, ensuring the evidence would be admissible if the incident leads to legal proceedings.

37.3 Memory Forensics

Memory forensics is the analysis of a computer's volatile memory (RAM) to extract evidence of malicious activity. It is often the most valuable forensic discipline because many modern attacks operate entirely in memory.

37.3.1 Why Memory Forensics?

Memory contains evidence that cannot be found anywhere else:

  • Running processes and their code: Including injected code and memory-resident malware
  • Network connections: Active connections, listening ports, and recently closed connections
  • Encryption keys: Keys used by full-disk encryption, ransomware, and communication protocols
  • Passwords and credentials: Plaintext or hashed credentials in memory
  • Command history: Commands executed by the attacker
  • Loaded modules and DLLs: Including injected or hollowed modules
  • Registry hives: In-memory representation may differ from on-disk (showing recent modifications)
  • User activity: Clipboard contents, recently accessed files, running applications

37.3.2 Volatility 3 Framework

Volatility is the premier open-source memory forensics framework. Volatility 3 is a complete rewrite in Python 3 with significant improvements over Volatility 2.

Basic usage:

# System and image information (OS version, build, architecture)
vol -f memory.raw windows.info

# Process listing
vol -f memory.raw windows.pslist
vol -f memory.raw windows.pstree    # Tree view showing parent-child relationships
vol -f memory.raw windows.psscan    # Scan for hidden/terminated processes

# Network connections
vol -f memory.raw windows.netscan   # All network connections and listening ports
vol -f memory.raw windows.netstat   # Active connections

# DLL listing
vol -f memory.raw windows.dlllist --pid 1234

# Command line arguments
vol -f memory.raw windows.cmdline

# Handles (files, registry keys, mutexes)
vol -f memory.raw windows.handles --pid 1234

# Registry analysis
vol -f memory.raw windows.registry.hivelist
vol -f memory.raw windows.registry.printkey --key "Software\Microsoft\Windows\CurrentVersion\Run"

# Dump file objects cached in memory for a process
vol -f memory.raw windows.dumpfiles --pid 1234

# YARA scanning
vol -f memory.raw windows.vadyarascan --yara-rules "rule test { strings: $s1 = \"malicious\" condition: $s1 }"

37.3.3 Memory Analysis Methodology

Step 1: System Context

# Identify the operating system and build
vol -f memory.raw windows.info

# What is the system? When was the image captured?
# This information helps contextualize all subsequent findings

Step 2: Process Analysis

Look for suspicious processes by examining:

  • Unexpected processes: Processes that should not be running (mimikatz.exe, psexec.exe)
  • Masquerading processes: Legitimate-looking names with wrong paths (svchost.exe not running from System32)
  • Orphan processes: Processes whose parent has terminated (unusual parent-child relationships)
  • Process injection indicators: Processes with unusual memory regions or loaded modules
# Compare pslist (linked list) with psscan (pool tag scanning)
# Differences may indicate hidden or unlinked processes
vol -f memory.raw windows.pslist > pslist.txt
vol -f memory.raw windows.psscan > psscan.txt
# Processes in psscan but not pslist may be rootkit-hidden
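The pslist/psscan comparison above can be automated. A simplified sketch that assumes each saved line begins with a PID column, which is close enough to the default Volatility 3 output for triage (a robust version would parse columns properly):

```python
# Sketch: surfacing possibly rootkit-hidden processes by diffing
# pslist (walks the linked list) against psscan (scans pool tags).
def pids_from(lines):
    """Extract the first whitespace-separated field (PID) from each row."""
    pids = set()
    for line in lines:
        parts = line.split()
        if parts and parts[0].isdigit():
            pids.add(int(parts[0]))
    return pids

def hidden_candidates(pslist_lines, psscan_lines):
    """PIDs in psscan but absent from pslist -> possible unlinked processes."""
    return sorted(pids_from(psscan_lines) - pids_from(pslist_lines))
```

Candidates still need manual review: psscan also recovers legitimately terminated processes, so a hit is a lead, not a verdict.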

Step 3: Network Analysis

# Identify active connections and listening ports
vol -f memory.raw windows.netscan

# Look for:
# - Connections to known-bad IPs
# - Unusual ports (especially high ports)
# - Processes with unexpected network connections
# - Connections to external IPs from internal services

Step 4: Code Injection Detection

# Detect injected code using malfind
vol -f memory.raw windows.malfind

# malfind looks for:
# - Memory regions with PAGE_EXECUTE_READWRITE protection
# - Memory regions that contain code but are not backed by a file on disk
# - Indicators of process hollowing, reflective DLL injection, etc.

Step 5: Persistence Mechanisms

# Check autorun registry keys
vol -f memory.raw windows.registry.printkey --key "Software\Microsoft\Windows\CurrentVersion\Run"
vol -f memory.raw windows.registry.printkey --key "Software\Microsoft\Windows\CurrentVersion\RunOnce"

# Check scheduled tasks, services, WMI subscriptions
vol -f memory.raw windows.svcscan   # Windows services

Step 6: Credential Extraction

# Extract password hashes (for investigation, not offensive use)
vol -f memory.raw windows.hashdump

# Extract cached domain credentials
vol -f memory.raw windows.cachedump

# Extract LSA secrets
vol -f memory.raw windows.lsadump

37.3.4 Detecting Common Attack Techniques in Memory

Process Injection (T1055):

  • Use malfind to identify injected code regions
  • Compare pslist and psscan for hidden processes
  • Look for PAGE_EXECUTE_READWRITE memory regions not backed by files
  • Check for unusual DLL loads in process memory

Credential Dumping (T1003):

  • Look for processes accessing LSASS memory (lsass.exe)
  • Check for Mimikatz artifacts in memory
  • Examine process handles for access to credential stores
  • Look for ntdsutil.exe or suspicious vshadow.exe activity

Fileless Malware:

  • PowerShell scripts in process memory
  • .NET assemblies loaded entirely from memory
  • WMI event subscriptions with embedded scripts
  • COM object hijacking

Blue Team Perspective: Implement proactive memory forensics capability. Tools like Velociraptor can collect memory artifacts across your fleet at scale. When a detection fires, having the ability to quickly triage memory on the affected system can mean the difference between a contained incident and a widespread breach.

37.4 Disk and File System Forensics

While memory forensics captures the volatile present, disk forensics reveals the persistent history of system activity.

37.4.1 File System Fundamentals

NTFS (New Technology File System):

NTFS is the default file system for Windows and contains rich forensic artifacts:

  • Master File Table (MFT): Database of every file and directory on the volume. Each entry contains timestamps, attributes, permissions, and data pointers.
  • NTFS timestamps: Created, Modified, Accessed, and Entry Modified (MACE/MACB) -- each stored in both $STANDARD_INFORMATION and $FILE_NAME attributes
  • Alternate Data Streams (ADS): Additional data streams attached to files, sometimes used to hide data
  • $UsnJrnl (USN Journal): Change journal recording file system modifications
  • $LogFile: Transaction log for the file system
  • $I30 (Index) entries: Directory listings that may contain references to deleted files

EXT4 (Extended File System 4):

EXT4 is the default file system for many Linux distributions:

  • Inodes: Metadata structures for each file (permissions, timestamps, block pointers)
  • Journal: Transaction log for crash recovery
  • Timestamps: Access, Modify, Change, and Birth (creation) times
  • Directory entries: May retain references to deleted files until overwritten

37.4.2 Windows Forensic Artifacts

Windows systems generate an extraordinary wealth of forensic artifacts:

Registry artifacts:

NTUSER.DAT (per-user registry hive):
- UserAssist: Programs executed via Explorer (ROT13 encoded)
- RecentDocs: Recently accessed documents
- TypedPaths: Paths typed in Explorer address bar
- RunMRU: Commands typed in Run dialog
- MountPoints2: USB devices mounted by the user

SYSTEM hive:
- Services: Installed services (potential persistence)
- CurrentControlSet\Enum\USBSTOR: USB device history
- ComputerName: System name
- TimeZoneInformation: System timezone

SOFTWARE hive:
- Run/RunOnce: Auto-start programs
- InstalledApps: Application inventory
- NetworkList: Known wireless networks
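Because UserAssist value names are only ROT13-encoded, decoding them takes one standard-library call. The encoded sample below is constructed for illustration, not taken from a real hive:

```python
# UserAssist value names are ROT13-encoded; decoding is symmetric.
import codecs

def decode_userassist(name: str) -> str:
    """ROT13 shifts only A-Z/a-z; digits and punctuation pass through unchanged."""
    return codecs.decode(name, "rot13")

print(decode_userassist("zvzvxngm.rkr"))  # -> mimikatz.exe
```

Parsers like Registry Explorer do this automatically, but knowing the encoding lets you spot-check a suspicious value by hand.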

Event logs:

Security.evtx:
- 4624: Successful logon (with logon type)
- 4625: Failed logon attempt
- 4648: Explicit credential logon
- 4720: User account created
- 4732: Member added to security group
- 4688: Process creation (with command line if auditing enabled)

System.evtx:
- 7045: New service installed
- 7036: Service start/stop

Microsoft-Windows-Sysmon/Operational.evtx (if Sysmon deployed):
- 1: Process creation (with full command line, hashes, parent process)
- 3: Network connection
- 7: Image loaded (DLL)
- 8: CreateRemoteThread
- 10: Process access
- 11: File created
- 13: Registry value set
- 22: DNS query

Prefetch files:

Windows Prefetch files (C:\Windows\Prefetch\*.pf) record application execution:

  • Application name and path
  • Execution count
  • Timestamps of last eight executions
  • Files and directories referenced during execution
  • Volume information
# Parse Prefetch files with Eric Zimmerman's PECmd
PECmd.exe -d C:\Windows\Prefetch --csv output_directory

# Key evidence: Was a tool like mimikatz.exe, psexec.exe, or a renamed
# suspicious executable ever run? Prefetch will tell you.

Shimcache / Amcache:

  • Shimcache (AppCompatCache): Records executables that Windows evaluated for compatibility. Located in SYSTEM registry hive.
  • Amcache (Amcache.hve): Records information about executed programs, including SHA-1 hashes and file paths.
# Parse Amcache
AmcacheParser.exe -f C:\Windows\appcompat\Programs\Amcache.hve --csv output_directory

# Parse Shimcache
AppCompatCacheParser.exe -f SYSTEM --csv output_directory

ShellBags:

ShellBags record folder access even after folders are deleted. They are stored in NTUSER.DAT and UsrClass.dat:

# Parse ShellBags
SBECmd.exe -d C:\Users\suspect --csv output_directory

37.4.3 Timeline Analysis

Timeline analysis is the most powerful forensic technique. It correlates artifacts from multiple sources into a chronological narrative:

Creating a super timeline with Plaso/log2timeline:

# Generate timeline from disk image
log2timeline.py --storage-file timeline.plaso disk_image.E01

# Filter and format the timeline
psort.py -o l2tcsv timeline.plaso "date > '2026-02-01' AND date < '2026-02-28'" > timeline.csv

# Alternative: Use KAPE to collect artifacts, then process with Timeline Explorer
kape.exe --tsource C: --tdest output --target KapeTriage
# Process collected artifacts with Eric Zimmerman's tools
# Open results in Timeline Explorer for visual analysis

Timeline analysis methodology:

  1. Establish the timeframe: When did the incident begin and end?
  2. Identify anchor events: Known events with confirmed timestamps (alert fired, user report, etc.)
  3. Build outward: From anchor events, trace forward (what happened next?) and backward (what led to this?)
  4. Correlate sources: Match events across different artifact types (process creation + network connection + file creation)
  5. Identify anomalies: Events that do not match normal patterns (activity at unusual hours, unexpected programs, new user accounts)
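Steps 3 and 4 amount to merging per-source event lists and sorting by timestamp. A toy sketch with invented events (real pipelines normalize all timestamps to UTC first):

```python
# Sketch: correlating events from multiple artifact sources into one
# chronological timeline. Event contents are illustrative.
events = [
    {"ts": "2026-02-27T03:02:11", "source": "Security.evtx", "event": "4624 logon acct_billing"},
    {"ts": "2026-02-27T03:01:05", "source": "Sysmon",        "event": "1 process create powershell.exe"},
    {"ts": "2026-02-27T03:04:40", "source": "Prefetch",      "event": "WINUPDATE.EXE executed"},
]

# ISO-8601 timestamps in the same timezone sort correctly as strings
timeline = sorted(events, key=lambda e: e["ts"])
for e in timeline:
    print(f'{e["ts"]}  [{e["source"]:13}]  {e["event"]}')
```

Even this trivial merge shows the narrative value: a process creation, a logon, and a tool execution line up into a sequence no single artifact reveals alone.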

37.4.4 File Recovery and Carving

Deleted files can often be recovered because file systems typically only mark the space as available without immediately overwriting the data:

File recovery approaches:

  • MFT-based recovery: If the MFT entry still exists, the file metadata is intact even though the file is "deleted"
  • File carving: Searching raw disk data for file signatures (magic bytes) to recover files without file system metadata
  • Slack space analysis: Examining the space between the end of file data and the end of the allocated cluster
# File carving with Scalpel
scalpel -c scalpel.conf disk_image.raw -o carved_files/

# File carving with Foremost
foremost -t all -i disk_image.raw -o carved_files/

# Photorec for comprehensive file recovery
photorec disk_image.raw
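Under the hood, all of these carvers perform signature matching. A deliberately minimal JPEG carver shows the idea; a production carver also validates file structure and handles fragmentation:

```python
# Minimal signature-based carver illustrating what Scalpel/Foremost do at
# scale: find magic numbers in raw bytes and cut between header and footer.
JPEG_SOI = b"\xff\xd8\xff"   # start-of-image marker
JPEG_EOI = b"\xff\xd9"       # end-of-image marker

def carve_jpegs(raw: bytes) -> list:
    """Return candidate JPEG blobs found anywhere in the raw image."""
    carved, pos = [], 0
    while (start := raw.find(JPEG_SOI, pos)) != -1:
        end = raw.find(JPEG_EOI, start)
        if end == -1:
            break                      # header without footer: truncated file
        carved.append(raw[start:end + 2])
        pos = end + 2
    return carved
```

The same pattern, with different magic bytes, recovers PDFs, ZIP archives, and Office documents from unallocated space.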

ShopStack Running Example: During the ShopStack incident response, disk forensics reveals that the attacker used a staging directory at C:\ProgramData\WindowsUpdate\ to store their tools. Although the attacker deleted this directory before the SOC detected them, the MFT still contains entries for the deleted files, including timestamps and partial content. Prefetch files confirm that mimikatz.exe (renamed as winupdate.exe) was executed three times. Amcache provides the SHA-1 hash, which matches a known Mimikatz build. ShellBags show the attacker browsed to the \\fileserver\finance\ share. This evidence chain reconstructs the attack timeline from initial tool staging through credential theft to lateral movement.

37.5 Network Forensics and Log Analysis

Network forensics examines network traffic and logs to reconstruct events and identify malicious activity.

37.5.1 Network Evidence Sources

Full packet capture (PCAP):

  • Complete recording of all network traffic
  • Highest fidelity but enormous storage requirements
  • Tools: Wireshark, tcpdump, Zeek, Arkime (Moloch)

Network flow data (NetFlow/IPFIX):

  • Metadata about network connections (source/destination IP, ports, bytes, duration)
  • Much smaller than full packet capture; sufficient for many investigations
  • Tools: ntopng, SiLK, ElastiFlow

DNS logs:

  • All DNS queries and responses
  • Essential for identifying C2 communication, data exfiltration via DNS, and domain generation algorithms
  • Tools: Passive DNS databases, DNS server logs, Zeek DNS logs

Proxy and firewall logs:

  • HTTP/HTTPS requests (URLs, user agents, response codes)
  • Connection allow/deny decisions
  • Essential for tracing data exfiltration and C2 channels

Authentication logs:

  • Domain controller logs (Windows Event IDs 4624, 4625, 4648, 4768, 4769)
  • VPN connection logs
  • Multi-factor authentication logs
  • Service authentication logs

37.5.2 Network Traffic Analysis with Wireshark

Common investigation scenarios:

C2 Communication Detection:

# Filter for beaconing behavior (regular intervals)
# Look for HTTP requests at consistent intervals
http.request and ip.dst == <suspicious_ip>

# DNS-based C2 detection
dns.qry.name contains "suspicious-domain.com"

# Unusual TLS connections
tls.handshake.type == 1 and ip.dst != <known_good_ips>

# Large data transfers (potential exfiltration)
tcp.len > 10000 and ip.dst != <internal_subnets>
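Beaconing shows up statistically as low variance in connection inter-arrival times. A rough Python sketch that scores regularity from a list of connection timestamps, with an illustrative (not calibrated) threshold interpretation:

```python
# Sketch: scoring beacon-like regularity. A score near 0 means highly
# regular check-ins, a classic C2 beacon indicator. Sample data invented.
from statistics import mean, pstdev

def beacon_score(timestamps: list) -> float:
    """Coefficient of variation of inter-arrival gaps (stdev / mean)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps)

# A host checking in every ~60 seconds with small jitter scores near zero:
regular = [0, 60.2, 119.9, 180.1, 240.0]
print(f"beacon score: {beacon_score(regular):.3f}")
```

Malware authors add jitter precisely to defeat this test, so treat a low score as one signal among several, not proof of C2.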

Data Exfiltration Analysis:

# DNS tunneling indicators
# Look for high volume of DNS queries to a single domain
# Long subdomain labels (encoded data)
dns.qry.name matches "^[a-z0-9]{30,}\."

# HTTP POST with large body (data exfiltration)
http.request.method == "POST" and http.content_length > 100000

# ICMP tunneling
icmp and data.len > 64
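DNS tunneling heuristics like the filter above can be made quantitative by scoring label length and entropy. A sketch with illustrative thresholds (tuning them is environment-specific):

```python
# Sketch: flagging DNS names whose leftmost label is long and high-entropy,
# a common tunneling/DGA indicator. Thresholds are illustrative defaults.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the label."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_tunnel(qname: str, min_len: int = 30, min_entropy: float = 3.5) -> bool:
    """Score only the leftmost label, where encoded payload data usually lives."""
    label = qname.split(".")[0]
    return len(label) >= min_len and shannon_entropy(label) >= min_entropy
```

Run against bulk DNS logs, this surfaces a short review list instead of millions of queries; legitimate long labels (CDNs, DKIM records) are the main false-positive source.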

Lateral Movement Detection:

# SMB activity (file shares, PsExec)
smb2 and ip.src == <compromised_host>

# Remote Desktop Protocol
tcp.port == 3389

# WinRM
tcp.port == 5985 or tcp.port == 5986

# WMI over DCOM
tcp.port == 135

37.5.3 Log Analysis at Scale

For enterprise investigations, manual log review is insufficient. SIEM and log analysis platforms enable investigation at scale:

Splunk queries for incident investigation:

# Failed authentication followed by success (brute force)
index=windows sourcetype=WinEventLog:Security EventCode=4625
| stats count by src_ip, user
| where count > 10
| join src_ip [search index=windows EventCode=4624 | stats earliest(_time) as first_success by src_ip]

# Lateral movement detection (remote logons)
index=windows EventCode=4624 (Logon_Type=3 OR Logon_Type=10)
| stats count values(dest) as targets by src_ip, user
| where count > 5

# Process execution anomalies
index=windows EventCode=4688
| where NOT match(New_Process_Name, "(?i)c:\\windows\\system32")
| stats count by New_Process_Name, Creator_Process_Name
| sort -count

# PowerShell script block logging
index=windows sourcetype=WinEventLog:Microsoft-Windows-PowerShell/Operational EventCode=4104
| search ScriptBlockText=*downloadstring* OR ScriptBlockText=*invoke-expression* OR ScriptBlockText=*encodedcommand*

ELK Stack queries:

// Failed authentication attempts
{
  "query": {
    "bool": {
      "must": [
        { "match": { "event.code": "4625" } },
        { "range": { "@timestamp": { "gte": "now-24h" } } }
      ]
    }
  },
  "aggs": {
    "by_source": {
      "terms": { "field": "source.ip", "size": 20 },
      "aggs": {
        "by_user": {
          "terms": { "field": "user.name", "size": 10 }
        }
      }
    }
  }
}

37.5.4 Zeek for Network Forensics

Zeek (formerly Bro) is a network analysis framework that generates rich, structured logs from network traffic:

# Process PCAP with Zeek
zeek -r capture.pcap

# Generated log files:
# conn.log - Connection summaries
# dns.log - DNS queries and responses
# http.log - HTTP requests and responses
# ssl.log - TLS/SSL connection details
# files.log - File transfers
# notice.log - Notable events

Zeek logs provide:

  • Connection metadata (duration, bytes, protocol)
  • HTTP details (URIs, user agents, referrers)
  • DNS queries and responses
  • TLS certificate information
  • File transfer metadata including hashes
  • Protocol-specific details (SMB, SSH, SMTP)
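Zeek's TSV logs are straightforward to post-process because each file declares its own columns in a `#fields` header line. A minimal parser sketch, assuming the default tab separator:

```python
# Sketch: reading a Zeek TSV log (conn.log, dns.log, ...) into dicts by
# pairing each data row with the #fields header. Real logs also declare
# #types and a #separator escape, which this simplified version ignores.
def parse_zeek_log(lines):
    fields, records = [], []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#fields"):
            fields = line.split("\t")[1:]      # column names follow the directive
        elif line.startswith("#") or not line:
            continue                           # other directives and blanks
        else:
            records.append(dict(zip(fields, line.split("\t"))))
    return records
```

In practice many teams ship Zeek logs to a SIEM instead, but a quick parser like this is handy for one-off PCAP investigations.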

37.6 Malware Analysis Fundamentals

Malware analysis determines the capabilities, indicators, and attribution of malicious software. For incident responders, it answers critical questions: What does this malware do? How does it spread? What data did it access?

37.6.1 Analysis Approaches

Static Analysis: Examining the malware without executing it.

# File identification
file suspicious.exe
# PE header analysis
python3 pefile_parser.py suspicious.exe

# String extraction
strings -a suspicious.exe | grep -iE "http|cmd|powershell|reg|password"
strings -el suspicious.exe  # Unicode strings

# Hash calculation for threat intelligence lookup
sha256sum suspicious.exe
# Search hash on VirusTotal, Hybrid Analysis, MalwareBazaar

# YARA rule matching
yara rules.yar suspicious.exe
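The hash and string steps above are easy to combine into a single triage script. A minimal sketch using only the standard library; the sample file contents and keyword list are illustrative, not exhaustive:

```python
import hashlib
import re
import tempfile

def triage(path, min_len=6):
    """Compute file hashes and pull printable ASCII strings matching common
    attacker keywords -- a quick static first pass before deeper analysis."""
    with open(path, "rb") as f:
        data = f.read()
    hashes = {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }
    # Extract runs of printable ASCII, like the `strings` utility
    ascii_strings = re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)
    keywords = re.compile(rb"http|cmd|powershell|reg|password", re.I)
    hits = [s.decode() for s in ascii_strings if keywords.search(s)]
    return hashes, hits

# Demo on a synthetic file (contents invented for illustration)
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"MZ\x90\x00 junk powershell -nop -w hidden http://evil.example/p.exe")
    path = f.name

hashes, hits = triage(path)
print(hashes["sha256"][:16], hits)
```

The SHA-256 value is what you would then submit to VirusTotal or MalwareBazaar for a threat intelligence lookup.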

Dynamic Analysis: Executing the malware in a controlled environment and observing its behavior.

# Sandbox analysis (use isolated VM with snapshots)
# Monitor with:
# - Process Monitor (Procmon) for file system and registry changes
# - Wireshark/FakeNet-NG for network communication
# - Regshot for registry comparison before/after
# - API Monitor for API calls

# Automated sandbox services:
# - ANY.RUN (interactive sandbox)
# - Hybrid Analysis (CrowdStrike Falcon Sandbox)
# - Joe Sandbox
# - CAPE Sandbox (open source)

Code Analysis: Reverse engineering the malware's code for detailed understanding.

# Disassembly with Ghidra (open source, NSA)
ghidraRun  # GUI-based reverse engineering

# Key analysis targets:
# - Entry point and main function
# - Network communication functions
# - Encryption/encoding routines
# - Persistence mechanisms
# - Anti-analysis techniques (anti-VM, anti-debug)
# - Configuration extraction (C2 addresses, encryption keys)
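Configuration extraction often starts by trying simple encodings. A toy sketch that brute-forces a single-byte XOR key and looks for URL-like plaintext; the encoded blob and key here are invented, and real samples typically use stronger schemes (RC4, AES), but the search-for-plausible-plaintext approach carries over:

```python
import re

def xor_bruteforce(blob, pattern=rb"https?://[\x20-\x7e]{4,}"):
    """Try every single-byte XOR key and report keys that reveal a URL-like
    string -- a toy version of malware config extraction."""
    hits = []
    for key in range(1, 256):
        decoded = bytes(b ^ key for b in blob)
        for m in re.finditer(pattern, decoded):
            hits.append((key, m.group().decode()))
    return hits

# Example: a config blob XOR-encoded with key 0x5A (made up for illustration)
blob = bytes(b ^ 0x5A for b in b"c2=http://evil-c2.example.com/gate")
print(xor_bruteforce(blob))
```

Recovering the C2 address this way gives you a network IOC without ever executing the sample.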

37.6.2 YARA Rules

YARA is a pattern-matching tool used to identify and classify malware:

rule Mimikatz_Detection {
    meta:
        description = "Detects Mimikatz credential dumping tool"
        author = "IR Team"
        date = "2026-02-27"
        severity = "high"

    strings:
        $s1 = "mimikatz" ascii wide nocase
        $s2 = "gentilkiwi" ascii wide
        $s3 = "sekurlsa::logonpasswords" ascii wide
        $s4 = "lsadump::dcsync" ascii wide
        $s5 = "kerberos::golden" ascii wide
        $pe_magic = { 4D 5A }  // MZ header

    condition:
        $pe_magic at 0 and (2 of ($s*))
}

rule Suspicious_PowerShell_Download {
    meta:
        description = "Detects PowerShell download cradles"
        severity = "medium"

    strings:
        $dl1 = "DownloadString" ascii wide nocase
        $dl2 = "DownloadFile" ascii wide nocase
        $dl3 = "Invoke-WebRequest" ascii wide nocase
        $dl4 = "wget" ascii wide nocase
        $dl5 = "curl" ascii wide nocase
        $exec1 = "Invoke-Expression" ascii wide nocase
        $exec2 = "IEX" ascii wide
        $exec3 = "-enc" ascii wide nocase
        $exec4 = "-EncodedCommand" ascii wide nocase

    condition:
        any of ($dl*) and any of ($exec*)
}
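To make the condition logic concrete, here is a pure-Python approximation of how the Mimikatz_Detection rule above evaluates. This is an illustration, not the YARA engine: it treats every string as nocase for brevity, whereas the rule marks only $s1 that way, and it checks `wide` by looking for the UTF-16-LE encoding of each string:

```python
def matches_mimikatz_rule(data: bytes) -> bool:
    """Approximation of Mimikatz_Detection: MZ header at offset 0 AND
    at least 2 of the $s* strings (ascii or wide/UTF-16-LE form)."""
    needles = [b"mimikatz", b"gentilkiwi", b"sekurlsa::logonpasswords",
               b"lsadump::dcsync", b"kerberos::golden"]
    if not data.startswith(b"MZ"):   # $pe_magic at 0
        return False
    lowered = data.lower()
    hits = sum(
        1 for s in needles
        if s in lowered or s.decode().encode("utf-16-le") in lowered
    )
    return hits >= 2                 # 2 of ($s*)

# Synthetic PE-like blob containing two of the indicator strings
sample = b"MZ" + b"\x00" * 64 + b"MIMIKATZ ... sekurlsa::logonpasswords"
print(matches_mimikatz_rule(sample))  # True
```

In practice you would compile and run the real rule with the `yara` CLI or the yara-python bindings; the point here is only to show how the condition combines its string matches.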

37.6.3 Indicators of Compromise (IOCs)

IOCs are forensic artifacts that indicate a system has been compromised:

IOC types:

| Type | Example | Use |
|------|---------|-----|
| Hash (MD5/SHA-256) | a1b2c3d4e5f6... | File identification |
| IP address | 192.168.1.100 | C2 communication |
| Domain | evil-c2.example.com | C2 communication |
| URL | http://evil.com/payload.exe | Payload delivery |
| Email address | phish@evil.com | Spearphishing attribution |
| Registry key | HKLM\SOFTWARE\... | Persistence mechanism |
| File path | C:\ProgramData\update.exe | Malware staging |
| Mutex | Global\MalwareMutex123 | Malware identification |
| User agent | Mozilla/5.0 (compatible; MSIE 6.0b) | C2 communication |
| YARA rule | See above | Pattern matching |

IOC formats:

  • STIX/TAXII: Structured Threat Information eXpression for sharing threat intelligence
  • OpenIOC: Mandiant's XML-based IOC format
  • MISP: Malware Information Sharing Platform format
  • CSV/JSON: Simple formats for quick sharing
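Exporting IOCs in the simple CSV/JSON formats is a one-screen script. A minimal sketch using only the standard library; all IOC values here are invented for illustration:

```python
import csv
import io
import json

# Hypothetical IOCs recovered during an investigation (values invented)
iocs = [
    {"type": "sha256", "value": "aa" * 32, "context": "dropper hash"},
    {"type": "ipv4", "value": "45.33.0.1", "context": "C2 server"},
    {"type": "domain", "value": "evil-c2.example.com", "context": "C2 domain"},
]

def to_json(iocs):
    """Serialize IOCs as JSON for quick sharing or import into a platform."""
    return json.dumps(iocs, indent=2)

def to_csv(iocs):
    """Serialize IOCs as CSV with a header row for spreadsheet consumers."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["type", "value", "context"])
    writer.writeheader()
    writer.writerows(iocs)
    return buf.getvalue()

print(to_csv(iocs))
```

For structured sharing at scale, you would map the same records into STIX objects or MISP attributes instead; CSV/JSON is the lowest-friction interchange during an active incident.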

37.6.4 Malware Analysis Safety

Safety precautions for malware analysis:

  1. Isolated environment: Use dedicated VMs with no network access to production systems
  2. Snapshots: Take VM snapshots before analysis; revert after
  3. Host-only networking: If network access is needed, use isolated virtual networks
  4. Monitoring tools: Run analysis monitoring tools before executing malware
  5. Physical isolation: For high-risk samples, use air-gapped analysis systems
  6. Disable shared folders: Prevent malware from escaping to the host system
  7. Anti-VM detection: Be aware that malware may detect VMs and behave differently

Blue Team Perspective: Build a malware analysis capability even if you never expect to do deep reverse engineering. The ability to quickly triage a suspicious file -- check its hash against threat intelligence, run it in a sandbox, extract basic IOCs -- dramatically accelerates incident response. Services like ANY.RUN and Hybrid Analysis provide free tiers that cover basic analysis needs.

37.7 Case Study Integration: Putting It All Together

37.7.1 End-to-End Investigation Scenario

MedSecure Ransomware Incident:

This scenario ties together all the concepts in this chapter:

Day 0 (Monday 2:00 AM): MedSecure's SOC receives an EDR alert: multiple files being encrypted on a billing workstation. The ransomware playbook is activated.

Initial Response (2:00-3:00 AM):

  • IR team activated via PagerDuty
  • Affected workstation identified: WS-BILL-023
  • Memory captured remotely using Velociraptor before network isolation
  • Network isolation implemented for the billing VLAN
  • Management and legal notified

Evidence Collection (3:00-6:00 AM):

  • Memory image analyzed with Volatility: malicious process svchost.exe (running from C:\ProgramData\) identified with network connections to 45.33.xx.xx
  • KAPE triage collection from WS-BILL-023 and three additional affected workstations
  • Full disk image of WS-BILL-023 created with FTK Imager
  • Firewall and proxy logs exported for the past 30 days
  • DNS logs exported for the past 30 days

Investigation (Day 1-3):

Memory forensics reveals:

  • Cobalt Strike Beacon running as injected code in a legitimate svchost.exe process
  • Active network connection to C2 server at 45.33.xx.xx:443
  • Credential dumping artifacts (Mimikatz loaded in memory)
  • PowerShell script in process memory showing lateral movement commands

Disk forensics reveals:

  • Initial access: phishing email with malicious Excel attachment received 14 days earlier
  • Excel macro dropped a PowerShell stager to C:\Users\billing\AppData\Local\Temp\
  • Prefetch files confirm execution of renamed Mimikatz and PsExec
  • Amcache records SHA-1 hashes matching known Cobalt Strike Beacon
  • Event logs show lateral movement to four additional workstations over 12 days

Network forensics reveals:

  • Beaconing to the C2 server every 60 seconds for 14 days
  • DNS queries to a domain generation algorithm (DGA) domain used as backup C2
  • 47 GB of data exfiltrated to a cloud storage service over the past 3 days (double extortion)
  • Lateral movement via SMB and WinRM between compromised workstations

Containment and Eradication (Day 3-5):

  • All identified C2 domains and IPs blocked at the firewall
  • All compromised credentials reset (including service accounts)
  • Cobalt Strike artifacts removed from all affected systems
  • Vulnerability that allowed initial macro execution patched
  • Enhanced email filtering rules deployed

Recovery (Day 5-10):

  • Affected workstations rebuilt from clean images
  • Data restored from clean backups (verified pre-compromise)
  • Enhanced monitoring deployed (Sysmon, increased log retention, new SIEM rules)
  • Gradual service restoration with validation

Post-Incident (Day 14):

  • Lessons learned meeting with all stakeholders
  • Incident report completed with full timeline and IOCs
  • 14 new SIEM detection rules created based on observed TTPs
  • Phishing awareness training reinforced for the billing department
  • New email attachment sandboxing implemented

Student Home Lab Exercise: Build an IR/forensics practice lab:

  1. Create a Windows 10 VM and install Sysmon with a detection-focused configuration
  2. Deploy the ELK stack (or Wazuh) on a separate VM for log collection
  3. Practice evidence acquisition: capture memory with WinPmem, collect artifacts with KAPE
  4. Install Volatility 3 on an analysis workstation
  5. Download memory forensics challenge images from MemLabs or DFIR challenges
  6. Analyze the challenge images: find malicious processes, extract IOCs, build a timeline
  7. Practice timeline analysis with Plaso/log2timeline on a disk image
  8. Write YARA rules to detect common malware patterns
  9. Set up a malware analysis sandbox with FlareVM

37.8 Emerging Trends in IR and Forensics

37.8.1 Cloud Forensics

Cloud environments present unique forensic challenges:

  • Ephemeral resources: Containers and serverless functions may not persist for analysis
  • Shared responsibility: Cloud providers control infrastructure; customers control configuration
  • Log availability: Cloud audit logs (CloudTrail, Azure Activity Log, GCP Audit Log) become primary evidence
  • Disk imaging: Traditional disk imaging may not be possible; use cloud-native snapshot capabilities
  • Multi-tenancy: Forensic acquisition must not impact other tenants

Cloud-specific evidence sources:

  • AWS CloudTrail (API call logging)
  • AWS VPC Flow Logs (network metadata)
  • Azure Activity Log and Azure Monitor
  • GCP Cloud Audit Logs and VPC Flow Logs
  • Container runtime logs (Kubernetes audit logs)
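CloudTrail delivers its logs as JSON files with a top-level Records array, so a first-pass scan needs nothing beyond the standard library. A minimal sketch; the suspicious event list and sample records are illustrative only, not an exhaustive detection set:

```python
import json

# Event names that commonly precede cloud compromise (examples, not exhaustive)
SUSPICIOUS = {"CreateAccessKey", "PutBucketPolicy", "StopLogging",
              "DeleteTrail", "AuthorizeSecurityGroupIngress"}

def scan_cloudtrail(raw_json):
    """Return suspicious API calls from an exported CloudTrail log file."""
    records = json.loads(raw_json).get("Records", [])
    return [
        {"time": r.get("eventTime"), "event": r["eventName"],
         "source_ip": r.get("sourceIPAddress")}
        for r in records if r.get("eventName") in SUSPICIOUS
    ]

# Demo with invented records in CloudTrail's Records-array shape
sample = json.dumps({"Records": [
    {"eventTime": "2026-02-27T02:00:00Z", "eventName": "StopLogging",
     "sourceIPAddress": "45.33.0.1"},
    {"eventTime": "2026-02-27T02:01:00Z", "eventName": "DescribeInstances",
     "sourceIPAddress": "10.0.0.5"},
]})
print(scan_cloudtrail(sample))
```

An attacker calling StopLogging or DeleteTrail is trying to blind the very evidence source this sketch reads, which is why CloudTrail tampering events deserve top triage priority.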

37.8.2 AI-Assisted Investigation

AI and machine learning are increasingly applied to forensic investigation:

  • Automated triage: ML models prioritize alerts and artifacts for human review
  • Anomaly detection: Behavioral baselines identify deviations automatically
  • Natural language processing: Automated log analysis and report generation
  • Image classification: Automated categorization of forensic images
  • Pattern recognition: Identifying attack patterns across large datasets
37.8.3 Legal and Regulatory Considerations

  • GDPR implications: IR procedures must account for data protection requirements
  • SEC cyber incident disclosure: Publicly traded companies must disclose material incidents within 4 business days
  • CIRCIA: The Cyber Incident Reporting for Critical Infrastructure Act requires reporting to CISA
  • Cross-border investigations: International data sovereignty laws complicate multi-jurisdiction incidents

Summary

This chapter provided a comprehensive foundation in incident response and digital forensics. We began with structured IR frameworks from NIST and SANS, understanding the phases of preparation, detection, containment, eradication, recovery, and post-incident activity. We examined how playbooks, metrics, and continuous improvement transform IR from ad-hoc firefighting into a mature organizational capability.

We explored digital forensics fundamentals, from the order of volatility to evidence acquisition and chain of custody. Memory forensics with Volatility 3 revealed how to extract evidence of running processes, network connections, injected code, and credentials from volatile memory. Disk and file system forensics uncovered the wealth of artifacts that Windows systems generate, from Prefetch and Amcache to ShellBags and Event Logs.

Network forensics and log analysis demonstrated how to reconstruct adversary activity from network traffic, DNS queries, and centralized log data. Malware analysis fundamentals provided the tools and techniques for triaging suspicious files, from static analysis and YARA rules to dynamic sandbox analysis.

The integrated case study demonstrated how these disciplines combine in a realistic investigation, following a ransomware incident from initial detection through evidence collection, investigation, containment, and recovery. Throughout, we emphasized the importance of preparation, documentation, and continuous improvement.

For ethical hackers, these skills serve dual purposes: they inform your offensive work by helping you understand how defenders investigate and respond, and they prepare you for the incident response roles that many security professionals eventually assume. The ability to both attack and defend, to both break in and investigate the break-in, is what makes a truly complete security professional.