Learning Objectives

  • Distinguish red teaming from penetration testing and understand when each is appropriate
  • Plan and execute red team engagements using structured methodologies
  • Apply the MITRE ATT&CK framework to adversary emulation and detection testing
  • Conduct physical security assessments as part of red team operations
  • Implement purple teaming for collaborative security improvement

Chapter 35: Red Team Operations

"The red team's job is not to break things. It's to make the organization stronger by showing how a real adversary would break things." -- Jeremiah Talamantes

"If you know the enemy and know yourself, you need not fear the result of a hundred battles." -- Sun Tzu, The Art of War

In October 2018, MITRE released the first results of its ATT&CK Evaluations, pitting endpoint security vendors against emulated techniques from the APT3 threat group. The results were revelatory -- not because any single vendor performed perfectly, but because the evaluation exposed the massive gap between security marketing claims and actual detection capabilities. Vendors that claimed comprehensive coverage showed significant blind spots. The exercise demonstrated what red teamers had known for years: the only way to truly understand your defenses is to test them against realistic adversary behavior.

Red teaming is not penetration testing with a fancier name. It is a fundamentally different discipline with different objectives, methodologies, and outcomes. While penetration testing asks "What vulnerabilities exist?", red teaming asks "Can a determined adversary achieve their objectives despite our defenses?" This distinction matters enormously in practice. A penetration test might find 50 vulnerabilities but miss the single attack path that a real adversary would exploit. A red team engagement might exploit only three vulnerabilities but demonstrate a complete kill chain from initial access to data exfiltration.

This chapter provides a comprehensive guide to red team operations. You will learn how to plan, execute, and report on red team engagements. You will master the MITRE ATT&CK framework as both a planning tool and a communication language. You will explore physical security testing, adversary emulation, and the increasingly important practice of purple teaming. Throughout, the emphasis remains on authorized, ethical security testing that makes organizations measurably more resilient.

35.1 Red Teaming vs. Penetration Testing

Understanding the distinction between red teaming and penetration testing is essential for both practitioners and the organizations that hire them.

35.1.1 Defining Red Team Operations

Red teaming is an adversary simulation exercise in which a dedicated team emulates the tactics, techniques, and procedures (TTPs) of real-world threat actors to test an organization's detection and response capabilities. The term originates from military wargaming, where a "red team" would play the opposing force to test battle plans.

Key characteristics of red teaming:

  • Objective-based: Red teams pursue specific objectives (e.g., access the crown jewels, exfiltrate customer data, disrupt operations) rather than finding all possible vulnerabilities
  • Adversary emulation: Red teams emulate specific threat actors, using their known TTPs
  • Stealth-focused: Red teams attempt to avoid detection, testing the blue team's monitoring and response capabilities
  • Full-scope: Red team engagements typically include all attack vectors: network, application, social engineering, physical, and supply chain
  • Extended duration: Red team engagements typically run weeks to months, not days
  • Tests people and processes: Red teaming explicitly tests the human and procedural elements of security, not just technology

35.1.2 Comparison Matrix

| Dimension | Penetration Testing | Red Teaming |
|---|---|---|
| Primary Goal | Find vulnerabilities | Test detection and response |
| Scope | Defined systems/applications | Entire organization |
| Duration | Days to weeks | Weeks to months |
| Stealth | Not required | Essential |
| Methodology | Systematic vulnerability discovery | Adversary emulation |
| Team Awareness | IT/Security team typically aware | Limited knowledge (need-to-know) |
| Rules of Engagement | Detailed scope boundaries | Objective-based with broad authority |
| Output | Vulnerability list with severities | Narrative of attack path + detection gaps |
| Attacker Model | Opportunistic attacker | Specific threat actor profile |
| Social Engineering | Sometimes included | Core component |
| Physical Testing | Rarely included | Often included |

35.1.3 When to Use Each

Penetration testing is appropriate when:

  • You need to assess the security of a specific application, network, or system
  • Compliance requirements mandate vulnerability assessment (PCI DSS, HIPAA)
  • You want a comprehensive list of technical vulnerabilities
  • Your security program is relatively immature and needs to address foundational issues

Red teaming is appropriate when:

  • Your security program is mature enough to benefit from adversary simulation
  • You want to test your Security Operations Center (SOC) detection capabilities
  • You need to evaluate incident response procedures under realistic conditions
  • Leadership needs to understand the real-world risk from specific threat actors
  • You want to validate that security investments are working as intended

Blue Team Perspective: Do not jump straight to red teaming. If your organization cannot pass a basic penetration test, a red team engagement will simply demonstrate what you already know -- that your defenses have gaps. Build your security program incrementally: vulnerability scanning, then penetration testing, then purple teaming, then full red team exercises.

35.1.4 The Assumed Breach Model

Many organizations have recognized that perimeter-based defenses are insufficient. The assumed breach model starts the engagement from a point of initial access, skipping the initial compromise phase to focus on post-exploitation detection and response.

Benefits of assumed breach:

  • Tests the 80% of the kill chain where defenders have the most opportunity to detect and respond
  • Provides immediate value even against well-defended perimeters
  • More cost-effective for testing specific detection use cases
  • Focuses on what matters most: can you detect and contain an attacker who is already inside?

MedSecure Running Example: MedSecure's CISO requests a red team engagement to test whether the security team can detect and respond to a realistic attack. Given that MedSecure's primary threat actors are ransomware gangs targeting healthcare, you plan an engagement that emulates the TTPs of the Clop ransomware group. The engagement will test initial access through phishing, lateral movement through the clinical network, access to the EHR system, and simulated data exfiltration. The engagement duration is six weeks, with only the CISO and General Counsel aware of the exercise.

35.2 Red Team Planning and Threat Modeling

A successful red team engagement begins long before the first packet is sent. Planning and threat modeling determine the engagement's value and safety.

35.2.1 Engagement Planning

Scope Definition and Rules of Engagement (ROE)

The ROE document is the legal and operational foundation of every red team engagement. It must be signed by authorized personnel (typically C-level or equivalent) before any activity begins.

ROE elements include:

  1. Authorization: Explicit written authorization from the organization's leadership
  2. Objectives: Specific goals the red team will attempt to achieve
  3. Scope: Systems, networks, facilities, and personnel that are in scope
  4. Exclusions: Systems or actions that are explicitly off-limits (e.g., patient care systems, production databases)
  5. Timeframe: Start date, end date, and any time-of-day restrictions
  6. Communication plan: Emergency contacts, deconfliction procedures, "get out of jail free" documentation
  7. Data handling: How the red team will handle any sensitive data encountered
  8. Acceptable actions: Explicit listing of permitted activities (social engineering, physical access, etc.)
  9. Escalation procedures: What happens if the red team discovers active threats or critical vulnerabilities
  10. Reporting requirements: Frequency, format, and recipients of status updates and final reports
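Before kickoff, the elements above can serve as a mechanical completeness check on a draft ROE. A minimal Python sketch; the field names and dict shape are illustrative assumptions, not a standard:

```python
# Required ROE elements, mirroring the list above (names are illustrative).
REQUIRED_ROE_ELEMENTS = [
    "authorization", "objectives", "scope", "exclusions", "timeframe",
    "communication_plan", "data_handling", "acceptable_actions",
    "escalation_procedures", "reporting_requirements",
]

def missing_roe_elements(roe: dict) -> list[str]:
    """Return the required ROE elements that are absent or empty."""
    return [e for e in REQUIRED_ROE_ELEMENTS if not roe.get(e)]

draft = {
    "authorization": "Signed by CISO, start of engagement",
    "objectives": ["Access EHR system", "Simulate data exfiltration"],
    "scope": ["corporate network", "clinical VLAN"],
}
print(missing_roe_elements(draft))  # the seven elements the draft still lacks
```

A check like this catches an unsigned or underspecified ROE before any operational activity begins; it does not replace legal review.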

Threat Profile Selection

Red team engagements should emulate realistic threat actors. The threat profile should be based on:

  • Industry-specific threats: What threat groups target your industry?
  • Geopolitical context: Is your organization a target for nation-state actors?
  • Asset value: What data or systems would motivate different threat actors?
  • Historical incidents: What attacks has your organization or sector experienced?
  • Intelligence feeds: What does current threat intelligence say about targeting trends?

35.2.2 Threat Intelligence-Informed Engagement

The best red team engagements are driven by threat intelligence. This means:

  1. Select a threat actor relevant to the target organization
  2. Research their TTPs using threat intelligence reports, ATT&CK mappings, and published indicators
  3. Develop an emulation plan that replicates those TTPs at the appropriate fidelity
  4. Execute the plan while maintaining the threat actor's operational security practices
  5. Measure detection against each TTP used

Common threat intelligence sources:

  • MITRE ATT&CK groups database
  • Mandiant (now Google Cloud) threat intelligence reports
  • CrowdStrike adversary profiles
  • CISA advisories and threat assessments
  • Sector-specific ISACs (Information Sharing and Analysis Centers)
  • Academic research on threat groups

35.2.3 Operational Security for Red Teams

Red teams must practice operational security (OPSEC) to provide realistic testing:

Infrastructure OPSEC:

  • Use dedicated, compartmentalized infrastructure for each engagement
  • Register domains that appear legitimate (aged domains, appropriate naming)
  • Use redirectors to hide team infrastructure from blue team analysis
  • Implement traffic encryption and domain fronting where appropriate
  • Separate engagement infrastructure from team management infrastructure

Execution OPSEC:

  • Time activities to blend with normal business operations
  • Mimic the target's legitimate network traffic patterns
  • Use living-off-the-land techniques before deploying custom tools
  • Monitor for indicators that the blue team has detected your presence
  • Have contingency plans for each phase of the operation

Data handling OPSEC:

  • Encrypt all collected data at rest and in transit
  • Never exfiltrate real sensitive data (use proof tokens or synthetic data)
  • Securely destroy all engagement data after the reporting period
  • Maintain chain of custody documentation for any evidence collected
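The "proof token" idea can be made concrete: instead of copying a sensitive file, record an HMAC over its hash and metadata, so access can be demonstrated and independently verified later without the data ever leaving the target. A hedged sketch; ENGAGEMENT_KEY and the payload fields are illustrative:

```python
import hashlib
import hmac
import json

# Per-engagement secret held by the red team lead (illustrative value).
ENGAGEMENT_KEY = b"rotate-me-per-engagement"

def proof_token(file_path: str, file_bytes: bytes, timestamp: float) -> dict:
    """Build a signed record proving access to a file without exfiltrating it."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    payload = json.dumps({"path": file_path, "sha256": digest, "ts": timestamp},
                         sort_keys=True)
    sig = hmac.new(ENGAGEMENT_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify(token: dict) -> bool:
    """Recompute the HMAC and compare in constant time."""
    expected = hmac.new(ENGAGEMENT_KEY, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

tok = proof_token("/data/patients.csv", b"contents hashed, never copied", 1700000000.0)
print(verify(tok))  # True
```

During the debrief, the organization can hash the same file and confirm the token matches, proving the red team reached the data without the data itself entering the report.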

35.3 MITRE ATT&CK for Red Teams

The MITRE ATT&CK framework has become the universal language for describing adversary behavior. Red teams use it for planning, execution, and reporting.

35.3.1 ATT&CK Framework Overview

ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) is a knowledge base of real-world adversary behaviors organized into:

  • Tactics: The adversary's tactical goals (what they are trying to achieve). There are 14 Enterprise tactics ranging from Reconnaissance through Impact.
  • Techniques: How adversaries achieve tactical goals. There are over 200 techniques.
  • Sub-techniques: More specific descriptions of techniques. There are over 400 sub-techniques.
  • Procedures: Specific implementations of techniques by threat groups or malware.

Enterprise tactics in order:

  1. Reconnaissance -- Gathering information for planning
  2. Resource Development -- Establishing infrastructure and capabilities
  3. Initial Access -- Gaining a foothold in the target network
  4. Execution -- Running adversary-controlled code
  5. Persistence -- Maintaining access across restarts
  6. Privilege Escalation -- Gaining higher-level permissions
  7. Defense Evasion -- Avoiding detection
  8. Credential Access -- Stealing credentials
  9. Discovery -- Understanding the environment
  10. Lateral Movement -- Moving through the network
  11. Collection -- Gathering data for exfiltration
  12. Command and Control -- Communicating with compromised systems
  13. Exfiltration -- Stealing data from the target
  14. Impact -- Manipulating, interrupting, or destroying data/systems

35.3.2 Using ATT&CK for Engagement Planning

The ATT&CK Navigator is a web-based tool for visualizing and planning technique coverage:

Step 1: Create a threat actor layer. Import or manually create a Navigator layer showing the techniques used by your target threat actor. ATT&CK provides pre-built layers for many groups.

Step 2: Create a detection coverage layer. Work with the blue team (or estimate independently) to create a layer showing which techniques the organization can currently detect.

Step 3: Overlay layers. The gap between the threat actor's techniques and the organization's detection coverage reveals the highest-risk areas to test.

Step 4: Plan technique chain. Select specific techniques to chain together into a realistic attack narrative. Consider technique dependencies (e.g., you need Initial Access before Lateral Movement).
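Steps 1 through 3 can be automated: given the actor's technique set and current detection coverage, emit a Navigator layer that scores only the gaps. A sketch following the Navigator layer file's basic shape; the layer name and comment text are assumptions:

```python
import json

def gap_layer(actor_techniques: set[str], detected: set[str]) -> dict:
    """Build an ATT&CK Navigator-style layer highlighting undetected actor TTPs."""
    gaps = sorted(actor_techniques - detected)
    return {
        "name": "Detection gaps",
        "domain": "enterprise-attack",
        "techniques": [
            {"techniqueID": t, "score": 1, "comment": "actor TTP with no detection"}
            for t in gaps
        ],
    }

actor = {"T1566.001", "T1059.001", "T1003.001", "T1021.002"}
covered = {"T1566.001", "T1021.002"}
layer = gap_layer(actor, covered)
print(json.dumps([t["techniqueID"] for t in layer["techniques"]]))
# ["T1003.001", "T1059.001"]
```

Saving the dict as JSON and importing it into the Navigator gives the overlay described in Step 3 without hand-editing layers.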

Example Attack Chain (Emulating APT29):
1. Initial Access: T1566.001 - Spearphishing Attachment
2. Execution: T1059.001 - PowerShell
3. Persistence: T1053.005 - Scheduled Task
4. Privilege Escalation: T1548.002 - Bypass UAC
5. Defense Evasion: T1070.004 - File Deletion
6. Credential Access: T1003.001 - LSASS Memory
7. Discovery: T1087.002 - Domain Account Discovery
8. Lateral Movement: T1021.002 - SMB/Windows Admin Shares
9. Collection: T1560.001 - Archive via Utility
10. C2: T1071.001 - Web Protocols
11. Exfiltration: T1048.002 - Exfiltration Over Asymmetric Encrypted Non-C2 Protocol
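When planning a chain like this, it helps to encode the tactic ordering and check that the planned sequence respects technique dependencies. A simplified planning check (real operations revisit tactics such as Discovery and C2 repeatedly, so treat strict ordering as a heuristic, not a rule):

```python
# Enterprise tactic ordering, mirroring the numbered list above.
TACTIC_ORDER = [
    "reconnaissance", "resource-development", "initial-access", "execution",
    "persistence", "privilege-escalation", "defense-evasion",
    "credential-access", "discovery", "lateral-movement", "collection",
    "command-and-control", "exfiltration", "impact",
]

# Planned chain as (tactic, technique) pairs -- a subset of the example above.
CHAIN = [
    ("initial-access", "T1566.001"),
    ("execution", "T1059.001"),
    ("persistence", "T1053.005"),
    ("credential-access", "T1003.001"),
    ("lateral-movement", "T1021.002"),
    ("exfiltration", "T1048.002"),
]

def chain_is_ordered(chain) -> bool:
    """True if the chain's tactics never move backwards in the kill chain."""
    idx = [TACTIC_ORDER.index(tactic) for tactic, _ in chain]
    return idx == sorted(idx)

print(chain_is_ordered(CHAIN))  # True
```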

35.3.3 ATT&CK-Based Reporting

Red team reports that map findings to ATT&CK provide enormous value to defenders:

Per-technique reporting:

  • Technique ID and name
  • Procedure used (specific tool, command, method)
  • Timestamp and target system
  • Detection result (detected / not detected / partially detected)
  • Detection source (if detected): EDR, SIEM, network monitoring, user report
  • Time to detect (if detected)
  • Recommended detection improvements

This structured reporting allows organizations to track their detection improvement over time, technique by technique.
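Per-technique records in this shape roll up directly into the summary numbers a program tracks over time. A minimal sketch with illustrative sample data:

```python
from collections import Counter

# Sample per-technique results; field names mirror the reporting list above.
results = [
    {"technique": "T1566.001", "detected": True,  "source": "email gateway"},
    {"technique": "T1059.001", "detected": True,  "source": "EDR"},
    {"technique": "T1003.001", "detected": False, "source": None},
    {"technique": "T1021.002", "detected": False, "source": None},
]

def coverage(results) -> float:
    """Fraction of tested techniques that were detected."""
    return sum(r["detected"] for r in results) / len(results)

def detections_by_source(results) -> Counter:
    """Which detection sources actually fired."""
    return Counter(r["source"] for r in results if r["detected"])

print(f"{coverage(results):.0%}", dict(detections_by_source(results)))
# 50% {'email gateway': 1, 'EDR': 1}
```

Re-running the same rollup after each engagement gives the technique-by-technique improvement trend the section describes.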

35.3.4 Atomic Red Team

Atomic Red Team is an open-source library of small, focused tests for individual ATT&CK techniques. Each "atomic test" is a discrete, self-contained test that exercises a single technique.

Benefits:

  • Tests individual techniques without requiring a full engagement
  • Can be automated for continuous validation
  • Provides a standardized, repeatable testing methodology
  • Maps directly to ATT&CK technique IDs

# Install Atomic Red Team (PowerShell)
IEX (IWR 'https://raw.githubusercontent.com/redcanaryco/invoke-atomicredteam/master/install-atomicredteam.ps1' -UseBasicParsing)
Install-AtomicRedTeam -getAtomics

# List available tests for a technique
Invoke-AtomicTest T1003.001 -ShowDetailsBrief

# Execute a specific atomic test
Invoke-AtomicTest T1003.001 -TestNumbers 1

# Execute and check if prerequisites are met
Invoke-AtomicTest T1003.001 -CheckPrereqs

# Clean up after testing
Invoke-AtomicTest T1003.001 -Cleanup

Warning

Atomic Red Team tests execute real adversary techniques. Only run them in authorized environments. Some tests can cause damage or trigger security alerts. Always understand what a test does before executing it.

35.4 Adversary Emulation and Simulation

Adversary emulation is the practice of mimicking a specific threat actor's behavior as closely as possible. It is the gold standard for realistic security testing.

35.4.1 Emulation vs. Simulation

Adversary Emulation: Manually or semi-automatically replaying a threat actor's known TTPs with high fidelity. The red team uses similar tools, infrastructure, and procedures to the real threat actor.

Adversary Simulation: Automated replay of threat actor behaviors using frameworks like Caldera, Prelude Operator, or commercial Breach and Attack Simulation (BAS) platforms. Lower fidelity but higher repeatability and coverage.

Fidelity spectrum:

| Level | Description | Use Case |
|---|---|---|
| High Fidelity | Custom tools matching threat actor's exact capabilities | Critical assessments, nation-state emulation |
| Medium Fidelity | Commercial C2 frameworks with threat actor's TTPs | Standard red team engagements |
| Low Fidelity | Automated BAS platforms running technique tests | Continuous validation, regression testing |

35.4.2 Command and Control (C2) Frameworks

C2 frameworks are the red team's primary operational platform. They provide the infrastructure for managing implants, executing commands, and maintaining access.

Commercial C2:

  • Cobalt Strike: The industry-standard commercial C2 framework. Its Beacon implant supports extensive post-exploitation capabilities, and Malleable C2 profiles allow customization of network indicators.
  • Nighthawk: Modern C2 designed for advanced red teams, with extensive evasion capabilities.

Open-Source C2:

  • Sliver: Written in Go; supports multiple implant types (session, beacon), multiple C2 protocols (mTLS, HTTP(S), DNS, WireGuard), and multi-player operation
  • Mythic: Modular C2 framework written in Go with a web-based UI; supports multiple agent types through a plugin architecture
  • Havoc: Modern, open-source C2 with a focus on evasion and modularity
  • Covenant: .NET-based C2 framework focused on offensive .NET tradecraft

C2 infrastructure design:

A mature red team C2 infrastructure includes:

[Team Server] → [Redirector 1] → [Target Network]
     |          [Redirector 2] → [Target Network]
     |          [Redirector 3] → [Target Network]
     |
[Management]  → [Short-haul C2] (interactive, high bandwidth)
     |          [Long-haul C2]  (beaconing, low and slow)
     |          [Backup C2]     (fallback if primary burns)

Redirectors serve multiple purposes:

  • Hide the true location of team servers
  • Provide multiple C2 channels for resilience
  • Enable geographic distribution to match legitimate traffic patterns
  • Allow rapid rotation if the blue team identifies indicators
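At its core, a redirector is a filtering decision: traffic matching the implant's profile is proxied to the team server, everything else gets a decoy. A sketch of that decision logic only; the beacon URI and User-Agent values are illustrative, and a real deployment would sit behind Apache/nginx rewrite rules or a small reverse proxy:

```python
# Illustrative values a malleable C2 profile might use; not from any real profile.
BEACON_PATHS = {"/jquery-3.3.1.min.js", "/api/v2/status"}
BEACON_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def route(path: str, user_agent: str) -> str:
    """Return 'teamserver' for traffic matching the implant profile, else 'decoy'.

    Unknown visitors (scanners, sandbox detonation, blue team analysis) only
    ever see the decoy site, hiding the team server's existence.
    """
    if path in BEACON_PATHS and user_agent == BEACON_UA:
        return "teamserver"
    return "decoy"

print(route("/jquery-3.3.1.min.js", BEACON_UA))  # teamserver
print(route("/", "curl/8.0"))                    # decoy
```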

35.4.3 Post-Exploitation Operations

Once initial access is achieved, the red team conducts post-exploitation operations:

Situational awareness:

# Host reconnaissance (Windows)
whoami /all
systeminfo
net user /domain
nltest /dclist:
ipconfig /all
tasklist /v
netstat -ano

# Network reconnaissance
net group "Domain Admins" /domain
net group "Enterprise Admins" /domain
Get-ADComputer -Filter * | Select-Object Name, DNSHostName   # requires the ActiveDirectory PowerShell module

Credential harvesting:

  • LSASS memory dumping (with appropriate safeguards)
  • Kerberoasting and AS-REP roasting
  • NTLM relay and credential forwarding
  • Keylogging (with ROE authorization)
  • Credential file discovery (password files, configuration files, browser storage)

Lateral movement:

  • PsExec and SMB-based execution
  • WMI and WinRM remoting
  • RDP with harvested credentials
  • Pass-the-hash and pass-the-ticket
  • DCOM-based execution

Data targeting:

  • File share enumeration and sensitive data discovery
  • Database access and query execution
  • Email access and search
  • Cloud resource access (AWS, Azure, GCP)

Blue Team Perspective: Focus your detection on post-exploitation behaviors, not just initial access. An attacker who gets initial access but is detected during lateral movement has been effectively contained. Invest in behavioral detections for credential access, discovery, and lateral movement techniques.

35.4.4 MITRE Caldera

Caldera is MITRE's free adversary emulation platform. It automates the execution of ATT&CK techniques using agents deployed on target systems.

Key features:

  • Abilities: Individual ATT&CK technique implementations
  • Adversary profiles: Collections of abilities that emulate specific threat actors
  • Operations: Automated execution of adversary profiles against target agents
  • Planning: Autonomous decision-making about which techniques to use next
  • Plugins: Extensible architecture for adding capabilities

# Example Caldera adversary profile
adversary:
  name: "APT29 Emulation"
  description: "Emulates APT29 (Cozy Bear) initial access and reconnaissance"
  atomic_ordering:
    - "T1059.001"  # PowerShell
    - "T1082"       # System Information Discovery
    - "T1083"       # File and Directory Discovery
    - "T1087.002"   # Domain Account Discovery
    - "T1003.001"   # OS Credential Dumping: LSASS Memory
    - "T1021.002"   # SMB/Windows Admin Shares

35.5 Physical Security Testing

Physical security is an integral component of red team operations. Many organizations invest heavily in network security while leaving physical security gaps that provide trivial access.

35.5.1 Physical Penetration Testing Methodology

Pre-engagement:

  • Obtain explicit written authorization for physical testing
  • Carry authorization documentation at all times (a "get out of jail free" letter)
  • Identify emergency contacts and escalation procedures
  • Establish safe words or code phrases for de-escalation
  • Understand local laws regarding trespassing, impersonation, and recording

Reconnaissance:

  • Photograph building exteriors, entrances, and security measures
  • Identify security cameras, guards, badge readers, and access control systems
  • Observe employee behavior patterns (smoking areas, lunch patterns, door propping)
  • Research building layouts using public sources (fire evacuation plans, permit records)
  • Identify service entrances, loading docks, and less monitored access points

Access techniques:

  1. Tailgating/Piggybacking: Following an authorized person through a controlled access point
  2. Social engineering: Impersonating delivery personnel, IT support, maintenance workers, or new employees
  3. Badge cloning: Copying RFID/NFC credentials from proximity card systems (HID iCLASS, MIFARE)
  4. Lock picking: Bypassing physical locks (with appropriate authorization and training)
  5. Door bypass: Using tools to manipulate door hardware (under-door tools, latch slipping)
  6. Dumpster diving: Searching discarded materials for sensitive information

35.5.2 Physical Security Assessment Areas

Perimeter security:

  • Fencing, gates, and barriers
  • Exterior lighting
  • Security cameras (coverage gaps, dummy cameras)
  • Vehicle access controls

Building access:

  • Card reader systems and their vulnerabilities
  • Visitor management procedures
  • Reception desk security awareness
  • Door security (locks, hinges, closers, alarms)
  • After-hours security

Interior security:

  • Server room access controls
  • Network closet security
  • Clean desk policy compliance
  • Document handling and disposal
  • USB/device policy enforcement
  • Screen lock compliance

Social engineering resilience:

  • Employee awareness of social engineering tactics
  • Willingness to challenge unknown individuals
  • Adherence to visitor escort policies
  • Reporting of suspicious behavior

35.5.3 Physical Testing Tools

Access control testing:

  • Proxmark3: RFID/NFC research and cloning tool
  • Flipper Zero: Multi-tool for hardware security testing
  • Lock pick sets: For authorized physical lock testing
  • Under-door tools: For bypassing door latches
  • Shove knives and travelers: For latch manipulation

Surveillance and reconnaissance:

  • Covert cameras for documenting security gaps
  • Directional microphones (where legally permitted)
  • Wi-Fi analysis tools for identifying nearby networks
  • Bluetooth scanners for identifying devices

Implant devices:

  • Network implants (LAN Turtle, Packet Squirrel)
  • USB Rubber Ducky / Bash Bunny for keystroke injection
  • Wireless access points for rogue AP deployment
  • Hardware keyloggers (with explicit authorization)

Warning

Physical security testing carries unique risks including confrontation with security personnel, law enforcement involvement, and personal safety concerns. Always carry authorization documentation. Never resist if confronted. Always prioritize safety over the engagement.

35.5.4 ShopStack Physical Assessment

ShopStack Running Example: During a red team engagement against ShopStack, you conduct physical security testing of their corporate headquarters. Wearing a vendor polo and carrying a clipboard, you enter through the main lobby during a busy Monday morning. The receptionist is handling a delivery and waves you through. You tailgate an employee through the badge-controlled door to the engineering floor. Over the next two hours, you photograph unlocked screens showing production dashboards, find a sticky note with database credentials on a developer's monitor, and plant a LAN Turtle network implant in an unused ethernet port in a conference room. The implant provides persistent remote access to ShopStack's internal network for the remainder of the engagement. None of these activities triggered any alerts or confrontations.

35.6 Purple Teaming and Collaborative Security

Purple teaming bridges the gap between adversary emulation and defensive improvement. It is not a separate team but a collaborative methodology.

35.6.1 What Is Purple Teaming?

Purple teaming is the practice of red and blue teams working together in real-time to improve detection and response capabilities. Rather than the red team operating in secret and revealing findings at the end, purple teaming involves:

  1. Joint planning: Red and blue teams agree on techniques to test
  2. Collaborative execution: Red team executes techniques while blue team observes
  3. Real-time feedback: Blue team reports what they can and cannot detect
  4. Iterative improvement: Teams work together to develop, test, and validate new detections
  5. Documentation: Both offensive procedures and defensive detections are documented

35.6.2 Purple Team Exercise Structure

Pre-exercise (1-2 weeks before):

  1. Select ATT&CK techniques to test based on threat intelligence
  2. Red team prepares procedures for each technique
  3. Blue team reviews current detection coverage for selected techniques
  4. Agree on exercise timeline and communication channels
  5. Set up tracking tools (Vectr, spreadsheets, ATT&CK Navigator)

Exercise day structure (per technique):

15 minutes: Red team briefs the technique and procedure
 5 minutes: Blue team predicts whether they can detect it
10 minutes: Red team executes the technique
15 minutes: Blue team searches for evidence in their tools
15 minutes: Joint discussion of results
    - If detected: Document the detection. Test evasion variants.
    - If not detected: Develop new detection logic together.

Post-exercise (1-2 weeks after):

  1. Red team documents all procedures used
  2. Blue team implements and validates new detections
  3. Joint report summarizing coverage improvements
  4. Update ATT&CK Navigator detection coverage layer
  5. Plan next exercise cycle

35.6.3 Detection Engineering Through Purple Teaming

Purple teaming drives detection engineering by providing real-world data to develop detections against:

Example: Detecting Kerberoasting (T1558.003)

Red team executes:

# Kerberoasting using Rubeus
Rubeus.exe kerberoast /outfile:hashes.txt

Blue team observes:

  • Windows Event ID 4769: Kerberos service ticket request
  • Anomalous encryption type: RC4_HMAC (0x17) instead of AES
  • Unusual volume of service ticket requests from a single account
  • Service tickets requested for accounts with SPNs

Blue team creates detection:

# Sigma rule for Kerberoasting detection
title: Potential Kerberoasting Activity
status: experimental
logsource:
  product: windows
  service: security
detection:
  selection:
    EventID: 4769
    TicketEncryptionType: '0x17'   # RC4_HMAC
  filter:
    ServiceName|endswith: '$'      # exclude machine accounts
  filter_krbtgt:
    ServiceName: 'krbtgt'
  timeframe: 5m
  condition: selection and not filter and not filter_krbtgt | count(ServiceName) by TargetUserName > 3
level: high
tags:
  - attack.credential_access
  - attack.t1558.003
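The same thresholding logic can be prototyped in a few lines and replayed against the red team's telemetry before a rule ships. A sketch assuming events have already been parsed into dicts keyed by Event ID 4769 field names:

```python
from collections import defaultdict

def kerberoast_suspects(events, threshold=3, window=300):
    """Flag accounts requesting > threshold RC4 service tickets within `window` seconds.

    Machine accounts (ServiceName ending in '$') and krbtgt are excluded,
    mirroring the intent of the Sigma rule's filters.
    """
    by_user = defaultdict(list)
    for e in events:
        svc = e["ServiceName"]
        if e["EventID"] != 4769 or e["TicketEncryptionType"] != "0x17":
            continue
        if svc.endswith("$") or svc == "krbtgt":
            continue
        by_user[e["TargetUserName"]].append(e["time"])
    suspects = set()
    for user, times in by_user.items():
        times.sort()
        for start in times:
            count = sum(1 for t in times if start <= t <= start + window)
            if count > threshold:
                suspects.add(user)
    return suspects

# Five RC4 service ticket requests from one account in five seconds.
events = [{"EventID": 4769, "TicketEncryptionType": "0x17",
           "ServiceName": f"svc{i}", "TargetUserName": "jdoe", "time": i}
          for i in range(5)]
print(kerberoast_suspects(events))  # {'jdoe'}
```

Replaying the red team's actual Rubeus run through a prototype like this confirms the threshold and filters before the logic is committed to the SIEM.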

Red team tests evasion:

# Kerberoasting with AES encryption to evade RC4 detection
Rubeus.exe kerberoast /tgtdeleg

Blue team enhances detection:

  • Monitor for unusual service ticket volume regardless of encryption type
  • Correlate with authentication anomalies
  • Implement honeypot SPN accounts

This iterative cycle produces robust, tested detections that are validated against real adversary behavior.

35.6.4 Measuring Purple Team Outcomes

Track these metrics across purple team exercises:

  • Detection coverage: Percentage of tested ATT&CK techniques detected
  • Mean time to detect (MTTD): Average time between technique execution and detection
  • Mean time to respond (MTTR): Average time between detection and containment
  • Detection improvement rate: New detections created per exercise
  • False positive rate: Percentage of new detections that generate false positives
  • Coverage trend: Detection coverage over time across exercises
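The first three metrics fall out of simple arithmetic over exercise records. A sketch with illustrative timestamps (epoch seconds; None means the technique was never detected or contained):

```python
from statistics import mean

# Sample exercise records; times are illustrative.
records = [
    {"technique": "T1059.001", "executed": 0,   "detected": 120,  "contained": 900},
    {"technique": "T1003.001", "executed": 0,   "detected": None, "contained": None},
    {"technique": "T1021.002", "executed": 100, "detected": 400,  "contained": 1000},
]

def detection_coverage(records) -> float:
    """Fraction of tested techniques that were detected at all."""
    return sum(r["detected"] is not None for r in records) / len(records)

def mttd(records) -> float:
    """Mean seconds from execution to detection, over detected techniques."""
    return mean(r["detected"] - r["executed"] for r in records if r["detected"] is not None)

def mttr(records) -> float:
    """Mean seconds from detection to containment, over contained techniques."""
    return mean(r["contained"] - r["detected"] for r in records if r["contained"] is not None)

print(detection_coverage(records), mttd(records), mttr(records))
# 0.6666666666666666 210 690
```

Tracked per exercise, these three numbers are the coverage trend the section recommends reporting to leadership.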

Vectr is a free tool from Security Risk Advisors designed specifically for tracking purple team exercises. It provides:

  • Campaign management for organizing exercises
  • Technique tracking with ATT&CK mapping
  • Detection coverage visualization
  • Historical trending and reporting
  • Export capabilities for leadership reporting

35.7 Advanced Red Team Techniques

Beyond the fundamentals, advanced red teams employ sophisticated techniques to challenge mature security organizations.

35.7.1 Custom Tool Development

Mature red teams develop custom tools to avoid detection by signature-based security controls:

Payload development considerations:

  • Avoid known malware signatures and behaviors
  • Use direct system calls instead of API calls that are hooked by EDR
  • Implement sleep obfuscation to evade memory scanning
  • Use legitimate execution methods (reflective loading, process hollowing, module stomping)
  • Develop custom C2 protocols that blend with normal traffic

Development languages for red team tools:

  • C/C++: Low-level control, direct syscalls, shellcode development
  • Rust: Memory safety without garbage collection; growing offensive tooling ecosystem
  • Go: Easy cross-compilation, large standard library; Sliver and many other tools are written in Go
  • C#/.NET: Runs natively on Windows; extensive offensive .NET ecosystem (SharpCollection)
  • Nim: Compiles to native code, Python-like syntax, growing offensive use

35.7.2 Cloud Red Teaming

Modern red team engagements must include cloud environments:

AWS-specific techniques:

  • IAM privilege escalation (iam:PassRole, lambda:CreateFunction)
  • Metadata service exploitation (IMDSv1/v2)
  • S3 bucket misconfiguration exploitation
  • Cross-account access abuse
  • SSM Session Manager for C2

Azure-specific techniques:

  • Azure AD enumeration and privilege escalation
  • Managed identity token theft
  • Key Vault access and secret extraction
  • Azure Functions for serverless C2
  • Conditional Access policy bypass

Tools:

  • Pacu: AWS exploitation framework
  • ROADtools: Azure AD enumeration
  • ScoutSuite: Multi-cloud security auditing
  • Prowler: AWS/Azure/GCP security assessment

35.7.3 Identity-Based Attacks

Modern red teams focus heavily on identity systems:

Active Directory attacks:

  • Kerberoasting and AS-REP roasting
  • DCSync for credential harvesting
  • Golden and Silver Ticket attacks
  • Delegation abuse (constrained, unconstrained, resource-based)
  • Certificate Services (AD CS) abuse: ESC1-ESC8 attacks
  • Shadow Credentials and Key Trust attacks

Azure AD / Entra ID attacks:

  • Consent grant phishing
  • Application permission abuse
  • Device code phishing
  • Token theft and replay
  • PRT (Primary Refresh Token) abuse

35.7.4 Evasion Techniques

Endpoint Detection and Response (EDR) evasion: - Direct system calls (bypassing user-mode API hooks) - Syscall proxying and indirect syscall techniques - ETW (Event Tracing for Windows) patching - AMSI bypass techniques - Sleep obfuscation (encrypting implant memory during sleep) - Module stomping and phantom DLL loading

Network detection evasion:

- Domain fronting and CDN abuse
- Malleable C2 profiles mimicking legitimate services
- DNS-over-HTTPS for C2
- Encrypted channels with legitimate certificates
- Traffic timing manipulation
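Traffic timing manipulation is the simplest of these to illustrate. A generic sketch of jittered check-in scheduling, not tied to any particular framework's malleable profile:

```python
import random

def next_checkin_delay(base_seconds=300.0, jitter=0.4, rng=None):
    """Pick the next beacon sleep uniformly within base +/- jitter fraction,
    so call-backs do not form the fixed-period pattern that beaconing
    analytics key on."""
    rng = rng or random.Random()
    return rng.uniform(base_seconds * (1 - jitter),
                       base_seconds * (1 + jitter))
```

The same arithmetic explains the defensive counter: beacon detection should examine the distribution of inter-arrival times per host pair, not look for exact periodicity.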

Blue Team Perspective: Do not try to win the evasion arms race with any single control. Layer your defenses so that attackers must evade multiple independent detection mechanisms simultaneously. Combine endpoint telemetry, network monitoring, identity analytics, and behavioral baselines. An attacker who evades your EDR should still be caught by network monitoring or identity analytics.

35.8 Reporting and Debrief

The value of a red team engagement is ultimately determined by the quality of its reporting and the improvements it drives.

35.8.1 Red Team Report Structure

An effective red team report includes:

Executive Summary (1-2 pages):

- Engagement objectives and threat scenario
- Key findings in business impact terms
- Overall risk assessment
- Top recommendations

Engagement Overview:

- Scope, timeline, and rules of engagement
- Threat actor profile and emulation rationale
- Team composition and tools used
- Limitations and caveats

Attack Narrative:

- Chronological account of the engagement
- Each phase: what was attempted, what succeeded, what was detected
- ATT&CK technique mapping for every action
- Evidence (screenshots, logs, artifacts)

Detection and Response Assessment:

- Which techniques were detected and how
- Which techniques were not detected
- Time-to-detect for each detected technique
- Quality of blue team response when alerts fired
- ATT&CK Navigator visualization of detection coverage

Findings and Recommendations:

- Prioritized list of findings
- Each finding includes: description, evidence, business impact, ATT&CK mapping, remediation
- Short-term, medium-term, and long-term recommendations
- Investment priorities based on maximum risk reduction

Technical Appendices:

- Detailed tool output and command logs
- Indicators of Compromise (IOCs) from the engagement
- Network diagrams of attack paths
- Raw ATT&CK Navigator layers
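Raw Navigator layers are just JSON, so coverage visualizations can be generated straight from engagement results. A minimal sketch (the field names follow the common layer format, but verify against the Navigator version you deploy; the technique IDs shown are hypothetical engagement results):

```python
import json

def navigator_layer(name, detected, missed):
    """Build a minimal ATT&CK Navigator layer scoring detected (1)
    versus missed (0) techniques."""
    techniques = [
        {"techniqueID": t, "score": 1, "comment": "detected"}
        for t in sorted(detected)
    ] + [
        {"techniqueID": t, "score": 0, "comment": "not detected"}
        for t in sorted(missed)
    ]
    return {
        "name": name,
        "domain": "enterprise-attack",
        "description": "Detection coverage from red team engagement",
        "techniques": techniques,
    }

# Hypothetical results: two techniques detected, one missed
layer = navigator_layer("Q3 engagement",
                        detected={"T1059.001", "T1003.006"},
                        missed={"T1558.003"})
layer_json = json.dumps(layer, indent=2)  # ready to load into Navigator
```

Generating layers programmatically keeps the appendix reproducible and lets the blue team diff coverage between engagements.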

35.8.2 The Debrief Process

Operational debrief (Red + Blue teams):

- Walk through the engagement chronologically
- Red team reveals what they did and why
- Blue team shares what they detected and what they missed
- Collaborative discussion of detection improvement opportunities
- This is NOT about blame. It is about learning and improvement.

Executive debrief (Leadership):

- Present findings in business risk terms
- Use the attack narrative to make the risk tangible
- Provide clear recommendations with effort/impact analysis
- Request specific investments or organizational changes

Lessons learned:

- What worked well (both offense and defense)?
- What surprised both teams?
- What processes need improvement?
- What investments would have the greatest impact?

35.8.3 Continuous Improvement Cycle

Red teaming should not be a one-time event. Mature organizations implement a continuous improvement cycle:

  1. Threat assessment -- Identify relevant threat actors and TTPs
  2. Red team exercise -- Test detection and response against those TTPs
  3. Gap analysis -- Identify detection and response gaps
  4. Purple team development -- Build and validate new detections collaboratively
  5. Automated validation -- Implement continuous testing with BAS/Atomic Red Team
  6. Re-assessment -- Return to step 1 with updated threat intelligence

Student Home Lab Exercise: Build a purple team lab:

  1. Set up a small Active Directory domain (2 DCs, 5 workstations) in VMs
  2. Install Sysmon with a detection-focused configuration
  3. Deploy a SIEM (ELK stack or Wazuh) to collect and analyze logs
  4. Install Atomic Red Team on a workstation
  5. Execute techniques one at a time, checking detection in your SIEM after each
  6. Write Sigma or SIEM-native rules for each technique you test
  7. Track your coverage on an ATT&CK Navigator layer
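Step 6 can be prototyped offline before touching a SIEM. A toy evaluator for the heart of a Sigma rule -- a selection of field/value pairs that must all match an event; real Sigma adds modifiers, wildcards, case-insensitivity, and a condition expression, and the rule shown is hypothetical:

```python
def matches_selection(event, selection):
    """Return True when every field in the selection matches the event.
    Supports exact match and a simplified `|contains` modifier only."""
    for field, expected in selection.items():
        if field.endswith("|contains"):
            name = field[: -len("|contains")]
            if str(expected) not in str(event.get(name, "")):
                return False
        elif event.get(field) != expected:
            return False
    return True

# Hypothetical rule: Sysmon Event 10 (process access) targeting lsass.exe,
# telemetry an Atomic Red Team credential-dumping test should light up
lsass_access = {"EventID": 10, "TargetImage|contains": "lsass.exe"}
```

Running your Atomic Red Team output through a matcher like this makes it obvious whether a missed detection is a telemetry gap or a rule-logic gap before you port the rule to your SIEM's native language.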

Blue Team Perspective: After every red team engagement, measure your mean time to detect (MTTD) and mean time to respond (MTTR) for each phase of the attack. These are your most important security metrics. Track them over time. Every engagement should show improvement. If MTTD and MTTR are not improving, you have a systemic problem that technology alone cannot solve.
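The MTTD/MTTR bookkeeping described above is simple to automate. A minimal sketch, assuming you record three timestamps per attack phase (the timestamps below are hypothetical):

```python
from datetime import datetime, timedelta

def _mean(deltas):
    # average a list of timedeltas
    return sum(deltas, timedelta()) / len(deltas)

def mttd_mttr(phases):
    """phases: one (attack_start, detected_at, responded_at) tuple per
    attack phase of the engagement. Returns (MTTD, MTTR) as timedeltas."""
    mttd = _mean([detected - start for start, detected, _ in phases])
    mttr = _mean([responded - detected for _, detected, responded in phases])
    return mttd, mttr

# Hypothetical engagement: two phases with their three timestamps each
phases = [
    (datetime(2025, 3, 1, 10, 0), datetime(2025, 3, 1, 10, 20), datetime(2025, 3, 1, 11, 20)),
    (datetime(2025, 3, 1, 12, 0), datetime(2025, 3, 1, 12, 40), datetime(2025, 3, 1, 13, 0)),
]
mttd, mttr = mttd_mttr(phases)
```

Tracking these two numbers per phase, per engagement, gives you the trend line the chapter argues is your most important security metric.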

Summary

This chapter covered the comprehensive discipline of red team operations. We began by distinguishing red teaming from penetration testing, understanding that red teaming tests detection and response capabilities through realistic adversary emulation, not just vulnerability discovery. We explored engagement planning, including rules of engagement, threat profile selection, and operational security requirements.

We then examined the MITRE ATT&CK framework in depth as the common language for red team operations, looking at how to use it for engagement planning, execution tracking, and structured reporting. We covered adversary emulation using both manual techniques and automated platforms like Caldera and Atomic Red Team. We explored C2 frameworks, infrastructure design, and post-exploitation operations.

Physical security testing was covered as an integral component of red team operations, including access techniques, tool usage, and the unique safety considerations of physical assessments. We examined purple teaming as the collaborative bridge between offense and defense, with detailed methodology for conducting exercises and measuring outcomes.

The chapter concluded with advanced techniques including custom tool development, cloud red teaming, identity-based attacks, and EDR evasion, followed by comprehensive guidance on reporting and establishing continuous improvement cycles. Red teaming, when done well, is the most powerful tool an organization has for understanding and improving its real-world security posture.

In the next chapter, we turn to Bug Bounty Hunting, where individual researchers apply many of these same skills to discover vulnerabilities across the internet's most valuable targets.