> "Without a methodology, you're just hacking randomly. With a methodology, you're conducting a professional security assessment." --- Georgia Weidman, Penetration Testing
Learning Objectives
- Understand and apply PTES, OSSTMM, and OWASP testing methodologies
- Plan and scope penetration testing engagements effectively
- Draft comprehensive rules of engagement documentation
- Execute testing with quality assurance processes
- Navigate PCI DSS penetration testing requirements
- Meet CREST, CHECK, and other accreditation standards
Chapter 38: Penetration Testing Methodology and Standards
You have spent the preceding thirty-seven chapters learning how to break things. You know how to enumerate networks, exploit vulnerabilities, escalate privileges, pivot through Active Directory forests, and chain web application flaws into full compromise. But raw technical skill is only half the equation. Without a rigorous methodology, your findings will be inconsistent, your reports will miss critical issues, and your clients will have no confidence that the assessment was thorough.
This chapter is about turning you from a skilled hacker into a professional penetration tester. We will examine the major industry methodologies --- PTES, OSSTMM, the OWASP Testing Guide --- and show how they provide structure, repeatability, and defensible results. We will walk through the entire engagement lifecycle, from the first scoping call to the final deliverable, with real-world templates and examples drawn from MedSecure Health Systems, ShopStack, and countless lessons learned from the field.
If the earlier chapters taught you what to test and how to test it, this chapter teaches you why methodology matters and how to ensure that every engagement meets the standards your clients, regulators, and industry bodies demand.
38.1 PTES, OSSTMM, and OWASP Testing Guides
Three major methodologies dominate the penetration testing industry. Each has different strengths, and experienced practitioners typically blend elements from all three. Understanding their philosophies, structures, and intended use cases is essential for any professional engagement.
38.1.1 The Penetration Testing Execution Standard (PTES)
PTES emerged from the frustration of penetration testers who found that no standard adequately described how a pentest should actually work from start to finish. Published by a consortium of experienced security professionals, PTES defines seven phases:
1. Pre-engagement Interactions. Everything that happens before the first scan: scoping calls, contracts, rules of engagement, emergency contacts, legal authorization. PTES dedicates significant attention to this phase because poor scoping is the number one cause of failed engagements.

2. Intelligence Gathering. Passive and active reconnaissance, OSINT, and target profiling. PTES distinguishes between three levels of intelligence gathering: Level 1 (automated, passive), Level 2 (manual, mixed), and Level 3 (deep-dive, potentially involving social engineering reconnaissance).

3. Threat Modeling. Identifying the most likely attack paths based on the intelligence gathered. PTES uses a structured approach that maps business assets to threat actors, creating attack trees that guide the testing phase. This is a step many less rigorous methodologies skip entirely.

4. Vulnerability Analysis. Both automated scanning and manual testing to identify vulnerabilities. PTES explicitly distinguishes between active testing, passive testing, and validation activities.

5. Exploitation. Attempting to leverage identified vulnerabilities to demonstrate impact. PTES emphasizes precision over breadth: the goal is not to exploit everything possible but to demonstrate meaningful business impact through selected exploitation paths.

6. Post-Exploitation. Determining the value of compromised systems, pivoting, data exfiltration simulation, and persistence testing. PTES is one of the few methodologies that explicitly defines post-exploitation activities.

7. Reporting. Deliverable creation, including executive summary and technical findings. PTES defines both report structure and quality criteria.
Strengths of PTES:
- Comprehensive coverage of the entire engagement lifecycle
- Explicit inclusion of threat modeling and post-exploitation
- Defined intelligence-gathering levels (Level 1 through Level 3) that help scope effort
- Technical guidelines that complement the methodology with specific tool usage

Limitations of PTES:
- The standard has not been significantly updated since its initial release
- Technical guidelines can become outdated as tools and techniques evolve
- Less prescriptive about specific test cases than OWASP
MedSecure in Practice: When we scoped the MedSecure engagement in Chapter 1, we followed PTES pre-engagement interactions. The scoping call identified critical assets (patient database, medical device network, provider portal), emergency contacts (CISO Dr. Sarah Chen's mobile number), and explicit restrictions (no testing of active medical devices during patient care hours). This pre-engagement rigor prevented what could have been a dangerous situation: an inexperienced tester might have fuzzed a connected infusion pump.
38.1.2 The Open Source Security Testing Methodology Manual (OSSTMM)
OSSTMM, maintained by the Institute for Security and Open Methodologies (ISECOM), takes a fundamentally different approach from PTES. Rather than defining a pentest lifecycle, OSSTMM defines a framework for measuring the operational security of any system.
OSSTMM version 3 organizes testing around five channels:
- Human Security. Social engineering, physical social interactions, and personnel security testing.
- Physical Security. Access controls, environmental security, and physical safeguards.
- Wireless Communications. Radio frequency, infrared, Bluetooth, and other wireless protocols.
- Telecommunications. Voice, fax, PBX, and VoIP systems.
- Data Networks. Traditional network and application testing.
For each channel, OSSTMM defines specific test cases organized around the concept of Operations Security (OpSec) and introduces the rav (Risk Assessment Value), a quantitative metric that measures the balance between controls, limitations, and actual security posture.
The rav Calculation:
OSSTMM's signature contribution is the rav, which provides a numerical score for operational security:
rav = Controls / (Porosity + Limitations) × 100

(This is a simplified statement of the balance; the full OSSTMM calculation is considerably more involved.)

Where:
- Porosity represents visibility, access, and trust points, i.e., the interaction surface available to an attacker
- Controls include authentication, encryption, alarm, and filtering mechanisms
- Limitations include vulnerabilities, weaknesses, and exposures
A rav above 100 indicates the target has more protection than exposure --- it is "above par." A rav below 100 means exposure exceeds protection.
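The balance the rav expresses can be illustrated with a toy helper. This is a deliberate simplification for intuition only; the actual OSSTMM calculation weights individual control categories and is considerably more involved.

```python
def rav_score(controls: float, exposure: float) -> float:
    """Simplified illustration of the rav balance.

    A score of 100 means protection (controls) and exposure
    (porosity plus limitations) are in equilibrium; above 100,
    protection exceeds exposure. The real OSSTMM calculation is
    considerably more involved than this ratio.
    """
    if exposure <= 0:
        raise ValueError("exposure must be positive")
    return controls / exposure * 100


# A target with more protection than exposure scores "above par":
print(rav_score(controls=120, exposure=100))  # 120.0
```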
Strengths of OSSTMM:
- Quantitative metrics (rav) provide measurable, comparable results
- Channel-based approach ensures comprehensive coverage beyond just networks
- Focuses on operational security rather than vulnerability counting
- Strong emphasis on testing repeatability and verifiability

Limitations of OSSTMM:
- Steeper learning curve than PTES or OWASP
- The rav calculation, while valuable, can be complex to implement
- Less widely adopted in North American commercial pentesting
- Can feel academic compared to the practical focus of PTES
38.1.3 The OWASP Testing Guide
The OWASP Testing Guide (currently version 4.2, with version 5 in development) is the definitive reference for web application penetration testing. Unlike PTES and OSSTMM, which cover all testing domains, OWASP focuses specifically on web application security.
The Testing Guide organizes its test cases around eleven categories:
| Category | Example Tests |
|---|---|
| Information Gathering | Fingerprint web server, review application architecture |
| Configuration and Deployment Management | Test network/infrastructure, test file extensions |
| Identity Management | Test user registration, test account provisioning |
| Authentication | Test for default credentials, test password reset |
| Authorization | Test directory traversal, test IDOR |
| Session Management | Test session timeout, test cookie attributes |
| Input Validation | Test for XSS, SQLi, SSTI, command injection |
| Error Handling | Test for stack traces, test custom error messages |
| Cryptography | Test for weak SSL/TLS, test for sensitive data in clear text |
| Business Logic | Test for business logic data validation, test workflow bypass |
| Client-Side | Test DOM-based XSS, test JavaScript execution, test clickjacking |
Each test case in the OWASP Testing Guide follows a consistent structure:
- Summary: What the test evaluates
- Test Objectives: Specific goals of the test
- How to Test: Step-by-step instructions with tools and techniques
- Remediation: How to fix identified issues
- References: Links to relevant standards and resources
Strengths of the OWASP Testing Guide:
- Extremely detailed, prescriptive test cases for web applications
- Regularly updated by a large community of contributors
- Directly maps to the OWASP Top 10 and ASVS
- Tool-agnostic: describes techniques rather than prescribing specific tools
- Free and open source

Limitations of the OWASP Testing Guide:
- Focused exclusively on web applications (not network, wireless, or physical)
- Can be overwhelming: version 4.2 contains over 90 individual test cases
- Does not define engagement lifecycle (scoping, contracts, etc.)
- Some test cases require significant expertise to execute properly
ShopStack Application: When we tested ShopStack's React/Node.js e-commerce platform in Parts 4 and 6, we mapped our testing against OWASP Testing Guide categories. This ensured we didn't miss the business logic tests (like testing whether coupon codes could be applied multiple times or whether the checkout flow could be manipulated to change prices) that purely automated tools would never catch.
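What automated tools miss here is easy to see in miniature. The toy model below is a hypothetical cart, not ShopStack's actual code; it contains exactly the coupon-stacking flaw a business-logic test case is designed to catch.

```python
class Cart:
    """Toy checkout model with a business-logic flaw: it records
    which coupon codes were applied but never checks the record."""

    def __init__(self, total: float):
        self.total = total
        self.applied: list[str] = []  # populated but never consulted

    def apply_coupon(self, code: str, percent_off: float) -> None:
        # A correct implementation would reject codes already in
        # self.applied; this one silently stacks the discount.
        self.total *= 1 - percent_off / 100
        self.applied.append(code)


# The manual test case: apply the same code twice, compare totals.
cart = Cart(total=100.00)
cart.apply_coupon("SAVE20", 20)
first_total = cart.total
cart.apply_coupon("SAVE20", 20)  # should be rejected, isn't

# Finding: discounts stack, so the second application lowered the price.
assert cart.total < first_total
```

No scanner payload triggers this; the tester has to understand the intended business rule ("one coupon per order") and deliberately violate it.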
38.1.4 NIST SP 800-115 and Other Government Methodologies
For testers working in US government and defense contexts, NIST Special Publication 800-115 (Technical Guide to Information Security Testing and Assessment) serves as the primary methodology reference.
NIST 800-115 Structure: NIST 800-115 organizes security testing into three categories:
1. Testing. Hands-on evaluation of systems and networks. This includes:
   - Network scanning and discovery
   - Vulnerability scanning
   - Penetration testing
   - Password cracking
   - Social engineering
   - Wireless security testing

2. Examination. Reviewing documentation, logs, rulesets, and system configurations. This includes:
   - Policy and procedure review
   - Log review and analysis
   - Ruleset review (firewalls, IDS/IPS)
   - System configuration review

3. Interviewing. Discussing security practices with system administrators, developers, and management.
Key Differences from PTES: NIST 800-115 differs from PTES in several important ways:
- It covers a broader range of assessment activities beyond penetration testing
- It places more emphasis on examination and interview activities
- It is more prescriptive about the assessment planning and reporting processes
- It explicitly addresses the government regulatory context (FISMA, FedRAMP)
- It was last significantly updated in 2008, making some technical details dated
Other Government Frameworks:
NSA/IARPA Red Team Assessment Guidelines: Used for testing classified systems and critical national infrastructure. Details are not publicly available, but the existence of these guidelines influences the expectations of government clients.
DISA STIG Testing: The Defense Information Systems Agency publishes Security Technical Implementation Guides (STIGs) that define specific configuration requirements for government systems. Penetration testers working in DoD environments are expected to test against STIG requirements.
FedRAMP Penetration Testing Guidance: The Federal Risk and Authorization Management Program (FedRAMP) publishes specific guidance for penetration testing of cloud service providers seeking government authorization. This guidance requires testing that covers web applications, APIs, network infrastructure, and cloud-specific attack vectors.
ShopStack Government Ambitions: ShopStack recently decided to pursue FedRAMP authorization to sell their e-commerce platform to government agencies. This means their next penetration test must follow FedRAMP penetration testing guidance, which requires a Third-Party Assessment Organization (3PAO) to conduct the testing. The scope expands beyond their standard assessment to include cloud infrastructure testing, application security testing, and specific tests for FedRAMP-required controls.
38.1.5 ISSAF (Information Systems Security Assessment Framework)
The ISSAF, maintained by the Open Information Systems Security Group (OISSG), provides another methodology option, though it is less widely adopted than PTES or OWASP. ISSAF is notable for its extremely detailed technical procedures --- it provides specific command-line instructions for common testing activities, making it useful as a reference during testing.
ISSAF organizes testing into nine domains:
1. Information gathering
2. Network mapping
3. Vulnerability identification
4. Penetration (exploitation)
5. Gaining access and privilege escalation
6. Enumerating further
7. Compromise remote users/sites
8. Maintaining access
9. Covering tracks
Each domain includes detailed technical procedures with specific tool usage. While the tool references can become outdated, the structured approach provides a useful checklist for ensuring comprehensive testing.
38.1.6 Choosing and Combining Methodologies
In practice, professional penetration testers rarely follow a single methodology in isolation. The most effective approach combines elements from each:
- PTES for the overall engagement lifecycle (pre-engagement through reporting)
- OSSTMM for quantitative measurement and ensuring channel coverage
- OWASP Testing Guide for web application test cases
A typical engagement might flow like this:
- Follow PTES pre-engagement interactions for scoping and contracting
- Use PTES intelligence gathering and threat modeling phases
- Leverage OWASP Testing Guide test cases for web application assessment
- Apply OSSTMM channel categories to ensure wireless, physical, and telecom testing aren't overlooked
- Use PTES post-exploitation and reporting phases
- Calculate OSSTMM rav scores for quantitative executive reporting
Professional Tip: Many penetration testing firms develop their own internal methodology that synthesizes elements from PTES, OSSTMM, and OWASP. This is perfectly acceptable as long as the methodology is documented, repeatable, and can be mapped back to recognized standards when clients or auditors ask about your approach.
38.2 Planning and Scoping Engagements
Poor scoping kills more engagements than poor hacking. A brilliantly executed pentest against the wrong targets, at the wrong depth, or without proper authorization is worse than no pentest at all. This section covers the scoping process in detail.
38.2.1 The Scoping Call
Every engagement begins with a scoping call (or series of calls) between the testing team and the client. The scoping call has several objectives:
Understanding Business Context:
- What is the client's industry and regulatory environment?
- What triggered this engagement (compliance requirement, incident response, new deployment)?
- What are the crown jewel assets --- the data or systems that would cause the most damage if compromised?
- What is the client's security maturity level?

Defining Scope:
- Which networks, IP ranges, and domains are in scope?
- Which applications and APIs will be tested?
- Are cloud environments (AWS, Azure, GCP) included?
- Is social engineering in scope? Physical testing?
- Are third-party systems or shared infrastructure involved?

Identifying Constraints:
- Testing windows and blackout periods
- Systems that must not be disrupted (production databases, medical devices, financial processing)
- Geographic or jurisdictional restrictions
- Rate limiting and bandwidth constraints
- Notification requirements (will the blue team know testing is happening?)

Logistics:
- Timeline: start date, duration, report delivery date
- Communication channels and frequency
- Point of contact and escalation path
- VPN credentials, test accounts, or other access provisions
- Emergency stop procedure
MedSecure Scoping Example: The MedSecure scoping call revealed several critical constraints. Their patient portal (portal.medsecure.example.com) processes live patient data, so we needed to test against a staging environment that mirrored production. Medical devices on the 10.10.50.0/24 subnet were completely off limits during business hours (6 AM to 8 PM). Marcus Torres, the sysadmin, would be our primary technical contact, with Dr. Sarah Chen as the escalation point. We agreed on a two-week testing window with daily status updates via encrypted email.
38.2.2 Engagement Types
The scope must clearly define the engagement type, as this fundamentally affects methodology, effort, and deliverables.
Black Box Testing: The tester receives minimal information --- perhaps just a company name or a list of IP addresses. They must discover everything else through reconnaissance. This simulates an external attacker with no inside knowledge.
- Advantage: Most realistic simulation of an external threat
- Disadvantage: Time-intensive reconnaissance phase; may miss internal vulnerabilities
- Best for: External network assessments, red team exercises
White Box Testing: The tester receives comprehensive information: network diagrams, source code, credentials, architecture documents. This enables the deepest possible analysis.
- Advantage: Most thorough coverage; finds vulnerabilities that would take attackers weeks to discover
- Disadvantage: Less realistic; doesn't test detection capabilities
- Best for: Application security assessments, code review, compliance testing
Gray Box Testing: The tester receives partial information --- perhaps network ranges and basic credentials, but no source code or detailed architecture. This is the most common engagement type.
- Advantage: Balances thoroughness with realism; efficient use of testing time
- Disadvantage: Coverage depends heavily on what information is provided
- Best for: Most commercial penetration tests, PCI DSS assessments
38.2.3 Effort Estimation
Accurate effort estimation prevents scope creep, ensures adequate coverage, and sets appropriate client expectations. Experienced practitioners estimate effort based on several factors:
Network Assessment Effort:
| Factor | Low | Medium | High |
|---|---|---|---|
| IP addresses in scope | < 50 | 50-500 | 500+ |
| Network segments | 1-2 | 3-5 | 6+ |
| Active Directory forests | 0-1 | 1-2 | 3+ |
| Testing approach | Black box | Gray box | White box |
Web Application Effort:
| Factor | Low | Medium | High |
|---|---|---|---|
| User roles | 1-2 | 3-5 | 6+ |
| Unique pages/endpoints | < 20 | 20-100 | 100+ |
| API endpoints | < 10 | 10-50 | 50+ |
| Authentication mechanisms | 1 | 2-3 | 4+ |
| Business logic complexity | Simple | Moderate | Complex |
A rule of thumb for experienced testers: a straightforward external network pentest with 50 IPs takes 3-5 days. A complex web application with multiple user roles and business logic takes 5-10 days. An internal network with Active Directory takes 5-10 days. A full-scope engagement combining all three could take 15-25 days.
Warning
These are estimates for experienced testers. Junior testers need significantly more time. Never quote effort based on your best day --- quote based on your average day with contingency for unexpected complexity.
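The rules of thumb above can be folded into a rough planning aid. Every weight below is an illustrative assumption, not an industry constant; calibrate against your own team's history before using anything like this to quote work.

```python
def estimate_days(ips: int = 0, roles: int = 0, endpoints: int = 0,
                  has_ad: bool = False) -> float:
    """Very rough effort heuristic for an experienced tester.

    All weights are illustrative assumptions for planning
    conversations, not standards. Junior testers should scale up.
    """
    days = 2.0                    # baseline: setup, verification, wrap-up
    days += ips / 15              # ~50 external IPs adds roughly 3 days
    days += roles * 0.75          # each user role multiplies test paths
    days += endpoints / 25        # web/API surface area
    if has_ad:
        days += 4                 # AD enumeration and attack-path work
    return round(days * 1.35, 1)  # add ~35% for reporting and QA


# A 50-IP external assessment lands in the 3-5 testing-day range
# before the reporting uplift:
print(estimate_days(ips=50))
```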
38.2.4 The Statement of Work
The scoping call culminates in a Statement of Work (SOW) or proposal that formally documents:
- Scope definition: Precise listing of in-scope and out-of-scope targets
- Engagement type: Black/gray/white box
- Testing approach: Methodology to be followed
- Timeline: Start date, duration, milestones, report delivery date
- Deliverables: What the client will receive (report, raw data, debrief presentation)
- Effort and cost: Person-days and total cost
- Assumptions and dependencies: What the client must provide (VPN access, test accounts, etc.)
- Limitations and exclusions: What is explicitly not covered
- Terms and conditions: Liability, confidentiality, data handling
The SOW should be reviewed by both parties' legal teams before testing begins. Never start testing with only a verbal agreement.
38.2.5 Pricing Models and Commercial Considerations
Understanding how penetration testing is priced helps both testers and clients set appropriate expectations.
Time-and-Materials (T&M): The most common model for consulting firms. The client pays for a defined number of person-days at a daily or hourly rate. Advantages include flexibility to adjust scope and transparency in effort. Disadvantages include unpredictable final cost and the client's concern that the tester might "stretch" the engagement.
Typical rates (US market, 2026):
- Junior tester: $1,200-$1,800/day
- Mid-level tester: $1,800-$2,800/day
- Senior tester: $2,800-$4,000/day
- Specialist (red team, cloud, IoT): $3,500-$5,500/day
Fixed-Price: The tester quotes a flat fee for a defined scope. Advantages include budget predictability for the client. Disadvantages include the risk that scope complexity is underestimated, potentially rushing the assessment or requiring change orders.
Common fixed-price packages:
- External network assessment (up to 50 IPs): $8,000-$15,000
- Web application assessment (standard complexity): $12,000-$25,000
- Internal network + AD assessment: $15,000-$30,000
- Full-scope external + internal + web: $25,000-$60,000
Retainer/Subscription: Some organizations purchase testing on a retainer basis, with a set number of testing days per quarter or year. This model works well for clients with ongoing testing needs and provides predictable revenue for the testing firm.
Bug Bounty: Pay-per-vulnerability model, typically managed through platforms like HackerOne or Bugcrowd. The organization pays only for validated findings, but has less control over testing methodology and coverage (covered in detail in Chapter 36).
Professional Tip: When pricing an engagement, always include time for reporting. A common mistake is quoting five days for testing and then spending two additional (unquoted) days writing the report. A general rule: add 30-40% to your testing time estimate for report writing and QA.
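As a worked example of that rule, using a hypothetical mid-level day rate from the ranges above: five days of testing does not quote out at five days of fees.

```python
testing_days = 5
day_rate = 2_200                       # hypothetical mid-level rate (USD)

# Apply the 30-40% reporting-and-QA rule; 35% used here.
reporting_days = testing_days * 0.35   # 1.75 days
total_days = testing_days + reporting_days

quote = total_days * day_rate
print(f"{total_days:.2f} days -> ${quote:,.0f}")  # 6.75 days -> $14,850
```

Quoting the bare five days ($11,000) and then spending two unbilled days on the report is the mistake the tip above is warning against.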
38.2.6 Client Communication During Scoping
The scoping process reveals important information about the client's security maturity and expectations. Pay attention to these signals:
Red Flags:
- "We just need a quick scan" (they may not understand what penetration testing involves)
- "Our last tester found nothing" (either their last tester was not thorough, or they have unrealistic expectations)
- "We don't have budget for more than two days" (the scope may be too small for meaningful testing)
- "Can you start tomorrow?" (they may not have internal processes to support testing)
- "We'd prefer if you don't find anything critical" (a fundamental misunderstanding of the engagement's purpose)
Green Flags:
- They provide detailed technical documentation during scoping
- They have conducted previous assessments and can share reports
- They have a dedicated security team ready to support testing
- They ask about your methodology and qualifications
- They are clear about what they want to learn from the assessment
38.3 Rules of Engagement Documentation
Rules of Engagement (RoE) are the single most important document in any penetration testing engagement. They define what you can do, what you cannot do, when you can do it, and what happens when things go wrong. The RoE protects both the tester and the client.
38.3.1 Essential RoE Components
A comprehensive RoE document includes:
Authorization:
- Written authorization from someone with legal authority to approve testing
- Identification of the authorizing individual (name, title, contact information)
- Scope of authorization (which systems, networks, applications)
- Date range of authorization
- Explicit statement that the tester is authorized to attempt to identify and exploit vulnerabilities

Scope Definition:
- In-scope IP addresses, domains, URLs, and applications (explicit listing)
- Out-of-scope items (with reasons)
- Any systems that require special handling (fragile systems, production databases)

Testing Parameters:
- Testing hours and days (e.g., "Monday through Friday, 8 AM to 6 PM EST only")
- Blackout periods (e.g., "No testing during end-of-month financial processing, March 28-31")
- Permitted testing techniques (scanning, exploitation, social engineering, physical)
- Prohibited actions (DoS testing, data destruction, testing of specific systems)
- Maximum exploitation depth (e.g., "Exploitation is permitted but data exfiltration of actual patient records is not")

Communication:
- Primary and secondary points of contact (with phone numbers and email)
- Escalation path for emergencies
- Frequency and format of status updates
- Incident notification requirements (e.g., "Critical vulnerabilities must be reported within 4 hours of discovery")
- Secure communication channel specification (encrypted email, Signal, etc.)

Emergency Procedures:
- "Stop" command: How the client can halt testing immediately
- Contact information available 24/7 during the testing window
- Procedures if testing causes an outage or disruption
- Procedures if the tester discovers evidence of an actual breach in progress
- Procedures if the tester discovers illegal content (e.g., CSAM on a compromised system)

Data Handling:
- How testing data will be stored during the engagement
- Encryption requirements for data in transit and at rest
- Data retention period
- Data destruction procedures after the engagement
- Handling of sensitive data encountered during testing (PII, PHI, financial records)
38.3.2 Sample Rules of Engagement Template
Below is a condensed template based on the format used for the MedSecure engagement:
RULES OF ENGAGEMENT
Penetration Testing Engagement
Client: MedSecure Health Systems
Tester: [Security Firm Name]
Engagement ID: MS-2026-PT-001
1. AUTHORIZATION
This document authorizes [Security Firm] to conduct penetration testing
against MedSecure Health Systems as described herein.
Authorizing Party: Dr. Sarah Chen, CISO
Authorization Date: [Date]
Testing Window: [Start Date] through [End Date]
2. SCOPE
In Scope:
- External: 203.0.113.0/24 (corporate), portal.medsecure.example.com
- Internal: 10.10.10.0/24 (corporate LAN), 10.10.20.0/24 (server VLAN)
- Web Applications: Patient Portal (staging), Provider Dashboard (staging)
- Active Directory: medsecure.local domain
Out of Scope:
- 10.10.50.0/24 (medical device network) -- patient safety
- Third-party SaaS applications (Salesforce, Office 365)
- Physical facilities
- Social engineering of patients
3. TESTING PARAMETERS
- Testing Hours: Monday-Friday, 0600-2000 EST
- Permitted: Network scanning, exploitation, privilege escalation,
password attacks, web application testing, AD attacks
- Prohibited: Denial of service, modification of patient records,
testing of active medical devices, social engineering of patients
- Max Exploitation: Full compromise permitted; access to patient data
must be documented but not exfiltrated beyond test network
4. COMMUNICATION
- Primary Contact: Marcus Torres, Sysadmin
Phone: [number] | Email: [email]
- Escalation: Dr. Sarah Chen, CISO
Phone: [number] | Email: [email]
- Status Updates: Daily at 1700 EST via encrypted email
- Critical Finding Notification: Within 4 hours via phone + encrypted email
5. EMERGENCY PROCEDURES
- Stop Command: Email subject "[STOP] MS-2026-PT-001" to both contacts
- 24/7 Emergency: [Phone number]
- If active breach discovered: Immediately notify client, halt testing,
preserve evidence
- If illegal content discovered: Immediately notify client and tester's
legal counsel
6. DATA HANDLING
- All testing data encrypted with AES-256 at rest
- Data transmitted via VPN or encrypted email only
- Retention: 90 days after report delivery
- Destruction: Certificate of destruction provided after retention period
Signatures:
Client: _________________________ Date: _______
Tester: _________________________ Date: _______
38.3.3 Get-Out-of-Jail-Free Letters
In addition to the RoE, penetration testers --- especially those conducting physical or social engineering assessments --- should carry a "get-out-of-jail-free" letter. This is a concise document (typically one page) that:
- Identifies the tester by name and photograph
- States that they are authorized to conduct security testing
- Provides the name and phone number of the authorizing executive
- Includes the engagement dates
- Instructs anyone who discovers the tester to call the listed contact for verification
This letter has saved many a penetration tester from arrest during physical assessments. It should be carried on the tester's person during all on-site activities.
Real-World Lesson: In 2019, two penetration testers from Coalfire were arrested while conducting an authorized physical assessment of a courthouse in Iowa. Although the testers carried written authorization, the county sheriff arrested and charged them. The charges were eventually dropped, but the incident exposed gaps in how authorization chains work across government entities. Always ensure your authorization comes from someone with actual authority over the specific systems and facilities you're testing.
38.4 Testing Execution and Quality Assurance
A solid methodology and thorough scoping are meaningless if the testing itself is sloppy. This section covers how to execute testing professionally, maintain quality throughout, and avoid common pitfalls.
38.4.1 Pre-Testing Setup
Before the first scan packet leaves your machine, complete these preparation steps:
Environment Preparation:
- Update your testing OS and all tools
- Verify VPN connectivity and access credentials
- Confirm your source IP addresses with the client
- Set up your project directory structure for organized note-taking
- Configure time synchronization (critical for log correlation)
- Test your screenshot tool and screen recording setup

Documentation Setup:
- Create your engagement notebook (many testers use CherryTree, Obsidian, or Notion)
- Prepare finding templates
- Set up your evidence collection directory structure:
MS-2026-PT-001/
├── 01-recon/
│ ├── nmap-scans/
│ ├── osint/
│ └── notes.md
├── 02-enumeration/
│ ├── service-enumeration/
│ ├── web-enumeration/
│ └── notes.md
├── 03-exploitation/
│ ├── evidence/
│ ├── screenshots/
│ └── notes.md
├── 04-post-exploitation/
│ ├── evidence/
│ ├── screenshots/
│ └── notes.md
├── 05-findings/
│ ├── finding-001-sqli-patient-portal.md
│ ├── finding-002-default-creds-admin.md
│ └── ...
└── 06-report/
├── draft-v1.md
└── final.pdf
Verification:
- Scan a known safe target to verify your tools work
- Confirm that your scanning traffic appears in the client's logs (have the client verify)
- Run a quick connectivity test to all in-scope targets
38.4.2 Structured Testing Approach
Follow your methodology phases systematically. Resist the temptation to jump straight to exploitation when you spot something interesting during reconnaissance. A structured approach ensures:
- Completeness: Every in-scope target gets tested
- Consistency: The same tests are applied across similar systems
- Traceability: Every finding can be traced back to the test case that discovered it
- Efficiency: Reconnaissance informs exploitation, preventing wasted effort
Phase Gates:
Implement phase gates --- checkpoints between methodology phases where you review progress, verify coverage, and plan the next phase:
| Phase Gate | Review Items |
|---|---|
| After Recon | Have all in-scope targets been identified? Any new scope questions? |
| After Enumeration | Are all services enumerated? All web applications mapped? |
| After Vulnerability Analysis | Are all identified vulnerabilities validated? False positives removed? |
| After Exploitation | Have exploitation attempts been documented? Evidence captured? |
| After Post-Exploitation | Has business impact been demonstrated? Cleanup performed? |
38.4.3 Real-Time Documentation
The single most important quality habit is documenting as you go, not after the fact. Real-time documentation means:
- Every command you run is logged (use the script utility on Linux or terminal logging in your notes tool)
- Every finding is documented immediately with:
- Timestamp
- Affected system(s)
- Vulnerability description
- Steps to reproduce
- Evidence (screenshots, command output)
- Initial risk rating
- Every unexpected event is recorded (service crashes, unexpected access, network issues)
- Every client communication is documented (emails, phone calls, decisions)
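One way to make "document immediately" frictionless is a helper that stamps out a finding stub with the fields above pre-filled. This is a minimal sketch: the file naming follows the 05-findings/ convention shown earlier, while the function name, arguments, and template wording are illustrative, not a standard:

```python
from datetime import datetime, timezone
from pathlib import Path

# Sketch of a finding-stub generator. It captures the fields listed above
# (timestamp, affected systems, description, reproduction steps, evidence,
# initial rating) the moment a finding is confirmed, so nothing waits for
# end-of-day write-ups.
FINDING_TEMPLATE = """# {finding_id}: {title}

- Timestamp: {timestamp}
- Affected system(s): {systems}
- Initial risk rating: {rating}

## Vulnerability Description
{description}

## Steps to Reproduce
(fill in as you test -- exact commands and parameters)

## Evidence
(screenshots and command output -- reference files in evidence/)
"""

def new_finding(findings_dir: str, number: int, slug: str, title: str,
                systems: str, rating: str, description: str) -> Path:
    finding_id = f"F-{number:03d}"
    path = Path(findings_dir) / f"finding-{number:03d}-{slug}.md"
    path.write_text(FINDING_TEMPLATE.format(
        finding_id=finding_id,
        title=title,
        # UTC timestamps avoid ambiguity when correlating with client logs.
        timestamp=datetime.now(timezone.utc).isoformat(timespec="seconds"),
        systems=systems,
        rating=rating,
        description=description,
    ))
    return path
```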
Professional Tip: Many experienced testers use a dual-screen setup: one screen for testing, one for documentation. Some use screen recording software (OBS Studio) as a backup, recording their entire testing session so they can go back and capture evidence they missed in real time.
38.4.4 Quality Assurance During Testing
Quality assurance is not just a post-testing activity. Build QA into your testing process:
Daily QA Checks:
- Review the day's findings for completeness
- Verify all evidence is properly stored and labeled
- Check testing coverage against scope --- are you on track?
- Review for false positives: can every finding be independently verified?

Peer Review: For significant engagements, a second tester should review key findings:
- Can they reproduce the finding using your documentation?
- Is the risk rating appropriate?
- Are there related issues the primary tester might have missed?
- Is the evidence clear and convincing?

Scope Compliance: Regularly verify that you're staying within scope:
- Check that all tested IPs and domains are in scope
- Review your tool configurations to ensure they're not scanning out-of-scope targets
- Verify that your testing is within authorized hours
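The IP-address portion of a scope-compliance check is easy to automate against your tool output. The sketch below uses Python's ipaddress module; the CIDR ranges are illustrative placeholders, not a real engagement scope:

```python
import ipaddress

# Sketch of an automated scope-compliance spot check: every IP address that
# appears in your tool output should fall inside an authorized range.
# These CIDR blocks are placeholders -- load your real scope from the SoW.
IN_SCOPE = [ipaddress.ip_network(n) for n in ("10.10.10.0/24", "10.10.30.0/24")]

def out_of_scope(tested_ips):
    """Return the tested addresses that fall outside every in-scope range."""
    return [ip for ip in tested_ips
            if not any(ipaddress.ip_address(ip) in net for net in IN_SCOPE)]

# Example: feed it the hosts extracted from the day's scan logs and treat
# any non-empty result as a stop-and-investigate event.
```

Running a check like this against each day's scan logs turns scope compliance from a memory exercise into a mechanical one.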
38.4.5 Common Testing Pitfalls
Experienced testers learn to avoid these common mistakes:
The Rabbit Hole: You find one interesting vulnerability and spend three days on it while neglecting the rest of the scope. Set time limits for each target and phase.
Tool Dependency: Running Nessus and Burp Suite and treating their output as the complete assessment. Automated tools are starting points, not endpoints. Manual testing is where the critical findings live.
The Trophy Hunter: Focusing only on critical/high findings and ignoring medium/low issues. Your client deserves complete coverage, and a chain of medium-severity issues can often be combined into a critical attack path.
Poor Evidence: A screenshot of a vulnerability scan result is not evidence. Evidence should include the exact steps to reproduce the issue, the proof of exploitation, and the demonstrated impact.
Scope Creep: The client asks you to "also test this other application" mid-engagement. Stop. Document the request, update the scope in writing, get updated authorization, and then proceed.
Testing Fatigue: By day eight of a ten-day engagement, you're tired and cutting corners. Schedule your most important testing early in the engagement, and build in breaks.
38.4.6 Client Status Updates
Professional communication with the client during testing is essential. A structured status update cadence keeps the client informed and prevents surprises.
Daily Status Updates: Most engagements include daily status updates, typically via encrypted email at the end of each testing day. A daily update should include:
Subject: [ENGAGEMENT-ID] Daily Status Update - Day [N]
Summary:
- Today's testing focus: [What was tested]
- Key observations: [What was found, at a high level]
- Critical findings: [Any findings requiring immediate notification]
- Tomorrow's plan: [What will be tested next]
- Blockers: [Any issues requiring client action]
- Scope questions: [Any clarifications needed]
Overall Progress: [X]% of scope completed (estimated)
Critical Finding Notifications: When you discover a critical vulnerability during testing, notify the client immediately --- do not wait for the daily status update or the final report. This notification should:
- Use the emergency communication channel defined in the RoE
- Describe the finding at a high level (enough to convey urgency)
- Recommend immediate interim mitigation if possible
- Be followed by a detailed finding in the final report
MedSecure Critical Notification: When we discovered the SQL injection in the Patient Portal (F-001), we immediately called Marcus Torres and sent an encrypted email to both Marcus and Dr. Chen within two hours of confirmation. The email described the finding, explained that patient data was accessible, and recommended immediately restricting access to the search function while the development team implemented a fix. MedSecure deployed a temporary fix (disabling the search function) within four hours of our notification.
Weekly Summaries: For multi-week engagements, provide a weekly summary that aggregates daily findings and provides a broader view of testing progress, coverage, and preliminary risk assessment. This summary helps the client prepare for the final report and begin planning remediation before the engagement concludes.
38.4.7 Post-Testing Activities
Testing does not end when you stop scanning. Several post-testing activities are essential:
Evidence Cleanup:
- Remove any tools, scripts, or payloads deployed on client systems
- Reset any passwords changed during testing
- Delete any test accounts created
- Remove any persistence mechanisms (unless explicitly asked to leave them for blue team training)
- Document all cleanup activities

Data Reconciliation:
- Verify that all testing data is properly organized in your evidence directory
- Cross-reference your testing notes against the scope to identify any targets that were not tested
- Identify any open questions or ambiguities that need client clarification
Initial Debrief: Many engagements include an informal debrief call immediately after testing concludes (before the formal report is delivered). This gives the client early visibility into the most critical findings and allows the tester to ask clarifying questions that improve the report.
38.5 PCI DSS Penetration Testing Requirements
The Payment Card Industry Data Security Standard (PCI DSS) is one of the most common drivers for penetration testing engagements. Understanding its specific requirements is essential for any tester who works with organizations that process credit card data.
38.5.1 PCI DSS Overview for Pentesters
PCI DSS version 4.0 (effective March 2024, with its future-dated requirements becoming mandatory by March 2025) contains twelve requirements for protecting cardholder data. Requirement 11.4 specifically mandates penetration testing.
The requirement states that organizations must:
- Perform external and internal penetration testing at least once every twelve months
- Perform penetration testing after any significant change to the cardholder data environment (CDE)
- Use a qualified internal resource or qualified external third party for testing
- Correct exploitable vulnerabilities found during testing and retest to verify corrections
38.5.2 PCI DSS Pentest Scope
The scope of a PCI DSS penetration test is defined by the Cardholder Data Environment (CDE):
In Scope:
- All systems that store, process, or transmit cardholder data
- All systems connected to the CDE
- All network segments connected to the CDE
- All systems providing security services to the CDE (firewalls, IDS/IPS, authentication servers)
Segmentation Testing: If the organization uses network segmentation to reduce PCI scope, the penetration test must verify that segmentation controls are effective. This means testing from outside the CDE to verify that out-of-scope systems cannot reach CDE systems. Segmentation testing is required every six months for service providers.
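A basic segmentation spot check can be sketched as code. This is deliberately minimal and assumption-laden: it only attempts TCP connects from the tester's current network position toward example CDE addresses, and a real segmentation test must also cover UDP, ICMP, and the full port range:

```python
import socket

# Minimal sketch of a segmentation spot check: run from a host on a non-CDE
# segment, attempt TCP connections toward CDE addresses. Any successful
# connect is a potential segmentation gap to investigate manually.
# The target IPs echo the chapter's example payment VLAN; the port list is
# an illustrative sample, not full coverage.
CDE_TARGETS = ["10.10.30.10", "10.10.30.20"]
PORTS = [22, 80, 443, 445, 3389]

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, filtered, or timed out -- from this vantage point the
        # port is not reachable.
        return False

def segmentation_gaps():
    """Return (host, port) pairs reachable from the current segment."""
    return [(h, p) for h in CDE_TARGETS for p in PORTS if reachable(h, p)]
```

Repeating a check like this from each non-CDE segment (corporate LAN, medical device network, Wi-Fi) gives you a reproducible record of exactly which vantage points were tested.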
38.5.3 PCI DSS Testing Methodology Requirements
PCI DSS 4.0 Requirement 11.4.1 specifies that the penetration testing methodology must:
- Be based on an industry-accepted approach (PTES, OSSTMM, NIST SP 800-115, or equivalent)
- Include coverage for the entire CDE perimeter and critical systems
- Include testing from both inside and outside the network
- Include testing to validate any segmentation and scope-reduction controls
- Define application-layer penetration tests to include, at minimum, the OWASP Top 10 vulnerabilities
- Define network-layer penetration tests to include both network and operating system components
38.5.4 What Assessors Look For
Qualified Security Assessors (QSAs) reviewing penetration test reports will check for:
- Methodology documentation: Was a recognized methodology followed and documented?
- Scope adequacy: Does the test cover the entire CDE?
- Internal and external testing: Were both perspectives tested?
- Segmentation validation: If segmentation is claimed, was it verified?
- Application testing: Were OWASP Top 10 issues tested in web applications?
- Remediation verification: Were found vulnerabilities corrected and retested?
- Tester qualifications: Is the tester appropriately qualified?
- Test independence: Was the test conducted independently of system administrators?
MedSecure PCI Context: MedSecure processes patient co-payments via credit card, making them subject to PCI DSS. Their CDE includes the payment terminal in the front office (10.10.30.10), the payment processing server (10.10.30.20), and the firewall rules isolating the payment VLAN. Our pentest scope for PCI compliance specifically included segmentation testing: we verified from the corporate LAN, the medical device network, and the Wi-Fi network that none could reach the payment VLAN. We documented one finding where the medical device network had an overly permissive firewall rule that allowed traffic to the payment server on port 443 --- a segmentation failure that would have resulted in a significant expansion of PCI scope.
38.5.5 Common PCI Pentest Failures
Organizations frequently fail their PCI penetration tests for predictable reasons:
Inadequate Segmentation: The most common failure. Organizations claim network segmentation reduces their PCI scope, but the penetration test reveals that the segmentation is incomplete. Common causes include:
- Firewall rules that allow traffic from non-CDE to CDE networks
- Shared management interfaces (a single vCenter managing both CDE and non-CDE VMs)
- DNS servers that bridge network segments
- Monitoring systems with access to CDE systems
Default Credentials: Payment terminals, payment application admin interfaces, and supporting infrastructure (databases, application servers) running with default credentials.
Missing Patches: CDE systems with critical patches missing, especially in environments where "it's PCI compliant" is assumed to mean "it's fully patched."
Weak Application Security: The payment-related web application has SQL injection, cross-site scripting, or broken authentication --- straight from the OWASP Top 10.
Inadequate Logging: Not a direct pentest finding, but penetration testing often reveals that the organization's logging is insufficient to detect the tester's activities, which means they would also miss a real attacker.
38.5.6 PCI DSS 4.0 Changes for Penetration Testing
Version 4.0 introduced several changes relevant to penetration testers:
- Customized Approach: Organizations can now demonstrate compliance through customized controls rather than strictly following defined requirements. Penetration testers may be asked to validate custom security controls.
- Enhanced Authentication Requirements: Multi-factor authentication requirements expanded, meaning pentesters should specifically test MFA implementation on all CDE access points.
- Targeted Risk Analysis: Organizations must perform targeted risk analysis to determine the frequency of certain activities. Penetration testing frequency may be driven by risk analysis rather than the default twelve-month cycle.
- Automated Technical Solutions: There is increased emphasis on automated mechanisms for detection and protection. Penetration testers should evaluate whether automated detection systems actually work.
38.6 CREST and CHECK Standards
While PTES, OSSTMM, and OWASP define what to test, accreditation bodies like CREST and CHECK define who can test and to what standard. Understanding these accreditation frameworks is essential for working in regulated environments, particularly in the UK, Europe, and Asia-Pacific.
38.6.1 CREST (Council of Registered Ethical Security Testers)
CREST is an international not-for-profit accreditation body that certifies both individuals and companies performing penetration testing, cyber incident response, threat intelligence, and security operations center (SOC) services.
CREST Company Accreditation:
To become a CREST-accredited company, an organization must:
- Employ qualified professionals. A minimum number of staff must hold CREST certifications at the appropriate level.
- Demonstrate robust processes. The company must have documented methodologies, quality assurance processes, data handling procedures, and reporting standards.
- Pass a company assessment. CREST assessors review the company's technical capabilities, business processes, and sample deliverables.
- Maintain accreditation. Annual reviews ensure ongoing compliance.
CREST accreditation levels for companies include:
- CREST Penetration Testing: Standard commercial pentesting
- CREST STAR (Simulated Targeted Attack and Response): Advanced red team/adversary simulation
- CREST Vulnerability Assessment: Automated and manual vulnerability assessment
CREST Individual Certifications:
| Certification | Level | Focus |
|---|---|---|
| CREST Practitioner Security Analyst (CPSA) | Entry | Foundational knowledge |
| CREST Registered Penetration Tester (CRT) | Intermediate | Infrastructure testing |
| CREST Certified Web Application Tester (CCT App) | Advanced | Web application testing |
| CREST Certified Infrastructure Tester (CCT Inf) | Advanced | Infrastructure testing |
| CREST Certified Simulated Attack Manager (CCSAM) | Expert | Red team management |
| CREST Certified Simulated Attack Specialist (CCSAS) | Expert | Red team technical execution |
The CREST exams are known for their difficulty and practical focus. The CCT-level exams, in particular, are heavily practical: candidates must demonstrate their ability to compromise systems in a controlled environment, then write a professional report on their findings.
38.6.2 CHECK (IT Health Check)
CHECK is a UK government scheme managed by the National Cyber Security Centre (NCSC, part of GCHQ). CHECK-approved companies and consultants are authorized to perform penetration testing of UK government systems and critical national infrastructure.
CHECK operates at two levels:
- CHECK Team Leader (CTL): Can lead and oversee CHECK assessments. Requires passing CREST CCT (Inf and/or App) plus additional NCSC assessment. Must hold appropriate security clearance.
- CHECK Team Member (CTM): Can perform CHECK assessments under a CTL's supervision. Requires CREST CRT plus NCSC assessment.
CHECK Assessment Standards: CHECK assessments follow the NCSC's IT Health Check framework, which defines:
- Standard scope templates for government systems
- Minimum testing requirements
- Report format and content requirements
- Data handling requirements (including security clearance requirements)
- Quality assurance procedures
CHECK is significant because many UK government contracts require CHECK-approved testers. Similar schemes exist in other sectors and regions --- for example, CBEST in the UK financial sector and TIBER-EU for European financial institutions.
38.6.3 CBEST and TIBER Frameworks
For critical national infrastructure and financial institutions, more intensive testing frameworks exist:
CBEST: Developed by the Bank of England, CBEST is a threat intelligence-led penetration testing framework for UK financial institutions. Key features:
- Testing is driven by bespoke threat intelligence about threats specific to the target organization
- Tests simulate realistic attack scenarios based on current threat actor capabilities
- Both the threat intelligence provider and the penetration testing provider must be CREST-accredited
- Scope typically includes people, processes, and technology
- The financial institution's board must be aware of and authorize the testing

TIBER-EU: The European Central Bank's Threat Intelligence-Based Ethical Red Teaming (TIBER) framework extends the CBEST concept across the EU:
- Each EU member state can implement TIBER-EU as a national framework (e.g., TIBER-NL in the Netherlands, TIBER-DE in Germany)
- Testing follows three phases: preparation, testing, and closure
- Requires separate threat intelligence and red team providers
- Findings are shared with national financial regulators
DORA (Digital Operational Resilience Act): The EU's DORA regulation, effective January 2025, requires certain financial entities to conduct advanced threat-led penetration testing using the TIBER-EU framework at least every three years. This has significantly increased demand for CREST-accredited testers in Europe.
38.6.4 Other National and International Standards
NIST SP 800-115: The National Institute of Standards and Technology's "Technical Guide to Information Security Testing and Assessment" is widely referenced in US government and defense contexts. While not an accreditation framework, it provides a methodology standard that many RFPs reference.
ISO 27001 / ISO 27002: ISO 27001 Annex A.8.8 requires organizations to manage technical vulnerabilities, and ISO 27002 provides guidance that includes penetration testing. While ISO doesn't accredit penetration testers, many organizations require their testers to follow ISO-aligned processes.
PCI SSC Penetration Testing Guidance: The PCI Security Standards Council published specific penetration testing guidance that supplements PCI DSS Requirement 11.4. This guidance provides detailed expectations for PCI penetration tests.
SOC 2: SOC 2 Type II audits often reference penetration testing as evidence for the Security trust service criterion. The AICPA doesn't define penetration testing standards, but auditors evaluate whether the testing methodology is appropriate and the results are addressed.
Professional Tip: When responding to RFPs or proposals, always reference the specific standards and accreditations relevant to the client's regulatory environment. A US healthcare client cares about NIST and HIPAA. A UK government client cares about CHECK. A European bank cares about CREST, TIBER-EU, and DORA. Speaking the client's regulatory language demonstrates maturity and builds trust.
38.6.5 Mapping Standards to Engagement Types
| Engagement Type | Primary Standard | Accreditation | Regulatory Driver |
|---|---|---|---|
| Commercial pentest (US) | PTES / NIST 800-115 | OSCP, GPEN | Varies by industry |
| PCI DSS pentest | PTES + PCI guidance | PCI QSA recognition | PCI DSS 11.4 |
| UK government IT health check | CHECK framework | CHECK CTL/CTM | UK government policy |
| UK financial sector | CBEST | CREST STAR | Bank of England |
| EU financial sector | TIBER-EU | CREST | DORA / ECB |
| General commercial (UK/EU) | OWASP + PTES | CREST CRT/CCT | Client requirement |
| US government / DoD | NIST 800-115 | Varies | FISMA, RMF |
38.6.6 Building Your Own Internal Methodology
While external standards provide the framework, every mature penetration testing practice develops its own internal methodology that synthesizes elements from multiple standards. Building an internal methodology requires several components:
Methodology Document: A living document that describes your firm's testing approach, organized by engagement type:
- External network penetration testing methodology
- Internal network and Active Directory methodology
- Web application testing methodology
- Cloud security testing methodology
- Red team / adversary simulation methodology
- Social engineering methodology
- Physical security testing methodology

Each section should define:
- Pre-requisites and preparation steps
- Testing phases with specific activities
- Tool usage guidelines (approved tools, configuration standards)
- Quality checkpoints between phases
- Evidence collection standards
- Reporting requirements
Checklists: Convert methodology phases into actionable checklists that testers can follow during engagements. Checklists ensure consistency across team members and reduce the risk of missing critical testing activities. A well-designed checklist for web application testing might include 200+ individual checks, organized by OWASP Testing Guide category.
Templates: Standardize common documents:
- Scoping questionnaire
- Rules of Engagement template
- Statement of Work template
- Daily status update template
- Finding template
- Report template
- Engagement closure checklist

Training Program: New testers should undergo a structured onboarding process that covers the internal methodology. This typically includes:
- Paired testing: junior testers work alongside senior testers for several engagements
- Internal lab assessments: complete a simulated engagement using the internal methodology
- Report review: write findings reviewed by a senior tester before they appear in client reports
- Certification milestones: achieve relevant certifications (OSCP, CREST CRT) within defined timelines
Professional Tip: Document your methodology as if a new hire with no context needs to follow it. If a step says "enumerate the target," that is too vague. If it says "run Nmap SYN scan against all TCP ports (-sS -p-), followed by service version detection on discovered ports (-sV -p [discovered ports])," that is actionable. The most useful internal methodologies read like detailed playbooks, not high-level process descriptions.
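As an illustration of that level of specificity, the two-stage scan from the tip can be encoded as command builders. This is a sketch: the output paths and file naming are assumptions for illustration, and actually running -sS requires root privileges and written authorization:

```python
# Sketch of the playbook step from the tip above, expressed as code that
# builds the two Nmap command lines. Encoding commands this way keeps tool
# usage consistent across testers; pass the lists to subprocess.run() when
# you are authorized to execute them.

def syn_scan_cmd(target: str) -> list[str]:
    # Stage 1: SYN scan of all 65535 TCP ports, with output saved in all
    # formats (-oA) under the recon directory convention used earlier.
    return ["nmap", "-sS", "-p-",
            "-oA", f"nmap-scans/{target}-allports", target]

def version_scan_cmd(target: str, open_ports: list[int]) -> list[str]:
    # Stage 2: service/version detection restricted to the ports that
    # stage 1 actually found open, sorted for reproducible command lines.
    ports = ",".join(str(p) for p in sorted(open_ports))
    return ["nmap", "-sV", "-p", ports,
            "-oA", f"nmap-scans/{target}-versions", target]
```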
38.6.7 Methodology Maintenance and Continuous Improvement
An internal methodology that is not regularly updated becomes a liability. Attack techniques evolve, new tools emerge, and client environments change. Build a process for methodology maintenance:
Quarterly Reviews: Every quarter, review the methodology for:
- New attack techniques published by researchers
- New tools or updated versions of existing tools
- Lessons learned from recent engagements
- Client feedback on testing coverage or quality
- Changes in compliance requirements

Post-Engagement Retrospectives: After significant engagements, conduct a brief retrospective:
- What worked well in the methodology?
- What was missing? Were there gaps in coverage?
- Were any methodology steps unnecessary or outdated?
- What would we do differently next time?

Community Input: Monitor the security community for methodology improvements:
- Conference presentations on testing techniques
- Published research on new attack vectors
- Updates to external standards (PTES, OWASP, NIST)
- Peer firm publications and thought leadership
38.7 Chapter Summary
Methodology is what separates a professional penetration tester from a hobbyist hacker. In this chapter, we have covered the complete landscape of testing standards and professional practice.
Key Concepts Reviewed
Methodologies:
- PTES provides an end-to-end engagement lifecycle with seven defined phases
- OSSTMM offers quantitative security measurement through the rav and channel-based testing
- The OWASP Testing Guide delivers prescriptive web application test cases across eleven categories
- Professional testers combine methodologies, using PTES for lifecycle, OWASP for web applications, and OSSTMM for measurement

Engagement Planning:
- The scoping call is the most important conversation in any engagement
- Engagement types (black/gray/white box) fundamentally affect methodology and effort
- Effort estimation must account for scope complexity, testing depth, and tester experience
- The Statement of Work formally documents scope, timeline, deliverables, and cost

Rules of Engagement:
- RoE documents are the single most important protection for both tester and client
- They must cover authorization, scope, parameters, communication, emergencies, and data handling
- "Get-out-of-jail-free" letters protect testers during physical assessments
- Authorization must come from someone with actual authority over the systems being tested

Testing Quality:
- Pre-testing setup ensures tools work and documentation structures are in place
- Structured phase-based testing with phase gates ensures complete coverage
- Real-time documentation prevents evidence gaps and supports quality reporting
- Daily QA checks, peer review, and scope compliance monitoring maintain quality

PCI DSS:
- Requirement 11.4 mandates internal and external penetration testing annually
- PCI pentests must cover the entire CDE, validate segmentation, and test OWASP Top 10
- Common failures include inadequate segmentation, default credentials, and weak application security
- PCI DSS 4.0 introduced customized approaches and enhanced authentication requirements

Accreditation:
- CREST certifies individuals and companies for penetration testing
- CHECK authorizes testing of UK government systems
- CBEST and TIBER-EU provide frameworks for financial sector testing
- DORA requires TIBER-EU-based testing for certain EU financial entities
- Different regulatory environments demand different standards and accreditations
What's Next
In Chapter 39, we will put methodology into practice by learning how to write effective penetration testing reports. The best testing in the world is worthless if you can't communicate your findings clearly. We will dissect report structure, learn to write for both technical and executive audiences, and develop the evidence documentation skills that turn a good pentest into a great deliverable.
The MedSecure Methodology in Practice
To bring everything together, let us trace how the MedSecure engagement followed methodology from start to finish:
Pre-Engagement (Week -2): We conducted a scoping call with Dr. Sarah Chen and Marcus Torres. We identified the scope (external network, internal network, web applications, Active Directory), constraints (no testing of medical devices during patient care hours), and logistics (VPN access, test credentials, daily status updates). We delivered a Statement of Work and Rules of Engagement for legal review. Both documents were signed three days before testing began.
Intelligence Gathering (Day 1-2): Following PTES Level 2 intelligence gathering, we conducted passive reconnaissance (OSINT on medsecure.example.com, DNS enumeration, certificate transparency analysis) and active reconnaissance (Nmap scanning of the external IP range, web application fingerprinting). We identified the technology stack: Node.js/Express on the Patient Portal, Apache on the legacy Provider Dashboard, Windows Server 2019 domain controllers, Ubuntu 22.04 web servers.
Threat Modeling (Day 2): Based on reconnaissance results, we developed a threat model identifying three primary attack paths: 1. External web application exploitation leading to database access 2. VPN/remote access compromise leading to internal network access 3. Internal network exploitation leading to Active Directory compromise
We prioritized these based on likelihood and impact, focusing first on the web application path (highest likelihood for an external attacker).
Vulnerability Analysis (Day 3-4): We ran automated vulnerability scans (Nessus for network, Burp Suite for web applications) and conducted manual testing against OWASP Testing Guide categories. We identified the SQL injection in the Patient Portal, default credentials on the admin interface, and several missing patches on internal systems.
Exploitation (Day 5-7): Following the prioritized attack paths, we: - Exploited the SQL injection to access the patient database (F-001) - Used default credentials to access the admin interface (F-002) - Conducted Kerberoasting against Active Directory service accounts (F-003) - Exploited missing patches for privilege escalation (F-004) - Tested network segmentation and discovered the medical-to-payment VLAN gap (F-005)
We immediately notified the client about the SQL injection (critical finding notification) and documented all findings with screenshots and command output.
Post-Exploitation (Day 8): We demonstrated the complete attack chain from internet to payment VLAN, documented the business impact, cleaned up our testing artifacts, and verified that all tools and payloads were removed from client systems.
Reporting (Day 9-10): We compiled findings into the report following the template from Chapter 39, conducted self-review and peer review, and delivered the encrypted report to Dr. Chen and Marcus Torres. We scheduled a debrief presentation for the following week.
This structured approach ensured that every in-scope target was tested, every finding was properly documented, and the client received a comprehensive, defensible assessment that met PCI DSS, HIPAA, and professional practice standards.
Blue Team Perspective: Understanding penetration testing methodology isn't just for pentesters. If you're a defender responsible for commissioning penetration tests, this chapter helps you evaluate whether your testing provider is using a rigorous approach. Ask them about their methodology, their quality assurance process, and their relevant accreditations. A provider who can't articulate these clearly may not deliver the thoroughness your organization needs.
Try It in Your Lab: Even for your home lab exercises, practice following a structured methodology. Create a rules of engagement document for testing your own Metasploitable VM. Set up an engagement directory structure. Document your findings as you go. The habits you build in your lab will carry directly into professional practice.