> "The deliverable is the report. Everything else you did --- the scanning, the exploitation, the lateral movement --- was just research for the report." --- Unknown CREST assessor
Learning Objectives
- Structure a professional penetration testing report
- Write effective executive summaries and technical findings
- Apply consistent vulnerability descriptions and risk ratings
- Document evidence with professional screenshots and proof-of-concept detail
- Craft actionable remediation recommendations
- Implement report review and quality assurance processes
Chapter 39: Writing Effective Pentest Reports
You have just finished two weeks of testing against MedSecure Health Systems. You compromised the patient portal through a SQL injection vulnerability, escalated privileges on three Linux servers, Kerberoasted two Active Directory service accounts, and pivoted through the corporate network to reach the payment processing VLAN. You have screenshots, command output, and notes scattered across your testing notebook. Now comes the hardest part of the engagement: turning all of that into a document that will actually change how MedSecure operates.
This chapter is about the craft of report writing. A penetration test without a good report is like a doctor performing surgery without telling the patient what they found. The report is the artifact that justifies the engagement, communicates risk to decision-makers, guides remediation efforts, and serves as evidence for compliance auditors. It is, without exaggeration, the most important deliverable of any engagement.
We will dissect report structure, learn to write for radically different audiences (the CISO who has five minutes and the sysadmin who needs exact commands), build a vocabulary for describing risk, and develop the evidence documentation habits that separate amateur reports from professional deliverables.
39.1 Report Structure and Components
A professional penetration testing report follows a well-defined structure. While individual firms customize their templates, the core components are consistent across the industry.
39.1.1 The Standard Report Template
A complete penetration testing report contains the following sections:
1. Cover Page
   - Report title (e.g., "External and Internal Penetration Test Report")
   - Client name and engagement identifier
   - Testing dates
   - Report version and date
   - Classification (Confidential, Client Confidential, etc.)
   - Testing firm name and logo

2. Document Control
   - Version history (draft, review, final)
   - Distribution list (who receives this report)
   - Confidentiality notice
   - Document handling instructions

3. Table of Contents
   - Auto-generated from headings
   - Include page numbers

4. Executive Summary (1-2 pages)
   - High-level overview for non-technical leadership
   - Overall risk assessment
   - Key findings summarized in business terms
   - Strategic recommendations
   - Comparison to previous assessment (if applicable)

5. Scope and Methodology
   - What was tested (IP ranges, applications, domains)
   - What was not tested (explicit exclusions)
   - Testing approach (black/gray/white box)
   - Methodology followed (PTES, OWASP, etc.)
   - Testing dates and duration
   - Tools used (high-level)
   - Limitations and caveats

6. Findings Summary
   - Risk rating summary table (Critical/High/Medium/Low/Informational count)
   - Findings organized by severity
   - Visual summary (charts, heat maps)

7. Detailed Technical Findings
   - Individual finding write-ups (the bulk of the report)
   - Each finding follows a consistent template (covered in Section 39.3)

8. Remediation Roadmap
   - Prioritized remediation plan
   - Quick wins vs. long-term improvements
   - Dependencies between remediation items

9. Appendices
   - Detailed tool output (Nmap scans, vulnerability scanner results)
   - Full evidence documentation
   - Methodology details
   - Glossary of terms
   - Severity rating definitions
39.1.2 The MedSecure Report: A Complete Example
Let us build the MedSecure penetration testing report as our running example throughout this chapter. Here is the engagement summary:
Engagement: External and Internal Penetration Test
Client: MedSecure Health Systems
Duration: 10 business days (two calendar weeks)
Scope: External network (203.0.113.0/24), internal network (10.10.10.0/24, 10.10.20.0/24), web applications (Patient Portal, Provider Dashboard), Active Directory (medsecure.local)
Approach: Gray box (network ranges and test credentials provided)
Key Findings Discovered:
| ID | Finding | Severity | CVSS |
|---|---|---|---|
| F-001 | SQL Injection in Patient Portal Search | Critical | 9.8 |
| F-002 | Default Credentials on Admin Interface | Critical | 9.1 |
| F-003 | Kerberoastable Service Accounts with Weak Passwords | High | 8.1 |
| F-004 | Missing Patches on Linux Web Servers (CVE-2024-XXXX) | High | 7.8 |
| F-005 | Inadequate Network Segmentation (Medical-to-Payment) | High | 7.5 |
| F-006 | Session Tokens Not Invalidated on Logout | Medium | 6.5 |
| F-007 | Verbose Error Messages Expose Stack Traces | Medium | 5.3 |
| F-008 | TLS 1.0 Enabled on External Web Server | Medium | 5.0 |
| F-009 | Missing HTTP Security Headers | Low | 3.7 |
| F-010 | Internal DNS Zone Transfer Permitted | Low | 3.5 |
We will use these findings throughout the chapter to demonstrate how to write each section of the report.
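Because these findings will feed the summary table, charts, and machine-readable exports later in the chapter, it pays to capture them as structured data from day one. Here is a minimal Python sketch (the field names are our own convention, not a standard) that tallies a findings list into the severity counts used in the Findings Summary:

```python
from collections import Counter

# Illustrative subset of the MedSecure findings table, kept as structured data.
findings = [
    {"id": "F-001", "title": "SQL Injection in Patient Portal Search", "severity": "Critical", "cvss": 9.8},
    {"id": "F-002", "title": "Default Credentials on Admin Interface", "severity": "Critical", "cvss": 9.1},
    {"id": "F-003", "title": "Kerberoastable Service Accounts with Weak Passwords", "severity": "High", "cvss": 8.1},
    {"id": "F-006", "title": "Session Tokens Not Invalidated on Logout", "severity": "Medium", "cvss": 6.5},
    {"id": "F-009", "title": "Missing HTTP Security Headers", "severity": "Low", "cvss": 3.7},
]

# Tally by severity for the summary table, preserving the conventional order.
counts = Counter(f["severity"] for f in findings)
summary = {sev: counts.get(sev, 0) for sev in ("Critical", "High", "Medium", "Low", "Informational")}
print(summary)
```

The same list can later drive severity charts and CSV/JSON deliverables without retyping anything.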
39.1.3 Tailoring Reports for Compliance
When the engagement is driven by a compliance requirement, the report must be structured to serve as compliance evidence. Different frameworks have different expectations:
PCI DSS Reports: QSAs reviewing penetration test reports for PCI DSS compliance expect:
- Explicit statement that the testing covered the entire CDE
- Segmentation testing results with specific evidence
- Application-layer testing mapped to OWASP Top 10
- Internal and external testing results clearly distinguished
- Methodology based on a recognized standard (PTES, NIST 800-115)
- Remediation status for all findings
Consider including a PCI DSS compliance mapping section that lists each relevant PCI requirement and how the testing addressed it:
| PCI DSS Requirement | Testing Activity | Result |
|---|---|---|
| 11.4.1 Methodology | PTES-based methodology | Documented in Section 2 |
| 11.4.2 Internal testing | Internal network + AD assessment | Findings F-003 through F-010 |
| 11.4.3 External testing | External network + web app assessment | Findings F-001, F-002, F-008, F-009 |
| 11.4.5 Segmentation | Tested from all segments to CDE | Finding F-005 (failure) |
| 11.4.2/11.4.3 Application layer | OWASP Top 10 coverage | Findings F-001, F-006, F-007 |
HIPAA Reports: For healthcare organizations, map findings to HIPAA Security Rule safeguards:
- Administrative Safeguards (164.308): Risk analysis support, security management
- Physical Safeguards (164.310): Facility access, device controls
- Technical Safeguards (164.312): Access controls, audit controls, integrity, encryption

SOC 2 Reports: SOC 2 auditors want to see evidence that penetration testing supports the trust service criteria:
- CC6.1 (Access Control): Did testing evaluate access control effectiveness?
- CC6.6 (System Boundary Protection): Did testing validate network segmentation?
- CC7.1 (Vulnerability Management): Were vulnerabilities identified and tracked?
- CC7.2 (Incident Response): Did testing evaluate detection capabilities?

ISO 27001 Reports: For ISO-certified organizations, reference the relevant Annex A controls:
- A.8.8: Technical vulnerability management
- A.5.35: Independent review of information security
- A.8.9: Configuration management verification
39.1.4 Report Length and Format
Length Guidelines:
- Executive Summary: 1-2 pages
- Scope and Methodology: 2-3 pages
- Findings Summary: 1-2 pages
- Technical Findings: 2-4 pages per finding (so a 10-finding report = 20-40 pages)
- Appendices: As needed (can be extensive)
- Total: 40-80 pages for a typical engagement

Format:
- PDF is the standard delivery format (prevents accidental editing)
- Include clickable hyperlinks in the table of contents and cross-references
- Use consistent formatting: fonts, heading levels, code blocks, screenshot borders
- Many firms also provide a machine-readable format (CSV, JSON) for integration with vulnerability management tools
Professional Tip: Some clients request reports in specific formats --- Microsoft Word for editing, PowerPoint for board presentations, or CSV for import into their GRC platform. Discuss deliverable format during the scoping call. Having a report generation tool (like the one we build in this chapter's code exercises) saves enormous time when multiple formats are needed.
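As a sketch of what such a tool does at its core, the following Python snippet (the findings list and field names are illustrative) emits the same data as both JSON and CSV from a single source of truth:

```python
import csv
import io
import json

# Hypothetical findings data -- in practice, loaded from your engagement tracker.
findings = [
    {"id": "F-001", "title": "SQL Injection in Patient Portal Search", "severity": "Critical", "cvss": 9.8},
    {"id": "F-008", "title": "TLS 1.0 Enabled on External Web Server", "severity": "Medium", "cvss": 5.0},
]

# JSON export for vulnerability management tools.
json_output = json.dumps(findings, indent=2)

# CSV export for GRC platform import (StringIO used here so the sketch
# produces strings; a real tool would write files).
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["id", "title", "severity", "cvss"])
writer.writeheader()
writer.writerows(findings)
csv_output = buffer.getvalue()
```

Generating every format from one data structure means a late-breaking severity change only has to be made once.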
39.2 Writing for Technical and Executive Audiences
The greatest challenge in report writing is addressing two fundamentally different audiences in a single document: executives who make budget decisions and technical staff who implement fixes. These audiences have different knowledge, different concerns, and different time constraints.
39.2.1 The Executive Summary
The executive summary is the most important section of the report because it is often the only section that senior leadership reads. It must communicate the overall security posture, the most significant risks, and the strategic implications --- all in one to two pages, without technical jargon.
What Executives Want to Know:
1. Are we at risk? (Overall assessment)
2. What are the biggest risks? (Top findings in business terms)
3. How do we compare? (To industry, to our last assessment)
4. What should we do? (Strategic recommendations)
5. How much will it cost? (At least directionally)
MedSecure Executive Summary Example:
Executive Summary
MedSecure Health Systems engaged [Security Firm] to conduct a comprehensive penetration test of its external and internal network infrastructure, web applications, and Active Directory environment from [date] to [date]. The objective was to identify security vulnerabilities that could be exploited by malicious actors and to provide recommendations for remediation.
Overall Risk Assessment: HIGH
The assessment identified 10 vulnerabilities: 2 Critical, 3 High, 3 Medium, and 2 Low. The most significant finding was a SQL injection vulnerability in the Patient Portal that would allow an unauthenticated attacker to access the entire patient database, including protected health information (PHI). This finding alone represents a significant HIPAA compliance risk and potential liability exposure.
Most critically, the testing team was able to chain multiple vulnerabilities to simulate a complete breach scenario: starting from the public internet, we gained access to the Patient Portal database, escalated privileges within the internal network, and ultimately reached the payment processing environment. This attack path demonstrates that a determined attacker could compromise both patient health records and payment card data.
Key Risk Areas:
1. Patient Data Exposure: The SQL injection vulnerability in the Patient Portal could be exploited by an unsophisticated attacker using freely available tools to access patient records. Immediate remediation is required.
2. Inadequate Network Segmentation: The boundary between the medical device network and the payment processing network is insufficiently enforced, expanding PCI DSS scope and creating unnecessary risk.
3. Credential Management: Multiple systems were accessible using default or weak credentials, and Active Directory service accounts used passwords that could be cracked in under two hours.
Strategic Recommendations:
1. Immediate (0-30 days): Remediate the two Critical findings (SQL injection and default credentials) and verify the fixes.
2. Short-term (30-90 days): Address all High findings, implement network segmentation improvements, and deploy a patch management program.
3. Medium-term (90-180 days): Address Medium and Low findings, implement a vulnerability management program, and conduct follow-up testing.
Compared to industry benchmarks for healthcare organizations of similar size, MedSecure's security posture is below average. The combination of critical web application vulnerabilities and network segmentation failures creates significant regulatory and reputational risk. However, the issues identified are well-understood and remediable with focused effort.
Executive Summary Best Practices:
- Lead with the overall risk assessment --- do not bury the lead
- Use business language, not technical jargon ("patient database" not "PostgreSQL instance")
- Quantify risk where possible ("this vulnerability could expose 50,000 patient records")
- Connect findings to business impact (HIPAA fines, reputational damage, operational disruption)
- Provide clear, prioritized recommendations with timelines
- Include a comparison point (previous assessment, industry benchmark)
- Keep it to two pages maximum --- if you cannot summarize the risk in two pages, you do not understand it well enough
39.2.2 Writing for Technical Audiences
The technical sections of the report serve a different purpose: they give the system administrators, developers, and security engineers the information they need to understand and fix each issue.
Technical readers need:
- Exact details: Which system, which parameter, which endpoint
- Reproducible steps: Enough detail that they can verify the issue themselves
- Evidence: Screenshots, command output, request/response pairs
- Root cause: Why the vulnerability exists, not just that it exists
- Specific remediation: Exact configuration changes, code fixes, or patches to apply
The balance between technical depth and readability is crucial. Dumping raw tool output is not useful. Curating and annotating tool output is.
39.2.3 The Board Presentation
In addition to the written report, many engagements require a board-level presentation. This is a fundamentally different deliverable from the report:
Board Presentation Structure (5-10 slides):
Slide 1: Engagement Overview
- What we did, when, and why
- One-sentence summary of scope
- Testing approach in non-technical terms

Slide 2: Overall Risk Assessment
- Single, clear risk rating with visual (traffic light, gauge, or similar)
- One-sentence summary of what the rating means for the business
- Comparison to previous assessment or industry benchmark

Slides 3-4: Top Risks
- Three to five highest-impact findings
- Each described in business terms: what data is at risk, what could happen, estimated financial impact
- No technical details --- save those for the report

Slide 5: Risk Trend
- If this is a recurring engagement, show trend over time
- Chart showing findings by severity across assessments
- Are things getting better, staying the same, or getting worse?

Slide 6: Remediation Roadmap
- Prioritized actions with timelines and estimated costs
- Categorized into immediate, short-term, and medium-term
- Assigned to teams or functions

Slide 7: Investment Recommendation
- What budget is needed for remediation?
- What is the cost of inaction? (regulatory fines, breach costs, operational disruption)
- Frame as risk reduction per dollar invested

Delivery Tips:
- Keep to 15-20 minutes maximum (boards have limited attention spans)
- Anticipate questions: "How do we compare to our peers?" "Could this really happen to us?" "What would it cost if we were breached?"
- Have a brief demo ready if requested (but do not lead with it --- board members are not impressed by terminal windows)
- Bring the tester who did the work to answer technical questions from the CISO or CTO
39.2.4 Bridging the Gap
Some report consumers fall between executive and technical --- security managers, risk officers, compliance analysts. For these readers, include:
- A findings summary table that maps each finding to business risk
- Risk ratings with clear definitions (see Section 39.3)
- A remediation roadmap that shows dependencies and priorities
- Trend analysis if this is a recurring engagement
39.2.5 Tone and Voice
Do:
- Be professional and objective
- State facts and evidence
- Be specific about what you found and what it means
- Acknowledge uncertainty when appropriate ("the tester was unable to determine whether...")
- Use active voice ("The tester identified..." not "It was identified...")

Don't:
- Be condescending ("The client failed to implement basic security...")
- Be alarmist ("The network is completely compromised and should be shut down immediately")
- Use unnecessary jargon ("We pwned the box and got DA")
- Be vague ("Several vulnerabilities were found")
- Editorialize ("It is shocking that this vulnerability existed")
Professional Tip: Before writing, imagine your reader. The CISO will read the executive summary aloud to the board. The sysadmin will read the finding detail and follow it step by step to reproduce the issue. The compliance officer will map each finding to a regulatory requirement. The legal team will assess liability. Your report must serve all of them.
39.3 Vulnerability Descriptions and Risk Ratings
Each finding in your report needs a clear description and a defensible risk rating. This section covers how to write finding descriptions that are both technically accurate and accessible, and how to assign risk ratings that reflect actual business impact.
39.3.1 The Finding Template
Every finding should follow a consistent template. Consistency helps readers know where to find information and makes the report easier to scan.
Standard Finding Template:
FINDING ID: F-XXX
TITLE: [Descriptive title]
SEVERITY: [Critical / High / Medium / Low / Informational]
CVSS SCORE: [X.X] (CVSS:3.1/AV:X/AC:X/PR:X/UI:X/S:X/C:X/I:X/A:X)
AFFECTED SYSTEM(S): [IP, hostname, URL]
STATUS: [Open / Remediated / Accepted Risk]
DESCRIPTION:
[Clear explanation of what the vulnerability is]
BUSINESS IMPACT:
[What this means for the organization in business terms]
TECHNICAL DETAIL:
[Detailed technical explanation with evidence]
STEPS TO REPRODUCE:
1. [Step one]
2. [Step two]
...
EVIDENCE:
[Screenshots, request/response pairs, command output]
REMEDIATION:
[Specific, actionable fix instructions]
REFERENCES:
[CVE numbers, OWASP references, vendor advisories]
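To keep every write-up in this exact order, some teams generate findings from structured records rather than hand-editing each section. A rough Python sketch of that approach (the class and field names here are our own, not a standard):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One report finding, rendered in the standard template order."""
    finding_id: str
    title: str
    severity: str
    cvss_score: float
    cvss_vector: str
    affected_systems: list
    status: str = "Open"
    description: str = ""
    business_impact: str = ""
    remediation: str = ""

    def render(self) -> str:
        # Emitting sections in a fixed order guarantees template consistency.
        return "\n".join([
            f"FINDING ID: {self.finding_id}",
            f"TITLE: {self.title}",
            f"SEVERITY: {self.severity}",
            f"CVSS SCORE: {self.cvss_score} ({self.cvss_vector})",
            f"AFFECTED SYSTEM(S): {', '.join(self.affected_systems)}",
            f"STATUS: {self.status}",
            "",
            "DESCRIPTION:",
            self.description,
            "",
            "BUSINESS IMPACT:",
            self.business_impact,
            "",
            "REMEDIATION:",
            self.remediation,
        ])

f001 = Finding(
    finding_id="F-001",
    title="SQL Injection in Patient Portal Search Functionality",
    severity="Critical",
    cvss_score=9.8,
    cvss_vector="CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
    affected_systems=["portal.medsecure.example.com"],
)
rendered = f001.render()
```

Because every finding is rendered by the same function, a reviewer always knows exactly where each piece of information will appear.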
39.3.2 Writing the Finding: MedSecure F-001 Example
Let us write the complete finding for the SQL injection vulnerability discovered in the MedSecure Patient Portal:
FINDING ID: F-001
TITLE: SQL Injection in Patient Portal Search Functionality
SEVERITY: Critical
CVSS SCORE: 9.8 (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H)
AFFECTED SYSTEM: portal.medsecure.example.com --- Patient Portal Search Function
STATUS: Open
Description:
The Patient Portal's appointment search functionality is vulnerable to SQL injection. User-supplied input in the search_date parameter is incorporated directly into a SQL query without parameterization or input validation. An unauthenticated attacker can exploit this vulnerability to extract the entire contents of the backend database, including patient personally identifiable information (PII) and protected health information (PHI).
Business Impact:
This vulnerability allows an unauthenticated external attacker to access the patient database containing approximately 50,000 patient records, including names, dates of birth, Social Security numbers, medical diagnoses, and treatment histories. Exploitation of this vulnerability could result in:
- HIPAA breach notification requirements for all affected patients
- Potential HIPAA penalties (up to $1.5 million per violation category per year)
- Reputational damage and loss of patient trust
- Potential class-action litigation from affected patients
- Regulatory investigation by the HHS Office for Civil Rights
Technical Detail:
The search functionality at https://portal.medsecure.example.com/appointments/search accepts a search_date parameter via POST request. This parameter is concatenated directly into a SQL query without parameterization. Testing confirmed the backend database is PostgreSQL 14.2.
The following payload demonstrates boolean-based blind SQL injection:
POST /appointments/search HTTP/1.1
Host: portal.medsecure.example.com
Content-Type: application/x-www-form-urlencoded
Cookie: session=<valid_session>
search_date=2026-01-15' AND 1=1--&patient_id=12345
This returns a normal response (appointments found). Changing the condition to AND 1=2-- returns no results, confirming the injection point.
Using time-based extraction, the tester confirmed the ability to extract data:
search_date=2026-01-15' AND (SELECT CASE WHEN (SELECT
substring(current_user,1,1))='p' THEN pg_sleep(5) ELSE
pg_sleep(0) END)--
The 5-second delay confirmed the database user begins with 'p' (subsequently identified as portal_app).
Using automated extraction via sqlmap, the tester confirmed access to the following databases and tables:
available databases [3]:
[*] information_schema
[*] medsecure_portal
[*] postgres
Database: medsecure_portal
[8 tables]
+-------------------+
| appointments |
| audit_log |
| diagnoses |
| insurance_claims |
| medications |
| patient_records |
| providers |
| users |
+-------------------+
A sample extraction from the patient_records table (limited to 3 records with data redacted) confirmed access to PHI:
+----+-----------+------------+-------------+
| id | full_name | dob | ssn_hash |
+----+-----------+------------+-------------+
| 1 | [REDACT] | [REDACTED] | [REDACTED] |
| 2 | [REDACT] | [REDACTED] | [REDACTED] |
| 3 | [REDACT] | [REDACTED] | [REDACTED] |
+----+-----------+------------+-------------+
Note: The tester extracted only three records to confirm the vulnerability and immediately notified the client per the critical finding notification procedure in the Rules of Engagement. No patient data was retained by the testing team.
Steps to Reproduce:
1. Navigate to https://portal.medsecure.example.com/login
2. Log in with any valid patient account
3. Navigate to Appointments > Search
4. Intercept the search request with Burp Suite
5. Modify the search_date parameter to: 2026-01-15' AND 1=1--
6. Observe that results are returned normally
7. Change the payload to: 2026-01-15' AND 1=2--
8. Observe that no results are returned, confirming SQL injection
Evidence:
[Screenshot: Burp Suite showing the injected request and response]
[Screenshot: sqlmap output showing database enumeration]
[Screenshot: Redacted sample data extraction confirming PHI access]
Remediation:
- Immediate: Implement parameterized queries (prepared statements) for all database interactions in the appointment search functionality. In the Node.js/PostgreSQL stack, replace string concatenation with parameterized queries:
```javascript
// VULNERABLE (current)
const query = `SELECT * FROM appointments
  WHERE date = '${req.body.search_date}'
  AND patient_id = ${req.body.patient_id}`;

// FIXED (parameterized)
const query = {
  text: 'SELECT * FROM appointments WHERE date = $1 AND patient_id = $2',
  values: [req.body.search_date, req.body.patient_id]
};
```
- Short-term: Conduct a code review of all database interactions in the Patient Portal to identify and remediate additional SQL injection vulnerabilities.
- Medium-term: Implement a Web Application Firewall (WAF) as a defense-in-depth measure. Deploy database activity monitoring to detect SQL injection attempts.
- Long-term: Implement a secure development lifecycle (SDLC) with security code review and DAST/SAST testing as part of the CI/CD pipeline.
References:
- OWASP: SQL Injection (https://owasp.org/www-community/attacks/SQL_Injection)
- CWE-89: Improper Neutralization of Special Elements used in an SQL Command
- OWASP Testing Guide: Testing for SQL Injection (OTG-INPVAL-005)
39.3.3 Risk Rating Systems
Risk ratings translate technical vulnerability severity into business-relevant language. Several systems are in common use:
CVSS (Common Vulnerability Scoring System): CVSS 3.1 (and the newer CVSS 4.0) provides a standardized numerical score from 0.0 to 10.0:
| CVSS Score | Severity |
|---|---|
| 0.0 | None |
| 0.1 - 3.9 | Low |
| 4.0 - 6.9 | Medium |
| 7.0 - 8.9 | High |
| 9.0 - 10.0 | Critical |
CVSS is excellent for scoring known CVEs but has limitations for business logic vulnerabilities and configuration issues. Always include the full CVSS vector string so readers can verify your scoring.
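The score-to-band mapping in the table above is simple enough to encode directly. A small Python helper keeps severity labels consistent across the whole report:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS 3.1 base score to its qualitative severity band."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Applied to the MedSecure findings, cvss_severity(9.8) yields "Critical" for F-001 and cvss_severity(5.0) yields "Medium" for F-008, matching the findings table.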
DREAD: Microsoft's DREAD model rates vulnerabilities on five factors:
- Damage potential (0-10)
- Reproducibility (0-10)
- Exploitability (0-10)
- Affected users (0-10)
- Discoverability (0-10)
The average of these five scores gives an overall rating. DREAD is simpler than CVSS but more subjective.
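The calculation is a straight average of the five factors; in Python:

```python
def dread_score(damage: int, reproducibility: int, exploitability: int,
                affected_users: int, discoverability: int) -> float:
    """Average the five DREAD factors (each 0-10) into an overall rating."""
    factors = [damage, reproducibility, exploitability, affected_users, discoverability]
    if any(not 0 <= f <= 10 for f in factors):
        raise ValueError("each DREAD factor must be between 0 and 10")
    return sum(factors) / len(factors)
```

For example, a finding scored 9/8/8/10/7 averages to 8.4. Note that the individual factor values are a judgment call by the assessor, which is exactly where DREAD's subjectivity creeps in.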
Custom Risk Matrices: Many firms use a risk matrix that considers both technical severity and business impact:
| | Low Business Impact | Medium Business Impact | High Business Impact |
|---|---|---|---|
| High Technical Severity | Medium Risk | High Risk | Critical Risk |
| Medium Technical Severity | Low Risk | Medium Risk | High Risk |
| Low Technical Severity | Informational | Low Risk | Medium Risk |
This approach is useful because it accounts for the client's specific context. A SQL injection in a marketing website is less critical than the same vulnerability in a patient portal.
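Such a matrix can be encoded as a simple lookup so every finding gets a repeatable, defensible rating. A Python sketch mirroring the matrix above (labels shortened to drop the word "Risk"):

```python
# Overall risk indexed as RISK_MATRIX[technical_severity][business_impact].
RISK_MATRIX = {
    "High":   {"Low": "Medium",        "Medium": "High",   "High": "Critical"},
    "Medium": {"Low": "Low",           "Medium": "Medium", "High": "High"},
    "Low":    {"Low": "Informational", "Medium": "Low",    "High": "Medium"},
}

def overall_risk(technical_severity: str, business_impact: str) -> str:
    """Look up the combined rating for one finding."""
    return RISK_MATRIX[technical_severity][business_impact]
```

The SQL injection example works out as expected: high technical severity against a patient portal (high business impact) rates Critical, while the same flaw in a marketing site (low business impact) rates only Medium.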
39.3.4 CVSS 4.0: What Changes
CVSS version 4.0, published by FIRST in 2023, introduces several changes relevant to penetration testing reports:
New Metric Groups: CVSS 4.0 introduces four metric groups (compared to three in 3.1):
- Base Metrics: Similar to CVSS 3.1 but with refined definitions
- Threat Metrics: Replaces the Temporal group with more practical threat assessment (Exploit Maturity)
- Environmental Metrics: Similar to 3.1 but with expanded options
- Supplemental Metrics: New group for additional context (Automatable, Recovery, Value Density, Vulnerability Response Effort, Provider Urgency)

Key Changes for Report Writers:
- A new Attack Requirements (AT) base metric captures deployment conditions outside the attacker's control, making the old Attack Complexity assessment more granular
- The naming convention changes: "CVSS-B" for base score, "CVSS-BT" for base + threat, "CVSS-BE" for base + environmental, "CVSS-BTE" for all groups
- The scoring formula has been updated, meaning some vulnerabilities will score differently
- Supplemental metrics provide additional context without affecting the numerical score
Adoption Considerations: As of 2026, many organizations still use CVSS 3.1 because their vulnerability management tools, GRC platforms, and compliance frameworks reference CVSS 3.1. When writing reports, check with the client which version they prefer. If in doubt, provide both: the CVSS 3.1 score for compatibility and CVSS 4.0 for additional context.
39.3.5 Writing Finding Chains
Some of the most impactful findings in a penetration test are not individual vulnerabilities but chains of vulnerabilities that, when combined, create a more severe attack path than any single finding suggests.
The MedSecure Attack Chain: In the MedSecure engagement, we demonstrated the following chain:
- SQL injection in Patient Portal (F-001, CVSS 9.8) provides database access
- Database credentials reuse (not a standalone finding) enables OS-level access
- Missing patches on web server (F-004, CVSS 7.8) enables privilege escalation
- Inadequate network segmentation (F-005, CVSS 7.5) allows lateral movement to payment VLAN
Individually, these findings range from CVSS 7.5 to 9.8. But the chain is more significant than any single finding because it demonstrates a complete attack path from unauthenticated internet access to the payment processing environment --- a scenario that has regulatory implications for both HIPAA and PCI DSS.
Documenting Chains: When documenting attack chains, create a separate section or finding that:
- Describes the complete attack path from initial access to objective
- References each individual finding by ID (F-001 + F-004 + F-005)
- Explains how the findings combine to create elevated risk
- Provides a realistic assessment of how long this attack would take a motivated attacker
- Includes a visual diagram showing the attack path through the network
Chain Severity: The severity of an attack chain should reflect the overall impact, not the average of individual findings. A chain of three Medium findings that results in complete database compromise should be rated as High or Critical, even though no individual finding exceeds Medium severity.
39.3.6 Contextualizing Risk
Raw CVSS scores do not tell the whole story. A CVSS 9.8 SQL injection in a test environment with no real data is less urgent than a CVSS 7.5 segmentation failure that exposes the payment network. Always contextualize risk ratings:
- Consider the data classification of affected systems
- Consider the regulatory implications (HIPAA, PCI DSS, GDPR)
- Consider the attack path: is this vulnerability reachable from the internet or only from the internal network?
- Consider exploit availability: is there a public exploit? A Metasploit module?
- Consider the client's ability to detect exploitation: would they know if this were exploited?
ShopStack Example: During the ShopStack assessment, we identified an IDOR (Insecure Direct Object Reference) vulnerability in the order API that allowed any authenticated user to view other users' order details, including shipping addresses and partial payment information. The CVSS base score was 6.5 (Medium), but given that ShopStack has 200,000 active customers and the vulnerability could be automated to dump all order data, we rated the business impact as High, elevating the overall risk to High.
39.4 Evidence Documentation and Screenshots
Evidence is the foundation of credibility. Without proper evidence, your findings are assertions. With it, they are proven vulnerabilities. This section covers how to document evidence professionally.
39.4.1 Types of Evidence
Screenshots: The most common form of evidence. Effective screenshots should:
- Be clear and readable (no tiny fonts or blurry captures)
- Be annotated with red boxes or arrows highlighting the relevant area
- Include context (URL bar, timestamp, tool name)
- Be cropped to show relevant content without excess whitespace
- Be numbered and referenced in the finding text
Request/Response Pairs: For web application findings, include the full HTTP request and response (or relevant excerpts). Use Burp Suite's "Copy as curl command" feature for reproducibility:
POST /api/v1/orders/12345 HTTP/1.1
Host: api.shopstack.example.com
Authorization: Bearer eyJ...user_token...
Content-Type: application/json
{"order_id": "99999"}
HTTP/1.1 200 OK
Content-Type: application/json
{
"order_id": "99999",
"customer_name": "Jane Doe",
"shipping_address": "123 Main St, Anytown, US 12345",
"items": [...],
"payment_last_four": "4242"
}
Command Output: For infrastructure findings, include relevant command output. Clean it up for readability --- remove irrelevant lines, highlight important parts, but never alter the actual output:
$ crackmapexec smb 10.10.10.0/24 -u '' -p '' --shares
SMB 10.10.10.50 445 FILESRV01 [*] Windows Server 2019 Build 17763
SMB 10.10.10.50 445 FILESRV01 [+] medsecure.local\: (Guest)
SMB 10.10.10.50 445 FILESRV01 [+] Enumerated shares
SMB 10.10.10.50 445 FILESRV01 Share Permissions Remark
SMB 10.10.10.50 445 FILESRV01 ----- ----------- ------
SMB 10.10.10.50 445 FILESRV01 HR$ READ Human Resources
SMB 10.10.10.50 445 FILESRV01 Finance READ,WRITE Finance Department
Log Entries: When relevant, include log entries that corroborate your findings or demonstrate detection (or lack thereof).
39.4.2 Screenshot Best Practices
Resolution and Clarity:
- Capture at a resolution that remains readable when printed or viewed at 100%
- Use a consistent screenshot tool (Flameshot on Linux, Greenshot on Windows)
- If capturing terminal output, increase the font size before capturing

Annotation:
- Use red rectangles to highlight the vulnerability or key evidence
- Use arrows to draw attention to specific fields
- Add numbered callouts that match your written description
- Use a consistent annotation color scheme (red for vulnerability, green for successful exploitation, blue for reference)

Redaction:
- Redact real IP addresses, hostnames, and domain names in client reports
- Redact actual data (patient records, credit card numbers, passwords)
- Use consistent redaction (black bars, [REDACTED] text) --- be thorough
- Never redact the vulnerability itself; redact only sensitive data
Organization:
- Name screenshots with a consistent scheme: F001-01-injection-point.png, F001-02-data-extraction.png
- Store screenshots in the evidence directory alongside the finding documentation
- Reference every screenshot in the finding text: "As shown in Figure F001-1..."
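The mechanical part of this discipline can be automated before QA. The sketch below cross-checks finding text against an evidence directory listing; the helper name and the F001-NN-description.png naming scheme are illustrative assumptions taken from the examples above, not a standard:

```python
import re

# Hypothetical helper: the F001-NN-description.png naming convention is an
# assumption based on this chapter's examples.
def check_evidence(finding_text, evidence_files):
    """Return (missing, unreferenced): screenshots cited in the text but
    absent from the evidence list, and files present but never cited."""
    referenced = set(re.findall(r"F\d{3}-\d{2}-[\w-]+\.png", finding_text))
    files = set(evidence_files)
    return sorted(referenced - files), sorted(files - referenced)

text = "As shown in F001-01-injection-point.png and F001-02-data-extraction.png"
files = ["F001-01-injection-point.png", "F001-03-shell.png"]
print(check_evidence(text, files))
```

Running a check like this during self-review catches the two most common evidence errors: a figure referenced in prose that was never added, and an orphaned screenshot nobody wired into the narrative.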
39.4.3 Request/Response Documentation Best Practices
For web application findings, request/response documentation is the gold standard of evidence. Here are best practices for capturing and presenting this data:
Capturing in Burp Suite: - Use the "Copy as curl command" feature for easy reproduction - Save the full request and response (not just the highlighted portions) - Annotate requests with comments explaining each parameter's purpose - Use the Repeater tab to create clean, minimal reproduction requests (removing irrelevant headers and cookies)
Formatting for Reports: Present request/response pairs in a clean, readable format:
# Request (annotated)
POST /api/v1/orders HTTP/1.1
Host: api.shopstack.example.com
Authorization: Bearer eyJ... # Regular user token (user_id: 12345)
Content-Type: application/json

{
"order_id": "99999" # <-- This order belongs to user_id: 67890
}

# Response (demonstrates unauthorized access)
HTTP/1.1 200 OK
Content-Type: application/json

{
"order_id": "99999",
"customer_name": "[REDACTED]",
"shipping_address": "[REDACTED]",
"items": [{"product": "Widget A", "price": 29.99}],
"payment_last_four": "4242" # <-- Sensitive data exposed
}
Key Principles:
- Highlight the vulnerability with inline comments (using # <-- or similar)
- Show both the malicious request and the unauthorized response
- Redact sensitive data in the response (real names, addresses, full card numbers)
- Include enough context that the reader understands the attack flow
- For comparison, consider showing a legitimate request alongside the malicious one
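Redaction of evidence excerpts can also be semi-automated before they land in the report. The minimal sketch below uses two illustrative regex patterns, one for card numbers (keeping the last four, PCI-style) and one for email addresses; treat the patterns as assumptions to be tuned to the data types actually present in your engagement:

```python
import re

# Sketch of an automated redaction pass. The patterns are illustrative
# assumptions; extend them for SSNs, patient IDs, etc. as needed.
def redact(text):
    # Card numbers: mask all but the last four digits
    text = re.sub(r"\b(?:\d[ -]?){12}(\d{4})\b", r"[REDACTED-\1]", text)
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED-EMAIL]", text)
    return text

print(redact('"card": "4242424242424242", "email": "jane@example.com"'))
# "card": "[REDACTED-4242]", "email": "[REDACTED-EMAIL]"
```

Automated redaction is a safety net, not a replacement for a manual pass: a reviewer should still read every evidence excerpt before delivery.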
Command-Line Evidence: For network and infrastructure findings, present command output cleanly:
# Kerberoasting: Requesting service tickets for crackable SPNs
$ GetUserSPNs.py medsecure.local/testuser:Password123 -request
ServicePrincipalName    Name      MemberOf
--------------------    -------   --------
MSSQLSvc/db01:1433      svc_sql   CN=Domain Admins
HTTP/portal.medsecure   svc_web   CN=WebServers
# Cracking the service ticket hash
$ hashcat -m 13100 svc_sql.hash /usr/share/wordlists/rockyou.txt
svc_sql:Summer2024!
# Time to crack: 47 seconds
# This weak password allows an attacker to authenticate as a Domain Admin
39.4.4 Proof of Concept vs. Full Exploitation
A critical professional judgment: how far do you go to demonstrate a vulnerability?
Minimum Viable Evidence: For most findings, demonstrate that the vulnerability exists and show the potential impact without actually exploiting it to completion. For the MedSecure SQL injection, we: - Demonstrated the injection point with boolean-based testing - Enumerated the database structure - Extracted three redacted records to prove data access - Did NOT dump the entire database
When Full Exploitation is Needed: Sometimes, stakeholders need to see the complete attack chain to understand the risk: - Compliance assessments where exploitation is explicitly required - When the client disputes the severity of a finding - When demonstrating an attack chain across multiple vulnerabilities - Red team engagements where the objective is to demonstrate a specific scenario
Ethical Boundaries: - Never exfiltrate real sensitive data beyond what is needed to prove the vulnerability - Never modify production data - Document exactly what you accessed and what you did not - Follow the RoE data handling procedures
Professional Tip: When you discover sensitive data during testing, document the type and quantity of data accessible (e.g., "50,000 patient records including names, SSNs, and diagnoses") but avoid including actual sensitive data in the report. Use redacted samples --- just enough to prove the point.
39.5 Remediation Recommendations
Remediation recommendations are where your report creates value. Finding vulnerabilities is important, but helping the client fix them is what justifies your engagement fee.
39.5.1 Writing Effective Recommendations
Good remediation recommendations share these characteristics:
Specific: Not "fix the SQL injection" but "implement parameterized queries in the searchAppointments() function in /src/controllers/appointmentController.js using the pg library's parameterized query syntax."
Actionable: The reader should know exactly what to do. Include code examples, configuration snippets, or specific patch identifiers.
Prioritized: Indicate whether this is an immediate fix, a short-term improvement, or a long-term architectural change.
Realistic: Consider the client's technology stack, team capabilities, and constraints. Recommending "rewrite the entire application in Rust" is not helpful.
Layered: Provide defense-in-depth recommendations --- the primary fix plus additional mitigating controls:
- Primary fix: Parameterized queries (eliminates the vulnerability)
- Defense in depth: WAF rules (detects and blocks exploitation attempts)
- Detection: Database activity monitoring (alerts on suspicious queries)
- Prevention: SAST/DAST in CI/CD pipeline (prevents reintroduction)
39.5.2 Remediation Categories
Organize your remediation recommendations into categories that align with how the client assigns work:
Application Fixes (Development Team): - Code changes (input validation, parameterized queries, output encoding) - Library updates (upgrading vulnerable dependencies) - Configuration changes (disabling debug mode, setting secure cookie flags)
Infrastructure Fixes (Operations Team): - Patching (specific CVEs and patch identifiers) - Configuration hardening (firewall rules, service configurations) - Architecture changes (network segmentation, access control)
Policy and Process Changes (Management): - Implementing vulnerability management programs - Adding security to SDLC processes - Training and awareness
Quick Wins (Immediate Action): - Changing default credentials - Disabling unnecessary services - Applying critical patches
39.5.3 The Remediation Roadmap
At the end of the report, provide a consolidated remediation roadmap that helps the client plan their response:
| Priority | Finding | Remediation | Owner | Estimated Effort | Timeline |
|---|---|---|---|---|---|
| 1 (Critical) | F-001 SQL Injection | Implement parameterized queries | Dev Team | 2-3 days | 0-7 days |
| 2 (Critical) | F-002 Default Creds | Change credentials, implement credential management | Ops Team | 1 day | 0-7 days |
| 3 (High) | F-003 Kerberoastable Accounts | Reset passwords, use gMSAs | AD Team | 2-3 days | 7-30 days |
| 4 (High) | F-004 Missing Patches | Apply patches, implement patch management | Ops Team | 3-5 days | 7-30 days |
| 5 (High) | F-005 Segmentation | Revise firewall rules, verify segmentation | Network Team | 5-10 days | 7-30 days |
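Ordering the roadmap is mechanical once each finding carries a severity and a CVSS score: sort by severity rank, then by score descending within each rank. A minimal sketch, with field names assumed for illustration:

```python
# Sketch: order findings for the remediation roadmap. The dictionary field
# names ("id", "severity", "cvss") are assumptions for illustration.
SEVERITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3, "Info": 4}

def roadmap(findings):
    """Critical first, then High, etc.; ties broken by CVSS score (highest first)."""
    return sorted(findings, key=lambda f: (SEVERITY_RANK[f["severity"]], -f["cvss"]))

findings = [
    {"id": "F-003", "severity": "High", "cvss": 8.1},
    {"id": "F-001", "severity": "Critical", "cvss": 9.8},
    {"id": "F-004", "severity": "High", "cvss": 8.8},
]
print([f["id"] for f in roadmap(findings)])  # ['F-001', 'F-004', 'F-003']
```

In practice you would then adjust the automatic ordering by hand, since business context (an internet-facing High may outrank an internal Critical) legitimately overrides pure severity sorting.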
39.5.4 Positive Observations
A common oversight in penetration testing reports is the absence of positive observations. While the primary purpose of a pentest is to identify vulnerabilities, documenting what works well provides important context and value:
Why Include Positive Observations: - They demonstrate that the assessment was thorough (you tested these areas and found them effective) - They acknowledge the client's security investments (important for morale and continued funding) - They provide context for the negative findings (the security program is not entirely broken) - They help auditors and regulators understand the overall security posture
Examples from MedSecure: - "The web application firewall (WAF) successfully blocked multiple automated SQL injection attempts, demonstrating effective defense-in-depth for standard attack patterns. However, the WAF was bypassed using manual testing techniques (see F-001)." - "Multi-factor authentication was correctly implemented for VPN access, preventing password-based attacks against the remote access infrastructure." - "Active Directory Group Policy was configured to enforce screen lock after 5 minutes of inactivity, consistent with security best practices." - "TLS 1.2 with strong cipher suites was correctly configured on all production web servers (note: TLS 1.0 was additionally enabled, see F-008)."
Where to Include Them: Include positive observations either as a dedicated section between the detailed findings and the remediation roadmap, or as brief notes within the scope and methodology section under "effective controls observed."
39.5.5 Verification Testing
Always recommend verification testing after remediation:
- Retest: Return to test that specific fixes are effective (typically at reduced cost)
- Regression Testing: Ensure that fixes did not introduce new vulnerabilities
- Timeline: Recommend retesting within 30-60 days of remediation completion
- Scope: Retesting should cover the specific findings, not the full engagement scope
39.5.6 Remediation for Specific Technology Stacks
The value of your remediation recommendations increases dramatically when they reference the client's specific technology stack. Here are examples of technology-specific remediation patterns:
Node.js / Express / PostgreSQL (ShopStack):
For SQL injection:
// Use parameterized queries with pg library
const { Pool } = require('pg');
const pool = new Pool();
// VULNERABLE
const result = await pool.query(
`SELECT * FROM users WHERE id = '${req.params.id}'`
);
// FIXED
const result = await pool.query(
'SELECT * FROM users WHERE id = $1',
[req.params.id]
);
For XSS in React applications:
// React auto-escapes by default (safe)
<p>{user.displayName}</p>
// DANGEROUS - bypasses React's escaping
<p dangerouslySetInnerHTML={{__html: user.displayName}} />
// If HTML rendering is required, use a sanitization library
import DOMPurify from 'dompurify';
<p dangerouslySetInnerHTML={{__html: DOMPurify.sanitize(user.content)}} />
Python / Django / PostgreSQL:
For SQL injection:
# VULNERABLE
cursor.execute(
f"SELECT * FROM patients WHERE id = '{patient_id}'"
)
# FIXED (Django ORM - preferred)
patient = Patient.objects.get(id=patient_id)
# FIXED (raw query with parameterization)
cursor.execute(
"SELECT * FROM patients WHERE id = %s",
[patient_id]
)
Active Directory Remediation:
For Kerberoastable accounts:
# Identify Kerberoastable accounts
Get-ADUser -Filter {ServicePrincipalName -ne "$null"} -Properties ServicePrincipalName, PasswordLastSet
# Remediation: Use Group Managed Service Accounts (gMSA)
New-ADServiceAccount -Name "svc_webapp" -DNSHostName "svc-webapp.medsecure.local" -PrincipalsAllowedToRetrieveManagedPassword "WebServers$"
# For accounts that cannot use gMSA: set 25+ character random passwords
# and rotate every 90 days
Set-ADAccountPassword -Identity "svc_legacy" -NewPassword (ConvertTo-SecureString -AsPlainText "[generated 25-char password]" -Force)
Network Segmentation (Firewall Rules):
# Example: Deny medical device network access to payment VLAN
# Current (VULNERABLE):
permit ip 10.10.50.0/24 10.10.30.0/24
# Fixed (DENY by default, permit only required traffic):
deny ip 10.10.50.0/24 10.10.30.0/24 log
# Add specific permits only if legitimate traffic is identified
# (in this case, no medical device needs to reach the payment VLAN)
Including these technology-specific examples in your report gives the remediation team actionable guidance they can implement immediately, rather than generic advice they must research and adapt.
39.5.7 Cost Estimation for Remediation
Executives often ask "how much will remediation cost?" Including rough cost estimates in your remediation roadmap adds significant value:
Factors Affecting Remediation Cost: - Development effort (hours x developer rate) - Infrastructure changes (hardware, software licenses, configuration time) - Downtime during implementation (business impact) - Testing and validation after remediation - External consultant fees (if specialized expertise is needed)
Example Cost Estimates for Common Findings:
| Finding Type | Estimated Remediation Cost | Effort |
|---|---|---|
| SQL injection (single endpoint) | $2,000-$5,000 | 1-2 developer days |
| SQL injection (application-wide code review) | $15,000-$40,000 | 5-15 developer days |
| Missing patches (single server) | $500-$2,000 | 2-4 hours operations |
| Network segmentation redesign | $50,000-$200,000 | 2-8 weeks network engineering |
| MFA implementation (organization-wide) | $30,000-$100,000 | 4-12 weeks |
| WAF deployment | $20,000-$60,000/year | 1-4 weeks implementation |
These are rough estimates and should be presented as ranges, not precise figures. The client's procurement and operations teams will develop accurate quotes, but your estimates help them budget and prioritize.
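The arithmetic behind these ranges is straightforward: an effort range multiplied by a blended day-rate range gives the low and high ends. A sketch with assumed rates:

```python
# Back-of-envelope sketch: derive a cost range from effort and day rates.
# The $2,000-$2,500/day blended developer rate is an assumption for
# illustration, not a market figure.
def cost_range(days_min, days_max, rate_min, rate_max):
    """Low end: minimum days at the low rate; high end: maximum days at the high rate."""
    return days_min * rate_min, days_max * rate_max

low, high = cost_range(1, 2, 2000, 2500)  # single-endpoint SQLi fix, 1-2 days
print(f"${low:,}-${high:,}")  # $2,000-$5,000
```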
39.6 Report Review and Quality Assurance
A report with errors, inconsistencies, or unclear language undermines your credibility and the client's confidence. Quality assurance is not optional.
39.6.1 The QA Process
Self-Review (Author): Before submitting to peer review, the author should: - Re-read every finding for completeness and accuracy - Verify all evidence links and screenshots work - Check all CVSS scores against the CVSS calculator - Spell-check and grammar-check the entire document - Verify all IP addresses and hostnames are correct - Ensure consistent formatting throughout
Peer Review (Second Tester): A second tester should review the report for: - Technical accuracy: Are the findings correct? Are the risk ratings appropriate? - Reproducibility: Can the reviewer reproduce the finding from the steps provided? - Completeness: Are there findings the reviewer would have expected to see? - Clarity: Is the report understandable to someone who wasn't on the engagement? - Consistency: Are similar findings rated similarly? Is the tone consistent?
Technical Lead Review: A senior technical lead should review for: - Strategic alignment: Does the executive summary accurately reflect the findings? - Risk calibration: Are the risk ratings appropriate for this client's context? - Remediation quality: Are the recommendations specific, actionable, and realistic? - Business impact accuracy: Is the business impact appropriately communicated?
Final Edit: A final pass (potentially by a non-technical editor) for: - Grammar, spelling, and punctuation - Formatting consistency - Table of contents accuracy - Page numbering - Screenshot numbering and referencing - Confidentiality markings
39.6.2 The Pre-Delivery Report Checklist
Before any report leaves your hands, run it through a structured checklist. This is not a substitute for the multi-stage review process described above --- it is the final gate before delivery.
Structure and Formatting: - [ ] Cover page has correct client name, engagement dates, and classification - [ ] Document control section has current version number, author, and reviewer - [ ] Table of contents matches actual headings and page numbers - [ ] All section numbers are sequential and consistent - [ ] Headers, fonts, and spacing are consistent throughout - [ ] Page numbers are present and correct - [ ] Company logo and branding are correct - [ ] Confidentiality markings appear on every page (header or footer)
Executive Summary: - [ ] Written for non-technical readers (no jargon) - [ ] Overall risk assessment is clearly stated - [ ] Key findings are expressed in business impact terms - [ ] Strategic recommendations include timelines - [ ] Length is 1-2 pages (not longer)
Scope and Methodology: - [ ] All in-scope targets are listed with correct IPs/hostnames - [ ] Out-of-scope items are documented - [ ] Testing dates and windows match actual engagement - [ ] Methodology is clearly described - [ ] Limitations and constraints are documented - [ ] Tools used are listed (including version numbers for significant tools)
Findings: - [ ] Every finding has a unique ID (sequential, no gaps) - [ ] Every finding has: title, severity, CVSS score, CVSS vector, affected system, status, description, business impact, technical detail, steps to reproduce, evidence, remediation, and references - [ ] CVSS scores have been verified against the FIRST CVSS calculator - [ ] Similar findings are rated consistently - [ ] Evidence screenshots are clear, annotated, and numbered - [ ] All screenshots referenced in text exist in the document - [ ] Sensitive data in screenshots is redacted - [ ] Steps to reproduce are detailed enough for someone else to follow - [ ] Remediation recommendations are specific and actionable
Technical Accuracy: - [ ] All IP addresses and hostnames are correct (verify against scope) - [ ] All URLs are correct and properly formatted - [ ] Code examples are syntactically correct - [ ] Tool output has not been altered (only cleaned for readability) - [ ] No client data from a different engagement appears in this report
Remediation Roadmap: - [ ] All findings appear in the roadmap - [ ] Priorities are logical (Critical first, then High, etc.) - [ ] Estimated effort figures are realistic - [ ] Team ownership assignments are reasonable
Final Checks: - [ ] Spell-check completed (including technical terms) - [ ] No tracked changes or comments remaining - [ ] No "[PLACEHOLDER]" or "[TODO]" text anywhere in the document - [ ] File size is reasonable (compress images if necessary) - [ ] PDF renders correctly (formatting, images, tables all intact)
Professional Tip: Some firms automate portions of this checklist. A simple Python script (see code/example-01-report-generator.py in this chapter's code directory) can verify that finding IDs are sequential, CVSS vectors are well-formed, and all required sections are present. Automation catches the mechanical errors, freeing your reviewers to focus on technical accuracy and narrative quality.
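As a sketch of the kind of mechanical check such a script performs (the finding structure and checks below are illustrative assumptions, not the referenced file), this validates that finding IDs are sequential and that CVSS v3.1 base vectors are well-formed:

```python
import re

# Base-metric pattern for a CVSS v3.1 vector string (metrics in the order
# defined by the specification).
CVSS31 = re.compile(
    r"^CVSS:3\.1/AV:[NALP]/AC:[LH]/PR:[NLH]/UI:[NR]/S:[UC]"
    r"/C:[NLH]/I:[NLH]/A:[NLH]$"
)

def qa_check(findings):
    """Return a list of mechanical errors: ID gaps and malformed vectors."""
    errors = []
    for i, f in enumerate(findings, start=1):
        expected = f"F-{i:03d}"
        if f["id"] != expected:
            errors.append(f"ID gap: expected {expected}, got {f['id']}")
        if not CVSS31.match(f["vector"]):
            errors.append(f"{f['id']}: malformed CVSS vector")
    return errors

findings = [
    {"id": "F-001", "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"},
    {"id": "F-003", "vector": "CVSS:3.1/AV:N/AC:L"},  # gap + truncated vector
]
print(qa_check(findings))
```

A real validator would also confirm that scores match their vectors (via a CVSS library) and that every required finding section is present, but even this skeleton catches the errors reviewers most often miss.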
39.6.3 The Debrief Presentation
Report delivery is not just sending a PDF --- it typically includes a formal debrief presentation. How you deliver findings in person is as important as how you write them.
Structuring the Debrief:
Most debriefs follow a two-part structure:
Part 1: Executive Session (30-60 minutes) - Audience: CIO, CISO, CTO, VP of Engineering, legal counsel, risk manager - Content: Executive summary findings, overall risk posture, strategic recommendations - Tone: Business-focused, no command-line output or technical details - Goal: Decision-makers understand the risk and commit to remediation resources - Materials: 5-10 slide presentation distilled from the executive summary
Part 2: Technical Deep-Dive (60-120 minutes) - Audience: Security team, development leads, system administrators, network engineers - Content: Detailed walkthrough of each finding, live demonstration where possible, remediation guidance - Tone: Technical, collaborative, interactive - Goal: Remediation teams understand each finding well enough to begin work - Materials: The full report plus any supplementary evidence
Presentation Tips: - Bring the tester who performed the work --- they can answer detailed technical questions - Prepare for pushback: some teams will dispute findings or severity ratings - When challenged, refer to evidence: "Let me show you the request and response that demonstrates this" - Avoid being adversarial: you are a partner helping improve security, not an attacker shaming the team - Take notes during the debrief: client questions often reveal context that improves the report - If the client identifies a compensating control you were unaware of, document it and consider adjusting the risk rating in a report revision
Handling Difficult Conversations: Some findings create uncomfortable conversations, especially when they reveal: - Failures in processes that specific people own - Issues that were previously identified but not addressed - Violations of internal policy or compliance requirements - Vulnerabilities in recently deployed systems (implying process failures)
Navigate these with professionalism: present facts and evidence, avoid assigning blame to individuals, focus on systemic improvements rather than personal failings, and frame the conversation around "how do we fix this" rather than "how did this happen."
39.6.4 Common Report Deficiencies
Based on CREST assessor feedback and industry experience, these are the most common report quality issues:
Vague Findings: "The web application has multiple security issues" is not a finding. Every finding must be specific about what, where, and how.
Missing Business Impact: Technical findings without business impact analysis are useless to executives. Always translate the vulnerability into business terms.
Inconsistent Risk Ratings: Two similar findings rated differently without explanation. Use your risk rating methodology consistently and document any deviations.
Poor Evidence: Screenshots that are too small to read, command output without context, or findings with no evidence at all. Every finding needs sufficient evidence to be independently verified.
Copy-Paste Artifacts: Tool output pasted directly into findings without analysis or context. Scanner output is raw material, not a finished finding.
Missing Remediation: Finding a vulnerability without recommending a fix provides no value. Every finding needs a specific, actionable remediation recommendation.
Scope Confusion: The report tests systems not in scope, or fails to note that certain in-scope systems were not tested (and why). Always reconcile your findings against the scope.
39.6.5 Report Delivery
Secure Delivery: - Encrypt the report (password-protected PDF or PGP-encrypted email) - Send the password via a separate channel (SMS, phone call) - Verify the recipient's identity before sending - Use secure file transfer if the report is too large for email
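For the out-of-band password itself, avoid anything guessable or reused. A small sketch using Python's secrets module to generate a one-time report password (the alphanumeric-only alphabet is an assumption to keep the password easy to read aloud over the phone):

```python
import secrets
import string

# Alphanumeric only: unambiguous enough to dictate over a phone call.
ALPHABET = string.ascii_letters + string.digits

def report_password(length=20):
    """Generate a cryptographically random one-time password for the
    encrypted report; send it via a separate channel, never with the report."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = report_password()
print(len(pw), pw.isalnum())  # 20 True
```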
Presentation: - Offer a debrief call or presentation to walk through findings - Prepare a separate slide deck for board-level presentation (5-10 slides) - Be prepared to answer questions and provide additional context - Bring the tester(s) who did the work --- they know the details best
Follow-Up: - Provide support for remediation questions (typically included in engagement fee for 30 days) - Document any scope clarifications or additional findings discovered during debrief - Issue report updates if errors are discovered after delivery (with version tracking)
MedSecure Report Delivery: We delivered the MedSecure report via encrypted email to Dr. Sarah Chen and Marcus Torres, with the decryption password communicated via a separate phone call. We conducted a two-hour debrief: the first hour was an executive summary presentation for Dr. Chen, the CTO, and the general counsel; the second hour was a technical deep-dive with Marcus and the development team lead, Priya Patel. During the technical session, Marcus asked about the network segmentation finding, and we were able to pull up our Nmap scans and firewall rule evidence in real time.
39.6.6 Report Versioning and Updates
Reports sometimes require updates after delivery. Handle this professionally:
Version Control: - Maintain a clear version history in the document control section - Every revision gets a new version number (1.0 = initial, 1.1 = minor correction, 2.0 = major revision) - Document what changed in each version and why - Maintain copies of all versions (never overwrite the original)
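The numbering convention can be captured in a few lines. A sketch, assuming the major.minor scheme described above:

```python
# Sketch of the major.minor versioning convention: minor corrections bump
# the minor number (1.0 -> 1.1); major revisions bump the major and reset
# the minor (1.3 -> 2.0).
def bump(version, kind):
    major, minor = map(int, version.split("."))
    if kind == "major":
        return f"{major + 1}.0"
    return f"{major}.{minor + 1}"

print(bump("1.0", "minor"), bump("1.3", "major"))  # 1.1 2.0
```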
Common Reasons for Report Updates: - Factual errors discovered after delivery (wrong IP address, incorrect CVSS score) - Client provides additional context that changes a finding's assessment - Retesting reveals that a finding has been remediated (status update) - New information emerges that affects a finding's severity - Client requests a specific format for compliance purposes (separate PCI report, HIPAA mapping)
Post-Remediation Updates: After the client remediates findings and requests retesting, produce an updated report that: - Changes the status of remediated findings from "Open" to "Remediated" or "Retested - Pass" - Documents the retesting date and evidence of remediation - Identifies any findings that were not successfully remediated ("Retested - Fail") - Updates the executive summary to reflect the current posture - Keeps the original findings intact (do not delete them) for audit trail purposes
39.6.7 Reports as Legal Documents
Penetration testing reports have legal implications that affect how they should be written and handled:
Discovery Risk: In the event of litigation (class action from a data breach, regulatory investigation, insurance claim), penetration testing reports may be subject to legal discovery. This means: - Everything you write in the report could be read aloud in a courtroom - Be factual, professional, and precise --- avoid speculation, humor, or editorial commentary - Do not overstate or understate findings - Document what you tested and what you did not test
Attorney-Client Privilege: Some organizations engage penetration testing firms through their legal counsel, structuring the engagement as attorney work product to potentially protect the report under attorney-client privilege. While the legal protections vary by jurisdiction: - If the engagement is structured this way, the report should note that it was produced at the direction of legal counsel - Follow any specific handling instructions from the client's legal team - Be aware that privilege protections may not apply in all circumstances (regulatory investigations, for example)
Regulatory Evidence: Pentest reports serve as compliance evidence for PCI DSS, SOC 2, HIPAA, and other frameworks. Auditors and regulators will review reports to assess: - Was the testing methodology appropriate? - Was the scope adequate? - Were findings properly documented? - Were remediation recommendations provided? - Were findings tracked to resolution?
Write your report knowing that an auditor will evaluate it. A report that an auditor finds inadequate reflects poorly on both the tester and the client.
39.6.8 The Complete MedSecure Report: Bringing It All Together
The MedSecure penetration testing report, when assembled using all the techniques in this chapter, totals approximately 65 pages:
- Cover Page: 1 page
- Document Control: 1 page
- Table of Contents: 1 page
- Executive Summary: 2 pages
- Scope and Methodology: 3 pages
- Findings Summary: 2 pages
- Detailed Findings (10 findings x 3-4 pages each): 35 pages
- Positive Observations: 1 page
- Remediation Roadmap: 2 pages
- Appendices (scan results, glossary, severity definitions): 17 pages
This report serves multiple audiences: Dr. Chen presents the executive summary to MedSecure's board, Marcus Torres uses the technical findings to plan remediation, Priya Patel's development team follows the code-level remediation guidance, and the compliance team maps findings to HIPAA and PCI DSS requirements for their auditors.
The report is delivered encrypted, debriefed in person, and updated after retesting three months later. It becomes part of MedSecure's compliance evidence portfolio and risk management program. The technical work took ten days; the report will drive security improvements for the next twelve months.
39.7 Chapter Summary
Report writing is the skill that completes the penetration testing cycle. Without an effective report, even the most sophisticated testing is wasted effort.
Key Concepts Reviewed
Report Structure: - A professional report follows a consistent structure: cover page, document control, executive summary, scope/methodology, findings summary, technical findings, remediation roadmap, and appendices - Reports typically range from 40-80 pages for a standard engagement - PDF is the standard delivery format, but clients may request additional formats
Dual Audience Writing: - Executive summaries communicate risk in business terms (1-2 pages, no jargon) - Technical findings provide reproducible detail for remediation teams - Both audiences need to be served within a single document - Professional, objective tone throughout --- never condescending or alarmist
Finding Documentation: - Every finding follows a consistent template: ID, title, severity, CVSS, affected systems, description, business impact, technical detail, steps to reproduce, evidence, remediation, and references - The MedSecure F-001 SQL injection example demonstrates the expected level of detail - Proof of concept should demonstrate the vulnerability without unnecessary exploitation
Risk Ratings: - CVSS provides standardized numerical scores - Risk matrices that combine technical severity with business impact provide better context - Always contextualize risk ratings for the specific client and environment - Include the full CVSS vector string for transparency
Evidence Standards: - Screenshots must be clear, annotated, and consistently organized - Request/response pairs are essential for web application findings - Command output should be cleaned and annotated, never altered - Redact sensitive data but preserve vulnerability evidence
Remediation: - Recommendations must be specific, actionable, prioritized, realistic, and layered - Organize by team responsibility (development, operations, management) - Provide a consolidated remediation roadmap with priorities and timelines - Always recommend verification testing after remediation
Quality Assurance: - Four-stage review: self-review, peer review, technical lead review, final edit - Common deficiencies include vague findings, missing business impact, inconsistent ratings, and poor evidence - Secure delivery with encryption and separate password channel - Debrief presentation with both executive and technical sessions
What's Next
In Chapter 40, we will explore security compliance and governance --- the regulatory landscape that drives most penetration testing engagements. Understanding PCI DSS, HIPAA, SOC 2, ISO 27001, and emerging regulations like NIS2 and DORA will help you position your testing services effectively and ensure your work meets regulatory expectations.
Blue Team Perspective: As a defender, you are a consumer of pentest reports. Demand quality. If a report is vague, lacks evidence, or provides generic remediation, push back. A good report should give your team everything they need to understand, reproduce, and fix each finding. If you cannot reproduce a finding from the report, the report is inadequate.
Try It in Your Lab: Practice writing findings for vulnerabilities in your lab environment. Test your Metasploitable or DVWA instance, document three findings using the template from this chapter, and have a friend (ideally a non-technical friend for the executive summary) review them for clarity. The feedback will be invaluable.