Templates and Worksheets

This appendix provides ten practical templates used in professional penetration testing and security assessment engagements. Each template is designed to be adapted to your specific needs. Replace bracketed text [like this] with your information.

These templates are starting points, not legal documents. For authorization letters and contracts, always consult with legal counsel familiar with cybersecurity law in your jurisdiction.


Template 1: Penetration Test Scope Document

=====================================================
        PENETRATION TEST SCOPE DOCUMENT
=====================================================

Document ID:        [PROJ-YYYY-NNN]
Version:            [1.0]
Date:               [YYYY-MM-DD]
Classification:     [CONFIDENTIAL]

-----------------------------------------------------
1. ENGAGEMENT OVERVIEW
-----------------------------------------------------

Client Name:        [Full legal entity name]
Client Contact:     [Name, title, email, phone]
Testing Firm:       [Your company name]
Lead Tester:        [Name, email, phone]
Project Manager:    [Name, email, phone]

Engagement Type:    [ ] External Penetration Test
                    [ ] Internal Penetration Test
                    [ ] Web Application Assessment
                    [ ] Wireless Assessment
                    [ ] Social Engineering
                    [ ] Red Team Engagement
                    [ ] Physical Security Assessment
                    [ ] Other: [specify]

Start Date:         [YYYY-MM-DD]
End Date:           [YYYY-MM-DD]
Testing Window:     [e.g., Monday-Friday, 22:00-06:00 EST]

-----------------------------------------------------
2. IN-SCOPE ASSETS
-----------------------------------------------------

[List all assets explicitly authorized for testing]

IP Addresses / Ranges:
  - [192.168.1.0/24]
  - [10.0.0.50]
  - [203.0.113.10 - 203.0.113.20]

Domains / URLs:
  - [www.example.com]
  - [api.example.com]
  - [portal.example.com]

Web Applications:
  - [Application Name] at [URL] — [Description]
  - [Application Name] at [URL] — [Description]

Wireless Networks:
  - SSID: [network name] — [Location]

Physical Locations:
  - [Address, building, floor]

User Accounts Provided:
  - [Username] — [Role/Access Level]
  - [Username] — [Role/Access Level]

-----------------------------------------------------
3. OUT-OF-SCOPE ASSETS
-----------------------------------------------------

[List all assets explicitly excluded from testing]

  - [10.0.0.100 — Production database (do not touch)]
  - [mail.example.com — Third-party hosted email]
  - [Building B — Not authorized for physical access]
  - [Any Denial of Service (DoS) attacks]
  - [Any attacks against end users / customers]
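Scope mistakes are the easiest way to drift into unauthorized testing. The in-scope and out-of-scope ranges above can be enforced programmatically before any packet is sent; a minimal Python sketch using only the standard library, with the placeholder ranges from this template standing in for real values:

```python
import ipaddress

# Ranges copied from the signed scope document (placeholder examples).
IN_SCOPE = [ipaddress.ip_network("192.168.1.0/24"),
            ipaddress.ip_network("10.0.0.50/32")]
OUT_OF_SCOPE = [ipaddress.ip_network("10.0.0.100/32")]

def in_scope(host):
    """True only if host falls inside an authorized range and is not
    explicitly excluded. Exclusions always win over inclusions."""
    ip = ipaddress.ip_address(host)
    if any(ip in net for net in OUT_OF_SCOPE):
        return False
    return any(ip in net for net in IN_SCOPE)

print(in_scope("192.168.1.77"))   # True
print(in_scope("10.0.0.100"))     # False (explicitly excluded)
```

Wiring a check like this into scanning wrappers turns the scope document into a hard guardrail rather than a document the operator must remember to consult.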

-----------------------------------------------------
4. TESTING METHODOLOGY
-----------------------------------------------------

Authorized Testing Methods:
  [ ] Automated vulnerability scanning
  [ ] Manual exploitation
  [ ] Password attacks (online brute force)
  [ ] Password attacks (offline hash cracking)
  [ ] Social engineering (phishing)
  [ ] Social engineering (phone/vishing)
  [ ] Social engineering (physical)
  [ ] Wireless testing
  [ ] Code review
  [ ] Other: [specify]

Prohibited Methods:
  - [Denial of Service attacks]
  - [Modification of production data]
  - [Accessing customer/patient personal data beyond proof of access]
  - [Other: specify]

-----------------------------------------------------
5. TESTING CONSTRAINTS
-----------------------------------------------------

Maximum concurrent scan threads:      [e.g., 10]
Maximum request rate:                 [e.g., 100 requests/second]
Bandwidth limitations:                [specify if any]
Maintenance windows to avoid:         [dates/times]
Change freeze periods:                [dates]

-----------------------------------------------------
6. COMMUNICATION PLAN
-----------------------------------------------------

Primary Client Contact:    [Name] — [Phone] — [Email]
Emergency Contact:         [Name] — [Phone] — [Email]
24/7 Emergency Contact:    [Name] — [Phone]

Status Updates:             [ ] Daily  [ ] Weekly  [ ] As needed
Update Method:              [ ] Email  [ ] Phone  [ ] Secure portal

Critical Finding Notification:
  Findings rated [Critical/High] will be reported to
  [email/phone] within [N] hours of discovery.

-----------------------------------------------------
7. DATA HANDLING
-----------------------------------------------------

Data Classification:        [CONFIDENTIAL]
Encryption Required:        [ ] In transit  [ ] At rest  [ ] Both
Data Retention Period:      [e.g., 90 days after report delivery]
Data Destruction Method:    [e.g., Secure deletion with verification]
Sensitive Data Encountered: [If PHI/PCI/PII is encountered, immediately
                            stop and notify client contact. Do not
                            copy, store, or exfiltrate.]

-----------------------------------------------------
8. DELIVERABLES
-----------------------------------------------------

  [ ] Executive Summary Report
  [ ] Technical Findings Report
  [ ] Vulnerability Remediation Guidance
  [ ] Raw scan data (upon request)
  [ ] Retest after remediation (within [N] days)
  [ ] Debrief presentation

Report Delivery Date:       [YYYY-MM-DD]
Report Format:              [ ] PDF  [ ] Word
Encrypted Delivery:         [ ] Required  [ ] Not required

-----------------------------------------------------
9. AUTHORIZATION
-----------------------------------------------------

By signing below, [Client Name] authorizes [Testing Firm]
to perform the security testing described in this document
against the in-scope assets listed in Section 2, subject
to the constraints in Sections 3-5.

Client Authorized Signatory:

Name:       ___________________________________
Title:      ___________________________________
Signature:  ___________________________________
Date:       ___________________________________

Testing Firm Representative:

Name:       ___________________________________
Title:      ___________________________________
Signature:  ___________________________________
Date:       ___________________________________

Guidance Notes:
  - The scope document should be signed before any testing begins. No exceptions.
  - If the scope changes during the engagement, create a scope amendment with new signatures.
  - Keep the scope document accessible during testing — refer to it whenever you are unsure whether an action is authorized.
  - The testing firm should retain a signed copy for at least the data retention period.


Template 2: Rules of Engagement (RoE)

=====================================================
            RULES OF ENGAGEMENT
=====================================================

Project:            [Project Name / ID]
Date:               [YYYY-MM-DD]
Version:            [1.0]

-----------------------------------------------------
1. GENERAL RULES
-----------------------------------------------------

1.1  All testing must remain within the scope defined in
     the Penetration Test Scope Document (ID: [reference]).

1.2  Testing will only occur during the authorized window:
     [Start Date] to [End Date], [Time Range] [Timezone].

1.3  Testers will use the following source IP addresses:
     - [IP address / range for external testing]
     - [IP address / range for internal testing]

     Client security teams may whitelist these IPs or use
     them to distinguish testing traffic from real attacks.

1.4  The tester will identify themselves using the
     codeword "[codeword]" if contacted by client
     security personnel during testing.

-----------------------------------------------------
2. ESCALATION PROCEDURES
-----------------------------------------------------

2.1  If testing causes an unintended service disruption:
     a) Stop the specific test activity immediately
     b) Notify [Client Contact] within [15] minutes
     c) Document the incident and contributing factors
     d) Resume testing only after client approval

2.2  If a critical vulnerability is discovered:
     a) Notify [Client Contact] within [4] hours
     b) Provide preliminary details via [encrypted email]
     c) Continue testing unless instructed otherwise

2.3  If evidence of a prior or active compromise is
     discovered (indicators of a real attacker):
     a) Stop testing immediately
     b) Preserve all evidence
     c) Notify [Client Contact] within [1] hour
     d) Do not attempt to remediate or interact with
        the attacker's infrastructure

2.4  If sensitive data (PII, PHI, PCI, credentials) is
     encountered:
     a) Document the finding (type and location, not content)
     b) Do not copy, store, or exfiltrate the data
     c) Take a screenshot showing access was possible
        without exposing actual data values
     d) Notify client if large-scale data exposure exists

-----------------------------------------------------
3. TECHNICAL RULES
-----------------------------------------------------

3.1  Denial of Service:
     [ ] Permitted — with advance notice to [Contact]
     [X] NOT Permitted under any circumstances

3.2  Brute Force / Password Attacks:
     [ ] Permitted — with lockout threshold of [N] attempts
     [ ] Permitted — only against test accounts
     [ ] NOT Permitted online; offline cracking only
     Lockout policy: [document the account lockout policy]

3.3  Social Engineering:
     [ ] Phishing permitted — targeting [N] employees
     [ ] Vishing permitted
     [ ] Physical social engineering permitted
     [ ] NOT Permitted

3.4  Exploitation:
     [ ] Exploitation to prove access is permitted
     [ ] Exploitation must stop at proof of concept
     [ ] Post-exploitation and pivoting are permitted
     Persistent backdoors:     [ ] Permitted  [X] NOT Permitted

3.5  Data Exfiltration:
     [ ] Simulated exfiltration only (prove the path exists)
     [ ] Limited exfiltration of test data only
     [X] No actual data exfiltration

3.6  Automated Scanning:
     Maximum concurrent connections: [N]
     Maximum requests per second:    [N]
     Scan timing: [e.g., Nmap -T3 or below]

3.7  Physical Testing:
     [ ] Building access via social engineering: Permitted
     [ ] Lock picking: Permitted
     [ ] Badge cloning: Permitted
     Restrictions: [No access to data center, server rooms,
     or areas marked "Restricted" without escort]

-----------------------------------------------------
4. DOCUMENTATION REQUIREMENTS
-----------------------------------------------------

4.1  Testers will maintain a detailed activity log including:
     - Timestamp of each significant action
     - Source and destination IP addresses
     - Tools used and commands executed
     - Results and evidence collected

4.2  All evidence will be stored encrypted (AES-256 or
     equivalent) on tester-controlled systems.

4.3  Screen recordings:  [ ] Required  [ ] Optional
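The activity log required by 4.1 and the evidence handling in 4.2 benefit from automation: a timestamped log entry and a cryptographic hash recorded at collection time make evidence verifiable later. A minimal sketch (the field names and file layout are illustrative, not mandated by the RoE):

```python
import hashlib
import json
from datetime import datetime, timezone

def log_action(logfile, src, dst, tool, command, result):
    """Append one timestamped activity-log entry (rule 4.1)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source_ip": src,
        "dest_ip": dst,
        "tool": tool,
        "command": command,
        "result": result,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")

def evidence_hash(path):
    """SHA-256 of an evidence file, recorded at collection time so
    later tampering or corruption is detectable (complements 4.2)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

Recording the hash when evidence is collected lets the final report reference artifacts that can be re-verified byte-for-byte during QA or a dispute.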

-----------------------------------------------------
5. RULES ACCEPTANCE
-----------------------------------------------------

Client Representative:

Name:       ___________________________________
Title:      ___________________________________
Signature:  ___________________________________
Date:       ___________________________________

Lead Tester:

Name:       ___________________________________
Signature:  ___________________________________
Date:       ___________________________________

Guidance Notes:
  - The RoE is the operational handbook during the engagement. Print it and keep it at your workstation.
  - If you encounter a situation not covered by the RoE, stop and consult the client contact before proceeding.
  - Update the RoE version number and re-sign if rules change mid-engagement.
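Rule 3.6's rate caps are most reliable when enforced by the tooling itself rather than by operator discipline. A minimal token-bucket throttle in Python (the class name and the limit values are illustrative, not part of the RoE):

```python
import time

class TokenBucket:
    """Caps outbound request rate so scans stay inside RoE limits."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens added per second
        self.capacity = burst         # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

# Example: cap at 100 requests/second with a burst of 10, matching
# the [N] values agreed under rule 3.6. Call acquire() before each
# scan request.
bucket = TokenBucket(rate_per_sec=100, burst=10)
for _ in range(5):
    bucket.acquire()
```

Because the limiter lives in the test harness, a misconfigured scanner cannot exceed the agreed rate even under operator error.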


Template 3: Authorization / Permission to Test Letter

=====================================================
       AUTHORIZATION TO PERFORM SECURITY TESTING
=====================================================

Date:               [YYYY-MM-DD]
Reference:          [Letter Reference Number]

To Whom It May Concern:

This letter confirms that [Client Full Legal Name], located
at [Client Address], hereby authorizes [Tester / Company Name],
located at [Tester Address], to perform security testing against
the systems, networks, and applications described below.

AUTHORIZED SCOPE:
- [IP addresses, domains, applications, physical locations]
- [Refer to attached Scope Document ID: PROJ-YYYY-NNN]

TESTING PERIOD:
- From: [YYYY-MM-DD HH:MM Timezone]
- To:   [YYYY-MM-DD HH:MM Timezone]

AUTHORIZED ACTIVITIES:
- Vulnerability scanning and assessment
- Penetration testing (manual and automated)
- [Additional authorized activities]

EXCLUDED ACTIVITIES:
- [List any prohibited testing methods]

POINT OF CONTACT:
- Name:  [Client Contact Name]
- Title: [Title]
- Phone: [Phone Number]
- Email: [Email Address]

This authorization is granted by the undersigned, who has
the authority to authorize such testing on behalf of
[Client Full Legal Name]. This letter may be presented to
law enforcement or internet service providers as proof
of authorization if questions arise during the testing period.

The authorized testing party agrees to:
1. Test only within the defined scope
2. Comply with all applicable laws
3. Report findings only to authorized contacts
4. Handle all data in accordance with the agreed data
   handling procedures
5. Destroy all testing data within [N] days of engagement
   completion

Authorized By:

Name:           ___________________________________
Title:          ___________________________________
Organization:   ___________________________________
Signature:      ___________________________________
Date:           ___________________________________

Witness (optional):

Name:           ___________________________________
Signature:      ___________________________________
Date:           ___________________________________


[Company Letterhead / Seal]

Guidance Notes:
  - This letter is your "get out of jail free" card. Carry a copy (physical and digital) at all times during the engagement.
  - The signer must have actual authority to authorize testing. A random employee's signature is not sufficient — verify the signer's authority.
  - For physical testing, carry a printed copy with a contact number the client can verify if you are stopped by security.
  - This template should be reviewed by legal counsel before use in a real engagement.


Template 4: Vulnerability Assessment Report Template

=====================================================
         VULNERABILITY ASSESSMENT REPORT
=====================================================

                  [CONFIDENTIAL]

Client:             [Client Name]
Assessment Date:    [YYYY-MM-DD to YYYY-MM-DD]
Report Date:        [YYYY-MM-DD]
Report Version:     [1.0]
Prepared By:        [Tester Name, Company]
Report ID:          [RPT-YYYY-NNN]

=====================================================
TABLE OF CONTENTS
=====================================================

1. Executive Summary
2. Assessment Scope and Methodology
3. Findings Summary
4. Detailed Findings
5. Remediation Priority Matrix
6. Appendix A: Scan Configuration
7. Appendix B: Raw Scan Data (if requested)

=====================================================
1. EXECUTIVE SUMMARY
=====================================================

[2-3 paragraphs summarizing:
 - What was assessed and why
 - Overall security posture assessment
 - Total findings by severity (Critical/High/Medium/Low/Info)
 - Top 3 most significant findings in business terms
 - Strategic recommendations (3-5 bullets)
]

Overall Risk Rating:  [ ] Critical  [ ] High  [ ] Medium  [ ] Low

Findings Summary:
+----------+-------+
| Severity | Count |
+----------+-------+
| Critical |  [N]  |
| High     |  [N]  |
| Medium   |  [N]  |
| Low      |  [N]  |
| Info     |  [N]  |
+----------+-------+
| Total    |  [N]  |
+----------+-------+

=====================================================
2. ASSESSMENT SCOPE AND METHODOLOGY
=====================================================

2.1 Scope
  - [List all assets scanned]
  - [IP ranges, domains, applications]

2.2 Methodology
  - [Standards followed: NIST SP 800-115, OWASP, etc.]
  - [Tools used: Nessus, OpenVAS, Nuclei, etc.]
  - [Scan type: Authenticated / Unauthenticated]

2.3 Limitations
  - [Any systems unavailable during scanning]
  - [Network segments not reachable]
  - [Time constraints]

=====================================================
3. FINDINGS SUMMARY
=====================================================

[Table listing all findings sorted by severity]

+----+----------+------------------------------------+---------+------+
| #  | Severity | Finding                            | CVSS    | Host |
+----+----------+------------------------------------+---------+------+
|  1 | Critical | [Finding title]                    | [Score] | [IP] |
|  2 | High     | [Finding title]                    | [Score] | [IP] |
|  3 | High     | [Finding title]                    | [Score] | [IP] |
| .. | ...      | ...                                | ...     | ...  |
+----+----------+------------------------------------+---------+------+

=====================================================
4. DETAILED FINDINGS
=====================================================

[Repeat for each finding]

--- FINDING [#] -------------------------------------------

Title:          [Descriptive title]
Severity:       [Critical / High / Medium / Low / Info]
CVSS Score:     [N.N] — Vector: [CVSS:3.1/AV:.../AC:.../...]
Affected Host:  [IP / hostname]
Affected Port:  [Port/Protocol]
CVE:            [CVE-YYYY-NNNN] (if applicable)

Description:
  [What the vulnerability is and how it was identified.
   Include technical details about the vulnerable component.]

Evidence:
  [Scanner output, version information, or manual verification
   results. Include screenshots where applicable.]

Impact:
  [What an attacker could achieve by exploiting this
   vulnerability. Describe in business terms.]

Remediation:
  [Specific, actionable steps to fix the vulnerability.
   Include patch references, configuration changes, or
   workarounds.]

References:
  - [CVE link]
  - [Vendor advisory link]
  - [Remediation guide link]

-------------------------------------------------------

=====================================================
5. REMEDIATION PRIORITY MATRIX
=====================================================

[Organize findings by priority considering severity,
 exploitability, and business impact]

Priority 1 (Immediate — within 72 hours):
  - Finding #[N]: [Title]
  - Finding #[N]: [Title]

Priority 2 (Short-term — within 30 days):
  - Finding #[N]: [Title]

Priority 3 (Medium-term — within 90 days):
  - Finding #[N]: [Title]

Priority 4 (Long-term — within 180 days):
  - Finding #[N]: [Title]

=====================================================
APPENDICES
=====================================================

Appendix A: Scan Configuration
  - Scanner: [Tool and version]
  - Scan policy: [Policy name]
  - Credentials used: [Yes/No — do not include actual credentials]
  - Scan duration: [Start time — End time]
  - Scan settings: [Thread count, timing, etc.]

Appendix B: Raw Scan Data
  [Available upon request in encrypted format]

Guidance Notes:
  - A vulnerability assessment report differs from a penetration test report — it focuses on identification, not exploitation.
  - Every finding needs a clear remediation recommendation. "Patch the system" is not sufficient — specify which patch.
  - Include false positive analysis — note which scanner findings were manually verified and which were ruled out as false positives.
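The detailed findings above record a CVSS 3.1 score and vector string for each issue. The base score is fully determined by the eight base metrics, so it can be computed rather than estimated. A sketch of the CVSS 3.1 base-score equations, with metric weights and the Roundup function taken from the FIRST specification:

```python
# CVSS 3.1 base-metric weights (FIRST specification).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR_U = {"N": 0.85, "L": 0.62, "H": 0.27}   # Scope unchanged
PR_C = {"N": 0.85, "L": 0.68, "H": 0.5}    # Scope changed
UI = {"N": 0.85, "R": 0.62}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def roundup(value):
    """Spec-defined rounding to one decimal, avoiding float drift."""
    i = round(value * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def cvss_base(av, ac, pr, ui, s, c, i, a):
    """CVSS 3.1 base score from the eight base metrics."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if s == "U":
        impact = 6.42 * iss
    else:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    pr_w = (PR_U if s == "U" else PR_C)[pr]
    exploitability = 8.22 * AV[av] * AC[ac] * pr_w * UI[ui]
    if impact <= 0:
        return 0.0
    if s == "U":
        return roundup(min(impact + exploitability, 10))
    return roundup(min(1.08 * (impact + exploitability), 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(cvss_base("N", "L", "N", "N", "U", "H", "H", "H"))  # 9.8
```

Computing the score from the recorded vector string also catches transcription errors: if the score in a finding does not match its vector, one of the two is wrong.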


Template 5: Penetration Test Report (Executive + Technical)

=====================================================
          PENETRATION TEST REPORT
=====================================================

                  [CONFIDENTIAL]

Client:             [Client Name]
Engagement Period:  [YYYY-MM-DD to YYYY-MM-DD]
Report Date:        [YYYY-MM-DD]
Report Version:     [1.0]
Prepared By:        [Lead Tester, Company]
Reviewed By:        [QA Reviewer, Company]
Report ID:          [RPT-YYYY-NNN]

=====================================================
TABLE OF CONTENTS
=====================================================

PART I: EXECUTIVE REPORT
  1. Executive Summary
  2. Scope and Objectives
  3. Key Findings
  4. Risk Assessment
  5. Strategic Recommendations

PART II: TECHNICAL REPORT
  6. Methodology
  7. Detailed Findings
  8. Attack Narratives
  9. Remediation Details
  10. Appendices

=====================================================
PART I: EXECUTIVE REPORT
=====================================================

1. EXECUTIVE SUMMARY
-----------------------------------------------------

[Purpose]
[Client Name] engaged [Testing Firm] to perform a
[type] penetration test of [target description] from
[start date] to [end date].

[Overall Assessment]
The penetration test identified [N] vulnerabilities:
[N] Critical, [N] High, [N] Medium, [N] Low.

[Key Achievements — what the tester was able to do]
  - [e.g., Gained domain administrator access from an
    unauthenticated external position]
  - [e.g., Accessed customer database containing N records]
  - [e.g., Bypassed multi-factor authentication]

[Overall Risk Rating]
Based on the findings, the overall security posture of
the tested environment is assessed as: [Critical/High/
Medium/Low/Acceptable].

[Positive Observations — what was done well]
  - [e.g., Network segmentation prevented lateral
    movement to payment processing systems]
  - [e.g., Incident response team detected testing
    within N hours]

2. SCOPE AND OBJECTIVES
-----------------------------------------------------

Testing Objectives:
  1. [Identify vulnerabilities in external-facing systems]
  2. [Determine if unauthorized access to internal
     network is possible]
  3. [Assess the security of the web application]
  4. [Test employee susceptibility to social engineering]

Scope: [Refer to Scope Document ID: PROJ-YYYY-NNN]

3. KEY FINDINGS (NON-TECHNICAL)
-----------------------------------------------------

[3-5 key findings explained in business language]

Finding 1: [Title in business terms]
  Risk:   [Critical/High/Medium/Low]
  Impact: [Business impact in plain language]
  Action: [High-level remediation recommendation]

Finding 2: [Title]
  Risk:   [Level]
  Impact: [Description]
  Action: [Recommendation]

[Continue for top findings]

4. RISK ASSESSMENT
-----------------------------------------------------

             |  Low Impact  |  Med Impact  | High Impact
  -----------+--------------+--------------+------------
  Likely     |   [count]    |   [count]    |  [count]
  Possible   |   [count]    |   [count]    |  [count]
  Unlikely   |   [count]    |   [count]    |  [count]

5. STRATEGIC RECOMMENDATIONS
-----------------------------------------------------

  1. [Immediate: Patch critical vulnerabilities within 72 hours]
  2. [Short-term: Implement MFA across all administrative access]
  3. [Medium-term: Deploy network segmentation for sensitive systems]
  4. [Long-term: Establish a vulnerability management program]
  5. [Ongoing: Conduct regular security awareness training]


=====================================================
PART II: TECHNICAL REPORT
=====================================================

6. METHODOLOGY
-----------------------------------------------------

6.1 Approach:  [ ] Black Box  [ ] Gray Box  [ ] White Box
6.2 Standards: [PTES / OWASP / OSSTMM / NIST]
6.3 Tools Used:
    - Reconnaissance: [Nmap, Amass, theHarvester, Shodan]
    - Scanning: [Nessus, Nuclei, Nikto]
    - Exploitation: [Metasploit, Burp Suite, sqlmap]
    - Post-Exploitation: [Mimikatz, BloodHound, Rubeus]
    - Reporting: [Custom, Dradis, Serpico]

6.4 Testing Timeline:
    [Date] - [Phase and activities]
    [Date] - [Phase and activities]
    [Date] - [Phase and activities]

7. DETAILED FINDINGS
-----------------------------------------------------

[Repeat for each finding — sorted by severity]

--- FINDING [ID]: [TITLE] -------------------------

Severity:          [Critical / High / Medium / Low]
CVSS 3.1 Score:    [N.N]
CVSS Vector:       [CVSS:3.1/AV:.../AC:.../.../]
CWE:               [CWE-NNN: Name]
Affected Asset:    [IP/URL/Application]
Status:            [Confirmed / Potential]

Description:
  [Technical description of the vulnerability]

Steps to Reproduce:
  1. [Step 1 with exact command/request]
  2. [Step 2]
  3. [Step 3]
  [Include HTTP requests, commands, or tool output]

Evidence:
  [Screenshots, request/response pairs, command output]
  [Redact sensitive data — show proof of access, not
   actual data content]

Impact Analysis:
  [Technical impact: what access does exploitation provide?]
  [Business impact: what is the real-world consequence?]

Remediation:
  [Specific technical fix — patches, configuration changes,
   code modifications]
  [Include vendor advisories or patch numbers]
  [Suggest compensating controls if patching is not
   immediately possible]

References:
  - [CVE, CWE, OWASP references]
  - [Vendor advisories]
  - [Tool documentation]

-------------------------------------------------------

8. ATTACK NARRATIVES
-----------------------------------------------------

[Describe the full attack chain from initial access to
 objective achievement as a narrative]

8.1 External to Internal Access
  [Step-by-step narrative of how the tester progressed
   from an external, unauthenticated position to internal
   network access. Include timeline and tools used.]

8.2 Privilege Escalation Path
  [Narrative of how initial access was escalated to
   domain admin / root / database access.]

8.3 Objective Achievement
  [Narrative of how the engagement objectives were met —
   data accessed, systems compromised, etc.]

[Include an attack path diagram in ASCII or reference
 an attached diagram]

9. REMEDIATION DETAILS
-----------------------------------------------------

[Prioritized remediation plan with specific actions,
 responsible parties, and suggested timelines]

+----------+---------+-------------+-------+----------+
| Priority | Finding | Remediation | Owner | Timeline |
+----------+---------+-------------+-------+----------+
| P1       | [ID]    | [Action]    | [Who] | 72 hrs   |
| P1       | [ID]    | [Action]    | [Who] | 72 hrs   |
| P2       | [ID]    | [Action]    | [Who] | 30 days  |
| P3       | [ID]    | [Action]    | [Who] | 90 days  |
+----------+---------+-------------+-------+----------+

10. APPENDICES
-----------------------------------------------------

A. Scope Document (attached)
B. Rules of Engagement (attached)
C. Testing Activity Log
D. Tool Output / Raw Data (upon request)
E. Glossary of Terms

Guidance Notes:
  - The executive section should be understandable by a non-technical business leader. No jargon, no acronyms without explanation.
  - The technical section should be detailed enough that another tester could reproduce every finding.
  - Attack narratives are the most valuable part for many clients — they show real-world impact.
  - Have every report reviewed by a second tester before delivery.


Template 6: Risk Rating Worksheet

=====================================================
          RISK RATING WORKSHEET
=====================================================

Finding Title:  [_________________________________________]
Finding ID:     [_________]
Assessed By:    [_________]
Date:           [_________]

=====================================================
STEP 1: LIKELIHOOD ASSESSMENT
=====================================================

Rate each factor 0-9 (0 = lowest, 9 = highest):

THREAT AGENT FACTORS:
  Skill Level:        [___] (How technically skilled is the
                             attacker? 1=script kiddie, 9=APT)
  Motive:             [___] (How motivated? 1=low reward,
                             9=high reward)
  Opportunity:        [___] (How accessible is the target?
                             1=difficult, 9=trivial access)
  Size:               [___] (How many potential attackers?
                             1=few, 9=entire internet)

VULNERABILITY FACTORS:
  Ease of Discovery:  [___] (1=requires research, 9=automated
                             tools find it)
  Ease of Exploit:    [___] (1=complex multi-step, 9=point
                             and click)
  Awareness:          [___] (1=unknown, 9=public knowledge
                             with exploit code)
  Intrusion Detection:[___] (1=always detected, 9=never
                             detected)

LIKELIHOOD SCORE: Sum of 8 factors / 8 = [___]
  0 to <3 = LOW    3 to <6 = MEDIUM    6 to 9 = HIGH

=====================================================
STEP 2: IMPACT ASSESSMENT
=====================================================

Rate each factor 0-9:

TECHNICAL IMPACT:
  Confidentiality:    [___] (1=minimal data, 9=all data
                             disclosed)
  Integrity:          [___] (1=minor corruption, 9=complete
                             data manipulation)
  Availability:       [___] (1=brief interruption, 9=complete
                             service loss)
  Accountability:     [___] (1=fully traceable, 9=completely
                             anonymous)

BUSINESS IMPACT:
  Financial Damage:   [___] (1=insignificant, 9=bankruptcy)
  Reputation Damage:  [___] (1=minor, 9=brand destruction)
  Non-Compliance:     [___] (1=minor violation, 9=major
                             regulatory penalty)
  Privacy Violation:  [___] (1=one individual, 9=millions
                             of records)

IMPACT SCORE: Sum of 8 factors / 8 = [___]
  0 to <3 = LOW    3 to <6 = MEDIUM    6 to 9 = HIGH

=====================================================
STEP 3: OVERALL RISK RATING
=====================================================

              |  LOW Impact  |  MED Impact  | HIGH Impact
  ------------+--------------+--------------+------------
  HIGH Likeli.|    Medium    |     High     |  Critical
  MED  Likeli.|     Low      |    Medium    |    High
  LOW  Likeli.|     Note     |     Low      |   Medium

OVERALL RISK: [________________]

=====================================================
STEP 4: CVSS 3.1 CROSS-CHECK
=====================================================

Attack Vector (AV):        [ ] N  [ ] A  [ ] L  [ ] P
Attack Complexity (AC):    [ ] L  [ ] H
Privileges Required (PR):  [ ] N  [ ] L  [ ] H
User Interaction (UI):     [ ] N  [ ] R
Scope (S):                 [ ] U  [ ] C
Confidentiality (C):       [ ] N  [ ] L  [ ] H
Integrity (I):             [ ] N  [ ] L  [ ] H
Availability (A):          [ ] N  [ ] L  [ ] H

CVSS Base Score: [___]
Vector String:   CVSS:3.1/AV:_/AC:_/PR:_/UI:_/S:_/C:_/I:_/A:_

=====================================================
STEP 5: CONTEXTUAL ADJUSTMENTS
=====================================================

Does the affected system:
  [ ] Handle sensitive data (PII, PHI, PCI)?    -> Increase
  [ ] Face the internet?                         -> Increase
  [ ] Have compensating controls?                -> Decrease
  [ ] Require authentication to reach?           -> Decrease
  [ ] Have known active exploitation in wild?    -> Increase

ADJUSTED RISK RATING: [________________]

JUSTIFICATION:
[Write 2-3 sentences explaining the final rating,
 especially if adjusted from the calculated rating.]

Guidance Notes:
  - Use this worksheet for each finding to ensure consistent risk ratings across the engagement.
  - The OWASP Risk Rating Methodology (Steps 1-3) and CVSS (Step 4) may produce different results — this is normal. The contextual adjustment in Step 5 is where you apply professional judgment.
  - Document your reasoning — a rating that the client questions is only defensible if you can explain how you arrived at it.
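Steps 1 through 3 of the worksheet are mechanical once the sixteen factors are scored, so the arithmetic can be automated to keep ratings consistent across testers. A sketch following the OWASP Risk Rating Methodology (function and variable names are illustrative):

```python
def level(score):
    """Map a 0-9 average onto the OWASP likelihood/impact levels."""
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

# (likelihood level, impact level) -> overall risk, as in Step 3.
MATRIX = {
    ("HIGH", "LOW"): "Medium",
    ("HIGH", "MEDIUM"): "High",
    ("HIGH", "HIGH"): "Critical",
    ("MEDIUM", "LOW"): "Low",
    ("MEDIUM", "MEDIUM"): "Medium",
    ("MEDIUM", "HIGH"): "High",
    ("LOW", "LOW"): "Note",
    ("LOW", "MEDIUM"): "Low",
    ("LOW", "HIGH"): "Medium",
}

def owasp_risk(threat_agent, vulnerability, technical, business):
    """Each argument is a list of four factor scores (0-9):
    likelihood averages the threat-agent and vulnerability factors,
    impact averages the technical and business factors."""
    likelihood = sum(threat_agent + vulnerability) / 8
    impact = sum(technical + business) / 8
    return MATRIX[(level(likelihood), level(impact))]

# Worked example: likelihood 7.625 (HIGH), impact 6.25 (HIGH).
print(owasp_risk([6, 7, 7, 9], [7, 8, 9, 8], [7, 7, 5, 7], [6, 7, 5, 6]))
# -> Critical
```

The Step 5 contextual adjustment remains a human decision; the computed rating is the defensible starting point that the justification paragraph then adjusts from.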


Template 7: Incident Response Plan Template

=====================================================
         INCIDENT RESPONSE PLAN
=====================================================

Organization:       [Organization Name]
Document Version:   [1.0]
Last Updated:       [YYYY-MM-DD]
Plan Owner:         [Name, Title]
Next Review Date:   [YYYY-MM-DD]

=====================================================
1. PURPOSE AND SCOPE
=====================================================

This plan establishes procedures for detecting,
responding to, containing, eradicating, and recovering
from cybersecurity incidents affecting [Organization Name].

Scope: [All systems, networks, and data owned or
managed by the organization, including cloud services
and third-party integrations.]

=====================================================
2. INCIDENT CLASSIFICATION
=====================================================

Severity 1 — CRITICAL:
  - Active data breach with confirmed exfiltration
  - Ransomware affecting critical systems
  - Compromise of domain admin / root access
  - Regulatory reportable incident
  Response Time: Immediate (within 15 minutes)

Severity 2 — HIGH:
  - Confirmed unauthorized access to systems
  - Malware detected on multiple endpoints
  - Successful phishing with credential compromise
  Response Time: Within 1 hour

Severity 3 — MEDIUM:
  - Single endpoint malware infection (contained)
  - Suspicious activity under investigation
  - Vulnerability actively being exploited
  Response Time: Within 4 hours

Severity 4 — LOW:
  - Policy violation without security impact
  - Failed attack attempts
  - Vulnerability discovered (not exploited)
  Response Time: Within 24 hours
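Response-time targets are easiest to enforce when encoded in tooling rather than left in prose. A minimal sketch using the severity levels and SLAs from the table above (function names are illustrative):

```python
from datetime import datetime, timedelta

# Response-time SLAs from the classification table above.
RESPONSE_SLA = {
    1: timedelta(minutes=15),   # Critical
    2: timedelta(hours=1),      # High
    3: timedelta(hours=4),      # Medium
    4: timedelta(hours=24),     # Low
}

def response_deadline(detected_at, severity):
    """Return when the first response action is due for an incident."""
    return detected_at + RESPONSE_SLA[severity]

def sla_met(detected_at, responded_at, severity):
    """True if the first response happened within the severity's SLA."""
    return responded_at <= response_deadline(detected_at, severity)

detected = datetime(2024, 3, 1, 9, 0)
print(response_deadline(detected, 1))   # 2024-03-01 09:15:00
```

A lookup table like this can feed ticketing-system alerts, so a Severity 1 incident that sits untouched past its 15-minute window escalates automatically.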

=====================================================
3. INCIDENT RESPONSE TEAM
=====================================================

Role                | Name        | Contact
--------------------|-------------|------------------
IR Team Lead        | [________]  | [Phone / Email]
Security Analyst    | [________]  | [Phone / Email]
Network Engineer    | [________]  | [Phone / Email]
System Admin        | [________]  | [Phone / Email]
Legal Counsel       | [________]  | [Phone / Email]
Communications Lead | [________]  | [Phone / Email]
Executive Sponsor   | [________]  | [Phone / Email]
External Forensics  | [________]  | [Phone / Email]

=====================================================
4. INCIDENT RESPONSE PHASES
=====================================================

PHASE 1: PREPARATION
  [ ] IR team trained and roles assigned
  [ ] Contact lists current and tested
  [ ] Forensic tools ready (jump bag)
  [ ] Communication channels established (out-of-band)
  [ ] Playbooks written for common scenarios
  [ ] Backup integrity verified

PHASE 2: DETECTION & ANALYSIS
  [ ] Alert received from: [SIEM / EDR / User / External]
  [ ] Initial triage completed
  [ ] Incident classified (Severity 1-4)
  [ ] Scope of compromise assessed
  [ ] Evidence preservation initiated
  [ ] Timeline construction started

PHASE 3: CONTAINMENT
  Short-term containment:
    [ ] Affected systems isolated from network
    [ ] Compromised accounts disabled
    [ ] Firewall rules updated to block attacker IPs
    [ ] DNS changes to sinkhole malicious domains

  Long-term containment:
    [ ] Clean systems provisioned
    [ ] Enhanced monitoring deployed
    [ ] Segmentation increased

PHASE 4: ERADICATION
  [ ] Root cause identified
  [ ] Malware removed from all affected systems
  [ ] Vulnerabilities patched
  [ ] Compromised credentials reset
  [ ] Persistence mechanisms removed
  [ ] Systems rebuilt if necessary

PHASE 5: RECOVERY
  [ ] Systems restored from clean backups
  [ ] Systems returned to production (monitored)
  [ ] Enhanced monitoring for re-infection (30 days min)
  [ ] User access restored
  [ ] Service functionality verified

PHASE 6: LESSONS LEARNED
  [ ] Post-incident review meeting (within 2 weeks)
  [ ] Timeline and findings documented
  [ ] What worked well?
  [ ] What needs improvement?
  [ ] Action items assigned with deadlines
  [ ] IR plan updated based on lessons learned

=====================================================
5. COMMUNICATION PLAN
=====================================================

Internal Communication:
  - IR team: [Secure messaging platform]
  - Executive leadership: [Within N hours for Sev 1-2]
  - All employees: [If applicable, through official channels]

External Communication:
  - Law enforcement: [If criminal activity confirmed]
  - Regulatory bodies: [Within N hours per regulation]
    - GDPR: 72 hours
    - HIPAA: 60 days
    - PCI DSS: Immediately to acquirer
    - State breach notification: [Per state law]
  - Customers: [As required by law and policy]
  - Media: [All media inquiries routed through Communications Lead]
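The regulatory windows above are hard deadlines, so it helps to compute them the moment a breach is confirmed. A sketch covering the two fixed windows listed (GDPR's clock runs from when the organization becomes aware of the breach; state-law and PCI timings vary and are omitted here):

```python
from datetime import datetime, timedelta

# Fixed notification windows from the plan above. State breach laws and
# PCI acquirer requirements vary; treat these as illustrative defaults.
NOTIFICATION_WINDOWS = {
    "GDPR (supervisory authority)": timedelta(hours=72),
    "HIPAA (HHS / individuals)": timedelta(days=60),
}

def notification_deadlines(aware_at):
    """Regulatory deadlines, keyed by regime, from breach-awareness time."""
    return {name: aware_at + window
            for name, window in NOTIFICATION_WINDOWS.items()}

aware = datetime(2024, 3, 1, 14, 30)
for name, due in notification_deadlines(aware).items():
    print(f"{name}: due {due:%Y-%m-%d %H:%M}")
```

Printing the deadlines into the incident ticket at classification time removes any ambiguity about when the 72-hour GDPR window closes.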

=====================================================
6. DOCUMENTATION
=====================================================

For every incident, document:
  - Date/time of detection
  - Date/time of each response action
  - Personnel involved
  - Systems affected
  - Evidence collected and chain of custody
  - Actions taken
  - Decisions made and rationale
  - Communications sent
  - Lessons learned

Guidance Notes:
  - This template is for penetration testers to recommend to clients who lack an IR plan, and for understanding what incident response looks like from the defender's perspective.
  - An IR plan that has never been tested is unreliable. Recommend tabletop exercises quarterly and a full simulation annually.
  - The IR plan should be stored in a location accessible even if the network is compromised (printed copies, secure cloud storage outside the corporate network).


Template 8: Security Assessment Checklist

=====================================================
       SECURITY ASSESSMENT CHECKLIST
=====================================================

Target:             [System/Network/Application Name]
Assessor:           [Name]
Date:               [YYYY-MM-DD]

-----------------------------------------------------
PHASE 1: PRE-ENGAGEMENT
-----------------------------------------------------

[ ] Scope document signed and filed
[ ] Rules of engagement agreed and signed
[ ] Authorization letter obtained
[ ] NDA signed (if required)
[ ] Emergency contacts documented
[ ] Testing environment verified (VPN, access, credentials)
[ ] Tools updated to latest versions
[ ] Backup communication channel established
[ ] Client notified of testing start

-----------------------------------------------------
PHASE 2: RECONNAISSANCE
-----------------------------------------------------

Passive Reconnaissance:
  [ ] WHOIS and registrar information gathered
  [ ] DNS records enumerated (A, MX, NS, TXT, CNAME)
  [ ] Certificate Transparency logs checked (crt.sh)
  [ ] Google dorking performed
  [ ] Shodan/Censys searches completed
  [ ] Social media/LinkedIn reconnaissance completed
  [ ] GitHub/GitLab code repository search completed
  [ ] Historical data reviewed (Wayback Machine)

Active Reconnaissance:
  [ ] Subdomain enumeration completed
  [ ] DNS zone transfer attempted
  [ ] Web application fingerprinting completed
  [ ] Technology stack identified
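The subdomain-enumeration step above reduces to generating candidates and checking which resolve. A standard-library sketch of that logic (the domain and wordlist are placeholders; a real engagement would use a dedicated tool and a large wordlist):

```python
import socket

def candidates(domain, words):
    """Build candidate hostnames from a wordlist."""
    return [f"{word}.{domain}" for word in words]

def resolve(host, timeout=2.0):
    """Return the host's A record, or None if it does not resolve."""
    socket.setdefaulttimeout(timeout)
    try:
        return socket.gethostbyname(host)
    except OSError:
        return None

wordlist = ["www", "api", "portal", "vpn", "mail"]  # illustrative
for host in candidates("example.com", wordlist):
    ip = resolve(host)
    if ip:
        print(f"{host} -> {ip}")
```

Note that resolving candidates generates DNS queries visible to the target's resolvers, which is why this item sits under active, not passive, reconnaissance.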

-----------------------------------------------------
PHASE 3: SCANNING AND ENUMERATION
-----------------------------------------------------

Network Scanning:
  [ ] Host discovery completed
  [ ] TCP port scan completed (all 65535 or top 1000+)
  [ ] UDP port scan completed (top 100+)
  [ ] Service version detection completed
  [ ] OS fingerprinting completed
  [ ] NSE scripts run (safe, default, vuln)
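The TCP port-scan item is, at its core, a connect scan: attempt a full handshake and record which ports accept. A minimal sketch of that technique (real engagements use nmap, as the NSE item implies; host and port list are placeholders):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def is_open(host, port, timeout=1.0):
    """TCP connect scan: a completed handshake means the port is open."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def scan_ports(host, ports, workers=50):
    """Return the sorted list of open ports on host."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda p: (p, is_open(host, p)), ports)
    return sorted(port for port, port_open in results if port_open)

# Example: check a handful of common ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443, 3306, 8080]))
```

Connect scans are noisy (every probe completes a handshake the target can log), which is one reason nmap defaults to SYN scanning when it has raw-socket privileges.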

Enumeration:
  [ ] SMB enumeration (shares, users, null sessions)
  [ ] SNMP enumeration (if port 161 open)
  [ ] LDAP enumeration (if port 389/636 open)
  [ ] DNS enumeration (zone transfers, brute force)
  [ ] SMTP enumeration (VRFY, EXPN)
  [ ] Web server enumeration (directories, files)
  [ ] Database port enumeration

Vulnerability Scanning:
  [ ] Automated vulnerability scan completed
  [ ] Results reviewed and triaged
  [ ] False positives identified and documented

-----------------------------------------------------
PHASE 4: WEB APPLICATION TESTING
-----------------------------------------------------

(Complete if web applications are in scope)

Configuration:
  [ ] SSL/TLS configuration tested
  [ ] HTTP security headers checked
  [ ] Directory listing tested
  [ ] Default credentials tested
  [ ] Error handling tested (verbose errors?)
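The security-headers check above compares the server's response headers against an expected set. A small sketch of that comparison (the expected list reflects common guidance such as the OWASP Secure Headers Project and is illustrative, not exhaustive):

```python
# Commonly recommended response headers (illustrative list).
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
]

def missing_security_headers(headers):
    """Case-insensitive check for absent security headers.

    `headers` is any iterable of header names (e.g. a response-header dict).
    """
    present = {h.lower() for h in headers}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

# Example response headers (illustrative):
resp = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(missing_security_headers(resp))
```

Header names are matched case-insensitively because HTTP header names are case-insensitive on the wire; a naive exact-match check would report false positives.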

Authentication:
  [ ] Brute force protection tested
  [ ] Password policy tested
  [ ] Account lockout tested
  [ ] Session management tested
  [ ] MFA bypass tested (if MFA present)
  [ ] Password reset flow tested

Authorization:
  [ ] Horizontal privilege escalation (IDOR) tested
  [ ] Vertical privilege escalation tested
  [ ] API authorization tested
  [ ] Direct object reference tested

Injection:
  [ ] SQL injection tested (all input points)
  [ ] XSS tested (reflected, stored, DOM)
  [ ] Command injection tested
  [ ] LDAP injection tested
  [ ] Template injection (SSTI) tested
  [ ] XXE tested

Other:
  [ ] SSRF tested
  [ ] CSRF tested
  [ ] File upload tested
  [ ] Insecure deserialization tested
  [ ] Business logic flaws tested
  [ ] API-specific vulnerabilities tested

-----------------------------------------------------
PHASE 5: EXPLOITATION
-----------------------------------------------------

[ ] Exploitation attempted for confirmed vulnerabilities
[ ] Proof of concept developed for each finding
[ ] Evidence captured (screenshots, logs, request/response)
[ ] Impact of each exploitation documented
[ ] No out-of-scope systems accessed
[ ] No production data modified or exfiltrated

-----------------------------------------------------
PHASE 6: POST-EXPLOITATION (if authorized)
-----------------------------------------------------

[ ] Local privilege escalation attempted
[ ] Credential harvesting performed
[ ] Lateral movement attempted
[ ] Domain escalation attempted (if AD environment)
[ ] Data access demonstrated (without exfiltration)
[ ] Persistence mechanisms identified (not deployed
    unless authorized)
[ ] Pivoting to additional network segments attempted

-----------------------------------------------------
PHASE 7: CLEANUP AND REPORTING
-----------------------------------------------------

Cleanup:
  [ ] All test accounts removed or reported to client
  [ ] All uploaded files/shells removed
  [ ] All configuration changes reverted
  [ ] All persistence mechanisms removed
  [ ] Client notified of any lingering artifacts

Reporting:
  [ ] All findings documented with evidence
  [ ] Risk ratings assigned (using Risk Rating Worksheet)
  [ ] Remediation recommendations provided
  [ ] Executive summary written
  [ ] Technical details complete and reproducible
  [ ] Report reviewed by second tester
  [ ] Report encrypted before transmission
  [ ] Report delivered to client
  [ ] Debrief meeting scheduled

Post-Engagement:
  [ ] Testing data securely stored (encrypted)
  [ ] Data destruction scheduled per retention policy
  [ ] Lessons learned documented
  [ ] Retest scheduled (if applicable)

Guidance Notes:
  - Use this checklist to ensure you do not miss any testing phases during an engagement.
  - Not every item will apply to every engagement — skip items that are out of scope.
  - Check off items as you complete them and note any deviations or special circumstances.
  - This checklist aligns with PTES and OWASP Testing Guide methodologies.


Template 9: Bug Bounty Report Template

=====================================================
          BUG BOUNTY REPORT
=====================================================

Platform:           [HackerOne / Bugcrowd / Intigriti / Direct]
Program:            [Program Name]
Report Date:        [YYYY-MM-DD]
Reporter:           [Your username / handle]

=====================================================
REPORT DETAILS
=====================================================

Title:
  [Clear, specific title — include the vulnerability type
   and affected endpoint]
  Example: "Stored XSS in user profile bio field allows
  account takeover via session cookie theft"

Severity:
  [Critical / High / Medium / Low]
  CVSS 3.1 Score: [N.N]
  Vector: [CVSS:3.1/AV:N/AC:L/PR:L/UI:R/S:C/C:H/I:H/A:N]

Vulnerability Type:
  [CWE-NNN: Name]
  [e.g., CWE-79: Improper Neutralization of Input During
   Web Page Generation (Cross-site Scripting)]

Affected Asset:
  [Exact URL, endpoint, or component]
  [e.g., https://app.target.com/api/v2/profile]

=====================================================
DESCRIPTION
=====================================================

[2-3 paragraphs explaining:
 - What the vulnerability is
 - Why it exists (root cause)
 - What makes it exploitable]

=====================================================
STEPS TO REPRODUCE
=====================================================

Prerequisites:
  - [Account type needed, if any]
  - [Browser/tool requirements]
  - [Any setup steps]

Steps:
1. [Navigate to https://app.target.com/profile/edit]
2. [In the "Bio" field, enter the following payload:]
   ```
   [exact payload here]
   ```
3. [Click "Save Profile"]
4. [Navigate to https://app.target.com/users/[username]]
5. [Observe that the payload executes in the context of
    any user viewing the profile]

=====================================================
PROOF OF CONCEPT
=====================================================

[Include one or more of:]
- HTTP request/response (from Burp Suite)
- Screenshots showing the vulnerability
- Video demonstration (link to recording)
- Script that demonstrates the issue

HTTP Request:

POST /api/v2/profile HTTP/1.1
Host: app.target.com
Authorization: Bearer [REDACTED]
Content-Type: application/json

{"bio": "[payload]"}


HTTP Response:

HTTP/1.1 200 OK
Content-Type: application/json

{"status": "success", "bio": "[reflected payload]"}


[Screenshot: profile page showing payload execution]

=====================================================
IMPACT
=====================================================

[Explain the real-world impact — be specific]

An attacker can:
1. [Steal session cookies of any user who views the
    attacker's profile]
2. [Perform actions as the victim user, including
    changing email, password, and linked payment methods]
3. [Access sensitive account information including
    personal data and transaction history]

Affected users: [All users who view the attacker's profile]
Data at risk: [Session tokens, personal information,
payment details]

=====================================================
SUGGESTED REMEDIATION
=====================================================

1. [Sanitize user input in the bio field using a
    whitelist approach — allow only plain text]
2. [Implement output encoding (HTML entity encoding)
    when rendering user-supplied content]
3. [Add Content-Security-Policy header to prevent
    inline script execution]
4. [Set HttpOnly flag on session cookies to mitigate
    impact even if XSS exists]
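Output encoding (remediation 2) is the core fix for this class of bug. In Python, for instance, the standard library's html.escape neutralizes the characters an XSS payload depends on (render_bio is a hypothetical rendering helper):

```python
import html

def render_bio(bio):
    """HTML-entity-encode user input before embedding it in a page."""
    return f"<p class='bio'>{html.escape(bio, quote=True)}</p>"

payload = "<script>alert(document.cookie)</script>"
print(render_bio(payload))
# The payload renders as inert text rather than executing:
# <p class='bio'>&lt;script&gt;alert(document.cookie)&lt;/script&gt;</p>
```

Encoding at output time is preferred over input sanitization alone because the same stored value may be rendered in multiple contexts (HTML body, attribute, JavaScript), each requiring its own encoding.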

=====================================================
ADDITIONAL NOTES
=====================================================

- [This was tested on Chrome 120 and Firefox 121]
- [The vulnerability also exists in the "About" field
    on the company page — same root cause]
- [I did not attempt to exploit this against other users]
- [Related to previously reported issue #12345? Specify]

Guidance Notes:
  - The quality of your report directly affects triage time and bounty amount.
  - Always include exact steps to reproduce. If the triager cannot reproduce it, the report will be marked "Needs More Info" or closed.
  - Record a video demonstration for complex vulnerabilities — it significantly improves triage speed.
  - Do not include unnecessary padding or boilerplate. Be concise and precise.
  - If you find multiple instances of the same vulnerability class, report them together with all affected endpoints listed.


Template 10: Red Team Campaign Planning Worksheet

=====================================================
   RED TEAM CAMPAIGN PLANNING WORKSHEET
=====================================================

Campaign Name:      [Operation Name]
Campaign Period:    [YYYY-MM-DD to YYYY-MM-DD]
Red Team Lead:      [Name]
Blue Team Aware:    [ ] Yes (Purple Team)  [ ] No (Full Red Team)

=====================================================
1. CAMPAIGN OBJECTIVES
=====================================================

Primary Objectives:
  1. [e.g., Obtain domain administrator access from
      external, unauthenticated position]
  2. [e.g., Access the financial reporting database]
  3. [e.g., Exfiltrate simulated sensitive data without
      detection]

Secondary Objectives:
  1. [e.g., Test employee susceptibility to phishing]
  2. [e.g., Test physical security controls at HQ]
  3. [e.g., Evaluate SOC detection and response time]

Success Criteria:
  [ ] Access to [specific system/data]
  [ ] Persistence maintained for [N] days
  [ ] Objectives achieved without SOC detection
  [ ] [Other measurable criteria]

=====================================================
2. THREAT PROFILE / ADVERSARY EMULATION
=====================================================

Emulated Threat Actor:  [e.g., APT29 / FIN7 / Generic
                         external attacker]

MITRE ATT&CK Techniques Planned:

Tactic              | Technique ID | Technique Name
--------------------|-------------|---------------------------
Initial Access      | T1566.001   | [Spearphishing Attachment]
Execution           | T1059.001   | [PowerShell]
Persistence         | T1547.001   | [Registry Run Keys]
Privilege Escalation| T1068       | [Exploitation for Priv Esc]
Defense Evasion     | T1027       | [Obfuscated Files]
Credential Access   | T1003.001   | [LSASS Memory]
Discovery           | T1087       | [Account Discovery]
Lateral Movement    | T1021.002   | [SMB/Windows Admin Shares]
Collection          | T1005       | [Data from Local System]
Exfiltration        | T1048       | [Exfil Over Alt Protocol]
Impact              | [N/A]       | [Not planned]
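The planned-technique table above feeds directly into the coverage-heatmap deliverable in Section 8. A minimal sketch of the planned-versus-executed diff (technique IDs taken from the table; the executed set is illustrative and would come from operator logs):

```python
# Techniques planned for the campaign (from the table above).
planned = {
    "T1566.001", "T1059.001", "T1547.001", "T1068", "T1027",
    "T1003.001", "T1087", "T1021.002", "T1005", "T1048",
}

# Filled in during the campaign from operator logs (illustrative).
executed = {"T1566.001", "T1059.001", "T1003.001", "T1021.002"}

def coverage_report(planned, executed):
    """Split techniques into executed, skipped, and unplanned buckets."""
    return {
        "executed": sorted(planned & executed),
        "not_executed": sorted(planned - executed),
        "unplanned": sorted(executed - planned),  # deviations to explain in the report
    }

report = coverage_report(planned, executed)
print(f"{len(report['executed'])}/{len(planned)} planned techniques executed")
```

Tracking the "unplanned" bucket matters as much as coverage: any technique executed outside the plan is a deviation the white team should have approved and the report must explain.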

=====================================================
3. INFRASTRUCTURE PLANNING
=====================================================

C2 Infrastructure:
  Primary C2:       [Tool: e.g., Sliver / Mythic]
  C2 Domain:        [domain.com — categorized as ____]
  C2 IP:            [IP address]
  Backup C2:        [domain/IP]
  Protocol:         [HTTPS / DNS / other]

Phishing Infrastructure:
  Sending Domain:   [domain.com — SPF/DKIM configured]
  Sending IP:       [IP — warmed up? Y/N]
  Landing Page:     [URL]
  Tracking:         [GoPhish / custom]

Redirectors:
  [IP/domain for traffic redirection]

Staging Servers:
  [IP/domain for payload hosting]

=====================================================
4. OPERATIONAL TIMELINE
=====================================================

Week 1: Reconnaissance
  [ ] Passive OSINT on target organization
  [ ] Identify key personnel for phishing
  [ ] Map external attack surface
  [ ] Identify technologies and vulnerabilities

Week 2: Infrastructure Setup
  [ ] Procure and configure C2 domains
  [ ] Set up phishing infrastructure
  [ ] Create and test payloads
  [ ] Configure redirectors

Week 3-4: Initial Access
  [ ] Launch phishing campaign (Wave 1: [N] targets)
  [ ] Attempt external exploitation (if applicable)
  [ ] Establish initial foothold

Week 5-6: Post-Exploitation
  [ ] Internal reconnaissance
  [ ] Privilege escalation
  [ ] Lateral movement
  [ ] Credential harvesting

Week 7-8: Objective Achievement
  [ ] Access target systems/data
  [ ] Demonstrate data exfiltration path
  [ ] Document full attack chain

Week 9: Cleanup and Reporting
  [ ] Remove all implants and persistence
  [ ] Remove infrastructure
  [ ] Write report
  [ ] Prepare debrief presentation

=====================================================
5. OPSEC CONSIDERATIONS
=====================================================

  [ ] C2 traffic blends with normal traffic
  [ ] Domains are categorized and aged
  [ ] Payloads tested against client's AV/EDR
  [ ] Activities avoid honeypots/canary tokens
  [ ] Lateral movement uses legitimate admin tools
  [ ] Actions timed to normal business hours (off-hours activity stands out)
  [ ] Logs on red team infrastructure secured

Burn Criteria (when to abandon an approach):
  - [If blue team identifies C2 domain — switch to backup]
  - [If phishing reported by 3+ employees — adjust approach]
  - [If SOC escalates to incident — coordinate with white team]

=====================================================
6. DECONFLICTION / WHITE TEAM
=====================================================

White Team Contact:     [Name — Phone — Email]
Deconfliction Process:  [How to verify that blue team
                         alerts are from the red team
                         engagement, not a real attack]
Codeword:               [Secret codeword for identification]
Check-in Schedule:      [Daily / Weekly status to white team]

=====================================================
7. RISK MANAGEMENT
=====================================================

Risk                          | Mitigation
------------------------------|-----------------------------
Accidental service disruption | Test payloads in lab first;
                              | avoid DoS techniques
Real attacker concurrent      | Immediately escalate to white
with red team                 | team; preserve evidence
Phishing targets executive    | Exclude C-suite unless
leadership                    | explicitly authorized
Data exposure during testing  | Use simulated data markers;
                              | never exfiltrate real data

=====================================================
8. DELIVERABLES
=====================================================

  [ ] Full engagement report (executive + technical)
  [ ] Attack path diagram
  [ ] MITRE ATT&CK coverage heatmap (planned vs. executed)
  [ ] Detection gap analysis
  [ ] Remediation recommendations
  [ ] Debrief presentation
  [ ] Purple team follow-up plan (if applicable)

Guidance Notes:
  - Red team campaigns require significantly more planning than standard penetration tests. This worksheet ensures nothing is overlooked.
  - The white team (trusted insiders who know the campaign is happening) is essential for deconfliction — they prevent the blue team from escalating the exercise into a real incident response.
  - OPSEC is critical for realistic testing. If the red team is detected immediately, the exercise provides limited value.
  - Always have a rollback plan for every action. If something goes wrong, you need to undo it quickly.


All templates in this appendix are provided under a Creative Commons Attribution 4.0 International License. You are free to adapt them for commercial and non-commercial use with attribution.