Learning Objectives

  • Apply human intelligence (HUMINT) gathering techniques to ethical hacking engagements
  • Map organizational structures and identify key personnel for social engineering campaigns
  • Develop effective pretexts based on OSINT findings for authorized social engineering tests
  • Execute physical reconnaissance to assess facility security posture
  • Understand and recognize elicitation techniques used in social engineering
  • Assess the threat posed by deepfakes and synthetic media in social engineering attacks
  • Build comprehensive social engineering campaign plans from reconnaissance data
  • Navigate the ethical and legal boundaries of social engineering testing

Chapter 9: Social Engineering Reconnaissance

"There is no patch for human stupidity." — Kevin Mitnick

Every firewall, intrusion detection system, and encryption algorithm in the world shares one fundamental limitation: they are designed, configured, maintained, and used by humans. And humans are, reliably, the weakest link in any security chain. Social engineering reconnaissance is the systematic process of gathering intelligence about the human element of an organization — its people, processes, culture, and physical environment — to prepare for authorized social engineering assessments.

In the previous two chapters, we learned to map an organization's digital attack surface through passive and active technical reconnaissance. In this chapter, we turn our attention to the human attack surface. We will learn to profile employees, map organizational hierarchies, develop convincing pretexts, assess physical security, and understand how emerging technologies like deepfakes are transforming the social engineering landscape.

This is perhaps the most sensitive area of ethical hacking. Social engineering tests involve deceiving real people — your client's employees. The techniques described in this chapter must only be used within the explicit authorization of a signed engagement contract, with clear rules of engagement, and with the ethical awareness that you are testing human vulnerabilities, not exploiting human beings.

9.1 Human Intelligence Gathering

9.1.1 The Human Attack Surface

Every organization's security is ultimately dependent on the decisions made by its people. A perfectly configured firewall is meaningless if an employee clicks a phishing link. Multi-factor authentication fails when an employee hands their authentication token to a convincing caller. Physical access controls are defeated when an employee holds the door for someone carrying a box.

The human attack surface consists of:

  • Employees: From the CEO to the newest intern, every employee is a potential target
  • Processes: The procedures people follow (or fail to follow) for security-critical tasks
  • Culture: The organization's attitude toward security, authority, helpfulness, and trust
  • Physical environment: Office layouts, access controls, visitor procedures, and physical security measures
  • Communication patterns: How the organization communicates internally and externally, what channels are used, and what language and tone are expected

Social engineering reconnaissance maps this human attack surface, identifying:

  • Who are the most vulnerable targets?
  • What pretexts will be most convincing?
  • Which processes can be exploited?
  • Where are the physical security gaps?
  • How does the organization's culture create or prevent security vulnerabilities?

9.1.2 HUMINT Principles for Ethical Hackers

HUMINT — Human Intelligence — is a discipline borrowed from the intelligence community. While ethical hackers operate in a very different context than intelligence operatives, several HUMINT principles apply:

The Collection Cycle: Like all intelligence gathering, HUMINT follows a cycle of planning, collection, processing, analysis, and dissemination. You plan what you need to know, collect information through various means, process it into usable intelligence, analyze it for actionable insights, and report it to stakeholders.

Source Evaluation: Not all human-derived information is equally reliable. An employee's LinkedIn profile is generally accurate for their current role but may embellish their responsibilities. A disgruntled employee's Glassdoor review may be biased but could reveal genuine security issues. Always evaluate the reliability of sources and the accuracy of information.

Need to Know: Collect only the information you need for the engagement. The goal is to test the organization's security, not to build invasive dossiers on individual employees. Respect privacy boundaries.

The MICE Framework: Intelligence agencies have long understood that people provide information due to Money, Ideology, Coercion, or Ego (MICE). In social engineering, we see similar motivations:

  • Money: Employees may click on "bonus notification" phishing emails
  • Ideology: Whistleblowers or disgruntled employees may share information willingly
  • Coercion: Pressure, urgency, and authority can compel compliance
  • Ego: Flattery and appeals to expertise can lead people to share more than they should

⚖️ Legal Note: Social engineering testing requires explicit, written authorization that specifically covers social engineering activities. This authorization should define: (1) which types of social engineering are permitted (phishing, vishing, physical), (2) which employees or departments can be targeted, (3) what information can be collected, (4) how collected information will be stored and destroyed, (5) escalation procedures if an employee becomes distressed, and (6) whether the engagement includes physical access testing. Many organizations exclude certain individuals (C-suite, legal team, HR) or scenarios (anything involving threats or intimidation) from social engineering scope.

9.1.3 Cialdini's Principles of Influence

Dr. Robert Cialdini's research on persuasion identifies six (later seven) principles of influence that social engineers consistently exploit. Understanding these principles is essential for both attacking and defending:

  1. Reciprocity: People feel obligated to return favors. "I helped you with that report last week — can you let me into the server room? I left my badge at my desk."

  2. Commitment and Consistency: Once people commit to something, they tend to follow through. "You said you'd be happy to help with the audit. Can you just send me the network diagram?"

  3. Social Proof: People follow the behavior of others. "Everyone in your department has already completed the security verification. You are the last one."

  4. Authority: People obey authority figures. "This is James from IT Security. The CISO has asked me to verify your system access immediately."

  5. Liking: People are more easily influenced by people they like. Building rapport and finding common ground makes targets more compliant.

  6. Scarcity: People value things that are scarce or available for a limited time. "This security update must be installed before 5 PM today or your account will be locked."

  7. Unity (added later): People are influenced by shared identity and group membership. "As fellow members of the engineering team..."

💡 Intuition: These principles work because they are shortcuts the human brain uses to make decisions efficiently. In everyday life, these shortcuts serve us well — we should generally respect authority, reciprocate favors, and follow social norms. Social engineers exploit these shortcuts by creating situations where following the shortcut leads to a security breach.

9.2 Organizational Mapping and Employee Profiling

9.2.1 Building the Organizational Map

Before you can design an effective social engineering campaign, you need to understand the organization's structure. Organizational mapping involves identifying:

Hierarchy and Reporting Lines

  • Who are the executives and department heads?
  • How many layers of management exist?
  • Which departments interact frequently?
  • Who has authority to make decisions?

Key Departments for Social Engineering Targeting

  • IT/Help Desk: Often the first target for pretexting calls. Help desk staff are trained to be helpful, which can be exploited.
  • Human Resources: Has access to employee data and is accustomed to receiving resumes and documents (potential phishing vectors).
  • Finance/Accounting: Has authority to process payments and wire transfers (Business Email Compromise targets).
  • Reception/Front Desk: Controls physical access and can be targeted for tailgating and impersonation.
  • Executive Assistants: Have access to executive communications and calendars.
  • New Employees: Less familiar with security procedures and organizational norms.

Sources for Organizational Mapping:

  • LinkedIn: Search for "[Company Name]" and filter by current employees. LinkedIn reveals titles, departments, reporting relationships (based on title hierarchy), and tenure.
  • Company website: "About Us," "Our Team," and "Leadership" pages often list executives and key personnel.
  • SEC filings: Public companies must disclose executive officers and board members.
  • Press releases: Announce new hires, promotions, and organizational changes.
  • Conference presentations: Speakers are identified by name, title, and organization.
  • Patent filings: Name inventors and their organizations.
  • Academic publications: Name researchers and their affiliations.

9.2.2 Employee Profiling

Once you have mapped the organization, you select specific employees for deeper profiling. The goal is to understand each target well enough to craft a convincing pretext.

Profile Components:

  • Professional: Name, title, department, responsibilities, tenure (sources: LinkedIn, company website)
  • Technical: Skills, certifications, technologies used (sources: LinkedIn, GitHub, conference talks)
  • Social: Social media presence, interests, hobbies (sources: Facebook, Twitter, Instagram)
  • Personal: Education, hometown, family (use ethically; sources: social media, public records)
  • Behavioral: Communication style, posting frequency, engagement (source: social media analysis)
  • Network: Professional connections, group memberships (sources: LinkedIn connections, org chart)

For MedSecure Health Systems, we might profile:

Target 1: Help Desk Analyst

  • Name: Alex Chen
  • Role: IT Help Desk Analyst (Level 1)
  • Tenure: 8 months (relatively new)
  • LinkedIn: Lists certifications in CompTIA A+ and Network+
  • Social media: Active on Twitter, posts about gaming and tech
  • SE Angle: New employee, eager to help, may not be fully familiar with all procedures

Target 2: Finance Manager

  • Name: Sarah Williams
  • Role: Finance Manager, Accounts Payable
  • Tenure: 6 years
  • LinkedIn: MBA from State University, CPA
  • Social media: Active on LinkedIn, posts about leadership and finance
  • SE Angle: Authority compliance — email appearing to come from the CFO requesting an urgent wire transfer

Target 3: Receptionist

  • Name: Maria Garcia
  • Role: Front Desk Receptionist
  • Tenure: 3 years
  • Social media: Instagram with frequent posts, including office photos
  • SE Angle: Physical access — arrive with a delivery, ask to be let into a restricted area

⚠️ Common Pitfall: It is easy to become overly invested in individual profiling and cross ethical boundaries. Remember: you are collecting only what is necessary for the authorized engagement. You do not need to know an employee's medical history, relationship status, or financial situation unless your client has specifically authorized a deep-dive assessment targeting these individuals. Even then, collect the minimum necessary.

9.2.3 Email Address Generation and Verification

A critical output of organizational mapping is a list of valid email addresses. In Chapter 7, we discussed passive email discovery. Here we combine that with organizational mapping to generate a comprehensive email list:

Step 1: Determine the email format

Common formats:

  • first.last@medsecure.com (john.smith@medsecure.com)
  • firstlast@medsecure.com (johnsmith@medsecure.com)
  • first_last@medsecure.com (john_smith@medsecure.com)
  • flast@medsecure.com (jsmith@medsecure.com)
  • firstl@medsecure.com (johns@medsecure.com)
  • first@medsecure.com (john@medsecure.com)

Step 2: Generate addresses for identified employees

Using the email format and employee names gathered from LinkedIn and company websites, generate a list of probable email addresses.
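Step 2 is easy to mechanize. A minimal Python sketch that emits candidates for the common corporate formats listed under Step 1 (the names and domain are from the running MedSecure example, and the function name is illustrative):

```python
def candidate_emails(first, last, domain):
    """Generate addresses for the common corporate email formats."""
    f, l = first.lower(), last.lower()
    patterns = [
        f"{f}.{l}",    # first.last
        f"{f}{l}",     # firstlast
        f"{f}_{l}",    # first_last
        f"{f[0]}{l}",  # flast
        f"{f}{l[0]}",  # firstl
        f,             # first
    ]
    return [f"{p}@{domain}" for p in patterns]

# candidate_emails("John", "Smith", "medsecure.com")
# -> ["john.smith@medsecure.com", "johnsmith@medsecure.com", ...]
```

Run the employee list from LinkedIn through this once the format is confirmed, then verify the results in Step 3.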

Step 3: Verify addresses

Several techniques can verify email addresses without sending actual emails:

  • SMTP verification: Connect to the mail server and use the VRFY or RCPT TO commands
  • Hunter.io: API-based email verification
  • Email validation services: Services that check MX records and mailbox existence

# SMTP verification (active — interacts with target mail server)
# Note: Many mail servers no longer support VRFY
telnet mail.medsecure.com 25
HELO test.com
MAIL FROM:<test@test.com>
RCPT TO:<john.smith@medsecure.com>
# 250 response = valid address
# 550 response = invalid address
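The same probe can be scripted rather than typed into telnet. A minimal sketch using Python's standard smtplib (the host and address are the hypothetical MedSecure examples); note that catch-all servers answer 250 for every recipient, so treat results as indicative rather than definitive:

```python
import smtplib

def interpret_rcpt_code(code):
    """Map an SMTP RCPT TO reply code to a rough validity verdict."""
    if code in (250, 251):
        return "likely valid"
    if code in (550, 551, 553):
        return "likely invalid"
    return "inconclusive"  # 4xx greylisting, catch-all quirks, etc.

def probe_address(mail_host, address, helo="test.com", sender="test@test.com"):
    """Issue MAIL FROM / RCPT TO without ever sending a message (active probe)."""
    with smtplib.SMTP(mail_host, 25, timeout=10) as server:
        server.helo(helo)
        server.mail(sender)
        code, _ = server.rcpt(address)
        return interpret_rcpt_code(code)

# Usage (active; run only against in-scope infrastructure):
#   probe_address("mail.medsecure.com", "john.smith@medsecure.com")
```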

🔵 Blue Team Perspective: Organizations should configure mail servers to reject VRFY commands and return generic responses to RCPT TO commands for both valid and invalid addresses. This prevents email enumeration. Additionally, implementing a catch-all mailbox or using uniform bounce messages makes it impossible for attackers to distinguish valid from invalid addresses through SMTP probing.
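On Postfix, for example, the VRFY half of that hardening is a one-line setting (other MTAs expose equivalents; uniform RCPT responses typically require additional configuration such as a catch-all mailbox):

```
# /etc/postfix/main.cf
disable_vrfy_command = yes
```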

9.3 Pretexting and Elicitation

9.3.1 The Art of Pretexting

A pretext is a fabricated scenario designed to engage a target and manipulate them into revealing information or performing an action. Effective pretexts are:

  • Believable: They match the target's expectations and experiences
  • Relevant: They relate to the target's role, concerns, or interests
  • Urgent but not alarming: They create a sense of urgency without triggering suspicion
  • Difficult to verify: The target cannot easily check the pretext's legitimacy in the moment
  • Aligned with organizational norms: They fit the organization's culture and communication patterns

Common Pretext Categories:

IT Support/Help Desk

  • "I'm from the IT department. We've detected unusual activity on your account and need to verify your identity."
  • "We're rolling out a mandatory security update. I need your current password to ensure the migration doesn't lock you out."
  • "Your email account has been flagged for a security review. Can you click this link to verify your identity?"

Authority/Executive

  • An email appearing to be from the CEO: "I need you to process an urgent wire transfer. I'll explain in detail later — this is time-sensitive."
  • "This is the CISO. We have a security incident and I need you to provide me with access to [system] immediately."

Vendor/Third-Party

  • "I'm calling from [known vendor]. We need to update your account information for the new billing cycle."
  • "This is the building management company. We need access to the server room for a scheduled HVAC inspection."

New Employee/Contractor

  • "Hi, I'm the new contractor starting in the IT department. I don't have my badge yet — can you let me in?"
  • "I'm starting next week and HR told me to come in early to set up my workstation. Can you point me to the IT department?"

Survey/Research

  • "We're conducting a security awareness survey on behalf of your company. Can you answer a few questions about your daily workflows?"

9.3.2 Building Pretexts from OSINT

The most effective pretexts are built from OSINT findings. Here is how to transform reconnaissance data into social engineering pretexts:

OSINT Finding: MedSecure uses Salesforce (discovered from DNS TXT records and job postings).

Pretext: "Hi, this is the Salesforce support team. We noticed your organization's instance is scheduled for a critical security patch tonight. To ensure your data isn't affected, we need your admin to verify the current configuration. Can you connect me with your Salesforce administrator?"

OSINT Finding: MedSecure recently posted job listings for a "Security Operations Center Analyst" (discovered from LinkedIn/Indeed).

Pretext: Email to HR: "Thank you for posting the SOC Analyst position. I'm very interested and have attached my resume. Could you also tell me more about the security tools and SIEM platform your team uses? I want to make sure my experience is aligned." (The attachment contains a tracking pixel or benign payload; the response reveals security tool information.)

OSINT Finding: MedSecure's CFO is speaking at a healthcare finance conference next week (discovered from conference website).

Pretext: Email to the CFO's executive assistant, appearing to come from the conference organizer: "We need to update the presentation materials for [CFO name]'s session. Could you send us the latest version of the slides?" (Establishes communication channel for follow-up social engineering.)

📊 Real-World Application: In a penetration test of a financial services firm, we discovered through LinkedIn that the company had recently hired a new IT director. Using this information, we crafted a phishing email that appeared to come from the new IT director, introducing themselves and asking employees to update their credentials on a "new employee portal." The success rate was 34% — significantly higher than a generic phishing test — because the pretext was grounded in real organizational intelligence.

9.3.3 Elicitation Techniques

Elicitation is the art of extracting information from people through casual conversation, without them realizing they are being interrogated. Unlike direct questioning, elicitation feels like natural dialogue.

Key Elicitation Techniques:

Assumed Knowledge: Pretend you already know the answer and are confirming. "So you're still running the Palo Alto firewalls, right?" Even if they correct you ("No, we switched to Fortinet last year"), you have gained intelligence.

Deliberate False Statement: Make a deliberately incorrect statement that the target will feel compelled to correct. "I heard MedSecure's network is completely on-premise, no cloud at all." "Actually, we moved most of our infrastructure to AWS two years ago."

Flattery and Appeal to Expertise: "You clearly know a lot about network security. I'm curious — what does your security stack look like? I'm always trying to learn from organizations that really have it figured out."

Quid Pro Quo: Share (innocuous) information first to create reciprocity. "At my last job, we had terrible luck with our MDM solution — Jamf kept crashing. What do you all use for device management?" "Oh, we use Intune and it works great."

Bracketing: Provide a high and low estimate to get the target to reveal the actual number. "MedSecure must have, what, 10,000 employees? Or maybe closer to 500?" "We're about 2,500 actually."

The Naivete Play: Play ignorant to encourage the target to explain in detail. "I don't really understand how VPNs work. How does your team connect remotely?" The target may explain their VPN solution, authentication method, and remote access architecture.

Common Ground: Establish shared experiences or interests to build rapport and lower defenses. If you discover through LinkedIn that a target attended the same university, referencing that shared experience creates an instant bond.

⚖️ Legal Note: During authorized social engineering assessments, elicitation is a legitimate technique. However, you must stay within the boundaries defined in your scope of work. Never use elicitation to extract information about individuals' personal lives, financial situations, health conditions, or other sensitive personal matters. The goal is to test organizational security, not to invade personal privacy.

9.3.4 Telephone-Based Elicitation (Vishing Reconnaissance)

While elicitation can occur in any conversation, the telephone is the primary vehicle for remote elicitation in penetration testing. Phone-based elicitation — often called vishing reconnaissance — has distinct advantages and challenges:

Advantages of telephone elicitation:

  • Real-time interaction: Unlike email, you can adjust your approach based on the target's responses, tone of voice, and level of engagement. If one line of questioning meets resistance, you can smoothly pivot to another topic.
  • Urgency conveyance: The human voice conveys urgency and emotion far more effectively than text. A caller who sounds stressed, confused, or authoritative triggers faster, less-considered responses.
  • Minimal digital forensics: Phone conversations leave far less forensic evidence than emails. There is no phishing link to analyze, no sender header to inspect, and no attachment to sandbox.
  • Caller ID spoofing: Tools like SpoofCard or VoIP services allow you to display any caller ID number, making it appear that you are calling from an internal extension, the corporate headquarters, or a trusted vendor.

Telephone elicitation framework:

  1. Opening: Establish your pretext within the first 15 seconds. State who you are (your cover identity), why you are calling, and create a reason for the target to stay on the line. "Hi, this is Michael from IT support. We're seeing some unusual network activity from your workstation and I just need to verify a few things to make sure your account hasn't been compromised."

  2. Rapport building: Spend 30-60 seconds building rapport before asking for anything sensitive. Acknowledge their time, show empathy, and use their name. People are more helpful to people they feel a connection with.

  3. Elicitation phase: Use the techniques described above — assumed knowledge, deliberate false statements, bracketing — embedded naturally in conversation. Each piece of information should feel like a natural part of the dialogue, not an interrogation question.

  4. Graceful exit: End the call in a way that does not arouse suspicion and leaves the door open for future contact. "Thanks so much for your help, I've confirmed everything looks good on your end. If you notice anything unusual, just call the help desk and reference ticket number HD-47293."

📊 Real-World Application: Professional social engineers often conduct vishing reconnaissance calls weeks before the main campaign. These calls are not designed to extract credentials or gain access — they are purely intelligence-gathering operations. A 5-minute call to the help desk asking about password reset procedures reveals whether the organization requires identity verification, what questions they ask, and what information would be needed to impersonate an employee in a future call. This reconnaissance call directly informs the design of the actual vishing attack.

9.4 Physical Reconnaissance

9.4.1 Assessing the Physical Attack Surface

Physical security is often the forgotten dimension of cybersecurity. Many organizations invest heavily in technical controls while leaving physical access relatively open. Physical reconnaissance assesses:

  • Facility locations: Main offices, data centers, remote offices, warehouses
  • Access controls: Badge readers, security guards, mantraps, turnstiles, biometrics
  • Surveillance: CCTV cameras, coverage gaps, monitoring practices
  • Perimeter security: Fences, gates, lighting, landscaping
  • Entry points: Main entrance, service entrances, parking garage, loading dock
  • Visitor procedures: Sign-in requirements, escort policies, badge issuance
  • Employee behaviors: Tailgating prevalence, badge visibility, door-holding habits
  • Waste disposal: Dumpster locations, shredding practices, document handling

9.4.2 External Physical Reconnaissance

External physical recon can be conducted from public areas without entering the target's premises:

Google Maps/Street View: Examine the building exterior, entry points, parking areas, and surrounding environment. Historical Street View images may show construction, renovations, or security changes over time.

Satellite Imagery: Google Earth and similar services provide overhead views revealing:

  • Building layout and multiple structures
  • Loading docks and service entrances
  • Dumpster locations
  • Parking lot configuration (vehicle counts can hint at headcount and shift patterns)
  • HVAC systems on rooftops (can indicate server room locations)

On-Site Observation (from public areas):

  • Observe employee entry and exit patterns
  • Note shift change times
  • Watch for tailgating behavior at entrances
  • Identify security guard schedules and patrol routes
  • Note delivery schedules and vendor access procedures
  • Observe whether employees wear badges visibly
  • Identify smoking areas (where employees congregate and may be talkative)
  • Check for wifi network names (SSIDs) that reveal information about internal networks

# Wardriving for wifi reconnaissance (from public areas)
# Put the interface into monitor mode first, then capture with airodump-ng
sudo airmon-ng start wlan0              # creates the wlan0mon monitor interface
sudo airodump-ng --write medsecure_wifi wlan0mon

# Network names like "MedSecure-Internal", "MedSecure-Guest",
# "EPIC-EHR-Wireless" reveal internal network segments and applications
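The --write prefix produces CSV files (e.g., medsecure_wifi-01.csv) that can be filtered for network names of interest. A minimal Python sketch, assuming the standard airodump-ng CSV layout in which the access-point section begins with a header row containing an ESSID column and ends at the first blank line:

```python
import csv
import io

def extract_essids(csv_text, keyword=None):
    """Pull ESSIDs from an airodump-ng CSV dump, optionally filtered by keyword."""
    reader = csv.reader(io.StringIO(csv_text), skipinitialspace=True)
    essids = []
    essid_idx = None
    for row in reader:
        if not any(cell.strip() for cell in row):
            if essid_idx is not None:
                break  # blank line ends the access-point section
            continue
        cells = [c.strip() for c in row]
        if essid_idx is None:
            if "ESSID" in cells:
                essid_idx = cells.index("ESSID")  # header row found
            continue
        if len(cells) > essid_idx:
            name = cells[essid_idx]
            if name and (keyword is None or keyword.lower() in name.lower()):
                essids.append(name)
    return essids
```

Calling extract_essids() on the contents of medsecure_wifi-01.csv with keyword="medsecure" would list only networks whose names mention the target.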

9.4.3 Physical Social Engineering Techniques

Authorized physical social engineering tests may include:

Tailgating/Piggybacking: Following an authorized person through a secure door. This exploits the social norm of holding doors for others. "Thanks! These boxes are heavy — I appreciate the help."

Impersonation: Posing as someone with a legitimate reason to be on-site:

  • Delivery person (carry a box with a shipping label)
  • IT contractor ("I'm here for the network maintenance scheduled for today")
  • Building inspector ("Fire marshal's office — I need to inspect the server room's fire suppression system")
  • Cleaning crew (often has after-hours access)
  • Job applicant ("I have an interview with HR — can you point me in the right direction?")

Dumpster Diving: Examining discarded materials in dumpsters or recycling bins. Organizations that do not shred sensitive documents may discard:

  • Internal memos and reports
  • Network diagrams and configuration printouts
  • Employee directories
  • Customer data
  • Discarded hard drives or storage media
  • Post-it notes with passwords

Shoulder Surfing: Observing employees entering passwords, viewing sensitive information on screens, or reading confidential documents in public spaces (cafeterias, lobbies, coffee shops near the office).

USB Drop Attacks: Leaving USB drives in parking lots, lobbies, or common areas. When employees plug in the found drive (curiosity is powerful), the USB may:

  • Execute malicious payloads (Rubber Ducky, Bash Bunny)
  • Contain files with tracking beacons
  • Present a convincing "employee directory" or "salary information" file that captures credentials

🧪 Try It in Your Lab: Practice physical reconnaissance against your own home or a friend's business (with permission). Walk the perimeter, identify entry points, note security cameras and access controls, and assess lighting. Write up a physical security assessment. This exercise builds the observation skills needed for professional physical penetration testing.

9.4.4 Badge Cloning and Access Card Analysis

During physical reconnaissance, you may observe the types of access cards or badges used:

  • Proximity cards (125kHz): HID ProxCard and similar low-frequency cards are easily clonable. A Proxmark3 device can read these cards at close range, and purpose-built long-range readers can capture card data from a foot or more away during a casual walk-by.
  • Smart cards (13.56MHz): MIFARE, DESFire, and similar high-frequency cards offer better security but may still be vulnerable depending on the implementation.
  • Magstripe cards: Easily read and cloned with inexpensive readers.
  • Mobile credentials: Phone-based access (Bluetooth/NFC) is increasingly common and generally more secure.

⚠️ Common Pitfall: Badge cloning during physical penetration testing requires extremely precise scope authorization. You must have explicit written permission to clone employee access cards, and you should understand the legal implications in your jurisdiction. Some jurisdictions treat access card cloning similarly to key duplication of restricted key systems.

9.5 Building Social Engineering Pretexts from OSINT

9.5.1 The OSINT-to-Pretext Pipeline

The most effective social engineering campaigns transform OSINT findings into highly targeted pretexts. Here is a systematic process:

Step 1: Identify High-Value OSINT Findings

From Chapters 7 and 8, select findings that reveal:

  • Technology platforms and vendors used by the target
  • Recent organizational changes (mergers, new hires, office moves)
  • Business relationships and vendor partnerships
  • Employee interests, concerns, and communication patterns
  • Physical locations and operational procedures

Step 2: Map Findings to Influence Principles

For each finding, consider which of Cialdini's principles it enables:

  • New CISO hired last month (Authority): email from the "new CISO" requiring a security action
  • Company uses Okta for SSO (Urgency + Authority): "Okta security alert — verify your account"
  • Annual charity drive in October (Social Proof + Liking): phishing email about "charity drive registration"
  • CFO speaking at conference (Authority + Scarcity): "urgent request from the CFO" while they are traveling
  • Recent acquisition (Unity + Authority): communication about "integration procedures"

Step 3: Design the Attack Chain

A social engineering campaign is rarely a single action. Design a multi-step attack chain:

  1. Initial contact: Low-commitment interaction to establish legitimacy
  2. Rapport building: Develop trust and credibility
  3. Escalation: Make the actual request (click a link, provide credentials, grant access)
  4. Exploitation: Use the obtained access or information
  5. Exfiltration: Demonstrate the impact of successful social engineering

Example: MedSecure Phishing Campaign

Based on OSINT findings:

  • MedSecure uses Microsoft 365 (from DNS records)
  • Annual HIPAA compliance training occurs in Q1 (from job posting mentions)
  • The training department uses KnowBe4 for security awareness (from LinkedIn)

Campaign design:

  1. Register a domain similar to MedSecure's training portal (e.g., medsecure-training.com)
  2. Create a convincing Microsoft 365 login page
  3. Send a phishing email: "HIPAA Compliance Training — Annual Requirement"
  4. The email references KnowBe4 and mimics the organization's training communication style
  5. Timing: send in Q1, when employees expect annual training
  6. The landing page captures credentials and redirects to the real training portal
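The first campaign step, choosing a lookalike domain, is usually backed by mechanical candidate generation in the spirit of tools like dnstwist. A minimal Python sketch; the keyword suffixes and the single typo rule are illustrative assumptions, and defenders can run the same generation to decide which domains to register or monitor pre-emptively:

```python
def lookalike_candidates(brand, tlds=("com", "net", "org"),
                         suffixes=("training", "portal", "login", "hr")):
    """Generate lookalike domain candidates for an authorized campaign."""
    candidates = set()
    for tld in tlds:
        candidates.add(f"{brand}.{tld}")               # plain TLD swap
        for suffix in suffixes:
            candidates.add(f"{brand}-{suffix}.{tld}")  # keyword suffix
        for i in range(len(brand)):                    # single-character omission typos
            candidates.add(f"{brand[:i]}{brand[i + 1:]}.{tld}")
    return sorted(candidates)

# lookalike_candidates("medsecure") includes "medsecure-training.com"
```

Check WHOIS and certificate transparency logs for each candidate before registering: an already-registered lookalike may indicate that a real attacker got there first.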

9.5.2 Spear Phishing Campaign Design

Spear phishing — targeted phishing attacks against specific individuals — is the most common social engineering technique in penetration testing. Effective spear phishing requires:

Email Infrastructure:

  • Register a convincing sender domain (lookalike domain or compromised email)
  • Configure SPF, DKIM, and DMARC to improve deliverability
  • Set up email tracking to measure open rates and click rates
  • Create a believable landing page

Email Content:

  • Use the target's name and role
  • Reference real projects, vendors, or events
  • Include appropriate logos, signatures, and formatting
  • Create urgency without desperation
  • Include a clear call to action (click a link, open an attachment, reply with information)

Landing Pages:

  • Mirror the legitimate service's login page
  • Use HTTPS (free certificates from Let's Encrypt)
  • Capture credentials and log access
  • Redirect to the legitimate service after credential capture (reducing suspicion)

Tools for Phishing Campaigns:

  • GoPhish: Open-source phishing framework with campaign management, email templates, and tracking
  • King Phisher: Full-featured phishing campaign toolkit
  • Evilginx2: Advanced phishing framework that can capture session tokens (bypassing 2FA)
  • SET (Social Engineering Toolkit): Metasploit-integrated social engineering framework

# GoPhish setup
# Download and start GoPhish
./gophish

# Access the admin interface
# https://localhost:3333
# Configure: SMTP settings, email templates, landing pages, target groups
# Launch and monitor campaigns
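Campaigns can also be driven through GoPhish's REST API rather than the web UI. The sketch below builds a campaign payload in the shape the API expects; the template, landing page, sending profile, and group names are assumptions — they must already exist in your GoPhish instance:

```python
import json

# Sketch: assembling a GoPhish campaign via its REST API instead of the
# web UI. The names below ("HIPAA Reminder", etc.) are illustrative and
# must match objects already created in GoPhish.
API_URL = "https://localhost:3333/api/campaigns/"

def build_campaign(name: str, template: str, page: str,
                   smtp: str, url: str, group: str) -> dict:
    return {
        "name": name,
        "template": {"name": template},   # email template defined in GoPhish
        "page": {"name": page},           # landing page defined in GoPhish
        "smtp": {"name": smtp},           # sending profile
        "url": url,                       # base URL embedded in phishing links
        "groups": [{"name": group}],      # target group
    }

payload = build_campaign("HIPAA Training Q1", "HIPAA Reminder",
                         "M365 Login Clone", "Campaign SMTP",
                         "https://medsecure-training.com", "All Staff")
print(json.dumps(payload))
# To launch against a running instance (requires the admin API key):
# requests.post(API_URL, json=payload,
#               headers={"Authorization": API_KEY}, verify=False)
```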

🔴 Red Team Perspective: The most successful spear phishing campaigns we have conducted used a combination of: (1) a lookalike domain registered weeks in advance to age it, (2) email content based on specific OSINT about the target's current projects and communications, (3) a pretext aligned with a real organizational event (quarterly review, compliance deadline, software migration), and (4) targeting of 3-5 carefully selected employees rather than mass phishing. Quality dramatically outperforms quantity in spear phishing.

9.5.3 Vishing (Voice Phishing) Campaign Design

Voice phishing — social engineering over the phone — is often more effective than email phishing because it:

  • Creates immediate pressure (the target cannot delay responding)
  • Allows real-time adaptation based on the target's responses
  • Conveys authority and urgency through tone of voice
  • Makes verification more difficult (phone numbers can be spoofed)

Vishing Call Structure:

  1. Introduction: Establish who you are (the pretext identity)
  2. Verification: Ask the target to verify themselves (creates a false sense that this is a legitimate security process)
  3. Problem statement: Explain the issue that requires their action
  4. Urgency: Create time pressure ("this needs to be resolved before end of business")
  5. Request: Ask for the specific action or information
  6. Follow-up: Provide a callback number or email for "documentation"

Example Vishing Script (MedSecure Help Desk Pretext):

"Hi, this is [name] from the IT security team. I'm calling because we've detected what appears to be unauthorized access to your email account from an IP address in [country]. Before I can take any action on the account, I need to verify your identity. Can you confirm your employee ID and the email address on your account?"

[After verification] "Thank you. To secure your account, I need to reset your password. I'm going to generate a temporary password and email it to your personal email address. What personal email should I send it to?"

[The goal is to obtain the employee's personal email — establishing a secondary communication channel outside the organization's security controls.]

Best Practice: Always record vishing calls (with appropriate legal authorization) during penetration tests. These recordings serve as evidence in your report and can be used in the client's security awareness training. Ensure that your engagement contract and local laws permit call recording before proceeding.

9.6 Deepfakes and Synthetic Media in Reconnaissance

9.6.1 The Emerging Threat Landscape

Deepfakes and synthetic media represent a rapidly evolving threat to social engineering defenses. Advances in artificial intelligence have made it possible to:

  • Clone voices: Generate convincing audio that sounds like a specific person using only a few minutes of sample audio
  • Create video deepfakes: Produce realistic video of someone saying things they never said
  • Generate synthetic photographs: Create entirely fictitious but realistic-looking people
  • Produce synthetic text: Generate emails and messages in a specific person's writing style

These technologies dramatically amplify the effectiveness of social engineering by making pretexts more convincing and harder to detect.

9.6.2 Voice Cloning in Social Engineering

Voice cloning technology has reached a level of sophistication where a few seconds of audio — from a YouTube conference talk, a podcast interview, or a voicemail greeting — can be used to generate convincing synthetic speech.

The Attack Scenario:

  1. Attacker identifies the CEO's voice from a conference presentation on YouTube
  2. Attacker uses voice cloning software to create a model of the CEO's voice
  3. Attacker calls the CFO using the cloned voice, with a spoofed caller ID showing the CEO's phone number
  4. The "CEO" urgently requests a wire transfer to a new vendor
  5. The CFO, recognizing the CEO's voice and seeing the correct caller ID, processes the transfer

This is not theoretical. In 2019, criminals used AI-generated voice technology to impersonate a CEO and trick a UK energy company's executive into transferring $243,000 to a fraudulent account. The executive believed he was speaking with his boss because the voice, accent, and speech patterns matched perfectly.

Voice Cloning Tools (for authorized testing only):

  • Real-Time Voice Cloning: Open-source tool requiring minimal training data
  • Resemble.ai: Commercial voice cloning with API access
  • ElevenLabs: Advanced voice synthesis platform
  • Bark: Open-source text-to-audio model

⚖️ Legal Note: The use of deepfake technology in penetration testing and social engineering assessments requires extremely careful legal consideration. Creating synthetic audio or video of real people may violate privacy laws, impersonation statutes, or fraud regulations, even in an authorized testing context. Always obtain explicit legal approval, beyond the standard penetration testing authorization, before using deepfake technology in any engagement.

9.6.3 Synthetic Identity in OSINT

Synthetic media is not only a tool for attacking — it is also changing how reconnaissance is conducted:

Fake LinkedIn Profiles: Adversaries create convincing fake LinkedIn profiles using AI-generated photos (from services like thispersondoesnotexist.com) to:

  • Connect with employees and gather organizational intelligence
  • Establish credibility for social engineering pretexts
  • Conduct elicitation through LinkedIn messages

Detecting Synthetic Photos:

  • Look for asymmetries in earrings, glasses, and hair
  • Check for blurred or inconsistent backgrounds
  • Examine eyes for reflection consistency
  • Use reverse image search (genuine photos appear elsewhere; synthetic photos do not)
  • Use AI detection tools (which are in an arms race with generation tools)

Synthetic Email and Messaging: Large language models can generate phishing emails that are grammatically perfect, contextually appropriate, and tailored to the target's communication style. This eliminates the traditional "poor grammar" indicator that helped people identify phishing.

9.6.4 Defending Against Deepfake Social Engineering

As ethical hackers, we need to both understand deepfake attacks and help organizations defend against them:

Organizational Controls:

  • Implement multi-channel verification for sensitive actions (voice request + email confirmation + in-person verification)
  • Establish code words or phrases for verifying identity in sensitive communications
  • Create policies requiring multiple approvals for financial transactions above a threshold
  • Train employees on deepfake awareness

Technical Controls:

  • Deploy deepfake detection tools for voice and video communications
  • Implement email authentication (SPF, DKIM, DMARC) to prevent domain spoofing
  • Use digital signatures for sensitive communications
  • Monitor for impersonation attempts

📊 Real-World Application: In 2024, an employee at a multinational firm was tricked into paying out $25 million after attending a video call with what appeared to be the company's CFO and other colleagues — all of whom turned out to be deepfake recreations. This case demonstrated that deepfakes had advanced beyond voice-only attacks to full video conferences. Organizations must adapt their verification procedures to account for this capability.

9.7 Open-Source Intelligence for Credential Discovery

9.7.1 Breach Data and Password Intelligence

One of the most immediately actionable outputs of social engineering reconnaissance is credential intelligence. When combined with organizational mapping, breach data creates a powerful attack capability:

Have I Been Pwned Domain Search: HIBP's domain search feature reveals which employee email addresses have appeared in known data breaches:

# Check a single address against known breaches (requires an HIBP API key);
# domain-wide search is available through the HIBP domain search dashboard
curl -s -H "hibp-api-key: YOUR_KEY" \
  "https://haveibeenpwned.com/api/v3/breachedaccount/john.smith@medsecure.com"

For MedSecure, this might reveal that 47 employee accounts appeared in the LinkedIn breach (2012), 23 in the Dropbox breach (2016), and 8 in a healthcare-specific breach (2023). This intelligence informs:

  • Password spraying: Employees who reuse passwords from breached services may use the same password at work
  • Social engineering pretexts: "We've detected that your account credentials may have been exposed in a recent data breach" is a highly effective phishing pretext because it is often actually true
  • Security posture assessment: The number of employee accounts in breaches indicates the organization's overall credential exposure risk
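Per-account HIBP responses can be rolled up into this kind of domain-level tally. A minimal sketch — the accounts and breach names below are illustrative:

```python
from collections import Counter

# Sketch: summarize per-account HIBP results into a domain-level breach
# tally. Accounts and breach names are illustrative assumptions.
def breach_exposure(results: dict[str, list[str]]) -> Counter:
    """Map each breach name to the number of employee accounts it contains."""
    tally = Counter()
    for account, breaches in results.items():
        tally.update(breaches)
    return tally

hibp_results = {
    "john.smith@medsecure.com": ["LinkedIn", "Dropbox"],
    "a.jones@medsecure.com": ["LinkedIn"],
    "p.wong@medsecure.com": ["Dropbox", "HealthBreach2023"],
}
print(breach_exposure(hibp_results))
# → Counter({'LinkedIn': 2, 'Dropbox': 2, 'HealthBreach2023': 1})
```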

Password Pattern Analysis: While ethical hackers should never use actual breached passwords, understanding common password patterns for an organization helps design password auditing approaches. Public research on breach data reveals that employees often use:

  • Company name variations: MedSecure2024!, medsecure123
  • Location-based: SanFrancisco1!, California2024
  • Industry terms: Healthcare2024!, HIPAA2024
  • Seasonal patterns: Summer2024!, Winter2023
  • Sports teams: Local team names with year and special character
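For an authorized password audit, these patterns can be expanded into a candidate wordlist. A minimal sketch — the seed words and suffix rules are illustrative assumptions, and any output must only ever be used against systems you are contracted to test:

```python
from datetime import date

# Sketch: build a candidate list for an *authorized* password audit from
# common organizational patterns. Seed words are illustrative assumptions.
def pattern_wordlist(seeds: list[str], years: list[int]) -> list[str]:
    candidates = []
    for seed in seeds:
        for year in years:
            for word in (seed.capitalize(), seed.lower()):
                candidates.append(f"{word}{year}")      # e.g. Medsecure2024
                candidates.append(f"{word}{year}!")     # e.g. Medsecure2024!
    return candidates

seeds = ["medsecure", "healthcare", "summer"]
years = [date.today().year - 1, date.today().year]
words = pattern_wordlist(seeds, years)
print(len(words), words[:2])
```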

⚖️ Legal Note: Accessing or using actual breached credential databases (beyond HIBP-style existence checks) raises serious legal and ethical concerns. In most jurisdictions, possessing or using stolen credentials — even for authorized penetration testing — is legally questionable. Your engagement contract should explicitly address whether breach data analysis is in scope, and your legal counsel should review the applicable laws in your jurisdiction.

9.7.2 Social Media Intelligence for Credential Hints

Social media profiles inadvertently provide credential intelligence:

Personal Information for Password Guessing: People commonly base passwords on personal information discoverable through social media:

  • Pet names (Instagram pet photos with tagged names)
  • Children's names and birthdates (Facebook family posts)
  • Favorite sports teams (Twitter follows, profile bios)
  • Anniversary dates and birthdays (Facebook events)
  • Alma mater (LinkedIn education section)
  • Hometown (Facebook "About" section)

Security Question Answers: "What is your mother's maiden name?" "What is the name of your first pet?" "What city were you born in?" These common security question answers are routinely discoverable through social media and public records.

Multi-Factor Authentication Intelligence: Social media and public information can reveal MFA weaknesses:

  • An employee complaining about "annoying text message codes" on Twitter reveals SMS-based MFA (vulnerable to SIM swapping)
  • A LinkedIn post about "loving my new YubiKey" reveals hardware token usage (much harder to bypass)
  • Job postings requiring "experience with Duo Security" reveal the MFA platform in use

9.7.3 Operational Security Failures

Employees regularly commit operational security (OpSec) failures on social media that provide valuable reconnaissance data:

Office Photos: Employees posting photos from their workplace inadvertently reveal:

  • Badge designs and access card types (proximity, smart card)
  • Visitor badge procedures
  • Screen contents visible in the background
  • Network equipment and server room layouts
  • Physical security measures (cameras, badge readers)
  • Desk layouts and clean desk policy compliance

Conference and Event Posts: Employees at conferences may:

  • Show their badges (revealing full name, company, and title)
  • Share slides from internal presentations
  • Post photos of demonstration environments
  • Discuss internal projects and challenges in Q&A sessions

Work-From-Home Photos: Remote workers sharing home office setups may reveal:

  • VPN client names visible on screens
  • Internal application interfaces
  • Work email visible in screenshots
  • Browser bookmarks showing internal URLs

🔴 Red Team Perspective: During a red team engagement, we found a target employee's Instagram story showing their home office setup with a monitor clearly displaying the company's internal ticketing system. The URL bar was visible enough to read the hostname of the internal application, and the ticket content revealed the naming convention for internal projects. This single Instagram story provided more operational intelligence than hours of technical reconnaissance.

9.8 Planning the Social Engineering Campaign

9.8.1 Campaign Framework

A professional social engineering assessment follows a structured framework:

Phase 1: Reconnaissance and Intelligence (Weeks 1-2)

  • Organizational mapping and employee profiling (this chapter)
  • OSINT collection and analysis (Chapter 7)
  • Active reconnaissance of communication channels (Chapter 8)
  • Physical reconnaissance of facilities

Phase 2: Campaign Design (Week 3)

  • Select target employees based on profiles
  • Design pretexts based on OSINT findings
  • Build email templates, landing pages, and phone scripts
  • Set up campaign infrastructure (phishing domains, email servers, call equipment)
  • Conduct peer review of campaign materials
  • Obtain final client approval for campaign design

Phase 3: Execution (Weeks 4-5)

  • Launch phishing emails in waves (not all at once)
  • Conduct vishing calls at appropriate times
  • Execute physical social engineering (if in scope)
  • Monitor and record all results
  • Be prepared to stop if the client requests it

Phase 4: Analysis and Reporting (Week 6)

  • Compile success/failure rates for each vector
  • Analyze which pretexts were most effective and why
  • Identify organizational patterns (departments, roles, tenure) that correlate with vulnerability
  • Develop recommendations for security awareness training
  • Prepare sanitized case studies for the client's training program
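The four phases can also be captured as a simple plan structure for tracking deliverables during the engagement. A minimal sketch — the week ranges mirror the framework; the deliverable names are illustrative shorthand:

```python
from dataclasses import dataclass, field

# Sketch: the four-phase framework as a trackable plan structure.
# Deliverable names are illustrative shorthand for the items in the text.
@dataclass
class Phase:
    name: str
    weeks: tuple  # (start_week, end_week)
    deliverables: list = field(default_factory=list)

plan = [
    Phase("Reconnaissance and Intelligence", (1, 2),
          ["org map", "employee profiles", "physical recon notes"]),
    Phase("Campaign Design", (3, 3),
          ["pretexts", "templates", "infrastructure", "client approval"]),
    Phase("Execution", (4, 5),
          ["phishing waves", "vishing calls", "result logs"]),
    Phase("Analysis and Reporting", (6, 6),
          ["metrics", "recommendations", "training material"]),
]
total_weeks = max(p.weeks[1] for p in plan)
print(f"{len(plan)} phases over {total_weeks} weeks")
```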

9.8.2 Rules of Engagement for Social Engineering

Social engineering testing requires particularly clear rules of engagement:

  1. No threats or intimidation: Never use pretexts involving threats to safety, job security, or personal wellbeing.
  2. No exploitation of personal information: Do not use discovered personal information (health issues, family problems, financial difficulties) as leverage.
  3. Escalation procedures: Define what happens when an employee becomes upset, suspicious, or distressed.
  4. Stop conditions: Identify conditions under which testing must immediately cease.
  5. Credential handling: Define how captured credentials are stored, used, and destroyed.
  6. Physical safety: Physical social engineering must never endanger anyone's safety.
  7. Data protection: Employee data collected during the assessment must be handled according to privacy regulations.
  8. De-anonymization: Define whether results will identify specific employees by name in the report.
  9. Training integration: Plan how results will be used to improve security, not punish employees.

Best Practice: The most effective social engineering assessments end with education, not punishment. When an employee falls for a social engineering test, they should receive immediate, constructive feedback explaining what happened and how to recognize similar attacks in the future. Organizations that punish employees for failing social engineering tests create a culture of fear that actually decreases security (employees become afraid to report real incidents).

9.8.3 Applying Social Engineering Recon to Our Running Examples

MedSecure Health Systems:

Social engineering reconnaissance reveals:

  • 340 employees on LinkedIn, with 28 IT staff
  • Help desk operates 7 AM to 7 PM, outsourced after hours
  • Email format: first.last@medsecure.com
  • Organization recently migrated to Microsoft 365 (from LinkedIn posts)
  • Annual HIPAA compliance training creates a natural phishing pretext
  • Physical security uses HID proximity cards (observed from office photos)
  • The loading dock has no badge reader (observed from Google Maps)
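The discovered first.last@medsecure.com format lets us turn a scraped employee list into candidate target addresses. A minimal sketch — the names are illustrative:

```python
# Sketch: derive candidate addresses from a discovered email format.
# The employee names are illustrative assumptions from LinkedIn scraping.
def candidate_emails(names: list[str], domain: str,
                     fmt: str = "{first}.{last}") -> list[str]:
    out = []
    for full in names:
        parts = full.lower().split()
        first, last = parts[0], parts[-1]   # ignore middle names
        out.append(fmt.format(first=first, last=last) + "@" + domain)
    return out

print(candidate_emails(["Jane Doe", "Raj Patel"], "medsecure.com"))
# → ['jane.doe@medsecure.com', 'raj.patel@medsecure.com']
```

The `fmt` parameter makes the same helper reusable for other discovered conventions (e.g., "{first[0]}{last}" for jdoe-style addresses, with a small tweak to slice the first name).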

Campaign Recommendation:

  1. Phishing: Microsoft 365 credential harvest using HIPAA training pretext (target all 340 employees)
  2. Spear phishing: Vendor impersonation targeting the finance department (target 5 employees)
  3. Vishing: IT support pretext targeting help desk for credential reset (target 3 analysts)
  4. Physical: Tailgating via loading dock to plant a network implant (target facility)

ShopStack E-commerce:

Social engineering reconnaissance reveals:

  • 150 employees, primarily remote workforce
  • Extensive use of Slack (mentioned in job postings and employee social media)
  • Engineering team active on GitHub and tech conferences
  • CEO has a strong social media presence with predictable communication style
  • No physical offices to target (fully remote company)

Campaign Recommendation:

  1. Phishing: GitHub notification impersonation targeting engineering team (target 40 developers)
  2. Spear phishing: Slack notification phishing with workspace migration pretext (target all employees)
  3. Vishing: CEO impersonation targeting finance for wire transfer (target 2 employees)

9.9 Ethical Considerations in Social Engineering Testing

9.9.1 The Human Cost

Social engineering testing is fundamentally different from technical testing because it directly involves deceiving real people. This creates ethical obligations that do not exist when scanning a web application:

Psychological Impact: Employees who fall for social engineering tests may feel embarrassed, anxious, or angry. These are real emotional responses that must be considered and managed.

Professional Consequences: If social engineering results are not handled sensitively, employees may face negative professional consequences. This is counterproductive — punishing people for being human makes the organization less secure, not more.

Trust Erosion: Frequent or poorly managed social engineering tests can erode trust within an organization. Employees may become suspicious of all communications, including legitimate ones.

Privacy Concerns: Social engineering reconnaissance involves collecting personal information about real people. This data must be handled with the same care as any sensitive personal data.

9.9.2 Ethical Guidelines

  1. Minimum necessary collection: Collect only the personal information needed for the engagement
  2. Secure storage: Encrypt and access-control all collected personal data
  3. Time-limited retention: Destroy personal data after the engagement concludes
  4. No public disclosure: Never share individual employee results publicly
  5. Constructive use: Use results to improve training and processes, not to punish
  6. Informed consent (organizational): The organization's leadership has consented on behalf of employees, but be mindful of this power dynamic
  7. Proportional testing: The intrusiveness of social engineering tests should be proportional to the risk being assessed

🔗 Connection: The ethical considerations discussed here connect to the broader themes of authorization and legality that we have emphasized throughout this book. Social engineering testing is where the "ethical" in "ethical hacking" is most directly tested. The techniques we learn are powerful tools for improving security — but they are also powerful tools for manipulation. The difference lies entirely in authorization, intent, and ethical execution.

9.9.3 When to Stop

There are situations where social engineering testing should be immediately suspended:

  • An employee becomes visibly distressed or upset
  • An employee threatens self-harm or expresses severe anxiety
  • You discover evidence of genuine criminal activity (report to client immediately)
  • The pretext inadvertently touches on a real ongoing situation (e.g., a phishing test about layoffs during actual layoffs)
  • The client requests testing to stop for any reason
  • You discover you are targeting an individual who should be excluded from testing (as defined in the scope)

9.10 Measuring Social Engineering Effectiveness

9.10.1 Campaign Metrics and Analysis

Professional social engineering assessments require rigorous measurement to provide actionable intelligence to the client. Key metrics include:

Email Phishing Metrics:

  • Open rate: Percentage of recipients who opened the email (tracked via embedded pixel). Industry average: 60-80% for targeted phishing simulations.
  • Click rate: Percentage who clicked on the phishing link. Industry average: 15-25% for moderate-difficulty pretexts.
  • Credential submission rate: Percentage who entered credentials on the phishing page. Industry average: 5-15% depending on pretext sophistication.
  • Report rate: Percentage who reported the phishing email to the security team. This is often the most important metric — a high report rate indicates strong security culture. Industry average: 10-30%.
  • Time to first click: How quickly the first employee clicked. Short times indicate impulsive responses.
  • Time to first report: How quickly the first employee reported. Short times indicate effective awareness training.

Vishing Metrics:

  • Call completion rate: How many targets answered and completed the call without hanging up
  • Information disclosure rate: How many provided the requested information
  • Escalation rate: How many transferred the call to a supervisor or security team
  • Average call duration: Longer calls generally indicate more successful rapport building

Physical SE Metrics:

  • Entry success rate: Percentage of attempts that achieved physical access
  • Time to entry: How long from initial approach to gaining access
  • Challenge rate: How many employees challenged or questioned the tester
  • Reporting rate: How many employees reported the suspicious activity
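The email-phishing metrics reduce to simple rate calculations over raw campaign event counts. A minimal sketch with illustrative numbers:

```python
# Sketch: compute email-phishing campaign rates from raw event counts.
# The counts are illustrative.
def phishing_metrics(sent: int, opened: int, clicked: int,
                     submitted: int, reported: int) -> dict[str, float]:
    pct = lambda n: round(100 * n / sent, 1)  # percentage of emails sent
    return {
        "open_rate": pct(opened),
        "click_rate": pct(clicked),
        "submission_rate": pct(submitted),
        "report_rate": pct(reported),
    }

print(phishing_metrics(sent=340, opened=238, clicked=61,
                       submitted=29, reported=44))
# → {'open_rate': 70.0, 'click_rate': 17.9,
#    'submission_rate': 8.5, 'report_rate': 12.9}
```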

9.10.2 Analyzing Results by Demographics

Breaking down results by employee demographics reveals organizational patterns:

By Department: Which departments are most vulnerable? Finance teams may click on invoice-themed phishing at higher rates. IT teams, counterintuitively, may be more susceptible to technology-themed pretexts because the content matches their everyday workflow and earns automatic trust.

By Tenure: New employees (less than 6 months) consistently fail phishing tests at higher rates than experienced employees, validating the need for onboarding security training.

By Seniority: Executives often have higher click rates than expected — their busy schedules lead to fast, uncritical email processing.

By Previous Training: Employees who completed security awareness training within the past 3 months show significantly lower susceptibility rates, demonstrating the importance of regular training cadence.

By Location/Division: Remote workers may have different susceptibility patterns than office workers. International offices may face language-specific phishing risks.
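Demographic breakdowns like these are straightforward to compute from per-recipient campaign results. A minimal sketch grouping click rates by department — the result records are illustrative:

```python
from collections import defaultdict

# Sketch: break phishing click results down by a demographic attribute.
# The per-recipient records below are illustrative.
def click_rate_by(results: list[dict], key: str) -> dict[str, float]:
    totals, clicks = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r[key]] += 1
        clicks[r[key]] += r["clicked"]   # 1 if the recipient clicked, else 0
    return {k: round(100 * clicks[k] / totals[k], 1) for k in totals}

results = [
    {"dept": "Finance", "clicked": 1}, {"dept": "Finance", "clicked": 1},
    {"dept": "Finance", "clicked": 0}, {"dept": "IT", "clicked": 0},
    {"dept": "IT", "clicked": 1}, {"dept": "Nursing", "clicked": 0},
]
print(click_rate_by(results, "dept"))
# → {'Finance': 66.7, 'IT': 50.0, 'Nursing': 0.0}
```

The same helper works for any attribute in the records (tenure band, seniority, location) by changing `key`.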

9.10.3 Presenting Results to Stakeholders

Social engineering assessment reports should:

  1. Lead with aggregate data: Present organization-wide metrics before individual results
  2. Benchmark against industry: Compare results to industry averages for context
  3. Highlight positive findings: Celebrate high report rates and employees who challenged testers
  4. Recommend specific improvements: Training programs, technical controls, and process changes
  5. Protect individual privacy: De-identify individual results unless the engagement contract specifically requires named reporting
  6. Include sanitized examples: Show the phishing emails and pretexts used, so the organization can use them in training
  7. Track trends: If this is a repeat engagement, show improvement (or regression) over time

Best Practice: The most valuable social engineering reports do not just tell the client "23% of your employees clicked the phishing link." They explain why those employees clicked, what organizational factors contributed to the vulnerability, and what specific, actionable steps will reduce that rate. The report should be a roadmap for security culture improvement, not a blame document.

9.11 Social Engineering in the Age of AI

9.11.1 AI-Powered Social Engineering

Artificial intelligence is transforming social engineering in several ways:

Automated Profiling: AI tools can rapidly process thousands of social media profiles, identifying patterns, interests, and potential vulnerabilities at a scale that would be impossible manually.

Personalized Phishing at Scale: Large language models can generate unique, personalized phishing emails for thousands of targets simultaneously. Each email can reference the specific target's role, interests, and recent activities.

Real-Time Adaptation: AI-powered chatbots can conduct sophisticated social engineering conversations, adapting in real time to the target's responses.

Predictive Targeting: Machine learning models can analyze organizational data to predict which employees are most likely to fall for specific types of social engineering attacks.

9.11.2 AI-Assisted Defense

The same AI technologies that power attacks can also power defenses:

Behavioral Analytics: AI systems can identify unusual patterns in employee behavior (unexpected login times, unusual file access) that may indicate a successful social engineering attack.

Phishing Detection: Machine learning models can analyze incoming emails for social engineering indicators that rule-based systems miss.

Real-Time Coaching: AI-powered systems can provide real-time warnings to employees when they are about to take potentially risky actions.

Deepfake Detection: AI models trained to detect synthetic media can be integrated into communication platforms to flag potential deepfakes.

9.11.3 The Arms Race

Social engineering is now an AI-versus-AI arms race. Attackers use AI to create more convincing pretexts, more realistic deepfakes, and more targeted campaigns. Defenders use AI to detect these attacks, analyze behavior patterns, and provide real-time protection. As ethical hackers, we need to understand both sides of this arms race to provide effective testing and recommendations.

9.12 Applying Social Engineering Recon to the Running Examples

9.12.1 MedSecure Detailed Campaign Plan

Integrating all reconnaissance findings for MedSecure, our social engineering assessment plan crystallizes:

Wave 1 — Broad Phishing (All 340 employees): We send a phishing email impersonating the Microsoft 365 administration portal. The email announces "mandatory security updates to your Microsoft 365 account" — a pretext grounded in the OSINT finding that MedSecure uses Microsoft 365. The landing page clones the Microsoft 365 login page. We time this for Tuesday at 10:00 AM, when email engagement is highest. Expected click rate: 18-25%.

Wave 2 — Targeted IT Department (28 IT staff): Two weeks after Wave 1, we send a spear phishing email impersonating Palo Alto Networks (discovered as their firewall vendor through job postings). The email references a "critical GlobalProtect VPN update" with a link to download the "updated client." Since we know they use GlobalProtect VPN from OSINT, this pretext is highly specific. Expected click rate: 30-40%.

Wave 3 — Executive Spear Phishing (5 executives): We craft individual emails for each executive, personalized based on their LinkedIn activity and role. The CFO receives an email about a "board financial review document." The CISO receives a fake alert from CrowdStrike about a "critical threat detection." Each email is unique. Expected click rate: 20-30%.

Wave 4 — Vishing (3 help desk analysts): We call the help desk impersonating a senior physician who is "locked out of the EHR system" and needs an immediate password reset. Healthcare help desks are particularly susceptible because clinician access to patient data is considered urgent for patient safety. We expect at least 1 of 3 analysts to comply.

Physical Assessment: If authorized, we attempt to enter through the loading dock (identified via Google Maps as lacking badge readers) carrying a box labeled with a medical supply company name. We aim to reach the server room or plant a network monitoring device.

9.12.2 ShopStack Campaign Considerations

ShopStack presents different challenges as a fully remote company:

  • No physical assessment possible: 100% remote workforce eliminates physical SE vectors
  • Slack-centric communication: The primary pretext should involve Slack workspace management or migration
  • Developer-heavy workforce: Technical pretexts involving GitHub, npm packages, or CI/CD pipeline notifications will be most effective
  • Strong remote work culture: Employees are accustomed to digital-only communication, making email and messaging pretexts more natural

The campaign would focus on: GitHub notification phishing targeting developers, Slack workspace phishing targeting all employees, and CEO impersonation via email targeting the finance team.

9.12.3 Student Lab Exercises

For practicing social engineering reconnaissance in your lab:

  1. Profile yourself: Conduct a complete social engineering profile of your own online presence. What could an attacker learn about you? What pretexts could they craft? This exercise builds empathy for targets and awareness of your own exposure.

  2. Mock organizational mapping: Choose a publicly traded company and build an organizational map using only LinkedIn and the company website. Identify the top 10 social engineering targets and explain your ranking.

  3. Pretext design workshop: Given a set of OSINT findings (provided by your instructor or from the MedSecure example), design three phishing pretexts. Have classmates evaluate which pretext they would most likely fall for and why.

  4. GoPhish lab: Set up GoPhish in your lab and send phishing emails to your own email account. Practice creating templates, building landing pages, and analyzing campaign results.

Summary

Social engineering reconnaissance is the bridge between technical intelligence gathering and human-focused attack execution. By systematically mapping organizations, profiling employees, developing pretexts from OSINT findings, and assessing physical security, we build the foundation for effective social engineering assessments.

The key principles to remember:

  1. People are the attack surface: Technical controls are only as strong as the humans who manage and use them
  2. OSINT feeds pretexting: The most effective pretexts are built from real intelligence about the target
  3. Influence is science: Cialdini's principles provide a systematic framework for understanding and exploiting human decision-making
  4. Physical security matters: Do not neglect the physical dimension of security assessment
  5. Deepfakes change the game: Synthetic media is making social engineering attacks dramatically more convincing
  6. Ethics are non-negotiable: Social engineering tests must be conducted with respect for the people being tested
  7. Education, not punishment: The goal is to improve security culture, not to embarrass individuals

With reconnaissance complete — passive, active, and social — we are ready to move into the next phase of the penetration testing lifecycle. In Part 3, we will take the intelligence gathered during reconnaissance and use it to systematically scan for and identify vulnerabilities in the target's systems.