Learning Objectives
- Define ethical hacking and distinguish it from malicious hacking
- Trace the evolution of hacking culture from phone phreaks to modern bug bounties
- Classify hackers using the white/black/gray hat taxonomy and understand the nuances between categories
- Describe the five phases of the penetration testing lifecycle
- Articulate the business case for ethical hacking to both technical and non-technical stakeholders
- Identify the legal and ethical boundaries that govern authorized security testing
- Analyze the MedSecure Health Systems environment and identify initial areas of security concern
In This Chapter
- Chapter Overview
- 1.1 What Is Ethical Hacking?
- 1.2 The Evolution of Hacking Culture
- 1.3 The White, Black, and Gray Hat Taxonomy
- 1.4 The Penetration Testing Lifecycle
- 1.5 The Business Case for Ethical Hacking
- 1.6 Ethical Hacking Career Paths
- 1.7 Introducing MedSecure Health Systems
- 1.8 Types of Penetration Tests
- 1.9 The Human Factor in Security
- 1.10 The Mindset of an Ethical Hacker
- 1.11 The Attacker vs. Defender Asymmetry
- 1.12 Ethical Frameworks for Security Professionals
- 1.13 A Day in the Life of a Penetration Tester
- Chapter Summary
- What's Next
Chapter 1: Introduction to Ethical Hacking
"To catch a thief, you must think like a thief." — Attributed to Eugène François Vidocq, the criminal-turned-detective who founded the French Sûreté in 1811
Chapter Overview
On a quiet Monday morning in March 2023, a security researcher sat in a hospital cafeteria in Phoenix, Arizona, sipping terrible coffee and scrolling through network traffic on a laptop. To a casual observer, he looked like any other IT worker. But this researcher was doing something extraordinary: he was systematically breaking into the hospital's computer systems — its patient records database, its connected medical devices, its billing infrastructure — and he was doing it with the full written authorization of the hospital's board of directors. By Thursday, he had found seventeen critical vulnerabilities, including one that would have allowed an attacker to alter medication dosage data on networked infusion pumps. That single finding may have saved lives.
This is ethical hacking. It is the practice of using the same tools, techniques, and mindset as malicious attackers, but channeling those skills toward defense. It is one of the most important, most misunderstood, and most rapidly growing disciplines in modern technology. And it is what this book is about.
In this opening chapter, we will build the conceptual foundation you need for everything that follows. We will define ethical hacking precisely, trace its remarkable history from the phone phreaks of the 1960s to today's billion-dollar bug bounty ecosystem, and establish the taxonomy that distinguishes different types of hackers. We will walk through the penetration testing lifecycle that structures professional engagements, make the business case for why organizations pay people to attack them, and introduce you to MedSecure Health Systems — the fictional healthcare organization whose network we will systematically learn to test throughout this book.
Whether you are a computer science student curious about security, an IT professional looking to pivot into penetration testing, or a manager trying to understand what your security team actually does, this chapter will give you the language, the framework, and the motivation to go deeper.
Let us begin.
1.1 What Is Ethical Hacking?
At its core, ethical hacking is the authorized practice of identifying vulnerabilities in computer systems, networks, and applications before malicious actors can exploit them. The word "authorized" is not a decoration — it is the single most important word in that definition. Authorization is the bright line that separates a security professional from a criminal.
1.1.1 Defining Our Terms
Let us be precise about several terms that are often used interchangeably but carry distinct meanings:
Ethical hacking is the broadest term. It encompasses any activity where a person uses offensive security techniques — the same techniques a malicious hacker would use — with explicit permission from the system owner, for the purpose of improving security. This includes penetration testing, vulnerability assessments, red team exercises, bug bounty hunting, and security research.
Penetration testing (often shortened to "pentesting") is a specific, structured engagement in which a tester or team of testers attempts to exploit vulnerabilities in a defined scope of systems within a defined time window. A pentest has a formal beginning and end, a written scope document, and a deliverable report. Think of it as a focused, time-boxed ethical hacking engagement.
Vulnerability assessment is a systematic process of identifying and cataloging vulnerabilities, typically using automated scanning tools, without necessarily attempting to exploit them. Where a pentest asks "can I actually break in?", a vulnerability assessment asks "what weaknesses exist?" The two are complementary, not interchangeable.
Red teaming is a broader, more adversarial form of testing that simulates a real-world attack over an extended period. Unlike a pentest, which usually targets specific systems, a red team exercise may target the entire organization — including its people (through social engineering), its physical facilities, and its digital infrastructure. Red team engagements often have fewer constraints and longer timeframes.
Bug bounty hunting is the practice of finding and reporting vulnerabilities in systems that have public bug bounty programs. Companies like Google, Microsoft, and thousands of others invite researchers to test their systems and pay rewards ("bounties") for valid findings. This is ethical hacking at scale, crowdsourced across the global security community.
💡 Intuition: Think of the difference between these terms like medical practices. Ethical hacking is "medicine." Penetration testing is "surgery" — focused, scoped, with a specific procedure. Vulnerability assessment is "diagnostic imaging" — finding problems without cutting. Red teaming is a "full health evaluation" that tests everything. Bug bounty hunting is like "crowd-sourced diagnostics" where thousands of doctors look at your case.
1.1.2 The Authorization Imperative
We cannot overstate this: without explicit, written authorization, hacking is a crime. In the United States, the Computer Fraud and Abuse Act (CFAA) of 1986 makes it a federal offense to intentionally access a computer without authorization or to exceed authorized access. Similar laws exist in virtually every jurisdiction worldwide — the UK's Computer Misuse Act 1990, the EU's Directive on Attacks Against Information Systems, Australia's Criminal Code Act 1995, and many others.
Authorization typically comes in the form of several documents:
- Statement of Work (SOW): Defines the business relationship, payment terms, and general scope of the engagement.
- Rules of Engagement (ROE): Specifies exactly what the tester is and is not allowed to do. Can they use social engineering? Can they test during business hours? Are denial-of-service tests permitted? What systems are explicitly off-limits?
- Scope Document: Lists the specific IP addresses, domains, applications, and/or physical locations that are in scope for testing.
- Get-Out-of-Jail Letter: A signed letter from an authorized executive confirming that the tester has permission to conduct the engagement. Pentesters often carry this letter during physical penetration tests in case they are detained by security or law enforcement.
⚖️ Legal Note: Authorization must come from someone with the actual authority to grant it. A mid-level IT manager may not have the authority to authorize a pentest of systems they don't own. If you test a cloud-hosted application, you may need authorization from both the application owner and the cloud provider. Amazon Web Services, Microsoft Azure, and Google Cloud all have specific policies and notification requirements for penetration testing on their platforms. Always verify the chain of authorization.
1.1.3 The Ethical Dimension
Authorization is necessary but not sufficient for truly ethical hacking. Ethics goes beyond legality. An ethical hacker must also:
- Minimize harm: Even with authorization, you should avoid causing unnecessary damage. If you can prove a vulnerability exists without crashing a production server, do so. If you discover patient data during a healthcare pentest, handle it with the same care a medical professional would.
- Protect confidentiality: You will see things during a pentest that could be devastating if leaked — unpatched systems, weak passwords, sensitive data. This information must be protected with the same rigor you would expect of your own secrets.
- Report honestly: If you find a critical vulnerability, you report it — even if the client would prefer not to hear it. If you fail to find vulnerabilities, you say so honestly rather than inflating your findings to justify your fee.
- Stay within scope: If you discover a path that leads outside your authorized scope, you stop and report the potential vulnerability without exploiting it. Curiosity is not a legal defense.
- Consider downstream impact: Your testing may affect systems and people beyond your direct target. A denial-of-service test on a hospital's network could affect patient care. An aggressive scan of a shared hosting environment could impact other tenants.
⚠️ Common Pitfall: New pentesters sometimes discover a vulnerability that leads to systems outside their scope and feel compelled to "just take a quick look." This is a career-ending mistake. If your scope says you can test the web application at app.example.com and you discover a path to the internal database server at 10.0.1.50, you document the finding and stop. You do not access that database server, no matter how tempting it is. Scope creep is the number one way ethical hackers accidentally cross the line into unauthorized access.
1.2 The Evolution of Hacking Culture
To understand where ethical hacking is today, we need to understand where it came from. The history of hacking is a fascinating story of curiosity, rebellion, crime, and — eventually — professionalization.
1.2.1 The Phone Phreaks (1960s–1970s)
The story begins not with computers but with telephones. In the 1960s, a loose community of technology enthusiasts discovered that the analog telephone network could be manipulated using specific audio tones. The most famous of these tones was 2600 Hz, which could be used to seize control of long-distance trunk lines and make free calls.
John Draper, known as "Captain Crunch," discovered that a toy whistle included in boxes of Cap'n Crunch cereal produced a perfect 2600 Hz tone. Using this whistle and later a homemade electronic "blue box," Draper and others could place free calls anywhere in the world. A young Steve Wozniak and Steve Jobs famously built and sold blue boxes before founding Apple Computer.
Phone phreaking was driven by curiosity and a desire to understand complex systems, not primarily by criminal intent (though free long-distance calls were certainly a perk). This spirit — the desire to understand how systems work by exploring their boundaries — would become the philosophical foundation of hacker culture.
1.2.2 The Early Computer Hackers (1970s–1980s)
As computers became more accessible through university mainframes and early personal computers, the phreaking community evolved into the hacking community. The term "hacker" originally had no negative connotation — at MIT's Tech Model Railroad Club and later its AI Lab, a "hack" was an elegant or clever technical achievement.
The 1980s saw the emergence of hacking groups like the Legion of Doom and the Masters of Deception, the publication of Phrack magazine (first issue: 1985), and the founding of 2600: The Hacker Quarterly (1984). The 1983 film WarGames, in which a teenage hacker nearly starts World War III by accessing a military computer, brought hacking into mainstream consciousness and prompted the first serious legislative response.
The CFAA was passed in 1986, originally targeting unauthorized access to government and financial computers. It has been amended multiple times since and remains the primary U.S. federal anti-hacking statute — and one of the most controversial, as its broad language has been used to prosecute activities many security researchers consider legitimate.
1.2.3 The Dark Years (1990s–Early 2000s)
The 1990s and early 2000s were a turbulent period. High-profile hackers like Kevin Mitnick (whom we will study in depth in this chapter's first case study), Kevin Poulsen, and Adrian Lamo made headlines for increasingly sophisticated intrusions. The Internet's explosive growth created vast new attack surfaces, and a wave of worms and viruses — Code Red, Nimda, SQL Slammer, Blaster — caused billions of dollars in damage.
During this period, the security community was deeply divided on the ethics of vulnerability research. If you discovered a vulnerability, what should you do? The "full disclosure" movement argued that publishing vulnerabilities publicly forced vendors to fix them quickly. Vendors, predictably, argued that public disclosure was irresponsible and endangered users. This tension would eventually lead to the "responsible disclosure" and "coordinated disclosure" models we use today.
1.2.4 The Professionalization Era (2000s–2010s)
Several developments transformed hacking from a subcultural activity into a legitimate profession:
Certifications: The Certified Ethical Hacker (CEH) certification, launched by EC-Council in 2003, gave ethical hacking formal recognition. The OSCP (Offensive Security Certified Professional), launched in 2006, became the gold standard for hands-on penetration testing skills. GIAC certifications from SANS Institute provided additional professional credentials.
Security conferences: DEF CON (founded 1993) and Black Hat (founded 1997) grew from small gatherings into massive industry events. These conferences created spaces where hackers, corporate security teams, law enforcement, and academics could interact and share knowledge.
Regulatory drivers: Regulations like PCI DSS (Payment Card Industry Data Security Standard, first released 2004) began requiring regular penetration testing for organizations that process credit cards. HIPAA, SOX, and other regulations created additional compliance drivers for security testing.
The consulting industry: Companies like @stake, iSEC Partners, Foundstone, and many others built successful businesses providing penetration testing services. Major consultancies like Deloitte, PwC, and KPMG added cybersecurity practices. Ethical hacking became a career path with salaries, benefits, and retirement plans.
1.2.5 The Bug Bounty Revolution (2010s–Present)
The launch of bug bounty platforms — Bugcrowd and HackerOne, both founded in 2012 — democratized ethical hacking in ways that traditional consulting never could. Suddenly, a skilled researcher in Lagos or Bangalore could earn significant income by finding vulnerabilities in major technology companies, without the need for formal employment, certifications, or even a college degree.
As of 2024, HackerOne alone has facilitated over $300 million in bounty payouts to hackers in more than 170 countries. Individual researchers have earned over $1 million on the platform. Companies including Google, Microsoft, Apple, the U.S. Department of Defense, Goldman Sachs, and Starbucks all operate bug bounty programs.
📊 Real-World Application: The U.S. Department of Defense launched "Hack the Pentagon" in 2016 as the federal government's first-ever bug bounty program. Within 24 hours, the first vulnerability was reported. By the end of the program, 138 unique vulnerabilities were identified. The program cost $150,000 in bounties — a fraction of what the same testing would have cost through traditional contracting. The success led to "Hack the Army," "Hack the Air Force," and ongoing programs managed through HackerOne and Bugcrowd.
1.2.6 The Current State: AI, Automation, and the Talent Gap
Today, ethical hacking exists in a complex ecosystem. Several trends are shaping the field:
- The cybersecurity talent shortage: There are an estimated 3.5 million unfilled cybersecurity positions globally. This shortage has driven salaries upward and created enormous demand for skilled pentesters.
- AI and automation: Tools powered by machine learning are augmenting human hackers for vulnerability discovery, but they are far from replacing the creativity and contextual understanding that human testers bring.
- Cloud and DevOps transformation: As organizations move to cloud-native architectures and CI/CD pipelines, the attack surface has changed dramatically. Ethical hackers must now understand containers, serverless functions, infrastructure-as-code, and API security.
- Ransomware epidemic: The explosion of ransomware attacks has made cybersecurity a board-level priority, increasing investment in proactive security testing.
- Regulatory expansion: The EU's NIS2 Directive, the SEC's new cybersecurity disclosure rules, and similar regulations worldwide are creating new compliance drivers for penetration testing.
1.3 The White, Black, and Gray Hat Taxonomy
The hacking community has long used a color-coded taxonomy to categorize different types of hackers. While this framework is imperfect — and the boundaries between categories are increasingly blurred — it provides a useful starting vocabulary.
1.3.1 White Hat Hackers
White hat hackers are security professionals who use their skills with explicit authorization to improve security. They follow established legal and ethical frameworks, report their findings to system owners through proper channels, and operate within defined scopes.
White hat hackers include:
- Penetration testers employed by consulting firms or internal security teams
- Bug bounty hunters who report findings through authorized programs
- Security researchers who discover and responsibly disclose vulnerabilities
- Red team members conducting authorized adversarial simulations
- Security engineers who use offensive techniques to test defenses they are building
The key defining characteristic is authorization and intent. A white hat hacker's goal is to make systems more secure.
1.3.2 Black Hat Hackers
Black hat hackers are individuals who access computer systems without authorization, typically for personal gain, malicious intent, or both. This includes:
- Cybercriminals who steal data for financial gain
- Ransomware operators who encrypt systems and demand payment
- State-sponsored hackers who conduct espionage or sabotage (sometimes called "Advanced Persistent Threats" or APTs)
- Hacktivists with malicious methods (though they would argue their intent is righteous)
- Insiders who abuse their legitimate access for unauthorized purposes
The key defining characteristic is the absence of authorization and/or the presence of malicious intent.
1.3.3 Gray Hat Hackers
Gray hat hackers occupy the ambiguous middle ground. They may access systems without authorization but without malicious intent — perhaps to discover vulnerabilities and report them to the owner. They may cross ethical lines but stop short of clearly criminal behavior.
Common gray hat scenarios include:
- Finding a vulnerability in a company's website without authorization, then notifying the company
- Accessing a system to prove it is insecure, without stealing or destroying data
- Publishing vulnerability details after a vendor refuses to patch, to pressure them into action
- Security researchers who discover vulnerabilities as a side effect of other work
⚠️ Common Pitfall: Many aspiring ethical hackers believe that good intentions are a legal defense. They are not. If you access a system without authorization, you may be prosecuted regardless of your intent. The CFAA does not have a "but I was trying to help" exception. If you want to test a system's security, get written authorization first. If you stumble upon a vulnerability, report it through the organization's responsible disclosure program or through a platform like HackerOne. Never access systems beyond what is necessary to confirm the vulnerability exists.
1.3.4 Beyond the Taxonomy
The hat metaphor, borrowed from Western films where heroes wore white hats and villains wore black, is increasingly recognized as overly simplistic. Real-world scenarios often defy clean categorization:
- Nation-state hackers may be heroes to their own country and villains to others. Is a Chinese APT member a black hat or a patriotic defender?
- Hacktivists like those who exposed corruption may have broken the law but served the public interest. Anonymous operations against oppressive regimes raise difficult ethical questions.
- Vulnerability brokers who buy and sell zero-day exploits operate in a legal gray zone. Companies like Zerodium pay millions for zero-days and sell them to governments. Is this white hat, black hat, or something else entirely?
- Former offenders like Kevin Mitnick, and reformed con artists like Frank Abagnale, have become respected security consultants. Does past behavior define someone permanently?
The taxonomy remains useful as a starting point, but as you progress through this book, we will encourage you to think about hacking ethics in more nuanced terms — considering intent, authorization, impact, and context rather than relying solely on hat colors.
🔗 Connection: The tension between offense and defense is a theme we will revisit throughout this book. In Chapter 18, we will explore red team vs. blue team dynamics. In Chapter 37, we will examine the ethics of vulnerability research and disclosure in depth. The foundation we build here — understanding that the same skills can serve protective or destructive purposes, and that the difference lies in authorization and intent — will guide us through increasingly complex scenarios.
1.4 The Penetration Testing Lifecycle
Professional penetration testing follows a structured methodology. While different frameworks exist (OWASP, PTES, OSSTMM, NIST SP 800-115), they all share the same fundamental phases. We will use a five-phase model that maps closely to industry practice.
1.4.1 Phase 1: Planning and Reconnaissance
Every engagement begins with planning. Before a single packet is sent, the tester and the client must agree on scope, rules of engagement, timing, communication protocols, and legal authorizations.
Planning activities include:
- Defining the scope (which systems, networks, applications, and physical locations are in bounds)
- Establishing rules of engagement (what techniques are permitted, what is off-limits)
- Setting the timeline (start date, end date, testing windows)
- Defining communication channels (who to contact if a critical vulnerability is found, emergency procedures)
- Signing legal documents (SOW, NDA, authorization letters)
- Establishing success criteria (what does a "successful" test look like?)
Reconnaissance (often called "recon") is the process of gathering information about the target. It is divided into two types:
- Passive reconnaissance involves gathering information without directly interacting with the target. This includes searching public records, social media, job postings, DNS records, WHOIS data, financial filings, and leaked data. Passive recon leaves no trace on the target's systems.
- Active reconnaissance involves directly interacting with the target — port scanning, service enumeration, banner grabbing, and similar activities. Active recon is detectable by the target's security systems.
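Passive reconnaissance tends to produce piles of unstructured text, so testers routinely script the extraction of useful details. As a minimal sketch, here is how saved WHOIS output might be parsed for registrar and name server information — note that the record below is an invented example for our fictional MedSecure domain, not real registry data:

```python
import re

# Hypothetical saved WHOIS output (passive recon: this text came from a
# registry query, not from touching the target's own systems).
whois_text = """\
Domain Name: MEDSECURE-EXAMPLE.COM
Registrar: Example Registrar, LLC
Name Server: NS1.MEDSECURE-EXAMPLE.COM
Name Server: NS2.MEDSECURE-EXAMPLE.COM
Updated Date: 2023-11-02T04:00:00Z
"""

def parse_whois(text):
    """Extract the fields a tester cares about from raw WHOIS text."""
    registrar = re.search(r"^Registrar:\s*(.+)$", text, re.MULTILINE)
    name_servers = re.findall(r"^Name Server:\s*(\S+)$", text, re.MULTILINE)
    return {
        "registrar": registrar.group(1).strip() if registrar else None,
        "name_servers": [ns.lower() for ns in name_servers],
    }

info = parse_whois(whois_text)
print(info["name_servers"])  # name servers to add to the recon notes
```

In practice this kind of parsing feeds a recon database that grows throughout the engagement; each name server discovered here becomes a candidate for further passive lookups.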
🔴 Red Team Perspective: Reconnaissance is where most real attacks begin and where most pentesters underinvest time. Sophisticated attackers may spend weeks or months gathering intelligence before launching a single exploit. A job posting that mentions "We use Oracle 12c on RHEL 7" tells an attacker exactly what database and OS to target. A LinkedIn profile revealing that the company's sysadmin is on vacation tells them when defenses may be weakest. Never rush recon.
1.4.2 Phase 2: Scanning and Enumeration
Once reconnaissance has identified potential targets, the tester moves to active scanning and enumeration to build a detailed map of the attack surface.
Scanning activities include:
- Network scanning: Identifying live hosts, open ports, and running services (tools: Nmap, Masscan)
- Vulnerability scanning: Identifying known vulnerabilities in discovered services (tools: Nessus, OpenVAS, Qualys)
- Web application scanning: Identifying web-specific vulnerabilities like SQL injection, XSS, and misconfigured headers (tools: Burp Suite, OWASP ZAP, Nikto)
- Service enumeration: Extracting detailed information from discovered services — version numbers, configurations, user lists, share names
Enumeration goes deeper than scanning. Where scanning asks "what ports are open?", enumeration asks "what can I learn from the services behind those ports?" Enumerating an SMB service might reveal share names and access permissions. Enumerating an LDAP service might reveal the entire Active Directory structure.
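The scanning/enumeration distinction can be seen in a few lines of Python. The sketch below only ever touches a throwaway service it starts itself on 127.0.0.1, so it is safe to run anywhere; the "DemoFTP" service and its banner are invented for the example:

```python
import socket
import threading

def tcp_connect_scan(host, ports, timeout=0.5):
    """Scanning: attempt a full TCP handshake on each port; open ports accept."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed or filtered
    return open_ports

def grab_banner(host, port, timeout=1.0):
    """Enumeration: connect and read whatever the service announces."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024).decode(errors="replace").strip()

def start_demo_service():
    """A throwaway local service so the sketch is self-contained."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # the OS picks a free port
    srv.listen(5)
    def serve():
        while True:
            conn, _ = srv.accept()
            try:
                conn.sendall(b"220 DemoFTP 1.0 ready\r\n")  # invented banner
            except OSError:
                pass              # client hung up first; keep serving
            finally:
                conn.close()
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

port = start_demo_service()
found = tcp_connect_scan("127.0.0.1", [port, port + 1])  # port + 1 is closed
print("open ports:", found)
print("banner:", grab_banner("127.0.0.1", port))
```

The scan answers "is the port open?"; the banner grab answers "what, and which version, is listening there?" — exactly the step that turns a list of ports into a list of attackable services. Real tools like Nmap do both far more thoroughly (SYN scans, service fingerprint databases), but the underlying idea is this simple.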
🔵 Blue Team Perspective: Every technique in this phase generates network traffic that a well-configured SIEM (Security Information and Event Management) system should detect. If you are a defender, ask yourself: would we detect an Nmap scan of our network? Would we notice someone enumerating our Active Directory? If the answer is "no," you have found a gap in your monitoring.
1.4.3 Phase 3: Gaining Access (Exploitation)
This is the phase most people think of when they imagine hacking — the moment of actually exploiting a vulnerability to gain unauthorized access. But in professional pentesting, exploitation is a carefully controlled process, not a reckless smash-and-grab.
Common exploitation techniques include:
- Exploiting software vulnerabilities (buffer overflows, SQL injection, remote code execution)
- Credential attacks (password spraying, brute forcing, credential stuffing with leaked passwords)
- Social engineering (phishing, pretexting, physical intrusion — if in scope)
- Misconfigurations (default credentials, overly permissive access controls, exposed management interfaces)
- Client-side attacks (malicious documents, browser exploits, watering hole attacks)
A key principle of professional exploitation is to use the minimum access necessary to demonstrate impact. If you can prove a SQL injection vulnerability allows data extraction by retrieving one row, you do not need to dump the entire database. If you can demonstrate remote code execution by creating a harmless file, you do not need to install a persistent backdoor.
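The minimum-access principle is easy to demonstrate with a toy vulnerable query. In the sketch below — an in-memory SQLite database with invented users, not any real application's code — an injected LIMIT 1 proves the SQL injection flaw by retrieving exactly one row, where a careless attacker would have dumped the entire table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT, password TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "alice", "s3cret!"), (2, "bob", "hunter2")])

def lookup(username):
    # VULNERABLE on purpose: user input is concatenated straight into SQL.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# What a destructive attacker would do: dump every row.
all_rows = lookup("' OR 1=1 --")

# What a professional proof-of-concept does: demonstrate the same flaw
# while retrieving the minimum data needed to prove impact.
proof = lookup("' OR 1=1 LIMIT 1 --")
print("proof of injection:", proof)
```

Both payloads exploit the identical flaw; the report gets equally strong evidence either way, but the second approach exposes one record instead of the whole user table. (The fix, of course, is a parameterized query: `conn.execute("SELECT id, username FROM users WHERE username = ?", (username,))`.)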
✅ Best Practice: Document every step of your exploitation chain meticulously. Screenshots, command logs, timestamps, and network captures are your evidence. If a client questions whether a vulnerability is real or disputes your findings, your documentation is your proof. It also helps the client understand exactly how the attack worked so they can fix it effectively.
1.4.4 Phase 4: Maintaining Access and Post-Exploitation
Once initial access is gained, the tester explores what an attacker could do with that foothold. This phase includes:
- Privilege escalation: Moving from a low-privilege user account to administrator or root access
- Lateral movement: Moving from the initially compromised system to other systems on the network
- Data exfiltration (simulated): Demonstrating that sensitive data could be stolen
- Persistence (carefully): Showing how an attacker could maintain long-term access (backdoors, scheduled tasks, rootkits) — though in most pentest engagements, actually installing persistent mechanisms is done with extreme caution and explicit authorization
This phase is crucial because it demonstrates the real-world impact of the initial vulnerability. A single compromised web server might not seem critical, but if that server can be used to reach the domain controller, which gives access to every system in the organization, the impact is catastrophic.
🔴 Red Team Perspective: Real attackers do not stop at initial access. The median dwell time — the time between initial compromise and detection — was 16 days in 2023, according to Mandiant's M-Trends report. During those 16 days, attackers escalate privileges, map the internal network, identify high-value targets, and exfiltrate data. Your pentest should demonstrate what an attacker could accomplish in that window.
1.4.5 Phase 5: Reporting and Remediation
Many pentesters consider the report the most important deliverable of the entire engagement. A brilliant exploitation that is poorly documented helps no one.
A professional pentest report typically includes:
- Executive summary: A non-technical overview for senior leadership, explaining the overall risk posture and key findings in business terms
- Methodology: What was tested, how it was tested, and what frameworks were followed
- Findings: Each vulnerability documented with:
- Severity rating (typically using CVSS or a similar framework)
- Description of the vulnerability
- Steps to reproduce
- Evidence (screenshots, logs, captured data)
- Business impact (what could an attacker do with this?)
- Remediation recommendations (how to fix it)
- Positive findings: What the organization is doing well — this context helps prioritize remediation
- Strategic recommendations: Longer-term improvements to the organization's security program
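Severity ratings in the findings section commonly follow the CVSS v3.1 qualitative scale, which maps the numeric base score into named bands (0.1–3.9 Low, 4.0–6.9 Medium, 7.0–8.9 High, 9.0–10.0 Critical). A minimal sketch of that mapping — the findings listed are invented examples, not from any real report:

```python
def cvss_severity(score):
    """Map a CVSS v3.1 base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores run from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Hypothetical findings, sorted worst-first the way a report presents them.
findings = [
    ("SQL injection in patient search", 9.8),
    ("Verbose server version banner", 2.2),
    ("Missing rate limiting on login", 5.3),
]
for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(f"{cvss_severity(score):8} {score:>4}  {name}")
```

Consistent use of a published scale matters: it lets the client compare findings across engagements and vendors, and it keeps severity arguments grounded in a standard rather than the tester's gut feeling.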
✅ Best Practice: Write your report for two audiences simultaneously. The executive summary should be understandable by a CEO who has no technical background. The technical findings should be detailed enough that a systems administrator can reproduce the issue and verify the fix. Many firms deliver a formal out-brief presentation in addition to the written report.
1.5 The Business Case for Ethical Hacking
Understanding the business case for ethical hacking is crucial — whether you are a tester who needs to explain your value, a manager seeking budget approval, or a student trying to understand why this profession exists.
1.5.1 The Cost of Breaches
IBM's Cost of a Data Breach Report 2024 found that the average cost of a data breach reached $4.88 million globally, with healthcare breaches averaging $9.77 million — the highest of any industry for the fourteenth consecutive year. These costs include:
- Detection and escalation costs
- Notification costs (informing affected individuals and regulators)
- Post-breach response (credit monitoring, legal fees, regulatory fines)
- Lost business (customer churn, reputational damage, operational downtime)
A penetration test typically costs between $10,000 and $100,000 depending on scope and complexity. Even a $100,000 pentest that prevents a single significant breach represents an extraordinary return on investment.
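The return-on-investment arithmetic behind this claim is simple enough to sketch. Using the figures above (a $4.88 million average breach cost and a $100,000 pentest), and an assumed probability that the test prevents a breach, the expected savings dwarf the fee. The 10% prevention probability below is an illustrative assumption, not a statistic from the IBM report:

```python
def pentest_roi(breach_cost, pentest_cost, p_prevented):
    """Expected net benefit, as a multiple of the pentest fee, of a test
    that prevents a breach with probability p_prevented."""
    expected_savings = breach_cost * p_prevented
    return (expected_savings - pentest_cost) / pentest_cost

# Chapter figures; the 10% prevention probability is an assumption.
roi = pentest_roi(breach_cost=4_880_000, pentest_cost=100_000, p_prevented=0.10)
print(f"Expected ROI: {roi:.0%}")
```

Even under far more pessimistic assumptions (say, a 3% chance the test prevents a breach), the expected savings still exceed the fee — which is why this framing resonates with the non-technical stakeholders a tester must often persuade.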
1.5.2 Compliance and Regulatory Requirements
Many regulations and standards require or strongly encourage penetration testing:
| Regulation/Standard | Requirement |
|---|---|
| PCI DSS 4.0 | Annual penetration testing required (Requirement 11.4) |
| HIPAA | Risk analysis required; pentesting is a recognized method |
| SOC 2 | Penetration testing supports multiple trust criteria |
| ISO 27001 | Technical vulnerability management (Annex A.12.6) |
| NIST Cybersecurity Framework | Supports Identify and Protect functions |
| EU NIS2 Directive | Regular testing and auditing of security measures |
| NY DFS Cybersecurity Regulation | Annual penetration testing required |
| DORA (EU Financial Sector) | Threat-led penetration testing required |
For many organizations, penetration testing is not optional — it is a compliance obligation.
1.5.3 Insurance and Due Diligence
Cyber insurance has become a critical risk management tool, and insurers increasingly require evidence of proactive security testing. A clean pentest report can lower premiums, while a history of testing demonstrates due diligence that may limit liability in the event of a breach.
Similarly, mergers and acquisitions increasingly involve cybersecurity due diligence. A pentest of the target company's systems may be part of the pre-acquisition evaluation, and significant findings can affect the deal's valuation or terms.
1.5.4 Competitive Advantage
Organizations that can demonstrate strong security posture gain competitive advantages, particularly in B2B markets. Enterprise customers increasingly require vendors to provide evidence of security testing. SOC 2 reports, pentest attestations, and bug bounty programs have become selling points.
1.5.5 Building a Security Culture
Perhaps the most underappreciated benefit of penetration testing is its cultural impact. When developers see their code exploited in a pentest report, they write more secure code. When executives see a demonstration of how their crown jewels could be stolen, they invest in security. Pentesting transforms abstract security risks into concrete, visceral demonstrations.
📊 Real-World Application: A Fortune 500 financial institution reported that after implementing a quarterly pentest program, the number of critical vulnerabilities found per test decreased by 67% over three years. The testing was not just finding problems — it was driving systematic improvement in the organization's development and operations practices.
1.6 Ethical Hacking Career Paths
Before we introduce our running example, let us briefly survey the career landscape. Understanding where ethical hacking skills can take you may help motivate the intensive learning ahead.
1.6.1 Penetration Tester / Security Consultant
The most direct career path. Pentesters work for consulting firms (large consultancies, boutique security firms, or managed security service providers) or as part of internal security teams. Salaries in the U.S. range from $80,000 for entry-level positions to $200,000+ for senior testers, with top consultants earning considerably more.
1.6.2 Red Team Operator
Red teamers conduct advanced, long-term adversarial simulations. This is the elite end of offensive security — red team operators develop custom tools, conduct sophisticated social engineering campaigns, and simulate nation-state-level threats. Red team roles often require several years of pentesting experience.
1.6.3 Bug Bounty Hunter
A growing number of researchers make their living (or supplement their income) through bug bounty programs. The top bug bounty hunters earn over $500,000 per year. The lifestyle offers freedom and flexibility — you can work from anywhere, on your own schedule, testing the targets that interest you most. However, income can be unpredictable, and there is no employer-provided health insurance or retirement plan.
1.6.4 Security Engineer / Architect
Many ethical hackers eventually transition to the defensive side, using their offensive knowledge to design and build secure systems. Understanding how attacks work makes you exceptionally effective at preventing them.
1.6.5 Security Leadership (CISO, VP of Security)
Senior security leadership positions increasingly value offensive security backgrounds. CISOs who have personally conducted penetration tests bring a depth of technical understanding that strengthens their strategic decision-making.
1.6.6 Certifications That Matter
Several certifications are particularly valued in ethical hacking careers:
- OSCP (Offensive Security Certified Professional): The most respected hands-on pentesting certification. Requires passing a 24-hour practical exam where you must compromise multiple machines.
- CEH (Certified Ethical Hacker): Widely recognized, particularly in government and compliance contexts. More knowledge-based than OSCP.
- GPEN (GIAC Penetration Tester): Rigorous certification from SANS Institute.
- eJPT / eCPPT (eLearnSecurity): Increasingly popular practical certifications with lower barriers to entry.
- OSWE, OSEP, OSED (Offensive Security): Advanced specializations in web, enterprise, and exploit development.
- PNPT (Practical Network Penetration Tester): From TCM Security, valued for its practical, real-world approach.
- CompTIA PenTest+: A solid foundational certification for those earlier in their careers.
1.7 Introducing MedSecure Health Systems
Throughout this book, we will use a fictional organization called MedSecure Health Systems as our primary running example. MedSecure will serve as the target for our discussions, examples, and many exercises. Let us meet them.
1.7.1 Company Profile
MedSecure Health Systems is a mid-sized regional healthcare provider based in Phoenix, Arizona. Founded in 2003, MedSecure operates:
- Three hospitals (one Level II trauma center)
- Twelve outpatient clinics
- Two urgent care facilities
- A growing telehealth platform
MedSecure employs approximately 4,500 people, including 800 physicians, 1,200 nurses, and a 45-person IT department with a 6-person security team. Annual revenue is approximately $1.2 billion.
1.7.2 Technology Environment
MedSecure's technology environment is typical of mid-sized healthcare organizations — a complex mix of modern and legacy systems:
Network Infrastructure:
- Corporate network: 10.10.0.0/16 (approximately 3,000 endpoints)
- Medical device network: 10.20.0.0/16 (approximately 500 connected medical devices)
- Guest Wi-Fi: 192.168.0.0/16 (segregated from corporate network)
- VPN for remote access (Cisco AnyConnect)
- Site-to-site VPN connecting all facilities
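Segmentation findings often begin with something very simple: classifying which segment an observed address belongs to. A minimal sketch using Python's standard `ipaddress` module and MedSecure's fictional ranges:

```python
import ipaddress

# Network ranges from the fictional MedSecure environment described above
CORPORATE = ipaddress.ip_network("10.10.0.0/16")
MEDICAL = ipaddress.ip_network("10.20.0.0/16")
GUEST = ipaddress.ip_network("192.168.0.0/16")

def classify(ip: str) -> str:
    """Return which MedSecure segment an address belongs to, if any."""
    addr = ipaddress.ip_address(ip)
    for name, net in (("corporate", CORPORATE), ("medical", MEDICAL), ("guest", GUEST)):
        if addr in net:
            return name
    return "unknown"

print(classify("10.20.14.7"))   # medical
print(classify("10.10.3.250"))  # corporate
print(classify("8.8.8.8"))      # unknown
```

In a real assessment, traffic seen crossing from the medical range into the corporate range would be direct evidence of the incomplete segmentation noted in MedSecure's known challenges.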
Server Infrastructure:
- Windows Active Directory domain (MEDSECURE.LOCAL) with approximately 200 servers
- Windows Server 2019 for most production workloads
- Several Windows Server 2012 R2 systems running legacy applications (known issue, migration planned)
- Ubuntu 22.04 LTS for Linux workloads
- Two CentOS 7 servers running a legacy radiology application (end of life, no vendor support)
- VMware vSphere 7.0 virtualization platform
Cloud Infrastructure:
- AWS (primary cloud provider)
- EC2 instances running patient portal backend
- RDS (PostgreSQL) for patient portal database
- S3 buckets for medical imaging storage
- Lambda functions for appointment scheduling APIs
- Microsoft 365 for email and productivity
Applications:
- Epic Systems EHR (Electronic Health Record) — the crown jewel, containing all patient data
- Patient portal (custom web application: React frontend, Node.js backend, PostgreSQL database)
- Telehealth platform (third-party SaaS with custom API integrations)
- Billing and revenue cycle management (legacy Java application)
- Mobile app for patients (iOS and Android, communicates with patient portal API)
Medical Devices:
- Networked infusion pumps (multiple vendors)
- Patient monitoring systems
- MRI and CT scanners with network connectivity
- Medication dispensing cabinets (Pyxis)
- Nurse call systems
📊 Real-World Application: MedSecure's environment is based on real healthcare IT architectures we have seen in dozens of assessments. The mix of modern cloud services and legacy on-premises systems, the challenge of securing medical devices that cannot be easily patched, the regulatory burden of HIPAA — these are the real-world conditions that make healthcare one of the most challenging (and rewarding) sectors for ethical hackers.
1.7.3 Security Posture
MedSecure has invested in security but, like many organizations, has gaps:
What they do well:
- Annual third-party penetration test (compliance-driven)
- Endpoint detection and response (CrowdStrike Falcon) on corporate endpoints
- Multi-factor authentication for VPN access
- Security awareness training (quarterly phishing simulations)
- SIEM (Splunk) monitoring with a two-person SOC during business hours

Known challenges:
- Medical device network segmentation is incomplete — some devices can reach the corporate network
- Legacy systems (Windows Server 2012 R2, CentOS 7) are difficult to patch
- Shadow IT: Several departments have deployed unauthorized SaaS applications
- Physician resistance to security controls that slow down patient care
- Limited budget for security staff (SOC has no weekend or overnight coverage)
- Third-party vendor access management is informal
- No bug bounty program
- No red team capability (only annual compliance-focused pentests)
1.7.4 Why MedSecure Matters
MedSecure represents the real-world complexity that makes ethical hacking both challenging and essential. Throughout this book, when we learn a new technique, we will ask: "How would this apply to MedSecure?" When we discuss a vulnerability class, we will consider what it means in a healthcare context where a breach could endanger patient lives.
⚖️ Legal Note: Healthcare penetration testing carries unique responsibilities. HIPAA requires the protection of Protected Health Information (PHI). If during a pentest you access systems containing real patient data, that data must be handled in compliance with HIPAA's Privacy and Security Rules. Many healthcare pentests use test data or operate in staging environments to avoid this issue, but you should always understand the regulatory implications of the data you may encounter.
🔗 Connection: We will develop MedSecure's environment progressively. In Chapter 2, we will analyze MedSecure's threat landscape. In Chapter 3, you will build a lab environment that simulates key aspects of MedSecure's network. By the time we reach exploitation techniques in later chapters, you will have a deep familiarity with the target environment that mirrors how a real penetration tester builds knowledge of their client's systems over time.
1.8 Types of Penetration Tests
Not all penetration tests are the same. Understanding the different types helps you choose the right approach for each engagement and communicate clearly with clients about what they are getting.
1.8.1 By Knowledge Level
Black box testing simulates an external attacker with no prior knowledge of the target. The tester starts with nothing more than a company name or IP range and must discover everything through reconnaissance. This approach tests the full attack chain, including the reconnaissance phase, but is time-intensive and may miss internal vulnerabilities.
White box testing (also called "crystal box" or "clear box") gives the tester full knowledge of the target environment — network diagrams, source code, credentials, configuration files. This approach maximizes coverage in a limited time because no time is spent on discovery. It is particularly valuable for code review and architecture assessment.
Gray box testing provides partial knowledge — perhaps network ranges and low-privilege user credentials, but no source code or network diagrams. This simulates a compromised insider or an attacker who has gained initial access and is the most common approach in professional penetration testing.
💡 Intuition: Think of these like testing a bank's physical security. A black box test is like hiring someone to try to rob the bank with no inside information. A white box test is like giving the tester the blueprints, alarm codes, and guard schedules and asking "can you still find weaknesses?" A gray box test is like saying "here is a customer-level access card — how far can you get?"
1.8.2 By Target
Network penetration testing targets network infrastructure — routers, switches, firewalls, servers, and the services running on them. This includes both external (Internet-facing) and internal (requires network access) testing.
Web application penetration testing focuses on web applications and APIs. This is the most common type of pentest, reflecting the fact that most organizations' primary attack surface is their web presence.
Mobile application penetration testing examines iOS and Android applications and their backend APIs, including data storage, authentication, and communication security.
Wireless penetration testing assesses Wi-Fi networks, including encryption strength, rogue access points, and client-side attacks.
Social engineering testing evaluates the human element through phishing campaigns, pretexting calls, physical intrusion attempts, and other manipulation techniques.
Physical penetration testing assesses physical security controls such as locks, access cards, cameras, and guards, using techniques that range from lock picking and tailgating to dumpster diving.
Cloud penetration testing assesses cloud infrastructure configurations, IAM policies, and cloud-specific attack vectors on platforms like AWS, Azure, and GCP.
1.8.3 By Perspective
External testing simulates an attacker on the Internet targeting the organization's public-facing systems. The tester operates from outside the network perimeter.
Internal testing simulates an attacker who has already gained access to the internal network — either through a compromised system, a malicious insider, or physical access. Internal tests typically reveal far more vulnerabilities than external tests because the internal network is less hardened.
Purple team exercises combine red team (offensive) and blue team (defensive) activities in a collaborative framework. Rather than the traditional adversarial model where the red team tries to evade detection, purple team exercises have both teams working together, with the red team executing techniques and the blue team learning to detect and respond to them in real time.
🔵 Blue Team Perspective: Purple team exercises are often more valuable than traditional red team engagements for building defensive capability. When the blue team can observe exactly what the red team is doing and get immediate feedback on their detection capabilities, the learning cycle is dramatically accelerated. Many mature organizations are shifting from adversarial red team engagements to collaborative purple team exercises.
1.8.4 Choosing the Right Approach for MedSecure
If MedSecure were to engage a penetration testing firm, the optimal approach would likely be a multi-phase engagement:
- External network pentest (black box) to assess the Internet-facing attack surface
- Web application pentest (gray box) of the patient portal, with authenticated and unauthenticated testing
- Internal network pentest (gray box, with low-privilege domain credentials) to assess Active Directory security and lateral movement potential
- Social engineering (phishing campaign) targeting employees across departments
- Medical device assessment (white box, with network diagrams and device documentation) given the safety-critical nature of these systems
This multi-phase approach provides comprehensive coverage while being realistic about time and budget constraints.
1.9 The Human Factor in Security
No discussion of ethical hacking is complete without acknowledging the central role of human beings in both causing and preventing security breaches.
1.9.1 Why Humans Are the Weakest Link
Despite billions of dollars spent on technical security controls, humans remain the most frequently exploited attack vector. The reasons are rooted in psychology:
Cognitive biases: Humans are subject to systematic errors in thinking. Authority bias makes us comply with requests from perceived authority figures — a social engineer posing as IT support. Urgency bias makes us act quickly without careful evaluation — a phishing email warning of an imminent account lockout. Reciprocity bias makes us feel obligated to help someone who has helped us — a pretexting scenario where the attacker provides a small favor before making a request.
Habituation: When employees encounter the same security warnings repeatedly, they learn to dismiss them automatically. This "alert fatigue" is why the 500th "Are you sure you want to allow this application?" dialog is clicked through without reading.
Competing priorities: Healthcare workers at MedSecure are focused on patient care. When a security control slows down access to a patient's records during an emergency, the healthcare worker will find a workaround — writing passwords on sticky notes, sharing login credentials, or propping open secured doors. These workarounds are rational from the worker's perspective but create security gaps.
1.9.2 Social Engineering in Penetration Testing
Social engineering testing evaluates how well an organization's people resist manipulation. Common techniques include:
- Phishing simulations: Sending crafted emails that mimic real phishing attacks to measure how many employees click links, open attachments, or enter credentials on fake login pages
- Vishing (voice phishing): Calling employees and using pretexts to extract information or gain access
- Physical social engineering: Attempting to gain unauthorized physical access through tailgating, impersonation, or pretexting at reception desks
- USB drop tests: Leaving USB drives labeled "Employee Salaries Q4" in parking lots or common areas to test if employees plug them into corporate computers
- Baiting and pretexting: Creating elaborate scenarios to manipulate targets, such as posing as a new employee needing help, a vendor performing maintenance, or a delivery person
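Findings from these campaigns should be reported as aggregate statistics rather than individual failures. A minimal sketch of that aggregation, using entirely hypothetical simulation results:

```python
from collections import Counter

# Hypothetical phishing-simulation results: (employee_id, department, clicked_link)
results = [
    ("e001", "billing", True),
    ("e002", "billing", False),
    ("e003", "nursing", True),
    ("e004", "nursing", True),
    ("e005", "it", False),
]

clicks = Counter(dept for _, dept, clicked in results if clicked)
totals = Counter(dept for _, dept, _ in results)

# Report aggregate rates per department; never attach names to failures.
for dept in sorted(totals):
    rate = clicks[dept] / totals[dept]
    print(f"{dept}: {rate:.0%} click rate ({totals[dept]} recipients)")
```

Keeping employee identifiers out of the final report is a deliberate design choice: the data needed for training decisions is the department-level rate, not the names.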
⚖️ Legal Note: Social engineering testing can be emotionally distressing for employees who fall for simulated attacks. Ethical social engineering testing should be designed to educate, not humiliate. Results should be reported in aggregate, not used to single out individuals for punishment. The goal is to improve organizational resilience, not to shame specific employees. Always ensure social engineering is explicitly included in your scope of engagement before conducting any such tests.
1.9.3 Building a Security Culture
The most effective defense against social engineering is not technology — it is culture. Organizations that foster security awareness, encourage reporting of suspicious activity without blame, and treat security as everyone's responsibility are far more resilient than those that rely solely on technical controls.
As an ethical hacker, your social engineering findings should always come with recommendations for cultural improvement alongside technical fixes. A phishing simulation that results in a 40% click rate should lead to enhanced training programs, simplified reporting mechanisms, and cultural changes — not just better email filtering.
📊 Real-World Application: The SANS 2024 Security Awareness Report found that organizations with mature security awareness programs experienced 70% fewer security incidents than those without. The most effective programs combine regular phishing simulations with positive reinforcement (rewarding employees who report suspicious emails) rather than punitive approaches.
1.10 The Mindset of an Ethical Hacker
Technical skills are necessary but not sufficient. The best ethical hackers share a distinctive mindset that sets them apart.
1.10.1 Think Like an Attacker
The most fundamental mental shift in ethical hacking is learning to see systems from the attacker's perspective. When a developer looks at a login form, they see authentication. When an ethical hacker looks at the same form, they see a potential entry point for SQL injection, credential stuffing, brute force attacks, session hijacking, and password reset abuse.
This adversarial thinking does not come naturally to most people, especially those trained in defensive IT. It must be cultivated through practice. Every system you encounter — every website you visit, every application you use, every network you connect to — is an opportunity to think: "How could this be abused?"
1.10.2 Be Methodical, Not Just Creative
While hacking requires creativity, professional penetration testing requires methodology. The pentesters who find the most vulnerabilities are not the most creative — they are the most thorough. They follow a systematic process, check every port, test every parameter, and document every finding.
1.10.3 Never Stop Learning
The technology landscape changes constantly. New vulnerabilities are discovered daily. New attack techniques emerge monthly. New technologies create new attack surfaces yearly. The best ethical hackers are perpetual students who invest significant time in continuous learning through CTF (Capture the Flag) competitions, lab practice, conference talks, research papers, and community engagement.
1.10.4 Communicate Effectively
A vulnerability that is found but poorly communicated is almost as useless as a vulnerability that is never found. Ethical hackers must be able to explain technical issues to non-technical audiences, write clear reports, and present findings persuasively. If you can compromise an entire network but cannot explain why it matters to the CFO who controls the remediation budget, your skills are not reaching their full potential.
1.10.5 Maintain Integrity
You will have access to sensitive data. You will know about vulnerabilities that, if disclosed improperly, could cause enormous harm. You will be trusted with the digital keys to organizations' most valuable assets. This trust is the foundation of the profession, and violating it — even once — will end your career and potentially result in criminal prosecution.
💡 Intuition: Think of ethical hacking like martial arts. You learn techniques that could cause serious harm, but the discipline teaches you when and how to use those techniques responsibly. The skills are powerful precisely because they can be dangerous, and the ethical framework is what makes you a protector rather than a threat.
1.11 The Attacker vs. Defender Asymmetry
One of the most important concepts in cybersecurity is the fundamental asymmetry between attackers and defenders.
1.11.1 The Defender's Dilemma
Defenders must protect everything. Every server, every application, every endpoint, every user, every network connection — any one of them could be the entry point an attacker uses. A defender's job is never done because the attack surface is always changing as new systems are deployed, new software is installed, new employees are hired, and new vulnerabilities are discovered.
1.11.2 The Attacker's Advantage
Attackers only need to find one weakness. They can choose when, where, and how to attack. They can spend months preparing. They do not have to follow rules, meet compliance requirements, or justify their budget to a board of directors. They can use any tool, technique, or deception available to them.
1.11.3 Why This Matters for Ethical Hackers
This asymmetry is precisely why ethical hacking is so valuable. By thinking like an attacker and finding that one weakness before a malicious actor does, ethical hackers help level a playing field that is inherently tilted against defenders. Every vulnerability found and fixed is one fewer opportunity for an attacker.
🔵 Blue Team Perspective: Understanding the attacker's advantage should inform defensive strategy. Rather than trying to prevent every possible attack (an impossible task), effective defense focuses on making attacks harder (increasing cost), detecting attacks faster (reducing dwell time), and limiting the impact of successful attacks (containment and resilience). Ethical hackers test all three of these defensive layers.
1.12 Ethical Frameworks for Security Professionals
As an ethical hacker, you will face decisions that are not covered by any scope document or rule of engagement. Having a personal ethical framework will guide you through ambiguous situations.
1.12.1 The ACM Code of Ethics
The Association for Computing Machinery's Code of Ethics provides a foundation. Key principles relevant to ethical hacking include:
- Contribute to society and to human well-being
- Avoid harm
- Be honest and trustworthy
- Respect privacy
- Honor confidentiality
1.12.2 The (ISC)² Code of Ethics
For certified security professionals, (ISC)² requires adherence to four canons:
1. Protect society, the common good, necessary public trust and confidence, and the infrastructure
2. Act honorably, honestly, justly, responsibly, and legally
3. Provide diligent and competent service to principals
4. Advance and protect the profession
1.12.3 Practical Ethics Decision Framework
When you face an ethical dilemma in security work, consider these questions:
1. Is it legal? Does this action comply with applicable laws and regulations?
2. Is it authorized? Do I have explicit permission to do this?
3. Is it proportional? Am I using the minimum action necessary to achieve my legitimate objective?
4. What are the consequences? Who could be harmed, and how? Can I mitigate those harms?
5. Would I be comfortable if my actions were public? If a journalist wrote about what I am doing, would I be proud or embarrassed?
6. What would a reasonable professional do? Would my peers consider this action appropriate?
⚖️ Legal Note: Throughout this book, we will consistently emphasize the legal dimensions of ethical hacking. The law in this area is complex, varies by jurisdiction, and is evolving rapidly. We are not providing legal advice. When in doubt about the legality of a specific activity, consult a lawyer who specializes in cybersecurity law.
1.13 A Day in the Life of a Penetration Tester
To make the concepts in this chapter tangible, let us walk through a realistic day in the life of a professional penetration tester. This composite is based on the experiences of dozens of working pentesters.
Monday Morning: Pre-Engagement
You arrive at the office (or open your laptop at home — many pentesters work remotely) and review the kick-off materials for a new engagement. The client is a regional bank with 50 branches. You have been authorized to conduct an external network pentest and a web application assessment of their online banking portal.
You read the scope document carefully: twenty external IP addresses, one web application, testing permitted Monday through Friday between 8 PM and 6 AM to minimize impact on customers. Social engineering is not in scope. The client has provided low-privilege credentials for authenticated testing of the web application.
Your first task is reconnaissance. You spend two hours gathering information about the bank from public sources: WHOIS records, DNS entries, job postings (which reveal technology choices), LinkedIn profiles of IT staff, and cached web pages. You document everything in your note-taking tool.
Tuesday: Scanning and Enumeration
You run Nmap scans against the twenty external IPs during the authorized window. You discover 47 open ports across the targets, running services including web servers, mail servers, VPN gateways, and a few surprising finds — a development server with directory listing enabled and an administrative interface for the network firewall.
You methodically enumerate each service, recording versions, configurations, and potential vulnerabilities. The development server is particularly interesting — it appears to be running an outdated version of Apache with directory indexing enabled, exposing backup files.
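Much of the Tuesday work is turning raw scan output into structured notes. Nmap's "grepable" output format (`-oG`) lends itself to quick parsing; the sketch below extracts open TCP ports from a sample line. The host, services, and versions shown are illustrative, not from any real scan.

```python
import re

# A sample line of Nmap "grepable" (-oG) output; contents are illustrative.
line = ("Host: 203.0.113.10 ()\tPorts: "
        "22/open/tcp//ssh//OpenSSH 8.2/, "
        "80/open/tcp//http//Apache httpd 2.4.29/, "
        "443/open/tcp//https//nginx 1.18.0/")

def open_ports(grepable_line: str) -> list:
    """Extract (port, service, version) tuples for open TCP ports."""
    ports = []
    for m in re.finditer(r"(\d+)/open/tcp//([^/]*)//([^/]*)/", grepable_line):
        ports.append((int(m.group(1)), m.group(2), m.group(3)))
    return ports

for port, service, version in open_ports(line):
    print(f"{port:>5}  {service:<8} {version}")
```

Automating this kind of extraction frees the tester to spend time on the judgment calls that cannot be automated: deciding which of those services is worth a closer look.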
Wednesday: Exploitation
Using the intelligence gathered, you begin testing. The development server yields a database backup file containing hashed passwords. You crack several of them using hashcat. One password belongs to an employee who has reused it for the VPN — a finding you will rate as Critical in your report.
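At its core, offline password cracking is a hash-and-compare loop: hash each candidate word and check it against the stolen hashes. The toy sketch below illustrates the principle with unsalted SHA-256, which is far weaker than what a real application should use (bcrypt or Argon2); real engagements use GPU-accelerated tools like hashcat with large wordlists and rule sets. All names and passwords here are invented.

```python
import hashlib

# Illustrative only: unsalted SHA-256 hashes with invented users/passwords.
# Real applications should use a slow, salted scheme (bcrypt, Argon2).
leaked_hashes = {
    hashlib.sha256(b"Winter2023!").hexdigest(): "jsmith",
    hashlib.sha256(b"correct horse battery").hexdigest(): "mlopez",
}

wordlist = ["password", "Winter2023!", "letmein"]

cracked = {}
for candidate in wordlist:
    digest = hashlib.sha256(candidate.encode()).hexdigest()
    if digest in leaked_hashes:
        cracked[leaked_hashes[digest]] = candidate
        print(f"cracked: {leaked_hashes[digest]} -> {candidate}")
```

The example also shows why unsalted fast hashes are a finding in their own right: every candidate needs only one cheap hash computation to test against the entire database.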
On the web application, you discover a stored cross-site scripting (XSS) vulnerability in the account messaging feature and a broken access control issue that allows authenticated users to view other customers' transaction summaries by modifying an account ID parameter.
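The account-ID flaw described above is a classic insecure direct object reference (IDOR): the server trusts a client-supplied identifier without verifying ownership. A minimal sketch of the vulnerable pattern and its fix, with hypothetical data and function names:

```python
# Hypothetical in-memory data; names are invented for illustration.
ACCOUNTS = {
    "acct-100": {"owner": "alice", "balance": 5_000},
    "acct-101": {"owner": "bob", "balance": 12_000},
}

def get_account_vulnerable(session_user: str, account_id: str) -> dict:
    # BUG: no check that the account belongs to the requester
    return ACCOUNTS[account_id]

def get_account_fixed(session_user: str, account_id: str) -> dict:
    account = ACCOUNTS[account_id]
    if account["owner"] != session_user:
        raise PermissionError("account does not belong to this user")
    return account

# Alice can read Bob's account through the vulnerable path...
print(get_account_vulnerable("alice", "acct-101"))
# ...but the fixed path enforces ownership:
try:
    get_account_fixed("alice", "acct-101")
except PermissionError as exc:
    print(f"denied: {exc}")
```

The fix is a single ownership check, which is exactly why broken access control findings are so common: the check is easy to write and just as easy to forget on any one of hundreds of endpoints.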
Thursday: Documentation and Reporting
You spend the day writing your report. Each finding includes a description, reproduction steps, evidence (screenshots, request/response pairs), a CVSS score, business impact analysis, and remediation recommendations. The executive summary explains the bank's overall risk posture in non-technical terms, emphasizing the password reuse finding that could provide VPN access to an attacker.
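CVSS scores map to qualitative severity ratings that readers of the report recognize at a glance. The thresholds below follow the CVSS v3.1 specification's qualitative severity rating scale:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity rating
    (per the v3.1 specification: 0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium,
    7.0-8.9 High, 9.0-10.0 Critical)."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

for s in (0.0, 3.1, 5.4, 7.5, 9.8):
    print(f"{s:>4}: {cvss_severity(s)}")
```

Consistent severity labels matter because remediation deadlines and compliance obligations are usually tied to the rating, not the raw score.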
Friday: Debrief
You present your findings to the client's CISO, IT director, and two members of the development team. The executive summary generates serious concern — the CISO immediately requests budget for a multi-factor authentication rollout. The development team asks detailed questions about the XSS and access control findings, which leads to a productive discussion about secure development practices.
This is the rewarding part of the work: seeing your findings drive real security improvements that protect the bank's customers and their money.
🔗 Connection: This scenario illustrates the penetration testing lifecycle we described in Section 1.4: Planning and Reconnaissance (Monday), Scanning and Enumeration (Tuesday), Gaining Access (Wednesday), and Reporting and Remediation (Thursday and Friday). The example also demonstrates the importance of communication skills, documentation, and the business impact framing that distinguishes a professional pentest from amateur hacking.
Chapter Summary
In this chapter, we have established the foundation for our study of ethical hacking:
- Ethical hacking is the authorized practice of identifying vulnerabilities using the same techniques as malicious attackers, distinguished from criminal hacking by explicit permission and constructive intent.
- The history of hacking traces an arc from the curiosity-driven phone phreaks of the 1960s through the professionalization of the 2000s to today's bug bounty ecosystem, reflecting a gradual integration of offensive skills into legitimate security practice.
- The white/black/gray hat taxonomy provides a useful but imperfect framework for categorizing hackers, and real-world ethics are more nuanced than any simple classification.
- The penetration testing lifecycle consists of five phases: Planning and Reconnaissance, Scanning and Enumeration, Gaining Access, Maintaining Access and Post-Exploitation, and Reporting and Remediation.
- The business case for ethical hacking is compelling, driven by breach costs averaging $4.88 million, regulatory requirements, insurance considerations, and competitive advantage.
- MedSecure Health Systems — our running example — is a mid-sized healthcare organization with a typical mix of modern and legacy technology, presenting the kind of complex, realistic target we will learn to test.
- The ethical hacker's mindset combines adversarial thinking, methodical thoroughness, continuous learning, effective communication, and uncompromising integrity.
What's Next
In Chapter 2, we will shift our focus from the ethical hacker to the threat landscape they operate in. We will examine the motivations and capabilities of different threat actors, learn the MITRE ATT&CK framework and Cyber Kill Chain that structure our understanding of attacks, survey the most common attack vectors, and introduce our second running example — ShopStack, an e-commerce startup that presents a very different attack surface from MedSecure. Understanding the threat landscape is essential because you cannot defend against threats you do not understand, and you cannot test for attacks you have not studied.
Turn the page. The threats are waiting.