Learning Objectives

  • Identify and exploit reentrancy vulnerabilities in smart contracts and implement the checks-effects-interactions fix
  • Explain how flash loan attacks work and why they represent a novel attack vector unique to DeFi
  • Conduct a systematic smart contract audit using both manual review and automated tools (Slither, Mythril)
  • Analyze real-world exploit transactions and trace the attack vector from vulnerability to exploitation
  • Evaluate the economics of smart contract security: audit costs vs. potential losses, bug bounties, and insurance

Chapter 15: Smart Contract Security: Vulnerabilities, Exploits, and Auditing

$10 Billion Lost and Counting

On June 17, 2016, someone began draining Ether from the largest crowdfund in history. Over the course of several hours, 3.6 million ETH — worth roughly $60 million at the time and billions at later valuations — flowed out of a smart contract called "The DAO" into an attacker's child contract. The Ethereum community watched in real time on block explorers, powerless to stop it. The code was doing exactly what it was written to do. The bug was not in the Ethereum Virtual Machine. The bug was in the smart contract, and smart contracts, once deployed, cannot be patched.

That event was the first major smart contract exploit. It was not the last.

By 2024, the cumulative value lost to smart contract vulnerabilities, exploits, and hacks exceeded $10 billion. The DeFi ecosystem alone saw over $3.8 billion stolen in 2022, its worst year. Some of the largest individual events read like heists from a thriller: $625 million from Ronin Bridge (validator key compromise), $326 million from Wormhole (signature verification bypass), $197 million from Euler Finance (flash loan attack), $182 million from Beanstalk (governance manipulation via flash loan). These are not hypothetical scenarios from a textbook exercise. These are real losses, real stolen funds, and in many cases, real people's life savings.

What makes smart contract security fundamentally different from traditional software security is immutability. When a web application has a security vulnerability, the development team can deploy a patch within hours. When a smart contract has a vulnerability, the code cannot be changed. It sits on the blockchain, exposed to every attacker in the world, 24 hours a day, 7 days a week, with no downtime, no maintenance window, and no ability to roll back transactions. The code is the law until the code is exploited.

There is a second factor that makes smart contract security uniquely challenging: composability. In traditional software, your application runs in relative isolation. It calls APIs, but those APIs are maintained by their providers. In DeFi, your smart contract is deployed into an ecosystem where any other contract can call your public functions, where flash loans provide any attacker with unlimited temporary capital, and where the order of transactions within a block can be manipulated by validators and specialized bots. Your contract must be secure not only against direct attacks but against interactions with contracts that did not exist when yours was deployed.

A third factor is economic incentive. Traditional software bugs cause inconvenience, data loss, or reputational damage. Smart contract bugs cause direct, immediate, irreversible financial loss. A vulnerability in a DeFi protocol is a vault door left open with a neon sign pointing to it. The incentive to find and exploit vulnerabilities is measured in millions of dollars, and the attacker can be anyone, anywhere, at any time. Bug bounty programs attempt to redirect this incentive toward responsible disclosure, but the race between white-hat researchers and malicious actors is constant.

This chapter will not make you a security expert — that requires years of dedicated practice. But it will give you the vocabulary, the mental models, and the systematic methodology to think about smart contract security rigorously. You will learn the major vulnerability classes, study the actual code behind real exploits, use professional auditing tools, and develop the healthy paranoia that every smart contract developer needs.

Warning

The vulnerability patterns and exploit techniques in this chapter are presented for educational purposes. Exploiting vulnerabilities in deployed smart contracts without authorization is illegal in most jurisdictions and unethical in all of them. The goal is to learn to defend, not to attack. Authorized security research, bug bounty programs, and responsible disclosure are the legitimate paths for applying this knowledge.

Let us begin with the vulnerability that started it all.


15.1 Reentrancy: The Original Sin

Reentrancy is the single most important vulnerability class in smart contract history. It powered The DAO hack, and despite being well-known for nearly a decade, variations of it continue to appear in new exploits. Understanding reentrancy deeply — not just the surface-level pattern but the fundamental reason it exists — is essential for any smart contract developer.

15.1.1 The Fundamental Problem

In traditional programming, when function A calls function B, function A waits for B to complete before continuing. The execution is sequential and predictable. In Solidity, however, when your contract sends Ether to another address, you are making an external call. If that address is a contract (not an externally owned account), the receiving contract's receive() or fallback() function executes. That receiving contract can then call back into your contract before your original function has finished executing.

This is reentrancy: the attacker's contract re-enters your contract's function while the first invocation is still in progress. If your contract updates its state (like recording that the withdrawal has been made) after sending Ether rather than before, the attacker can withdraw repeatedly before the balance is ever decremented.

Consider this simplified vulnerable contract:

// VULNERABLE - DO NOT USE IN PRODUCTION
contract VulnerableVault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        require(amount > 0, "No balance");

        // VULNERABILITY: Sending ETH before updating state
        (bool success, ) = msg.sender.call{value: amount}("");
        require(success, "Transfer failed");

        // State update happens AFTER the external call
        balances[msg.sender] = 0;
    }
}

The problem is on the line with msg.sender.call{value: amount}(""). When this line executes, control passes to the recipient. If the recipient is a malicious contract, its receive() function can call withdraw() again. Since balances[msg.sender] has not yet been set to zero (that line has not executed yet), the require(amount > 0) check passes, and the contract sends Ether again. This loop continues until the contract's Ether balance is drained or the gas runs out.

The attacker's contract looks like this:

contract Attacker {
    VulnerableVault public vault;
    uint256 public attackCount;

    constructor(address _vault) {
        vault = VulnerableVault(_vault);
    }

    function attack() external payable {
        vault.deposit{value: msg.value}();
        vault.withdraw();
    }

    receive() external payable {
        if (address(vault).balance >= vault.balances(address(this))) {
            attackCount++;
            vault.withdraw(); // Re-enter the withdraw function
        }
    }
}

The attacker deposits 1 ETH, then calls withdraw(). The vault sends 1 ETH to the attacker contract, triggering receive(), which calls withdraw() again. Each iteration drains another 1 ETH. If the vault holds 100 ETH of other users' deposits, the attacker walks away with 101 ETH — the loot plus their original stake — after risking only 1 ETH.
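The recursion can be simulated outside the EVM. The Python sketch below (illustrative only; the classes and method names are invented analogues, not EVM code) models a vault that pays out before zeroing the recorded balance, and an attacker whose payment callback re-enters withdraw until the vault is empty:

```python
# Illustrative Python model of single-function reentrancy (not EVM code).
# The vault pays out BEFORE zeroing the caller's balance, so the recipient's
# callback can re-enter withdraw() while the stale balance persists.

class VulnerableVault:
    def __init__(self, initial_eth):
        self.eth = initial_eth          # contract's ETH balance
        self.balances = {}              # per-user recorded balances

    def deposit(self, user, amount):
        self.eth += amount
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user):
        amount = self.balances.get(user, 0)
        assert amount > 0, "No balance"
        # INTERACTION first (the bug): control passes to the recipient...
        self.eth -= amount
        user.receive(self, amount)
        # ...EFFECT last: by now the attacker has already re-entered.
        self.balances[user] = 0

class Attacker:
    def __init__(self):
        self.loot = 0

    def attack(self, vault, stake):
        vault.deposit(self, stake)
        vault.withdraw(self)

    def receive(self, vault, amount):
        self.loot += amount
        # Re-enter while balances[self] is still the stale, nonzero value.
        if vault.eth >= vault.balances[self]:
            vault.withdraw(self)

vault = VulnerableVault(initial_eth=100)   # 100 ETH from honest depositors
thief = Attacker()
thief.attack(vault, stake=1)
print(vault.eth, thief.loot)               # vault drained to 0, loot 101
```

Running the model drains the vault completely: the attacker's recorded balance is never zeroed until the recursion unwinds, so every re-entry passes the balance check.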

15.1.2 The DAO Hack: The Full Story

The DAO (Decentralized Autonomous Organization) launched in April 2016 as a venture capital fund governed by smart contract code. Token holders could vote on proposals to fund projects. By May 2016, it had raised 12.7 million ETH from over 11,000 investors — approximately $150 million and roughly 14% of all Ether in existence at that time. It was, by a wide margin, the largest crowdfund in history.

The DAO's code included a "split" function that allowed investors to withdraw their funds into a "child DAO" if they disagreed with the majority's investment decisions. This function had a reentrancy vulnerability nearly identical to the simplified example above. The split function sent ETH to the user's designated address before updating the user's token balance.

On June 17, 2016, an unknown attacker began exploiting this vulnerability. The attacker created a malicious child DAO contract whose fallback function recursively called the split function. Over several hours, the attacker drained 3.6 million ETH (about $60 million) into this child DAO.

The Ethereum community faced an existential choice. The DAO's code had a 28-day waiting period before funds in a child DAO could be withdrawn, creating a window for response. Three options emerged:

  1. Do nothing. The code executed as written. "Code is law" means accepting this outcome, however painful.
  2. Soft fork. Blacklist the attacker's address so miners would refuse to process their transactions. This was proposed but found to have its own vulnerability (a denial-of-service attack vector against miners).
  3. Hard fork. Change the Ethereum protocol to move the stolen funds to a recovery contract where DAO token holders could claim refunds. This would effectively reverse a transaction — something blockchains are explicitly designed not to do.

The community chose the hard fork, which executed on July 20, 2016, at block 1,920,000. The majority of the network adopted the fork, and DAO investors got their ETH back. But a minority of the community refused to accept the fork on principle, arguing that immutability was sacrosanct. They continued running the original chain, which became Ethereum Classic (ETC) — a blockchain that still exists today, born from a reentrancy bug.

The DAO hack taught the entire blockchain industry several painful lessons:

  • Smart contract code is an attack surface. Every public function is a potential entry point for exploitation. The split function was designed for legitimate use — allowing minority investors to exit — but its implementation created a devastating vulnerability.
  • Auditing matters. The DAO's code had been reviewed by members of the community, and the reentrancy pattern was even described publicly before the attack. But the vulnerability was not patched in time. Subsequent analysis by professional security researchers found it quickly, underscoring the difference between casual review and rigorous auditing.
  • Immutability cuts both ways. The inability to patch deployed contracts means vulnerabilities persist until all funds are drained or the contract is deprecated. The DAO's governance process — intended to give token holders democratic control — was too slow to respond to a security emergency.
  • Governance is messy. The hard fork debate revealed that "code is law" is an aspiration, not a reality. Human governance intervenes when the stakes are high enough. The creation of Ethereum Classic as a permanent fork demonstrated that the blockchain community is not monolithic — different values (immutability vs. pragmatism) can lead to permanent schism.
  • Complexity is the enemy of security. The DAO's split function combined fund calculation, child DAO creation, ETH transfer, and balance updates in a single function. This complexity made the reentrancy vulnerability harder to spot. Simpler, more modular designs are easier to audit and less likely to contain hidden vulnerabilities.

15.1.3 The Checks-Effects-Interactions Pattern

The standard defense against reentrancy is the checks-effects-interactions pattern, sometimes abbreviated CEI. The principle is straightforward: structure every function so that:

  1. Checks come first: validate all conditions and require statements.
  2. Effects come second: update all state variables.
  3. Interactions come last: make external calls only after all state is finalized.

Here is the vault contract rewritten using CEI:

contract SafeVault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        // CHECKS
        uint256 amount = balances[msg.sender];
        require(amount > 0, "No balance");

        // EFFECTS - state update BEFORE external call
        balances[msg.sender] = 0;

        // INTERACTIONS - external call LAST
        (bool success, ) = msg.sender.call{value: amount}("");
        require(success, "Transfer failed");
    }
}

Now, even if the receiving contract calls withdraw() again, balances[msg.sender] is already zero, so the require check fails and the reentrant call reverts. The state is consistent before any external call is made.
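The effect of the reordering can also be simulated outside the EVM. In the Python sketch below (illustrative; names are invented), the balance is zeroed before the payout, so the attacker's re-entry finds a zero balance and fails. One caveat about the model: here the attacker catches the failure locally, whereas on-chain the reverting inner call would cause the outer require(success) to revert the entire withdrawal.

```python
# Illustrative Python model of checks-effects-interactions (not EVM code).
# The balance is zeroed BEFORE the external call, so re-entry fails the check.

class SafeVault:
    def __init__(self, initial_eth):
        self.eth = initial_eth
        self.balances = {}

    def deposit(self, user, amount):
        self.eth += amount
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user):
        # CHECKS
        amount = self.balances.get(user, 0)
        if amount == 0:
            raise RuntimeError("No balance")
        # EFFECTS: state is finalized before any external call
        self.balances[user] = 0
        # INTERACTIONS last
        self.eth -= amount
        user.receive(self, amount)

class ReentrantAttacker:
    def __init__(self):
        self.loot = 0
        self.reentry_failed = False

    def attack(self, vault, stake):
        vault.deposit(self, stake)
        vault.withdraw(self)

    def receive(self, vault, amount):
        self.loot += amount
        try:
            vault.withdraw(self)        # re-entry attempt
        except RuntimeError:
            self.reentry_failed = True  # balance already zeroed: check fails

vault = SafeVault(initial_eth=100)
thief = ReentrantAttacker()
thief.attack(vault, stake=1)
print(vault.eth, thief.loot, thief.reentry_failed)  # 100 1 True
```

The attacker recovers only their own 1 ETH stake; the honest depositors' 100 ETH is untouched.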

15.1.4 Reentrancy Guards

For additional safety, OpenZeppelin provides a ReentrancyGuard contract that uses a mutex (mutual exclusion) lock:

import "@openzeppelin/contracts/utils/ReentrancyGuard.sol";

contract GuardedVault is ReentrancyGuard {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external nonReentrant {
        uint256 amount = balances[msg.sender];
        require(amount > 0, "No balance");
        balances[msg.sender] = 0;
        (bool success, ) = msg.sender.call{value: amount}("");
        require(success, "Transfer failed");
    }
}

The nonReentrant modifier sets a boolean lock to true when the function begins and resets it when the function completes. If the function is called again while the lock is active, the call reverts. This provides defense-in-depth: even if a developer accidentally places an external call before a state update, the reentrancy guard prevents re-entry.

Best practice is to use both the checks-effects-interactions pattern and a reentrancy guard. Defense in depth is the security mindset.
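The mutex mechanics can be sketched in Python (illustrative; the decorator and class names are invented analogues of the modifier, not OpenZeppelin's implementation): a flag is set on entry, checked on every call, and cleared on exit, so any re-entry while the flag is held fails immediately.

```python
# Illustrative Python analogue of a nonReentrant mutex (not Solidity).
import functools

def non_reentrant(fn):
    @functools.wraps(fn)
    def wrapper(self, *args, **kwargs):
        if getattr(self, "_locked", False):
            raise RuntimeError("ReentrancyGuard: reentrant call")
        self._locked = True                  # take the lock on entry
        try:
            return fn(self, *args, **kwargs)
        finally:
            self._locked = False             # release it on any exit path
    return wrapper

class Vault:
    def __init__(self):
        self.calls = 0

    @non_reentrant
    def withdraw(self, callback=None):
        self.calls += 1
        if callback:
            callback(self)                   # "external call" while locked

v = Vault()
blocked = []

def evil_callback(vault):
    try:
        vault.withdraw()                     # re-entry attempt while locked
    except RuntimeError:
        blocked.append("blocked")

v.withdraw(evil_callback)
print(v.calls, blocked)                      # 1 ['blocked']
```

The guarded function body executes exactly once; the nested call hits the lock and is rejected regardless of where the external call sits in the function.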

15.1.5 Cross-Function and Cross-Contract Reentrancy

The examples above show single-function reentrancy, where the attacker re-enters the same function. More subtle variants exist:

Cross-function reentrancy occurs when the attacker re-enters a different function that shares state with the vulnerable function. For example, if withdraw() sends ETH before updating the balance, and there is a separate transfer() function that reads the same balance, the attacker can call transfer() from within the reentrant callback to move the "phantom" balance to another address.

Cross-contract reentrancy occurs when multiple contracts share state (for example, through a shared storage contract or shared token balances). The attacker exploits the fact that Contract A has made an external call but not yet updated the shared state, allowing re-entry through Contract B that reads the stale shared state.

Read-only reentrancy is a particularly insidious variant discovered more recently. In this pattern, the attacker re-enters a view function (one that only reads state) during a state transition. If another protocol reads this view function's return value to make decisions (such as computing a price), it gets stale data. This pattern was exploited in 2023 against lending protocols that priced collateral using Curve pools' get_virtual_price() function, which can return a stale value while a pool is mid-withdrawal. (The separate Curve/Vyper incident of July 2023 was classic reentrancy: a Vyper compiler bug rendered the affected pools' reentrancy locks ineffective.)


15.2 Flash Loan Attacks

Flash loans are one of the most innovative — and most weaponized — features of DeFi. They represent a financial primitive that has no analog in traditional finance, and they have enabled some of the most devastating exploits in blockchain history.

15.2.1 What Is a Flash Loan?

A flash loan is an uncollateralized loan that must be borrowed and repaid within a single transaction. If the borrower does not repay the full amount (plus a small fee) by the end of the transaction, the entire transaction reverts as if it never happened. The lender takes zero risk because the atomicity of blockchain transactions guarantees repayment.
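The atomicity guarantee can be modeled in Python (illustrative; the class, method names, and numbers are invented for this sketch — the 9 bps fee mirrors Aave v2's 0.09% flash-loan fee). The lender snapshots state, lends, runs arbitrary borrower logic, and rolls everything back if repayment falls short:

```python
# Illustrative model of flash-loan atomicity: the loan and its repayment
# happen inside one "transaction"; if repayment falls short, every state
# change is rolled back as if the borrowing never occurred.
import copy

class FlashLender:
    def __init__(self, liquidity, fee_bps=9):    # 9 bps, as Aave v2 charged
        self.liquidity = liquidity
        self.fee_bps = fee_bps

    def flash_loan(self, amount, borrower_logic, state):
        snapshot = copy.deepcopy(state)          # pre-transaction state
        self.liquidity -= amount
        repaid = borrower_logic(amount, state)   # borrower does anything here
        owed = amount + amount * self.fee_bps // 10_000
        if repaid < owed:
            self.liquidity += amount             # revert: restore the lender...
            state.clear(); state.update(snapshot)  # ...and all other state
            return False
        self.liquidity += repaid
        return True

lender = FlashLender(liquidity=1_000_000)

def honest_arb(amount, state):
    state["profit"] = 500                        # pretend arbitrage gain
    return amount + amount * 9 // 10_000         # repay principal + fee

def deadbeat(amount, state):
    state["stolen"] = amount                     # tries to keep the funds
    return 0                                     # repays nothing

state = {}
ok = lender.flash_loan(100_000, honest_arb, state)
bad = lender.flash_loan(100_000, deadbeat, state)
print(ok, bad, state)   # True False {'profit': 500}
```

The deadbeat's "theft" simply never happens: the rollback erases it, which is exactly why flash lenders take zero credit risk.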

This mechanism means that anyone can temporarily control hundreds of millions of dollars in capital for the cost of a transaction fee (typically a few dollars in gas). In traditional finance, accessing that much capital requires collateral, credit history, and institutional relationships built over years. In DeFi, it requires writing a smart contract.

Flash loans have legitimate uses: arbitrage across decentralized exchanges, liquidation of undercollateralized positions, collateral swaps, and self-liquidation to avoid penalties. But they also enable attacks that would be impossible without temporary access to massive capital.

15.2.2 The Flash Loan Attack Pattern

The general pattern for a flash loan attack is:

  1. Borrow a large amount of tokens via flash loan (often tens or hundreds of millions of dollars).
  2. Manipulate a price oracle, governance vote, or other mechanism that is sensitive to token balances or trading volume.
  3. Exploit the manipulated state to extract value from a vulnerable protocol.
  4. Repay the flash loan plus fee.
  5. Profit from the difference between what was extracted and what was repaid.

The key insight is that flash loans give attackers temporary economic weight. A governance system that requires 51% of tokens to pass a proposal can be attacked by someone who holds zero tokens permanently but borrows 51% for a single transaction. A price oracle that uses on-chain spot prices can be manipulated by someone who executes a massive trade, reads the distorted price, and then reverses the trade — all in one transaction.

To understand why this is so dangerous, consider the fundamental assumption that most financial systems make: acquiring economic power requires sustained capital commitment. In traditional finance, buying a controlling stake in a company requires purchasing and holding shares. The capital is locked up, creating accountability. Flash loans eliminate this assumption entirely. An attacker can wield the economic power of a billion-dollar entity for microseconds, exploit a vulnerability, and return the capital — all atomically, all in one transaction. If the transaction fails for any reason, the attacker loses nothing but gas fees. The risk-reward ratio is radically asymmetric: near-zero risk for the attacker, potentially hundreds of millions in losses for the victim protocol.

15.2.3 Anatomy of a Flash Loan Attack: Beanstalk

The Beanstalk exploit of April 2022 is a textbook example. Beanstalk was a stablecoin protocol with an on-chain governance system whose emergency path allowed a proposal to be executed as soon as a two-thirds supermajority voted for it, provided 24 hours had passed since submission. The attacker:

  1. Flash-borrowed approximately $1 billion in various tokens, primarily from Aave.
  2. Used the borrowed tokens to acquire enough Beanstalk governance tokens to reach the supermajority threshold.
  3. Voted for a malicious governance proposal — submitted a day earlier to satisfy the 24-hour minimum — that transferred all protocol funds to the attacker.
  4. Executed the proposal through the emergency commit mechanism.
  5. Repaid the flash loan.
  6. Walked away, draining approximately $182 million in protocol value and netting roughly $76 million after repaying the loan and fees.

The fund-draining execution occurred in a single transaction. The governance system was not designed to handle the possibility that someone could temporarily acquire a supermajority of voting power, vote, and disappear — all atomically. Traditional governance assumes that acquiring a controlling stake requires sustained capital commitment. Flash loans break that assumption.

15.2.4 Mitigating Flash Loan Attacks

Defenses against flash loan attacks include:

  • Time-weighted average prices (TWAPs) instead of spot prices for oracles. A TWAP computes the average price over a window (e.g., 30 minutes), making single-transaction manipulation ineffective.
  • Governance timelocks. Require a delay between when a proposal is submitted and when it can be executed. This prevents same-transaction governance attacks.
  • Snapshot-based voting. Take a snapshot of token balances at a past block number for governance votes. Flash-borrowed tokens at the current block do not appear in the historical snapshot.
  • Flash loan-resistant oracle design. Use decentralized oracle networks like Chainlink that aggregate prices from multiple off-chain sources rather than relying on on-chain DEX prices.
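The first defense in the list above is easy to see numerically. The Python sketch below (illustrative; prices and window size are invented) compares a naive spot reading against a 30-observation average when an attacker distorts only the latest block:

```python
# Illustrative: why a TWAP blunts single-block manipulation. Thirty blocks
# of honest ~2,000 USDC/ETH pricing, with the latest block spiked to 10,000
# by a flash-loan trade. The spot reader is fooled; the average barely moves.

def twap(prices):
    return sum(prices) / len(prices)

honest = [2_000.0] * 30
manipulated = honest[:-1] + [10_000.0]   # attacker distorts one block only

spot = manipulated[-1]                    # naive oracle: latest spot price
avg = twap(manipulated)                   # average over the whole window
print(spot, avg)                          # 10000.0 vs ~2266.7
```

To move the TWAP materially, the attacker would have to hold the distorted price for most of the window — sustained exposure that arbitrageurs punish, which is precisely what the single-transaction flash-loan model cannot afford.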

15.3 Oracle Manipulation

Price oracles are the connective tissue between smart contracts and the outside world. A DeFi lending protocol needs to know the price of ETH to determine if a borrower's collateral is sufficient. A derivatives protocol needs to know asset prices to settle contracts. If an attacker can manipulate the price data that a smart contract consumes, they can trick the protocol into making decisions based on false information.

15.3.1 On-Chain vs. Off-Chain Oracles

On-chain oracles derive price data from decentralized exchange (DEX) trading activity. For example, a Uniswap pool's reserves reflect the current market price: if a pool holds 100 ETH and 200,000 USDC, the implied price is 2,000 USDC per ETH. The advantage of on-chain oracles is that they are trustless and always available. The critical disadvantage is that they can be manipulated by anyone who can move the market — and with flash loans, that is everyone.
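The manipulability of that implied price follows directly from the constant-product formula. The Python sketch below (illustrative; swap fees are ignored for simplicity) starts from the 100 ETH / 200,000 USDC pool described above and shows what a single large flash-borrowed trade does to the spot price:

```python
# Illustrative constant-product (x*y=k) pool: 100 ETH / 200,000 USDC implies
# 2,000 USDC per ETH. One large buy moves the implied spot price dramatically.

def swap_usdc_for_eth(eth_reserve, usdc_reserve, usdc_in):
    """Swap USDC into the pool (fees ignored); returns the new reserves."""
    k = eth_reserve * usdc_reserve
    new_usdc = usdc_reserve + usdc_in
    new_eth = k / new_usdc                # invariant: new_eth * new_usdc == k
    return new_eth, new_usdc

eth_r, usdc_r = 100.0, 200_000.0
print(usdc_r / eth_r)                     # implied price: 2000 USDC/ETH

# Attacker dumps 800,000 flash-borrowed USDC into the pool:
eth_r, usdc_r = swap_usdc_for_eth(eth_r, usdc_r, 800_000.0)
print(usdc_r / eth_r)                     # implied price: 50,000 USDC/ETH
```

One transaction inflated the on-chain "price" of ETH twenty-five-fold. Any protocol reading this pool's spot price at that instant would value ETH collateral at 50,000 USDC.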

Off-chain oracles like Chainlink aggregate price data from multiple centralized exchanges, API providers, and other sources. A network of independent oracle nodes reports prices, and the protocol takes the median value. Manipulating Chainlink requires corrupting a majority of independent oracle nodes, which is far more difficult than executing a large trade on a single DEX.

Hybrid approaches combine on-chain and off-chain data. Uniswap v3's built-in TWAP oracle computes the time-weighted average price over a configurable window, making single-transaction manipulation impractical (the attacker would need to maintain the manipulated price for the entire TWAP window, which requires sustained capital commitment and exposes them to arbitrage). Some protocols use Chainlink as the primary oracle with a Uniswap TWAP as a fallback, or vice versa, and trigger a pause if the two sources diverge significantly.

15.3.2 The Price Oracle Attack Pattern

The typical oracle manipulation attack proceeds as follows:

  1. Flash-borrow a large amount of Token A.
  2. Execute a massive swap on a DEX (e.g., Uniswap), dumping Token A for Token B. This dramatically moves the on-chain price.
  3. Interact with a victim protocol that reads this manipulated price. For example, borrow against overvalued collateral, or liquidate positions based on the distorted price.
  4. Reverse the original swap (or let it stand if profitable).
  5. Repay the flash loan.

The Cream Finance exploit of October 2021 used this pattern to drain $130 million. The attacker manipulated the price of Cream's lending token through a series of flash-borrowed trades, then used the inflated token as collateral to borrow real assets from Cream's pools.

15.3.3 The Mango Markets Exploit

The Mango Markets exploit of October 2022 on the Solana blockchain (worth approximately $114 million) demonstrated a variation: the attacker did not use flash loans (Solana's architecture makes flash loans harder) but instead used two accounts to manipulate the on-chain price of the MNGO token. One account held a massive long position; the other executed trades to push the price up. The inflated unrealized profit on the long position was then used as collateral to borrow all available assets from Mango's lending pools.

Notably, the exploiter, Avraham Eisenberg, publicly acknowledged the attack and argued it was a "profitable trading strategy" rather than an exploit. He was subsequently arrested by the FBI and convicted of fraud — a landmark case establishing that manipulating DeFi protocols can constitute criminal fraud regardless of how the attacker characterizes the activity.

15.3.4 Oracle Security Best Practices

  1. Never use spot prices from a single DEX pool. Spot prices can be manipulated in a single transaction.
  2. Use TWAPs (time-weighted average prices) that average over multiple blocks. Uniswap v3 provides built-in TWAP oracle functionality.
  3. Use decentralized oracle networks like Chainlink for critical price feeds. These aggregate data from multiple independent sources and are resistant to single-point manipulation.
  4. Implement circuit breakers. If a price moves more than a threshold (e.g., 20%) in a single block, pause the protocol or use a fallback price.
  5. Monitor for anomalous price movements. Off-chain monitoring systems can detect price manipulation and trigger emergency pauses.
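The circuit-breaker idea from the list above fits in a few lines. The Python sketch below (illustrative; the class name, 20% threshold, and fallback behavior are invented for this sketch) rejects any single update that moves the price beyond the threshold and pauses instead:

```python
# Illustrative circuit breaker: if a fresh price deviates from the last
# accepted price by more than a threshold, pause rather than act on it.

class GuardedOracle:
    def __init__(self, initial_price, max_move=0.20):
        self.price = initial_price
        self.max_move = max_move
        self.paused = False

    def update(self, new_price):
        move = abs(new_price - self.price) / self.price
        if move > self.max_move:
            self.paused = True          # anomalous jump: halt the protocol
            return self.price           # keep serving the last sane price
        self.price = new_price
        return self.price

oracle = GuardedOracle(2_000.0)
oracle.update(2_100.0)                  # +5%: accepted
served = oracle.update(10_000.0)        # +376%: breaker trips
print(served, oracle.paused)            # 2100.0 True
```

A real deployment would pair the pause with a fallback feed and an alerting path; the point here is only that a one-line deviation check converts a catastrophic mispricing into a recoverable halt.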

15.4 Front-Running and MEV

Maximal Extractable Value (MEV) — originally called Miner Extractable Value before Ethereum's transition to proof of stake — refers to the profit that block producers (validators) and specialized actors called searchers can extract by reordering, inserting, or censoring transactions within a block. MEV is not a bug in smart contracts but a structural property of public blockchains that creates a hostile execution environment for users.

15.4.1 How Front-Running Works

When a user submits a transaction to the Ethereum network, it enters the mempool — a public waiting area where pending transactions sit before being included in a block. The mempool is visible to everyone. Searchers run sophisticated software that monitors the mempool for profitable opportunities.

Front-running occurs when a searcher sees a pending transaction and submits their own transaction before it, paying a higher gas price to ensure priority. For example, if a user is about to make a large buy on a DEX, a front-runner can buy the same token first (pushing the price up), let the user's transaction execute at the now-higher price, and then sell at a profit.

Back-running is the reverse: the searcher submits a transaction immediately after the victim's transaction to capture the residual profit (such as an arbitrage opportunity created by the price impact of the victim's trade).

Sandwich attacks combine both: the attacker front-runs the victim's trade (buying before them and pushing the price up), lets the victim trade at the worse price, then back-runs by selling at the elevated price. The victim receives fewer tokens than they expected, and the difference is the attacker's profit.
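The arithmetic of a sandwich can be worked through on the same constant-product model (illustrative; fees ignored, all pool sizes and trade amounts invented). The victim's loss and the attacker's profit both fall out of the reserve math:

```python
# Illustrative sandwich on a constant-product pool (fees ignored): the
# attacker buys before the victim, the victim trades at a worse price,
# and the attacker sells afterward at the elevated price.

def buy_eth(eth_r, usdc_r, usdc_in):
    """Spend USDC, receive ETH; returns (eth_out, new_eth_r, new_usdc_r)."""
    k = eth_r * usdc_r
    new_usdc = usdc_r + usdc_in
    new_eth = k / new_usdc
    return eth_r - new_eth, new_eth, new_usdc

def sell_eth(eth_r, usdc_r, eth_in):
    """Spend ETH, receive USDC; returns (usdc_out, new_eth_r, new_usdc_r)."""
    k = eth_r * usdc_r
    new_eth = eth_r + eth_in
    new_usdc = k / new_eth
    return usdc_r - new_usdc, new_eth, new_usdc

eth_r, usdc_r = 1_000.0, 2_000_000.0          # pool at 2,000 USDC/ETH

# What the victim's 100,000 USDC buy would have returned unmolested:
fair_eth, _, _ = buy_eth(eth_r, usdc_r, 100_000.0)

# 1. Attacker front-runs with a 500,000 USDC buy (price rises).
atk_eth, eth_r, usdc_r = buy_eth(eth_r, usdc_r, 500_000.0)
# 2. Victim's buy executes at the worse price.
victim_eth, eth_r, usdc_r = buy_eth(eth_r, usdc_r, 100_000.0)
# 3. Attacker back-runs, selling into the elevated price.
atk_usdc, eth_r, usdc_r = sell_eth(eth_r, usdc_r, atk_eth)

print(round(fair_eth, 2), round(victim_eth, 2), round(atk_usdc - 500_000.0, 2))
```

The victim receives about 30.8 ETH instead of the roughly 47.6 ETH a clean execution would have given them; the difference funds the attacker's profit of about 36,500 USDC. A strict slippage limit on the victim's trade would have made step 2 revert.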

15.4.2 The MEV Supply Chain

The MEV ecosystem has evolved into a sophisticated supply chain:

  • Searchers are specialized actors who find MEV opportunities (arbitrage, liquidations, sandwich attacks) and construct transaction bundles.
  • Builders aggregate transaction bundles from searchers and construct complete blocks that maximize MEV extraction.
  • Proposers (validators) select the most profitable block from builders and propose it to the network.

This separation of roles is formalized through MEV-Boost, a protocol developed by Flashbots that allows validators to outsource block building to specialized builders. The majority of Ethereum blocks are now produced through this system.

15.4.3 MEV as a Tax on Users

MEV extraction functions as an invisible tax on DeFi users. Every swap on a DEX, every liquidation on a lending protocol, and every NFT purchase is potentially subject to MEV extraction. Estimates suggest that MEV extraction has cost Ethereum users over $600 million cumulatively. Sandwich attacks alone extracted over $200 million from DEX traders in 2023.

The existence of MEV also creates negative externalities for the network: priority gas auctions (searchers competing to front-run each other) drive up gas prices for all users, and the economic incentives can lead to blockchain instability (validators might reorder or revert blocks to capture MEV).

15.4.4 Mitigations

  • Private transaction pools. Services like Flashbots Protect allow users to submit transactions directly to builders, bypassing the public mempool and preventing front-running by other searchers.
  • Commit-reveal schemes. Users first submit an encrypted commitment, then reveal the actual transaction in a later block. Front-runners cannot extract value from encrypted commitments.
  • Batch auctions. Protocols like CoW Protocol batch transactions together and execute them at a uniform clearing price, eliminating the advantage of transaction ordering.
  • MEV-Share and MEV redistribution. Flashbots' MEV-Share protocol allows users to capture a portion of the MEV their transactions generate, rather than losing all of it to searchers.
  • Encrypted mempools. Proposals like threshold encryption schemes aim to encrypt pending transactions so their contents are hidden until they are committed to a block, making front-running impossible. These are still largely experimental.
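The commit-reveal scheme from the list above reduces to hashing the action with a secret salt. The Python sketch below is illustrative (on-chain implementations would use keccak256 rather than SHA-256; the function names are invented): the mempool sees only an opaque digest, and the later reveal is checked against it.

```python
# Illustrative commit-reveal: the user first publishes only a hash of the
# intended action plus a secret salt; front-runners see an opaque commitment.
# The action is revealed and checked against the commitment in a later block.
import hashlib, secrets

def commit(action: str, salt: bytes) -> str:
    return hashlib.sha256(salt + action.encode()).hexdigest()

def reveal_ok(commitment: str, action: str, salt: bytes) -> bool:
    return commit(action, salt) == commitment

salt = secrets.token_bytes(32)
c = commit("bid 50 ETH on item #7", salt)            # block N: hash is public
print(reveal_ok(c, "bid 50 ETH on item #7", salt))   # block N+k: True
print(reveal_ok(c, "bid 49 ETH on item #7", salt))   # tampered reveal: False
```

The salt prevents dictionary attacks on small action spaces: without it, a front-runner could hash every plausible bid and match the commitment.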

15.4.5 MEV and Smart Contract Design

For smart contract developers, the existence of MEV has practical implications. Any contract that interacts with DEXes or performs price-sensitive operations must account for the possibility that transaction ordering will be manipulated. Specifically:

  • DEX trades should always use slippage protection (minimum output amounts). Without slippage protection, a sandwich attack can extract the maximum possible value.
  • Auction mechanisms should use commit-reveal patterns rather than open bidding, where the final bid is visible in the mempool before inclusion.
  • Liquidation mechanisms should be designed to minimize the incentive for liquidation MEV, which can cascade into destabilizing feedback loops during market crashes.
  • Any function where the execution order between two transactions matters is potentially vulnerable to MEV extraction. Design for order-independence where possible.
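The first item in the list above, slippage protection, amounts to a single floor check at execution time. The Python sketch below is illustrative (the function name, quote values, and 0.5% tolerance are invented); it models the minimum-output guard that a DEX router enforces on-chain:

```python
# Illustrative minimum-output (slippage) check: the trade fails unless the
# realized output meets the user-specified floor, capping what a sandwich
# attack can extract from the trade.

def swap_with_min_out(quoted_out, realized_out, max_slippage=0.005):
    min_out = quoted_out * (1 - max_slippage)
    if realized_out < min_out:
        raise RuntimeError("Slippage exceeded")   # analogue of on-chain revert
    return realized_out

print(swap_with_min_out(47.62, 47.50))            # within 0.5%: executes
try:
    swap_with_min_out(47.62, 30.77)               # sandwiched: reverts
except RuntimeError as e:
    print(e)                                      # Slippage exceeded
```

A reverted trade still costs the victim gas, but it costs the sandwicher gas too while yielding them nothing, which removes the incentive to attack tightly bounded trades.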

15.5 Access Control Failures

Access control vulnerabilities occur when smart contract functions lack proper authorization checks, allowing unauthorized users to execute privileged operations. These are conceptually simple — a missing require statement or an incorrect modifier — but they have led to some of the largest losses in blockchain history.

15.5.1 The Parity Wallet Hack: $150 Million Frozen Forever

The Parity multi-signature wallet library is the most infamous access control failure in Ethereum history. The story unfolded in two acts.

Act I: The First Hack (July 2017). The Parity multi-sig wallet used a library contract containing the wallet logic, and individual wallet contracts that delegated calls to this library. The library contract's initWallet() function, which set the wallet owners, was not properly protected. An attacker called initWallet() directly on the library, made themselves the owner, and then drained funds from wallets that used this library. Approximately $31 million was stolen before white-hat hackers drained the remaining vulnerable wallets to protect them.

Act II: The Kill (November 2017). After the first hack was patched, the library contract was redeployed. However, the new library contract had a critical flaw: it was itself an uninitialized wallet. A user (who later claimed it was accidental) called initWallet() on the library contract, becoming its owner. They then called kill(), which executed selfdestruct on the library. Since all Parity multi-sig wallets delegated their logic to this library, every one of them immediately became non-functional. Approximately 587 wallets holding a combined 513,774 ETH (worth roughly $150 million at the time, and far more at subsequent ETH prices) were frozen permanently. The funds remain locked to this day — the library they depended on no longer exists, and there is no way to upgrade or recover them.

15.5.2 Common Access Control Patterns

Proper access control in Solidity typically uses one of these patterns:

Simple Ownable:

import "@openzeppelin/contracts/access/Ownable.sol";

contract MyContract is Ownable {
    // Note: OpenZeppelin v5 requires passing an initial owner,
    // e.g. constructor() Ownable(msg.sender) {}

    function adminFunction() external onlyOwner {
        // Only the owner can call this
    }
}

Role-Based Access Control:

import "@openzeppelin/contracts/access/AccessControl.sol";

contract MyContract is AccessControl {
    bytes32 public constant MINTER_ROLE = keccak256("MINTER_ROLE");
    bytes32 public constant PAUSER_ROLE = keccak256("PAUSER_ROLE");

    constructor() {
        _grantRole(DEFAULT_ADMIN_ROLE, msg.sender);
    }

    function mint(address to, uint256 amount) external onlyRole(MINTER_ROLE) {
        // Only addresses with MINTER_ROLE can call this
    }
}

15.5.3 The Checklist for Access Control

Every public or external function in a smart contract must answer the question: Who should be able to call this function? If the answer is not "anyone," the function must have explicit access control. Common failures include:

  • Missing onlyOwner or role checks on administrative functions.
  • Initializer functions that can be called by anyone (especially in proxy/upgradeable contracts).
  • Functions that use tx.origin for authentication instead of msg.sender (vulnerable to phishing via malicious intermediary contracts).
  • Constructors in library contracts that are never called because libraries are deployed differently than regular contracts.

15.6 Other Vulnerability Classes

Beyond the major categories above, several other vulnerability classes have caused significant losses.

15.6.1 Integer Overflow and Underflow (Historical)

Before Solidity 0.8.0, arithmetic operations did not check for overflow or underflow. A uint256 set to 0, when decremented by 1, would wrap around to 2^256 - 1 (the maximum value) rather than reverting. The BEC (BeautyChain) token hack of April 2018 used an integer overflow in a batchTransfer function to mint tokens worth billions out of thin air.

Since Solidity 0.8.0 (released in December 2020), all arithmetic operations revert on overflow and underflow by default. Developers who need unchecked arithmetic for gas optimization can use the unchecked block, but this must be done deliberately. This compiler-level fix has largely eliminated this vulnerability class, though contracts compiled with older versions remain vulnerable.
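The wraparound behavior is easy to illustrate outside the EVM. The following Python sketch (an illustration only, not EVM code) models unchecked versus checked uint256 subtraction:

```python
# Model of uint256 arithmetic (illustration only, not EVM code).
UINT256_MAX = 2**256 - 1

def unchecked_sub(a: int, b: int) -> int:
    """Pre-0.8.0 behavior: wrap around modulo 2^256."""
    return (a - b) % 2**256

def checked_sub(a: int, b: int) -> int:
    """Solidity >= 0.8.0 behavior: revert (here, raise) on underflow."""
    result = a - b
    if result < 0:
        raise OverflowError("arithmetic underflow")
    return result

# A balance of 0 decremented by 1 wraps to the maximum uint256 value.
assert unchecked_sub(0, 1) == UINT256_MAX
```

A "balance" that silently becomes 2^256 - 1 is exactly the failure mode behind exploits like BEC: the attacker engineers an underflow or overflow and walks away with an astronomically large balance.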

15.6.2 Signature Replay

Many protocols use off-chain signatures for gasless transactions, meta-transactions, or permit approvals. A signature replay attack occurs when a valid signature is used more than once. If a contract does not track which signatures have already been processed, an attacker can submit the same signed message repeatedly.

The defense is to include a nonce (a sequential counter) in the signed message and track used nonces in contract storage. EIP-712 (typed structured data hashing) provides a standard format for signable messages that includes a domain separator (preventing signatures from being replayed across different contracts or chains) and a nonce.

A particularly dangerous variant of signature replay occurs across chains. After Ethereum's merge and the proliferation of Layer 2 networks, the same contract may be deployed on multiple chains (Ethereum mainnet, Arbitrum, Optimism, Polygon, Base). If the signed message does not include the chain ID, a signature valid on one chain can be replayed on another. This is why EIP-712's domain separator includes chainId — but developers must ensure they are using it correctly. The Wintermute/Optimism incident of 2022, where 20 million OP tokens were lost, involved a transaction replay across chains due to a misconfigured deployment process.
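The nonce-plus-chain-ID defense can be sketched in a few lines. This Python model is illustrative only: the hash stands in for real EIP-712 structured hashing and ECDSA signature recovery, and the names are hypothetical:

```python
# Sketch of replay protection: bind each signed message to a nonce
# (prevents same-chain replay) and a chain ID (prevents cross-chain replay).
import hashlib

used_nonces = {}  # (signer, nonce) -> consumed

def message_digest(signer: str, nonce: int, chain_id: int, payload: str) -> str:
    # Stand-in for EIP-712 hashing; chain_id plays the role of the
    # domain separator's chainId field.
    return hashlib.sha256(f"{chain_id}|{signer}|{nonce}|{payload}".encode()).hexdigest()

def process(signer: str, nonce: int, chain_id: int, payload: str, expected_chain: int) -> str:
    if chain_id != expected_chain:
        raise ValueError("wrong chain: signature cannot be replayed here")
    if used_nonces.get((signer, nonce)):
        raise ValueError("signature replayed")
    used_nonces[(signer, nonce)] = True  # consume the nonce
    return message_digest(signer, nonce, chain_id, payload)

process("0xAlice", 0, 1, "approve", expected_chain=1)   # first use succeeds
# A second call with the same (signer, nonce) would raise "signature replayed".
```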

15.6.3 Denial of Service

Denial of service (DoS) vulnerabilities prevent legitimate users from interacting with a contract. Common patterns include:

  • Unexpected revert in a loop. If a function iterates over an array of addresses and sends ETH to each one, a single malicious address (a contract that reverts on receive()) can block the entire distribution. The fix is to use a "pull" pattern where users withdraw their own funds rather than a "push" pattern where the contract sends to everyone.
  • Block gas limit DoS. If a function's gas cost grows with the size of an array or mapping, an attacker can add enough entries to make the function exceed the block gas limit, rendering it permanently uncallable.
  • Griefing via selfdestruct. A contract that relies on address(this).balance == 0 for some logic can be griefed by sending it ETH via selfdestruct, which bypasses the receive() and fallback() functions and forces ETH into the contract. (Note: EIP-6780, included in the Dencun upgrade, significantly restricted selfdestruct behavior, but the general principle of not relying on precise balance checks remains valid.)
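The pull-over-push fix for the first DoS pattern can be modeled simply. In this illustrative Python sketch, one recipient's failure no longer blocks the others, because each party withdraws its own credited balance:

```python
# Toy model of the "pull" payment pattern: instead of pushing ETH to every
# recipient in a loop (where one reverting recipient blocks the whole batch),
# credit balances and let each recipient withdraw individually.
owed = {}  # recipient -> amount credited

def credit(recipient: str, amount: int) -> None:
    owed[recipient] = owed.get(recipient, 0) + amount

def withdraw(recipient: str) -> int:
    amount = owed.get(recipient, 0)
    owed[recipient] = 0   # zero the balance BEFORE the payout (effects first)
    return amount         # stand-in for the actual ETH transfer

credit("alice", 5)
credit("mallory", 7)  # even if mallory's own withdrawal fails, alice is unaffected
assert withdraw("alice") == 5
```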

15.6.4 tx.origin Phishing

tx.origin returns the address of the externally owned account that initiated the transaction, while msg.sender returns the immediate caller. If a contract uses tx.origin for authentication, an attacker can create a malicious contract that tricks the victim into calling it (for example, by disguising it as a legitimate interaction). The malicious contract then calls the victim's target contract, and tx.origin still returns the victim's address, passing the authentication check.

The fix is simple: never use tx.origin for authentication. Always use msg.sender.
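The call-chain confusion is easy to model. In this toy Python sketch, tx_origin and msg_sender are passed explicitly to show why the tx.origin check passes for the attacker's intermediary contract while the msg.sender check does not:

```python
# Toy model of tx.origin vs msg.sender in a call chain (illustration only).
# tx.origin is the EOA that signed the transaction; msg.sender is the
# immediate caller, which may be an attacker's intermediary contract.

def vulnerable_auth(tx_origin: str, msg_sender: str, owner: str) -> bool:
    # VULNERABLE: passes even when a malicious contract is the caller,
    # as long as the victim signed the original transaction.
    return tx_origin == owner

def safe_auth(tx_origin: str, msg_sender: str, owner: str) -> bool:
    # SAFE: only a direct call from the owner passes.
    return msg_sender == owner

# The victim (owner) is tricked into calling a malicious contract, which
# then calls the target. tx.origin is still the victim's address.
assert vulnerable_auth(tx_origin="victim", msg_sender="malicious_contract", owner="victim")
assert not safe_auth(tx_origin="victim", msg_sender="malicious_contract", owner="victim")
```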

15.6.5 Unsafe External Calls

Low-level calls (call, delegatecall, staticcall) return a boolean indicating success or failure but do not automatically revert on failure. If the return value is not checked, a failed external call is silently ignored. This has led to vulnerabilities where a contract believes a transfer succeeded when it actually failed.

// VULNERABLE - unchecked return value
payable(recipient).send(amount); // Returns false on failure, does not revert

// SAFE - checked return value
(bool success, ) = recipient.call{value: amount}("");
require(success, "Transfer failed");

15.7 Systematic Auditing Methodology

A smart contract audit is a structured review process designed to identify vulnerabilities, logic errors, and deviations from specification before a contract is deployed to mainnet. Professional audits are conducted by specialized firms (Trail of Bits, OpenZeppelin, Consensys Diligence, Sigma Prime, and others) and typically cost between $20,000 and $500,000 depending on contract complexity and audit depth.

15.7.1 The Audit Process

A systematic audit follows these phases:

Phase 1: Specification Review. Before reading a single line of code, the auditor reviews the protocol's documentation, architecture diagrams, and intended behavior. The question is: What is this contract supposed to do? A vulnerability is meaningless without understanding the intended behavior. A function that allows anyone to withdraw funds might be a critical vulnerability — or it might be the intended design of a faucet contract.

Phase 2: Architecture Analysis. Map the contract system: which contracts exist, how they interact, where external calls cross contract boundaries, which contracts are upgradeable (via proxy patterns), and where privileged access exists. Draw a trust diagram: which addresses does the system trust, and what can each trusted address do?

Phase 3: Automated Analysis. Run automated tools (Slither for static analysis, Mythril for symbolic execution, Echidna for fuzz testing). These tools find known vulnerability patterns quickly but produce false positives and miss logic errors. Automated analysis is a supplement to manual review, not a replacement.

Phase 4: Manual Line-by-Line Review. The core of any audit. The auditor reads every line of code, checking for:

  • Reentrancy vulnerabilities in functions that make external calls
  • Access control on every public/external function
  • Integer arithmetic edge cases (even with Solidity 0.8+, division by zero and precision loss remain possible)
  • Oracle dependency and potential manipulation
  • State consistency across functions (invariants)
  • Correct event emission for off-chain monitoring
  • Gas optimization issues that might cause DoS
  • Compliance with the specification from Phase 1

Phase 5: Testing and Exploitation. The auditor writes proof-of-concept exploits for any discovered vulnerabilities to confirm they are exploitable and demonstrate the impact. This also involves reviewing the project's existing test suite for coverage gaps.

Phase 6: Report Writing. Findings are categorized by severity:

  • Critical: Direct loss of funds or permanent freezing of funds. Must be fixed before deployment.
  • High: Significant economic impact or governance manipulation. Should be fixed before deployment.
  • Medium: Limited economic impact or conditional exploitability. Recommended to fix.
  • Low: Best practice violations, gas optimizations, code clarity. Nice to fix.
  • Informational: Suggestions, style issues, documentation gaps.

Each finding includes: a description of the vulnerability, the affected code, a proof of concept or exploit scenario, the severity assessment, and a recommended fix.

15.7.2 What Audits Cannot Do

An audit is not a guarantee of security. Audited contracts have been exploited. The reasons include:

  • Audits are point-in-time. They review the code as it existed during the audit. Any subsequent changes (even "minor" ones) can introduce new vulnerabilities.
  • Auditors are human. They miss things. Even the best auditors cannot guarantee they have found every vulnerability.
  • Specification bugs. If the specification itself is flawed (for example, a governance system that is inherently vulnerable to flash loan attacks by design), an audit that confirms the code matches the specification will miss the vulnerability.
  • Composability risks. A contract might be secure in isolation but vulnerable when composed with other protocols in ways the auditor did not anticipate.
  • Economic attacks. Some attacks are economically rational but not technically "bugs" — they exploit the protocol's game theory rather than its code.

Despite these limitations, audits dramatically reduce risk. The vast majority of exploited DeFi protocols either were not audited, ignored audit findings, or changed the code after the audit.


15.8 Automated Security Tools

A modern smart contract security workflow uses multiple automated tools, each with different strengths. No single tool catches everything; they are complementary.

15.8.1 Slither: Static Analysis

Slither, developed by Trail of Bits, is the most widely used static analysis tool for Solidity. It parses Solidity source code into an intermediate representation and runs a battery of detectors that check for known vulnerability patterns.

Installation and basic usage:

pip install slither-analyzer
slither ./contracts/MyContract.sol

Slither checks for over 90 vulnerability classes, including reentrancy, unprotected functions, unused return values, and dangerous delegatecall patterns. It runs in seconds and produces no false negatives for its supported detectors (if a pattern matches, it reports it) but can produce false positives (not every match is a real vulnerability).

Slither is the first tool you should run on any Solidity project. It catches low-hanging fruit immediately.

15.8.2 Mythril: Symbolic Execution

Mythril, developed by Consensys, uses symbolic execution to explore all possible execution paths through a smart contract. Rather than testing with specific inputs, it treats inputs as symbolic variables and uses a constraint solver (Z3) to determine which inputs can reach each code path.

pip install mythril
myth analyze ./contracts/MyContract.sol

Mythril can discover vulnerabilities that static analysis misses because it reasons about the reachability of states. For example, it can determine that a particular combination of function calls, in a specific order with specific inputs, leads to a state where funds can be stolen. The tradeoff is execution time: symbolic execution is computationally expensive and can take hours for complex contracts.

15.8.3 Echidna: Property-Based Fuzzing

Echidna, also from Trail of Bits, is a property-based fuzzer for smart contracts. The developer writes invariants — properties that should always be true — and Echidna generates random transactions to try to break them.

For example, an invariant for a token contract might be: "The sum of all balances should always equal the total supply." Echidna will generate thousands of random sequences of transfer, approve, transferFrom, mint, and burn calls, checking after each sequence whether the invariant still holds.

Fuzzing excels at finding edge cases that humans overlook: unusual input values, unexpected function call orderings, and boundary conditions. It complements static analysis (which checks patterns) and symbolic execution (which checks reachability) by testing random exploration of the state space.
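The same style of invariant check can be sketched in plain Python. (Echidna itself targets Solidity contracts and uses far smarter input generation; this toy fuzz loop only illustrates the idea of random operation sequences against an invariant.)

```python
# Toy property-based test: random mint/transfer sequences against a token
# model, checking the invariant sum(balances) == total_supply at every step.
import random

balances = {}
total_supply = 0
accounts = ["a", "b", "c"]

def mint(to: str, amt: int) -> None:
    global total_supply
    balances[to] = balances.get(to, 0) + amt
    total_supply += amt

def transfer(frm: str, to: str, amt: int) -> None:
    if balances.get(frm, 0) >= amt:
        balances[frm] -= amt
        balances[to] = balances.get(to, 0) + amt

random.seed(0)  # deterministic run for reproducibility
for _ in range(1000):
    if random.choice(["mint", "transfer"]) == "mint":
        mint(random.choice(accounts), random.randint(0, 100))
    else:
        transfer(random.choice(accounts), random.choice(accounts), random.randint(0, 100))
    # The invariant must hold after every operation; a buggy transfer that
    # credited without debiting would fail this assertion quickly.
    assert sum(balances.values()) == total_supply
```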

15.8.4 Certora Prover: Formal Verification

Formal verification is the most rigorous approach to smart contract security. The Certora Prover translates Solidity contracts and a set of specifications (written in a language called CVL, Certora Verification Language) into mathematical formulas and uses an SMT solver to prove or disprove whether the contract satisfies the specifications.

Unlike testing (which shows the absence of bugs for tested inputs) or fuzzing (which shows the absence of bugs for randomly generated inputs), formal verification proves that a property holds for all possible inputs. If Certora proves that "no sequence of transactions can cause the total token supply to exceed 1 billion," then that property is mathematically guaranteed (assuming the Solidity compiler and EVM behave correctly).

The limitation is that formal verification proves only the properties you specify. If you do not specify "no one except the owner can mint tokens," the prover will not check for it. Writing comprehensive specifications requires deep understanding of both the protocol and formal methods.

15.8.5 When to Use Each Tool

  • Slither (static analysis): runs in seconds. Strengths: fast, comprehensive pattern detection. Limitations: false positives, misses logic errors.
  • Mythril (symbolic execution): minutes to hours. Strengths: finds reachable vulnerabilities. Limitations: slow for complex contracts, path explosion.
  • Echidna (fuzzing): minutes to hours. Strengths: finds edge cases, tests invariants. Limitations: requires writing invariants, probabilistic.
  • Certora (formal verification): hours per run. Strengths: mathematical proof of properties. Limitations: requires specifications, expensive.

A mature security workflow runs Slither in CI/CD (every commit), uses Echidna for ongoing property testing, applies Mythril for pre-audit deep analysis, and employs Certora for the most critical protocol invariants.


15.9 Progressive Project: Auditing Our Voting Contracts

In Chapter 13, we built VotingToken.sol and SimpleVoting.sol as part of the progressive project. At that point, we focused on getting the functionality right. Now, we put on our auditor hats and examine those contracts with fresh eyes. Security should never be an afterthought; when it has been, the gaps must be found and fixed retroactively, and from here forward it must be built into every line we write.

15.9.1 VotingToken.sol Audit Findings

Let us systematically review the VotingToken contract from Chapter 13. Recall that it was a standard ERC-20 token with minting capability. Here are the findings an auditor would report:

Finding 1: Centralized Minting Risk (Medium)

If the VotingToken has a mint function restricted to the owner, the owner can unilaterally inflate the token supply and dilute voting power. In a governance context, this is a significant centralization risk.

Recommendation: If the token is used for governance, consider capping the supply at deployment, implementing a minting schedule, or requiring a governance vote to approve new minting. At minimum, emit events on mint to enable off-chain monitoring.

Finding 2: Missing Snapshot Mechanism (Medium)

The voting contract reads token balances at the time of voting. Without a snapshot mechanism, token holders can vote, transfer their tokens to another address, and vote again from the new address (a "double-voting" attack via token transfer).

Recommendation: Implement the ERC20Votes extension from OpenZeppelin, which provides checkpoint-based balance snapshots. Votes are counted based on balances at the block when the proposal was created, not at the time of voting.
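The checkpoint mechanism behind snapshot voting can be sketched as a binary search over per-account (block, balance) records. This is an illustrative Python model, not OpenZeppelin's ERC20Votes implementation:

```python
# Sketch of checkpoint-based vote snapshots: each account keeps a list of
# (block_number, balance) checkpoints; voting power for a proposal is the
# balance at the proposal's creation block, found by binary search.
import bisect

checkpoints = {
    "alice": [(10, 100), (50, 0)],   # held 100 tokens, sold all at block 50
    "bob":   [(55, 100)],            # bought alice's tokens at block 55
}

def voting_power(account: str, block: int) -> int:
    points = checkpoints.get(account, [])
    # Latest checkpoint at or before `block`.
    i = bisect.bisect_right([b for b, _ in points], block)
    return points[i - 1][1] if i else 0

proposal_block = 40
# Alice votes with the balance she held at the snapshot block; transferring
# the tokens to bob afterwards does not let the same tokens vote twice.
assert voting_power("alice", proposal_block) == 100
assert voting_power("bob", proposal_block) == 0
```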

Finding 3: No Transfer Restrictions During Active Proposals (Low)

Related to Finding 2, there is no mechanism to prevent token transfers while voting is active. In protocols without snapshot voting, this enables vote manipulation.

Recommendation: Either implement snapshot voting (preferred) or add a transfer lock during active voting periods.

15.9.2 SimpleVoting.sol Audit Findings

The SimpleVoting contract — which manages proposals, voting, and execution — has more substantial issues:

Finding 4: Flash Loan Governance Attack (Critical)

If voting power is based on current token balance (no snapshots), an attacker can flash-borrow tokens, vote, and return them in a single transaction. This allows zero-cost governance takeover.

Recommendation: Use snapshot-based voting where voting power is determined by token balances at a past block number. This is the single most critical fix.

Finding 5: Missing Proposal Validation (High)

If the contract does not validate the target and calldata of proposals before execution, a malicious proposal could call any function on any contract, including self-destructing the voting contract itself or transferring all treasury funds.

Recommendation: Implement a whitelist of allowed target contracts and function selectors. At minimum, add a timelock between proposal approval and execution to give the community time to review and potentially veto malicious proposals.

Finding 6: Reentrancy in Proposal Execution (High)

If the execute function makes an external call to the proposal's target contract before marking the proposal as executed, the target contract could re-enter the voting contract and execute the same proposal multiple times.

Recommendation: Apply the checks-effects-interactions pattern: mark the proposal as executed before making the external call. Add a reentrancy guard for defense in depth.

Finding 7: No Quorum Enforcement (Medium)

If the contract does not require a minimum number of votes for a proposal to pass, a proposal can pass with a single vote if no one opposes it. This is especially dangerous during periods of low engagement.

Recommendation: Implement a quorum requirement: a minimum percentage of total voting power must participate for a vote to be valid.

Finding 8: Unbounded Proposal Array (Low)

If proposals are stored in an unbounded array that is iterated in any function, the gas cost grows linearly with the number of proposals. After enough proposals, functions that iterate over this array will exceed the block gas limit.

Recommendation: Use mappings indexed by proposal ID rather than arrays. If iteration is necessary, implement pagination.

15.9.3 The Fixed Contracts

Our code directory contains VotingAudit.sol, a hardened version of the voting contracts that addresses all Critical and High findings. The key changes are:

  1. Snapshot voting using OpenZeppelin's ERC20Votes extension.
  2. Timelock on proposal execution (48-hour delay after approval).
  3. Reentrancy protection via checks-effects-interactions and ReentrancyGuard.
  4. Quorum enforcement requiring at least 10% of total supply to participate.
  5. Proposal validation with a whitelist of allowed targets.

This exercise demonstrates a critical principle: every smart contract should be audited before deployment. The contracts we wrote in Chapter 13 were functional, but they had vulnerabilities that would have been exploited in production. Security is iterative — write the code, review it, find the bugs, fix the bugs, and review again.


15.10 The Economics of Security

Smart contract security exists within an economic framework. The decision to invest in security — audits, bug bounties, formal verification, insurance — is ultimately a cost-benefit analysis. But it is a cost-benefit analysis where the downside is catastrophic and irreversible.

15.10.1 The Cost of Audits

Professional audit costs vary widely based on contract complexity, audit depth, and the reputation of the audit firm:

  • Simple token or NFT: $5,000 - $20,000, 1-2 weeks.
  • DeFi lending protocol: $50,000 - $200,000, 4-8 weeks.
  • Complex cross-chain bridge: $100,000 - $500,000, 8-16 weeks.
  • Multiple-audit comprehensive review: $200,000 - $1,000,000+, 3-6 months.

These costs seem high until compared to the alternative. The Euler Finance hack cost $197 million. The Beanstalk exploit cost $182 million. The Wormhole bridge hack cost $326 million. An audit that prevents even one of these exploits generates a return on investment measured in thousands of percent.

15.10.2 Bug Bounty Programs

Bug bounties create economic incentives for security researchers to find and responsibly disclose vulnerabilities rather than exploit them. Immunefi, the dominant bug bounty platform in Web3, has facilitated over $100 million in bounty payouts.

Bounty sizes for critical vulnerabilities in major protocols can be substantial:

  • MakerDAO: Up to $10 million
  • Optimism: Up to $2 million
  • Uniswap: Up to $3 million
  • Wormhole (post-hack): Up to $10 million

The economic logic is clear: paying a researcher $1 million to report a vulnerability that could cause $200 million in losses is an extraordinarily good deal for the protocol and its users.

15.10.3 Insurance

DeFi insurance protocols like Nexus Mutual and InsurAce allow users to purchase coverage against smart contract exploits. If an audited protocol is hacked despite the audit, insurance can compensate affected users.

However, DeFi insurance has limitations: coverage is often limited relative to total protocol TVL (total value locked), premiums can be expensive (2-5% annually), and claims assessment can be contentious (the insurance DAO must vote on whether the event qualifies for payout). There is also the recursive risk problem: insurance protocols are themselves smart contracts that could be exploited. The security of your insurance depends on the security of the insurance protocol's code — turtles all the way down.

Despite these limitations, the DeFi insurance market is growing. Nexus Mutual has paid out over $20 million in claims. The market signals that the industry is maturing: protocols that offer insurance coverage attract more TVL because risk-averse users prefer the additional safety net, even with the premium cost.

15.10.4 The True Cost of NOT Auditing

The most expensive audit is the one you did not pay for. The economic argument for security investment is overwhelming:

  • Direct losses: Stolen or frozen funds.
  • Reputational damage: Users and investors lose trust. Many hacked protocols never recover.
  • Legal liability: Protocol teams face potential lawsuits and regulatory action. The Mango Markets exploiter was criminally prosecuted.
  • Opportunity cost: Developer time spent on incident response, fund recovery, and rebuilding rather than building new features.
  • Insurance costs increase: Hacked protocols face higher premiums or are uninsurable.

The security-conscious protocol treats audit costs not as an expense but as an investment — the same way a bank treats vault construction costs not as an expense but as a prerequisite for operating.

Consider the math concretely. A comprehensive audit of a medium-complexity DeFi protocol costs approximately $100,000 and takes 4-6 weeks. If the protocol manages $50 million in TVL and the audit prevents a single exploit that would have drained 50% of TVL ($25 million), the return on investment is 250x. Even if the probability of an exploit is only 5% without the audit, the expected value of the audit is $1.25 million — more than ten times its cost. The asymmetry is overwhelming: audit costs are bounded and predictable; exploit costs are unbounded and catastrophic.
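The arithmetic in the paragraph above is easy to make explicit. Note that the 5% exploit probability is the text's illustrative assumption, not a measured figure:

```python
# The back-of-envelope audit economics from the text, made explicit.
audit_cost = 100_000            # comprehensive audit, medium-complexity protocol
tvl = 50_000_000                # total value locked
loss_if_exploited = tvl // 2    # assume an exploit drains 50% of TVL
p_exploit = 0.05                # assumed exploit probability without the audit

expected_saving = p_exploit * loss_if_exploited

assert loss_if_exploited == 25_000_000
assert loss_if_exploited // audit_cost == 250   # 250x if the exploit occurs
assert expected_saving == 1_250_000             # expected value of the audit
assert expected_saving / audit_cost == 12.5     # > 10x the audit's cost
```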


15.11 Building a Security Mindset

The technical content in this chapter — reentrancy, flash loans, oracles, MEV — is essential knowledge. But the deeper lesson is about mindset. Smart contract security requires a fundamentally different way of thinking about code.

15.11.1 Adversarial Thinking

In traditional software development, you think about what the user wants to do and build for the happy path. In smart contract development, you must think about what an adversary could do and defend against every possible path. Every public function is an attack surface. Every external call is a potential re-entry point. Every price feed is a potential manipulation vector. Every governance mechanism is a potential takeover target.

The mindset shift is from "does this work correctly when used as intended?" to "can this be exploited if used in the worst possible way by the most sophisticated attacker in the world who has unlimited capital via flash loans?"

This is not paranoia — it is realism. Professional smart contract auditors report finding critical or high-severity vulnerabilities in the majority of contracts they review. Many of these are written by experienced developers who simply did not think adversarially. The difference between a secure contract and a vulnerable one is often not technical skill but the habit of asking "what if?" at every decision point.

Practical exercises for developing adversarial thinking:

  • For every function you write, ask: Who can call this? What happens if they call it with extreme values (0, MAX_UINT256)? What happens if they call it repeatedly? What happens if they call it from a contract rather than an EOA? What if they combine it with a flash loan?
  • For every external call you make, ask: What happens if the recipient is malicious? What happens if it reverts? What happens if it re-enters my contract? What state has been committed, and what state is still pending?
  • For every price or balance you read, ask: Can this value be manipulated? By whom? At what cost? In a single transaction?

15.11.2 Defense in Depth

No single defense is sufficient. The checks-effects-interactions pattern prevents reentrancy — but you add a reentrancy guard anyway. TWAP oracles resist flash loan manipulation — but you add circuit breakers anyway. Access control restricts privileged functions — but you add timelocks anyway.

Each layer of defense addresses the failure of the layer below it. This is defense in depth, and it is the standard in every mature security discipline, from network security to physical security.

15.11.3 Immutability Changes Everything

The most important mental model shift for smart contract developers is internalizing that you cannot fix bugs after deployment. In web development, a production bug is an annoyance: deploy a hotfix, roll back if needed, compensate affected users. In smart contract development, a production bug is a crisis: the code cannot be changed, the funds may be gone, and the damage may be permanent.

This reality demands a different relationship with testing, review, and deployment. Every deployment should be preceded by:

  1. Comprehensive unit and integration tests.
  2. At least one independent audit.
  3. A staged rollout (deploy to testnet, deploy to mainnet with limited TVL, gradually increase limits).
  4. A bug bounty program.
  5. An incident response plan.

15.12 Summary and Looking Ahead

Smart contract security is not one topic — it is the lens through which every other topic in this textbook must be viewed. The vulnerabilities we have studied in this chapter — reentrancy, flash loan attacks, oracle manipulation, MEV, access control failures — are not edge cases. They are the standard attack vectors that have been used, repeatedly, to steal billions of dollars from real protocols.

The key principles from this chapter:

  • Reentrancy is prevented by the checks-effects-interactions pattern and reentrancy guards. The DAO hack demonstrated the consequences of getting this wrong and led to the Ethereum/Ethereum Classic fork.
  • Flash loans give attackers temporary access to unlimited capital, breaking assumptions about who can be a "whale." Time-weighted price averaging, governance timelocks, and snapshot-based voting mitigate flash loan attacks.
  • Oracle manipulation is prevented by using decentralized oracle networks (like Chainlink) rather than on-chain spot prices, and by implementing circuit breakers for anomalous price movements.
  • MEV and front-running are structural properties of public blockchains. Private transaction submission, commit-reveal schemes, and batch auctions reduce their impact.
  • Access control must be explicit on every function. The Parity wallet hack demonstrated that a missing access check can freeze $150 million forever.
  • Auditing is a systematic process: specification review, automated analysis, manual review, testing, and reporting. It is expensive but far cheaper than exploitation.
  • Security tools (Slither, Mythril, Echidna, Certora) provide complementary capabilities. No single tool catches everything.

In Chapter 16, we will turn to DeFi protocols — decentralized exchanges, lending platforms, and yield aggregators. Every vulnerability we studied in this chapter will be directly relevant: DeFi protocols are the primary targets for smart contract exploits because that is where the money is. The security mindset you have developed here is not optional for understanding DeFi — it is a prerequisite.

Checkpoint: You should now be able to explain what reentrancy is and how to prevent it, describe how a flash loan attack works, list the phases of a smart contract audit, and conduct a basic security review of a simple smart contract. If any of these feel uncertain, revisit the relevant section before proceeding to Chapter 16.


In the next chapter, we enter the world of decentralized finance — where smart contracts replace banks, exchanges, and insurance companies, and where every vulnerability we studied here has been exploited for real money.