Learning Objectives
- Architect a full-stack dApp identifying the responsibilities of each layer (contracts, frontend, storage, indexing)
- Connect a JavaScript frontend to Ethereum smart contracts using ethers.js v6
- Store and retrieve data on IPFS and understand the tradeoffs vs. on-chain storage
- Write a comprehensive test suite covering happy paths, edge cases, and security scenarios
- Deploy and verify smart contracts on an Ethereum testnet with a reproducible process
In This Chapter
- 33.1 From Smart Contract to Application
- 33.2 The dApp Architecture Stack
- 33.3 Reviewing Our Progressive Project Components
- 33.4 The Final Contract: VotingDApp.sol
- 33.5 Frontend Development with ethers.js
- 33.6 Decentralized Storage with IPFS
- 33.7 Indexing with The Graph
- 33.8 Testing: The Complete Suite
- 33.9 Deployment Pipeline
- 33.10 Development Best Practices
- 33.11 What We Have Built: The Complete Progressive Project Summary
- 33.12 Summary and Bridge to Chapter 34
Chapter 33: dApp Development: Building a Full-Stack Decentralized Application
33.1 From Smart Contract to Application
Throughout this textbook, you have written Solidity contracts, tested them in isolation, analyzed token economics, and studied the governance structures that underpin DAOs. Each of those activities happened in a controlled environment: a Hardhat console, a unit test, a whiteboard diagram. But a smart contract sitting on a blockchain with no way for an ordinary person to interact with it is like a database with no application server and no user interface. It is technically functional and practically useless.
This chapter is about building the bridge between a set of smart contracts and a real human being sitting at a real computer, clicking a button that says "Vote." That bridge is the decentralized application, or dApp, and constructing it requires an entirely different set of skills from the ones you have used so far. You will need to understand how a browser communicates with a blockchain node, how a wallet signs transactions on behalf of a user, how proposal metadata that would cost thousands of dollars to store on-chain can live on a decentralized file system for pennies, how an indexing layer makes it possible to query historical data that the blockchain was never designed to serve efficiently, and how all of these pieces fit together into a development workflow that you can reason about, test, and deploy with confidence.
This is also the culmination of the progressive project that has been running since Chapter 2. Over the course of this textbook, you have built components of a decentralized governance system piece by piece:
- Chapter 2: You deployed your first contract on a local blockchain and understood the transaction lifecycle.
- Chapter 6: You examined how consensus mechanisms secure the network your contracts rely on.
- Chapter 11: You wrote your first ERC-20 token, the foundation of governance weight.
- Chapter 13: You deepened your Solidity skills, writing more complex logic and understanding the EVM's execution model.
- Chapter 14: You learned to write tests and use Hardhat's development environment.
- Chapter 15: You studied smart contract security, learning the attack patterns that can destroy a governance system overnight.
- Chapter 26: You designed token economics, reasoning about supply, distribution, and incentive alignment.
- Chapter 28: You studied DAOs as organizational structures, understanding the governance flows that your contracts must implement.
Now, in Chapter 33, you will integrate every one of those components into a single, working, full-stack decentralized application. By the end of this chapter, you will have a voting dApp that a user can open in a browser, connect their wallet, browse proposals stored on IPFS, cast votes weighted by their governance token holdings, and watch the results in real time. You will have a test suite that gives you confidence the system works. You will have a deployment pipeline that takes the application from your laptop to a public testnet. And you will have the conceptual vocabulary to understand what "decentralized" means at each layer of the stack and where that decentralization breaks down.
💡 Why This Matters: The vast majority of blockchain's value to end users flows through dApps. DeFi protocols, NFT marketplaces, DAOs, prediction markets, decentralized social networks — all of them are dApps. Understanding how to build one is the single most practical skill in the blockchain developer's toolkit.
Let us begin.
33.2 The dApp Architecture Stack
Before writing a single line of code, you need a mental model of how the pieces fit together. A traditional web application has a well-understood architecture: a frontend (HTML, CSS, JavaScript running in a browser) communicates with a backend (an application server running business logic) which reads from and writes to a database. The backend is the single source of truth, and the frontend is a window into it.
A dApp replaces some of those layers and adds new ones. Here is the full stack, from bottom to top:
Layer 1: The Blockchain (State and Execution)
The blockchain is the database and the application server combined. Your smart contracts live here. When a user casts a vote, the transaction that records that vote is processed by the EVM and stored permanently in the blockchain's state. The blockchain is the single source of truth for everything that matters: who holds tokens, which proposals exist, how many votes each proposal has received, whether a proposal has been executed.
Key constraints: Storage is expensive (roughly 20,000 gas to store a 256-bit word, which at typical gas prices translates to dollars, not cents). Computation is expensive. There is no built-in way to push data to a client; the client must pull. There is no efficient way to query historical data across many blocks.
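To make "dollars, not cents" concrete, here is a back-of-the-envelope sketch. The 30 gwei gas price and $3,000 ETH price are illustrative assumptions, not fixed facts; only the 20,000-gas figure for writing a fresh storage slot comes from the text above.

```javascript
// Rough cost of one SSTORE of a new 256-bit word (20,000 gas).
// Gas price and ETH price below are assumed, illustrative values.
const GAS_PER_WORD = 20_000n;   // SSTORE to a previously empty slot
const GWEI = 10n ** 9n;         // wei per gwei
const gasPriceGwei = 30n;       // assumed gas price
const ethUsd = 3000n;           // assumed ETH price in USD

// cost in wei = gas * gasPrice; convert to USD cents to stay in BigInt
const costWei = GAS_PER_WORD * gasPriceGwei * GWEI;
const costUsdCents = (costWei * ethUsd * 100n) / (10n ** 18n);

console.log(`Storing one 32-byte word: ~$${Number(costUsdCents) / 100}`);
// 20,000 gas * 30 gwei = 0.0006 ETH, about $1.80 at $3,000/ETH
```

Under these assumptions a single word costs on the order of a dollar or two, so storing even a short paragraph of proposal text on-chain quickly reaches tens of dollars.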
Layer 2: The Node / RPC Provider
Your frontend does not communicate with "the blockchain" directly. It communicates with a node — a computer running Ethereum client software (Geth, Nethermind, Besu, etc.) that maintains a copy of the blockchain state and can process JSON-RPC requests. In practice, most dApp developers use a hosted RPC provider like Infura, Alchemy, or QuickNode rather than running their own node. The provider gives you an HTTP or WebSocket endpoint that your frontend can call.
Key constraints: You are trusting the provider to return accurate data. If Infura lies to you, your frontend displays incorrect information. This is a centralization pressure point.
Layer 3: The Indexing Layer (The Graph)
The blockchain stores state, but it does not store it in a way that is easy to query. If you want to answer the question "What are all proposals created in the last 30 days, sorted by vote count?" you cannot do that with a single RPC call. You would have to scan every block, decode every event log, and aggregate the results yourself. This is prohibitively slow.
The Graph solves this problem. It is a decentralized protocol for indexing blockchain data. You define a subgraph — essentially a schema and a set of event handlers — and The Graph's indexers process every relevant event, store the results in a queryable database, and expose a GraphQL API that your frontend can call.
Key constraints: The Graph adds a dependency. If The Graph's indexers are slow or down, your frontend cannot display historical data (though it can still send transactions).
Layer 4: Decentralized Storage (IPFS)
For data that is too large or too expensive to store on-chain — proposal descriptions, images, metadata — you use a decentralized file system. IPFS (InterPlanetary File System) is the most common choice. IPFS uses content addressing: you upload a file, and IPFS returns a CID (Content Identifier) that is a hash of the file's contents. Anyone who knows the CID can retrieve the file from any IPFS node that has it. You store only the CID on-chain, which is a 32-byte hash — far cheaper than storing the full proposal text.
Key constraints: IPFS does not guarantee availability. If no node is pinning your file, it will be garbage collected and become unretrievable. You need a pinning service (Pinata, web3.storage, Filebase) or run your own IPFS node.
Layer 5: The Wallet (MetaMask and Signing)
The wallet is the user's identity and their authorization mechanism. When a user connects MetaMask to your dApp, they are granting your frontend permission to (a) read their Ethereum address and (b) prompt them to sign transactions. The wallet never gives your frontend access to the user's private key. Every transaction must be explicitly approved by the user in the wallet's popup window.
Key constraints: The wallet is a user experience bottleneck. Every state-changing action requires a popup, a confirmation, a wait for the transaction to be mined. Gas fees are visible and sometimes alarming to new users.
Layer 6: The Frontend (HTML, CSS, JavaScript)
The frontend is a standard web application, typically built with React, Vue, or even plain HTML and JavaScript. It uses a library like ethers.js to communicate with the blockchain through the wallet's injected provider. It reads data from The Graph's GraphQL API, fetches metadata from IPFS, and presents everything in a user interface that hides as much blockchain complexity as possible.
Key constraints: The frontend is usually hosted on a centralized server (Vercel, Netlify, AWS). This makes it a censorship point. If the hosting provider takes it down, users cannot access the dApp through that URL — though they can still interact with the smart contracts directly through Etherscan or a command-line interface.
The Architecture Diagram
┌─────────────────────────────────────────────────────┐
│ USER'S BROWSER │
│ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ Frontend │ │ MetaMask │ │
│ │ (app.js) │◄─►│ Wallet │ │
│ └──────┬───────┘ └──────┬───────┘ │
│ │ │ │
└─────────┼───────────────────┼────────────────────────┘
│ │
Read data Sign & send
(GraphQL, transactions
IPFS CIDs) (JSON-RPC)
│ │
▼ ▼
┌──────────────┐ ┌──────────────────┐
│ The Graph │ │ RPC Provider │
│ (Indexer) │ │ (Alchemy/Infura)│
└──────┬───────┘ └──────┬───────────┘
│ │
│ ┌──────▼───────────┐
└───────────►│ Blockchain │
│ ┌────────────┐ │
│ │VotingDApp │ │
│ │ .sol │ │
│ └────────────┘ │
└──────────────────┘
┌──────────────┐
│ IPFS │ ◄── Proposal metadata (title, description, links)
│ (Pinata) │ CID stored on-chain, content off-chain
└──────────────┘
This architecture is the mental model you should hold in your head for the rest of this chapter. Every section that follows will zoom in on one or more of these layers.
33.3 Reviewing Our Progressive Project Components
Before we build the final application, let us take inventory of what we already have. Each prior chapter contributed a component:
The Governance Token (Chapter 11, refined in Chapter 26)
We built an ERC-20 token called GovToken with a fixed supply of 1,000,000 tokens. The token includes the ERC20Votes extension from OpenZeppelin, which tracks voting power via checkpoints. A holder must delegate their tokens (to themselves or to another address) before their voting power is active. This is a critical detail: holding tokens is not the same as having voting power.
The Governor Contract (Chapter 28)
We built a GovernorContract that implements OpenZeppelin's Governor framework. It supports:
- Creating proposals with a description and a set of target contracts, values, and calldata
- A voting delay (time between proposal creation and voting start)
- A voting period (duration of the vote)
- A quorum requirement (minimum votes for the proposal to pass)
- A timelock that delays execution after a proposal passes
The Timelock Controller (Chapter 28)
The TimelockController adds a mandatory delay between when a proposal passes and when it can be executed. This gives token holders time to exit if they disagree with a passed proposal. The timelock is the actual "owner" of any controlled resources — the Governor can only execute actions through it.
Testing Foundations (Chapter 14)
We wrote unit tests using Hardhat and Chai, covering basic contract functionality. We learned to use loadFixture for test setup, expect for assertions, and time.increase to simulate the passage of time.
Security Patterns (Chapter 15)
We studied reentrancy, access control, integer overflow, and other attack vectors. We applied the checks-effects-interactions pattern and used OpenZeppelin's battle-tested libraries wherever possible.
What we have not yet built is the glue that binds these components together: the unified contract that a frontend can interact with, the frontend itself, the IPFS integration for proposal storage, the indexing layer for efficient queries, and the deployment pipeline that puts it all on a live network.
That is what this chapter delivers.
33.4 The Final Contract: VotingDApp.sol
Our final contract integrates all prior components into a cohesive system. Rather than rewriting the Governor framework from scratch, we extend OpenZeppelin's modular Governor contracts. This is the correct approach in production: you compose battle-tested modules rather than reimplementing complex logic.
Contract Architecture
The system consists of three contracts that work together:
- GovernanceToken.sol — An ERC-20 token with voting capabilities.
- VotingDApp.sol — The Governor contract that manages proposals and voting.
- TimelockController — OpenZeppelin's timelock (used directly, not subclassed).
The Governor inherits from five OpenZeppelin modules:
Governor // Core proposal and voting logic
GovernorSettings // Configurable voting delay, period, and proposal threshold
GovernorCountingSimple // For, Against, Abstain vote counting
GovernorVotes // Integration with ERC20Votes token
GovernorTimelockControl // Execution through a timelock
Key Design Decisions
Why separate the token from the governor? Because separation of concerns is a fundamental principle. The token is a financial instrument; the governor is a governance mechanism. A single token might be used by multiple governors (e.g., one for protocol parameters, one for treasury management). Coupling them would make the system rigid.
Why use a timelock? Because governance without a timelock is a rug pull waiting to happen. If a malicious proposal passes at 3 AM when most token holders are asleep, the timelock gives them a window to react — selling tokens, moving funds, or coordinating a response.
Why GovernorCountingSimple instead of a custom counting module? Because For/Against/Abstain covers the vast majority of governance use cases, and the implementation has been audited by multiple security firms. A custom counting module introduces risk without proportional benefit.
Walking Through the Contract
The VotingDApp.sol contract (see code/contracts/VotingDApp.sol for the complete source) defines the governance parameters in its constructor:
- Voting delay: 1 block (in production, this would be higher — perhaps 1 day worth of blocks — to give people time to acquire voting power before a vote starts).
- Voting period: 50,400 blocks (approximately 1 week on Ethereum mainnet at 12-second block times).
- Proposal threshold: 0 (any token holder can propose; in production, you might require a minimum balance).
- Quorum: 4% of total supply (this is a common default; higher quorums provide more legitimacy but risk failed proposals).
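The parameter values above can be sanity-checked with a little arithmetic. The 1,000,000-token supply comes from the GovernanceToken described earlier; the 12-second block time is Ethereum mainnet's post-merge slot time.

```javascript
// Sanity-check the governance parameters described above.
const SECONDS_PER_BLOCK = 12; // Ethereum mainnet slot time
const votingPeriodBlocks = (7 * 24 * 60 * 60) / SECONDS_PER_BLOCK;
console.log(votingPeriodBlocks); // 50400 blocks, roughly one week

// Quorum: 4% of a 1,000,000-token supply (18 decimals), kept in BigInt
// because token amounts exceed Number's safe-integer range.
const totalSupply = 1_000_000n * 10n ** 18n;
const quorum = (totalSupply * 4n) / 100n;
console.log(quorum === 40_000n * 10n ** 18n); // true: 40,000 tokens
```

So a proposal needs at least 40,000 tokens' worth of votes to meet quorum, and voting stays open for the 50,400 blocks listed above.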
The contract also includes two custom additions beyond the standard Governor:
- An executedProposals mapping that records the timestamp when each proposal was executed. This is useful for frontends that want to display execution history.
- An Executed event with additional metadata that makes indexing easier.
⚠️ Security Note: Every function that the Governor needs to override (such as votingDelay, votingPeriod, state, proposalThreshold, _queueOperations, _executeOperations, _cancel, and proposalNeedsQueuing) must be explicitly overridden when multiple base contracts define them. Solidity's linearization rules require this. Missing an override causes a compilation error; misimplementing one causes a governance vulnerability.
The GovernanceToken Contract
The GovernanceToken.sol contract (see code/contracts/GovernanceToken.sol) is an ERC-20 token with the ERC20Votes extension. The key functionality:
- Constructor mints the total supply to the deployer.
- ERC20Permit allows gasless approvals via EIP-2612 signatures.
- ERC20Votes tracks voting power through checkpoints. Every transfer updates the checkpoint history so that voting power at any past block can be queried.
- The clock() function returns block.number (block-number-based voting, which is the standard for Ethereum mainnet).
The critical user-facing behavior is delegation. After receiving tokens, a user must call delegate(address) to activate their voting power. If they delegate to themselves, they can vote directly. If they delegate to another address, that address votes on their behalf. Until delegation occurs, the tokens have zero voting weight.
// User must delegate to activate voting power
token.delegate(msg.sender); // Self-delegation: "I will vote with my own tokens"
This is one of the most common sources of confusion for new DAO participants. Your frontend must handle it gracefully — either by auto-prompting delegation after token receipt or by displaying a clear "Activate Voting Power" button.
33.5 Frontend Development with ethers.js
The frontend is where the user meets the blockchain. Everything you have built so far — the token, the governor, the timelock — is invisible to an end user. They see a web page with buttons. The quality of that web page determines whether your governance system is used by thousands of people or abandoned after a week.
Why ethers.js?
The two dominant libraries for Ethereum frontend development are ethers.js and web3.js. We use ethers.js (v6) for several reasons:
- Cleaner API: ethers.js distinguishes between a Provider (a read-only connection to the network) and a Signer (an entity that can sign transactions). This separation makes the code easier to reason about.
- Smaller bundle size: ethers.js v6 is modular and tree-shakable, resulting in smaller frontend bundles.
- Better TypeScript support: ethers.js was designed with TypeScript from the ground up.
- Active maintenance: ethers.js v6 was a major rewrite with modern JavaScript patterns (BigInt instead of BigNumber, native ESM support).
📊 Library Comparison:

| Feature | ethers.js v6 | web3.js v4 |
|---------|-------------|------------|
| Provider/Signer separation | Yes | No (single Web3 instance) |
| BigInt support | Native | Native (v4+) |
| Bundle size (min+gzip) | ~120 KB | ~180 KB |
| ENS resolution | Built-in | Plugin |
| License | MIT | LGPL-3.0 |
Connecting to MetaMask
The connection flow is the first thing that happens when a user visits your dApp. Here is the sequence:
- The user clicks "Connect Wallet."
- Your frontend calls window.ethereum.request({ method: 'eth_requestAccounts' }).
- MetaMask displays a popup asking the user to approve the connection.
- If the user approves, MetaMask returns an array of account addresses.
- Your frontend creates an ethers.js BrowserProvider wrapping the MetaMask provider.
- You can now read data (via the provider) and send transactions (via a signer obtained from the provider).
The complete code is in code/frontend/app.js, but let us walk through the critical parts:
// Detect MetaMask
if (typeof window.ethereum === 'undefined') {
  showError('Please install MetaMask to use this dApp.');
  return;
}
// Create provider and signer
const provider = new ethers.BrowserProvider(window.ethereum);
const signer = await provider.getSigner();
const address = await signer.getAddress();
The BrowserProvider class wraps the window.ethereum object that MetaMask injects into every page. The getSigner() method returns a Signer object that can sign transactions on behalf of the connected account.
Reading Contract State
Once connected, you can read data from the blockchain without requiring the user to sign anything. Reading is free — it does not cost gas because it does not create a transaction:
const token = new ethers.Contract(TOKEN_ADDRESS, tokenABI, provider);
const governor = new ethers.Contract(GOVERNOR_ADDRESS, governorABI, provider);
// Read the user's token balance
const balance = await token.balanceOf(address);
// Read the user's voting power
const votes = await token.getVotes(address);
// Read a proposal's state
const state = await governor.state(proposalId);
Notice that we pass provider (not signer) to the contract constructor for read operations. This is ethers.js's way of expressing the intent: provider for reads, signer for writes.
Sending Transactions
When the user wants to create a proposal, cast a vote, or execute a passed proposal, your frontend must construct a transaction and ask MetaMask to sign it:
// Connect the contract to the signer for write operations
const governorWithSigner = governor.connect(signer);
// Cast a vote (0 = Against, 1 = For, 2 = Abstain)
const tx = await governorWithSigner.castVote(proposalId, 1); // Vote "For"
const receipt = await tx.wait(); // Wait for the transaction to be mined
The tx.wait() call is essential. It returns a promise that resolves when the transaction is included in a block. Until wait() resolves, the vote has not been recorded. Your frontend should show a loading state during this period.
Handling Events
Smart contract events are the primary mechanism for real-time updates. When someone creates a proposal, the Governor emits a ProposalCreated event. When someone votes, it emits a VoteCast event. Your frontend can listen for these events and update the UI accordingly:
governor.on('ProposalCreated', (proposalId, proposer, targets, values,
    signatures, calldatas, voteStart, voteEnd, description) => {
  console.log(`New proposal: ${description}`);
  refreshProposalList();
});

governor.on('VoteCast', (voter, proposalId, support, weight, reason) => {
  console.log(`Vote cast: ${voter} voted ${support} on ${proposalId}`);
  refreshVoteCounts(proposalId);
});
⚠️ Important: Event listeners require a WebSocket connection (not HTTP). If your RPC provider only supports HTTP, you will need to poll for new events using queryFilter instead. This is less efficient but works with any provider.
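A polling fallback can be sketched as follows. The loop logic is separated from ethers so it can be shown (and tested) in isolation: in a real app, getBlockNumber would wrap provider.getBlockNumber() and queryEvents would wrap governor.queryFilter(filter, from, to). The function and parameter names here are illustrative, not part of any library API.

```javascript
// Minimal polling fallback for HTTP-only providers. Tracks a cursor
// (state.nextBlock) so each block's events are processed exactly once.
async function pollOnce(state, getBlockNumber, queryEvents, onEvent) {
  const latest = await getBlockNumber();
  if (latest < state.nextBlock) return state; // no new blocks yet
  const events = await queryEvents(state.nextBlock, latest);
  for (const ev of events) onEvent(ev);
  return { nextBlock: latest + 1 }; // resume after the last scanned block
}
```

You would run this on a setInterval of roughly one block time (about 12 seconds on mainnet), feeding each returned state into the next call.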
Error Handling
Blockchain transactions fail in ways that web developers are not accustomed to. A transaction can fail because the user does not have enough gas, because the contract reverted with a custom error, because the nonce is wrong, or because the network is congested and the transaction was not included in time. Your frontend must handle all of these cases:
try {
  const tx = await governorWithSigner.castVote(proposalId, 1);
  showStatus('Transaction submitted. Waiting for confirmation...');
  const receipt = await tx.wait();
  showSuccess(`Vote recorded in block ${receipt.blockNumber}`);
} catch (error) {
  if (error.code === 'ACTION_REJECTED') {
    showError('Transaction was rejected in MetaMask.');
  } else if (error.reason) {
    showError(`Contract error: ${error.reason}`);
  } else if (error.code === 'INSUFFICIENT_FUNDS') {
    showError('Insufficient ETH for gas fees.');
  } else {
    showError(`Unexpected error: ${error.message}`);
  }
}
The error.reason field is particularly useful. When a Solidity contract reverts with a custom error string (e.g., revert("Voting period has ended")), ethers.js extracts that string and puts it in error.reason. This allows your frontend to display meaningful error messages instead of cryptic hex data.
Understanding the Provider/Signer Pattern
The distinction between Provider and Signer is worth dwelling on because it reflects a fundamental property of blockchains: reading is free, writing costs money.
When you create a contract instance with a Provider, you can call any view or pure function without cost. These calls are executed locally by the RPC node and do not create a transaction:
// Read-only: no gas, no signature, no MetaMask popup
const balance = await token.balanceOf(userAddress); // Free
const name = await governor.name(); // Free
const state = await governor.state(proposalId); // Free
When you create a contract instance with a Signer (or use contract.connect(signer)), you can call state-changing functions. These calls create transactions that must be signed by the user's wallet, broadcast to the network, included in a block, and paid for with gas:
// State-changing: costs gas, requires signature, MetaMask popup
const tx = await token.connect(signer).delegate(userAddress); // Costs gas
const tx2 = await governor.connect(signer).castVote(id, 1); // Costs gas
This pattern makes the code self-documenting. When you see provider in a contract constructor, you know the code is reading. When you see signer, you know the code is writing. This distinction matters for security auditing, performance optimization, and user experience design.
The User Experience Challenge
Building a dApp frontend is harder than building a traditional web frontend because every state-changing action involves:
- A MetaMask popup (the user must manually approve).
- A wait for transaction confirmation (12-15 seconds on Ethereum mainnet, longer on congested networks).
- The possibility of failure after the user has already confirmed (the transaction can revert on-chain even after MetaMask accepted it).
Good dApp frontends mitigate this with:
- Optimistic updates: Update the UI immediately when the user confirms in MetaMask, then correct if the transaction fails.
- Transaction toasts: Show a persistent notification with the transaction hash and a link to Etherscan so the user can monitor progress.
- Batch operations: Where possible, combine multiple operations into a single transaction (e.g., delegate and vote in one call).
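The optimistic-update pattern can be reduced to a small helper. This is a minimal sketch, not a prescribed implementation: applyUpdate and rollback are hypothetical UI callbacks, and sendTx stands in for an ethers call followed by tx.wait().

```javascript
// Optimistic update with rollback: apply the UI change as soon as the
// user confirms in the wallet, undo it if the transaction fails.
async function optimistic(applyUpdate, rollback, sendTx) {
  applyUpdate(); // show the new state immediately
  try {
    return await sendTx(); // e.g. (await governor.castVote(id, 1)).wait()
  } catch (err) {
    rollback(); // transaction rejected or reverted: undo the UI change
    throw err;  // let the caller surface an error message
  }
}
```

The UI stays responsive in the common case (the transaction succeeds) while remaining truthful in the failure case.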
33.6 Decentralized Storage with IPFS
A governance proposal is more than a function call. It has a title, a description (potentially hundreds of words), links to forum discussions, maybe an image or a PDF. Storing all of that on-chain would cost a fortune. At current Ethereum gas prices, storing 1 KB of data costs approximately $5-20 (varying with gas prices). A detailed proposal might be 10 KB of text, costing $50-200 — and that is just for one proposal.
The solution is off-chain storage with on-chain references. You store the proposal metadata on IPFS and record only the IPFS CID on-chain. The CID wraps a 32-byte hash of the content, so it uniquely identifies that content. Anyone can retrieve the content using the CID, and the hash guarantees that the content has not been tampered with.
How IPFS Content Addressing Works
In a traditional web system, you access data by location: "go to this server, at this path, and get the file." If the server moves, the link breaks. If the server is compromised and the file is altered, you have no way to detect the change.
IPFS uses content addressing: the address of a file is its hash. If you ask for QmXoYp7..., you are asking for "the file whose SHA-256 hash is XoYp7..." Any IPFS node that has a file matching that hash can serve it to you. If someone alters the file, the hash changes, and the altered file has a different CID. The original CID still points to the original content.
Traditional web: https://server.com/proposal.json → content may change
IPFS: ipfs://QmXoYp7abc123... → content is immutable
The Proposal Metadata Schema
We define a standard JSON schema for proposal metadata:
{
  "title": "Increase Quorum to 10%",
  "description": "This proposal increases the quorum requirement from 4% to 10% to ensure broader participation in governance decisions.",
  "discussion_url": "https://forum.example.com/t/increase-quorum/123",
  "author": "0x1234...abcd",
  "created_at": "2025-03-15T12:00:00Z",
  "category": "governance-parameters",
  "version": 1
}
This schema is stored in a JSON file, uploaded to IPFS, and the resulting CID is stored in the proposal's on-chain description field.
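One common convention for that last step — and it is a convention we adopt here, not a Governor requirement — is to embed the CID in the description as an ipfs:// URI so the frontend can recover it. The helper names below are illustrative.

```javascript
// Embed the metadata CID in the on-chain description as an ipfs:// URI
// (an assumed convention, not part of the Governor spec), and recover
// it on the frontend with a simple parser.
const buildDescription = (title, cid) => `${title}\nipfs://${cid}`;

function extractCid(description) {
  const match = description.match(/ipfs:\/\/(\S+)/);
  return match ? match[1] : null; // null when no CID is embedded
}

const desc = buildDescription('Increase Quorum to 10%', 'QmXoYp7abc123');
console.log(extractCid(desc)); // "QmXoYp7abc123"
```

The frontend then fetches the metadata from an IPFS gateway using the extracted CID.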
Uploading to IPFS with Python
The code/ipfs_upload.py script demonstrates how to upload proposal metadata to IPFS using the Pinata API. We use Python because it is the most accessible language for this task, and many blockchain developers use Python for tooling and automation.
The upload flow:
- Construct the proposal metadata as a Python dictionary.
- Serialize it to JSON.
- POST the JSON to Pinata's API (or any IPFS pinning service).
- Receive the CID in the response.
- Use the CID in the on-chain proposal description.
import requests

def upload_to_ipfs(metadata: dict, pinata_jwt: str) -> str:
    """Upload JSON metadata to IPFS via Pinata and return the CID."""
    url = "https://api.pinata.cloud/pinning/pinJSONToIPFS"
    headers = {
        "Authorization": f"Bearer {pinata_jwt}",
        "Content-Type": "application/json"
    }
    payload = {
        "pinataContent": metadata,
        "pinataMetadata": {"name": f"proposal-{metadata.get('title', 'untitled')}"}
    }
    response = requests.post(url, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()["IpfsHash"]
Pinning: The Availability Problem
Uploading a file to IPFS does not guarantee it will be available forever. IPFS nodes periodically garbage-collect files that are not "pinned." If no node is pinning your proposal metadata, it will eventually become unretrievable, and the CID stored on-chain will point to nothing.
Solutions:
- Pinning services: Pinata, web3.storage, and Filebase offer pinning as a service. You pay a monthly fee and they guarantee your files are available.
- Self-hosting: Run your own IPFS node and pin the files yourself. This requires maintaining infrastructure.
- Redundancy: Pin files on multiple services for resilience.
For a governance dApp, pinning is non-negotiable. If proposal metadata becomes unavailable, voters cannot read what they are voting on. The contract still works — they can still cast votes — but the governance process becomes meaningless.
🔗 Connection to Chapter 28: This is the same problem we discussed in the DAO chapter: governance requires informed participation. A CID that resolves to nothing is equivalent to a proposal with no description.
The Economics of On-Chain vs. Off-Chain Storage
To make the storage tradeoff concrete, consider the following cost comparison for a single governance proposal:
| Data Element | Size | On-Chain Cost (30 Gwei, ETH=$3000) | IPFS Cost (Pinata) |
|---|---|---|---|
| Proposal title (50 chars) | 50 B | ~$0.50 | Included |
| Description (2 KB) | 2,048 B | ~$20 | Included |
| Full proposal with images (50 KB) | 51,200 B | ~$500 | Included |
| IPFS CID stored on-chain | 32 B | ~$0.30 | N/A |
| Pinata monthly pin (per file) | — | — | ~$0.01/mo |
The difference is staggering. For a governance system that might create 100 proposals per year, on-chain storage of full proposal text would cost $2,000+ per year in gas alone. IPFS with pinning costs less than $2 per year. The tradeoff is availability: on-chain data is guaranteed available as long as the blockchain exists, while IPFS data requires active pinning.
This is why we store only the CID on-chain and the full metadata on IPFS. The CID acts as a commitment: it cryptographically binds the on-chain proposal to its off-chain description. Anyone can verify that the IPFS content matches the on-chain CID by recomputing the hash. This gives us the best of both worlds: the integrity guarantee of the blockchain and the cost efficiency of off-chain storage.
IPFS Gateways
Users do not typically run IPFS nodes. To make IPFS content accessible in a browser, you use an IPFS gateway — an HTTP-to-IPFS bridge:
https://gateway.pinata.cloud/ipfs/QmXoYp7abc123...
https://ipfs.io/ipfs/QmXoYp7abc123...
https://cloudflare-ipfs.com/ipfs/QmXoYp7abc123...
Your frontend fetches proposal metadata from a gateway, parses the JSON, and displays the title and description. The CID ensures integrity: even if the gateway is compromised, you can verify the content by checking its hash against the on-chain CID.
33.7 Indexing with The Graph
You have a governance dApp with proposals, votes, and execution. A user opens the frontend and wants to see a list of all active proposals. How does the frontend get this data?
The Naive Approach (and Why It Fails)
The naive approach is to query the blockchain directly, one proposal at a time:
// This does NOT work efficiently
for (let i = 0; i < totalProposals; i++) {
const state = await governor.state(proposalIds[i]);
const votes = await governor.proposalVotes(proposalIds[i]);
// ... build the list
}
This fails for three reasons:
- There is no totalProposals counter in the standard Governor. You would need to scan event logs to discover all proposal IDs.
- Each state() call is a separate RPC request. With 100 proposals, that is 100+ network round trips.
- Historical data is not indexed. To find all ProposalCreated events, you must scan every block since the contract was deployed. On Ethereum mainnet, that is tens of millions of blocks.
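Event scanning is still workable on a local node or over a short block range, but most RPC providers cap log queries at a few thousand blocks, so scans must be chunked. The range-splitting logic is pure arithmetic; the ethers.js queryFilter call that would drive it is shown in comments because it needs a live provider:

```javascript
// Split [fromBlock, toBlock] into inclusive sub-ranges of at most `step`
// blocks, since most RPC providers reject eth_getLogs queries over wide ranges.
function chunkBlockRanges(fromBlock, toBlock, step) {
  const ranges = [];
  for (let start = fromBlock; start <= toBlock; start += step) {
    ranges.push([start, Math.min(start + step - 1, toBlock)]);
  }
  return ranges;
}

// With ethers.js v6 this would drive the actual scan, e.g.:
//
//   for (const [start, end] of chunkBlockRanges(deployBlock, latest, 5000)) {
//     const events = await governor.queryFilter(
//       governor.filters.ProposalCreated(), start, end
//     );
//     // ...collect proposal IDs from the events
//   }
```

Even chunked, this approach is slow for deep history — which is exactly the gap The Graph fills.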
How The Graph Solves This
The Graph is a protocol for indexing blockchain data and serving it via GraphQL APIs. The workflow:
- Define a subgraph. You write a schema.graphql file that defines your data model, and a set of mapping functions (in AssemblyScript, a TypeScript-like language) that transform blockchain events into that data model.
- Deploy the subgraph. You deploy it to The Graph's decentralized network (or a hosted service for development).
- The Graph indexes the data. Indexer nodes process every block, extract events from your contracts, run your mapping functions, and store the results.
- Your frontend queries GraphQL. Instead of making hundreds of RPC calls, your frontend makes a single GraphQL query.
Subgraph Schema
Here is a simplified schema for our governance dApp:
type Proposal @entity {
id: ID!
proposalId: BigInt!
proposer: Bytes!
description: String!
ipfsCid: String
voteStart: BigInt!
voteEnd: BigInt!
forVotes: BigInt!
againstVotes: BigInt!
abstainVotes: BigInt!
executed: Boolean!
canceled: Boolean!
createdAt: BigInt!
createdTx: Bytes!
}
type Vote @entity {
id: ID!
proposal: Proposal!
voter: Bytes!
support: Int!
weight: BigInt!
reason: String
timestamp: BigInt!
}
type Delegate @entity {
id: ID!
address: Bytes!
votingPower: BigInt!
delegationsReceived: BigInt!
}
Subgraph Mapping
The mapping file transforms events into entities:
import { BigInt } from "@graphprotocol/graph-ts"
import { ProposalCreated, VoteCast } from "../generated/Governor/Governor"
import { Proposal, Vote } from "../generated/schema"
export function handleProposalCreated(event: ProposalCreated): void {
let proposal = new Proposal(event.params.proposalId.toHexString())
proposal.proposalId = event.params.proposalId
proposal.proposer = event.params.proposer
proposal.description = event.params.description
proposal.voteStart = event.params.voteStart
proposal.voteEnd = event.params.voteEnd
proposal.forVotes = BigInt.fromI32(0)
proposal.againstVotes = BigInt.fromI32(0)
proposal.abstainVotes = BigInt.fromI32(0)
proposal.executed = false
proposal.canceled = false
proposal.createdAt = event.block.timestamp
proposal.createdTx = event.transaction.hash
proposal.save()
}
export function handleVoteCast(event: VoteCast): void {
let voteId = event.transaction.hash.toHexString() + "-" + event.logIndex.toString()
let vote = new Vote(voteId)
vote.proposal = event.params.proposalId.toHexString()
vote.voter = event.params.voter
vote.support = event.params.support
vote.weight = event.params.weight
vote.reason = event.params.reason
vote.timestamp = event.block.timestamp
vote.save()
// Update proposal vote counts
let proposal = Proposal.load(event.params.proposalId.toHexString())
if (proposal != null) {
if (event.params.support == 0) {
proposal.againstVotes = proposal.againstVotes.plus(event.params.weight)
} else if (event.params.support == 1) {
proposal.forVotes = proposal.forVotes.plus(event.params.weight)
} else {
proposal.abstainVotes = proposal.abstainVotes.plus(event.params.weight)
}
proposal.save()
}
}
Querying from the Frontend
With the subgraph deployed, your frontend can query it using standard GraphQL:
const SUBGRAPH_URL = 'https://api.thegraph.com/subgraphs/name/your-org/voting-dapp';
async function getActiveProposals() {
const currentBlock = await provider.getBlockNumber();
const query = `{
proposals(
where: { voteEnd_gt: "${currentBlock}", canceled: false }
orderBy: createdAt
orderDirection: desc
) {
id
proposalId
proposer
description
voteStart
voteEnd
forVotes
againstVotes
abstainVotes
}
}`;
const response = await fetch(SUBGRAPH_URL, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ query })
});
return (await response.json()).data.proposals;
}
A single GraphQL query replaces dozens of RPC calls. The response is structured exactly as you need it, sorted and filtered by the indexer. This is what makes complex dApp frontends feasible.
💡 Development Tip: During local development, you can use a mock subgraph or simply fall back to event scanning (which is fast on a local Hardhat node). Only integrate The Graph when you deploy to a testnet or mainnet.
33.8 Testing: The Complete Suite
Smart contract bugs are not like web application bugs. A web application bug causes a 500 error or a broken layout. A smart contract bug causes permanent, irrecoverable loss of funds. The DAO hack in 2016 exploited a reentrancy bug and drained $60 million. The Parity wallet bug in 2017 permanently froze $280 million. More recently, governance attacks have exploited flash loans to temporarily acquire enough voting power to pass malicious proposals.
Testing is not optional. It is the primary defense against catastrophe.
Test Categories
Our test suite (see code/test/VotingDApp.test.js) covers four categories:
1. Unit Tests: Test individual functions in isolation.
- Can a token holder delegate voting power?
- Does the Governor correctly calculate quorum?
- Does castVote revert if the proposal is not active?
2. Integration Tests: Test the interaction between contracts.
- Can a user create a proposal, vote on it, queue it, and execute it through the full lifecycle?
- Does the timelock correctly delay execution?
- Do token transfers update voting power checkpoints?
3. Edge Case Tests: Test boundary conditions.
- What happens if someone votes with zero voting power?
- What happens if two proposals target the same contract function?
- What happens if the timelock delay is set to zero?
4. Security Tests: Test known attack vectors.
- Can a voter vote twice on the same proposal?
- Can someone execute a proposal that has not passed?
- Can someone create a proposal after the Governor is paused (if pausable)?
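The double-vote check in the security category reduces to a single invariant: at most one recorded ballot per (proposal, voter) pair. The Governor enforces it with a hasVoted mapping; the same invariant, modeled in plain JavaScript (an illustration of what the test asserts, not the contract itself), looks like this:

```javascript
// In-memory model of the Governor's per-proposal hasVoted bookkeeping.
// The security test asserts the contract behaves like this model:
// a second castVote from the same address on the same proposal must revert.
class VoteLedger {
  constructor() {
    this.hasVoted = new Set(); // keys: `${proposalId}:${voter}`
  }

  castVote(proposalId, voter) {
    const key = `${proposalId}:${voter}`;
    if (this.hasVoted.has(key)) {
      throw new Error("vote already cast");
    }
    this.hasVoted.add(key);
  }
}
```

In the Chai suite, the corresponding assertion is that the second castVote transaction reverts while votes from other addresses, or on other proposals, succeed.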
Test Structure
We use Hardhat's test framework with ethers.js and Chai assertions. The test file follows a structured pattern:
const { expect } = require("chai");
const { ethers } = require("hardhat");
const { loadFixture, time, mine } = require("@nomicfoundation/hardhat-network-helpers");
describe("VotingDApp", function () {
async function deployFixture() {
const [deployer, voter1, voter2, voter3] = await ethers.getSigners();
// Deploy token
const GovernanceToken = await ethers.getContractFactory("GovernanceToken");
const token = await GovernanceToken.deploy(deployer.address);
// Deploy timelock
const TimelockController = await ethers.getContractFactory("TimelockController");
const timelock = await TimelockController.deploy(
3600, // 1 hour minimum delay
[], // proposers (set later)
[], // executors (set later)
deployer.address // admin
);
// Deploy governor
const VotingDApp = await ethers.getContractFactory("VotingDApp");
const governor = await VotingDApp.deploy(
await token.getAddress(),
await timelock.getAddress()
);
// Configure roles...
// Distribute tokens and delegate...
return { token, timelock, governor, deployer, voter1, voter2, voter3 };
}
describe("Deployment", function () {
it("Should set the correct token", async function () {
const { governor, token } = await loadFixture(deployFixture);
expect(await governor.token()).to.equal(await token.getAddress());
});
});
// ... more test suites
});
The loadFixture pattern is crucial. It deploys the contracts once and takes a snapshot. Before each test, it reverts to the snapshot, giving each test a clean starting state without the overhead of redeployment. This makes the test suite fast.
Testing the Full Governance Lifecycle
The most important test is the end-to-end governance lifecycle:
- Propose: A token holder creates a proposal.
- Wait: The voting delay passes (mined blocks).
- Vote: Token holders cast their votes.
- Wait: The voting period ends.
- Queue: If the proposal passed, it is queued in the timelock.
- Wait: The timelock delay passes.
- Execute: The proposal is executed.
it("Should execute a proposal through the full lifecycle", async function () {
const { governor, token, timelock, deployer, voter1 } = await loadFixture(deployFixture);
// Create proposal
const tx = await governor.propose(
[await token.getAddress()],
[0],
[token.interface.encodeFunctionData("transfer", [voter1.address, 1000n])],
"Transfer 1000 tokens to voter1"
);
const receipt = await tx.wait();
const proposalId = receipt.logs[0].args[0];
// Wait for voting delay
await mine(2); // voting delay is 1 block
// Vote
await governor.connect(voter1).castVote(proposalId, 1); // Vote For
// Wait for voting period to end
await mine(50401);
// Queue
const descriptionHash = ethers.id("Transfer 1000 tokens to voter1");
await governor.queue(
[await token.getAddress()],
[0],
[token.interface.encodeFunctionData("transfer", [voter1.address, 1000n])],
descriptionHash
);
// Wait for timelock delay
await time.increase(3601);
// Execute
await governor.execute(
[await token.getAddress()],
[0],
[token.interface.encodeFunctionData("transfer", [voter1.address, 1000n])],
descriptionHash
);
// Verify
expect(await token.balanceOf(voter1.address)).to.equal(/* expected balance */);
});
Gas Reporting
Hardhat's gas reporter plugin tracks how much gas each function consumes. This is important for governance because:
- If castVote costs too much gas, participation drops.
- If propose is expensive, only wealthy token holders can create proposals.
- If execute costs more than the block gas limit, the proposal can never be executed.
Enable gas reporting in hardhat.config.js:
gasReporter: {
enabled: true,
currency: 'USD',
gasPrice: 30, // Gwei
coinmarketcap: process.env.COINMARKETCAP_API_KEY
}
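The dollar figures the reporter prints come from a simple conversion: gas used × gas price × ETH price. The figures below are assumptions for illustration — roughly 70,000 gas per castVote is a typical ballpark, not a measured value:

```javascript
// Convert a gas measurement into USD: gas * gwei price * USD per ETH,
// where 1 gwei = 1e-9 ETH. Dividing once at the end avoids rounding noise.
function gasCostUsd(gasUsed, gasPriceGwei, ethPriceUsd) {
  return (gasUsed * gasPriceGwei * ethPriceUsd) / 1e9;
}

// Assumed figures: ~70,000 gas per castVote, 30 gwei, ETH at $3,000.
const voteCost = gasCostUsd(70_000, 30, 3_000); // a few dollars per ballot
```

A voter paying several dollars per ballot is a real participation barrier — which is exactly why gas regressions in castVote deserve a failing CI check.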
Why Governance Tests Are Harder Than Typical Contract Tests
Governance testing is uniquely challenging because the governance lifecycle spans multiple blocks and involves multiple actors. A simple unit test might check that 1 + 1 = 2. A governance lifecycle test must:
- Deploy three contracts and configure their relationships.
- Distribute tokens to multiple accounts and activate delegation.
- Create a proposal (one transaction).
- Advance the blockchain by the voting delay (mine blocks).
- Have multiple accounts cast votes (multiple transactions).
- Advance the blockchain by the voting period (mine more blocks).
- Queue the proposal through the timelock (one transaction).
- Advance the clock by the timelock delay (time manipulation).
- Execute the proposal (one transaction).
- Verify the final state across all three contracts.
Each step depends on the previous steps. If any step fails, the entire test fails. This is why the loadFixture pattern is essential: it allows each test to start from a known good state without repeating the entire deployment and configuration sequence.
The Hardhat Network Helpers (mine and time.increase) are what make this possible. Without them, you would have to wait for real blocks to be mined and real time to pass, making governance tests take hours instead of seconds.
Coverage
Code coverage measures what percentage of your contract code is exercised by tests. Use solidity-coverage:
npx hardhat coverage
Aim for 100% line coverage and 100% branch coverage on your Governor and Token contracts. Missing a branch means missing a code path that an attacker might exploit.
✅ Best Practice: Run npx hardhat coverage before every deployment. If coverage drops, add tests before proceeding.
33.9 Deployment Pipeline
You have written the contracts, built the frontend, integrated IPFS, and tested everything locally. Now it is time to deploy to a live network. This section covers the journey from your laptop to a public testnet, and then the considerations for mainnet.
Why a Pipeline Matters
In traditional web development, deployment is relatively forgiving. If you deploy a bug, you roll back. If the database migration fails, you restore from backup. In smart contract deployment, there is no rollback. A deployed contract is permanent. A misconfigured constructor parameter is permanent. A forgotten role revocation is a permanent security hole.
A deployment pipeline imposes structure on this inherently risky process. It forces you to:
- Validate every configuration before spending gas.
- Deploy contracts in the correct order (dependencies first).
- Configure access control systematically.
- Verify the deployment is correct before moving to the next step.
- Record the deployment artifacts for the frontend and for future reference.
The pipeline described below was developed through the hard lessons documented in Case Study 2 of this chapter. Every check exists because someone, somewhere, made the mistake it prevents.
The Deployment Stages
Stage 1: Local Development (Hardhat Network)
Hardhat includes a built-in Ethereum network that runs entirely in your process. When you run npx hardhat test, contracts are deployed to this local network, functions are called, and everything happens in milliseconds. This is where you do 95% of your development.
Stage 2: Testnet Deployment (Sepolia)
A testnet is a public Ethereum network that uses test ETH (which has no monetary value) instead of real ETH. Sepolia is the recommended testnet as of 2025. Deploying to a testnet lets you:
- Test with real network latency and block times.
- Share a URL with collaborators for testing.
- Verify contracts on Etherscan.
- Test frontend behavior with real MetaMask interactions.
Stage 3: Mainnet Deployment
Mainnet deployment uses real ETH and is irreversible. Once deployed, the contract is permanent (unless it includes an upgrade mechanism). Mainnet deployment requires:
- A thorough audit (professional or community).
- Sufficient ETH in the deployer wallet for gas.
- A verified deployment process that has been rehearsed on testnet.
The Deployment Script
Our deployment script (see code/scripts/deploy.js) follows a structured process:
- Validate environment: Check that all required environment variables are set (private key, RPC URL, Etherscan API key).
- Deploy GovernanceToken: Deploy the token and wait for confirmation.
- Deploy TimelockController: Deploy the timelock with the appropriate delay and role configuration.
- Deploy VotingDApp: Deploy the governor, passing the token and timelock addresses.
- Configure roles: Grant the governor the proposer and executor roles on the timelock. Revoke the deployer's admin role on the timelock (this is critical — the deployer should not retain admin access).
- Verify contracts: Submit source code to Etherscan for verification.
- Write deployment artifacts: Save the deployed addresses to a JSON file that the frontend can import.
async function main() {
const [deployer] = await ethers.getSigners();
console.log("Deploying with:", deployer.address);
console.log("Balance:", ethers.formatEther(await ethers.provider.getBalance(deployer.address)));
// 1. Deploy Token
const token = await ethers.deployContract("GovernanceToken", [deployer.address]);
await token.waitForDeployment();
console.log("GovernanceToken:", await token.getAddress());
// 2. Deploy Timelock
const timelock = await ethers.deployContract("TimelockController", [
3600, [], [], deployer.address
]);
await timelock.waitForDeployment();
// 3. Deploy Governor
const governor = await ethers.deployContract("VotingDApp", [
await token.getAddress(),
await timelock.getAddress()
]);
await governor.waitForDeployment();
// 4. Configure roles
const PROPOSER_ROLE = await timelock.PROPOSER_ROLE();
const EXECUTOR_ROLE = await timelock.EXECUTOR_ROLE();
const ADMIN_ROLE = await timelock.DEFAULT_ADMIN_ROLE();
await timelock.grantRole(PROPOSER_ROLE, await governor.getAddress());
await timelock.grantRole(EXECUTOR_ROLE, ethers.ZeroAddress); // Anyone can execute
await timelock.revokeRole(ADMIN_ROLE, deployer.address); // Renounce admin
console.log("Deployment complete. Admin role revoked.");
}
🔴 Critical Security Step: The line await timelock.revokeRole(ADMIN_ROLE, deployer.address) is the most important line in the entire deployment script. If you forget this step, the deployer retains the ability to bypass the timelock and execute arbitrary actions. This is the single most common governance deployment mistake.
Contract Verification
After deploying to a testnet or mainnet, you should verify your contracts on Etherscan. Verification means submitting your Solidity source code to Etherscan so that anyone can read it and confirm that the deployed bytecode matches the source.
With the @nomicfoundation/hardhat-verify plugin:
npx hardhat verify --network sepolia DEPLOYED_ADDRESS constructor_arg1 constructor_arg2
Unverified contracts are a red flag. Users cannot see what the contract does, which undermines the trust model of the entire system. A governance dApp with unverified contracts should not be trusted.
Environment Management
Deployment requires sensitive data: private keys, RPC URLs, API keys. This data must never be committed to version control. Use environment variables and .env files:
# .env (NEVER commit this file)
DEPLOYER_PRIVATE_KEY=0xabc123...
SEPOLIA_RPC_URL=https://eth-sepolia.g.alchemy.com/v2/your-key
ETHERSCAN_API_KEY=your-etherscan-api-key
COINMARKETCAP_API_KEY=your-cmc-key
PINATA_JWT=your-pinata-jwt
The hardhat.config.js reads these variables:
require('dotenv').config();
module.exports = {
networks: {
sepolia: {
url: process.env.SEPOLIA_RPC_URL || "",
accounts: process.env.DEPLOYER_PRIVATE_KEY
? [process.env.DEPLOYER_PRIVATE_KEY]
: []
}
}
};
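The "validate environment" step of the deployment script (step 1) is only a few lines, but it prevents the worst failure mode: a deployment that halts halfway because a key was missing. A sketch, using the variable names from the .env example above:

```javascript
// Fail fast, before any gas is spent, if required configuration is missing.
function requireEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}

// Called at the top of the deploy script:
// requireEnv(["DEPLOYER_PRIVATE_KEY", "SEPOLIA_RPC_URL", "ETHERSCAN_API_KEY"]);
```

Failing with a list of every missing variable, rather than the first one, saves a round trip per forgotten key.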
33.10 Development Best Practices
Building a dApp is not a one-time event. It is the beginning of an ongoing process of maintenance, monitoring, and iteration. This section covers the practices that separate professional dApp development from weekend projects.
Git Workflow
Use a branching strategy that separates contract development from frontend development:
main ← Production deployments only
├── develop ← Integration branch
│ ├── feature/token-upgrade ← Contract features
│ ├── feature/new-proposal-ui ← Frontend features
│ └── fix/vote-display-bug ← Bug fixes
Never push directly to main. Every change goes through a pull request with at least one review. For contract changes, the review should include:
- A diff of the Solidity source.
- Gas comparison reports (before and after).
- Test coverage reports (before and after).
- A narrative description of why the change is safe.
CI/CD for Smart Contracts
Continuous integration for smart contracts should run:
1. npx hardhat compile — Compilation check.
2. npx hardhat test — Full test suite.
3. npx hardhat coverage — Coverage report.
4. npx hardhat size-contracts — Contract size check (the EVM has a 24,576-byte contract size limit).
5. Static analysis with Slither or Mythril.
A GitHub Actions workflow:
name: Smart Contract CI
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- run: npm ci
- run: npx hardhat compile
- run: npx hardhat test
- run: npx hardhat coverage
Monitoring After Deployment
Deploying is not the end. After deployment, monitor:
- Contract events: Set up alerts (via Alchemy Notify, Tenderly, or The Graph) for significant events like large token transfers, proposal creation, or proposal execution.
- Gas prices: If gas prices spike, governance participation drops. Consider L2 deployment for cost-sensitive applications.
- Dependency health: If your IPFS pinning service goes down, proposal metadata becomes unavailable. If your RPC provider has an outage, your frontend stops working. Monitor all dependencies.
- Security advisories: Follow OpenZeppelin's security advisories. If a vulnerability is discovered in a library you depend on, you need to know immediately.
The Frontend Hosting Question
Where you host the frontend determines who can censor it:
| Hosting Method | Censorship Resistance | UX Quality | Maintenance |
|---|---|---|---|
| Traditional CDN (Vercel, Netlify) | Low — provider can remove it | Excellent — fast, global CDN | Easy — automated deploys |
| IPFS + Gateway | Medium — gateway can censor, but content persists | Good — depends on gateway speed | Moderate — must update pins |
| IPFS + ENS | High — no DNS dependency | Fair — requires ENS-aware browser | Hard — each update = transaction |
| Arweave (permanent storage) | Very high — immutable once stored | Good — dedicated gateways | None — but cannot update |
Most production dApps use a layered approach: the official frontend is hosted on a fast CDN for user experience, with an IPFS mirror for censorship resistance, and open-source code so that anyone can deploy an alternative. This is not perfect decentralization, but it is practical resilience.
Upgrade Planning
Smart contracts are immutable by default, but governance often requires evolution. Plan for upgrades from the beginning:
- Proxy patterns: Use OpenZeppelin's Transparent Proxy or UUPS Proxy to make contracts upgradeable. The proxy delegates all calls to an implementation contract that can be swapped out.
- Governor upgrades: The Governor itself can be upgraded by deploying a new Governor and migrating the timelock's roles. This requires a governance proposal to pass with the old Governor.
- Token migration: If the token needs changes, deploy a new token with a migration contract that allows 1:1 exchange.
⚠️ Warning: Upgradeable contracts introduce their own risks. A compromised admin key can swap the implementation to a malicious contract. Always pair upgradeability with a timelock and multisig.
33.11 What We Have Built: The Complete Progressive Project Summary
Let us step back and appreciate the full scope of what you have built across this textbook. This is not a toy. This is a real governance system with the same architecture used by Compound, Uniswap, ENS, and hundreds of other DAOs managing billions of dollars.
The Complete Architecture
┌─────────────────────────────────────────────────────────┐
│ GOVERNANCE dApp │
│ │
│ Frontend (index.html + app.js) │
│ ├── Wallet connection (MetaMask + ethers.js v6) │
│ ├── Proposal browsing (The Graph + IPFS gateways) │
│ ├── Voting interface (castVote / castVoteWithReason) │
│ ├── Proposal creation (propose + IPFS metadata upload) │
│ └── Execution tracking (queue + execute + events) │
│ │
│ Smart Contracts (Solidity ^0.8.20) │
│ ├── GovernanceToken.sol (ERC20 + ERC20Votes + Permit) │
│ ├── VotingDApp.sol (Governor + Settings + Counting │
│ │ + Votes + TimelockControl) │
│ └── TimelockController (OpenZeppelin, unmodified) │
│ │
│ Off-chain Infrastructure │
│ ├── IPFS (proposal metadata via Pinata pinning) │
│ ├── The Graph (subgraph for indexed queries) │
│ └── RPC Provider (Alchemy/Infura for node access) │
│ │
│ Development Tooling │
│ ├── Hardhat (compilation, testing, deployment) │
│ ├── Chai + Hardhat Network Helpers (assertions + time) │
│ ├── Solidity Coverage (100% target) │
│ └── Etherscan Verification (transparency) │
└─────────────────────────────────────────────────────────┘
What You Have Accomplished
Across the chapters of this textbook, you have:
- Deployed your first smart contract and understood the transaction lifecycle (Chapter 2).
- Learned how consensus secures your transactions from double-spending and censorship (Chapter 6).
- Created an ERC-20 governance token with voting weight tracking via checkpoints (Chapter 11).
- Written complex Solidity logic with proper access control, events, and error handling (Chapter 13).
- Built a comprehensive test suite using Hardhat and Chai (Chapter 14).
- Studied smart contract security and learned to defend against reentrancy, overflow, and access control attacks (Chapter 15).
- Designed token economics with supply schedules, distribution models, and incentive alignment (Chapter 26).
- Architected a DAO governance system with proposals, voting, quorum, and timelock execution (Chapter 28).
- Integrated all components into a full-stack dApp with a frontend, IPFS storage, an indexing layer, and a deployment pipeline (Chapter 33 — this chapter).
This is the progression from "What is a blockchain?" to "I can build a production-grade governance application." That progression is not trivial. The concepts you now understand — state machines, cryptographic commitment, game-theoretic incentives, immutable deployment, content addressing, event-driven indexing — form a coherent intellectual framework that will serve you regardless of which specific blockchain, language, or framework is popular next year.
A Note on the Decentralization Spectrum
If we map each layer of our dApp onto a decentralization scale from 1 (fully centralized) to 5 (fully decentralized), the picture is revealing:
| Layer | Score | Rationale |
|---|---|---|
| Smart contracts | 5 | Immutable, permissionless, no admin keys after timelock admin revocation |
| Blockchain (Ethereum) | 5 | 800,000+ validators, no single point of failure |
| Token distribution | 3 | Depends on initial allocation; concentrated holdings = concentrated power |
| RPC Provider | 2 | Alchemy/Infura are centralized companies; self-hosting a node raises the score to 4 |
| IPFS content | 3 | Content-addressed and distributed, but dependent on pinning services for availability |
| The Graph indexing | 3 | Decentralized network of indexers, but still early in decentralization journey |
| Frontend hosting | 1 | Standard web hosting on a centralized provider |
| DNS/domain | 1 | Traditional DNS, subject to seizure |
The average is approximately 2.9 out of 5. The core protocol (contracts + blockchain) is fully decentralized. The infrastructure layers that make it usable are significantly less so. This is the honest state of dApp development in 2025, and understanding it is essential for building systems that are resilient where it matters most.
The good news is that decentralization at the contract layer — where the money is, where the votes are, where the governance decisions live — is the most critical layer. A censored frontend is an inconvenience; a compromised contract is a catastrophe. Our system is fully decentralized where the stakes are highest.
What This System Does Not Have (Yet)
To set appropriate expectations, here is what a production governance system would add beyond what we have built:
- Professional frontend framework: Our HTML/JS frontend is deliberately simple. A production dApp would use React, Next.js, or similar.
- ENS integration: Resolving Ethereum addresses to human-readable names (e.g., vitalik.eth).
- Snapshot voting: Off-chain voting for gas-free signaling, with on-chain execution for binding decisions.
- Delegation UI: A rich interface for browsing delegates, viewing their voting history, and delegating with a single click.
- Multi-chain deployment: Deploying the same governance system on Ethereum, Arbitrum, Optimism, and Base.
- Formal verification: Mathematical proofs that the contracts satisfy their specifications.
- Professional audit: A third-party security audit before mainnet deployment.
These are not shortcomings of our project — they are the natural next steps that take a working system and make it production-ready.
33.12 Summary and Bridge to Chapter 34
This chapter brought together every thread of the progressive project into a single working application. You now understand the full dApp architecture stack: smart contracts for state and execution, RPC providers for node access, ethers.js for frontend-to-blockchain communication, MetaMask for user identity and transaction signing, IPFS for decentralized storage, The Graph for efficient data indexing, Hardhat for testing and deployment, and Etherscan for transparency and verification.
The key lessons of this chapter:
- A dApp is more than smart contracts. The frontend, the wallet integration, the storage layer, and the indexing layer are all essential components that determine whether the system is usable.
- Decentralization is a spectrum. Our frontend is hosted on a centralized server. Our RPC provider is a centralized service. IPFS gateways are centralized endpoints. True decentralization requires effort at every layer.
- Testing is not optional. Smart contract bugs are permanent. A comprehensive test suite — unit, integration, edge case, and security tests — is the minimum standard.
- Deployment is a process, not an event. The journey from local to testnet to mainnet requires environment management, contract verification, role configuration, and post-deployment monitoring.
- The progressive project demonstrates a real architecture. The governance system you have built uses the same patterns as Compound Governor, Uniswap Governance, and ENS DAO. The skills transfer directly.
In Chapter 34, we turn from building to analyzing. You will learn to read blockchain data at scale, trace transactions through the EVM, analyze DeFi protocols, and use tools like Dune Analytics and Etherscan's advanced features to understand what is really happening on-chain. Building a dApp teaches you how the system works from the inside. Analyzing blockchain data teaches you how the system behaves in the wild — and the two perspectives together give you a complete understanding.
🔗 Progressive Project Status: COMPLETE
Congratulations. You have built a full-stack decentralized governance application from first principles. The code in this chapter's
code/directory represents the culmination of the entire progressive project. Deploy it. Break it. Extend it. It is yours.