Capstone Project 1: Comprehensive AI Audit Report
Overview
You have been building toward this moment for the entire course. Since Chapter 1, when you first selected an AI system and documented your initial impressions, you have added layers of analysis — technical, ethical, social, environmental, and political — with each chapter. This capstone is your chance to pull all of those threads together into a single, coherent document that demonstrates genuine AI literacy.
The AI Audit Report is not a summary of what you learned in class. It is an applied analysis of a real system operating in the real world, written for an audience that needs to make decisions about that system. Think of yourself as an independent auditor hired by a city council, a hospital board, a school district, or a corporate ethics committee. Your job is to tell them what this system does, how it works, where the risks are, and what they should do about it.
If you have kept up with the progressive project components, you already have substantial raw material. This capstone asks you to revise, synthesize, and elevate that material into a polished professional document.
Learning Objectives
By completing this project, you will demonstrate your ability to:
- Describe an AI system's technical architecture, data pipeline, and decision-making process in accessible language (Chapters 1, 3, 5, 6, 7)
- Analyze the system's training data for quality, representativeness, and potential bias (Chapters 4, 9)
- Evaluate the system's failure modes, error patterns, and consequences of mistakes (Chapter 8)
- Assess the system's impact on diverse stakeholders, including those who benefit and those who are harmed (Chapters 9, 10, 11, 17)
- Examine privacy implications and data governance practices (Chapter 12)
- Review the regulatory landscape and recommend governance improvements (Chapters 13, 19)
- Estimate the system's environmental footprint and propose sustainability measures (Chapter 18)
- Synthesize findings into actionable, prioritized recommendations (Chapter 21)
Report Structure
Your final report must include all of the following sections. Page counts are guidelines, not rigid requirements — let the complexity of your system guide the depth of each section.
1. Executive Summary (1 page)
Write this last. Summarize the system you audited, your three most important findings, and your top three recommendations. A busy decision-maker who reads only this page should walk away understanding the stakes.
💡 Tip: A strong executive summary does not use jargon. It answers: What is this system? What did I find? What should be done?
2. System Description (1–2 pages)
Describe the AI system in concrete, specific terms:
- What it does: What task does the system perform? What inputs does it take, and what outputs does it produce?
- Who built it: The organization behind the system, their stated goals, and their business model.
- Who uses it: The operators, end users, and people affected by the system's decisions.
- How it is deployed: The context in which the system operates — scale, geography, integration with human decision-making.
- Historical context: When was it developed? What problem was it designed to solve? How has it evolved? (Draw from your Chapter 2 timeline.)
Avoid marketing language. Describe the system as it actually operates, not as its creators wish it operated.
3. Technical Analysis (2–3 pages)
You are not expected to reverse-engineer proprietary algorithms. You are expected to demonstrate that you understand the type of AI at work and can explain it clearly. Address:
- Learning approach: Is the system using supervised learning, unsupervised learning, reinforcement learning, or a hybrid? (Chapter 3)
- Model architecture: To the extent you can determine, what kind of model powers the system — a large language model, a convolutional neural network, a decision tree ensemble, something else? (Chapters 5, 6)
- Decision type: Does the system recommend, classify, predict, or generate? (Chapter 7)
- Transparency: How interpretable is the system? Can users understand why it made a particular decision? (Chapter 7)
- Confidence and uncertainty: Does the system communicate how confident it is? Is there a gap between its confidence and its accuracy? (Chapter 8)
Where information is unavailable, say so explicitly. Identifying what you cannot find out about a system is itself a finding worth reporting.
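If you can collect even a small sample of the system's outputs along with its stated confidence scores, you can estimate the confidence-accuracy gap directly. Below is a minimal sketch in Python; the sample records are hypothetical, so substitute observations you actually gathered during your audit:

```python
# Sketch: estimating the gap between a system's stated confidence
# and its observed accuracy, using a small (hypothetical) audit sample.

# Each record: (model's stated confidence, whether the output was correct)
sample = [
    (0.95, True), (0.92, False), (0.88, True), (0.97, True),
    (0.90, False), (0.85, True), (0.99, True), (0.93, False),
]

mean_confidence = sum(conf for conf, _ in sample) / len(sample)
accuracy = sum(1 for _, correct in sample if correct) / len(sample)
gap = mean_confidence - accuracy  # positive => the system is overconfident

print(f"mean stated confidence: {mean_confidence:.2f}")
print(f"observed accuracy:      {accuracy:.2f}")
print(f"overconfidence gap:     {gap:+.2f}")
```

Even a small sample like this can support a concrete finding ("the system reported 92% average confidence but was correct only 63% of the time in our sample"), which is far stronger than an unquantified claim that the system "seems overconfident."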
4. Data Analysis (2–3 pages)
This section examines the fuel that powers the system. Draw on your Chapter 4 and Chapter 12 work:
- Data sources: Where does the training data come from? Who created it? Under what conditions?
- Data quality: How clean, complete, and representative is the data? What gaps exist?
- Labeling: Who labeled the data, and what assumptions did the labeling process embed? (Chapter 4, Section 4.3)
- Data collection practices: What data does the system collect from users during operation? Is consent informed and meaningful? (Chapter 12)
- Data retention and access: How long is data stored? Who has access? Can users delete their data?
⚠️ Common weakness: Vague statements like "the data might be biased." Be specific. What bias? Which groups are underrepresented? What consequences follow from the gap?
5. Bias Audit (2–3 pages)
This is the heart of the report for many systems. Apply the frameworks from Chapter 9:
- Pipeline analysis: Where in the pipeline could bias enter — data collection, labeling, model design, deployment, feedback loops?
- Demographic disparities: Does the system perform differently across racial, gender, age, socioeconomic, geographic, or disability groups? Cite evidence where available.
- Fairness definitions: Which definition(s) of fairness are most relevant to this system? Are they in tension? (Chapter 9, Section 9.3)
- Intersectional effects: Are there compounding harms for people who belong to multiple marginalized groups?
- Mitigation efforts: What has the system's developer done to address bias? How effective have those efforts been?
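Where per-group outcome data are available, a disparity check can be as simple as comparing favorable-outcome rates across groups. The sketch below uses hypothetical counts and computes the demographic parity gap, which is only one of the Chapter 9 fairness definitions; choose and justify the definitions most relevant to your system:

```python
# Sketch: a simple demographic disparity check for a bias audit.
# The counts below are hypothetical placeholders.

# Outcomes per group: (favorable decisions, total decisions)
outcomes = {
    "group_a": (80, 100),
    "group_b": (55, 100),
}

# Favorable-outcome rate for each group
rates = {group: fav / total for group, (fav, total) in outcomes.items()}

# Demographic parity gap: difference between the highest and lowest
# favorable-outcome rates across groups.
parity_gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: favorable-outcome rate {rate:.2f}")
print(f"demographic parity gap: {parity_gap:.2f}")
```

A gap of 0.25 on hypothetical data proves nothing by itself; the audit value comes from pairing a calculation like this with real evidence about which groups the numbers represent and what decisions are at stake.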
6. Governance Review (1–2 pages)
Assess how the system is governed — or not. Draw on Chapters 13 and 19:
- Regulatory status: What laws and regulations currently apply to this system? In which jurisdictions?
- Industry standards: Does the developer follow any voluntary standards, certifications, or best practices?
- Accountability mechanisms: If the system causes harm, what recourse do affected people have? (Chapter 17)
- Governance gaps: Where is oversight missing? What regulatory changes would improve accountability?
7. Stakeholder Impact Assessment (1–2 pages)
Map the system's effects on different groups. Be specific and evidence-based:
- Direct beneficiaries: Who gains from the system's operation, and what do they gain?
- Direct harms: Who is harmed, and how? Consider both immediate and downstream effects.
- Indirect effects: How does the system reshape the broader environment — labor markets (Chapter 10), creative industries (Chapter 11), educational settings (Chapter 16)?
- Power dynamics: Does the system concentrate or distribute power? Who gets to decide how it is used?
8. Environmental Assessment (1 page)
Drawing on Chapter 18:
- Energy consumption: Estimate the system's computational footprint for training and inference.
- Resource use: Consider water, hardware, and e-waste.
- Environmental benefits: If the system produces environmental benefits (e.g., energy optimization), weigh them against costs.
- Sustainability recommendations: Propose concrete steps to reduce the system's environmental impact.
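A rough inference-energy estimate needs only three numbers: usage volume, energy per query, and the carbon intensity of the electricity grid. Every figure in the sketch below is a placeholder, not a real measurement; replace them with numbers you can source and cite:

```python
# Sketch: back-of-envelope estimate of daily inference energy and CO2.
# All inputs are assumed placeholder values -- substitute sourced figures.

queries_per_day = 1_000_000        # assumed usage volume
energy_per_query_wh = 0.3          # assumed watt-hours per query
grid_intensity_kg_per_kwh = 0.4    # assumed kg CO2 per kWh of grid power

daily_kwh = queries_per_day * energy_per_query_wh / 1000  # Wh -> kWh
daily_co2_kg = daily_kwh * grid_intensity_kg_per_kwh

print(f"estimated daily energy: {daily_kwh:,.0f} kWh")
print(f"estimated daily CO2:    {daily_co2_kg:,.0f} kg")
```

State your assumptions explicitly in the report; an estimate with visible, sourced inputs is credible even when the inputs are uncertain, while an unexplained number is not.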
9. Recommendations (2–3 pages)
This is where your analysis becomes actionable. Organize recommendations by priority (critical, important, suggested) and by audience (developer, regulator, user, civil society):
- Each recommendation should follow from a specific finding in your report.
- Each recommendation should be concrete enough to act on. "Be less biased" is not a recommendation. "Conduct and publish annual demographic performance audits using the fairness metrics described in Section 5" is.
- Address trade-offs honestly. If your recommendation has costs or risks, acknowledge them.
10. References and Appendices
- Cite all sources using a consistent citation format (APA, Chicago, or MLA — pick one and be consistent).
- Include an appendix with any supplementary material — screenshots, data tables, interview notes, or code.
What Makes a Strong Report vs. a Weak Report
| Element | Strong Report | Weak Report |
|---|---|---|
| System description | Specific, grounded in evidence, distinguishes between marketing claims and operational reality | Vague, relies on the company's own descriptions without critical analysis |
| Technical analysis | Demonstrates understanding of the system's approach even with limited information; names what is unknown | Copies technical jargon without explaining it, or avoids technical analysis entirely |
| Bias audit | Identifies specific groups affected, cites evidence, applies fairness frameworks from Chapter 9 | Makes generic claims about bias without specifics or evidence |
| Data analysis | Traces data from source to model, identifies gaps and labeling assumptions | Lists data sources without analyzing quality, representation, or consent |
| Recommendations | Concrete, prioritized, tied to findings, acknowledges trade-offs | Vague ("they should do better"), disconnected from analysis, ignores costs |
| Writing quality | Clear, accessible, well-organized, appropriate for a decision-making audience | Academic jargon, poor organization, unclear who the audience is |
| Integration | Sections connect to each other; the report tells a coherent story | Sections feel like separate assignments stapled together |
Connecting Your Progressive Project Components
Throughout the course, you completed 21 progressive project components. Here is how they map to the final report:
| Report Section | Primary Chapter Components |
|---|---|
| System Description | Ch. 1 (selection, initial impressions), Ch. 2 (technology history/timeline) |
| Technical Analysis | Ch. 3 (learning approach), Ch. 5 (LLM usage), Ch. 6 (vision capabilities), Ch. 7 (decision types), Ch. 8 (failure modes) |
| Data Analysis | Ch. 4 (training data investigation), Ch. 12 (data collection/privacy) |
| Bias Audit | Ch. 9 (bias audit across demographics) |
| Governance Review | Ch. 13 (regulatory landscape), Ch. 19 (global/cross-cultural implications) |
| Stakeholder Impact | Ch. 10 (labor impact), Ch. 11 (authorship/creative implications), Ch. 15 (healthcare comparison), Ch. 16 (educational implications) |
| Environmental Assessment | Ch. 18 (environmental footprint) |
| Recommendations | Ch. 17 (accountability structures), Ch. 20 (alignment/safety risks), Ch. 21 (final synthesis) |
| AI Use Disclosure | Ch. 14 (AI tool use in audit) |
💡 Important: The capstone report is not a stapled-together collection of these components. Revise, reorganize, and rewrite. Your understanding has deepened since Chapter 1 — your early work should reflect that growth.
Process Recommendations
Weeks 1–2: Inventory and Outline
- Gather all 21 progressive project components.
- Identify gaps — which sections need the most new research?
- Create a detailed outline mapping your existing material to the report structure.
Weeks 3–4: Drafting
- Write new sections; revise existing material to fit the report's voice and structure.
- Ensure every claim is supported by evidence (research, documentation, news reports, your own analysis).
Week 5: Revision
- Read the report as if you are the decision-maker receiving it. Is it clear? Is it actionable?
- Ask a peer to read the executive summary and tell you what they understood. If they cannot summarize your findings, revise.
Week 6: Polish and Submit
- Proofread for clarity, consistency, and citation accuracy.
- Format the document professionally — numbered pages, headers, consistent fonts.
Submission Guidelines
- Format: PDF, double-spaced, 12-point font, 1-inch margins.
- Length: 15–20 pages (excluding references and appendices).
- Citations: Minimum 15 sources from at least 5 different source types (academic papers, news articles, technical documentation, government reports, civil society publications).
- Appendices: Include your original progressive project components as an appendix so your instructor can see your growth over the semester.
- File naming: LastName_AIAuditReport_[SystemName].pdf
A Note on AI Tool Use
You may use AI tools (chatbots, writing assistants, research tools) to help with this project — but you must document exactly how you used them, as you practiced in Chapter 14. Include a brief "AI Use Disclosure" section at the end of your report describing:
- Which tools you used and for what purpose
- What the tools got right and what they got wrong
- How you verified AI-generated content
- What you learned about AI's strengths and limitations from using it on this project
This is not a trick. Thoughtful, transparent use of AI tools is itself a demonstration of AI literacy. Undisclosed use, or use without verification, is not.
Assessment
This project is assessed using the Comprehensive AI Audit rubric in the Capstone Rubric document. The rubric evaluates eight dimensions: System Understanding, Technical Analysis, Data and Bias Analysis, Governance and Policy, Stakeholder Impact, Recommendations Quality, Writing and Communication, and Integration and Synthesis. See the rubric for detailed criteria at each performance level.
This project is worth 30% of your final grade.