Capstone Project 2: AI Policy Brief

Overview

Throughout this course, you have learned that AI is not just a technology problem — it is a governance problem. From the EU AI Act (Chapter 13) to predictive policing accountability (Chapter 17) to global digital sovereignty debates (Chapter 19), you have seen that the people making policy decisions about AI often lack the technical literacy to evaluate what they are regulating, while the people building AI systems often lack the policy literacy to anticipate how their tools will be governed.

You are now in a rare position: you have both. This project asks you to use that dual literacy to write a policy brief — the kind of document that actually lands on a legislator's desk, a city council's agenda, or a regulatory agency's review stack.

A policy brief is not an essay. It is a decision document. It presents a problem, analyzes options, and recommends a course of action. It is concise, evidence-based, and written for an audience that is smart but busy and not necessarily technical. Every sentence must earn its place.


Learning Objectives

By completing this project, your team will demonstrate the ability to:

  1. Identify a specific, well-scoped AI policy challenge and explain why it requires government action (Chapters 13, 17, 19)
  2. Analyze the technical dimensions of the issue in language accessible to non-technical decision-makers (Chapters 3–8)
  3. Map stakeholders with competing interests and characterize their positions fairly (Chapters 9, 10, 11)
  4. Evaluate multiple policy options with honest assessment of trade-offs (Chapters 13, 17, 19)
  5. Recommend a specific course of action supported by evidence and reasoning (Chapter 21)
  6. Collaborate effectively as a team, with clear division of labor and shared accountability

Topic Selection

Choose one of the following topics, or propose your own (subject to instructor approval). Each topic includes a suggested government body to address, but you may adjust the audience to suit your analysis.

Topic A: Facial Recognition in Public Spaces

  • Audience: City or state legislature
  • Core question: Should your jurisdiction ban, regulate, or permit the use of facial recognition technology in public spaces by law enforcement and government agencies?
  • Key chapters: 6 (computer vision), 9 (bias — facial recognition's documented racial disparities), 12 (surveillance), 13 (governance), 17 (criminal justice)

Topic B: AI in Hiring and Employment Decisions

  • Audience: Federal or state labor agency
  • Core question: What regulatory framework should govern the use of AI systems in resume screening, candidate assessment, and employment decisions?
  • Key chapters: 7 (AI decision-making), 9 (bias in classification), 10 (AI and work), 13 (governance), 17 (accountability)

Topic C: Deepfake Legislation

  • Audience: National legislature or media regulatory body
  • Core question: How should the law address AI-generated synthetic media (deepfakes) while protecting free expression and legitimate uses?
  • Key chapters: 5 (LLMs), 6 (computer vision, deepfakes), 11 (AI and creativity), 12 (privacy), 13 (governance)

Topic D: AI in Education Policy

  • Audience: State or national department of education
  • Core question: What policies should govern the use of AI tools in K–12 and higher education, including generative AI, adaptive learning platforms, and automated assessment?
  • Key chapters: 5 (LLMs), 8 (errors and hallucinations), 11 (creativity and authorship), 14 (using AI effectively), 16 (AI in education)

Topic E: Healthcare AI Standards

  • Audience: National health agency (e.g., FDA, EMA) or state health department
  • Core question: What standards should apply to AI systems used in clinical diagnosis, treatment recommendations, and patient triage?
  • Key chapters: 7 (decision-making), 8 (failures), 9 (bias and demographic disparities), 15 (healthcare AI), 20 (safety)

Topic F: AI Environmental Reporting and Data Center Regulation

  • Audience: Environmental protection agency or energy regulatory body
  • Core question: Should AI companies be required to report the environmental impact of their systems, and should data center construction face environmental review?
  • Key chapters: 18 (AI and environment), 13 (governance), 19 (global perspectives), 10 (labor and industry)

Topic G: Propose Your Own

  • Requirements: Must involve a specific AI application, a specific government body, and a genuine policy gap.
  • Approval: Submit a one-paragraph proposal for instructor approval by the stated deadline.


Policy Brief Format

Your brief must follow this structure. Policy briefs are tightly formatted documents — that discipline is part of the exercise.

1. Cover Page

  • Title of the brief
  • Team members and affiliations
  • Date
  • Intended audience (specific government body)

2. Executive Summary (1 page)

The most important page in the document. Many decision-makers will read only this. It must include:

  • The problem in two to three sentences
  • Why action is needed now — what makes this urgent?
  • Your recommendation in one to two sentences
  • Key supporting evidence — two to three bullet points

💡 Test your executive summary: Give it to someone outside the course. If they cannot explain the problem and your recommendation after reading it, revise.

3. Background and Context (1–2 pages)

Provide the information a decision-maker needs to understand the issue:

  • Technical background: How does the relevant AI technology work? Explain at a level appropriate for a smart non-technical audience. (Recall the approach from Chapters 3 and 5 — analogies first, then mechanism.)
  • Current state: What is happening right now? Where is this technology deployed? What are the documented outcomes?
  • Regulatory landscape: What laws and regulations currently apply? What are neighboring jurisdictions doing? (Chapter 13)
  • Why existing frameworks are insufficient: What gap does your brief address?

Avoid technical jargon unless you define it on first use. Remember: your audience is legislators and their staff, not computer scientists.

4. Stakeholder Analysis (1–2 pages)

Map the key stakeholders and their positions. A strong stakeholder analysis:

  • Identifies at least four distinct stakeholder groups (e.g., technology companies, affected communities, law enforcement, civil liberties organizations, labor unions, educators, patients)
  • Characterizes each group's interests, concerns, and power — not just their stated positions
  • Notes where stakeholder interests align and where they conflict
  • Acknowledges legitimate concerns on all sides, even those you ultimately argue against

⚠️ Common weakness: Treating stakeholder analysis as a formality. This section should genuinely inform your policy options. If you cannot explain how your recommendation addresses each group's concerns, your analysis is not deep enough.

Think back to the anchor examples. ContentGuard's content moderation decisions affected platform users, advertisers, content creators, and the communities whose speech was being moderated — each with different and sometimes incompatible interests. CityScope Predict's stakeholder map included city officials, police departments, community residents, civil rights organizations, and technology vendors. Your stakeholder analysis should be at least this layered.

5. Policy Options (2–3 pages)

Present three to four distinct policy options, ranging from minimal intervention to strong regulation. For each option, provide:

  • Description: What would this option actually do? Be specific — cite model legislation, existing regulations in other jurisdictions, or proposed frameworks.
  • Advantages: What problems does it solve? Who benefits?
  • Disadvantages: What problems does it create? Who bears the costs? What are the implementation challenges?
  • Precedents: Where has something similar been tried, and what happened?

One of your options should be "maintain the status quo" — even if you ultimately argue against it, decision-makers need to understand what happens if they do nothing.

Structure this section so that the options are easy to compare. A summary table is helpful:

Option | Key Feature | Primary Benefit | Primary Risk | Precedent
A: Status quo | No new regulation | No compliance costs | Continued harms | Current state
B: Disclosure requirements | Mandatory transparency | Informed public | Limited enforcement | EU AI Act (limited risk tier)
C: Sector-specific regulation | Targeted rules for high-risk uses | Proportionate response | Regulatory gaps | FDA medical device framework
D: Comprehensive ban | Prohibit the technology | Eliminates harms | Eliminates benefits | San Francisco facial recognition ban

6. Recommendation (1–2 pages)

State your team's recommended option clearly, then justify it:

  • Why this option: Connect your recommendation to the evidence in your background section and the stakeholder dynamics in your analysis.
  • How it addresses key concerns: Show how your recommendation responds to each major stakeholder group's interests.
  • Limitations: Acknowledge what your recommendation does not solve. Honesty about limitations builds credibility.
  • Comparison: Briefly explain why you did not choose the other options.

7. Implementation Plan (1 page)

A recommendation without an implementation path is an opinion. Address:

  • Timeline: What should happen first, next, and later? Propose a phased approach.
  • Responsible parties: Who implements, who oversees, who enforces?
  • Resource requirements: What funding, expertise, or institutional capacity is needed?
  • Success metrics: How will you know if the policy is working? What data should be collected?
  • Review mechanism: When and how should the policy be evaluated and updated?

8. References

  • Minimum 20 sources across at least 5 source types.
  • Include academic research, government documents, investigative journalism, technical reports, and civil society publications.
  • Use a consistent citation format throughout.

Group Collaboration Guidelines

Team Formation

Teams of 3–4 students. If possible, form teams with diverse disciplinary backgrounds — a team with perspectives from computer science, political science, ethics, and a domain specialty (health, education, criminal justice) will produce stronger work than a team where everyone has the same background.

Roles and Responsibilities

Assign roles at the start. Suggested roles for a 4-person team:

  • Project Manager: Timeline, coordination, ensuring all sections integrate; leads the executive summary
  • Technical Lead: Background section, technical accuracy review; ensures non-technical accessibility
  • Policy Analyst: Policy options, regulatory landscape research, implementation plan
  • Stakeholder Researcher: Stakeholder analysis, evidence gathering, impact assessment

All team members contribute to the recommendation section. All team members review and edit the full document.

Collaboration Expectations

  • Hold a kickoff meeting within the first week to select your topic, assign roles, and set milestones.
  • Use a shared document for drafting so all team members can see progress in real time.
  • Schedule at least two full-team review sessions — one after the first draft and one before submission.
  • Disagreements about the recommendation are productive, not problems. Document them. If your team genuinely cannot agree, present the disagreement transparently.

Peer Accountability

Each team member will submit a confidential peer evaluation (see the Capstone Rubric document). Significant disparities in contribution may result in individual grade adjustments.


What Makes a Strong Brief vs. a Weak Brief

Element | Strong Brief | Weak Brief
Scoping | Addresses a specific, well-defined policy question | Too broad ("AI should be regulated") or too narrow to matter
Audience awareness | Written for the specific government body; technical concepts explained accessibly | Reads like a term paper, not a policy document; assumes technical knowledge
Stakeholder analysis | Identifies non-obvious stakeholders; characterizes interests, not just positions | Lists stakeholders without analyzing their interests or power dynamics
Policy options | Presents genuinely distinct options with honest trade-off analysis | Options are straw men set up to make the preferred option look inevitable
Recommendation | Justified by evidence and analysis; acknowledges limitations | Asserted rather than argued; ignores costs or implementation challenges
Evidence | Uses diverse, credible sources; cites specific data and cases | Relies on a few sources or on general claims without evidence
Writing | Concise, precise, professional; every paragraph earns its place | Wordy, repetitive, or disorganized; padded to meet length requirements

Submission Guidelines

  • Format: PDF, single-spaced body text, 11- or 12-point professional font (e.g., Calibri, Georgia, Times New Roman), 1-inch margins.
  • Length: 10–12 pages (excluding cover page and references).
  • File naming: TeamName_PolicyBrief_[Topic].pdf
  • Supplementary submission: Each team member submits an individual peer evaluation form and a 250-word personal reflection on the collaboration process.

AI Use Policy

The same policy from Capstone 1 applies. Your team may use AI tools, but you must:

  1. Document which tools were used and for what purpose.
  2. Verify all AI-generated facts, statistics, and legal citations. Policy briefs that cite non-existent legislation or fabricated statistics will receive significant grade penalties — the stakes of hallucinations are especially high in policy documents (Chapter 8).
  3. Include an "AI Use Disclosure" section at the end of the brief.

Assessment

This project is assessed using the AI Policy Brief rubric in the Capstone Rubric document. The rubric evaluates seven dimensions: Problem Definition and Scoping, Technical Communication, Stakeholder Analysis, Policy Analysis, Recommendation Quality, Implementation Planning, and Professional Writing. Peer evaluation scores may adjust individual grades.

This project is worth 25% of your final grade.