This document contains the assessment criteria for all three capstone projects. Each rubric uses four performance levels:
- Excellent (A/A-): Exceeds expectations. Work demonstrates sophisticated understanding, original analysis, and professional quality.
- Proficient (B+/B): Meets expectations. Work demonstrates solid understanding, competent analysis, and clear communication.
- Developing (B-/C+): Approaching expectations. Work shows understanding but has notable gaps in analysis, evidence, or communication.
- Beginning (C/below): Below expectations. Work has significant gaps that undermine the analysis or communication.
Rubric 1: Comprehensive AI Audit Report (Capstone 1)
Total: 100 points | Weight: 30% of final grade
1.1 System Understanding (15 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 14–15 | System description is specific, accurate, and clearly distinguishes between marketing claims and operational reality. Historical context is well-researched. The reader fully understands what the system does, who built it, and who is affected. |
| Proficient | 11–13 | System is described accurately with sufficient detail. Some reliance on company-provided descriptions, but critical perspective is present. Reader has a clear understanding of the system. |
| Developing | 8–10 | Description is adequate but vague in places. Over-reliance on marketing language or press coverage without critical analysis. Key details about deployment context or affected populations are missing. |
| Beginning | 0–7 | System description is superficial, inaccurate, or largely copied from company materials. Reader cannot clearly understand how the system operates in practice. |
1.2 Technical Analysis (15 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 14–15 | Demonstrates genuine understanding of the system's learning approach, model type, and decision-making process. Technical concepts are explained clearly and accurately. Identifies what is unknown and explains why that matters. |
| Proficient | 11–13 | Technical analysis is largely accurate and uses appropriate terminology. Some concepts could be explained more clearly. Acknowledges knowledge gaps. |
| Developing | 8–10 | Shows basic understanding but makes technical errors or uses jargon without explanation. May avoid technical depth or repeat explanations from course materials without applying them to the specific system. |
| Beginning | 0–7 | Technical analysis is absent, superficial, or contains significant errors. Copies technical descriptions without demonstrating understanding. |
1.3 Data and Bias Analysis (20 points)
Level
Points
Description
Excellent
18–20
Data analysis traces sources, identifies specific representational gaps, and examines labeling assumptions. Bias audit identifies specific affected groups, applies appropriate fairness frameworks, cites evidence, and considers intersectional effects.
Proficient
14–17
Data sources and quality are analyzed with reasonable specificity. Bias audit identifies affected groups and applies fairness concepts. Evidence is cited, though analysis could be deeper.
Developing
10–13
Data analysis is present but vague ("the data might be biased"). Bias audit names broad categories without specific evidence. Fairness frameworks are mentioned but not applied rigorously.
Beginning
0–9
Data analysis is absent or generic. Bias discussion is speculative without evidence. No application of course frameworks.
1.4 Governance and Policy (10 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 9–10 | Thorough review of applicable regulations and governance gaps. Demonstrates understanding of regulatory complexity (Chapter 13). Accountability analysis identifies specific mechanisms and their limitations. |
| Proficient | 7–8 | Reviews relevant regulations accurately. Identifies governance gaps. Accountability analysis is present though could be more specific. |
| Developing | 5–6 | Mentions regulations but analysis is thin. Governance gaps are noted in general terms without specific recommendations. |
| Beginning | 0–4 | Regulatory review is absent, inaccurate, or copied without analysis. No meaningful discussion of accountability. |
1.5 Stakeholder Impact (10 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 9–10 | Maps direct and indirect impacts on multiple stakeholder groups with specificity. Considers power dynamics, labor effects, and downstream consequences. Analysis reveals non-obvious impacts. |
| Proficient | 7–8 | Identifies key stakeholder groups and their interests. Impact analysis is evidence-based and considers multiple dimensions. |
| Developing | 5–6 | Stakeholder identification is reasonable but impact analysis is surface-level. May focus on obvious stakeholders and miss indirect effects. |
| Beginning | 0–4 | Stakeholder analysis is minimal or absent. Impacts are stated without evidence or analysis. |
1.6 Recommendations Quality (15 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 14–15 | Recommendations are specific, actionable, prioritized, and clearly tied to findings. Trade-offs are acknowledged. Recommendations are organized by audience (developer, regulator, user, civil society). |
| Proficient | 11–13 | Recommendations are concrete and connected to findings. Some could be more specific or better prioritized. Trade-offs are partially addressed. |
| Developing | 8–10 | Recommendations are present but vague ("they should improve fairness") or disconnected from the analysis. Limited attention to feasibility or trade-offs. |
| Beginning | 0–7 | Recommendations are absent, generic, or unrelated to the report's findings. No consideration of trade-offs or implementation. |
1.7 Writing and Communication (10 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 9–10 | Writing is clear, concise, and appropriate for a decision-making audience. Technical concepts are explained accessibly. Document is well-organized with effective transitions. Properly formatted with consistent citations. |
| Proficient | 7–8 | Writing is clear and generally well-organized. Some jargon or unclear passages. Citations are mostly consistent. |
| Developing | 5–6 | Writing has notable clarity issues, poor organization, or inconsistent tone. Reads more like a term paper than a professional document. Citation issues. |
| Beginning | 0–4 | Writing has significant errors, poor organization, or an inappropriate tone that impedes comprehension. Citations are missing or unusable. |
1.8 Integration and Coherence (5 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 5 | Report tells a coherent story. Sections build on each other. Findings connect across domains (technical analysis informs bias audit, which informs governance review, which informs recommendations). Growth from early progressive project components is evident. |
| Proficient | 4 | Sections are connected and the report has a logical flow. Some missed opportunities for cross-referencing. |
| Developing | 3 | Sections feel somewhat independent. Limited connections between domains. |
| Beginning | 0–2 | Sections are disconnected — reads like separate assignments stapled together. |
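For graders who prefer to tally scores programmatically, here is a minimal sketch of a Capstone 1 score-sheet check, using the criterion maxima listed in the rubric above. The example scores are hypothetical, and the helper name `capstone_1_total` is illustrative rather than part of any course tooling.

```python
# Criterion maxima for Capstone 1, taken from the rubric above; they sum to 100.
CAPSTONE_1_MAX = {
    "1.1 System Understanding": 15,
    "1.2 Technical Analysis": 15,
    "1.3 Data and Bias Analysis": 20,
    "1.4 Governance and Policy": 10,
    "1.5 Stakeholder Impact": 10,
    "1.6 Recommendations Quality": 15,
    "1.7 Writing and Communication": 10,
    "1.8 Integration and Coherence": 5,
}

def capstone_1_total(scores: dict[str, int]) -> int:
    """Validate a score sheet against the rubric and return the total (out of 100)."""
    for criterion, maximum in CAPSTONE_1_MAX.items():
        points = scores[criterion]  # KeyError here means a criterion was left unscored
        if not 0 <= points <= maximum:
            raise ValueError(f"{criterion}: {points} is outside 0-{maximum}")
    return sum(scores.values())

# Hypothetical score sheet:
example = {
    "1.1 System Understanding": 13,
    "1.2 Technical Analysis": 12,
    "1.3 Data and Bias Analysis": 17,
    "1.4 Governance and Policy": 8,
    "1.5 Stakeholder Impact": 9,
    "1.6 Recommendations Quality": 12,
    "1.7 Writing and Communication": 8,
    "1.8 Integration and Coherence": 4,
}
print(capstone_1_total(example))  # 83
```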
Rubric 2: AI Policy Brief (Capstone 2)
Total: 100 points | Weight: 25% of final grade
2.1 Problem Definition and Scoping (10 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 9–10 | Policy question is sharply defined, appropriately scoped, and clearly situated in the current regulatory landscape. Urgency is established with evidence, not rhetoric. |
| Proficient | 7–8 | Policy question is clear and reasonably scoped. Context is adequate. Urgency is stated, though evidence could be stronger. |
| Developing | 5–6 | Problem is identified but scoping is too broad or too narrow. Context is thin. Urgency is asserted rather than demonstrated. |
| Beginning | 0–4 | Problem is vague, poorly scoped, or misunderstands the policy landscape. |
2.2 Technical Communication (15 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 14–15 | Technical background is accurate, clearly explained, and calibrated to a non-technical policy audience. Analogies are effective. Jargon is defined or avoided. A legislator could read this section and feel informed, not confused. |
| Proficient | 11–13 | Technical explanation is accurate and mostly accessible. Some passages could be clearer for non-technical readers. |
| Developing | 8–10 | Technical content is present but either too detailed for the audience or too simplified to be useful. Jargon appears without definition. |
| Beginning | 0–7 | Technical background is absent, inaccurate, or incomprehensible to the target audience. |
2.3 Stakeholder Analysis (15 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 14–15 | Identifies at least four distinct stakeholder groups with nuanced analysis of interests, concerns, and power dynamics. Non-obvious stakeholders are included. Competing interests are characterized fairly — even groups the brief argues against are represented honestly. |
| Proficient | 11–13 | Four or more stakeholder groups identified with reasonable analysis of interests. Some nuance in characterizing competing positions. |
| Developing | 8–10 | Stakeholders are listed but interests are described superficially. Analysis may be one-sided, presenting only sympathetic stakeholders in depth. |
| Beginning | 0–7 | Stakeholder analysis is absent or pro forma. Interests of opposing stakeholders are distorted or ignored. |
2.4 Policy Analysis (20 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 18–20 | Presents three or more genuinely distinct options with honest trade-off analysis. Each option includes advantages, disadvantages, and relevant precedents. Status quo is analyzed, not dismissed. Summary comparison (table or equivalent) makes options easy to compare. |
| Proficient | 14–17 | Three or more options with reasonable trade-off analysis. Options are distinct, though analysis of disadvantages or precedents may be uneven. |
| Developing | 10–13 | Options are presented but may not be genuinely distinct, or trade-off analysis is superficial. Some options may be straw men. Limited use of precedents. |
| Beginning | 0–9 | Fewer than three options, or options are not analyzed with trade-offs. Policy analysis is superficial or one-sided. |
2.5 Recommendation Quality (20 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 18–20 | Recommendation is specific, clearly justified by the evidence and analysis, and responsive to stakeholder concerns. Limitations are honestly acknowledged. Implementation plan is realistic with timeline, responsible parties, resource requirements, success metrics, and review mechanism. |
| Proficient | 14–17 | Recommendation is clear and supported by analysis. Implementation plan is present with most key elements. Some limitations acknowledged. |
| Developing | 10–13 | Recommendation is stated but justification is thin. Implementation plan is vague or missing key elements. Limitations are not addressed. |
| Beginning | 0–9 | Recommendation is asserted rather than argued, or is disconnected from the analysis. Implementation plan is absent or unrealistic. |
2.6 Evidence and Sources (10 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 9–10 | Minimum 20 sources from at least 5 source types. Sources are credible, current, and diverse. Claims are supported with specific data. AI-generated content, if used, is verified and disclosed. |
| Proficient | 7–8 | Source requirements are met. Most claims are supported. Sources are generally credible and relevant. |
| Developing | 5–6 | Fewer than 20 sources or limited source diversity. Some claims are unsupported. Citation formatting is inconsistent. |
| Beginning | 0–4 | Sources are insufficient, non-credible, or absent. Significant claims are unsupported. |
2.7 Professional Writing (10 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 9–10 | Writing is concise, precise, and formatted as a professional policy document. Every paragraph serves a clear purpose. Reads as a document a government office would take seriously. |
| Proficient | 7–8 | Writing is clear and well-organized. Mostly concise, with some passages that could be tightened. Professional tone is maintained. |
| Developing | 5–6 | Writing is adequate but reads more like an academic paper than a policy document. Wordy passages, inconsistent tone, or organizational issues. |
| Beginning | 0–4 | Writing undermines the brief's credibility. Significant errors, poor organization, or inappropriate tone. |
Rubric 3: AI Literacy Workshop Design (Capstone 3)
Total: 100 points | Weight: 20% of final grade
3.1 Audience Understanding (15 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 14–15 | Audience profile is detailed and specific, demonstrating genuine research or experience. Prior knowledge, motivations, anxieties, and accessibility needs are addressed thoughtfully. Design choices clearly reflect audience understanding. |
| Proficient | 11–13 | Audience is described with reasonable specificity. Most design choices are appropriate for the audience, though some may be generic. |
| Developing | 8–10 | Audience description is present but assumptions are untested or stereotypical. Some design choices do not match the stated audience. |
| Beginning | 0–7 | Audience is described vaguely. Workshop design does not demonstrate real understanding of who the participants are. |
3.2 Content Accuracy and Selection (20 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 18–20 | Content is technically accurate and well-selected for the audience. Difficult editorial choices about what to include and exclude are made thoughtfully and explained. Concepts are translated into accessible language without sacrificing accuracy. Learning objectives are clear and achievable in 90 minutes. |
| Proficient | 14–17 | Content is accurate and appropriate. Selection is reasonable, though may try to cover too much. Accessibility of language is mostly good. Learning objectives are clear. |
| Developing | 10–13 | Content has minor accuracy issues or is not well-matched to the audience. May try to cover too much or too little. Some concepts remain too technical or are oversimplified to the point of inaccuracy. |
| Beginning | 0–9 | Content contains significant accuracy errors, is inappropriate for the audience, or does not add up to a coherent learning experience. |
3.3 Activity Design (20 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 18–20 | Three or more well-designed, original activities that engage participants actively with AI concepts. Activities are described in enough detail for another facilitator to run them. Timing is realistic. Activities serve clear learning purposes and include debrief questions. Adaptation notes show thoughtful contingency planning. |
| Proficient | 14–17 | Three activities that are well-structured and appropriately interactive. Instructions are mostly sufficient. Timing is generally realistic. Learning purposes are clear. |
| Developing | 10–13 | Activities are present but may be underdeveloped, poorly timed, or insufficiently interactive. Instructions lack detail. Debriefs are absent or perfunctory. |
| Beginning | 0–9 | Fewer than three activities, or activities are passive (watching a video, reading a handout), inappropriate for the audience, or described too vaguely to be usable. |
3.4 Materials Quality (15 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 14–15 | Handout materials are visually clean, accurate, and practically useful. A participant could take them home and actually use them. "Questions to Ask" checklist or equivalent tool is well-crafted. Further resources are curated and current. |
| Proficient | 11–13 | Materials are accurate and useful. Design could be more polished. Resources are relevant, if not deeply curated. |
| Developing | 8–10 | Materials are present but text-heavy, generic, or not audience-appropriate. Resources list is perfunctory. |
| Beginning | 0–7 | Materials are absent, contain errors, or would not be useful to participants. |
3.5 Facilitation Planning (15 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 14–15 | Facilitation guide demonstrates sophisticated understanding of teaching in informal settings. Addresses tone, common pitfalls, difficult moments, accessibility, and technology failures. Anticipated questions are realistic and responses are thoughtful. Session pacing shows awareness of attention and energy dynamics. |
| Proficient | 11–13 | Facilitation guide addresses key concerns. Anticipated questions are reasonable. Pacing is logical. Most contingencies are considered. |
| Developing | 8–10 | Facilitation guide exists but is generic. Limited anticipation of challenges. Pacing may be unrealistic. |
| Beginning | 0–7 | Facilitation guide is absent or does not address real facilitation challenges. |
3.6 Reflection Depth (15 points)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 14–15 | Reflection demonstrates genuine critical thinking about pedagogical choices, audience needs, and personal learning. Specific examples support claims. Honest about challenges and uncertainties. Clear connection to course themes, especially "AI Literacy as Civic Skill." |
| Proficient | 11–13 | Reflection addresses all required questions with reasonable depth. Some specific examples. Connection to course themes is present. |
| Developing | 8–10 | Reflection is present but superficial. Generic statements without specific support. Weak connection to course themes. |
| Beginning | 0–7 | Reflection is perfunctory, missing, or does not demonstrate genuine engagement with the design process. |
3.7 Delivery and Adaptation (bonus: up to 10 points, if workshop is delivered)
| Level | Points | Description |
| --- | --- | --- |
| Excellent | 9–10 | Workshop was delivered to a real audience. Team adapted in the moment based on participant responses. Revised session plan reflects specific lessons learned. Participant feedback is collected and analyzed. |
| Proficient | 6–8 | Workshop was delivered. Some adaptation occurred. Revised plan reflects general lessons. Feedback was collected. |
| Developing | 3–5 | Workshop was delivered but team did not adapt or collect meaningful feedback. |
| Beginning | 0–2 | Workshop delivery was incomplete or demonstrated significant facilitation problems. |
Peer Evaluation Form
Submitted confidentially by each team member for Capstones 2 and 3.
Instructions: Rate each team member (including yourself) on the following dimensions. Use a scale of 1–5, where 1 = minimal contribution, 3 = expected contribution, and 5 = exceptional contribution.
| Dimension | Self | Member 2 | Member 3 | Member 4 |
| --- | --- | --- | --- | --- |
| Contribution quality: Produced work that was thorough, accurate, and well-crafted |  |  |  |  |
| Reliability: Met deadlines, followed through on commitments, attended meetings |  |  |  |  |
| Collaboration: Communicated effectively, gave and received feedback constructively, supported teammates |  |  |  |  |
| Initiative: Identified problems proactively, suggested improvements, took on additional work when needed |  |  |  |  |
| Integration: Helped connect individual sections into a coherent whole; participated in review and editing |  |  |  |  |
Open-ended questions (answer in 2–3 sentences each):
- What was each team member's most valuable contribution to the project?
- If you could change one thing about how the team worked together, what would it be?
- Is there any team member whose contribution was significantly above or below the group average? If so, explain briefly.
⚠️ Note to instructors: Peer evaluations are used to identify contribution disparities, not to penalize small differences. Adjustments should be made only when peer evaluations and other evidence (drafts, meeting notes, commit histories) indicate a significant and consistent gap. A suggested approach: if a student's average peer rating is more than 1.0 below the team average, schedule a conversation before making grade adjustments.
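A minimal sketch of that suggested flag, assuming hypothetical names and ratings; "team average" is read here as the mean of per-student averages, so adjust the calculation if your course defines it differently.

```python
# Flag any student whose average peer rating falls more than 1.0 below the
# team average, per the suggested approach above. All data is hypothetical.

THRESHOLD = 1.0  # gap below the team average that triggers a conversation

def flag_for_conversation(ratings: dict[str, list[float]]) -> list[str]:
    """ratings maps each student to the peer ratings (1-5) they received."""
    averages = {name: sum(r) / len(r) for name, r in ratings.items()}
    team_average = sum(averages.values()) / len(averages)
    return [name for name, avg in averages.items()
            if team_average - avg > THRESHOLD]

# Hypothetical four-person team:
team = {
    "Student A": [5, 4, 5, 4],
    "Student B": [4, 4, 4, 4],
    "Student C": [2, 3, 2, 3],
    "Student D": [4, 5, 4, 4],
}
print(flag_for_conversation(team))  # ['Student C']
```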
Self-Assessment Reflection Prompts
Completed individually. Submitted with each capstone project.
Answer each question in 3–5 sentences. Honest self-assessment is valued more than self-promotion.
For Capstone 1 (AI Audit Report):
Growth: Look at your Chapter 1 progressive project component and your final report. How has your understanding of your AI system changed? What do you understand now that you did not understand at the start of the course?
Threshold concepts: Which of the course's threshold concepts (listed below) most changed how you think about AI? How is that visible in your report?
- "AI is a spectrum of techniques, not a single technology"
- "Machines learn from patterns in data, not from understanding"
- "Data is never neutral — it encodes the world that created it"
- "LLMs predict the next word — they don't understand meaning"
- "AI decisions are probability estimates, not truths"
- "AI confidence and AI correctness are different things"
- "Fairness is not a single metric"
- "In the age of AI, privacy is not about hiding — it's about power"
- "The alignment problem — specifying what we want is harder than building the system"
Limitations: What is the weakest section of your report, and why? What would you need (more time, more access, more expertise) to strengthen it?
AI literacy: After completing this report, how would you define "AI literacy" in your own words? How has your definition changed since Chapter 1?
For Capstone 2 (Policy Brief):
Perspective-taking: Which stakeholder group was hardest for you to represent fairly? Why?
Trade-offs: What was the most difficult trade-off in your policy recommendation? What did you sacrifice, and why?
Collaboration: What did you learn about collaborative writing and analysis from this project? What would you do differently next time?
Civic skill: How did this project change your understanding of AI governance as a civic responsibility?
For Capstone 3 (Workshop Design):
Translation: Which AI concept was hardest to translate for your audience? What strategy did you use, and how satisfied are you with the result?
Empathy: What did you learn about your audience that surprised you? How did that change your design?
What you left out: What important AI literacy concepts did you choose not to include in your 90-minute workshop? Why? What would you add if you had three hours instead?
Civic engagement: If you could deliver this workshop anywhere in your community, where would you go and why?
Grade Weights Summary
| Component | Weight |
| --- | --- |
| Capstone 1: AI Audit Report (individual) | 30% |
| Capstone 2: AI Policy Brief (group) | 25% |
| Capstone 3: AI Literacy Workshop Design (group) | 20% |
| Progressive project components (Chapters 1–21) | 15% |
| Participation and in-class activities | 10% |
| Total | 100% |
💡 Note to instructors: These weights assume the capstones are the primary assessment mechanism for the course. Adjust as needed for your institutional context. If your course includes exams, reduce capstone weights proportionally. The key principle is that the capstones should carry enough weight to incentivize sustained effort — they are where students demonstrate the integration and synthesis that distinguish AI literacy from AI awareness.
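For instructors who script their gradebooks, here is a minimal sketch of the weighted-grade calculation implied by the table above. The component scores are hypothetical placeholders, and if you rebalance the weights for your context, the same check confirms they still total 100%.

```python
# Weighted final grade from the components above. Weights must sum to 1.0;
# the component scores (0-100) in the example are hypothetical.

WEIGHTS = {
    "Capstone 1: AI Audit Report": 0.30,
    "Capstone 2: AI Policy Brief": 0.25,
    "Capstone 3: AI Literacy Workshop Design": 0.20,
    "Progressive project components": 0.15,
    "Participation and in-class activities": 0.10,
}

def final_grade(scores: dict[str, float]) -> float:
    """Return the weighted final grade on a 0-100 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must total 100%"
    return sum(WEIGHTS[component] * scores[component] for component in WEIGHTS)

# Hypothetical student:
scores = {
    "Capstone 1: AI Audit Report": 88,
    "Capstone 2: AI Policy Brief": 92,
    "Capstone 3: AI Literacy Workshop Design": 85,
    "Progressive project components": 90,
    "Participation and in-class activities": 95,
}
print(round(final_grade(scores), 1))  # 89.4
```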