Part 3: Algorithmic Systems and AI Ethics

"An algorithm must be seen to be believed." — Donald Knuth


Parts 1 and 2 examined data and privacy as if data simply existed and was collected. Part 3 introduces the engine that transforms data from a passive record into an active social force: algorithms.

Algorithms — and their most powerful contemporary form, artificial intelligence — do not merely store or transmit data. They act on it: sorting, scoring, predicting, classifying, recommending, generating, and deciding. When a loan application is approved or denied, an algorithm is often involved. When a social media feed shows you one story instead of another, an algorithm made that choice. When a predictive policing system sends officers to a particular neighborhood, an algorithm drew the map.

Part 3 examines these systems through seven chapters:

Chapter 13: How Algorithms Shape Society surveys the algorithmic landscape — from recommendation systems and content moderation to hiring tools and criminal justice — and asks what changes when institutions delegate decisions to code.

Chapter 14: Bias in Data, Bias in Machines investigates how historical inequalities get encoded in algorithmic systems — through biased training data, flawed design choices, and feedback loops that amplify the very disparities they claim to address. This chapter includes Python code for a BiasAuditor class.
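Chapter 14 develops the BiasAuditor class in full; as a preview, a minimal sketch of what such an auditor might look like is below. The interface (`record`, `selection_rates`, `disparate_impact`) is an assumption for illustration, not the chapter's actual implementation.

```python
from collections import defaultdict

class BiasAuditor:
    """Sketch of a group-level outcome auditor (illustrative interface,
    not the version built in Chapter 14)."""

    def __init__(self):
        # Map each group label to [positive outcomes, total decisions].
        self._counts = defaultdict(lambda: [0, 0])

    def record(self, group, predicted_positive):
        # Log one decision for one individual in the given group.
        positives, total = self._counts[group]
        self._counts[group] = [positives + bool(predicted_positive), total + 1]

    def selection_rates(self):
        # Fraction of each group receiving the positive outcome.
        return {g: p / t for g, (p, t) in self._counts.items()}

    def disparate_impact(self, privileged, protected):
        # Ratio of protected-group rate to privileged-group rate; values
        # below ~0.8 flag potential adverse impact (the "80 percent rule").
        rates = self.selection_rates()
        return rates[protected] / rates[privileged]
```

Even this toy version surfaces the chapter's core point: disparities are measurable, but deciding which disparity matters is not a purely technical question.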

Chapter 15: Fairness — Definitions, Tensions, and Trade-offs reveals that "fairness" has multiple incompatible mathematical definitions — and that choosing between them is a political and ethical decision, not a technical one. This chapter includes a Python FairnessCalculator dataclass.
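Chapter 15 builds the FairnessCalculator dataclass in detail. The sketch below, with assumed field and method names, hints at why the definitions conflict: each fairness criterion compares a different ratio drawn from the same confusion-matrix counts.

```python
from dataclasses import dataclass

@dataclass
class FairnessCalculator:
    """Sketch: confusion-matrix counts for one group (illustrative
    interface, not the version built in Chapter 15)."""
    tp: int  # true positives
    fp: int  # false positives
    fn: int  # false negatives
    tn: int  # true negatives

    def selection_rate(self):
        # P(predicted positive): the quantity demographic parity equalizes.
        return (self.tp + self.fp) / (self.tp + self.fp + self.fn + self.tn)

    def true_positive_rate(self):
        # P(predicted positive | actually positive): equal opportunity.
        return self.tp / (self.tp + self.fn)

    def false_positive_rate(self):
        # P(predicted positive | actually negative): equalized odds adds this.
        return self.fp / (self.fp + self.tn)
```

With hypothetical counts, two groups can have identical selection rates (demographic parity satisfied) while their false positive rates diverge (equalized odds violated), so "which rate must match?" is the political question the chapter poses.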

Chapter 16: Transparency, Explainability, and the Black Box Problem asks what happens when we can't explain why an algorithm made a particular decision — and whether the right to explanation is meaningful or merely aspirational.

Chapter 17: Accountability and Audit confronts the accountability gap: when an algorithmic system causes harm, who is responsible? The developer? The deployer? The data provider? This chapter explores emerging audit methods and liability frameworks.

Chapter 18: Generative AI — Ethics of Creation and Deception examines the newest frontier: AI systems that create text, images, audio, and video, raising questions about training data consent, copyright, deepfakes, and the provenance of truth.

Chapter 19: Autonomous Systems and Moral Machines closes the part with the most ambitious applications of AI — self-driving cars and autonomous weapons — and the deepest philosophical question: can machines be moral agents?


What's at Stake

The stakes in Part 3 are higher than in Parts 1 and 2. We are no longer discussing abstract data flows or theoretical privacy principles. We are discussing systems that make consequential decisions about real people — who gets hired, who gets a loan, who gets paroled, who gets medical treatment, and what billions of people see when they open their phones.

For VitraMed, this is the part where the company deploys machine learning models for patient risk scoring — and discovers that those models perform worse for Black patients, raising fairness questions that technical fixes alone cannot resolve.

For Eli, this is the part where the predictive policing algorithms targeting his Detroit neighborhood come under scrutiny — and where the mathematical impossibility of being simultaneously fair in all senses becomes not a theoretical curiosity but a lived reality.
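The impossibility Eli confronts can be shown with arithmetic alone. Chouldechova's identity ties a classifier's false positive rate to its positive predictive value, false negative rate, and the group's base rate: FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR). A short sketch with hypothetical numbers:

```python
def implied_fpr(base_rate, ppv, fnr):
    """Chouldechova's identity: once calibration (PPV) and the miss rate
    (FNR) are fixed, the false positive rate is fully determined by the
    group's base rate."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)

# Hypothetical values: hold PPV (0.6) and FNR (0.3) equal across two
# groups whose base rates differ (0.3 vs 0.5).
fpr_low = implied_fpr(base_rate=0.3, ppv=0.6, fnr=0.3)   # 0.20
fpr_high = implied_fpr(base_rate=0.5, ppv=0.6, fnr=0.3)  # ~0.47
```

When base rates differ, equal calibration and equal miss rates force unequal false positive rates; no tuning of the model escapes this, which is why the trade-off is lived rather than theoretical.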

By the end of Part 3, you will be equipped to evaluate algorithmic systems not just on their technical performance but on their social impact, ethical implications, and governance requirements. That evaluation is the foundation for Part 4, where we turn to the regulatory frameworks societies have built — and are still building — to govern these systems.
