Chapter 35: Further Reading — Generative AI Ethics

Foundational and Technical Works

1. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" Proceedings of FAccT 2021. The landmark "stochastic parrots" paper that articulated the environmental, labor, and social risks of large language models before the current generation of models was widely deployed. Essential background for understanding why scale is not neutral.

2. Bommasani, R., et al. (2021). "On the Opportunities and Risks of Foundation Models." Stanford HAI. A sweeping technical and social analysis of foundation models — what they are, how they work, and what risks they pose across dimensions including bias, privacy, security, and accountability. Remains the most comprehensive survey of the foundation model landscape.

3. Wei, J., et al. (2022). "Emergent Abilities of Large Language Models." Transactions on Machine Learning Research. The primary academic treatment of emergent capabilities in LLMs — what they are, how they are measured, and why they create fundamental challenges for capability evaluation and governance.


Hallucination and Professional Liability

4. Masnick, M. (2023, June 8). "Judge Sanctions Lawyers for Using ChatGPT to Generate Fake Case Citations." Techdirt. A thorough journalistic account of the Schwartz sanctions in Mata v. Avianca, the court proceedings, and the reactions of the legal profession.

5. American Bar Association. (2024). Formal Opinion 512: Generative Artificial Intelligence Tools. ABA Standing Committee on Ethics and Professional Responsibility. The ABA's official guidance on attorney use of AI tools and the competence obligations under Model Rule 1.1. Essential reading for legal professionals and for understanding professional responsibility frameworks applicable to AI.

6. Dahl, M., et al. (2024). "Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models." Journal of Legal Analysis. An empirical study systematically testing LLM hallucination rates in legal contexts, documenting the frequency and types of hallucination in legal research tasks.


Deepfakes and Synthetic Media

7. Chesney, B., & Citron, D. (2019). "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security." California Law Review, 107, 1753. The foundational legal scholarly treatment of deepfakes — written before the current generation of AI image and video tools, but prescient about the harms that have since been documented.

8. Delfino, R. (2019). "Pornographic Deepfakes: The Case for Federal Criminalization of Revenge Porn's Next Tragic Act." Fordham Law Review, 88, 887. Early advocacy for federal criminalization of AI-generated NCII, providing legal analysis of available remedies and their gaps.

9. Deeptrace Labs. (2019). The State of Deepfakes: Landscape, Threats, and Impact. An industry research report documenting the state of deepfake technology and its deployment, including the early finding that 96% of deepfake videos were non-consensual pornography.


Copyright and Creative Labor

10. Lemley, M. A., & Casey, B. (2021). "Fair Learning." Texas Law Review, 99, 743. A leading legal analysis of the fair use question in AI training, arguing that AI training on copyrighted works may constitute fair use under current doctrine. Presents the developer-favorable view in a scholarly framework.

11. Samuelson, P. (2023). "Generative AI Meets Copyright." Science, 381(6654), 158–161. A concise and balanced assessment of the copyright questions raised by generative AI, by one of the foremost copyright scholars in the United States.

12. McKinsey Global Institute. (2023). The Economic Potential of Generative AI: The Next Productivity Frontier. McKinsey's assessment of generative AI's economic impact, including analysis of labor market effects across occupational categories. Essential context for assessing the creative labor displacement question.


Bias and Representation

13. Buolamwini, J., & Gebru, T. (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research, 81, 1–15. While focused on discriminative AI (facial recognition), Gender Shades established foundational frameworks for understanding and measuring bias in AI systems that remain applicable to generative AI.

14. Cho, J., et al. (2023). "DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models." IEEE/CVF International Conference on Computer Vision (ICCV). An empirical study documenting gender and racial bias in text-to-image models including DALL-E, providing a systematic methodology for bias evaluation in generative AI.


Privacy and Data

15. Carlini, N., et al. (2021). "Extracting Training Data from Large Language Models." USENIX Security Symposium. The foundational paper documenting that LLMs memorize and can reproduce training data, including sensitive personal information, when appropriately prompted. Technical but accessible; essential for understanding the privacy risks of LLM training.

16. European Data Protection Board. (2024). Opinion on the Processing of Personal Data for Training AI Models. The EDPB's assessment of how GDPR applies to the processing of personal data in AI model training — the most authoritative European regulatory guidance on this question.


Manipulation and Governance

17. Susarla, A., Gopal, R., Thatcher, J. B., & Sambamurthy, V. (2023). "From ChatGPT to TruthGPT: Artificial Intelligence and the Future of Work, Education, and Societal Impact." MIS Quarterly Executive. A business-focused assessment of generative AI's implications for organizations, including manipulation risks, governance requirements, and organizational strategy.

18. National Institute of Standards and Technology. (2023). AI Risk Management Framework (AI RMF 1.0). NIST. The U.S. government's voluntary framework for AI risk management — the primary reference document for organizational AI governance in the United States.

19. Federal Trade Commission. (2023). Generative AI Raises Competition Concerns. FTC Blog. The FTC's framework for thinking about generative AI competition and consumer protection issues, with specific attention to manipulation risks.


Regulatory Frameworks

20. European Parliament and Council. (2024). Regulation (EU) 2024/1689 on Artificial Intelligence (the AI Act). The comprehensive EU AI regulatory framework — the most developed binding AI regulation in the world. Provisions on general-purpose AI models (Articles 51–56) and transparency (Article 50) are most directly relevant to generative AI.

21. Office of the Privacy Commissioner of Canada. (2023). Joint Statement on Generative AI by Global Privacy Authorities. A joint statement by data protection authorities from multiple countries addressing privacy risks of generative AI — useful for understanding the international regulatory consensus developing on AI privacy.