Chapter 33 Further Reading: Ethics of AI Use — Disclosure, Attribution, and Fairness
Disclosure and Transparency Norms
1. Floridi, L., et al. (2018). AI4People — An Ethical Framework for a Good AI Society. Minds and Machines, 28(4), 689-707. One of the foundational documents of AI ethics as a field. The principles articulated — beneficence, non-maleficence, autonomy, justice, explicability — provide the theoretical grounding for the practical disclosure and fairness questions in Chapter 33. Available free via Springer Open Access.
2. International Committee of Medical Journal Editors (ICMJE) — Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals (icmje.org) ICMJE's guidance on AI use and authorship requirements in medical publishing, updated to address AI tools. Representative of the publisher-side transparency standard that is becoming normative across academic publishing. Free to access.
3. Committee on Publication Ethics (COPE) — COPE Position Statement on Authorship and AI Tools (publicationethics.org) COPE's formal position on AI tools and authorship in scholarly publishing. Establishes the principle that AI cannot be listed as an author and that AI tool use in the writing process must be disclosed transparently. Essential reading for anyone publishing in academic or professional journals.
Copyright and Attribution
4. U.S. Copyright Office — Copyright and Artificial Intelligence (copyright.gov/ai) The Copyright Office's ongoing inquiry and guidance on AI-generated content and copyright protection. The formal determinations on what constitutes sufficient human creative contribution for copyright protection are published here. Essential primary source for any practitioner concerned about IP ownership of AI-generated work. Free.
5. Grimmelmann, J. (2024). The Semiotics of Ghostwriting. Fordham Law Review, 92(2), 667. A legal-theoretical treatment of ghostwriting that provides the best current analysis of how ghostwriting traditions should be understood in relation to AI authorship. Clarifies the legal and ethical distinctions that practitioners need to understand when using AI assistance in named publications.
6. Lemley, M. A., & Casey, B. (2021). Fair Learning. Texas Law Review, 99(4), 743-800. An analysis of copyright questions around AI training data and output. Provides the scholarly foundation for understanding the copyright landscape that practitioners are navigating. Free via SSRN.
Fairness and Access
7. Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press. A book-length examination of how automated systems impose unequal consequences on vulnerable populations. Not about AI writing tools specifically, but the access inequality and structural fairness arguments are directly relevant to the competitive and organizational fairness concerns in Chapter 33.
8. Rainie, L., & Anderson, J. (2017). The Future of Jobs and Jobs Training. Pew Research Center. A Pew canvassing of experts on AI's impact on employment and economic opportunity. Relevant to the fairness dimensions of AI tool advantages in professional competition. Free from Pew Research Center.
9. Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. A comprehensive critical examination of AI's social and political dimensions. The chapters on labor, infrastructure, and power are relevant to the structural fairness concerns in Chapter 33 that go beyond individual practitioner decisions.
Organizational Ethics and Governance
10. Morley, J., Cowls, J., Taddeo, M., & Floridi, L. (2020). The Ethics of AI in Health Care: A Mapping Review. Social Science & Medicine, 260, 113172. A mapping review of AI ethics frameworks applied to healthcare. Representative of the domain-specific AI ethics work that informs organizational governance in regulated industries. Open access.
11. Jobin, A., Ienca, M., & Vayena, E. (2019). The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence, 1(9), 389-399. A systematic analysis of AI ethics guidelines published by governments, organizations, and companies. Useful for understanding the range of approaches and the principles that recur across different frameworks. Informs the personal framework development section of Chapter 33.
12. Partnership on AI — The AI Hiring Assessment Guidelines (partnershiponai.org) PAI's practical resources for organizations using AI in hiring contexts, addressing both fairness and transparency requirements. Useful as a model of responsible AI governance developed for a specific high-stakes use case.
The Legal and Regulatory Dimension
13. Federal Trade Commission — Endorsements and Testimonials in Advertising (ftc.gov/endorsement-guides) The FTC's official guidance on endorsement disclosure requirements, updated to address influencer marketing and increasingly relevant to AI-generated review and testimonial content. Essential reading for marketing professionals. Free.
14. Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841-887. An influential paper on transparency requirements for automated decision systems under the GDPR. Relevant to organizational AI ethics and the transparency obligations that European regulation imposes.
Philosophical Foundations
15. O'Neill, O. (2018). Linking Trust to Trustworthiness. International Journal of Philosophical Studies, 26(2), 293-300. A brief but important philosophical paper on the relationship between trustworthiness and trust. O'Neill's argument — that we should focus on being trustworthy rather than on building trust — is directly applicable to the disclosure and attribution questions in Chapter 33. The professional who has defensible, transparent AI practices is trustworthy regardless of whether specific disclosures are required; the professional who avoids disclosure to maintain appearances is not.