Chapter 34: Quiz — AI Ethics in Emerging Markets
Instructions: Choose the best answer for each multiple-choice question. For short-answer questions, write 2–4 complete sentences. This quiz covers the main chapter content, both case studies, and the key takeaways.
Multiple Choice (Questions 1–15)
1. The term "emerging markets" is described in the chapter as analytically problematic primarily because:
   - A) The term is inaccurate: these markets are not actually emerging, they are declining
   - B) It encompasses an enormous diversity of economic, political, and social contexts that resist a unified analytical treatment
   - C) It was coined by development economists to promote investment and therefore lacks scientific validity
   - D) It excludes China, which is the most important AI market outside the United States

2. The concept of "data colonialism," as developed by scholars like Nick Couldry and Ulises Mejias, draws an analogy between:
   - A) The collection of data from Global South populations and the historical extraction of natural resources
   - B) The imposition of EU data protection law on non-EU countries
   - C) China's deployment of surveillance infrastructure in African countries
   - D) The use of AI to automate colonial administrative processes

3. The chapter identifies which of the following as the most significant beneficial AI application domain for smallholder farmers in sub-Saharan Africa?
   - A) Drone-based precision agriculture
   - B) Agricultural AI including crop disease detection and market price information
   - C) Blockchain-based supply chain tracking
   - D) Satellite-based weather prediction systems

4. The Masakhane NLP initiative is significant because it:
   - A) Is the largest corporate investment in African language AI, funded by Google
   - B) Demonstrates community-driven African language AI research that centers African researchers and data sovereignty
   - C) Has been contracted by the African Union to develop an official continental language AI standard
   - D) Is the first government-funded initiative to develop AI capabilities for African languages

5. The TIME magazine investigation into Sama Group and OpenAI revealed that Kenyan content moderation workers were paid approximately:
   - A) $15–$20 per hour, equivalent to US minimum wage
   - B) $5–$8 per hour, the local tech sector average
   - C) $1.32–$2 per hour, while reviewing traumatic content including descriptions of violence and child exploitation
   - D) No wage; workers were compensated through equity in Sama Group's impact fund

6. According to the chapter, the "Safe City" AI surveillance programs deployed by Huawei in several African countries have been documented as being used for:
   - A) Traffic management and crime prevention, consistent with their stated purpose
   - B) Monitoring and suppressing political opposition in countries including Zimbabwe and Uganda
   - C) Tracking environmental violations and water usage
   - D) Managing pandemic contact tracing and health monitoring

7. The "infrastructure equity problem" in AI refers to the observation that:
   - A) AI companies in developed countries receive more venture capital than those in developing countries
   - B) AI research labs in the Global South lack access to the computing infrastructure needed for frontier model development
   - C) AI applications requiring reliable internet, smartphones, and electricity systematically exclude populations with the greatest development needs
   - D) The physical infrastructure required to power large AI data centers is concentrated in a small number of countries

8. Language model underrepresentation of African languages is caused primarily by:
   - A) The lack of written traditions in most African languages
   - B) The absence of African researchers at major AI labs
   - C) The skewed distribution of internet text toward languages spoken by wealthy, connected populations
   - D) Legal restrictions on using African language content for AI training

9. Medical AI bias affecting non-Western patient populations has been documented most prominently in which of the following applications?
   - A) Oncology drug discovery AI that recommends treatments not approved in African countries
   - B) Dermatology AI trained predominantly on light-skinned patients that performs poorly on patients with darker skin tones
   - C) Hospital administration AI that uses English-language records and fails for non-English speakers
   - D) Telemedicine AI that cannot function on the cellular networks common in developing countries

10. Huawei's argument that it "sells technology, not governance" and is therefore not responsible for how Safe City systems are used is critiqued in the chapter primarily because:
   - A) Huawei is legally required under Chinese law to share surveillance data with Chinese intelligence services
   - B) Misuse of surveillance technology by authoritarian governments is foreseeable at the point of sale, making the claim of ignorance ethically inadequate
   - C) Huawei's contract terms explicitly provide that it will monitor how systems are used to ensure compliance with human rights standards
   - D) International export control law holds technology suppliers responsible for foreseeable misuse

11. The financing structure of many Safe City deployments in Africa, which rely on Chinese state loans from Exim Bank, is identified as a governance concern because:
   - A) Chinese state loans carry higher interest rates than World Bank or IMF alternatives
   - B) Loan conditions may constrain recipient governments' ability to make governance decisions about the surveillance infrastructure independently
   - C) Chinese state financing requires recipient countries to exclude Western technology companies from future procurement
   - D) Exim Bank loans must be repaid in Chinese renminbi, creating currency risk

12. Regarding AI annotation labor, the chapter's analysis of the Sama Group controversy concludes that:
   - A) The controversy was primarily the result of Sama Group's individual management failures, not industry-wide structural problems
   - B) OpenAI bears no responsibility because it maintained contractual requirements about working conditions
   - C) The layered contracting structure created an accountability gap that allowed both OpenAI and Sama to deflect responsibility while workers bore the costs
   - D) The psychological harm experienced by workers was adequately addressed by Sama Group's mental health program

13. The chapter's discussion of financial inclusion AI (e.g., companies like Tala and Branch) identifies which of the following as a key ethical concern?
   - A) These companies charge interest rates that exceed legal limits in their target markets
   - B) AI credit models using mobile phone behavioral data may encode proxy discrimination affecting women, elderly, and rural populations whose usage patterns differ
   - C) These companies collect and sell borrowers' financial data to credit bureaus without consent
   - D) Financial inclusion AI replicates traditional credit scoring, failing to serve the credit-invisible population it claims to reach

14. The "representation problem in global AI" refers to:
   - A) The lack of AI representation in global governance institutions
   - B) The insufficient number of AI-generated media representations of non-Western populations
   - C) The demographic homogeneity of the global AI research community, which shapes what problems AI research addresses and what assumptions AI systems encode
   - D) The inadequacy of copyright law to protect AI-generated content representing diverse cultures

15. The chapter identifies which of the following as the most effective current example of community-driven AI research for Global South contexts?
   - A) FAIR for Africa (Meta's Nairobi-based research lab)
   - B) Masakhane NLP, a grassroots research organization building African language NLP through African researchers
   - C) Google's Africa Center of Excellence program
   - D) The African Union's continental AI lab, established under the AI Continental Strategy
Short Answer (Questions 16–20)
16. Explain the concept of "leapfrogging" in the context of technology adoption in developing economies. What are the conditions under which AI leapfrogging can genuinely benefit emerging market populations, and what conditions typically prevent those benefits from materializing?
17. The chapter describes the Sama Group controversy as illustrating "the accountability gap in layered contracting." What does this mean, and how does the contracting structure between OpenAI, Sama Group, and Kenyan workers create this gap?
18. Describe two specific ways in which the demographic homogeneity of the global AI research community produces AI systems that fail or cause harm when deployed in emerging market contexts. Use concrete examples from the chapter.
19. What is "impact sourcing," and how does the chapter argue that it can function as a form of ethics washing? What would distinguish genuine ethical employment in AI annotation from impact sourcing ethics washing?
20. The chapter calls for "data sovereignty agreements" as a component of responsible AI deployment in emerging markets. What would such an agreement need to contain to be genuinely protective of community interests, and what obstacles would organizations face in implementing such agreements?
Answer Key: 1-B, 2-A, 3-B, 4-B, 5-C, 6-B, 7-C, 8-C, 9-B, 10-B, 11-B, 12-C, 13-B, 14-C, 15-B. Short answers should demonstrate understanding of the chapter's key concepts, draw on specific examples from the chapter, and show genuine analytical engagement rather than summary repetition.