Chapter 22 Quiz: No-Code / Low-Code AI


Multiple Choice

Question 1. In the chapter's opening scene, NK builds a customer churn model using DataRobot in approximately two hours. Her model's AUC is 0.81, compared to Tom's hand-coded model AUC of 0.83. Which of the following best characterizes Professor Okonkwo's assessment of this comparison?

  • (a) NK's approach is superior because it achieves nearly the same result in a fraction of the time.
  • (b) Tom's approach is superior because his model has a higher AUC score.
  • (c) The comparison highlights that no-code tools democratize AI, but the user may skip critical steps like data examination and model understanding.
  • (d) The comparison shows that AutoML platforms have made professional data scientists obsolete.

Question 2. Which of the following is NOT a step that AutoML platforms typically automate?

  • (a) Feature engineering
  • (b) Problem framing and business objective definition
  • (c) Hyperparameter tuning
  • (d) Ensemble creation and model stacking

Question 3. A marketing analyst at a financial services company wants to build a lead scoring model using an AutoML platform. The model will be used to prioritize which prospects receive outbound sales calls. Which tier of Ravi's governance framework would this use case most likely fall under?

  • (a) Tier 1: Exploration — self-service with training certification
  • (b) Tier 2: Departmental — peer review plus AI team consultation
  • (c) Tier 3: Production — full AI team review, governance board approval, ongoing monitoring
  • (d) This use case should not use no-code tools under any governance framework

Question 4. Which of the following best describes the "No Free Lunch Theorem" as it relates to AutoML?

  • (a) AutoML platforms are always free to use for initial experimentation.
  • (b) No single algorithm is optimal for all problems, which is why AutoML trains multiple algorithms and lets the data determine the best one.
  • (c) The cost of AutoML platforms always exceeds the cost of hiring data scientists in the long run.
  • (d) Every machine learning problem has exactly one correct algorithm, and AutoML helps find it.

Question 5. Ravi's shadow AI audit at Athena reveals several unauthorized AI tool uses. Which discovery raises the most immediate governance concern?

  • (a) Marketing using Jasper AI to generate email copy with access to customer persona data
  • (b) Finance analysts uploading quarterly revenue spreadsheets to personal ChatGPT accounts
  • (c) HR using an AutoML tool for resume screening based on historical hiring data
  • (d) Supply Chain using a free-tier AutoML platform for demand forecasting

Question 6. An organization is evaluating no-code AI platforms using the seven-dimension vendor evaluation framework. The organization operates in a heavily regulated industry (healthcare) and processes patient data. Which evaluation dimension should receive the highest weight in their analysis?

  • (a) Functional Capability
  • (b) Pricing and Total Cost of Ownership
  • (c) Security and Compliance
  • (d) Lock-In Risk

Question 7. Which of the following scenarios is LEAST suitable for a no-code AutoML approach?

  • (a) A retail company building a customer churn prediction model using clean CRM data
  • (b) A startup prototyping a fraud detection model to validate whether the signal exists in the data
  • (c) A hospital building a real-time patient deterioration detection system that must integrate with multiple EHR systems, meet FDA approval requirements, and run predictions within 200 milliseconds
  • (d) A small e-commerce company building a product recommendation model for its website

Question 8. The chapter describes the difference between "build," "buy," and "configure" as AI development strategies. Which of the following best describes the "configure" option?

  • (a) Purchasing a complete AI solution from a SaaS vendor with no customization
  • (b) Developing a custom AI solution from scratch using Python and TensorFlow
  • (c) Using no-code or low-code platforms to create custom AI solutions without writing code, providing moderate speed and moderate customization
  • (d) Hiring a consulting firm to build a custom AI solution on the organization's behalf

Question 9. A citizen data scientist builds an AutoML model that achieves an AUC of 0.92 on the training leaderboard. When deployed, the model performs poorly, with an AUC of 0.61 on new data. What is the most likely explanation?

  • (a) The AutoML platform has a software bug that inflates training metrics.
  • (b) The model is overfitting to the training data, and the platform's validation methodology did not adequately detect generalization failure — possibly due to data leakage, a non-representative training set, or temporal data that was not properly split.
  • (c) An AUC of 0.61 is actually good performance for most business problems.
  • (d) The citizen data scientist selected the wrong algorithm from the leaderboard.

Question 10. Which of the following statements about embedded AI features (Salesforce Einstein, Tableau AI, Microsoft Copilot) is most accurate?

  • (a) Embedded AI provides the highest level of model customization because the vendor knows the data best.
  • (b) Embedded AI is ideal for organizations that need full transparency into model methodology for regulatory compliance.
  • (c) Embedded AI provides convenience and fast deployment but creates vendor dependency and limits transparency into the model's methodology.
  • (d) Embedded AI eliminates the need for any AI governance because the vendor handles governance.

True or False

Question 11. True or False: AutoML platforms eliminate the need for data quality review because they automatically detect and correct data quality issues.


Question 12. True or False: The citizen data scientist program at Athena requires all AI use cases — including using ChatGPT for drafting emails — to undergo a full governance board review before proceeding.


Question 13. True or False: According to the chapter, the greatest risk of AutoML is not that it builds bad models, but that it builds models so quickly and easily that organizations deploy them without adequate validation, governance, or monitoring.


Question 14. True or False: Visual pipeline builders like KNIME and Azure ML Designer are most valuable for problems at the extreme ends of the complexity spectrum — either very simple or very complex.


Question 15. True or False: Shadow AI typically results from employees' malicious intent to bypass organizational security controls.


Short Answer

Question 16. Explain why the chapter describes "democratization without governance" as "chaos" and "governance without democratization" as "bottleneck." In two to three sentences, describe how a well-designed citizen data science program balances these two forces.


Question 17. A small retail company (100 employees, no data science team) wants to use AI to predict which customers are most likely to respond to a promotional campaign. Using the build-vs-buy-vs-configure framework, recommend an approach and justify your recommendation in three to four sentences.


Question 18. Describe two scenarios where a model that started as a no-code prototype should be "graduated" to a code-based, custom-built solution. What signals indicate that the no-code approach has reached its limits?


Scenario-Based

Question 19. A pharmaceutical company's marketing team uses ChatGPT (non-enterprise version) to draft promotional materials for a prescription drug. They paste clinical trial results and patient outcome data into the chat interface to help the AI generate accurate claims.

  • (a) Identify two immediate risks of this practice.
  • (b) Under the chapter's tiered governance framework, how should this use case be classified?
  • (c) What should the company do in response?

Question 20. An insurance company deploys an AutoML model to predict claim fraud. The model achieves high accuracy on test data. Six months later, the company discovers that the model disproportionately flags claims from specific ZIP codes that correlate with minority neighborhoods — a result of using ZIP code as a feature, which serves as a proxy for race.

  • (a) What type of bias does this represent?
  • (b) Could the AutoML platform have prevented this? Why or why not?
  • (c) What governance process, if one had been in place, would have caught this issue before deployment?
  • (d) Connect this scenario to the chapter's forward reference to Chapter 25 (Bias in AI Systems).

Answer Key

  1. (c) — Okonkwo acknowledges the democratization but highlights that NK skipped data examination, did not understand the algorithm selection, and could not defend the model in a regulatory proceeding.

  2. (b) — Problem framing and business objective definition is explicitly identified as something AutoML does NOT automate. The platform accelerates the solution but does not validate the question.

  3. (b) — Lead scoring that informs outbound sales calls is a departmental decision tool, not a personal exploration (Tier 1) and not an automated customer-facing or regulated decision (Tier 3). It requires peer review and AI team consultation.

  4. (b) — The No Free Lunch Theorem states that no single algorithm is universally optimal. AutoML operationalizes this by training multiple algorithms and selecting the best one for the specific dataset.
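  To make the mechanism concrete, here is a minimal pure-Python sketch of the "train several candidates, let held-out data pick the winner" pattern that AutoML platforms automate. The two toy models and the tiny dataset are illustrative stand-ins, not anything from the chapter:

  ```python
  # No Free Lunch in miniature: no fixed prior favors either model;
  # validation error on held-out data selects the winner.

  def mean_model(train_y):
      """Always predicts the training mean."""
      mu = sum(train_y) / len(train_y)
      return lambda x: mu

  def nearest_neighbor_model(train_x, train_y):
      """Predicts the label of the closest training point."""
      def predict(x):
          i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
          return train_y[i]
      return predict

  def mse(model, xs, ys):
      """Mean squared error of a model on a validation set."""
      return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

  # Toy data (y = 2x) with a held-out validation split.
  train_x, train_y = [1, 2, 3, 4], [2, 4, 6, 8]
  val_x, val_y = [1.5, 3.5], [3.0, 7.0]

  candidates = {
      "mean": mean_model(train_y),
      "1-nn": nearest_neighbor_model(train_x, train_y),
  }
  best = min(candidates, key=lambda name: mse(candidates[name], val_x, val_y))
  print(best)  # the data, not a prior preference, picks the model
  ```

  Real platforms do the same thing at scale, with dozens of algorithms and cross-validation instead of a single split.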

  5. (c) — HR resume screening using historical data creates the most immediate governance concern due to potential discrimination under employment law, affecting individual candidates' livelihoods and exposing Athena to legal liability.

  6. (c) — Security and Compliance should receive the highest weight for a healthcare organization processing patient data, due to HIPAA requirements, data privacy obligations, and regulatory scrutiny.

  7. (c) — The hospital scenario involves multi-system integration, regulatory approval (FDA), and stringent latency requirements — all factors that exceed no-code capabilities.

  8. (c) — Configure represents the middle option: using no-code/low-code platforms for custom solutions with moderate speed and customization, positioned between full build and pure buy.

  9. (b) — A dramatic gap between training and deployment performance strongly suggests overfitting, potentially caused by data leakage, non-representative training data, or improper temporal splitting.
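  The temporal-splitting failure mode mentioned above is easy to illustrate. This hypothetical sketch shows the correct discipline for time-ordered data: hold out the most recent slice rather than splitting randomly, so no future information leaks into training:

  ```python
  from datetime import date

  # Hypothetical time-ordered records: (event_date, feature).
  records = [(date(2023, m, 1), m) for m in range(1, 13)]  # one per month

  # Random splitting would scatter future rows into the training set,
  # inflating leaderboard metrics. For temporal data, split by time.
  records.sort(key=lambda r: r[0])       # ensure chronological order
  cutoff = int(len(records) * 0.75)      # most recent 25% held out
  train, valid = records[:cutoff], records[cutoff:]

  # Invariant: everything in training predates everything in validation.
  assert max(r[0] for r in train) < min(r[0] for r in valid)
  print(len(train), len(valid))  # → 9 3
  ```

  When a platform's default validation ignores this ordering, the training AUC can look excellent while deployment performance collapses, exactly the 0.92-to-0.61 gap in the question.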

  10. (c) — Embedded AI features provide convenience and speed but create vendor dependency and limit transparency into model internals. They do not eliminate the need for governance.

  11. False. — AutoML platforms can identify some data quality issues but do not eliminate the need for human data quality review. The chapter explicitly warns that AutoML makes data quality problems "invisible," not "optional."
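  A minimal profiling pass of the kind a human reviewer should still run before handing data to an AutoML platform might look like the following. The column names and rows are hypothetical:

  ```python
  # Minimal pre-AutoML data-quality review: missingness and cardinality
  # per column. AutoML will happily train on flawed data without complaint.

  rows = [
      {"tenure": 12,   "plan": "basic", "churned": 0},
      {"tenure": None, "plan": "basic", "churned": 1},
      {"tenure": 3,    "plan": "basic", "churned": 1},
  ]

  profile = {}
  for col in rows[0]:
      values = [r[col] for r in rows]
      profile[col] = {
          "missing": sum(v is None for v in values) / len(values),
          "distinct": len({v for v in values if v is not None}),
      }
      print(col, profile[col])

  # A constant column (1 distinct value) carries no signal; high
  # missingness or a suspiciously predictive column deserves a closer look.
  ```

  In this toy data, `plan` is constant and `tenure` is one-third missing; both are problems a platform leaderboard will never surface on its own.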

  12. False. — The tiered governance model applies proportional governance. Tier 1 (exploration/personal productivity) requires only training certification, not full governance board review.

  13. True. — This is a direct quote from the chapter's caution about AutoML risks.

  14. False. — Visual pipeline builders are most valuable in the middle of the complexity spectrum — too complex for fully automated AutoML but not complex enough to justify custom code.

  15. False. — Shadow AI typically results from a rational mismatch between employee needs and organizational processes, not malicious intent. Employees are usually trying to solve business problems quickly.

  16. Democratization without governance is chaos because untrained users deploying unvalidated models with sensitive data create data leakage, compliance violations, and biased decisions — as illustrated by Athena's shadow AI audit. Governance without democratization is a bottleneck because centralizing all AI development in a small data science team creates a queue that prevents the organization from scaling AI across use cases. A well-designed citizen data science program balances both by providing approved tools and training for self-service use at low-risk tiers while requiring increasing oversight for higher-risk applications.

  17. A small retail company with no data science team should configure — using an AutoML platform (such as Google Vertex AI AutoML or a similar consumption-priced service) to build a campaign response prediction model. Building is inappropriate because the company lacks the talent and infrastructure for custom ML development. Buying a complete vendor solution may be overkill for a 100-person company and creates vendor dependency for a standard problem. Configuring with AutoML provides the middle path: the company can build a custom model on its own data without hiring data scientists, at a fraction of the cost of a full build.

  18. A no-code prototype should graduate to custom code when (1) the model becomes strategically critical and requires custom feature engineering, real-time scoring at scale, or integration with complex production systems that exceed the platform's deployment capabilities, or (2) the model operates in a regulated domain where full transparency, complete audit trails, and the ability to reproduce predictions from first principles are required by law. Signals that the no-code approach has reached its limits include frequent workarounds to platform constraints, performance bottlenecks under production load, inability to implement domain-specific logic within the platform's visual interface, and governance requirements that exceed the platform's interpretability features.

  19. (a) Data leakage: clinical trial results and patient outcome data uploaded to a non-enterprise ChatGPT may be used for model training, potentially exposing proprietary and regulated data. Compliance violation: promotional materials for prescription drugs are regulated by the FDA, and using AI-generated content without proper review may violate pharmaceutical marketing regulations. (b) This should be classified as Tier 3 — customer-facing content in a regulated industry requires full governance review, legal and regulatory compliance assessment, and ongoing monitoring. (c) The company should immediately cease using non-enterprise ChatGPT for this purpose, conduct a data exposure assessment, implement an approved enterprise AI tool with data privacy guarantees, and establish a review process that includes regulatory affairs and legal sign-off before any AI-generated pharmaceutical marketing content is published.

  20. (a) This represents proxy discrimination or indirect bias — the model uses a feature (ZIP code) that correlates with a protected characteristic (race), resulting in disparate impact on minority communities. (b) The AutoML platform could not have prevented this independently because the platform lacks the domain knowledge and ethical judgment to identify which features serve as proxies for protected characteristics. Automated bias detection can flag statistical correlations but cannot make the normative judgment that using ZIP code in fraud prediction is ethically problematic. (c) A Tier 3 governance review — including bias testing, disparate impact analysis, and review by legal and compliance teams — would have caught this issue before deployment by requiring the team to test for differential performance across demographic groups. (d) Chapter 25 will examine bias in AI systems in depth, including the specific challenge of proxy variables that create discriminatory outcomes even when protected characteristics are not directly included as model features.
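  The disparate impact analysis named in part (c) can be sketched with the "four-fifths rule" screening heuristic used in US employment law. The flag rates below are hypothetical; in practice they would come from the model's predictions joined against demographic data:

  ```python
  # Illustrative disparate-impact screen: compare the model's flag rate
  # across demographic groups. Rates here are made up for the sketch.

  flag_rate = {"group_a": 0.30, "group_b": 0.12}  # share of claims flagged

  # Four-fifths rule: the lowest group rate divided by the highest.
  ratio = min(flag_rate.values()) / max(flag_rate.values())
  print(f"disparate impact ratio: {ratio:.2f}")

  # A ratio below 0.80 is a conventional red flag warranting review.
  if ratio < 0.80:
      print("potential disparate impact - investigate before deployment")
  ```

  A check like this catches the symptom (differential flag rates), but identifying the cause — ZIP code acting as a proxy for race — still requires the human domain review described in the answer.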


Selected answers appear in Appendix B: Answers to Selected Exercises.