Chapter 40 Exercises: AI and the Creator Economy


Exercise 40.1 — The AI Workflow Audit

Type: Individual analysis
Time: 45–60 minutes
Difficulty: Foundational

Map your current (or planned) content creation workflow from idea to published piece. Break it into at least 10 discrete steps — be specific. "Edit video" is too broad; "remove filler words and pauses," "add B-roll," "export captions," and "create thumbnail" are each distinct steps.

For each step, rate it on two dimensions:

  1. AI Assistance Potential (1–5): How much could an AI tool assist or accelerate this step with current technology?

  2. Human Irreplaceability (1–5): How important is your specific judgment, voice, experience, or relationship with your audience to this step?

Create a 2×2 matrix:

- High AI / Low Human: Prime automation targets — let AI do this
- High AI / High Human: Augmentation zone — AI drafts, you refine
- Low AI / High Human: Your core creative territory — protect this time
- Low AI / Low Human: Consider whether this step is necessary at all
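
If you keep your ratings in a script or spreadsheet export, the quadrant assignment is mechanical. A minimal sketch (the step names, scores, and the threshold of 3 are illustrative assumptions, not part of the exercise):

```python
# Classify workflow steps into the 2x2 matrix from their two 1-5 ratings.
# Step names and scores below are hypothetical examples.

def quadrant(ai_potential: int, human_value: int, threshold: int = 3) -> str:
    """Map a step's two ratings to a quadrant of the 2x2 matrix."""
    ai = "High AI" if ai_potential >= threshold else "Low AI"
    human = "High Human" if human_value >= threshold else "Low Human"
    return f"{ai} / {human}"

steps = {
    "remove filler words": (5, 2),  # (AI assistance, human irreplaceability)
    "write the hook": (4, 5),
    "respond to comments": (2, 5),
    "rename old files": (2, 1),
}

for name, (ai, human) in steps.items():
    print(f"{name:22s} -> {quadrant(ai, human)}")
```

Sorting the output by quadrant makes the "prime automation targets" list fall out directly.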

Write a 200-word reflection: Where are you spending your time that could be automated? Where are you not spending enough time on the irreducibly human parts of your work?


Exercise 40.2 — Run the Content Pipeline

Type: Technical / Applied
Time: 60–90 minutes
Difficulty: Intermediate (requires Python and an LLM API key)

Run the code/ai_content_pipeline.py script on a topic from your own content niche (or a niche you're studying). You'll need to:

  1. Install Python and the requests library (pip install requests)
  2. Get a free-tier or paid API key from OpenAI, Anthropic, or Google AI Studio
  3. Set your API key as an environment variable
  4. Run the script with a real topic
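
The shape of the call such a pipeline makes can be sketched without running anything. The sketch below assumes an OpenAI-style chat completions endpoint; the model name, prompt, and `OPENAI_API_KEY` variable are illustrative, so check code/ai_content_pipeline.py for the values it actually uses:

```python
import os
import json

# Hypothetical sketch of one pipeline request. Endpoint, model, and
# env-var name are assumptions, not the script's actual configuration.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(topic: str) -> tuple[dict, dict]:
    """Build headers and payload for one LLM call (no network here)."""
    api_key = os.environ.get("OPENAI_API_KEY", "")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "user",
             "content": f"List 5 research questions for a video about {topic}."},
        ],
    }
    return headers, payload

headers, payload = build_request("index fund investing for students")
print(json.dumps(payload, indent=2))
# Sending it would be: requests.post(API_URL, headers=headers, json=payload)
```

Keeping the key in an environment variable (step 3) rather than in the source file is what lets you share or commit the script safely.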

After running the script, evaluate the output honestly:

Research questions: Were they the right questions for your niche? What questions did the AI miss that you, as a niche expert, would have asked?

Content outline: How does the AI's structure compare to how you would naturally organize this content? What did it get right? What would you completely restructure?

Hook options: Which hook (if any) could you actually use? What made the others wrong for your voice or audience?

Thumbnail concepts: Do they fit your visual brand? What's missing?

Tweet thread: Would your audience recognize this as sounding like you? What would you change?

Write a 300-word assessment: What did AI accelerate in this process? What did it get wrong that only your niche expertise could correct?


Exercise 40.3 — Sentiment Analysis on Real Comments

Type: Technical / Data Analysis
Time: 45–60 minutes
Difficulty: Intermediate (requires Python)

Run the code/sentiment_analysis.py script. If you have your own YouTube or social media comment data, export it and run the analysis. If not, use the --sample flag to run on generated data.

Install dependencies: pip install vaderSentiment matplotlib pandas

Then run:

python sentiment_analysis.py --sample

or, with your own data:

python sentiment_analysis.py --input your_comments.csv

After running the analysis, write a 200-word interpretation of the results:

If using your own data:

- What is your overall average sentiment score? What does that tell you?
- Which video generated the most positive audience response? Why do you think that is?
- What do the top 10 critical comments have in common? Is there actionable feedback?
- Is your sentiment trend improving or declining over time?

If using sample data:

- What patterns do you notice in the comment distribution?
- Which video in the sample data had the most positive response? Does that make intuitive sense given the titles?
- What would you want to do next if this were your own channel data?


Exercise 40.4 — The Displacement Threat Assessment

Type: Research + Analysis
Time: 60 minutes
Difficulty: Analytical

Choose one creator role or content type that you believe is significantly threatened by AI displacement. Do not pick a role already fully discussed in the chapter — find one that requires additional research.

Your analysis (400–500 words) should include:

  1. Description of the role/content type: What does this creator do? What is their typical business model?

  2. The specific AI threat: What specific AI tool(s) or capability threaten this work? When did this threat emerge? How significant is the disruption already, versus projected?

  3. Evidence of impact: Find at least two real data points — market research, platform data, creator community reports, or news coverage — showing this impact is already occurring.

  4. What's protected: What dimension of this creator's work, if any, is most resistant to AI displacement? What makes that dimension hard to automate?

  5. Strategic implication: If you were advising this creator type, what would you tell them to do differently given the AI threat landscape?

Share your analysis in a small group. Did different people choose different roles? Were there common patterns in what's most and least protected from AI displacement?


Exercise 40.5 — The Training Data Ethics Debate

Type: Structured debate / Discussion
Time: 45–60 minutes
Difficulty: Conceptual / Ethical

This exercise structures a debate around the training data ethics question raised in Section 40.5 and the ⚖️ equity callout.

Resolution: "AI companies that trained their models on creator content without consent or compensation are obligated to pay retroactive compensation to affected creators."

Teams: Divide into three groups:

- Pro: Arguing for the resolution (retroactive compensation is ethically required)
- Con: Arguing against the resolution (compensation is not ethically required or not practically feasible)
- Platform: Arguing from the perspective of a platform that uses AI tools — what position should they take?

Each team prepares for 15 minutes; the class then holds a 20-minute structured debate (5-minute openings, 5-minute rebuttals, and 5-minute closings).

After the debate, drop out of your assigned positions and have a 10-minute open discussion:

- What was the strongest argument from the opposing side?
- What's the most practical path toward fair treatment of creators whose work trained AI?
- As a creator who uses AI tools, what is your personal position?


Exercise 40.6 — Build Your AI Disclosure Policy

Type: Individual writing
Time: 30 minutes
Difficulty: Reflective / Applied

Draft a 200–300 word "AI use disclosure policy" for your creator brand. This should be something you'd post publicly — in your bio, on your website's about page, or as a pinned post.

Your policy should address:

  1. What AI tools you use (or plan to use) in your content creation
  2. At what level of AI involvement you would disclose it — is AI-assisted research disclose-worthy? AI-drafted outlines? AI-generated images? AI-cloned voice?
  3. Which uses of AI you will never allow in your content creation
  4. How you ensure AI use doesn't replace your genuine voice and perspective

After drafting, exchange policies with a classmate and give each other feedback on two questions:

- Is the policy clear and specific enough to be meaningful to an audience?
- Does it feel authentic to the creator's actual approach, or does it feel like a public relations document?

Consider: should creators be required to have and publish AI disclosure policies? What would the creator ecosystem look like if this were a platform requirement?


Exercise 40.7 — Marcus's AI Decision

Type: Scenario analysis / Short writing
Time: 30–40 minutes
Difficulty: Applied / Ethical

Read the following scenario carefully:

Marcus is preparing a video on the topic "How to Invest Your First $5,000: A Step-by-Step Guide." He uses Claude to generate a research outline, then uses Perplexity to find current fund performance data. He then asks Claude to write a draft script based on his outline and the data. The draft is well-structured and factually accurate based on his research. He then records himself reading the script with light editing — perhaps 20% of the words are changed to match his voice.

The video publishes. It gets 180,000 views, more than his average. Several viewers report making their first investment decisions based on the video.

Questions to answer (150–200 words each):

  1. The disclosure question: Given Marcus's described process, is this video primarily AI-generated content that requires disclosure, or is it primarily Marcus's content that happened to use AI tools? Where exactly is the ethical line?

  2. The expertise question: Marcus is an MBA student with genuine financial expertise. The AI-drafted script was factually accurate and his research verified the claims. Does the AI origin of the draft affect the ethical standing of the financial guidance? Why or why not?

  3. The stakes question: Would your answers to questions 1 and 2 change if the video were about fashion recommendations instead of investment advice? Should it? What does this tell you about how creator ethics interact with the stakes of the content area?
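
One way to ground question 1 is to actually measure how much of the final script survives from the AI draft. The standard library's difflib gives a rough word-level similarity ratio; the two strings below are invented stand-ins, not Marcus's script:

```python
import difflib

# Invented stand-in texts, not the actual scripts from the scenario.
ai_draft = ("First, open a brokerage account with low fees. "
            "Next, put your five thousand dollars into a broad index fund.")
final = ("First thing I'd do: open a brokerage account with low fees. "
         "Then I'd put that five thousand into a broad, boring index fund.")

# Compare word sequences rather than characters for a fairer estimate.
ratio = difflib.SequenceMatcher(None, ai_draft.split(), final.split()).ratio()
print(f"Word-level similarity: {ratio:.0%}")
```

A number like this doesn't settle the ethical question, but it forces you to say where on the 0–100% scale disclosure becomes obligatory — which is exactly what "where is the line?" is asking.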