Appendix E: FAQ — Common Questions and Frustrations

These are the questions that come up most often from professionals learning to work with AI tools. They are grouped thematically so you can quickly find what is relevant to your current situation.


Getting Started

Why does AI give me different answers to the same question?

AI language models are probabilistic — they do not retrieve a single stored answer the way a search engine retrieves a webpage. Each response is generated by sampling from a probability distribution over possible next words, which means identical inputs can produce different outputs.

Several factors amplify this variability. First, most AI systems have a "temperature" setting that controls how much randomness is introduced. Higher temperature means more creative but less consistent outputs. Second, even identical prompts are rarely truly identical — timestamps, session history, and small character differences can affect outputs. Third, as AI providers update their models (sometimes without announcement), responses shift.

What this means in practice: for tasks where consistency matters (classification, extraction, precise formatting), lower the temperature if you have API access, use very specific prompts, and validate outputs programmatically. For tasks where variety is an asset (brainstorming, creative drafts), the variability is a feature, not a bug.
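
To make the temperature trade-off concrete, here is a minimal sketch of choosing API parameters by task type. The parameter names (`temperature`, `max_tokens`) follow common conventions in OpenAI- and Anthropic-style SDKs, but the exact names and the specific values shown are illustrative, not a required recipe:

```python
# Sketch: choosing generation settings by task type. Parameter names
# follow common chat-API conventions; values are illustrative defaults.

def request_settings(task_type: str) -> dict:
    """Return hypothetical API parameters tuned for the task type."""
    if task_type in ("classification", "extraction", "formatting"):
        # Low temperature: favor the most probable tokens for consistency.
        return {"temperature": 0.0, "max_tokens": 512}
    if task_type in ("brainstorming", "creative"):
        # Higher temperature: allow more varied, exploratory output.
        return {"temperature": 0.9, "max_tokens": 1024}
    # Middle-ground default for everything else.
    return {"temperature": 0.7, "max_tokens": 1024}

print(request_settings("extraction"))  # {'temperature': 0.0, 'max_tokens': 512}
```

Even at temperature 0, outputs are not guaranteed to be byte-identical across calls, which is why programmatic validation of outputs remains important for consistency-critical tasks.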


How do I know which AI tool to use?

Start by identifying the dominant category of your task (text, code, images, research — see Appendix D's decision tree). Within each category, consider: What does the output get used for? Who sees it? How much accuracy or quality matters? What tools does your team already use?

For most knowledge workers starting out, a general-purpose chat AI (Claude or ChatGPT) will cover 80% of use cases. Add specialized tools only when you encounter clear gaps — for example, finding that standard chat tools do not perform well for academic literature review (use Elicit or Consensus) or for in-editor code assistance (use Copilot or Cursor).

Resist the temptation to build a complex multi-tool stack before you have real experience. Start with one tool, develop genuine proficiency, then expand deliberately based on actual needs.


Is the free version good enough, or do I need to pay?

For occasional, exploratory use, the free version is almost certainly good enough. For professional, high-volume, or quality-critical use, it probably is not.

The free tiers of Claude, ChatGPT, and Gemini give you access to capable (if not frontier) models, subject to daily usage limits. What you give up with free tiers: access to the most capable models, higher or unlimited usage, priority access during peak times, longer context windows, and advanced features like image generation or web browsing.

If you are using AI tools for more than 30-45 minutes per day for work tasks, the $20/month individual subscription is almost always cost-effective. If you are spending time fighting usage limits or switching between tools, that is a signal to pay for at least one subscription.

For API access to build integrations, you will need to pay per-token regardless of subscription status. See Appendix B for cost estimation guidance.


Prompting

Why does AI ignore my instructions?

Usually one of three things: the instruction conflicts with the model's safety guidelines, the instruction is ambiguous enough that the model interprets it differently than you intend, or the instruction gets "diluted" in a long prompt where other context takes precedence.

For safety-related refusals: reframe the task to clarify legitimate purpose. "Write a villain's threatening monologue for my novel" succeeds where "write a threatening message" might not.

For ambiguity: be more explicit. "Do not use bullet points" can be overridden by implicit formatting cues in your content. "Format your entire response as continuous prose paragraphs — do not use any bullets, headers, or lists" is much clearer.

For dilution in long prompts: put critical instructions at the end of the prompt, where they get more weight, or state them separately as a constraint block. For very long conversations, periodically restate key constraints.
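
The "constraint block at the end" advice can be sketched as a small prompt-assembly helper. Everything here (the function name, the block format, the sample text) is illustrative; the only point it demonstrates is placing critical instructions after the bulk of the context:

```python
# Sketch: assembling a long prompt so critical instructions appear last,
# where they tend to carry more weight. All names are illustrative.

def build_prompt(task: str, context: str, constraints: list[str]) -> str:
    """Place pasted context first, the task next, and constraints last."""
    block = "\n".join(f"- {c}" for c in constraints)
    return (
        f"{context}\n\n"
        f"Task: {task}\n\n"
        f"Constraints (follow all of these strictly):\n{block}"
    )

prompt = build_prompt(
    task="Summarize the attached meeting notes.",
    context="Notes: ... (pasted document) ...",
    constraints=[
        "Continuous prose only; no bullets, headers, or lists.",
        "Under 150 words.",
    ],
)
```

In a long conversation, re-sending just the constraint block every few turns serves the same purpose as the restatement advice above.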


How do I get shorter or longer responses?

Be direct and specific. Vague requests like "be concise" are interpreted relative to the model's default behavior, which varies by context.

For shorter: "Answer in three sentences or fewer." / "Give me only the bullet points — no explanations." / "Keep this under 150 words." / "One paragraph only."

For longer: "Expand on each of the following points in a full paragraph." / "I want a thorough treatment of this topic — at least 800 words." / "Do not summarize or skip ahead — work through each step fully."

If the model still does not comply, try moving your length instruction to the very beginning of your prompt and making it even more explicit. You can also end your prompt with a reminder: "Remember: respond in under 100 words."


How do I stop AI from adding disclaimers to everything?

Acknowledge the concern proactively in your prompt, and give explicit permission to skip the standard caveats.

Examples:

- "I understand this is not medical advice. Please skip the standard disclaimers and just answer the question directly."
- "I am aware of the legal complexity here. I need a practical starting point, not a referral to a professional. Please answer plainly."
- "Do not hedge. Do not add disclaimers. I want your best assessment, stated confidently."

Most models respond well to this framing because it signals that you are an informed adult who has already considered the caveats. Providing context about your role also helps: "As a nurse reviewing patient cases..." establishes that excessive hedging is unhelpful in your context.


Why does giving more detail sometimes make the output worse?

More detail can actually work against you in several ways. Long prompts can create conflicting constraints that the model tries to satisfy simultaneously, producing a compromise that fully satisfies none of them. Irrelevant details can distract the model away from what matters most. And over-specification can suppress the model's own judgment, which is often valuable — you get a rigid literal interpretation instead of a thoughtful response.

The fix: identify what is truly essential context versus what is nice to have. Move from "here is everything I know" to "here is what you specifically need to answer well." Include constraints, not backstory. If you find a long prompt underperforming, try stripping it back to the core request and adding constraints one at a time until quality degrades.


Trust and Accuracy

How do I know if AI output is accurate?

You often cannot know with certainty without independent verification. This is a fundamental characteristic of current AI systems, not a bug that will be patched.

A practical framework: categorize the stakes and verifiability of each claim. For low-stakes content (blog post phrasing, brainstormed ideas, formatting suggestions), the verification cost exceeds the error risk — use freely and accept occasional imperfection. For medium-stakes content (research summaries, data-containing reports), spot-check key claims against primary sources. For high-stakes content (medical, legal, financial, or anything with safety implications), treat AI output as a starting hypothesis and verify each material claim independently before acting.

Also watch for signals of uncertainty: hedging language ("may," "generally," "it is believed that"), a model noting its training cutoff, or a willingness to say "I'm not certain" are all positive signs. Confident, unqualified assertions about specific facts, statistics, or recent events deserve more scrutiny, not less.


Can I cite AI-generated content?

Whether you can and whether you should are different questions.

Technically: most AI-generated text is not citable as a source in the academic sense because it is not a fixed, retrievable document with an identifiable author, peer review, or accountability. A citation implies that a reader can go verify the original source — AI outputs cannot fulfill that function.

Professionally and practically: whether disclosure is required depends on your field, employer policy, publication guidelines, and the nature of the task. Many professional contexts now require disclosure when AI was used substantively in creating work product. See the question on disclosure below for more detail.

If you want to cite the actual research or ideas that an AI helped you locate or synthesize, cite those original sources. Use AI as a research aid, then cite what it found.


Why does AI confidently say wrong things?

This behavior — commonly called hallucination — arises from the mechanics of how language models work. They predict the most statistically plausible sequence of text given the prompt, drawing on patterns in training data. A confident, declarative statement is often statistically "what comes next" after certain types of questions, regardless of whether the underlying claim is true.

The model does not have a truth-checking mechanism separate from language generation. It cannot "look things up" or "double-check" in the way a human researcher would. Its internal representation of knowledge is distributed across billions of parameters, not stored as discrete verifiable facts.

This is why hallucination is most common in areas with less training data coverage, for specific details (dates, names, statistics), and for things that have changed since the training cutoff. The model fills gaps with plausible-sounding information rather than uncertainty signals.


What is the difference between what AI knows and what it makes up?

AI models have genuine knowledge — absorbed from training data — and they have generation patterns that can produce plausible-sounding content that was never in any source. The distinction is not clearly visible from the output.

Rough rule of thumb: well-established, widely documented facts that appeared in many training sources (how photosynthesis works, the plot of a famous novel, the syntax of a programming language) are more reliable than specific details (a particular statistic, a lesser-known historical date, a niche regulation, any recent event). Unique, specific, hard-to-verify claims deserve more skepticism than general concepts.

Using an AI that cites sources (like Perplexity) or asking the model to identify what it is less certain about can help, but neither is a complete solution. The core skill is recognizing which outputs need external verification before being acted upon.


Workflow

How do I maintain context across multiple conversations?

AI chat tools currently do not have persistent memory across separate conversations by default (though some tools, such as ChatGPT with its Memory feature, are adding it). Each new conversation starts fresh.

Practical strategies: build a reusable context packet (see Appendix C, Template 3) that you paste at the start of sessions where context matters. Save prompts that worked well. Keep key project documents handy to paste as needed. For long projects, maintain a running document of important decisions and background that you can share at the start of each session.
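
A reusable context packet can be as simple as a saved block of text you prepend to the first message of each session. The packet contents and field labels below are illustrative, not a required format (see Appendix C, Template 3 for the full template):

```python
# Sketch: reusing a saved "context packet" at the start of each new
# session. The packet text and field labels are illustrative.

CONTEXT_PACKET = """\
Role: Marketing manager at a 50-person B2B software company.
Project: Q3 product launch; audience is IT directors.
Constraints: Formal tone, no emojis, UK spelling.
"""

def with_context(question: str, packet: str = CONTEXT_PACKET) -> str:
    """Prepend the reusable context packet to a fresh session's prompt."""
    return f"{packet}\nWith that context in mind:\n{question}"

opening = with_context("Draft three subject lines for the launch email.")
```

Keeping the packet in a notes file or text expander makes the "paste at the start of sessions" habit nearly free.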

For ongoing projects, consider whether an AI with memory features or a system prompt that includes project context (available via the API or tools like custom GPTs) would serve you better than standard chat.


How do I use AI for tasks that require knowing my specific situation?

The model cannot know your specific situation unless you tell it. This is actually a strength — you have the opportunity to provide exactly the context that matters. The challenge is that most people under-share context and then blame the AI for generic output.

For any task where your specific situation changes the answer, provide: who you are and your role, what you are trying to accomplish and why, relevant constraints (budget, timeline, audience, policy), what you have already tried, and what "good" looks like in your context. This is what the Context Packet template in Appendix C is designed to help you do systematically.

The AI's job is to apply general knowledge to your specific situation. Your job is to provide that specific situation clearly.


How much time should I spend prompting versus just doing it myself?

This depends entirely on the task and your current skill level. A reasonable heuristic: if writing a good prompt and reviewing the output will take longer than just doing the task yourself, do it yourself. AI assistance is most cost-effective when the gap between the time to prompt and the time to produce manually is large.

For tasks you repeat often, invest more in prompting even if the first few iterations are slow — you are building a reusable asset. For rare, highly contextual tasks, the overhead of prompting may genuinely not be worth it.

As your prompting skills improve, this calculus shifts. Experienced AI users prompt faster and need less iteration, making it efficient to delegate more tasks. Track your time honestly for a few weeks (see the journal in Appendix C, Template 7) to get real data on where AI is actually saving you time.


Ethics and Professional

Do I have to disclose when I use AI?

This depends on your context, and the answer is evolving quickly.

Academic contexts: most universities now have explicit AI use policies. When in doubt, disclose. Using AI to produce work represented as your own in academic contexts often violates honor codes regardless of whether a specific policy exists.

Professional publishing: many journals and publications now require disclosure of AI use in manuscript preparation. Always check the publication's current policy.

Client work: if a client is paying for your expertise and you are substituting AI output for your judgment without their knowledge, this raises genuine professional ethics concerns even where no explicit rule exists. When in doubt, disclose.

Internal work: most employers do not currently require disclosure for internal documents, but policies are emerging quickly. Check your organization's AI use guidelines.

The general principle: wherever representation of authorship or expertise matters, be transparent about AI assistance.


Is AI-assisted work "cheating"?

Cheating requires violating an agreed-upon rule. If your context has no rule against AI assistance and you disclose use where relevant, it is not cheating — it is using available tools. The comparison to the calculator (a tool that provoked similar moral panics in its era) is instructive.

The more substantive question is: are you maintaining the skills and judgment that you need, and that others rely on you to have? A surgeon who uses AI for literature synthesis is fine. A surgeon who uses AI to make clinical decisions without their own judgment is a different matter. The concern is not the tool but the abdication of professional responsibility.

Use AI in ways that augment your capability. Be honest about what you used. Maintain the skills that constitute your professional value.


Will using AI make me worse at my job over time?

It might, for specific skills, if you stop practicing them. This is called cognitive offloading, and research (see Appendix F) shows it is a real risk with sustained use of any cognitive aid.

The relevant question is which skills. Drafting a first paragraph quickly is a skill. Is it the same as strategic thinking, client judgment, or expertise in your domain? For most knowledge workers, AI offloads lower-order production tasks, not the higher-order judgment that constitutes professional value.

The risk is highest when you use AI for tasks that were previously forcing you to do productive hard thinking — not just the tedious parts, but the parts that were building your knowledge. Be intentional: occasionally do tasks manually to maintain the underlying skill. Use AI as a collaborator, not a replacement for your own thinking.


Technical

What is a token and why does it matter?

A token is the basic unit AI language models use to process text — roughly 3-4 characters or about 0.75 of an English word on average. "ChatGPT" is two tokens. A 500-word document is approximately 650-700 tokens.

Tokens matter for three reasons. First, cost: API pricing is per-token, so longer conversations and prompts cost more. Second, context window: every model has a maximum number of tokens it can process in a single call (input + output combined). Exceed this limit and the conversation is truncated or rejected. Third, performance: very long contexts can dilute the model's attention on what matters most.

For practical API use, knowing token counts helps you estimate costs, manage context windows, and design efficient prompts. See Appendix B for a token cost estimator utility function.
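
The rough ratios above (about 4 characters or 0.75 words per token) are enough for back-of-envelope estimates. A minimal sketch, using that heuristic rather than a real tokenizer; for exact counts you would use your provider's tokenizer (for example, the tiktoken library for OpenAI models), and the price used here is a placeholder, not a current rate:

```python
# Rough token and cost estimation using the ~4 characters/token rule of
# thumb. For exact counts, use your provider's tokenizer; the price
# below is a placeholder, not a current rate.

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length (~4 chars/token)."""
    return max(1, round(len(text) / 4))

def estimate_cost(text: str, usd_per_million_tokens: float = 3.0) -> float:
    """Approximate input cost at a given per-million-token price."""
    return estimate_tokens(text) * usd_per_million_tokens / 1_000_000

doc = "word " * 500            # stands in for a ~500-word document
print(estimate_tokens(doc))    # 625, near the 650-700 figure above
```

Heuristics like this drift for code, non-English text, and unusual formatting, where tokens-per-character ratios differ noticeably, so treat the result as an estimate with real tolerance.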


What happens to my data when I use these tools?

This varies by provider, tier, and specific settings, and the policies change. The general principles across major providers:

Free and standard paid tiers: your conversations may be used to improve the model. This means what you input is potentially reviewed by human trainers or used in future training runs. Do not input confidential, personally identifiable, or proprietary information through standard interfaces.

API access: most providers have explicit policies that API data is not used for training by default. Review each provider's API terms.

Enterprise tiers: typically include data processing agreements (DPAs) that prohibit training on your data and include stronger security and privacy commitments.

Your most important practical steps: read the privacy policy for any tool you use for work, check whether your organization has enterprise agreements with providers, and establish internal guidelines about what categories of information may not be entered into external AI tools.


How do I set up API access?

The process is consistent across major providers:

  1. Create an account at the provider's developer platform (platform.openai.com, console.anthropic.com, or aistudio.google.com).
  2. Navigate to the API keys section and generate a new secret key.
  3. Store the key securely — treat it like a password. Use environment variables or a secrets manager, never hardcode it in source code.
  4. Install the appropriate Python SDK (see Appendix B, Section 1).
  5. Add funds or set up billing — most APIs use a pay-per-token model with prepaid credits.
  6. Set spending limits to prevent unexpected costs.
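
Step 3 above, in code: a minimal sketch of reading the key from an environment variable instead of hardcoding it. The variable name ANTHROPIC_API_KEY follows Anthropic's documented convention; substitute your provider's (for example, OPENAI_API_KEY):

```python
# Sketch: loading an API key from the environment instead of source code.
# The variable name follows Anthropic's convention; adjust per provider.

import os

def load_api_key(var: str = "ANTHROPIC_API_KEY") -> str:
    """Read the API key from the environment, failing loudly if unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"Set the {var} environment variable (in your shell profile "
            f"or a secrets manager) before running."
        )
    return key

# Typical use with a provider SDK (after installing it, e.g. `pip install anthropic`):
# client = anthropic.Anthropic(api_key=load_api_key())
```

Failing loudly when the variable is missing is deliberate: a silent empty key tends to surface later as a confusing authentication error.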

API access unlocks capabilities not available in chat interfaces: programmatic integration, batch processing, custom system prompts, fine-grained control over model behavior, and the ability to build your own tools and workflows. For most non-developers, the chat interface is sufficient. API access becomes valuable when you have repetitive tasks to automate, want to integrate AI into existing tools, or need custom behavior that the standard interface does not support.


For additional troubleshooting and current documentation, refer to the official guides: docs.anthropic.com, platform.openai.com/docs, and ai.google.dev.