
Module 3 · Lesson 9 – High-Stakes Decision Making

Where AI must not be used alone

The Knowing-Doing Gap

A global survey of 32,000+ workers revealed a stark finding: 66% of workers rely on AI output without any form of evaluation or verification.

More troubling: 56% of AI users acknowledge making mistakes in their work due to unverified AI assistance.

This is the knowing-doing gap: awareness of risk does not translate into verification behavior. People know AI can be wrong. They use it anyway without checking.

Risk Stratification Framework

Not all uses of AI carry the same risk. The War Room framework divides tasks into three categories (a minimal code sketch of the triage logic follows the zone definitions below):

RED ZONE — Do Not Use AI Alone

  • Legal advice or contract review
  • Medical diagnosis or treatment decisions
  • High-stakes financial planning or investment decisions
  • Safety-critical engineering or infrastructure
  • Any decision where being wrong has legal, medical, or financial consequences

In these domains, AI can assist with research, but a qualified human expert must verify the output and take responsibility for the decision.

YELLOW ZONE — Verification Required

  • Business strategy and planning
  • Customer-facing communications
  • Code that will be deployed to production
  • Content published under your name
  • Recommendations you will pass to others

AI can generate options, but you must verify accuracy, check assumptions, and test outputs before acting.

GREEN ZONE — Low-Risk Experimentation

  • Brainstorming and idea generation
  • First drafts for internal use only
  • Learning new concepts (with cross-checking)
  • Formatting and restructuring existing content

Mistakes in this zone are cheap and easily corrected. Judgment is still required, but failure is not expensive.
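
To make the triage concrete, here is a minimal sketch of the zone lookup in Python. It is illustrative only: the task labels, the ZONE_BY_TASK mapping, and the classify helper are assumptions invented for this example, not part of the War Room framework itself.

    from enum import Enum

    class Zone(Enum):
        RED = "do not use AI alone"
        YELLOW = "verification required"
        GREEN = "low-risk experimentation"

    # Illustrative mapping from task type to zone; these labels are
    # assumptions for the sketch, not an official taxonomy.
    ZONE_BY_TASK = {
        "contract_review": Zone.RED,
        "treatment_decision": Zone.RED,
        "production_code": Zone.YELLOW,
        "customer_email": Zone.YELLOW,
        "brainstorming": Zone.GREEN,
        "internal_draft": Zone.GREEN,
    }

    def classify(task: str) -> Zone:
        # Unknown tasks default to YELLOW: unclassified risk should
        # trigger verification, not free experimentation.
        return ZONE_BY_TASK.get(task, Zone.YELLOW)

    print(classify("production_code").value)  # verification required
    print(classify("novel_task").value)       # verification required

The default is the design point: an unclassified task falls back to YELLOW rather than GREEN, mirroring the lesson's premise that unverified use, not experimentation, is the failure mode.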

Real Case Examples

Legal: A chatbot advised a user to file a lawsuit without accounting for jurisdiction or the statute of limitations. The user followed the advice and missed critical deadlines.

Financial: An AI recommended a tax strategy that violated IRS rules. The user implemented it and faced penalties and an audit.

Medical: Users report chatbots giving confident but wrong medical advice, including recommendations that overlooked dangerous drug interactions and misdiagnoses of serious conditions.

The Rule

If you cannot verify the output yourself OR you are not qualified to take responsibility for being wrong, do not use AI for that decision.
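
Expressed as a predicate (a sketch; the function and argument names are invented for illustration), the rule is a conjunction, the De Morgan complement of the "cannot verify OR not qualified" phrasing above:

    def may_use_ai(can_verify_output: bool, qualified_to_own_outcome: bool) -> bool:
        # Use AI for a decision only when you can verify the output
        # yourself AND you are qualified to own the consequences of
        # being wrong. Failing either test means: do not use AI.
        return can_verify_output and qualified_to_own_outcome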

Interactive Exercise

Practice risk classification on a hypothetical scenario:

Here is a scenario: I'm a freelance marketing consultant. A client wants advice on whether to pivot their SaaS product to target enterprise customers instead of SMBs. The pivot would require a 6-month development cycle and a $200K investment. They want a strategic recommendation from me by Friday.

For this scenario:

  1. Classify the risk level (Red/Yellow/Green).
  2. List what I MUST verify before giving advice.
  3. Define where AI can help and where it cannot.
  4. State who holds final accountability.

Be specific about verification steps.

Checkpoint: Proof of Understanding

Identify YOUR current highest-stakes decision (real work, not hypothetical). Classify it as Red/Yellow/Green. State whether and how AI should be involved, what you must verify, and who holds final accountability. Be brutally honest about whether you are currently over-delegating to AI.
