Module 3 · Lesson 10 of 16 – Professional Use & Ethics

Disclosure, accountability, and organizational policy

Professional Risk Patterns

Certain professions carry heightened risk when using AI because mistakes have legal, financial, or safety consequences:

Consultants & Advisors

  • Risk: Clients pay for your judgment, not AI-generated recommendations
  • Failure mode: Passing off AI analysis as expert insight without verification
  • Accountability gap: Who is liable when AI-assisted advice causes client losses?

Lawyers & Legal Professionals

  • Risk: Legal advice must be accurate; errors cause missed deadlines, lost cases, and malpractice claims
  • Failure mode: Using AI for case law research without verifying citations exist and are relevant
  • Documented cases: Lawyers sanctioned for submitting AI-generated briefs with fake case citations

Financial Advisors

  • Risk: Investment and tax advice errors result in financial loss and regulatory penalties
  • Failure mode: AI recommends strategies that violate regulations or client-specific constraints
  • Fiduciary duty: Cannot delegate responsibility to AI systems

Medical & Healthcare Professionals

  • Risk: Diagnostic and treatment errors cause patient harm
  • Failure mode: Trusting AI symptom checkers or drug interaction databases without clinical verification
  • Standard of care: AI cannot replace clinical judgment

Disclosure Templates

If you use AI in professional work, you must be willing to disclose:

Template 1 — Internal Documentation
"AI was used to [specific task]. Output was verified by [method]. Final decision made by [name/role]. Accountability rests with [name/role]."

Template 2 — Client/Stakeholder Disclosure
"This analysis used AI assistance for [research/drafting/data processing]. All findings were independently verified against [sources]. Final recommendations are based on my professional judgment and I take full responsibility for accuracy."

Template 3 — When You Cannot Verify
"I cannot verify this independently, so I will not use AI for this decision."

Organizational AI Policy Components

If you manage a team or run a business, your AI policy must define:

  • Who can use AI: Roles, seniority levels, training requirements
  • Approved tasks: Specific use cases where AI is permitted (e.g., drafting internal docs) vs. prohibited (e.g., client-facing legal advice)
  • Required documentation: What must be recorded (task, verification steps, final decision-maker)
  • Review cadence: How often AI-assisted work is audited
  • Accountability assignment: Who is responsible when AI-assisted work fails

The Prime Directive Reminder

AI generates. You execute. You document. Reality decides.

In professional contexts, "you execute" means you verify the output, you take responsibility for it, and you face the consequences when it's wrong. If you're not willing to do that, don't use AI for the task.

Interactive Exercise

Draft an AI usage statement for your real role or business:

I am a [your role]. My work involves [brief description of what you do]. Help me draft:

1. An AI Usage and Verification Statement that I would be willing to show a client or stakeholder
2. A one-paragraph internal policy defining:
   - Which tasks I can/cannot use AI for
   - What verification is required
   - Who holds final accountability

Be specific about my actual work, not generic guidelines.

Checkpoint: Proof of Understanding

Identify where you are currently over-delegating to AI in your professional work. Write a disclosure statement you would be willing to show a client, boss, or stakeholder that honestly describes: (1) what AI does, (2) what you verify, (3) who is accountable. If you're not willing to show this disclosure, you are over-delegating.
