Human-AI Incident Investigation and Fair Review
This lesson applies Just Culture principles directly to incident investigation when AI and humans jointly shape care processes and outcomes.
Learning outcomes
- Describe how to investigate incidents involving both humans and AI tools.
- Identify evidence categories relevant to AI-related events.
- Support fair accountability in complex human-AI cases.
Building the timeline
The investigation timeline should reconstruct the workflow end to end: user actions, prompts or alerts presented, recommendations generated, overrides applied, escalation decisions, and downstream effects on care.
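The timeline elements above can be sketched as a simple chronological record. This is a minimal illustration, not a standard schema; the field names and event labels are assumptions for the example.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TimelineEvent:
    """One step in the incident workflow (illustrative fields, not a standard)."""
    timestamp: datetime
    actor: str    # e.g. "clinician", "ai_tool", "system"
    action: str   # e.g. "alert_presented", "recommendation_generated", "override"
    detail: str = ""

def build_timeline(events):
    """Order events chronologically so the review follows the workflow as it unfolded."""
    return sorted(events, key=lambda e: e.timestamp)

events = [
    TimelineEvent(datetime(2024, 3, 1, 10, 5), "ai_tool", "recommendation_generated", "dose suggestion"),
    TimelineEvent(datetime(2024, 3, 1, 10, 2), "system", "alert_presented", "interaction warning"),
    TimelineEvent(datetime(2024, 3, 1, 10, 7), "clinician", "override", "accepted default"),
]
timeline = build_timeline(events)
```

Recording each step with its actor makes it easier to see where human and AI contributions interleaved before assigning accountability.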
Evidence to gather
Relevant evidence may include screenshots, audit logs, configuration settings, training records, policy expectations, user interviews, governance approvals, and known model or interface limitations.
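One way to keep evidence collection systematic is an explicit checklist that surfaces gaps before review begins. A minimal sketch, using the categories listed above (the identifiers are illustrative):

```python
# Evidence categories from the lesson, as machine-checkable identifiers (illustrative).
EVIDENCE_CATEGORIES = {
    "screenshots",
    "audit_logs",
    "configuration_settings",
    "training_records",
    "policy_expectations",
    "user_interviews",
    "governance_approvals",
    "known_model_or_interface_limitations",
}

def missing_evidence(gathered):
    """Return the categories not yet collected, so gaps are explicit before review."""
    return sorted(EVIDENCE_CATEGORIES - set(gathered))

gaps = missing_evidence({"screenshots", "audit_logs", "user_interviews"})
```

Listing what is still missing, rather than only what was found, guards against a review that rests on an incomplete evidentiary picture.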
Reviewing behavior fairly
Reviewers should guard against hindsight bias by judging decisions on the information available at the time. That means assessing what the user could actually see, whether the AI output appeared credible in context, whether the interface supported safe interpretation, and whether the organization set realistic expectations for use.
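The four assessments above can be framed as a structured review form that is only complete when every question has an evidence-backed answer. A hedged sketch, assuming reviewers record free-text answers (question wording taken from this lesson; the completeness rule is an illustrative convention):

```python
# Fair-review questions from the lesson.
REVIEW_QUESTIONS = [
    "What information was visible to the user at the time?",
    "Did the AI output appear credible in context?",
    "Did the interface support safe interpretation?",
    "Did the organization set realistic expectations for use?",
]

def review_complete(answers):
    """A fair review records a non-empty, evidence-backed answer for every question."""
    return all(answers.get(q, "").strip() for q in REVIEW_QUESTIONS)

done = review_complete({q: "documented in interview notes" for q in REVIEW_QUESTIONS})
```

Forcing an answer to each question keeps the review anchored to the user's perspective at the time, rather than to the known outcome.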
Improvement actions
Improvement may involve redesigning interfaces, strengthening override procedures, clarifying accountability, updating training, improving transparency, or restricting use cases until safeguards are stronger.