Air Canada Chatbot Hallucination Legal Case
Analysis of the Air Canada chatbot case where a customer was awarded compensation after the airline's AI chatbot fabricated a bereavement fare policy. The first major legal ruling holding a company liable for its AI chatbot's hallucinations.
In 2022, Air Canada passenger Jake Moffatt used the airline's website chatbot to inquire about bereavement fares for a trip to attend a family funeral. The chatbot provided detailed information about a bereavement fare policy, including specific discount amounts and a process for applying retroactively after purchase. Moffatt purchased a full-fare ticket and later applied for the discount as the chatbot had described. Air Canada denied the claim because the bereavement policy the chatbot described did not exist -- the chatbot had hallucinated the entire policy. In February 2024, a Canadian tribunal ruled in Moffatt's favor, ordering Air Canada to honor the fare the chatbot had quoted.
Incident Timeline
| Date | Event |
|---|---|
| November 2022 | Jake Moffatt uses Air Canada's website chatbot to inquire about bereavement fares |
| November 2022 | Chatbot provides detailed (fabricated) bereavement fare policy information |
| November 2022 | Moffatt purchases a full-fare ticket based on the chatbot's information |
| December 2022 | Moffatt applies for the bereavement discount as described by the chatbot |
| December 2022 | Air Canada denies the claim, stating no such policy exists |
| 2023 | Moffatt files a complaint with the Civil Resolution Tribunal of British Columbia |
| February 2024 | Tribunal rules in Moffatt's favor, ordering Air Canada to pay the difference plus damages |
What the Chatbot Said
The chatbot provided specific, confident, and entirely fabricated information:
- Discount amount. The chatbot stated a specific percentage discount on bereavement fares.
- Retroactive application. The chatbot explained that passengers could purchase a regular fare and apply for the bereavement discount within 90 days of travel.
- Required documentation. The chatbot listed specific documents needed for the retroactive application (death certificate, proof of relationship).
- Application process. The chatbot described a step-by-step process for submitting the retroactive application.
None of this information reflected any actual Air Canada policy. The chatbot generated it as a plausible-sounding response based on its training data, which likely included information about bereavement fare policies from other airlines.
Root Cause Analysis
Model-Level Causes
| Factor | Explanation |
|---|---|
| Hallucination | The model generated confident, detailed responses about a non-existent policy because the topic (bereavement fares) was common enough in its training data to produce fluent responses |
| No knowledge grounding | The chatbot was not connected to a verified database of Air Canada policies. It generated responses from its parametric knowledge rather than retrieving from authoritative sources |
| Confident tone | The model presented fabricated information with the same confident tone as accurate information, giving users no way to distinguish hallucination from fact |
Application-Level Causes
| Factor | Explanation |
|---|---|
| No RAG integration | The chatbot did not retrieve from Air Canada's actual policy documents before responding |
| No disclaimer | The chatbot did not warn users that its responses might be inaccurate or that they should verify policy details through official channels |
| No human escalation | For sensitive topics like fare policies and financial commitments, the chatbot did not escalate to a human agent |
| No hallucination detection | No system checked whether the chatbot's responses about policies were consistent with actual Air Canada policies |
Organizational-Level Causes
| Factor | Explanation |
|---|---|
| Accountability gap | Air Canada treated the chatbot as a cost reduction tool without establishing accountability for its accuracy |
| Inadequate testing | Policy-specific accuracy was not tested before deployment |
| Legal risk assessment | The legal risk of chatbot hallucinations was not assessed before deployment |
The Legal Ruling
The Civil Resolution Tribunal's ruling established several important precedents:
-
The chatbot speaks for the company. Air Canada argued that the chatbot was a separate entity and that customers should not rely on it. The tribunal rejected this argument, ruling that Air Canada is responsible for all information on its website, "whether it comes from a static page or a chatbot."
-
Reasonable reliance. The tribunal found it reasonable for Moffatt to rely on the chatbot's information because it was presented authoritatively on Air Canada's official website with no disclaimers about potential inaccuracy.
-
The hallucination is the company's problem. The tribunal ruled that Air Canada could not deploy a chatbot that provides inaccurate information and then disclaim responsibility for that information. The duty of care to provide accurate information to customers extends to AI-generated responses.
Impact Assessment
| Dimension | Impact |
|---|---|
| Legal | Precedent-setting ruling establishing corporate liability for AI chatbot hallucinations |
| Financial | Direct: compensation paid to Moffatt. Indirect: cost of rebuilding chatbot with accuracy controls |
| Reputational | Significant negative press coverage of Air Canada's attempt to blame its own chatbot |
| Industry | Put all organizations deploying customer-facing AI on notice about hallucination liability |
| Regulatory | Cited in subsequent discussions about AI accountability regulations |
Lessons Learned
For Organizations
- You are liable for your chatbot's statements. The legal principle is now established: AI-generated information presented on your platform is attributable to your organization.
- Hallucination prevention is a legal obligation, not a nice-to-have. For any chatbot that discusses policies, pricing, or contractual terms, hallucination prevention must be implemented.
- Disclaimers are not sufficient. Simply adding "AI-generated, may be inaccurate" disclaimers may not protect against liability if the chatbot is the primary customer interaction channel.
For AI Application Developers
- Ground responses in authoritative data. Policy chatbots must use RAG with verified policy documents, not ungrounded generation.
- Implement confidence thresholds. When the model's response cannot be verified against source documents, escalate to a human agent.
- Add factual verification. For responses about policies, pricing, and contractual terms, verify generated content against a structured database before delivering it.
- Topic-specific guardrails. Identify high-risk topics (pricing, policies, medical advice, legal guidance) and implement additional verification for these topics.
For Red Teams
| Test Category | Specific Tests |
|---|---|
| Policy accuracy | Ask about specific policies and verify responses against actual documentation |
| Hallucination probing | Ask about policies that do not exist to see if the chatbot fabricates details |
| Confidence without grounding | Assess whether the chatbot presents uncertain information with inappropriate confidence |
| Financial commitment | Test whether the chatbot makes promises about pricing, discounts, or refunds that could create legal obligations |
Related Topics
- Incident Analysis Methodology - Framework applied in this analysis
- Customer Service AI Security - Domain-specific risks for CS chatbots
- Healthcare AI - Similar hallucination risks in healthcare context
- Legal & Ethics - Legal and ethical frameworks for AI deployment
References
- "Moffatt v. Air Canada, 2024 BCCRT 149" - Civil Resolution Tribunal of British Columbia (February 2024) - The full tribunal decision
- "Air Canada must honor refund policy invented by airline's chatbot" - Ars Technica (February 2024) - Detailed coverage of the ruling
- "Air Canada Found Liable for Its Chatbot Giving a Passenger Bad Information" - The Verge (February 2024) - Analysis of the legal precedent
- "AI Chatbot Hallucinations and Legal Liability" - Harvard Law Review Blog (2024) - Legal analysis of the implications
What legal precedent did the Air Canada chatbot case establish?