
Air Canada – when chatbots lie


Air Canada offers reduced fares for passengers who need to travel because of a bereavement. When Jake Moffatt asked Air Canada's chatbot about the airline's bereavement rates, he was told that he could claim the discount retrospectively, by applying for a refund within 90 days of the ticket being issued.

Image: Atlantic Aviation Media (https://www.flickr.com/photos/193032273@N08/51171146574/), CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=107555395

When Moffatt later claimed the refund after purchasing a ticket, he was told that he should have applied for the reduced rate before purchasing the ticket, and the airline refused to refund him.

Company refuses to honour chatbot output

Despite being given a screenshot of the chatbot's output, Air Canada insisted that it could not be liable for information the chatbot provided. Moffatt spent two and a half months trying to resolve the issue with the airline before taking the case to British Columbia's Civil Resolution Tribunal.

In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website,

Civil Resolution Tribunal, February 2024

Tribunal rules against Air Canada

The airline argued that the chatbot's response to Moffatt's query included a link to the company's policy page, which states that discounted fares cannot be claimed after a ticket has been purchased. The tribunal was not persuaded:

I find Air Canada did not take reasonable care to ensure its chatbot was accurate,

Negligent misrepresentation can arise when a seller does not exercise reasonable care to ensure its representations are accurate and not misleading,

Civil Resolution Tribunal, February 2024

Trustworthiness of AI chatbots in question

A key issue raised by this case is whether chatbots built on Generative AI and Large Language Models can be relied upon. Studies show that Generative AI is prone to producing confident factual errors, often referred to as 'hallucinations'. The tribunal's ruling makes clear that companies can be held accountable for the outputs of the generative AI systems they deploy.

While Air Canada argues Mr. Moffatt could find the correct information on another part of its website, it does not explain why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot. It also does not explain why customers should have to double-check information found in one part of its website on another part of its website,

Civil Resolution Tribunal, February 2024
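
The tribunal's "reasonable care" standard points to a practical engineering takeaway: answers a chatbot generates about policy can be checked against the authoritative policy text before they reach a customer. The Python sketch below illustrates the idea; the policy wording, phrase list and function names are hypothetical assumptions, and a production system would use a real contradiction detector (for example a natural-language-inference model) rather than keyword matching.

# Minimal sketch of a policy guardrail for a customer-facing chatbot.
# The policy text and phrase list are illustrative assumptions, not
# Air Canada's actual systems or wording.

CANONICAL_POLICY = (
    "Bereavement fares must be requested and approved before travel. "
    "Refunds for bereavement fares cannot be claimed after purchase."
)

# Claims that would contradict the canonical policy if they appear
# in a generated answer.
CONTRADICTORY_CLAIMS = ("retroactively", "after purchase", "within 90 days")

def guarded_reply(generated_answer: str) -> str:
    """Serve the generated answer only if it is consistent with the
    published policy; otherwise fall back to quoting the policy itself."""
    lowered = generated_answer.lower()
    if any(claim in lowered for claim in CONTRADICTORY_CLAIMS):
        return "Per our published policy: " + CANONICAL_POLICY
    return generated_answer

if __name__ == "__main__":
    risky = ("You can submit your ticket within 90 days of issue to "
             "claim the bereavement fare retroactively.")
    print(guarded_reply(risky))  # prints the canonical policy fallback

Falling back to the published text, rather than suppressing the answer entirely, keeps the customer informed while ensuring the chatbot never contradicts the page the company would later point to.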

References

Moffatt v. Air Canada, Civil Resolution Tribunal of British Columbia, February 2024: https://decisions.civilresolutionbc.ca/crt/crtd/en/item/525448/index.do

Impact on Human Values

Human Values Risk Analysis

Truth & Reality: HIGH RISK (factual error)
Authentic Relationships: MEDIUM RISK (replaces human interaction); MEDIUM RISK (replaces human agents)
Privacy & Freedom: HIGH RISK (uses copyright data)
Moral Autonomy: LOW RISK
Cognition & Creativity: MEDIUM RISK (can reduce critical thinking)

Governance Pillars

Accountability

Justice

Policy Recommendations

Organisations deploying a chatbot for use by the public or by clients must be accountable for its output, even when that output is erroneous or conflicts with other publicly available information. Legislation may be needed to assign 'product' liability, with the chatbot treated as the 'product'.

Copyright protection should be enforced, with no exception made for AI companies.
