Where Is AI Liability Heading?
By Laith Sarhan
The rapid integration of artificial intelligence (AI) into commercial and public-sector operations has started to prompt legal scrutiny in Canada, particularly concerning liability for AI-driven decisions and misinformation.
While the judiciary has not yet had a full opportunity to consider the question of liability around AI, previous decisions regarding parallel technologies offer some indication of where things are headed.
Early Rumblings in Lower Tribunals
The Moffatt v. Air Canada Decision
In February 2024, the British Columbia Civil Resolution Tribunal (CRT) ruled that Air Canada was liable for negligent misrepresentation after its AI chatbot provided incorrect guidance to a customer regarding bereavement fare policies. The applicant, Jake Moffatt, had relied on the chatbot’s advice to purchase a full-price ticket under the assumption that he could later apply for a partial refund under Air Canada’s bereavement fare program. When the airline denied his refund request, citing a discrepancy between the chatbot’s statements and its official policy, Moffatt filed a claim for damages.
The tribunal dismissed Air Canada’s argument that the chatbot constituted a “separate legal entity” exempting the company from liability. Tribunal member Christopher Rivers emphasized that businesses deploying AI systems must ensure the accuracy of all information disseminated through their platforms, whether static or dynamically generated. The ruling affirmed that organizations cannot delegate accountability for AI outputs to the technology itself, as doing so would undermine consumer protections and erode trust in automated systems.
Continuation of Existing Principles
The decision clarified several key points relevant to AI liability in Canada, echoing the way an employer bears vicarious liability for the actions of its employees:
- Duty of Care: Service providers owe a duty of care to consumers to ensure the accuracy of information provided through AI tools.
- Standard of Care: Companies must implement reasonable safeguards to verify AI-generated content, including routine audits and disclaimers where necessary.
- Causation: Reliance on AI misinformation can establish a direct causal link to financial or reputational harm, even if the error originates from machine learning algorithms.
While the quantum of damages was minor (the tribunal ordered Air Canada to pay Moffatt CAD 812.02, comprising CAD 650.88 in damages plus pre-judgment interest and tribunal fees), the outcome signals a growing judicial willingness to hold businesses accountable for AI failures, irrespective of the technology’s complexity or autonomy.
Broader Implications for AI Governance
Provincial Initiatives and Tort Law Adaptations
In October 2024, the British Columbia Law Institute (BCLI) published a report advocating for tort law reforms to address AI-driven harms. The report rejected strict liability regimes, instead proposing a fault-based framework where plaintiffs must demonstrate that developers or deployers failed to meet a reasonable standard of care. Key recommendations include:
- Evidentiary Adjustments: Courts should infer causation in cases where AI systems’ opacity prevents plaintiffs from tracing harm to specific design flaws.
- Algorithmic Discrimination Remedies: Legislators should create civil remedies for biases embedded in training data or decision-making processes.
These proposals align with the reasoning in Moffatt, emphasizing that existing negligence principles remain applicable but may require judicial flexibility to account for AI’s unique challenges.
Beyond negligence, and in the absence of legislation specifically addressing liability for AI, several other liability theories are beginning to emerge in AI-related cases while legislators consider AI-focused provisions:
- Copyright Infringement: The Canadian media lawsuit against OpenAI centers on the unauthorized use of copyrighted materials to train AI systems, and several parallel cases in the United States will determine the consequences of training foundation models on copyrighted data.
- Product Liability: Traditional product liability concepts are being applied to AI systems, with claims alleging defective design, failure to warn, and negligence.
- Contractual Liability: Some cases involve breach of contract claims, particularly when AI systems fail to perform as promised or when terms of service are violated.
- Class Action Liability: Where the harm to any individual is minor but substantial in the aggregate, AI developers may face class action lawsuits (a pattern already visible in more recent BC decisions).
Industry Responses and Risk Mitigation
Canadian businesses have accelerated efforts to implement AI governance frameworks. In March 2025, the federal government released a Guide for Managers of AI Systems, which provides practical steps for compliance with the ISED Voluntary Code of Conduct, including:
- Regular audits of AI outputs for accuracy and bias.
- Clear disclaimers informing users of AI-generated content’s limitations.
- Human oversight protocols for high-stakes decisions, such as financial advisement or medical diagnostics.
Major signatories to the Voluntary Code, including TELUS, CIBC, and Intel, have pledged to integrate these measures into their AI deployment strategies.
Emerging Jurisprudential Challenges
As AI systems grow more sophisticated, courts will likely confront novel questions, such as:
- Liability for Autonomous Actions: Whether developers can be held liable for unforeseeable AI behaviors arising from machine learning adaptations, including in sensitive areas like healthcare.
- Third-Party Integrations: How liability should be apportioned when AI tools incorporate external data sources or APIs.
- Cross-Border Disputes: Jurisdictional conflicts when AI systems operate across provincial or national boundaries.
The ongoing CanLII v. Caseway AI and Toronto Star v. OpenAI cases, which address copyright infringement in AI training data, may further shape liability standards by clarifying the scope of “fair dealing” under Canadian law.
Conclusion
Canadian AI jurisprudence is developing and starting to affirm that traditional tort principles of negligence and misrepresentation apply to AI deployments. While legislative efforts like the Artificial Intelligence and Data Act (AIDA) have stumbled, existing rulings provide immediate guidance for businesses: invest in AI governance, prioritize transparency, and assume liability for algorithmic outputs. As the technology evolves, Canada’s legal framework must adapt in step, balancing innovation with accountability to safeguard public trust in an increasingly automated world.
Future developments will hinge on collaborative efforts between policymakers, industry leaders, and the judiciary to create a cohesive strategy for AI accountability, one that learns from both the successes and shortcomings of early cases like Moffatt. Until then, it would be prudent for organizations to treat AI not as a shield against liability, but as a tool requiring diligent stewardship.