# Sarhan Data Law — Full Content Library Generated: 2026-05-05T15:41:34.269Z Articles: 10 --- ## Party Roles in Data Processing Agreements: When Controller-to-Controller Makes Sense URL: https://sarhandata.law/resources/dpa-party-roles-controller-to-controller Date: 2026-01-19 Category: Product Counsel Every data processing agreement starts with a fundamental question: who is the controller and who is the processor? Most agreements will default to the familiar pattern: the customer is the controller and vendors are processors. Some of the most important data flows in modern business, however, operate on a controller-to-controller basis, and mischaracterizing these relationships creates legal risk and operational confusion. Understanding when controller-to-controller arrangements are appropriate is essential for companies navigating complex data ecosystems and trying to ensure that their chain of responsibilities is accordingly accounted for. The Core Distinction Controller: The organization that determines the purposes and means of processing personal information. The controller decides why data is collected and how it will be used. Under PIPEDA, this is the organization accountable to individuals for their data. Processor: The organization that processes personal information on behalf of a controller, according to the controller's instructions. The processor only executes the controller's decisions. Independent Controllers: Two organizations that each determine their own purposes for processing the same personal information. Neither acts on behalf of the other. Each is independently accountable. The distinction matters because it determines who bears accountability, what agreements are needed, and what obligations flow to individuals. When Controller-to-Controller Applies Several common business relationships are properly characterized as controller-to-controller rather than controller-to-processor: 1. Employer of Record (EOR) Services When you engage an EOR like Deel, Remote, or Oyster to employ workers in jurisdictions where you don't have an entity, the EOR isn't processing employee data on your behalf. The EOR is the employer. It determines how to process employee data for its own employment, payroll, benefits, and compliance purposes. You provide the EOR with information about the individual (who they are, what role, what compensation). The EOR then processes that data—and collects additional data—for its own purposes as the legal employer. This is two controllers: * You: Controller of business relationship data, work assignments, performance information * EOR: Controller of employment data, payroll processing, benefits administration, local compliance The agreement between you and the EOR should reflect this. It's not a DPA with you as controller and EOR as processor. It's a services agreement between two controllers, with provisions addressing each party's independent obligations. 2. GTM and Sales Intelligence Tools Platforms like Clay, Apollo, ZoomInfo, and similar GTM tools present an interesting case. When you use these tools, data flows in multiple directions: * Data you provide: You may upload your prospect lists, CRM data, or target account information * Data they provide: The platform provides enrichment data, contact information, intent signals from their own databases * Data they generate: The platform creates derived insights by combining your data with their data and third-party sources Is the GTM platform your processor? Not exactly. 
These platforms maintain their own databases, determine their own data collection practices, and use data from multiple customers to improve their products. They have independent purposes. The appropriate framing is often: * For data you upload: You're the controller, they process according to your instructions (processor relationship for this data) * For data they provide: They're the controller of their database, licensing you access (controller-to-controller data sharing) * For combined/derived data: This gets complex—the agreement needs to specify who controls what Many GTM tools use hybrid agreements that address both the processor and controller-to-controller elements. 3. B2B Data Partnerships When two companies share data to create joint value—combined analytics, shared insights, co-marketing—neither is typically acting as a processor for the other. Both have their own business purposes for the data. Examples: * A SaaS company sharing anonymized usage patterns with a research partner * Two companies in a strategic partnership sharing customer overlap data * A platform sharing seller data with a payment processor that uses it for fraud modeling These are controller-to-controller relationships where each party must ensure it has a lawful basis for its own processing. 4. Professional Services Firms Law firms, accounting firms, and consultants often function as independent controllers rather than processors. When you engage a law firm, you share information about your business, your employees, your counterparties. The law firm determines how to use that information to provide legal services—it applies its own professional judgment, maintains its own records, and has its own obligations. The law firm isn't processing data "on your behalf" in the processor sense. It's providing professional services that require it to make independent determinations about data handling. 5. Payment Processors and Financial Services Payment processors like Stripe, Adyen, or PayPal occupy an interesting position. When you integrate their services: * They process transaction data to execute payments (arguably processor-like) * They use transaction data for their own fraud prevention, risk modeling, and compliance (controller purposes) * They're subject to their own regulatory obligations that require independent data handling decisions Most payment processor agreements reflect this hybrid reality, with some data flows treated as processing on your behalf and others treated as independent controller activities. Why the Distinction Matters Accountability to Individuals Under PIPEDA and GDPR, controllers are accountable to individuals for how their data is handled. If you're the controller, you're responsible for ensuring lawful processing, responding to access requests, and notifying about breaches—even for data processed by your processors. In a controller-to-controller relationship, each controller is independently accountable for its own processing. You're not responsible for the other controller's compliance failures (though you may face reputational consequences from association). Consent and Legal Basis If you're a controller sharing data with a processor, your existing consent or legal basis covers the processor's handling (because they're acting on your behalf). If you're a controller sharing data with another controller, the receiving controller needs its own legal basis. Your consent doesn't automatically extend to their purposes. 
This is why controller-to-controller agreements include provisions about each party's compliance responsibilities. Instructions and Autonomy Processors act on controller instructions. Controllers make independent decisions. If you engage a vendor expecting them to follow your instructions precisely, and they're actually operating as an independent controller making their own determinations, you have a mismatch that creates risk. The vendor might use data in ways you didn't anticipate, and you won't have contractual recourse if those uses were within their legitimate controller purposes. Agreement Structure Processor agreements (DPAs) are inherently asymmetric—the controller directs, the processor obeys. Controller-to-controller agreements are between peers—each party has its own rights and obligations, and the agreement governs the interface between them. Structuring Controller-to-Controller Agreements When you've determined that a controller-to-controller relationship is appropriate, the agreement should address: 1. Data Flows Specify exactly what data moves in which direction: * What personal information does each party provide to the other? * What derived data is created, and who controls it? * What enrichment or combination occurs? 2. Purposes Each party's permitted purposes should be explicit: * What can Party A do with data received from Party B? * What can Party B do with data received from Party A? * Are there prohibited uses? 3. Legal Basis Each party should represent that it has a lawful basis for: * Disclosing data to the other party * Processing data received from the other party This doesn't mean each party guarantees the other's compliance—it means each party takes responsibility for its own. 4. Data Subject Rights How do the parties coordinate on individual requests? * If an individual asks Party A for access, and some of that data came from Party B, how is that handled? * If an individual asks Party A for deletion, does Party A notify Party B? 5. Security Obligations Even though neither party is directing the other's processing, both should commit to reasonable security measures. A breach at either party affects the shared data. 6. Breach Notification How do parties notify each other of incidents affecting shared data? What are the timelines and procedures? 7. Liability How is liability allocated for: * Breaches of the agreement * Third-party claims arising from each party's processing * Regulatory actions Controller-to-controller agreements often include mutual indemnification provisions rather than the one-way indemnities typical of DPAs. The Hybrid Reality Many vendor relationships don't fit neatly into controller or processor categories. The same vendor may be: * A processor for some data (customer data you upload for them to process according to your instructions) * A controller for other data (their own product improvement, aggregated analytics, fraud prevention) Sophisticated agreements acknowledge this reality with provisions that address each data flow according to its proper characterization. The agreement might include: * A DPA attachment for data processed on your behalf * Controller-to-controller terms for data the vendor processes for its own purposes * Clear delineation of which data falls into which category Practical Guidance When evaluating a new vendor or partner: 1. Map the data flows. What data goes where, and for what purposes? 2. Ask: Is this vendor acting on our instructions, or making independent determinations? 3. 
If the vendor has its own purposes for the data, it's at least partially a controller 4. Review their standard agreements—do they acknowledge their controller role, or do they try to disclaim all accountability? 5. Ensure the agreement structure matches the actual relationship Red flags: * A vendor that clearly operates as a controller but insists on a pure processor agreement (avoiding accountability) * An agreement that doesn't address the vendor's own uses of data * Lack of clarity about who handles data subject requests * Processor agreements with vendors that obviously make independent decisions about data For your own vendor agreements: If you're a SaaS company and your processing involves independent purposes (product improvement, aggregated analytics, ML training), be transparent about it. Controller-to-controller framing for those uses is more accurate—and more defensible—than trying to squeeze everything into a processor box. The Strategic Perspective The controller-processor framework made sense when data processing was simpler: you hired a vendor to do a specific task with your data, and they did exactly that. Modern data ecosystems are more complex. Data flows through multiple parties, gets enriched and combined, and serves multiple purposes. Companies that understand this complexity—and structure their agreements accordingly—reduce legal risk and build more sustainable data relationships. The goal isn't to avoid accountability by claiming everything is controller-to-controller. It's to accurately characterize each relationship so that accountability is properly allocated and everyone understands their obligations. When controller-to-controller is the right framing, use it. When processor is right, use that. And when the relationship is genuinely hybrid, build an agreement that reflects reality. --- ## Preparing for the Enterprise Sales Cycle: A Governance Playbook for Growth-Stage Companies URL: https://sarhandata.law/resources/enterprise-sales-governance-playbook Date: 2026-01-05 Category: Product Counsel Your product is ready. You've closed SMB customers and proven market fit. Now a Fortune 500 company wants to pilot. Suddenly you're drowning in security questionnaires, redlined DPAs, and procurement calls that feel like depositions. This is the enterprise sales cycle. For growth-stage companies, it's where deals go to die. The companies that break through aren't necessarily the ones with the best product. They're the ones that anticipated what enterprise buyers would ask for and built the answers into their operations before the RFP arrived. What Enterprise Buyers Actually Want Enterprise procurement isn't about checking boxes. It's about risk management. The buyer's security, legal, and compliance teams are asking a fundamental question: If we bring this vendor into our environment, what's our exposure? Their concerns cluster around several themes: Data Handling * Where does our data go? * Who at your company can access it? * What happens if there's a breach? * How do you handle data when we leave? Operational Security * How do you protect your systems? * What's your vulnerability management program? * Do you have incident response capabilities? * Who are your subprocessors? Compliance Posture * Do you meet industry standards (SOC 2, ISO 27001)? * Can you meet our regulatory requirements (PIPEDA, GDPR, sector-specific rules)? * Do you have appropriate insurance? Organizational Maturity * Is this company going to exist in two years? 
* Do they have the processes to support an enterprise relationship? * Can they scale with us? Every question on a security questionnaire traces back to one of these concerns. Understanding that helps you prepare answers that satisfy the underlying worry, not just fill in the blank. The Documents That Close Deals Enterprise deals require specific collateral. Having these ready—not scrambling to create them mid-deal—is the difference between a 60-day close and a 6-month slog. 1. Security Documentation * SOC 2 Type II Report: This is table stakes for enterprise SaaS. A Type I report (point-in-time) is a start, but buyers want Type II (period of time, typically 6-12 months). If you don't have it, be prepared to answer why and when. * Penetration Test Results: Annual third-party penetration testing, with a summary of findings and remediation. Buyers want to see you test your own defenses. * Security Whitepaper: A clear, well-organized overview of your security architecture, practices, and controls. This is technical documentation for security teams. 2. Privacy and Data Governance * Data Processing Agreement (DPA): Your standard DPA, drafted to be acceptable to sophisticated buyers without extensive negotiation. If every enterprise deal requires a ground-up DPA negotiation, you're creating friction. * Privacy Policy: Comprehensive, accurate, and specific about your data practices as a controller. Generic templates may signal that you're willing to say anything to get a deal rather than that you've done your due diligence. * Subprocessor List: Current list of all third parties who process customer data, with geographic locations. Buyers need this for their own compliance. * Data Flow Documentation: Where does data go? How does it flow through your systems? Can you explain it clearly? 3. Compliance Evidence * Compliance Mapping: If you operate in regulated sectors (healthcare, financial services), documentation showing how you meet applicable requirements. * Certifications and Attestations: SOC 2 (Type II if possible) report, ISO 27001 certificate, any sector-specific certifications. * Insurance Certificates: Cyber liability insurance is typically required. Have certificates ready to share. 4. Operational Documentation * Incident Response Plan: Documented procedures for security incidents, including notification timelines and communication protocols. * Business Continuity / Disaster Recovery: How do you maintain operations during disruptions? What are your recovery time objectives? * Vendor Management Policy: How do you evaluate and monitor your own vendors? Enterprise buyers care about your supply chain. The Security Questionnaire Strategy Security questionnaires are universally dreaded. Enterprise buyers send 200-500 question documents; vendors scramble to respond; both sides know the process is inefficient. But it's the game, and you need to play it well. Build a Response Library Don't answer each questionnaire from scratch. Build a master response library covering: * All questions from the common frameworks (SIG, CAIQ, VSA, HECVAT) * Your answers, with evidence references * Question mappings (different questionnaires ask the same thing differently) When a new questionnaire arrives, 80% of the answers should be copy-paste from your library. Your effort goes to the 20% that's unique.
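For teams that keep this library in a lightweight internal tool rather than a spreadsheet, the structure can be as simple as canonical answers keyed by topic, with per-framework question IDs mapped onto each one. The sketch below is illustrative only: the question IDs, field names, and helper function are hypothetical, not drawn from the actual SIG, CAIQ, or VSA questionnaires.

```python
from dataclasses import dataclass, field

@dataclass
class CanonicalAnswer:
    """One reusable entry in the master response library."""
    topic: str                                   # e.g. "encryption-at-rest"
    answer: str                                  # the approved response text
    evidence: list[str] = field(default_factory=list)  # pointers to SOC 2 sections, policies, etc.
    # Hypothetical mapping of framework name -> question IDs this answer covers.
    framework_ids: dict[str, list[str]] = field(default_factory=dict)

# A tiny illustrative library; a real one covers hundreds of questions.
LIBRARY = [
    CanonicalAnswer(
        topic="encryption-at-rest",
        answer="Customer data is encrypted at rest using AES-256 with keys managed in our cloud provider's KMS.",
        evidence=["SOC 2 Type II report", "Encryption Policy v3"],
        framework_ids={"SIG": ["D.1.4"], "CAIQ": ["EKM-02"]},   # hypothetical IDs
    ),
    CanonicalAnswer(
        topic="subprocessors",
        answer="We maintain a public subprocessor list, including locations, and update it before onboarding new vendors.",
        evidence=["Subprocessor list (public page)"],
        framework_ids={"SIG": ["T.2.1"], "VSA": ["V-12"]},       # hypothetical IDs
    ),
]

def find_answer(framework: str, question_id: str) -> CanonicalAnswer | None:
    """Return the canonical answer mapped to a given framework question, if one exists."""
    for entry in LIBRARY:
        if question_id in entry.framework_ids.get(framework, []):
            return entry
    return None

if __name__ == "__main__":
    hit = find_answer("CAIQ", "EKM-02")
    print(hit.answer if hit else "No mapped answer yet -- draft one and add it to the library.")
```

Keeping the mapping explicit is what makes the 80/20 split work: when a new questionnaire arrives, only the unmapped questions need fresh drafting, and every new answer strengthens the library for the next deal.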
Pre-Position with a Security Package Before the questionnaire arrives, send your security documentation proactively: * SOC 2 report * Security whitepaper * Penetration test summary * Subprocessor list * Standard DPA This accomplishes two things: it signals maturity, and it often reduces the questionnaire burden. Security teams that see a SOC 2 report may abbreviate their review. Staff Appropriately Questionnaire responses require input from engineering, security, legal, and ops. Designate an owner—typically someone in security, compliance, or ops—who can coordinate responses and maintain the library. This person becomes your enterprise readiness quarterback. The DPA Negotiation Playbook Data Processing Agreements are where legal teams spend their energy. A poorly drafted or inflexible DPA creates friction that kills momentum. Start with a Strong Standard Your template DPA should: * Meet the requirements of PIPEDA, GDPR, and major US state laws * Include Standard Contractual Clauses for international transfers * Address breach notification with reasonable timelines * Define data retention and deletion obligations * Specify subprocessor notification and objection rights If your starting point is weak, every negotiation becomes a battle. Know Your Red Lines Certain requests are common and reasonable: * Specific breach notification timelines * Audit rights (with reasonable limitations) * Subprocessor restrictions tied to their compliance program * Data residency commitments if you can support them Certain requests are problematic: * Unlimited liability for data breaches * Audit rights with no advance notice or scope limitations * Requirements to maintain certifications you don't have * Data localization you can't technically support Know what you can accommodate, what you can negotiate, and where you have to hold firm. Document your rationale so your sales team can explain positions without escalating every issue. Empower Your Sales Team Sales should be able to handle routine DPA negotiations without involving legal on every call. This means: * Training on your DPA and common negotiation points * Authority to accept certain modifications (within defined parameters) * Clear escalation paths for issues outside their authority Your legal team should be closing edge cases, not reviewing every standard negotiation. Building the Muscle Enterprise readiness isn't a one-time project. It's an operational capability that compounds over time. Quarterly Cadence * Review and update security documentation * Refresh questionnaire response library * Update subprocessor list and DPA if needed * Analyze recent deals: What slowed them down? What can be improved? Feedback Loops Your sales and customer success teams hear what enterprise buyers care about. Create a mechanism for that feedback to reach whoever owns your security and compliance program. If the same objection comes up repeatedly, address it systematically. Investment Signals Enterprise buyers pay attention to how you invest. A SOC 2 audit isn't cheap. A dedicated security hire isn't cheap. These investments signal that you're building for the long term and taking their concerns seriously. The Trust Advantage Enterprise sales cycles are fundamentally about trust. The buyer is taking a risk by bringing you into their environment. Your job is to make that risk feel manageable. Companies that treat compliance as a checkbox create friction. Companies that treat it as a trust-building exercise accelerate deals. 
The difference: * Checkbox: "Here's our SOC 2 report, let us know if you have questions." * Trust-building: "Here's our SOC 2 report. I also want to walk you through our security architecture and how we'd handle an incident. What concerns does your team have?" The first response answers the question. The second response builds the relationship. When enterprise buyers trust you, procurement moves faster, negotiations are smoother, and you close. That's the competitive advantage that governance creates—not compliance for compliance's sake, but trust that translates into revenue. --- ## Building Sovereign AI: A Framework for Canada's Public Sector and Critical Infrastructure Leaders URL: https://sarhandata.law/resources/viss Date: 2025-10-15 Category: Reflections This article is an extension of the ideas I presented during my lunch remarks at the 2025 Vancouver International Security Summit (VISS). The conversation around building sovereign AI capabilities for Canada's public and critical infrastructure sectors is more urgent than ever, and this post offers a tangible governance framework for the leaders driving that mission. The global race for AI leadership is on. While many focus on computing power and algorithms, Canada's true, untapped advantage lies in our vast, high-quality public and private datasets. This is the fuel for the next generation of innovation in public services, economic growth, and national security. Yet, this strategic asset remains largely locked away, paralyzed by legitimate fears of privacy violations, security breaches, and ethical missteps. Public trust is low, and the risk of failure is high. A proactive governance framework doesn't inhibit innovation; it enables it. For Canada's public sector and critical infrastructure leaders, building a sovereign AI capability isn't just a technical challenge but primarily a governance one. This article provides a three-step framework to do it right. Step 1: Turning Data Liability into Your Greatest Asset It's not about having data; it's about having usable data. Before you can build, you must prepare the ground. This means legally and ethically engineering your datasets so they are safe and ready for AI and machine learning applications. * Go Beyond Anonymization: Under Canadian law, there are major differences between pseudonymized datasets, truly anonymized datasets, and de-identified datasets. True anonymization is the gold standard for unlocking sensitive health, transit, or civic data for broad research and model training, as it often removes the data from the scope of privacy law entirely. Aligning your technical teams on these definitions can substantially accelerate responsible and confident innovation. * Develop Data Trusts and Sandboxes: Create legally defined "sandboxes" where validated partners and researchers can access de-identified datasets for specific, public-benefit projects. This controlled environment allows for innovation without risking the integrity or security of the core dataset. Building data sharing frameworks that envision broader access and use can help reduce friction for secondary uses as they arise. * Rethink Consent for the AI Era: The traditional model of specific, informed consent is brittle and often impractical for the dynamic nature of AI.
The modern, more defensible approach focuses on two pillars: * Reasonable Expectations: In accordance with new direction from the Federal Court of Appeal, aligning data use with what individuals would reasonably expect when they provided their information. This has been an unspoken cornerstone of Canadian privacy law and, as individual expectations re: data use are crystallizing over time, is paramount for maintaining public trust. * Demonstrable Accountability: Shifting the burden from individual consent to organizational accountability. This means being able to demonstrate that your systems are fair, secure, and used for their stated purpose, regardless of the consent obtained. Step 2: Vetting Your AI Supply Chain Most public sector AI will be procured, not built in-house. Remember: you're not just buying software; you're inheriting a supply chain of risk. It's important to understand the different layers of your AI supply chain, from the foundational model to the end-user application, and to contractually define ownership and risk at each stage. When you assess a vendor, differentiate between: * Primary Providers (Foundation Models): These are the base layers, like large language models. The key risks here are the provenance of their training data and inherent model biases that you will inherit. * Secondary Providers (AI-Driven SaaS): These vendors build applications on top of primary models, often incorporating complex data pipelines like Retrieval-Augmented Generation (RAG) or vector embeddings. The risks here multiply, involving how your data is processed, enriched, and secured at every step. 6 Key Questions to Ask Your Next AI Vendor: 1. "Where will our data live, and who can access it?" This covers data residency, security safeguards, and cross-border data flows. 2. "How was your model trained, and can you demonstrate bias mitigation?" This is essential for primary providers and helps you understand the risks you're inheriting. 3. "Can you explain how your application makes its decisions?" You need to push back against the "black box" excuse to meet your own transparency and accountability obligations. 4. "Who owns the outputs? The insights, reports, or new models generated using our data?" This is a critical IP question. Define ownership clearly in your contract to avoid giving away valuable derivative assets. 5. "What are our rights if we terminate the service? Can our data and the outputs be securely and verifiably deleted?" Ensure you have a clear exit path that doesn't lock you in or leave your data behind. 6. "Who is legally liable when the AI makes a harmful error?" Your contract must clearly allocate risk and provide indemnification for failures that are not your fault. Building on the AI Impact Assessment (as mentioned below), be sure to include specific indemnities for risks that may be foreseeable based on the use-case and/or model. Step 3: Preparing for When, Not If, Things Go Wrong Good governance shines brightest in a crisis. The final piece is establishing clear lines of human accountability before an AI system is deployed. This requires planning for a new class of novel, AI-specific incidents. * Appoint an "AI Accountable Executive": Just as you have a Chief Privacy Officer, a designated senior leader must be formally responsible for the performance, ethics, and impact of the organization's AI systems. This individual should be cross-functionally fluent and must be in touch with wide swaths of the organization. 
* Mandate AI Impact Assessments (AIA): Before any high-risk AI system goes live, a mandatory internal review must be conducted to proactively identify and mitigate risks related to bias, discrimination, and security. The federal government's own AIA is a useful starting point. * Create an "AI Incident Response Plan": A standard data breach plan is insufficient. This specialized playbook must address unique AI threats and failures, including: * Adversarial Attacks & Model Poisoning: A plan for malicious attempts to corrupt your training data or trick the model into producing harmful outputs. * Algorithmic Bias Discovery: A process for identifying, escalating, and remediating systemic biases that are discovered after deployment. * Cascading System Failure: A playbook for when an AI error causes significant, widespread operational disruption or public harm. Conclusion: From Governance to Advantage Building sovereign AI is a deliberate act of strategic governance, not just technological development. The initial three steps above provide a clear path forward. By embedding legal and ethical principles into our AI lifecycle from the start, we can turn a source of national anxiety into a source of enduring national advantage, building AI systems that are not only powerful but also trustworthy. Navigating the intersection of AI, data law, and public trust is complex. If your organization is starting this journey, I offer a complimentary 30-minute strategic call to discuss your specific challenges. --- ## BC Venture Funds: Which Structure Fits Your Strategy? URL: https://sarhandata.law/resources/venture-funds-in-bc Date: 2025-08-08 Category: Business Transactions British Columbia’s tech ecosystem is thriving. With this energy comes a new wave of ambition: visionary investors and operators are looking to launch their own venture capital funds to back the next generation of innovators. But before the first investment is made, a fund’s founders face a critical strategic decision that will define their fundraising, investment strategy, and operational reality. In B.C., there are two primary paths for structuring a venture fund: the traditional, flexible Limited Partnership (LP) model, and the provincially-incentivized Small Business Venture Capital Act (SBVCA) program. Choosing the right path isn’t just a legal formality; it’s the foundation of your fund’s identity. At Sarhan Data Law, we help our clients make this choice with clarity and foresight. Let’s break down both options. Path 1: The Traditional VC Fund – The Limited Partnership (LP) Model The LP model is the global standard for venture capital and private equity for a reason: flexibility and scalability. * How it Works: You, the fund manager, create a General Partner (GP) entity to manage the fund. Investors join as Limited Partners (LPs), contributing capital and limiting their liability. The relationship is governed by a comprehensive Limited Partnership Agreement (LPA). * The Core Advantage: Unrestricted freedom. The LP model allows you to define your own investment thesis. You can raise capital from accredited investors anywhere in the world—from Vancouver to New York to London—and you can invest in promising companies regardless of their location, stage, or industry. Your fund’s strategy is limited only by what you and your LPs agree to in the LPA. The engine of the traditional model is pure capital appreciation. Your success is measured by the returns you generate for your investors, free from government-imposed constraints. 
Path 2: The Provincial Advantage – The Small Business Venture Capital Act (SBVCA) The SBVCA program is a powerful tool created by the B.C. government to stimulate the local economy. It does this by offering a compelling incentive to investors. * How it Works: Your fund is set up as a B.C. corporation, registered with the province as a Venture Capital Corporation (VCC). When B.C.-based investors buy shares in your VCC, they receive a 30% provincial tax credit on their investment. Your VCC must then invest this capital into pre-approved "Eligible Small Businesses" (ESBs) based in British Columbia. * The Core Advantage: A supercharged fundraising tool. The 30% tax credit is a significant incentive that can dramatically de-risk the investment for B.C. residents and corporations. For a fund with a hyper-local B.C. focus, this can make raising your first fund significantly easier. The engine of the SBVCA model is the tax credit. It’s a powerful accelerator for attracting local capital, but it comes with a strict set of rules. The Strategic Crossroads: A Head-to-Head Comparison To put it simply, the traditional model offers freedom, while the SBVCA offers a powerful but constrained incentive. Think of it this way: the SBVCA program is like a government co-pilot that gives you a major boost, but it comes with a pre-approved flight plan limited to B.C. airspace. The traditional model gives you control of the cockpit to fly anywhere you see opportunity. Here’s how they stack up on key factors:

| Factor | Traditional LP Fund | SBVCA VCC Fund |
| --- | --- | --- |
| Investor Pool | Global. You can fundraise from accredited investors and institutions anywhere. | Local. Your primary fundraising advantage is with B.C. residents who can use the tax credit. |
| Investment Mandate | Flexible. Invest in any company, any geography, any stage that fits your thesis. | Restricted. You must invest in B.C.-based "Eligible Small Businesses" in approved sectors. |
| Geographic Focus | Unrestricted. Support your companies as they grow and expand globally. | Mandatory B.C. Focus. You risk non-compliance if a portfolio company moves its head office. |
| Primary Incentive | High Returns. Investors are motivated solely by the potential for significant capital gains. | Tax Credit. The 30% tax credit is the main draw, lowering the effective risk for investors. |
| Regulatory Body | Primarily Securities Commissions. | Primarily the B.C. Investment Capital Branch. |

Choosing the Right Path for Your Vision The right choice depends entirely on your fund's mission. * The SBVCA path is likely for you if: Your investment thesis is already hyper-focused on early-stage, B.C.-based companies in eligible sectors, and your target investors are primarily high-net-worth individuals and corporations in B.C. * The Traditional LP path is for you if: You envision a fund with a broader geographic scope, want the flexibility to invest in any sector or stage, and plan to target institutional investors from Canada and abroad. How Sarhan Data Law Can Help Choosing your path is just the beginning. At Sarhan Data Law, we provide the practical legal guidance to turn your vision into a reality. We assist fund managers with: * Strategic Structuring: Helping you select and form the right legal entity (LP or VCC) for your fund’s goals. * Document Drafting: Preparing the essential legal documents, including Limited Partnership Agreements, Subscription Agreements, and offering documents. * Securities Compliance: Navigating the complex prospectus and registration exemptions to ensure your fundraise is compliant.
* Specialized Due Diligence: As a data-focused firm, we go one step further. We help you design and implement a sophisticated due diligence playbook to assess the data, AI, and privacy risks in your potential investments—a critical advantage in today's market. Starting a fund is a significant undertaking. Building it on the right foundation is the key to long-term success. Ready to explore the right path for your venture fund? Contact Sarhan Data Law today for a strategic consultation. --- ## Where is AI Liability heading? URL: https://sarhandata.law/resources/where-is-ai-liability-heading Date: 2025-04-21 Category: Product Counsel The rapid integration of artificial intelligence (AI) into commercial and public-sector operations has started to prompt legal scrutiny in Canada, particularly concerning liability for AI-driven decisions and misinformation. While the judiciary has not yet had a full opportunity to consider the question of liability for AI, previous decisions regarding parallel technologies offer some indication of where things are headed. Early Rumblings in Lower Tribunals The Moffatt v. Air Canada Decision In February 2024, the British Columbia Civil Resolution Tribunal (CRT) ruled that Air Canada was liable for negligent misrepresentation after its AI chatbot provided incorrect guidance to a customer regarding bereavement fare policies. The plaintiff, Jake Moffatt, had relied on the chatbot’s advice to purchase a full-price ticket under the assumption that he could later apply for a partial refund under Air Canada’s bereavement fare program. When the airline denied his refund request, citing a policy discrepancy between the chatbot’s statements and its official guidelines, Moffatt filed a claim for damages. The tribunal dismissed Air Canada’s argument that the chatbot constituted a “separate legal entity” exempting the company from liability. Tribunal member Christopher Rivers emphasized that businesses deploying AI systems must ensure the accuracy of all information disseminated through their platforms, whether static or dynamically generated. The ruling affirmed that organizations cannot delegate accountability for AI outputs to the technology itself, as doing so would undermine consumer protections and erode trust in automated systems. Continuation of Existing Principles The decision clarified several key points relevant to AI liability in Canada that mirror an employer's vicarious liability for the actions of its employees: 1. Duty of Care: Service providers owe a duty of care to consumers to ensure the accuracy of information provided through AI tools. 2. Standard of Care: Companies must implement reasonable safeguards to verify AI-generated content, including routine audits and disclaimers where necessary. 3. Causation: Reliance on AI misinformation can establish a direct causal link to financial or reputational harm, even if the error originates from machine learning algorithms. While the quantum of damages was minor (the tribunal awarded Moffatt damages totaling CAD 1,640.36, including partial reimbursement, interest, and tribunal fees), this outcome signals a growing judicial willingness to hold businesses accountable for AI failures, irrespective of the technology’s complexity or autonomy. Broader Implications for AI Governance Provincial Initiatives and Tort Law Adaptations In October 2024, the British Columbia Law Institute (BCLI) published a report advocating for tort law reforms to address AI-driven harms. The report rejected strict liability regimes, instead proposing a fault-based framework where plaintiffs must demonstrate that developers or deployers failed to meet a reasonable standard of care. Key recommendations include: * Evidentiary Adjustments: Courts should infer causation in cases where AI systems’ opacity prevents plaintiffs from tracing harm to specific design flaws. * Algorithmic Discrimination Remedies: Legislators should create civil remedies for biases embedded in training data or decision-making processes. These proposals align with the reasoning in Moffatt, emphasizing that existing negligence principles remain applicable but may require judicial flexibility to account for AI’s unique challenges. Beyond negligence, given the absence of legislation around liability for AI and while legislators consider specific AI-focused provisions, several other liability theories are starting to emerge in AI-related cases: * Copyright Infringement: The Canadian media lawsuit against OpenAI centres on the unauthorized use of copyrighted materials to train AI systems, and several other cases arising in the United States will determine the consequences of foundation models training on copyrighted data. * Product Liability: Traditional product liability concepts are being applied to AI systems, with claims alleging defective design, failure to warn, and negligence. * Contractual Liability: Some cases involve breach of contract claims, particularly when AI systems fail to perform as promised or when terms of service are violated. * Class Action Liability: Where damage may be minor at the individual level but significant in the aggregate, AI developers may face class action lawsuits (as seen in recent BC decisions). Industry Responses and Risk Mitigation Canadian businesses have accelerated efforts to implement AI governance frameworks. The federal government’s March 2025 release of a Guide for Managers of AI Systems provides practical steps for compliance with the ISED Voluntary Code of Conduct, including: * Regular audits of AI outputs for accuracy and bias. * Clear disclaimers informing users of AI-generated content’s limitations. * Human oversight protocols for high-stakes decisions, such as financial advisement or medical diagnostics. Major signatories to the Voluntary Code, including TELUS, CIBC, and Intel, have pledged to integrate these measures into their AI deployment strategies. Emerging Jurisprudential Challenges As AI systems grow more sophisticated, courts will likely confront novel questions, such as: * Liability for Autonomous Actions: Whether developers can be held liable for unforeseeable AI behaviors arising from machine learning adaptations, including in sensitive areas like healthcare. * Third-Party Integrations: How liability should be apportioned when AI tools incorporate external data sources or APIs. * Cross-Border Disputes: Jurisdictional conflicts when AI systems operate across provincial or national boundaries. The ongoing CanLII v. Caseway AI and Toronto Star v. OpenAI cases, which address copyright infringement in AI training data, may further shape liability standards by clarifying the scope of “fair dealing” under Canadian law. Conclusion Canadian AI jurisprudence is developing and starting to affirm that traditional tort principles of negligence and misrepresentation apply to AI deployments. While legislative efforts like AIDA have stumbled, existing rulings provide immediate guidance for businesses: invest in AI governance, prioritize transparency, and assume liability for algorithmic outputs. As the technology evolves, Canada’s legal framework must similarly adapt, balancing innovation with accountability to safeguard public trust in an increasingly automated world. Future developments will hinge on collaborative efforts between policymakers, industry leaders, and the judiciary to create a cohesive strategy for AI accountability—one that learns from both the successes and shortcomings of early cases like Moffatt. Until then, organizations would be prudent to treat AI not as a shield against liability, but as a tool requiring diligent stewardship. --- ## Negotiating AI Contracts: Data, Training & Liability URL: https://sarhandata.law/resources/architecting-strategic-ai-service-agreements Date: 2025-04-14 Category: Business Transactions Artificial Intelligence is rapidly becoming foundational to the way enterprises deliver services. From optimizing operations to creating entirely new customer experiences, the potential is immense. As you integrate AI services into your ecosystem, a critical, often underestimated, challenge emerges: contracting. The agreements governing your AI partnerships are far more than legal formalities; they are strategic instruments that can significantly impact your innovation trajectory, risk exposure, and the long-term value derived from your data assets. Standard Service Agreements (SSAs) and boilerplate templates need substantial changes to capture the unique complexities of AI. At Sarhan Law, we work at the intersection of technology, privacy, and commercial law. We see firsthand how crucial sophisticated AI contract negotiation and drafting are in 2025. Based on the evolving landscape, here are key strategic considerations every forward-thinking leader must address: Your Data is Your Strategic Asset In the AI era, data isn't just input; it's the fuel for insight and potentially the core of your competitive advantage. Your AI contracts must establish unambiguous data sovereignty from the outset. * Explicit Ownership: Best practice dictates clearly stating that all data provided by you (input) and generated specifically for you by the AI (output) remains your "sole and exclusive property." Don't settle for vague language. * Granular Definitions: Sophisticated agreements now differentiate between Customer Data, Personal Data, and Training Data, applying specific rules to each. This precision prevents accidental oversharing or misuse, particularly of sensitive personal information. Resist provider attempts to claim broad rights beyond service delivery, especially concerning raw output data. Navigating the Model Training Tightrope This is the crux of modern AI contracting – the "training data clause." Can the AI provider use your data to train or improve their underlying models? Allowing this can inadvertently leak sensitive patterns, enhance a tool used by competitors, or create complex compliance headaches. This isn't just a permission slip; it's a strategic decision with long-term consequences. The market offers a spectrum: * Full Prohibition: The default safe harbour – contractually forbidding any use of your data for model training. * Explicit Opt-In: Requiring your specific, written consent per use case before any training occurs, giving you maximum control.
* Aggregated/Anonymized Use: Permitting training only on data demonstrably stripped of identifying features and aggregated. This requires rigorous technical validation and clear contractual safeguards. * Limited Purpose Use: Allowing training solely to improve the service for you, strictly prohibiting use for broader model enhancement. Consider adding model deletion requirements upon contract termination and audit rights to verify adherence – crucial mechanisms for maintaining control. Embedding Trust & Compliance AI systems operate within a complex web of privacy regulations (like PIPEDA, GDPR) and societal expectations of fairness. Your contracts must proactively address these: * Fortifying Privacy: Explicitly restrict or prohibit the use of personal data for model training due to compliance risks and the technical difficulty of "forgetting" data. * Mandating Fairness: Include provisions requiring the AI provider to implement bias detection and mitigation measures. This isn't just ethical; it protects your brand reputation and reduces legal liability arising from discriminatory AI outputs. De-Risking the Black Box: Clear Liability and Accountability When AI systems falter – generating incorrect information, infringing IP, or contributing to a data breach – ambiguity over responsibility is costly. Strategic contracts establish clear lines: * Explicit Provider Liability: Define the provider's accountability for data breaches originating from their service, AI-generated errors, infringement claims related to the AI's output, and regulatory fines tied to the service's function. * Indemnification: Secure appropriate indemnification clauses covering these AI-specific risks. * Monitoring & Explainability: Where feasible, include rights for ongoing monitoring and requirements for the provider to offer transparency or explainability regarding AI-driven decisions impacting your business. Architecting for Resilience: Termination and Data Governance Your AI partnerships will evolve. Clear exit protocols are essential for business continuity and preventing lock-in: * Defined Data Disposition: Mandate secure return or certified destruction of all your data upon termination, with clear timelines. * Model Management Clarity: Address what happens to AI models trained (even partially) on your data. While complex, especially for aggregated use, aim for decommissioning where your proprietary data was central to the training. The Strategic Imperative: Specialized Counsel for Specialized Technology Navigating the nuances of AI contracting – especially data rights, training permissions, and liability allocation – requires more than general legal knowledge. It demands a deep understanding of the technology, the evolving legal landscape, and the strategic implications for your business. Trying to adapt old templates or relying solely on provider paper creates unnecessary risk and can undermine the very value you seek from AI. How Sarhan Law Can Help: Whether you are a CIO integrating enterprise AI solutions or a Startup CEO building AI into your core offering, getting the contractual foundation right is paramount. Sarhan Law specializes in precisely this intersection. We help clients: * Negotiate complex AI service agreements with major providers, ensuring your rights and assets are protected. * Draft robust, future-proof standard AI service agreements for AI-first companies. * Advise on data privacy and ethical AI considerations within your commercial relationships. 
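As a practical complement to the data-category and training-permission distinctions discussed in this article, some teams keep a simple per-vendor register that both legal and engineering can read. The sketch below is a minimal, hypothetical illustration, not a standard form: the category names, flags, and risk-flagging helper are assumptions, and the positions shown are examples rather than recommended terms.

```python
from dataclasses import dataclass
from enum import Enum

class TrainingPermission(Enum):
    """The spectrum of training-data positions discussed above."""
    PROHIBITED = "full prohibition"
    OPT_IN = "explicit written opt-in per use case"
    AGGREGATED_ONLY = "aggregated/anonymized use only"
    LIMITED_PURPOSE = "improve the service for this customer only"

@dataclass
class DataCategoryTerms:
    category: str                   # e.g. "Customer Data", "Personal Data", "Output Data"
    owner: str                      # who the contract says owns this data
    training: TrainingPermission
    deletion_on_termination: bool   # certified destruction / model decommissioning required?
    audit_rights: bool              # can we verify adherence?

# Hypothetical entries for one AI vendor agreement.
acme_ai_terms = [
    DataCategoryTerms("Customer Data", "customer", TrainingPermission.PROHIBITED, True, True),
    DataCategoryTerms("Personal Data", "customer", TrainingPermission.PROHIBITED, True, True),
    DataCategoryTerms("Output Data", "customer", TrainingPermission.LIMITED_PURPOSE, True, False),
]

def flag_risks(terms: list[DataCategoryTerms]) -> list[str]:
    """Surface positions that typically warrant a closer look before signing."""
    risks = []
    for t in terms:
        if t.training is not TrainingPermission.PROHIBITED and not t.audit_rights:
            risks.append(f"{t.category}: training permitted but no audit rights to verify adherence")
        if not t.deletion_on_termination:
            risks.append(f"{t.category}: no deletion or decommissioning obligation at termination")
    return risks

if __name__ == "__main__":
    for risk in flag_risks(acme_ai_terms):
        print("REVIEW:", risk)
```

A register like this does not replace the contract; it simply makes the negotiated positions visible so product and engineering decisions stay within them.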
Don't leave your AI strategy vulnerable at the contractual level. Let's discuss how we can help you architect agreements that enable innovation while safeguarding your interests. Contact Sarhan Law today for a consultation on your AI contracting needs. --- ## Decoding Your Cyber Policy: 6 Critical Things to Review URL: https://sarhandata.law/resources/decoding-your-cyber-policy-6-critical-things-to-review Date: 2025-04-08 Category: Data Protection & Cybersecurity In today's digital world, cybersecurity threats are an unfortunate reality for enterprises across Canada. From ransomware to sophisticated denial-of-service attacks, the risk of disruption and significant financial loss is high. Cybersecurity insurance is a vital backstop to mitigate these risks. However, simply having a policy isn't enough; understanding the details within that policy is important to ensure it aligns with your enterprise risk posture and incident response capabilities. As a firm dedicated to the intersection of law, technology, and security, we frequently advise clients on navigating the complexities of cyber risk. Here are six critical areas every enterprise should deeply understand within their cybersecurity insurance policy: 1. The Data Exclusion Dilemma: Why Traditional Insurance Often Falls Short Traditional business insurance policies, like Commercial General Liability (CGL) or professional liability insurance, may exclude coverage for cyber incidents. Many businesses mistakenly assume their general liability coverage extends to digital risks. However, data exclusion clauses, which are becoming more common and strictly interpreted in standard CGL and similar traditional policies, can specifically carve out cybersecurity coverage. Court decisions have progressively clarified this gap, solidifying the understanding that traditional policies were not built for the digital age. The 2021 Ontario Court of Appeal case, Family and Children's Services of Lanark, Leeds and Grenville v. Co-operators General Insurance Company, provides an illustration. There, the court found that an exclusion related to the "display or distribution of data on the Internet" effectively nullified the insurer's duty to defend claims arising from a cyber breach under that particular policy framework. This ruling serves as a powerful warning for businesses to review data exclusion clauses within their existing policies. Depending on your risk profile and business, this could be a sign to seek a separate, dedicated cybersecurity insurance policy. Such policies are specifically designed to address the unique liabilities and costs associated with data breaches and other cyber incidents, filling the void left by exclusions in more traditional insurance products. Meticulous review isn't just about understanding your cyber policy; it's also about recognizing the limitations explicitly written into your general liability coverage regarding data-related risks. 2. Meeting the Bar: Minimum Security Requirements Cyber insurers don't issue policies in a vacuum; they expect baseline security controls. Nearly all policies stipulate minimum required security controls as a condition of coverage. These aren't suggestions – they are contractual obligations assessed at application and renewal. Failure to implement or maintain these controls can be grounds for claim denial. Common requirements include: * Multi-Factor Authentication (MFA): Especially for remote access and privileged accounts.
* Employee Cybersecurity Training: Regular awareness programs. * Reliable Data Backups: Regularly tested and stored securely (often offline/immutable). * Identity Access Management (IAM): Controlling user permissions. * Data Classification: Understanding and protecting sensitive data. Failure to implement and consistently maintain these measures can lead to claim denial or policy cancellation. It's essential to meet these requirements before finalizing your insurance coverage. Treat these requirements as auditable controls and map your existing security stack and processes directly against the policy's stipulations. Maintain documentation and evidence of compliance. Ensure your security roadmap aligns with insurer expectations, which often evolve with the threat landscape. 3. Scrutinize Coverage Definitions, Sub-limits, and Exclusions The devil is truly in the details of what constitutes a covered "event" or "loss." When you get your policy, ensure you know exactly what it covers and what it doesn't: * Definitions: How does the policy define a "cyber incident," "wrongful act," "data breach," or "network interruption"? Does it align with your operational reality? * Coverage Areas: Confirm coverage for key risks: network security/privacy liability, breach response (forensics, notification, credit monitoring, legal), business interruption (BI), data recovery, cyber extortion/ransomware payments, regulatory defence/fines. * Sub-limits: Be aware of lower limits for specific coverages (e.g., ransomware payments, regulatory fines, social engineering fraud). Are these adequate given your risk profile? * Exclusions: Pay close attention to what's not covered. Common exclusions include acts of war/terrorism (interpretations vary), failures attributed solely to internal negligence without an external trigger, loss of certain unencrypted data, or incidents pre-dating the retroactive date. Work with your risk management and legal teams to model potential incident scenarios against the policy language. Identify critical risks specific to your enterprise (e.g., environment impacts, large-scale regulatory exposure under PIPEDA/GDPR/CCPA if applicable) and confirm adequate coverage or negotiate endorsements. 4. Understand Incident Response Integration When an incident strikes, speed and expertise are critical. Your policy dictates how incident response (IR) resources, such as legal counsel and forensic investigators, are engaged. Many insurers require the use of pre-approved "panel" vendors. While these firms are vetted, they may not have specific expertise relevant to your industry or technology stack, or they may conflict with your established IR relationships. Review the policy's requirements regarding IR vendors before an incident. If you have preferred, trusted legal and forensic partners, negotiate with the insurer to have them pre-approved and formally added to the policy at the time of purchase or renewal. Failure to do this can lead to delays, suboptimal response, or disputes over cost coverage if you engage non-panel vendors during a crisis. Also, clarify the "Duty to Defend" provisions – understand when the insurer's obligation to cover legal defence costs triggers and any conditions attached (like reimbursement undertakings). There are circumstances where you must initially incur the cost of IR and only later determine claim eligibility. 5. Clarify Coverage Timelines Policies contain critical time limitations. 
The Retroactive Date typically excludes coverage for wrongful acts occurring before this date (often the inception date of the first policy with that insurer). This prevents coverage for long-standing, pre-existing vulnerabilities. If available, "Prior Acts Coverage" can mitigate this gap. For Business Interruption (BI), understand the Waiting Period (e.g., 8-12 hours of downtime before coverage begins) and the Indemnity Period (the maximum duration coverage applies, e.g., 90-180 days). Confirm the retroactive date and assess potential exposure related to historical activities or inherited risks (e.g., from M&A). For BI, ensure the waiting period is realistic for your recovery plan and the indemnity period is sufficient to cover a potentially prolonged recovery from a major incident. 6. Verify Coverage for Modern Threats Threat vectors evolve. Ensure your policy adequately addresses prevalent modern attacks: * Social Engineering Fraud: Attacks manipulating employees into transferring funds or divulging credentials may be excluded or heavily sub-limited in standard policies. Specific endorsements are often required. * Third-Party/Supply Chain Risk: Incidents originating from compromised vendors or service providers are increasingly common. Review how your policy addresses liability and losses stemming from these third-party relationships. Explicitly check for social engineering fraud coverage and its limits. Understand policy language regarding incidents caused by third-party providers your enterprise relies upon. Advocate for endorsements if coverage gaps exist for these high-probability risk vectors. Conclusion: Proactive Partnership for Optimal Protection Your enterprise cybersecurity insurance policy is a strategic asset, but only if its intricacies are fully understood and aligned with your security posture and risk profile. Actively participating in the policy review process alongside legal, risk management, and your insurance broker can be really helpful to develop a cohesive strategy for cybersecurity. Treat the policy not just as a financial backstop, but as another layer in your comprehensive security strategy that requires ongoing validation and alignment. Proactive diligence ensures that when a crisis hits, your insurance coverage performs as expected, supporting effective response and recovery. --- ## PIPEDA: A Resilient Framework for Privacy in the AI Era URL: https://sarhandata.law/resources/pipeda-a-resilient-framework-for-privacy-in-the-ai-era Date: 2025-03-31 Category: Product Counsel The Personal Information Protection and Electronic Documents Act (PIPEDA) is Canada's federal privacy law governing the collection, use, and disclosure of personal information by private sector organizations during commercial activities. As technological advancements like artificial intelligence (AI) reshape industries, PIPEDA's foundational principles have demonstrated adaptability and resilience to emerging challenges. This post delves into PIPEDA's ten principles, provincial adequacy rules, and its capacity to remain relevant amid rapid technological change. The 10 Principles of PIPEDA PIPEDA is built on ten Fair Information Principles, which serve as a robust framework for protecting personal information: 1. Accountability: Organizations must designate individuals responsible for ensuring compliance with PIPEDA. They are required to implement privacy management programs and policies to safeguard personal information. 2. 
2. Identifying Purposes: Organizations must clearly identify and document the purposes for collecting personal information before or at the time of collection. Transparency is key, especially when purposes evolve.
3. Consent: Individuals must give informed consent for the collection, use, or disclosure of their personal information, except where inappropriate (e.g., legal requirements).
4. Limiting Collection: The collection of personal information must be restricted to what is necessary for the identified purposes. Fair and lawful means are mandatory.
5. Limiting Use, Disclosure, and Retention: Personal information can only be used or disclosed for the original purposes unless consent is obtained or legally required. Retention must align with necessity.
6. Accuracy: Organizations must ensure that personal information is accurate, complete, and up-to-date to fulfill its intended purpose effectively.
7. Safeguards: Security measures must protect personal information from unauthorized access, disclosure, or misuse. Safeguards should correspond to the sensitivity of the data.
8. Openness: Organizations are required to make their privacy policies and practices readily accessible to individuals.
9. Individual Access: Individuals have the right to access their personal information upon request and to challenge its accuracy or completeness.
10. Challenging Compliance: Individuals can challenge an organization's adherence to PIPEDA principles through established procedures.

These principles collectively empower individuals while ensuring organizations maintain high standards for privacy protection.

Provincial Adequacy Rules

While PIPEDA applies across Canada, certain provinces have enacted privacy laws deemed "substantially similar" to PIPEDA:

* Québec, British Columbia, and Alberta have comprehensive private-sector privacy laws that exempt organizations operating within these provinces from PIPEDA's direct application.
* In provinces like Ontario, New Brunswick, Nova Scotia, and Newfoundland and Labrador, legislation governing health information has also been deemed substantially similar for specific entities such as healthcare providers.

This "patchwork" of provincial laws requires organizations operating interprovincially or internationally to navigate both federal and provincial regulations carefully. Québec's recent enactment of Law 25 exemplifies evolving provincial privacy standards. It mandates proactive measures like privacy impact assessments for activities involving AI tools and imposes transparency requirements for automated decision-making systems. These developments highlight how provincial laws complement PIPEDA by addressing emerging technologies and can introduce new, specific requirements for private-sector actors.

Resilience Amid Technological Change

Over time (and especially with the death of Bill C-27), PIPEDA has proven adaptable. That adaptability stems from its principles-based approach rather than from rigid prescriptive rules. This flexibility allows it to address new challenges posed by technologies like AI without requiring constant legislative amendments:

* The principle of reasonableness ensures that organizations collect, use, or disclose personal information only in ways that a reasonable person would consider appropriate under the circumstances—this standard inherently accommodates technological advancements.
* While PIPEDA does not explicitly regulate AI tools, its provisions apply broadly to any activity involving personal information, including AI-driven data processing.
* Safeguards against inappropriate uses of personal data—such as profiling leading to discriminatory treatment—are particularly relevant in AI contexts where biases can arise from algorithmic decision-making.

Moreover, PIPEDA's emphasis on transparency (e.g., identifying purposes) aligns well with emerging global trends demanding accountability in AI systems. For instance, organizations deploying AI tools can leverage PIPEDA's framework to ensure compliance with both domestic privacy laws and international standards.

PIPEDA has also consistently held adequacy status with the European Union, first under the 1995 Data Protection Directive and now under the General Data Protection Regulation (GDPR), enabling seamless data transfers between the EU and Canada. Since the European Commission first recognized PIPEDA as adequate in 2001, this status has been upheld through periodic reviews, most recently in January 2024. Adequacy ensures that personal data can flow from the EU to Canadian organizations without requiring additional safeguards, such as standard contractual clauses or binding corporate rules, simplifying compliance for businesses operating across borders.

Conclusion

PIPEDA remains a resilient legislative framework capable of addressing privacy concerns in an era dominated by technological innovation. Its principles-based approach provides flexibility while maintaining robust protections for individuals' rights. As AI continues to transform industries, PIPEDA's focus on accountability, consent, transparency, and safeguards ensures it adapts effectively without losing relevance. Provincial adequacy rules further strengthen Canada's privacy landscape by introducing tailored requirements that complement PIPEDA's federal scope. Together, these frameworks create a cohesive system that balances innovation with privacy protection—an essential consideration as businesses increasingly adopt AI-driven solutions.

Organizations navigating Canada's privacy laws should view PIPEDA not as a static regulation but as a dynamic tool capable of evolving alongside technological progress while safeguarding fundamental rights.

---

## New Frontiers in Canadian Privacy Law

URL: https://sarhandata.law/resources/new-frontiers-in-canadian-privacy-law
Date: 2025-03-24
Category: Product Counsel

Two landmark decisions in 2024, Clearview AI Inc. v. Information and Privacy Commissioner for British Columbia (2024 BCSC 2311) and Canada (Privacy Commissioner) v. Facebook, Inc. (2024 FCA 140), provide critical guidance for organizations navigating privacy and AI regulation. These rulings establish new compliance paradigms for extraterritorial jurisdiction and digital consent frameworks.

Extraterritorial Reach: Clearview's Precedent

In Clearview AI, the BC Supreme Court extended provincial privacy law to foreign entities without a physical presence in British Columbia. The court found jurisdiction under BC's Personal Information Protection Act (PIPA) based on two factors: (1) that Clearview AI was marketing services to BC clients; and (2) that Clearview was collecting facial recognition data from BC residents' publicly available online images.

This "real and substantial connection" test (applied to privacy law for the first time) signals that any organization harvesting data from BC residents may face PIPA compliance obligations, regardless of corporate headquarters or server locations. The Court also rejected Clearview's reliance on the "publicly available information" exemption, emphasizing that bulk scraping of biometric data creates disproportionate risks of harm.
Consent Revolution: Facebook's PIPEDA Reckoning

The Federal Court of Appeal's Facebook decision redefined consent standards under PIPEDA, addressing the Cambridge Analytica scandal. Key holdings included:

A. Meaningful Consent Requires Clarity: An objective "reasonable person" standard ensures that consent validity depends on what a hypothetical informed user would understand, not on subjective interpretations. As part of this standard, the Federal Court of Appeal identified the following prohibited practices:

* Buried disclosures in lengthy privacy policies (Facebook's 4,300-word Data Policy failed this test)
* Passive consent through default sharing settings
* Vague references to third-party data use in adhesion contracts

This holding pushes companies to redesign their digital consent frameworks around what a reasonable user would actually expect.

B. Unshakable Safeguarding Obligations: The Court rejected Facebook's argument that safeguarding responsibilities ended when data reached third-party apps. Key failures included:

* Ignoring "red flags" from apps requesting excessive data
* Failing to audit developers' compliance with privacy policies
* Creating an unmanageable ecosystem of 40,000+ apps while contracting away accountability

This attempt to contract away accountability points to a regulatory trend: away from placing the onus on consumers and data subjects to protect their data through consent, and towards a model in which data controllers owe broader obligations to their data subjects.

Cross-Canada Enforcement Landscape

Taken together, these cases drive at a compliance baseline with three key features:

1. No Digital Exceptionalism: Traditional territorial analysis adapts to borderless data flows
2. Consumer-Centric Standards: Complexity no longer excuses obscurity – privacy notices must facilitate genuine understanding
3. Chain of Custody Accountability: Data controllers remain responsible for downstream uses, even by third parties

Building Future-Proof Compliance

For enterprises and organizations working with personal information, these decisions demand proactive strategies:

* Revise jurisdictional assessments to consider data subject residency rather than corporate footprints
* Redesign consent workflows using plain-language disclosures tested against the "reasonable person" standard (a minimal sketch follows this list)
* Implement AI governance frameworks that document safeguarding measures for training data and model outputs
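For illustration only, the following minimal sketch shows one way engineering teams sometimes model purpose-specific, affirmative consent. The ConsentRecord fields and the record_consent helper are hypothetical; they are not drawn from the decisions, from PIPEDA, or from any particular platform:

```python
# Hypothetical sketch: the ConsentRecord fields and record_consent helper are
# illustrative only, not a design prescribed by PIPEDA or the Facebook decision.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                        # plain-language purpose shown to the user
    involves_third_party_sharing: bool
    granted: bool                       # reflects an explicit user action, never a default
    recorded_at: datetime


def record_consent(user_id: str, purpose: str,
                   involves_third_party_sharing: bool,
                   user_clicked_agree: bool) -> ConsentRecord:
    """Capture consent per purpose, and only from an affirmative act."""
    return ConsentRecord(
        user_id=user_id,
        purpose=purpose,
        involves_third_party_sharing=involves_third_party_sharing,
        granted=user_clicked_agree,     # False unless the user actively agreed
        recorded_at=datetime.now(timezone.utc),
    )


# One record per purpose: agreeing to account creation does not silently
# extend to sharing data with third-party apps.
print(record_consent("user-123",
                     "Share your friends list with apps you install",
                     involves_third_party_sharing=True,
                     user_clicked_agree=False))
```

The design point is simply that each purpose gets its own plain-language disclosure and its own explicit grant, so nothing is consented to by default and the organization can later show exactly what the individual agreed to, and when.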
Organizations should treat these cases as warning shots across the bow – Canadian regulators now wield sharpened tools to enforce privacy rights in algorithmic systems. The path forward requires embedding privacy-by-design into AI development lifecycles, ensuring compliance keeps pace with technological innovation.

---

## VIPSS 2025: At the Nexus of Privacy, Security, and Emerging Tech

URL: https://sarhandata.law/resources/vipss-2025-at-the-nexus-of-privacy-security-and-emerging-tech
Date: 2025-03-17
Category: Reflections

Last week, I had the opportunity to attend the Victoria International Privacy & Security Summit (VIPSS) 2025. The event was a convergence point for a diverse and engaged group of cybersecurity professionals, privacy experts, legal minds, and representatives from various public sector agencies. There was an urgency to the collaboration happening at the intersection of technology, regulation, and human rights in our increasingly digital world. Several themes emerged throughout the sessions, painting a picture of a future demanding proactive adaptation and cross-disciplinary partnership.

The Twin Revolutions: AI and Quantum Computing on the Horizon

A sense of overwhelming technological acceleration permeated most sessions. Folks particularly flagged the convergence of Artificial Intelligence (AI) and quantum computing. Our daily encounters with AI breakthroughs only foreshadow a future arriving faster than anticipated. This rapid advancement brings the prospect of "Q-Day"—the day quantum computers become powerful enough to break current classical encryption standards—into sharper focus.

Several session speakers talked about the profound implications Q-Day holds for global security, financial systems, and fundamental data privacy. The discussion wasn't just theoretical; it centered on actionable strategies for transitioning to quantum-resistant cryptographic standards. The panel stressed the necessity of starting now to understand the timelines, develop preparedness frameworks, and foster collaborative approaches to mitigate these existential risks while simultaneously exploring the potential benefits of quantum advancements. The consensus was clear: proactive planning for a secure quantum future is no longer optional, but imperative, especially as AI development continues its exponential trajectory, potentially creating a "twin revolution" scenario that will reshape our technological landscape.

Bridging the Silos: The Imperative of Privacy and Security Collaboration

There was also a recurring theme around the need for better collaboration between privacy and security professionals. The traditional silos are becoming untenable. As data collection and usage become more complex, particularly fueled by AI, robust cybersecurity practices like the principle of least privilege and zero-trust architectures are not just security measures but are also becoming essential components of effective privacy management.

One of the most compelling keynotes vividly described the overwhelming complexity defenders face, grappling with nuanced controls across sprawling hybrid environments—from traditional data centers to cloud, virtual, containerized, and serverless architectures. This complexity is the enemy of effective security, creating vulnerabilities that adversaries exploit. Given these risks, organizations must change how they defend, adopting dynamic, adaptable strategies. This perfectly aligns with the need for cross-functional teams where privacy and security insights inform each other, enabling faster, more iterative development cycles. Furthermore, there is potential for AI to act as an "on-the-job coach," helping professionals quickly grasp concepts from adjacent disciplines.

Canadian Privacy Landscape: Stalled Reforms and the Search for Pressure Points

While the technological frontiers are advancing rapidly, the summit also cast a critical eye on the state of Canada's privacy framework, revealing significant challenges and a sense of frustration. A key takeaway, echoed in multiple sessions, is that many organizations lack sufficient pressure – either regulatory or societal – to truly innovate and invest meaningfully in privacy-enhancing practices. Privacy lawyers mourned the likely failure of the federal government's Digital Charter Implementation Act and the fact that this leaves Canada's private sector privacy law, PIPEDA, as the primary legislation governing AI development and deployment – a potential gap given the technology's proliferation.
The discussion explored the implications of this legislative inertia and debated the necessary steps forward for any future government. This legislative vacuum exists alongside a rising tide of litigation. Increases in cyberattacks and evolving legal theories (potentially allowing damages without proof of specific harm) are fueling class action activity, making robust risk mitigation strategies crucial for organizations.

Against this backdrop, BC Information and Privacy Commissioner Michael Harvey's keynote address on "Consent in a Big Data Age" felt timely. Harvey argued against the notion that the consent model is broken, advocating instead for its reinforcement as a rights-centric keystone of privacy law, especially vital for placing guardrails around AI data collection. While acknowledging the need for supplementary legal authorizations, he stressed that prioritizing a robust, rights-respecting consent framework can build trust and empower individuals without unduly hindering innovation.

Collectively, these sessions painted a picture in which significant change is needed. Without stronger, more punitive regulatory frameworks or a significant societal shift in demanding better privacy practices – potentially driven by public education – meaningful progress and investment in privacy innovation may lag.

Conclusion

VIPSS 2025 was a thought-provoking and energizing summit. It highlighted the interconnectedness of privacy, security, AI ethics, and looming technological shifts like quantum computing. The key takeaways underscore the urgency for organizations to break down internal silos, foster deep collaboration between privacy and security teams, proactively prepare for quantum threats, and champion robust, rights-based approaches to data governance. While facing challenges like stalled legislative reform, the collective expertise and commitment shown at VIPSS provide optimism that through continued dialogue, strategic planning, and potentially increased regulatory or public pressure, we can navigate the complexities ahead and build a more secure and privacy-respecting digital future.

---