From AI Ambition to Operational Trust: Reflections from Navigating AI Risk, Privacy & Governance

On April 20, 2026, PwC and EviData™ by Woodway Assurance brought together leaders working across AI, privacy, risk and governance for a timely discussion about what it will take to move from AI ambition to responsible implementation.

A clear thread ran through the day: organizations are feeling real pressure to move on AI, but many are still working through how to responsibly build in governance, accountability and operational discipline. AI is no longer a side experiment. It is part of how organizations work, compete and deliver value. The harder question is how to build trust in ways that are practical, scalable and credible.

That theme came through clearly in PwC Canada’s Trust in AI report, which featured prominently in the discussion. The findings highlighted at the event showed a market in transition. Responsible AI is now a priority for many organizations, yet readiness remains uneven. The report notes that 36% of respondents still do not have a responsible AI or AI governance function, that responsible AI ownership is still concentrated in IT in many organizations and that 71% expect building trustworthy AI to have a positive financial impact. In other words, organizations increasingly see trust in AI as strategically important, but most are still building the structures needed to support it.

Across the discussions, participants kept returning to the practical tension many organizations now face: putting the right controls in place early enough to shape outcomes without becoming a source of friction as AI is adopted. The conversation pointed to a more mature view of governance, one that is not just about stopping risk, but about helping organizations move forward responsibly, with greater clarity and at pace.

From the government perspective, there was a strong sense of urgency around adoption, competitiveness and the need for Canada to move decisively in the AI space. The message was clear: trust is not separate from innovation but is part of what makes broader adoption possible. The balance between speed and trust came up repeatedly throughout the day.

From industry, one of the more useful themes was the idea that AI governance must be broader than privacy or legal review. Participants spoke about the need to distribute responsibility more widely across organizations and to assess not only downside risks, but also the potential benefits and harms for individuals, society and the business itself. This positions governance as an enabler rather than a function that exists to say ‘no’.

Suzanne Morin (SunLife), Pam Snively (TELUS), Jordan Prokopy (PwC)

There was also substantial discussion about implementation risk. As organizations move from pilots to scaled use, they face a wider range of operational and technical issues, including model drift, data poisoning, data leakage, security vulnerabilities, prompt injection and the challenge of monitoring systems over time. The broader point was that AI risk management is not a checklist at the outset; it is ongoing through the AI lifecycle.
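To make the "ongoing, not one-time" point concrete, ongoing monitoring often includes simple statistical drift checks on model inputs. The sketch below is purely illustrative (the data, bin count and alert threshold are hypothetical, not from the event): it compares a feature's live distribution to its training baseline using the population stability index, a common drift metric.

```python
import math

def psi(expected, actual, bins=10):
    """Population stability index between a baseline sample and a live sample.

    Values near 0 mean the distributions match; a common rule of thumb
    (assumed here, not prescribed by any standard) flags PSI > 0.2 as drift.
    """
    lo, hi = min(expected), max(expected)

    def bin_fractions(values):
        counts = [0] * bins
        for v in values:
            # Map each value to a bin over the baseline's range, clamping
            # out-of-range live values into the edge bins.
            idx = int((v - lo) / (hi - lo) * bins)
            counts[min(max(idx, 0), bins - 1)] += 1
        # Floor at a tiny fraction so empty bins don't produce log(0).
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical example: a baseline feature and a shifted live feed.
baseline = [i / 1000 for i in range(1000)]          # uniform on [0, 1)
live = [0.5 + i / 1000 for i in range(1000)]        # shifted upward
drifted = psi(baseline, live) > 0.2                  # True: drift detected
```

In practice a check like this runs on a schedule per feature, which is one small part of the lifecycle discipline participants described alongside security controls and human review.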

A key discussion focused on how data de-identification drives AI innovation. As organizations look for ways to use data more effectively, they also need defensible approaches to reducing privacy risk. Methodologies and tools to manage data privacy risk are some of the key enablers of AI adoption. The message from the Information and Privacy Commissioner of Ontario was pragmatic and balanced: de-identification, done well, can support valuable data use while protecting privacy and sustaining public trust. A high-level review of the 2025 IPC de-identification guidelines emphasized clearer terminology, a structured process, appropriate thresholds, documentation, and ongoing monitoring as the avenue to achieve this.
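As a rough illustration of what a quantitative re-identification risk assessment measures, the sketch below groups records into equivalence classes on their quasi-identifiers and computes simple prosecutor-risk metrics. This is a minimal sketch with made-up data and a hypothetical 0.2 threshold; it is not the EviData methodology or the IPC's prescribed process.

```python
from collections import Counter

def reidentification_risk(records, quasi_identifiers):
    """Return (max_risk, avg_risk) under a simple prosecutor-risk model.

    Records with identical quasi-identifier values form an equivalence
    class; the re-identification risk for a record is 1 / class size.
    """
    classes = Counter(
        tuple(rec[q] for q in quasi_identifiers) for rec in records
    )
    max_risk = 1 / min(classes.values())        # worst-case record
    avg_risk = len(classes) / len(records)      # mean of 1/size over records
    return max_risk, avg_risk

# Hypothetical toy dataset: the third combination is unique, so one
# record carries a worst-case risk of 1.0.
records = [
    {"age_band": "30-39", "region": "ON", "sex": "F"},
    {"age_band": "30-39", "region": "ON", "sex": "F"},
    {"age_band": "40-49", "region": "QC", "sex": "M"},
    {"age_band": "40-49", "region": "QC", "sex": "M"},
    {"age_band": "50-59", "region": "BC", "sex": "F"},
]
max_r, avg_r = reidentification_risk(records, ["age_band", "region", "sex"])
exceeds_threshold = max_r > 0.2   # True: generalization or suppression needed
```

Real assessments also account for attacker models, sampling, and contextual controls, which is why the guidelines emphasize a structured, documented process rather than a single number.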

That discussion connected directly to a question many organizations are now facing: how to operationalize anonymization and de-identification requirements in a rigorous, workable, and repeatable manner. This is especially relevant in the context of Quebec’s anonymization regulation and the EU’s GDPR, where expectations around risk assessment, documentation and residual risk are becoming more explicit.

Khaled El Emam (Woodway Assurance and University of Ottawa), Christopher Parsons (Information and Privacy Commissioner of Ontario), Amanda Maltby (Environics Analytics)

At the event, the Woodway team unveiled a new EviData capability designed to help organizations address anonymization risk in that context. Our Founder and CEO, Khaled El Emam, demonstrated this capability for the audience, showing how automated re-identification risk assessment can help make this work more practical, more consistent and more scalable.

The event reinforced something we are hearing more often: organizations are no longer asking only whether they should be using AI. They are asking how to do it in a way that can stand up to scrutiny from regulators, partners, customers, and internal decision-makers. That is where trust becomes operational. And that is where governance, de-identification and risk assessment need to work in practice, not just on paper.

To learn more, read our press release, explore our EviData brochure and join our upcoming webinar on addressing anonymization requirements under Quebec regulations and the GDPR.