17/06/2025 News

(L-R) Colin Rooney, Olivia Mullooly, Ciaran Flynn, Rhiannon Monahan, Ian Duffy and Rob Corbet (Fennell Photography)

The survey, conducted in the first half of 2025, gathered responses from 80 professionals across a range of industries, the majority from the financial services and ICT sectors. Respondents included technology leaders, legal counsel, risk and compliance professionals, and C-suite executives.

AI adoption is growing, but maturity varies

The survey shows that nearly all organisations are engaging with AI in some form, with 10% saying that AI is fundamental to their operations. Many are still in the early stages of adoption, with 21% providing employee access to generative AI tools and 33% developing or testing proofs of concept.

“AI is clearly on the agenda for most organisations, but the journey from experimentation to integration is still underway,” said Colin Rooney, Partner and Head of Technology & Innovation. “The challenge now is to move from isolated use cases to enterprise-wide strategies that are legally sound and ethically grounded.”

Governance gaps and role uncertainty

Despite growing adoption, 38% of organisations have not yet assigned responsibility for AI implementation to a specific individual. Where responsibility has been assigned, there is no clear consensus on where it should sit—roles range from CTOs and Heads of Data to General Counsel and COOs.

“As AI strategies mature, we’re seeing a growing need for clear ownership,” said Ciaran Flynn, Head of Governance and Consulting Services. “Establishing defined accountability structures will help organisations manage risk effectively and meet evolving regulatory expectations.”

Rhiannon Monahan, Associate Director, Governance and Consulting Services, also commented: “We’re seeing more clients ask how to embed AI governance into their existing risk and compliance frameworks. It’s not just about meeting regulatory requirements—it’s about building trust and resilience into how AI is used across the business.”

EU AI Act: Awareness rising, but impact potentially underestimated

While 61% of respondents identify as Deployers under the EU AI Act and a further 14% identify as Providers, 25% are still unsure of their classification—an important distinction given the differing compliance obligations. Only 14% believe the Act will have a high impact on their organisation, despite 28% developing AI tools in-house and 40% working with third parties to build bespoke solutions.

“Misclassification under the EU AI Act could lead to serious compliance gaps,” said Ian Duffy, Technology and Innovation Partner. “Organisations need to understand their role in the AI ecosystem and prepare accordingly.”

“We’re also seeing a shift in how AI is being viewed—not just as a technology issue, but as a strategic business priority,” added Olivia Mullooly, Partner, Technology and Innovation. “That shift is driving more cross-functional collaboration, particularly between legal, compliance, and technology teams.”

Governance frameworks still developing

While 53% of organisations have an AI usage policy and a technology committee in place, other governance elements are less mature:

  • Only 20% have an approved AI risk policy
  • Only 20% have an AI procurement policy
  • Just 25% have approved AI literacy training for employees and board members

“The uneven development of governance frameworks highlights the need for a more coordinated approach,” said Rob Corbet, Technology and Innovation Partner. “AI literacy, in particular, is becoming not just a best practice but a legal obligation.”

Global outlook

Despite regulatory divergence globally, 76% of respondents say EU regulations take precedence for their organisation, reinforcing the EU’s role as a global leader in AI governance.

You can explore the full Governing AI, Powering Innovation survey findings here.