
Governing AI, Powering Innovation Video Series
The survey, conducted in the first half of 2025, gathered responses from 80 professionals across a range of industries, with the majority from the financial services and ICT sectors. Respondents included technology leaders, legal counsel, risk and compliance professionals, and C-suite executives.
In this video, Colin Rooney, Partner and Head of our Technology and Innovation Group, shares his insights from the recent Governing AI, Powering Innovation survey event hosted at Arthur Cox. Colin discusses key findings from the survey, explores evolving perceptions of AI-related risks, and offers practical perspectives on how organisations can strategically harness AI as the EU AI Act approaches implementation.
Video Transcription
Colin Rooney
I’m Colin Rooney, Head of Technology and Innovation at Arthur Cox. AI is no longer theoretical: 97% of the organisations that we surveyed are using it in some form. When clients come and speak to us about AI, it’s no longer to ask, “Can we use AI?” It’s now, “How should we use AI? How do we do that compliantly? What sort of regulatory or governance framework should we have in place? What sort of risks should we be considering?”
The EU AI Act and the enforcement of that legislation are coming over the horizon, so we thought it was a good time to benchmark readiness and risk perception within the market. Three things in particular were surprising in the results. First of all, while there is a degree of governance preparation, in terms of roles and people being appointed to particular positions, there is still work to do, certainly in relation to AI literacy; we expect a further rollout of awareness and training over the coming weeks and months. Second, risk classification: understanding the role that you play within the EU AI Act ecosystem is particularly important, and there is some confusion as to whether organisations are providers or deployers. That distinction matters because certain obligations flow from it. And finally, a sense that AI is being used largely to reduce cost and drive efficiency, whereas we think there is potential in the near future for AI to drive competitiveness and distinguish one company from another.
I think over the next two to three years we will move from experimenting with AI to embedding it in the business, by which I mean there will be more of a focus on decision making, on driving value from AI, and on assessing and managing risk. We think, in particular, that those organisations which take the time to plan governance, implement risk management, and give general consideration to how AI is responsibly used in their business are likely to scale AI in a lucrative and meaningful fashion. I also feel there is potential for greater competitiveness for your organisation arising from AI usage.
So the regulatory road that we’re on will certainly impose some constraints, but it can also be a catalyst for trust and innovation. If you’d like to discuss any of the findings from our survey, or any of the other issues raised in our presentation, please feel free to reach out to any of the partners at arthurcox.com/technologyandinnovation
In this video, Ciaran Flynn and Rhiannon Monahan from our Governance and Consulting Services Group share their expert perspectives on embedding AI strategy at a senior leadership level and why integrating risk management from the outset is essential. They highlight the importance of cross-functional collaboration, central governance, and AI literacy in ensuring effective and ethical implementation. As organisations navigate the evolving landscape of artificial intelligence, Ciaran and Rhiannon offer practical insights into building a resilient and responsible AI framework fit for future success.
Video Transcription
Rhiannon Monahan
In the same way that your Chief Executive Officer is responsible for executing corporate strategy, it is important to have someone at a senior leadership level to make sure that your AI strategy is driven forward and executed effectively.
Ciaran Flynn
AI strategy can’t be divorced from risk management, so it’s really important that when organisations are thinking about their AI strategy, they’re also thinking about how it is going to impact their culture, how it is going to impact their brand, and how they can manage any risk or ethical concerns. It’s too easy, in the current environment, to think that organisations need to be using AI for AI’s sake. What’s far more important for organisations to think about when they’re forming their strategy is: what’s right for us? What’s aligned from an efficiency perspective and an operating-model perspective, but also from a cultural and ethical dimension, in terms of what’s right for the organisation, its brand, and its people? So it’s fundamental that when you’re developing your AI strategy, you are also thinking about AI risk management. They’re two sides of the same coin, and unless you start thinking about risk management from the get-go, it will be a challenge.
Rhiannon Monahan
It’s very important as well to have this senior person operating centrally, given that your AI systems are going to be implemented by employees across all functions; they’re not going to be limited to any one department, and you need to have that holistic oversight. You also need to make sure that those AI systems are being governed centrally by your operational policies, and that they are in compliance with your legal and regulatory obligations. What’s also important when it comes to AI, and to having that central leadership role, is cross-functional teams. When you’re implementing AI, you’re bringing together your technologists, your governance professionals, and your legal and compliance experts, and you need a true leader to make sure that those people can work effectively together.
Ciaran Flynn
Having more people in your organisation aware of the power of AI, and of how it can change their lived experience within the workplace, is key to generating use cases. That link between uplifting AI literacy and driving use-case generation, which in turn can help drive AI strategy, is fundamental to success in this area.
Rhiannon Monahan
I think what’s important to note here is that the AI literacy rules have been in effect since February 2025, so they’re already here and live. What organisations need to remember is that there can be a significant disparity in AI literacy across their organisation, between the groups of employees, board members, and other stakeholders that they have. All of these groups need to be assessed to determine their base level of AI literacy and to develop AI literacy programmes which suit their needs. AI literacy is going to be one of the key foundational tools when it comes to implementing AI, because what you’re doing is empowering and encouraging your employees to use it, and to know how to use it, to its best advantage.