
Governing AI, Powering Innovation Video Series
The survey, conducted in the first half of 2025, gathered responses from 80 professionals across a range of industries, with the majority from the financial services and ICT sectors. Respondents included technology leaders, legal counsel, risk and compliance professionals, and C-suite executives.
In this video, Colin Rooney, Partner and Head of our Technology and Innovation Group, shares his insights from the recent Governing AI, Powering Innovation survey event hosted at Arthur Cox. Colin discusses key findings from the survey, explores evolving perceptions of AI-related risks, and offers practical perspectives on how organisations can strategically harness AI as the EU AI Act approaches implementation.
Video Transcription
Colin Rooney
I’m Colin Rooney, Head of Technology and Innovation at Arthur Cox. AI is no longer theoretical: 97% of the organisations that we surveyed are using it in some form. When clients come and speak to us about AI, it’s no longer to ask, can we use AI? It’s now to ask, how should we use AI? How do we do that compliantly? What sort of regulatory or governance framework should we have in place? What sort of risks should we be considering?
The EU AI Act and its enforcement are coming over the horizon, so we thought it was a good time to benchmark readiness and risk perception within the market. Three things in particular were surprising in the results. First, while there is a degree of governance preparation, with people being appointed to particular roles, there is still some work to do, certainly in relation to AI literacy; we expect further rollout of awareness and training over the coming weeks and months. Second, risk classification: understanding the role that you play within the EU AI Act ecosystem is particularly important. There is some confusion over whether an organisation is a provider or a deployer, and that distinction matters because different obligations and risks flow from it. Finally, there is a sense that AI is being used largely to reduce cost and drive efficiency, whereas we see near-term potential for AI to drive competitiveness and to distinguish one company from another.
I think over the next two to three years we will move from experimentation with AI to embedding AI in the business, by which I mean there will be more of a focus on decision making, more of a focus on driving value from AI, and more of a focus on assessing and managing risk. We think in particular that those organisations which spend the time to plan governance, to implement risk management, and to give general consideration to how AI is responsibly used in their business are likely to scale AI in a lucrative and meaningful fashion. I also feel there is potential for greater competitiveness for your organisation arising from AI usage.
So the regulatory road that we’re on will certainly result in some constraints, but it can also be a catalyst for trust and innovation. If you’d like to discuss any of the findings from our survey, or any of the other issues raised in our presentation, please feel free to reach out to any of the partners at arthurcox.com/technologyandinnovation
In this video, Ciaran Flynn and Rhiannon Monahan from our Governance and Consulting Services Group share their expert perspectives on embedding AI strategy at a senior leadership level and why integrating risk management from the outset is essential. They highlight the importance of cross-functional collaboration, central governance, and AI literacy in ensuring effective and ethical implementation. As organisations navigate the evolving landscape of artificial intelligence, Ciaran and Rhiannon offer practical insights into building a resilient and responsible AI framework fit for future success.
Video Transcription
Rhiannon Monahan
In the same way that your Chief Executive Officer is responsible for executing corporate strategy, it is important to have someone at a senior leadership level to make sure that your AI strategy is driven forward and executed effectively.
Ciaran Flynn
AI strategy can’t be divorced from risk management, so it’s really important that when organisations are thinking about their AI strategy, they’re also thinking about how it will affect their culture, how it will affect their brand, and how they can manage any risk or ethical concerns. It’s too easy, in the current environment, to think that organisations need to be using AI for AI’s sake. What’s far more important for organisations to think about when forming their strategy is: what’s right for us? What’s aligned from an efficiency perspective and an operating-model perspective, but also from a cultural and ethical dimension, in terms of what’s right for the organisation, its brand, and its people? So it’s fundamental that when you’re developing your AI strategy, you’re also thinking about AI risk management. They’re two sides of the same coin, and unless you start thinking about risk management from the get-go, it will be a challenge.
Rhiannon Monahan
It’s very important as well to have this senior person operating centrally, given that your AI systems are going to be implemented by employees across all functions; they’re not limited to any one department, and you need holistic oversight. You also need to make sure that those AI systems are governed centrally by your operational policies and comply with your legal and regulatory obligations. What’s also important about that central leadership role is cross-functional teamwork: when you’re implementing AI, you’re bringing together your technologists, your governance professionals, and your legal and compliance experts, and you need a true leader to make sure those people can work effectively together.
Ciaran Flynn
Having more people in your organisation aware of the power of AI, and of how it can change their day-to-day experience in the workplace, is key to generating use cases. That link between uplifting AI literacy and driving use-case generation, which can then help drive AI strategy, is fundamental to success in this area.
Rhiannon Monahan
I think what’s important to note here is that the AI literacy rules have been in effect since February 2025, so they’re already here and live. What organisations need to remember is that there can be a significant disparity in AI literacy across their organisation and among the groups of employees, board members, and other stakeholders they have. All of these groups need to be assessed to determine their base level of AI literacy, and AI literacy programmes should be developed to suit their needs. AI literacy will be one of the key foundational tools for implementing AI, because it empowers and encourages your employees to use it, and to know how to use it to best advantage.
In this video, Rob Corbet, Partner in our Technology and Innovation Group, shares some of the key insights from the recent Governing AI, Powering Innovation survey, the results of which were shared at a launch event at our offices last month. Rob discusses the opportunities and cost reductions that clients have identified through AI. He also highlights areas where clients have concerns, particularly regarding confidentiality, data privacy, and IT security. Additionally, Rob emphasises the importance of an AI strategy and outlines its different stages.
Video Transcription
Rob Corbet
I’m Rob Corbet, and I’m a partner in the Technology and Innovation Group in Arthur Cox. So what was interesting, and not really surprising, was that most of our clients have identified opportunities around efficiencies and cost reductions in AI, and most are well on their way on that journey. Only around one third of clients would say they are at an early stage of identifying potential use cases within their organisations. Another third of the respondents to our client survey are a little further down the line, in that they have identified and are developing proof-of-concept projects and rolling those out through their organisations. So while there is trepidation, I think, about being too quick out of the blocks in deploying AI into businesses, we are seeing it evolve rapidly because it is very early in the cycle.
I suppose the areas of trepidation that have come through in our survey are around governance and risk management: our clients are concerned about getting their governance right and their risk management right, and in particular, issues are emerging from the survey around confidentiality, data privacy, and IT security. 40% of our clients are partnering with third-party vendors, and building trust with those vendors takes time, so that the AI they are building and bringing into their businesses will fulfil its purposes and be adequately risk-managed.

A mature AI strategy starts with AI literacy across the organisation. One quarter of our client respondents are struggling with some of the concepts in AI; for example, one quarter don’t understand the difference between a deployer and a provider, which is a very important distinction under the EU AI Act. 38% of the respondents to our survey do have a single person charged with being the AI business owner or AI champion. About a third of our clients have an AI strategy in place; another third are working on a strategy, which is in train and likely to be finalised in the short term. The strategy comes from the top down, and an AI governance framework sits beneath the strategy, enabling the business to fulfil the potential of AI in a way that is thoughtful and manages the legal and technical challenges associated with bringing any new technology into the business.
I suppose the final part of a mature strategy is embedding it into the organisation. The areas where we see most attention focused are procurement, and partnering with third-party vendors who build AI solutions for the business; HR, where particular use cases can present risks around discrimination, equality, confidentiality, and privacy; and IT security, which is obviously integral to any AI strategy that is going to fulfil its purpose. And of course, the legal team should be involved, to make sure that the necessary legal obligations and risk management frameworks are in place and that the business is both legally compliant and in line with the company’s own ethical framework around AI.
For more information, please visit arthurcox.com/technologyandinnovation