AI governance: From chaos to ROI – Spyrosoft’s case study
You know the challenge: compliance and legal teams have veto power over AI rollouts, but they often lack clear frameworks to assess risk properly. The resulting vetoes and outright bans stem from understandable fears around data security, privacy, and compliance. With the EU AI Act now in force and compliance deadlines approaching, the pressure to get governance right has never been higher. Administrative fines can reach up to €35 million or 7% of global annual turnover, whichever is higher, making this not just a technical challenge, but a business-critical one.
We faced this dilemma firsthand, and we’ve resolved it, both at Spyrosoft and with our clients.
The real bottleneck appears long before a single line of code gets written: it’s in the boardroom, the legal department, and the compliance office. Your engineering team has already figured out how to build it. Your business team knows why you need it. Yet, there’s a veto.
This is the story of how we solved that problem at Spyrosoft. Not by bypassing our compliance and legal teams, but by working with them to build a governance model that works.
Organisational friction
Our teams quickly started experimenting with AI. ChatGPT for copywriting. GitHub Copilot and Cursor for development. The productivity gains were evident, with some developers reporting 30-40% time savings on routine tasks.
At the same time, our compliance and legal teams understandably worried about the regulatory and security implications. When they raised concerns, they weren’t wrong. The questions were legitimate:
- Where does our code go when it’s sent to these tools?
- What happens to client data?
- Who owns the outputs?
- How do we audit AI-generated work?
- What if it produces something that violates a client contract?
The result was a tug-of-war: engineers felt compliance was pumping the brakes on promising AI ideas, and compliance felt inundated by technologies they had no policies for.
Our technical teams had good intentions but incomplete answers. “We’re being careful” isn’t a compliance strategy. “Everyone else is doing it” isn’t a risk assessment.
Many organisations face this challenge, and it’s why AI adoption stalls and ROI from AI evaporates. According to MIT’s State of AI in Business 2025 report, 95% of GenAI pilots fail to deliver measurable returns, not due to lack of technology, but due to lack of governance.
For us, it was a signal that we needed better governance.
From “don’t do it” to “do it responsibly”
We brought together representatives from engineering, compliance, legal, information security, and business leadership. Not to debate whether AI should be used, but to design how it would be used.
The goal was clear: create a framework that protects the business whilst enabling innovation. One that compliance could approve. One that engineers would follow. One that clients would trust.
We started with a simple premise: not all AI use cases carry the same risk. Using ChatGPT to brainstorm project names? Low risk. Using AI to generate production code for a medical device client? High risk.
If we could classify use cases by risk level, we could create proportional governance: strict controls where needed, streamlined approval for low-risk applications.
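To make that premise concrete, here is a minimal sketch in Python of what proportional governance can look like. The tier names and approval steps are hypothetical illustrations, not our actual process; the point is simply that the assessed risk of a use case determines how much process it must pass through.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"        # e.g. brainstorming project names with ChatGPT
    MEDIUM = "medium"  # e.g. drafting internal documentation
    HIGH = "high"      # e.g. production code for a medical device client

# Hypothetical approval paths; the real steps and owners would come from
# your own governance framework.
APPROVAL_PATH = {
    Risk.LOW: ["team-lead sign-off"],
    Risk.MEDIUM: ["team-lead sign-off", "security review"],
    Risk.HIGH: ["team-lead sign-off", "security review", "compliance and legal review"],
}

def required_approvals(risk: Risk) -> list[str]:
    """Return approval steps proportional to the assessed risk of a use case."""
    return APPROVAL_PATH[risk]

print(required_approvals(Risk.LOW))   # low risk: one lightweight step
print(required_approvals(Risk.HIGH))  # high risk: the full review chain
```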
Developing AI governance at Spyrosoft
Now we’re building AI-specific assessment frameworks on top of our existing software evaluation processes.
Rather than starting from scratch, we’re adapting methodologies to address AI’s unique characteristics: breaking down AI tools into specific sub-types, mapping them to concrete use cases, and developing evaluation criteria tailored to different AI applications.
Our AI governance is built on four pillars, covering security and tool control, information classification, AI literacy, and client engagement, with transparency, oversight, and strategy running through all of them.
The framework is being developed with hands-on experimentation and cross-functional input. Rather than copying a one-size-fits-all policy from elsewhere, we’re developing a model tailored to our own needs.

Pillar 1: Security and tool control
First question from compliance: Which AI tools are people actually using?
We still don’t have a good answer, but we’re getting closer.
We conducted an internal audit to understand what was already happening. Engineers using Copilot. Content teams trying ChatGPT. Project managers experimenting with AI assistants.
Then we categorised tools into three tiers:
- Approved for general use: Tools that meet our security standards, have clear data policies, and don’t expose us to unacceptable risk. These require only basic authorisation.
- Approved with restrictions: Tools that can be used for specific purposes with additional safeguards. For example, using a particular AI tool is fine for internal documentation but not for client code.
- Prohibited or requires special approval: Tools that present unacceptable risk or haven’t been assessed yet.
Critically, we also identified when private instances or on-premise solutions were needed. For sensitive client work, especially in regulated industries like healthcare or automotive, we deploy AI tools in controlled environments where we own the infrastructure.
The security team built a process to evaluate new tools as they emerge. Banning everything isn’t a strategy. New AI tools launch every week. If we don’t provide approved options, people will use unapproved ones.
This process has become mandatory under the EU AI Act for organisations deploying AI systems.
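As an illustration of the three-tier model, the sketch below encodes a tool registry as a simple allow-list that defaults to “no” for anything not yet assessed. The tools, tiers, and scopes are hypothetical examples, not our actual registry.

```python
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    name: str
    tier: str                                              # "general", "restricted", or "prohibited"
    allowed_scopes: set[str] = field(default_factory=set)  # only used for "restricted"

# Hypothetical registry entries, for illustration only.
REGISTRY = {
    "copilot": ToolPolicy("GitHub Copilot", "restricted", {"internal-code"}),
    "chatgpt": ToolPolicy("ChatGPT", "restricted", {"internal-docs", "brainstorming"}),
    "new-assistant": ToolPolicy("Unassessed assistant", "prohibited"),
}

def may_use(tool_id: str, scope: str) -> bool:
    """Unknown or prohibited tools are blocked; restricted tools must stay in scope."""
    policy = REGISTRY.get(tool_id)
    if policy is None or policy.tier == "prohibited":
        return False
    return policy.tier == "general" or scope in policy.allowed_scopes

print(may_use("chatgpt", "internal-docs"))  # True: approved for this purpose
print(may_use("chatgpt", "client-code"))    # False: outside the approved scope
```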
Pillar 2: Information classification and data usage rules
At the heart of AI compliance is knowing what data can (and can’t) be used with AI tools.
We updated our information classification policy to explicitly cover AI use cases. All company and client information is categorised by sensitivity:
- Public: Anyone can see this
- Internal: Spyrosoft employees only
- Confidential: Restricted to specific teams
- Client confidential: Belongs to clients, governed by contracts
- Highly sensitive: Personal data, financial information, IP
Non-sensitive data, on the other hand (like open-source code or general market research), can be used freely to leverage AI’s capabilities. These data usage rules are communicated company-wide.
By setting these boundaries, we protect privacy while still allowing AI to crunch the data that it can safely handle. This framework directly addresses requirements under both GDPR and the EU AI Act, ensuring we can demonstrate compliance through clear audit trails and documentation standards.
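One minimal way to enforce such boundaries, assuming each tool environment is assigned a “ceiling” (the most sensitive class it may receive), is sketched below; the environment names and ceilings are hypothetical.

```python
# Sensitivity classes ordered from least to most restricted, mirroring the
# classification tiers described above.
SENSITIVITY = [
    "public", "internal", "confidential", "client-confidential", "highly-sensitive",
]

# Hypothetical ceilings: a private, self-hosted instance may handle more
# sensitive data than a public cloud tool.
TOOL_CEILING = {
    "public-cloud-tool": "internal",
    "private-instance": "confidential",
}

def can_process(data_class: str, environment: str) -> bool:
    """Allow use only if the data class does not exceed the environment's ceiling."""
    ceiling = TOOL_CEILING.get(environment, "public")  # unknown environments: public data only
    return SENSITIVITY.index(data_class) <= SENSITIVITY.index(ceiling)

print(can_process("internal", "public-cloud-tool"))             # True
print(can_process("client-confidential", "public-cloud-tool"))  # False: keep it out of the cloud
```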
Pillar 3: Training that enables adoption
Our compliance team wasn’t blocking AI because they were risk-averse. They were blocking it because they didn’t know how to assess risk properly. Traditional IT security frameworks don’t answer questions like “what if the AI hallucinates in production?”
Similarly, our technical teams didn’t understand why compliance asked certain questions. “Why does it matter where the training data came from?” Because bias, intellectual property, and reliability all trace back to training data.
This pillar involved upskilling both technical teams and compliance personnel. We rolled out training sessions to teach engineers and project managers about the new AI usage policies. We trained the compliance and legal teams on the basics of AI and machine learning. “We gave our compliance folks a crash course in AI,” notes Kamil, one of our experts. “Once they understood how models work and what controls we can put in place, they became confident partners in evaluating AI projects.”
This mutual education built trust. We also created an internal forum for sharing AI success stories and lessons learned from pilot projects. The effect was empowering: employees felt encouraged to try approved AI tools, knowing they had guidance to do so correctly.
This AI literacy programme isn’t just good practice. It’s becoming a legal requirement. The EU AI Act mandates that organisations demonstrate appropriate AI literacy levels across their teams, with role-based training aligned to AI risk levels and permitted actions.
Pillar 4: Working with clients and contracts
Many of our clients operate in highly regulated industries. Healthcare devices. Automotive systems. Financial services platforms. Each has its own stringent compliance requirements, and because we work across multiple industries at once, each with its own standards, our compliance needs to be exceptionally rigorous. We must always be ready to adapt to each client’s internal regulatory requirements.
We realised our internal AI governance needed to extend to how we engage with clients.
We updated our standard contracts to address AI explicitly:
- Do we have permission to use AI tools on this project?
- What types of AI are acceptable? (coding assistants vs. autonomous systems)
- Who owns AI-generated outputs?
- What disclosure is required?
- How do we handle AI-related incidents?
Rather than treating this as “legal boilerplate,” we made it a business conversation.
For some clients, we create bespoke AI policies. A healthcare client might prohibit any cloud-based AI tools touching patient data. An automotive client might require all AI-generated code to pass additional verification. A financial services client might mandate regular AI audits.
We document these requirements in project charters. They’re not hidden in contracts. They’re part of the working agreements that delivery teams reference daily.
This procurement and vendor due diligence process ensures that both we and our clients maintain compliance with evolving AI regulations whilst protecting proprietary systems and data.
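As a hypothetical sketch of what “part of the working agreements” can mean in practice, client AI requirements could be captured as structured charter data that delivery tooling checks requests against; every field name and value below is invented for illustration.

```python
# Hypothetical charter entry for a healthcare client; all fields are illustrative.
charter_ai_policy = {
    "client": "example-healthcare-client",
    "ai_permitted": True,
    "permitted_tool_types": ["coding-assistant"],   # e.g. no autonomous agents
    "prohibited_data": ["patient-data"],            # never sent to AI tools
    "output_ownership": "client",
    "disclosure_required": True,
}

def check_request(policy: dict, tool_type: str, data_tags: set[str]) -> bool:
    """Gate an AI usage request against the project's charter policy."""
    if not policy["ai_permitted"]:
        return False
    if tool_type not in policy["permitted_tool_types"]:
        return False
    return not data_tags & set(policy["prohibited_data"])

print(check_request(charter_ai_policy, "coding-assistant", {"source-code"}))   # True
print(check_request(charter_ai_policy, "coding-assistant", {"patient-data"}))  # False
```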

The AI-shift manifest
At some point, we wrote down how we engage with AI as both a tool and a business opportunity.
The manifest isn’t a governance document. It’s a statement of intent. It articulates five core principles that guide how every person at Spyrosoft approaches AI:
- Using AI effectively is now a fundamental expectation for everyone.
- AI should be an essential part of your prototype delivery phase.
- AI usage will be a part of pathfinder.
- Learning is self-directed, but share what you’ve learned.
- Adopt an AI-first mindset.
This last principle is perhaps the most culturally significant shift.
Before writing a document, drafting code, analysing data, or solving a problem, pause and ask: “Could AI help here?” Not every task needs AI. But every task deserves that consideration. This small habit has compounded into significant productivity gains across the company.
The manifest has done something critical: it’s given everyone in the company permission and expectation to engage with AI seriously. It signals that this isn’t a side project or an IT initiative. It’s company strategy, backed by leadership, resourced appropriately, and measured in performance conversations.

And critically, it aligns leadership support behind the governance framework. When compliance asks for resources to assess a new tool, the answer is “yes, how can we help?” not “is this really necessary?” Because the manifest makes clear: AI adoption is strategic, and doing it responsibly is non-negotiable.
This combination of clear strategic intent through the manifest and practical enablement through governance is what makes AI adoption work. Strategy without governance leads to chaos; governance without strategy leads to paralysis.
Lessons learned
Here’s what we’ve learnt, often the hard way:
- Build a common level of understanding: The breakthrough came when we stopped assuming everyone understood how LLMs work. We built a baseline level of technical literacy across all teams, not turning compliance into engineers, but giving them enough understanding so potential risks became concrete rather than abstract. When compliance grasps how LLMs process data and what technical safeguards are possible (private instances, anonymisation, verification layers), they can have substantive conversations instead of defaulting to “no.”
- Involve compliance early and often: One of the biggest takeaways was the importance of bringing compliance and legal teams into the loop from day one. Early involvement turns would-be blockers into co-designers of the solution, preventing roadblocks at later stages.
- Experimentation is essential: Developing a working governance model required iteration. We learned that you can’t draft a perfect AI policy in one go. By running controlled pilots (and occasionally failing in safe settings), the company discovered what safeguards worked in practice and what needed adjustment. This experimental mindset was crucial for refining policies that enable AI usage.
- No one-size-fits-all solution: There is no universal template for AI governance. Our four pillars fit Spyrosoft’s structure and client services, but every organisation’s risk appetite, industry regulations, and culture differ. Flexibility is key. Any governance framework must be tailored to an organisation’s specific context and updated as that context changes.
- Top-down support with bottom-up engagement: Leadership support was vital to drive the AI initiative forward, but so was grassroots buy-in. We found success by securing executive sponsorship (to legitimise the shift toward AI with responsibility) and simultaneously empowering employees through training and open feedback channels. Both levels working in tandem created a sustainable adoption culture.
- Compliance as an enabler, not a gatekeeper: Perhaps the most profound lesson was a mindset shift. When compliance teams are equipped with knowledge and given a mandate to facilitate safe innovation, they can actively accelerate projects instead of slowing them. We eventually became champions of certain AI initiatives, identifying potential issues early and suggesting solutions.
Client adoption models: One model does not fit every organisation
Here’s something we’ve learnt from helping clients build their own AI governance: there’s no universal playbook.
The framework that works for a 50-person startup won’t work for a 10,000-person enterprise. The approach that fits a fast-moving tech company won’t suit a heavily regulated financial institution.
We’ve seen two particularly instructive examples from clients we’ve worked with, and they represent opposite ends of the spectrum.
Experimentation: Adeo
Adeo is one of the world’s largest home improvement retailers (you might know their Leroy Merlin brand). When they approached us about AI, they had a specific challenge: they wanted to encourage innovation across a massive, distributed organisation without creating chaos.
The approach was developer-led rather than management-mandated.
Rather than rolling AI out top-down, Adeo created controlled environments where teams could experiment with AI tools for their specific needs. They focused first on using AI to improve their own software development process. The frontend team particularly embraced Cursor, experimenting with various implementations, including integration with the company’s CI/CD server.
The key was guardrails: each sandbox had clear boundaries (what data could be used, what tools were permitted, what required additional approval), but within those boundaries, teams had freedom to explore.
The result: Successful pilots were then used to persuade stakeholders and gradually expand AI’s footprint. This bottom-up approach created internal proof of value and allowed policies to coalesce organically around what the teams learned.
We supported Adeo by providing expert guidance during these experiments and helping craft the necessary data safeguards as the projects grew.
When this model works: When you need to build AI literacy across a large organisation, when you’re not sure which use cases will deliver value, or when you want innovation to emerge organically rather than being imposed. Also, when leadership is hesitant towards AI.
Top-down mandate: NVIDIA
NVIDIA took the opposite route: a top-down mandate, with leadership setting a clear expectation that AI would be adopted across the organisation.
The result: Rapid, organisation-wide adoption. AI was embedded across product development, operations, sales, and support. Productivity gains were measurable and significant.
When this model works: When you have strong technical leadership, when competitive pressure demands speed, when you have the resources to do it right, or when AI is core to your business strategy.
What this means for you
Most organisations fall somewhere between these extremes. You might start with experimentation to build confidence, then shift to more structured rollout once you understand what works. Or you might mandate adoption in specific high-value areas whilst allowing experimentation elsewhere.
The key lesson: choose a model that fits your culture, your risk tolerance, and your strategic urgency. Don’t copy someone else’s approach just because it worked for them.

Governance as enablement
AI governance isn’t a constraint on innovation. It’s what makes innovation sustainable.
Without governance, you get chaos: unapproved tools, inconsistent practices, hidden risks, and, eventually, a crisis that forces a reactionary crackdown. With governance, you get clarity: people know what they can do, how to do it safely, and where to get help when they’re unsure.
AI governance demands establishing clear roles and accountability, whether that’s appointing a Head of AI Governance, creating cross-functional AI oversight committees, or assigning compliance liaisons to major AI projects.
Someone needs to own AI governance, not just contribute to it.
The organisations that will thrive with AI aren’t the ones moving fastest. They’re the ones moving deliberately with clear frameworks, engaged leadership, and genuine collaboration between technical and compliance teams.
If you’d like to learn more about how we can help your organisation implement AI in a compliant and impactful way, feel free to contact us to schedule a conversation about our AI governance processes.