Case study: Data Governance in the AI Era
This case study is designed for data management professionals, data governance specialists, and organisational leaders navigating the complexities of artificial intelligence (AI) implementation. It provides strategic insight into why data governance is the fundamental prerequisite for building ethical and trustworthy AI systems. Whether you are a practitioner looking to align your framework with new regulations or a senior leader seeking to ensure AI project viability, this study offers expert guidance based on the evolving landscape of 2026.
Introduction
The rapid adoption of artificial intelligence has transformed organisational data requirements. While the ‘shiny box’ of AI promises unprecedented efficiency, traditional data management approaches often fail to account for the specific demands of machine learning models.
This case study explores the vital intersection of data governance and AI, originally presented in the DAMA UK webinar "The Crucial Role of Data Governance in the AI Era" by Caroline Bishop, a Global Data Protection Officer with over 30 years of compliance expertise. As enterprises move toward automated decision-making, the data professional's role has shifted from a quiet back-office function to a frontline strategic necessity.
The Governance-First AI Strategy
The core finding of this study is that AI cannot exist in a functional or practical sense without high-quality, governed data. Key insights include:
* Data is the fuel for AI. Without accurate, complete, and unbiased information, AI models cannot produce reliable predictions or actionable insights.
* Industry research, including reports from Gartner, suggests that up to 85% of AI projects are destined for failure because the underlying data has not been properly governed.
* A New Strategic Balance: Successful AI integration requires a shift in investment, moving toward a model that prioritises people and process (70%) and data technology (20%) over the algorithms themselves (10%).
The 85% Failure Rate
The promise of AI is often met with significant operational hurdles. Current industry analysis suggests that 85% of AI projects will fail to reach production or achieve their intended success. This failure rate is largely attributed to a lack of foundational data housekeeping and a misunderstanding of the complexities inherent in automated systems.
The effectiveness of any AI algorithm is entirely dependent on the quality of the data used to train it, a concept often referred to as ‘fueling the machine’. Poor data quality leads to amplified errors, resulting in incorrect business decisions, supply chain inefficiencies (such as overstocking), and degraded customer experiences. AI models must be trained on data that is relevant to their specific use case. For example, training a gambling-sector AI on raw university data will prevent the system from recognising meaningful patterns in its actual operational environment. Accurate, unbiased data is therefore a prerequisite for the reliable predictions that boards expect.
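As an illustration of this ‘fuel’ check, a pre-training quality gate can screen data before it ever reaches a model. The sketch below is a minimal Python illustration, not taken from the webinar; the field names and the 5% completeness threshold are hypothetical assumptions.

```python
# Minimal pre-training data-quality gate (illustrative only).
# The required fields and the 5% threshold are hypothetical assumptions.

REQUIRED_FIELDS = {"customer_id", "transaction_date", "amount"}
MAX_MISSING_RATE = 0.05  # tolerate at most 5% incomplete records

def quality_report(records):
    """Return completeness metrics for a list of dict records."""
    incomplete = sum(
        1 for r in records
        if not REQUIRED_FIELDS.issubset(k for k, v in r.items() if v is not None)
    )
    missing_rate = incomplete / len(records) if records else 1.0
    return {
        "records": len(records),
        "incomplete": incomplete,
        "missing_rate": missing_rate,
        "fit_for_training": missing_rate <= MAX_MISSING_RATE,
    }

records = [
    {"customer_id": 1, "transaction_date": "2026-01-05", "amount": 40.0},
    {"customer_id": 2, "transaction_date": None, "amount": 12.5},  # incomplete
]
report = quality_report(records)
print(report["fit_for_training"])  # a 50% missing rate fails the 5% gate
```

In practice such a gate would also cover duplicate detection, range checks, and representativeness tests; the point is simply that the check runs before training, not after a model has amplified the errors.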
The Black Box Problem
As AI systems become more complex, they often become opaque, creating a significant challenge for transparency and accountability. The Black Box refers to the difficulty humans have in understanding exactly how an AI system arrived at a specific output or decision; in some cases, not even the system itself retains the logic used to reach a result.
This has regulatory consequences:
* Under frameworks like GDPR, organisations must be able to explain automated decisions to individuals, such as why a mortgage was denied or a betting account was flagged. If the logic remains hidden within the Black Box, the organisation faces significant legal and reputational risks.
* To mitigate this, professionals must build audit trails into their AI systems from day one to ensure that even as the model evolves, its decision-making process remains traceable and justifiable.
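One way to build such an audit trail from day one is to wrap every model call so that the inputs, model version, and decision are recorded alongside the output. The sketch below is a hedged illustration only; the toy model, the `v1.0` version label, and the feature names are assumptions, and a real system would write to an append-only store rather than an in-memory list.

```python
import json
import time
import uuid

AUDIT_LOG = []  # stand-in for an append-only audit store

def predict_with_audit(model_fn, features, model_version="v1.0"):
    """Run a prediction and record a traceable audit entry.

    model_fn, model_version, and the feature names are illustrative.
    """
    decision = model_fn(features)
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": features,
        "decision": decision,
    }
    AUDIT_LOG.append(json.dumps(entry))  # serialise so the record is a fixed snapshot
    return decision

# Toy "model": approve only if income covers 3x the repayment.
approve = lambda f: f["income"] >= 3 * f["repayment"]
print(predict_with_audit(approve, {"income": 3000, "repayment": 1200}))  # False
```

Because every entry carries the model version, the trail stays traceable even as the model evolves, which is what an Article-22-style challenge ultimately requires.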
Regulatory and Ethical Anchors
The implementation of AI is no longer a purely technical endeavour but a deeply regulated one. Data governance professionals must now navigate a complex body of legislation, with the EU AI Act and GDPR serving as the primary pillars of compliance. These frameworks move beyond simple data protection, mandating a ‘governance by design’ approach where transparency and data quality are embedded into the lifecycle of every AI model.
A critical focal point for data practitioners is Article 10 of the EU AI Act, which specifically addresses the management of training, validation, and testing datasets. For systems categorised as high-risk, such as those used in recruitment, credit scoring, or law enforcement, the Act mandates that datasets must be relevant, representative, and, to the best extent possible, free of errors. Failure to meet these standards can result in penalties of up to 7% of global annual turnover, a figure that surpasses even the maximum fines under GDPR. This reinforces the role of the data governance professional as a frontline defender of organisational value.
Operationalising Governance: Law, Ethics, and Frameworks
While the EU AI Act focuses on system safety, GDPR remains the primary authority on the personal data flowing through AI models. Data Protection Officers (DPOs) and governance teams must collaborate to ensure automated decisions remain transparent and contestable.
GDPR Compliance: Organisations must document data lineage and original purpose to avoid unauthorised use of legacy customer data in new AI models.
Article 22 Rights: Individuals retain the right to challenge solely automated decisions, such as loan denials, requiring firms to explain how a Black Box reached its conclusion.
AI systems are susceptible to learning patterns that result in skewed or entirely fabricated outputs. For example, unrepresentative training data creates discriminatory outcomes (biases), such as an algorithm incorrectly concluding that all librarians must wear glasses. Sometimes systems may produce confident but false information (hallucinations), like suggesting a backpack is as effective as a parachute, which can compromise business integrity if not audited. Addressing these requires proactive human-in-the-loop oversight to ensure reality-grounding.
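Human-in-the-loop oversight is often operationalised as a confidence gate: high-confidence outputs flow through automatically, while uncertain ones are escalated to a reviewer before they can affect a customer. A minimal sketch, where the threshold value is a hypothetical assumption rather than a figure from this study:

```python
# Confidence gate for human-in-the-loop review (illustrative sketch).
CONFIDENCE_FLOOR = 0.85  # hypothetical threshold; tune per use case and risk level

def route_decision(label, confidence):
    """Auto-apply high-confidence outputs; escalate the rest to a human reviewer."""
    if confidence >= CONFIDENCE_FLOOR:
        return ("auto", label)
    return ("human_review", label)

print(route_decision("loan_denied", 0.60))  # ('human_review', 'loan_denied')
print(route_decision("loan_approved", 0.95))  # ('auto', 'loan_approved')
```

A gate like this does not detect hallucinations by itself, but it guarantees that low-confidence or high-impact outputs receive the reality-grounding review the paragraph above calls for.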
To translate high-level legal requirements into a red-amber-green (RAG) status view of risk for the board, organisations should adopt established global standards:
- NIST AI RMF provides a detailed, globally applicable guide for identifying and mitigating risks across the entire AI lifecycle.
- ISO 42001 offers a certifiable standard for AI management systems, aligning AI goals with existing ISO 27001 security frameworks.
Case Study: The 10/20/70 Strategy
In her career overseeing data protection and governance across heavily regulated sectors, Caroline Bishop identified a recurring pattern: organisations were consistently convinced by the technical allure of AI while ignoring the operational foundations required to make it work.
This observation led to the adoption of the 10/20/70 model, a strategic framework designed to rebalance how businesses approach AI transformation:
* The 10% (Algorithms): While the code is essential, it is often the most straightforward part of the process. Caroline warns against over-focusing here at the expense of governance.
* The 20% (Technology and Data): This is where data must be cleaned, structured into machine-readable formats, and verified for accuracy before it is fed into an AI model.
* The 70% (People and Process): This is the most critical and frequently underfunded area. It involves training staff to identify hallucinations, establishing human-in-the-loop oversight, and redefining roles like AI Data Guardians to ensure long-term accountability.
By following the 10/20/70 approach, the practitioner moves governance from an afterthought to a core feature.
Key takeaways for professionals:
1. Invest in Literacy. The 70% investment must include educating the C-suite to move away from Terminator-style sentient AI fantasies toward a realistic understanding of AI as a tool that requires constant human calibration.
2. Enforce the model with a PO (Purchase Order) Gateway. By working with finance to pause funding for AI projects until they pass a Governance and Security review, the data professional ensures the 20% and 70% are addressed before the 10% is purchased.
3. Commit to continuous testing. AI systems are not static. The 70% effort must include stress testing and monitoring for data drift to ensure the model doesn't become biased or inaccurate over time.
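Monitoring for data drift, as the final takeaway recommends, is commonly done with a statistic such as the Population Stability Index (PSI), which compares the bucketed distribution of a feature at training time against production. The sketch below assumes pre-bucketed shares; the 0.2 alert threshold is a common industry rule of thumb, not a figure from this study.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two bucketed share lists.

    Common rule of thumb: PSI > 0.2 signals significant drift.
    """
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # bucket shares at training time
live     = [0.10, 0.20, 0.30, 0.40]  # hypothetical production shares

drift = psi(baseline, live)
print(drift > 0.2)  # True: roughly 0.23, above the 0.2 drift threshold
```

Scheduling a check like this against each model feature turns "continuous testing" from a principle into a concrete alert that can feed the board's RAG-status risk view.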
----------------------------
The integration of AI into data governance is a complex but achievable endeavour when anchored by professional standards and cross-functional collaboration. This case study underscores that the AI era is, in reality, the era of the data professional. By focusing on high-quality data and ethical transparency, organisations can not only avoid becoming part of the 85% failure statistic but also realise the full transformative potential of modern data management practices.
By aligning efforts across cross-functional teams, data practitioners enable organisations to build faster, more intelligent, and more reliable AI solutions.