Data and AI Governance: Independent or interdependent?


10 May 2026 | By Dr. Chandrika Subramaniyan


Artificial Intelligence (AI) is reshaping industries across the world, creating new data‑driven products, services, and ways of working. For organisations to benefit from these advances, AI systems and the data that fuel them must be tightly connected to business strategy. 

This is where Data Governance (DG) and AI Governance (AIG) become essential. As expectations for responsible practice grow in both domains, a natural question emerges: to what extent do these governance frameworks depend on one another?


Relationship between DG and AIG


DG provides the foundations. It ensures that an organisation’s data is accurate, consistent, and secure. It establishes the policies, standards, and processes that determine how data is collected, stored, accessed, and protected. Strong DG also clarifies ownership and accountability, defines roles and responsibilities, and puts controls in place to safeguard sensitive information.

AIG builds on this foundation. It focuses on how machine learning models and other AI systems are designed, deployed, and monitored. Its purpose is to ensure that AI delivers value while remaining fair, transparent, and aligned with regulatory and ethical expectations.

In simple terms, DG protects the quality of the inputs, while AIG protects the integrity of the outputs. When combined, they create the conditions for trustworthy, transparent, and responsible AI. They are not separate disciplines operating in isolation, but complementary components of modern enterprise governance.

Yet governing data for AI is far more complex than traditional data management. AI systems rely on vast, heterogeneous, and often sensitive datasets. They raise new questions about bias, privacy, security, explainability, and performance. As a result, Data Governance for AI requires deeper scrutiny, stronger controls, and more nuanced ethical judgement than ever before.

Organisations that bring Data Governance and AI Governance together are far better equipped to navigate the complexities of the digital era. They are able to extract greater value from their data and AI investments, manage emerging risks with confidence, and position themselves for sustained, long‑term success.

Effective Data Governance is ultimately realised through collaborative practice. Its purpose is to meet the organisation’s need for reliable, data‑driven insights, support the efficient use of technology, and enable broader digital transformation. 

In doing so, DG leaders make a series of strategic decisions:

  • Allocating resources to advance the data strategy and meet business priorities.
  • Defining data roles that clarify who produces, manages, and consumes data.
  • Understanding data lineage to track how information moves and changes across systems.
  • Balancing access and security to enable innovation while maintaining compliance.
  • Assessing performance through maturity models that guide continuous improvement.

Each of these decisions directly shapes the conditions under which AI systems are trained and generate content. AI models increasingly rely on organisational data that is itself governed: data with defined owners, quality standards, and controls. As a result, AI outputs inevitably influence, and are influenced by, core DG deliverables.

However, AI initiatives represent only one dimension of the broader DG landscape. Data Governance must address far more than the needs of AI systems: it spans enterprise‑wide data quality, stewardship, privacy, lifecycle management, regulatory compliance, and operational integrity. 

In this sense, AI Governance sits within the broader remit of Data Governance, not the other way around. DG provides the structural, ethical, and operational foundations upon which responsible AI can be built.


Similarities and differences 


Despite their different scopes, AIG and DG share a common responsibility: guiding the lifecycle of data as both a product and an input that AI systems create, transform, and consume. Both governance programmes assess how data is integrated, its quality, its security, its privacy protections, and its accessibility across the organisation.

At a practical level, both frameworks aim to ensure that information is reliable and fit for purpose. Consider a major retailer discovering that its AI-powered recommendation engine is suggesting irrelevant products to customers. Both DG and AIG would view this as a governance issue requiring attention, because poor recommendations signal a breakdown in either data quality, model performance, or both.

However, the appropriate governance response depends on the underlying cause. The issue may stem from flawed data, a model logic problem, or a combination of the two. Determining the right intervention requires analysing the root of the problem rather than assuming one governance domain is solely responsible.

While DG and AIG often intersect, they approach problems from different angles. Returning to the example of inaccurate product recommendations:

  • A DG team might audit the product data pipeline and discover inconsistent standards, missing attributes, or poor metadata feeding into the AI model.
  • An AIG team, by contrast, might identify weaknesses in the model’s logic, such as how it weights customer preferences or interprets behavioural signals.
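The DG side of this investigation can be pictured as a simple data-quality audit over the product catalogue feeding the model. The sketch below is purely illustrative: the field names (`product_id`, `category`, `price`) and the "lower-case category" convention are assumptions for the example, not a real retailer's schema.

```python
# Minimal sketch of a DG-style audit: flag records with missing required
# attributes and inconsistently formatted category labels.
REQUIRED_FIELDS = {"product_id", "category", "price"}

def audit_products(records):
    """Return a report of missing attributes and inconsistent categories."""
    missing = []
    inconsistent = []
    for rec in records:
        # Fields that are absent, None, or empty count as missing.
        present = {k for k, v in rec.items() if v not in (None, "")}
        absent = REQUIRED_FIELDS - present
        if absent:
            missing.append((rec.get("product_id"), sorted(absent)))
        cat = rec.get("category")
        # Assumed convention: categories should be lower-case slugs.
        if isinstance(cat, str) and cat != cat.strip().lower():
            inconsistent.append((rec.get("product_id"), cat))
    return {"missing_attributes": missing, "inconsistent_categories": inconsistent}

products = [
    {"product_id": "P1", "category": "kitchen", "price": 19.99},
    {"product_id": "P2", "category": "Home & Garden", "price": 4.50},
    {"product_id": "P3", "category": "kitchen", "price": None},
]
report = audit_products(products)
```

An audit like this surfaces exactly the issues the DG team would then route back to the data pipeline owners, while the AIG team works on the model in parallel.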

In this scenario, it is beneficial to adopt a collaborative approach. DG can resolve the data quality issues, while AIG refines the model’s mechanics. Together, these efforts produce more accurate and valuable recommendations, and form the foundation for trustworthy, transparent, and responsible AI. Their interdependence becomes clear when we consider the following dimensions:

  • Foundation for AI models: Good AI starts with good data. If the data is accurate, consistent, and well managed, AI models can learn and perform reliably. If the data is flawed, even the best AIG cannot fix the problem.
  • Transparency and explainability: For AI systems to be transparent, we need to understand how they make decisions. This depends on knowing where the data came from, how it was changed, and who used it. These are all core responsibilities of DG and are essential for explaining AI behaviour.
  • Risk management: Both DG and AIG help organisations manage risk. DG protects against issues like data breaches or poor‑quality information. AIG focuses on risks such as biased decisions, unpredictable model behaviour, or hallucinations. Together, they create a stronger, more complete risk management approach.
  • Compliance: Organisations must meet a growing mix of data and AI regulations. Privacy laws now sit alongside AI‑specific rules, and compliance is far more manageable when governance frameworks bring data and AI oversight together rather than treating them separately.
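The lineage responsibility noted under transparency and explainability can be illustrated with a minimal sketch. The class and method names below (`LineageEvent`, `LineageLog`, `record`, `history`) are invented for the example, not a standard API; real lineage tooling is considerably richer.

```python
# Minimal sketch of data lineage tracking: record where data came from,
# how it was changed, and which systems consumed it, so that AI behaviour
# can later be explained.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LineageEvent:
    dataset: str   # which dataset was touched
    action: str    # e.g. "ingested", "transformed", "consumed"
    actor: str     # system or role responsible
    detail: str = ""  # what changed, kept for later explanation

@dataclass
class LineageLog:
    events: List[LineageEvent] = field(default_factory=list)

    def record(self, dataset, action, actor, detail=""):
        self.events.append(LineageEvent(dataset, action, actor, detail))

    def history(self, dataset):
        """Answer the explainability question: what happened to this data?"""
        return [e for e in self.events if e.dataset == dataset]

log = LineageLog()
log.record("customer_orders", "ingested", "etl-pipeline", "loaded from sales feed")
log.record("customer_orders", "transformed", "etl-pipeline", "deduplicated rows")
log.record("customer_orders", "consumed", "recommendation-model", "training run")
```

When a model decision is questioned, `history("customer_orders")` gives the DG-maintained trail that AIG needs to explain how the inputs were produced.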

By understanding how DG and AIG connect, organisations can unlock several practical benefits:

  • Better decision making: When data is trustworthy and AI models are reliable, organisations can make clearer, faster, and more confident decisions that support innovation and competitiveness.
  • Stronger stakeholder trust: Showing that the organisation takes both DG and AIG seriously builds confidence among customers, staff, and regulators.
  • Greater operational efficiency: When governance is integrated, processes become smoother, duplication is reduced, and resources are used more effectively.
  • Simpler compliance: A unified approach makes it easier to keep up with changing data protection and AI regulations, rather than managing them separately.
  • Improved risk management: AI decisions depend on understanding where data comes from and how it changes. A streamlined governance approach helps track data lineage, ensuring transparency and reducing the risk of using unverified or biased information in AI systems.



(The writer is a solicitor and community mediator. Drawing on her knowledge and skills in various areas, she has trained and taught law, leadership, IT, and community management in TAFE institutes and universities in Sri Lanka, Australia, and India. She is currently a Director of the Western Sydney Local Health District Board and SydWest Multicultural Services, and is involved with Riverlink and Participate Australia. She is also an Advisory Member of the Justice Department of NSW, the Cumberland Council, and many other organisations, as well as a Fellow of the Asian Institute of Alternative Dispute Resolution)


(The views and opinions expressed in this article are those of the writer and do not necessarily reflect the official position of this publication)


