David Menninger's Analyst Perspectives

Market Observations on AI Governance

Written by David Menninger | Aug 1, 2024 10:00:00 AM

Having just completed our AI Platforms Buyers Guide assessment of 25 different software providers, I was surprised to see how few provided robust AI governance capabilities. As I’ve written previously, data governance has changed dramatically over the last decade, with nearly twice as many enterprises (71% vs. 38%) implementing data governance policies during that time. With all this attention on data governance, I had expected AI platform software providers would recognize the needs of enterprises and would have incorporated more AI governance capabilities. Good governance efforts can lead to improved business processes, but as we saw with analytics, AI is emerging as a weak link in data governance. As a result, we expect that through 2026, one-third of enterprises will realize that a lack of AI and machine learning (ML) governance has resulted in biased and ethically questionable decisions.

Let’s look at some of the things enterprises should consider as they implement AI governance within their organizations. First of all, AI is heavily dependent on data, so the same data governance and privacy issues that exist in data and analytics platforms also apply to AI platforms. Data access should be restricted, and privacy should be protected with the appropriate access controls, encryption and masking. In addition, the output of the models—particularly generative AI (GenAI) models—may contain data about individuals that could be protected information. Only six of the 25 software providers we evaluated provided adequate data governance.
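To make the masking control concrete, here is a minimal sketch of column-level data masking of the kind described above. The field names and masking rules are illustrative assumptions, not a reference to any particular platform's implementation:

```python
# Illustrative sketch of column-level data masking. Field names
# ("email", "ssn") and the masking rules are hypothetical examples.

def mask_email(email: str) -> str:
    """Hide the local part of an email address, keeping the domain."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain if domain else "***"

def mask_record(record: dict, sensitive_fields=("email", "ssn")) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    masked = dict(record)
    for field in sensitive_fields:
        if field not in masked:
            continue
        if field == "email":
            masked[field] = mask_email(masked[field])
        else:
            masked[field] = "***"
    return masked
```

In practice, a platform would apply rules like these based on the caller's role, so analysts see masked values while authorized users see the originals.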

The process of developing AI models and maintaining them is iterative. Understanding how a model was created and being able to recreate the model is important and, in some cases, may be necessary to comply with regulations. Reproducibility requires versioning and archiving the various artifacts used in the model training process. Most platforms provided only limited capabilities here: while many software providers supported some level of reproducibility, particularly in the data preparation process, only three providers fully met the requirements for end-to-end reproducibility.
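The artifacts that make a training run reproducible can be captured in a simple record. The sketch below, with hypothetical field names of my own choosing, shows the kind of fingerprint a platform might archive alongside each model version: a hash of the training data, the code version, the hyperparameters and the random seed:

```python
# Hypothetical sketch of a training-run fingerprint for reproducibility.
# All field names are illustrative, not from any specific platform.
import hashlib

def fingerprint_run(training_data: bytes, hyperparameters: dict,
                    code_version: str, seed: int) -> dict:
    """Capture the artifacts needed to recreate a training run."""
    return {
        # Hash identifies the exact training data without storing it here.
        "data_sha256": hashlib.sha256(training_data).hexdigest(),
        "hyperparameters": hyperparameters,
        "code_version": code_version,
        "random_seed": seed,
    }
```

Archiving a record like this with each model version means an auditor can later confirm which data, code and settings produced a given model, and an engineer can rerun the training to recreate it.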

Another issue of concern during the model-building process is bias detection. Bias is a measure of the fairness, impartiality and neutrality of a model. Only five software providers had adequate mechanisms for detecting bias in the models they produced. And once a model is produced, enterprises need to monitor drift, or how much the model has deviated in its accuracy over time. The model may have been deemed adequate at the time it was created, but changing market conditions or business operations may cause the accuracy to decline over time. In this area, providers did slightly better, with nine providers fully meeting the requirements.
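Drift monitoring of the kind described above can be as simple as comparing a model's current accuracy against its accuracy when it was deemed adequate. This is a minimal sketch under that assumption; the function names and the fixed tolerance are illustrative, and real platforms use more sophisticated statistical tests:

```python
# Minimal sketch of accuracy-based drift monitoring. Names and the
# tolerance threshold are hypothetical, for illustration only.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def detect_drift(baseline_accuracy, predictions, labels, tolerance=0.05):
    """Flag drift when accuracy has declined more than the tolerance.

    Returns (drifted, current_accuracy).
    """
    current = accuracy(predictions, labels)
    return (baseline_accuracy - current) > tolerance, current
```

Run periodically against freshly labeled production data, a check like this flags when changing market conditions or business operations have eroded a model's accuracy enough to warrant retraining.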

Another area with a parallel to data governance is the concept of a catalog. Data catalogs are indispensable for good data governance, as my colleague Matt Aslett has written. Similarly, model catalogs play a key role in AI governance. Catalogs provide an inventory of what models exist within an enterprise as well as metadata about those models, such as their development or production status and other characteristics. Ideally, the catalog would include an indication of whether a model was certified for production use within the enterprise. However, only five of the platforms evaluated had robust, built-in approval workflows.
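The kind of model catalog described above amounts to an inventory keyed by model and version, carrying metadata such as status and certification. This sketch is a hypothetical illustration of that structure, not any vendor's API:

```python
# Hypothetical sketch of a model catalog: an inventory of models with
# metadata including production status and certification.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    version: str
    status: str = "development"   # e.g., "development" or "production"
    certified: bool = False       # certified for production use?

class ModelCatalog:
    def __init__(self):
        self._models = {}

    def register(self, record: ModelRecord) -> None:
        """Add a model version to the enterprise inventory."""
        self._models[(record.name, record.version)] = record

    def certify(self, name: str, version: str) -> None:
        """Mark a model version as certified for production use."""
        self._models[(name, version)].certified = True

    def certified_models(self):
        """List all model versions certified for production."""
        return [r for r in self._models.values() if r.certified]
```

An approval workflow of the kind only five evaluated platforms provided would gate the `certify` step behind human review rather than letting anyone flip the flag.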

Other governance considerations include documentation of models and cost controls for the creation and usage of models. In addition, there are some governance issues specific to GenAI, such as toxicity, hijacking, hallucinations and IP infringement.

These gaps can be addressed with processes implemented outside of the AI platforms themselves, but creating and maintaining those processes is fraught with challenges. First of all, it requires additional resources. Second, any manual process is prone to errors. It’s not all doom and gloom, though. Our Buyers Guide evaluations only considered generally available capabilities, and several software providers are adding governance capabilities that are currently in preview. In the meantime, enterprises will need to remain vigilant. ISG Research shows more than one-quarter of enterprises report their governance of AI falls short of their expectations. It’s important to understand what features exist and what features are planned, but for the near term, enterprises should expect to invest resources in AI governance if they expect to utilize and trust AI in their business processes.

Regards,

David Menninger