
        David Menninger's Analyst Perspectives


        Detecting and Preventing Bias: A Crucial Element of AI Governance


In today's rapidly evolving technological landscape, artificial intelligence (AI) governance has emerged as a critical ingredient for successful AI deployments. It builds trust in the results of AI models, helps ensure compliance with regulations and is necessary to meet internal governance requirements. Effective AI governance must encompass various dimensions, including data privacy, model drift, hallucinations, toxicity and, perhaps most importantly, bias. Unfortunately, we expect that through 2026, model governance will remain a significant concern for more than one-half of enterprises, limiting the deployment, and therefore the realized value, of AI and machine learning (ML) models. Continuing my previous discussions about AI governance, I’ll take a look at bias in this analyst perspective.

Bias can be described as skew in the outputs of AI models (including large language models, or LLMs) for some specific segment of the domain being modeled. Bias is most often associated with gender, ethnicity, sexual orientation, age, disability or another group that could experience discriminatory practices. Bias in models used for hiring, credit decisions, lease applications and school admissions could result in legal issues or damage to an enterprise’s reputation. But bias can affect any group. Perhaps the model is biased with respect to residents who live in the western part of the country compared with other regions. Perhaps the model is biased with respect to high-school-educated adults versus those with higher education. While these latter examples may not result in discrimination that has legal consequences, the inaccuracies in the model can result in suboptimal operations and impact the bottom line.

Typically, bias can arise from two sources: data bias and model bias. Data bias arises from unrepresentative samples and historical biases that are carried forward in the data used for training. For example, in granting credit, many lending institutions have been less inclined to offer credit to minorities. Those real-world biases will be captured in the predictions of the models unless steps are taken to compensate for them. A related issue arises when a training dataset primarily represents one demographic: the model may perform poorly for others. Model bias is the systematic and repeatable error in the predictions. The way in which data is cataloged and labeled can produce model bias. Often data is cataloged and labeled by humans, and the decisions they make may not be fully accurate or may be incomplete, inadvertently altering the resulting model outputs. Model bias can also be the result of the algorithms’ inherent design. Common examples arise in healthcare, recruitment and predictive policing, where predictive algorithms can produce disparate outcomes based on gender, race or ethnic background.
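One common way to compensate for an unrepresentative training sample of the kind described above is reweighting: records from underrepresented groups are given proportionally larger sample weights so a model does not simply learn the majority group's patterns. The sketch below illustrates the idea with synthetic group labels; the group names and counts are assumptions for illustration, not drawn from any particular dataset or platform.

```python
# Minimal sketch of inverse-frequency reweighting for unrepresentative
# training data. Group labels and counts below are synthetic assumptions.
from collections import Counter


def balanced_weights(group_labels):
    """Weight each record inversely to its group's frequency,
    so every group contributes equally in aggregate."""
    counts = Counter(group_labels)
    n, k = len(group_labels), len(counts)
    return [n / (k * counts[g]) for g in group_labels]


# Group "b" is underrepresented (2 of 10 records)
labels = ["a"] * 8 + ["b"] * 2
weights = balanced_weights(labels)
# "a" records each weigh 10/(2*8) = 0.625; "b" records 10/(2*2) = 2.5
print(weights)
```

Weights like these can typically be passed to a model's training routine (many libraries accept a per-record sample weight), letting the minority group carry equal aggregate influence without duplicating data.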

        Enterprises need to take steps to detect and prevent bias. One of the most important steps is to establish and track metrics that measure bias. These metrics should track and compare performance across various demographic groups over time. Predefined metrics will also help maintain accountability. As mentioned in my earlier perspective, at the time of our evaluation a few months ago, only 5 of the 25 software providers evaluated in our AI Platform Buyers Guide provided tools that measure and track bias. Several others provided a generic metric capability, leaving it up to the enterprise to define and track metrics it considers relevant. And still other providers ignored the issue entirely. Additional metrics should be measured and tracked as well, including drift, fairness and model accuracy. Ideally, providers would offer a metrics framework along with predefined metrics that address core governance issues such as bias, drift and fairness. In addition to metrics, enterprises must ensure their training datasets are diverse and representative, employing measures to mitigate historical bias.
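To make the metrics discussion concrete, here is a minimal sketch of one widely used bias metric, the demographic parity gap: the difference in positive-outcome rates between demographic groups. The group names, outcome data and any tolerance you would compare the gap against are illustrative assumptions, not capabilities of any specific provider's platform.

```python
# Sketch of a bias metric: demographic parity gap across groups.
# Group names and outcome data are synthetic, for illustration only.

def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g., 'approved') in a group."""
    return sum(outcomes) / len(outcomes)


def demographic_parity_gap(groups):
    """Return (max rate - min rate, per-group rates) across groups."""
    rates = {name: selection_rate(o) for name, o in groups.items()}
    return max(rates.values()) - min(rates.values()), rates


# 1 = approved, 0 = denied
groups = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

gap, rates = demographic_parity_gap(groups)
print(f"Selection rates: {rates}")
print(f"Parity gap: {gap:.3f}")
```

Tracked over time and per demographic group, a metric like this is exactly the kind of predefined measurement that supports the accountability discussed above.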

Detecting and preventing bias is just one aspect—albeit an important one—of a robust AI governance program. Understanding the sources of bias can help an enterprise design its governance processes to address these issues. In designing those processes, start with an understanding of what capabilities your software provider offers today and what is planned in the near future. Define the metrics necessary to track bias and other key indicators. Report and review those metrics frequently, even if it must be done outside of the AI platform you are using. Where possible, establish notifications when metrics deviate from accepted tolerances.
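The notification step above can be sketched as a simple tolerance check: compare each tracked reading of a metric against an accepted threshold and raise an alert on any deviation. The tolerance value, metric name and monthly readings below are all hypothetical assumptions, stand-ins for whatever your platform or policy defines.

```python
# Sketch of tolerance-based alerting on a tracked governance metric.
# The tolerance and the metric history are assumed, illustrative values.

TOLERANCE = 0.10  # maximum acceptable parity gap (hypothetical policy)


def check_metric(name, history, tolerance):
    """Return an alert message for each reading outside tolerance."""
    return [
        f"{period}: {name}={value:.2f} exceeds {tolerance:.2f}"
        for period, value in history
        if value > tolerance
    ]


# Synthetic monthly parity-gap readings
parity_gap_history = [("2024-01", 0.04), ("2024-02", 0.07), ("2024-03", 0.13)]

for alert in check_metric("parity_gap", parity_gap_history, TOLERANCE):
    print("ALERT:", alert)
```

Even when a platform lacks built-in alerting, a periodic check like this can run outside the platform, consistent with the advice above to review metrics frequently wherever they live.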

        In summary, AI governance is not just a regulatory checkbox; it is an essential framework for building trust and improving outcomes in AI deployments. The path to unbiased AI begins with a commitment to effective governance. By prioritizing the creation of fair, transparent and trustworthy AI systems, enterprises can harness AI's transformative power while minimizing risks.

        Regards,

        David Menninger

        David Menninger
        Executive Director, Technology Research

        David Menninger leads technology software research and advisory for Ventana Research, now part of ISG. Building on over three decades of enterprise software leadership experience, he guides the team responsible for a wide range of technology-focused data and analytics topics, including AI for IT and AI-infused software.
