New Report Finds That Developing Trustworthy Algorithms and Determining What Data Is Used to Train AI Are Among Top Challenges Faced in Eliminating AI Bias
DataRobot, the leader in enterprise AI, released new research revealing that more than two in five (42%) AI professionals across the U.S. and U.K. are “very” to “extremely” concerned about AI bias. The research, based on a survey of more than 350 U.S. and U.K. executives involved in AI and machine learning purchasing decisions, uncovered that “compromised brand reputation” and “loss of customer trust” are the most concerning repercussions of AI bias, prompting 93% of respondents to say they plan to invest more in AI bias prevention initiatives in the next 12 months.
“More organizations are deploying AI as they recognize the technology as a critical success factor for competing in today’s business climate,” said Ted Kwartler, VP of Trusted AI, DataRobot. “Despite this fact, we’ve observed that AI maturity varies widely, with many organizations still using untrustworthy AI systems.”
DataRobot’s research found that while most organizations (71%) currently rely on AI to execute up to 19 business functions, 19% use AI to manage as many as 20-49 functions, and 10% leverage the technology to tackle more than 50 functions. While managing AI-driven functions within an enterprise can be extremely valuable, it can also present challenges. Not all AI is created equal, and without the proper knowledge or resources, companies may select or deploy AI in ways that prove more detrimental than beneficial.
The survey found that more than a third (38%) of AI professionals still use black-box AI systems, meaning they have little to no visibility into how their AI solutions use the data they are fed. This lack of visibility could contribute to respondents’ concerns about AI bias occurring within their organization.
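The survey does not describe any particular tooling, but one common, model-agnostic way to regain the visibility that black-box systems lack is permutation importance: shuffle one input column at a time and measure how much the model's score drops. The function, toy model, and data below are illustrative assumptions, not DataRobot's implementation.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Score drop when each feature column is shuffled in turn:
    a simple way to see how much a model relies on each input."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            drops.append(baseline - metric(y, model(Xp)))
        importances.append(float(np.mean(drops)))
    return importances

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

# Toy setup: the "model" depends only on feature 0
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] > 0).astype(int)
model = lambda X: (X[:, 0] > 0).astype(int)

imp = permutation_importance(model, X, y, accuracy)
# imp[0] is large (feature 0 drives predictions); imp[1] is ~0
```

Audits like this do not open the box itself, but they reveal which data inputs a deployed model actually depends on, which is often what "visibility" means in practice.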
To combat instances of AI bias, 83% of all AI professionals say they have established AI guidelines to ensure AI systems are properly maintained and yielding accurate, trusted outputs. In addition:
- 60% have created alerts that flag when live data and outcomes drift from the training data
- 59% measure AI decision-making factors
- 56% are deploying algorithms to detect and mitigate hidden biases in the training data
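The drift alerting described above can be sketched with a population stability index (PSI) check, a common way to compare a live feature's distribution against its training-time distribution. The function name, the ~0.2 alert threshold, and the synthetic data are illustrative assumptions; the survey does not say which method respondents use.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a live distribution ('actual') against the training
    distribution ('expected'). PSI above ~0.2 is a common
    rule-of-thumb threshold for raising a drift alert."""
    # Bin edges come from the training data
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid log(0) / division by zero
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)         # training-time feature values
live_ok = rng.normal(0.0, 1.0, 10_000)       # live data, same distribution
live_shifted = rng.normal(1.0, 1.0, 10_000)  # live data after drift

psi_ok = population_stability_index(train, live_ok)        # small: no alert
psi_drift = population_stability_index(train, live_shifted)  # large: alert
```

In a monitoring pipeline, a check like this would run per feature (and on model outputs) on each scoring batch, firing an alert whenever the index crosses the chosen threshold.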
The survey also uncovered differences between U.S. and U.K. respondents, potentially driven by regulatory and cultural circumstances in each geography. While U.S. respondents are most concerned with emergent bias (bias resulting from a misalignment between the user and the system design), U.K. respondents are more concerned with technical bias (bias arising from technical limitations).
To enhance AI bias prevention efforts moving forward, 59% of respondents say they plan to invest in more sophisticated white-box systems, 54% state they will hire internal personnel to manage AI trust, and 48% say they intend to enlist third-party vendors to oversee AI trust. Beyond these AI bias prevention measures, 85% of all global respondents believe AI regulation would be helpful for defining what constitutes AI bias and how it should be prevented.
“Our findings indicate that AI bias continues to be a real concern for today’s organizations, and for good reason,” said Colin Priest, VP of AI Strategy, DataRobot. “While many organizations have started to take the right steps to mitigate AI bias, such as moving away from black-box systems and establishing internal AI guidelines, there’s more to be done to win the trust of businesses and consumers. Every business must make AI bias education a priority so they can implement critical strategies within their AI systems that will help prevent it from happening.”