Co-founder and Chief Data Scientist,

Last year, 2022, was a watershed year in AI, as quality-related issues came to the forefront. Regulation marched along in the EU, Asia, and the U.S., and we saw companies like Zillow hobbled by AI model quality snafus.

Based on my recent discussions with dozens of Fortune-500 data science teams, I expect to see a continued spotlight on AI model quality this year. Here are six areas I’m keeping an eye on.

A movement towards more formal testing and monitoring programs for AI models

AI today resembles software development 20 years ago: enterprise software adoption didn't really take off until testing and monitoring became standard practice. AI is at a similar inflection point. AI and machine learning technologies are being adopted at a rapid pace, but quality varies widely. Often, the data scientists who develop the models are also the ones who manually test them, which can lead to blind spots. Testing is manual and slow, monitoring is nascent and ad hoc, and model quality is so variable that it has become a gating factor for the successful adoption of AI. Automated testing and monitoring provide quality assurance and lower uncertainty and risk.
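To make this concrete, here is a minimal sketch of what an automated quality gate might look like, assuming a simple predict(input) -> label interface. All of the names and the threshold here are hypothetical stand-ins, not any particular vendor's API:

```python
# Illustrative sketch of an automated model quality gate (hypothetical names).

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def quality_gate(predict, holdout_inputs, holdout_labels, threshold=0.9):
    """Block promotion to production if holdout accuracy falls below threshold."""
    score = accuracy([predict(x) for x in holdout_inputs], holdout_labels)
    return {"accuracy": score, "passed": score >= threshold}

# A trivial stand-in model: classify a number as "high" if it exceeds 5.
toy_model = lambda x: "high" if x > 5 else "low"
result = quality_gate(toy_model, [1, 4, 6, 9], ["low", "low", "high", "high"])
```

The same check, run on a schedule against fresh production data rather than once before release, is the seed of a monitoring program.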

AI model explainability stays hot

As AI becomes increasingly important in the lives of everyday people, more people want to know exactly how the models work. This is being driven by internal stakeholders who need to trust the models they are using, consumers who are impacted by model decisions, and regulators who want to make sure that consumers are being treated fairly.
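One widely used way to answer "how does this model work?" is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below illustrates the idea with a toy model; all names are hypothetical stand-ins:

```python
# Illustrative sketch of permutation importance (hypothetical names).
import random

def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    """Accuracy drop when the values of one feature are shuffled across rows."""
    rng = random.Random(seed)
    shuffled_col = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled_col)
    perturbed = [list(r) for r in rows]
    for r, v in zip(perturbed, shuffled_col):
        r[feature_idx] = v
    return accuracy(model, rows, labels) - accuracy(model, perturbed, labels)

# Toy model that only looks at feature 0, so feature 1 should score 0.
model = lambda row: row[0] > 0
rows = [(1, 5), (-1, 5), (2, 7), (-2, 7)]
labels = [True, False, True, False]
```

A feature whose shuffling barely moves accuracy is one the model largely ignores, which is exactly the kind of evidence stakeholders and regulators ask for.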

More debate about AI and bias. Is AI a friend or foe to fairness?

During the last two years, concern centered on AI causing bias, due to factors such as flawed training data. In 2023, I think we'll see a growing realization that AI can help eliminate bias by bypassing the historical decision points where bias crept in. People are often more biased than machines, and we're starting to see ways that AI can reduce bias rather than introduce it.

More Zillow-like debacles

Until testing and monitoring are standard practice, enterprises will continue to struggle with quality-related issues like the ones Zillow faced in its home-buying division, where outdated models caused the company to overbuy at inflated prices, ultimately leading to the division's closure, massive losses, and layoffs. I expect more PR disasters this year that could have been avoided with better AI model quality approaches.

New vulnerability in the data science ranks

For the past several years, there has been a severe shortage of data scientists, and companies lucky enough to have them treated them like gold. But as the difficulty of demonstrating ROI on AI efforts persists, and as the economy softens, enterprises are taking a harder line on results. Today, only about 1 in 10 models developed ever moves into production use. Data science teams that can't get models into production at a faster pace will face pressure, and those jobs may not be secure forever.

Formal regulation of AI use in the U.S.

Regulatory agencies in the U.S. have been studying the challenges and impacts of AI but have yet to make a significant move, unlike the European Commission. I expect that to change in 2023, with the U.S. finally drafting its own rules at the federal level, similar to those taking shape in the EU and Asia. Guardrails are good for everyone in this market and will ultimately help build trust in AI. U.S. regulation is not far off, and businesses should get ready. The White House Blueprint for an AI Bill of Rights, released in October 2022, was a step in the right direction, providing a framework for the responsible development and use of AI.

AI remains one of the fastest-moving areas of technology – but in many ways, its promise has yet to be realized. A stronger focus on quality will drive AI adoption, and help companies achieve a return on their growing AI investments.