CONTRIBUTOR
Cloud and Digital Partner,
PwC

Today, more companies than ever understand that to be successful they need to adopt artificial intelligence (AI). According to PwC's latest AI survey, the companies that realize significant returns on investment (ROI) are not just power users of AI; they are also business leaders who understand how to use data and AI to improve the business.

The companies with the most ROI are those that don't focus on one AI goal at a time. Instead, they advance AI in three areas simultaneously: business transformation, enhanced decision-making, and modernized systems and processes. Thirty-six percent of organizations take this more holistic approach, prioritizing all three goals, and they see more success than businesses that focus on only one.

These AI leaders take a collaborative approach to AI and bring together top performers from across the organization. They gather AI specialists, analytics teams, software engineers, and data scientists to align on goals for AI projects. This approach also helps set projects on track to deliver real value to the business at reasonable cost: 44% of AI leaders are realizing value by increasing productivity with automation, and 40% are using AI to innovate in their products and services.

In the past, AI projects, and data science work in general, were completed in silos across the business. This often led to poor results, either because projects weren't sponsored by business leaders who understood AI or because teams lacked full access to the data they needed.

One of the most critical parts of this holistic approach is that it requires organizations to invest in and manage data, AI, and cloud as a unified whole. Leaders in AI often define new roles, such as chief data officer, focused on making data accessible, increasing data literacy and governance, enabling data science, and engaging the business on transformation powered by data and AI.

Used properly, AI can create a number of benefits for businesses. But even as AI offers enormous potential to improve the way companies operate, it also introduces risk. That's why it's critical for organizations to understand what their AI is doing, and why. For example: Is it making accurate, bias-aware decisions? Is it violating anyone's privacy?

To fully understand AI, organizations must be able to govern and monitor AI technology. However, although companies realize the need to implement AI responsibly, they are at different stages of the journey.

What responsible AI means for businesses

Even as more and more organizations adopt AI and reap its benefits, they must also consider the ethical and moral risks that AI may pose, such as bias, privacy violations, and potential job losses resulting from automation.

Responsible AI is a set of practices and principles that guide the ethical development and use of AI systems. Companies that implement responsible AI processes can assess their AI models for explainability (enabling humans to understand how an algorithm arrived at its decision or recommendation), robustness, bias, fairness, and transparency.
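To make the explainability idea above concrete, here is a minimal sketch. The model, feature names, and weights are invented for illustration, not drawn from the survey or any real scoring system; it simply shows the simplest form of explanation, where each feature's contribution to a linear model's decision can be read off directly.

```python
# Hypothetical linear credit-scoring model; weights and features are
# illustrative assumptions, not a real scoring system.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def explain_decision(weights, applicant):
    """Score an applicant and return each feature's contribution,
    so a human can see how the model arrived at its decision."""
    contributions = {name: weights[name] * value
                     for name, value in applicant.items()}
    score = sum(contributions.values())
    return score, contributions

applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
score, contributions = explain_decision(WEIGHTS, applicant)
print(round(score, 2))        # 1.9
print(contributions["debt"])  # -1.6: debt pulled the score down
```

Real production models are rarely this transparent, which is why organizations turn to dedicated explanation techniques; but the goal is the same: trace a decision back to the inputs that drove it.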

Responsible AI requires both technical and business expertise. For instance, when AI makes decisions based on historical data, which likely contains common biases, it can produce biased decisions that domain experts are often best placed to spot.

At the same time, risk and business experts may not have the technical skills to predict how highly complex algorithms will perform as circumstances change. And because AI continuously evolves its own decision-making as it learns from new data, governance and protection must evolve along with it.

When it's effective, responsible AI governance provides checks and balances, along with escalation protocols, for evaluating and validating AI models. Yet even though nearly every company has responsible AI ambitions, not all of them are planning to put those ambitions into action.

And although holistic AI leaders are doing better, there's still room for improvement. For instance, 57% of leaders plan to confirm their AI complies with regulations in 2022, but only 41% plan to review third-party AI services to ensure they meet standards.

How to implement AI responsibly

Here are several ways companies can implement AI responsibly:

  • Govern the life cycle: To keep up with fast-changing AI models, organizations must implement end-to-end governance of the data/AI/cloud life cycle. To do this, risk, AI, and business leaders all need to be integrated into the new procedures, roles, and responsibilities for each line of defense. However, to employ and enhance existing IT governance and controls, many business and risk leaders may need to learn some of the basics of AI and data science.
  • Assess the impact: To facilitate the work of integrated teams and life-cycle governance, companies should evaluate the end-to-end AI life cycle to capture risk, identify governance needs, increase accountability, and facilitate go/no-go decisions.
  • Minimize bias: Today, many organizations are focusing on the basics of responsible AI to ensure that AI is safe and does what it’s supposed to. However, as AI continues to support more business-critical decision-making, it will be more important than ever for companies to identify and decrease AI bias, so that their AI models treat all their stakeholders fairly.
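As a concrete illustration of the "minimize bias" step above, here is a minimal sketch. The approval data and group names are invented for illustration; the sketch computes two widely used group-fairness measures: the demographic parity difference and the disparate-impact ratio between the best- and worst-treated groups.

```python
# Toy approval outcomes per group (1 = approved); invented data,
# not survey results.
OUTCOMES = {
    "group_a": [1, 1, 0, 1],  # 75% approval rate
    "group_b": [1, 0, 0, 0],  # 25% approval rate
}

def group_rates(outcomes):
    """Positive-outcome rate for each group."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def bias_report(outcomes):
    """Demographic parity difference and disparate-impact ratio."""
    rates = group_rates(outcomes)
    hi, lo = max(rates.values()), min(rates.values())
    return {
        "parity_difference": hi - lo,  # 0 means equal treatment
        "impact_ratio": lo / hi,       # below 0.8 is often flagged
                                       # (the "four-fifths" rule)
    }

report = bias_report(OUTCOMES)
print(report["parity_difference"])  # 0.5
```

Checks like these are only a starting point; a full responsible AI program would pair such metrics with the life-cycle governance and impact assessments described above.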

To lead in AI, organizations should take a more holistic approach to the technology, focusing on achieving three business goals: business transformation, enhanced decision-making, and modernized systems and processes. Companies that do this will have more success than those that take a singular approach. However, it’s also important for companies to evaluate their existing practices or create new ones to build technology and use data responsibly and ethically.