
AI is moving at the speed of sound and is on the cusp of delivering more exciting business outcomes than ever before. Yet, while the AI software market alone is set to be worth nearly $63 billion this year, one issue continues to plague the field and could ultimately prevent the industry from reaching its full potential: human-centric ethics.

The promise of AI for driving business success is undeniable. The problem is that these gains often come at significant ethical cost, or at the very least push ethical concerns to the periphery. Of course, AI was designed in part to drive greater business efficiency by building on the capabilities of human operators and end users. However, as time has gone on and the technology has matured, AI has increasingly been deployed in ways that seek to replace humans and their inputs, not empower them.

Human-centered technology integrates the human sciences with computer science to design computing systems with a human focus. Researchers studying human-centric technology design broadly investigate how the relationships between people, groups, societies and technology can inform the design of computing systems that support human activities and enrich people's lives. This is where ethical concerns in AI begin to take hold. Without human-centricity at its core, AI runs an increasing risk of running afoul of ethical concerns, including a heightened risk of bias, the spread of fake news and the circumvention of regulatory oversight, and of straying from the path of human benefit. These ethical issues are not just a cause for concern in theory: we have already seen notable walkbacks by companies such as Facebook over ethical concerns surrounding their technology.

With that in mind, if the industry has any chance of reaching its potential, the time has come for AI companies and practitioners to address these ethical issues head-on. But first, that means understanding what underpins them in the first place.

Here are a few of the most pressing ones.

Shift from Performance-Centric to User-Centric Design

Ethical concerns, unfortunately, often begin even before engineers sit down to build AI tools. Because organizations place a heightened value on boosting business performance, AI engineers are accustomed to building technologies that prioritize three things above all else: speed, resilience and agility. This approach, in and of itself, is a major problem.

Of course, this approach works well in terms of driving business success, but when it comes to ethics…not so much. When AI is created with these business-centric concerns in mind, the result is technology that is not equipped to appropriately assess and act on the complexities and messiness of the real world. To correct this, engineers need to begin building technology that revolves around humans, instead of requiring humans to revolve around the technology.

Better Assessment of User-Centric Impacts

Just because the overarching shift to user-centricity has occurred doesn't mean that human-centric AI tools will simply start to appear. Instead, this strategic shift needs to be underpinned by processes that can guide developers toward ethical AI development. For example, AI developers should start with a clear purpose in mind, and then subject ideas to rigorous vetting around whether the technology will actually help a person, a community, society or the planet, as well as accomplish business goals. From there, organizations need tools that give developers clear insight into how their AI is behaving, so that potential ethical problem areas are weeded out during development rather than once products are in the field.

This can help ensure that ethics are ingrained into a tool's DNA and imbue it with a human perspective, so that tools are not spewing out answers that reinforce biases or suggest poor ethical decisions just for the sake of timeliness.

Correct the Blurred Line Between Automated and Autonomous

As AI development has sped up, a worrying dilemma has begun to take shape: Where exactly is the line between automation and autonomy? Moreover, where should that line be, and how do we maintain it?

Automation is an incredibly powerful tool for boosting efficiency and enabling real-time intelligence. But what happens when the reins are slackened to the point that unfettered autonomy takes over? There are, of course, numerous cases for autonomy when it comes to executing mundane processes. However, when machine autonomy is used to execute more sophisticated, nuanced and sensitive tasks, significant dangers begin to arise. Chief among them is that, given the unpredictability of machines and how quickly AI moves, autonomous AI can act in ways that raise significant ethical questions well before human operators have any idea what is unfolding.

Take the backlash against the Chinese app Weibo, for example. Weibo has come under fire several times over ethical concerns about how the app analyzes user chat data and autonomously makes suggestions without any human oversight whatsoever. And while some instances of autonomous technology stepping out of line may have relatively benign consequences, what happens if an AI begins to suggest harmful content or promote fake content around sensitive subjects like healthcare? Developers therefore need to clearly define where the line between autonomy and automation sits, and use it as a key benchmark and consideration before making any decisions about AI.

From alleviating supply chain bottlenecks to expediting pharmaceutical research, the potential of AI is truly incredible. However, if we do not take the proper ethical steps now, and instead continue to compromise public trust, the AI industry will not be able to deliver on the tremendous promise it has shown to date.