Siemens has begun to deploy artificial intelligence (AI) agents across multiple processes and workflows after taking care to identify where approvals and handoffs involving humans will still be required.

The multinational conglomerate late last year launched its Siemens ONE Tech initiative, through which it is now mapping the “human glue” that various agentic AI workflows and processes will still require, says Andrew Allan, chief of global finance operations and technology at Siemens.

At the core of that ONE Tech initiative is the Siemens Data Cloud, a platform built on Snowflake and integrated with multiple custom and software-as-a-service (SaaS) applications, including those provided by partners such as Salesforce. That approach enables Siemens to apply guardrails to the data being accessed by humans and AI agents alike, using a data manifesto the company has defined, notes Allan.
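The article does not describe how those guardrails are implemented; one common pattern is to route every read, human or agent, through the same policy check. The sketch below illustrates that idea in Python; the role names, dataset labels, and policy table are invented for illustration and are not Siemens' actual schema.

```python
# Hypothetical data guardrail: a single policy check gates every read,
# whether the caller is a person or an AI agent. Grants are illustrative.
POLICY = {
    "finance_analyst": {"payroll", "ledger"},
    "support_agent_ai": {"ledger"},  # the AI agent gets a narrower grant
}

def can_read(caller_role: str, dataset: str) -> bool:
    """Return True only if the caller's role is explicitly granted the dataset."""
    return dataset in POLICY.get(caller_role, set())

def read_dataset(caller_role: str, dataset: str) -> str:
    """Enforce the policy before touching data; deny by default."""
    if not can_read(caller_role, dataset):
        raise PermissionError(f"{caller_role} may not read {dataset}")
    return f"rows from {dataset}"  # stand-in for the real query
```

The key design choice is deny-by-default: an agent with no entry in the policy table gets nothing, rather than inheriting broad access.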

However, because AI agents are often designed to overcome any obstacle to complete a task, the need to limit their access to sensitive data is now more essential than ever, he says.

The overall goal is to provide an orchestration capability that enables Siemens to take a more strategic approach to incorporating AI agents into workflows, rather than allowing business units to deploy them haphazardly in what are often highly regulated environments, adds Allan. “You need to make sure the right boundaries are in place,” he says. “You can’t be willy-nilly about it.”

Ultimately, Siemens, like many organizations, is trying to safely unlock more value from its data in the age of AI. As part of that effort, Siemens is exposing a mix of custom and SaaS applications that it invites customers and partners to subscribe to in order to manage workflows. For agentic AI workflows, Siemens is starting by identifying repeatable processes that lend themselves to automation, notes Allan.

The challenge is that, as organizations start to embed AI agents into various workflows, there is a pressing need to limit what data is being accessed. In the absence of any guardrails, an AI agent is likely to act on that data in unexpected ways that might adversely impact a customer or, just as troubling, create one or more compliance issues. There is also a need to ensure that costs don’t spiral out of control as more AI agents are added to workflows, says Allan.
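On the cost point, one simple control is to give each agent a hard spending cap and halt it before a call would exceed the cap. The sketch below is a minimal illustration of that idea; the class, cap amount, and per-call costs are hypothetical, not anything Siemens has described.

```python
# Illustrative cost guardrail: each agent draws against a fixed budget and
# is halted once a call would push accumulated spend past the cap.
class AgentBudget:
    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, cost_usd: float) -> None:
        """Record spend for one call; refuse if it would exceed the cap."""
        if self.spent_usd + cost_usd > self.cap_usd:
            raise RuntimeError("budget exhausted: agent halted")
        self.spent_usd += cost_usd

budget = AgentBudget(cap_usd=1.00)
budget.charge(0.40)
budget.charge(0.40)
# a third 0.40 charge would exceed the $1.00 cap and raise RuntimeError
```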

In most cases, digital business transformation leaders will need to ensure that AI agents don’t exceed the scope of their mission as they reason across large data sets. Without these constraints, an AI agent can hallucinate information, expose sensitive data, or execute unauthorized actions that could harm the organization and its customers.
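One way to keep an agent within the scope of its mission is an action allowlist: anything the agent tries to do that isn’t on the list is refused before it executes. The sketch below illustrates the pattern; the action names and dispatch function are hypothetical.

```python
# Sketch of a scope constraint: the agent may only invoke actions that
# appear on its mission allowlist; anything else is refused up front.
ALLOWED_ACTIONS = {"lookup_invoice", "draft_reply"}

def dispatch(action: str, payload: dict) -> str:
    """Run an agent-requested action only if it is within mission scope."""
    if action not in ALLOWED_ACTIONS:
        return f"refused: '{action}' is outside this agent's mission scope"
    return f"executed {action}"  # stand-in for the real tool call

print(dispatch("lookup_invoice", {}))  # executed lookup_invoice
print(dispatch("delete_records", {}))  # refused: outside mission scope
```

Because the check sits in the dispatcher rather than in the agent’s prompt, it holds even when the agent “reasons” its way toward an out-of-scope action.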

At this juncture, however, there is no going back. AI agents will be incorporated into innumerable applications to perform tasks at once-unimaginable scale. The issue, of course, is making sure the people who are ultimately responsible for those outcomes clearly understand what is being done at machine speed, and by whom or, more importantly, by what.