
Salesforce this week revealed it has added artificial intelligence (AI) capabilities to the MuleSoft Intelligent Document Processing (IDP) module to make it simpler to extract and organize data from documents in multiple formats, including PDF files and images, within a workflow automated using the Salesforce Flow automation framework.

In addition, Salesforce has now integrated its Einstein AI engine into the MuleSoft IDP, Flow Builder and Anypoint Code Builder platforms, enabling prompts that invoke any of the large language models (LLMs) Salesforce makes available via a natural language interface to provide both generative and predictive AI capabilities.

Salesforce has also revamped the integrated development environment (IDE) to add a configuration panel. In addition, Salesforce is providing access to accelerators and templates in Anypoint Exchange, along with integration with external version control systems (VCS) to make managing updates easier. Finally, observability capabilities for APIs are now accessible via the Anypoint Monitoring tool.

The overall goal is to leverage AI to make it simpler to integrate data pulled from documents such as purchase orders and invoices into workflows running on any Salesforce Cloud, the MuleSoft Anypoint Platform for integrating applications, and the MuleSoft Robotic Process Automation (RPA) platform, says Vijay Pandiarajan, vice president of product management for the MuleSoft arm of Salesforce.

In general, generative AI makes it easier for organizations to automate document-based workflows without always having to involve an application developer. Instead, subject matter experts within organizations should be able to automate more workflows on their own, says Pandiarajan. “It lowers the bar,” he says.
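As an illustration of the pattern described above, a document-extraction workflow built on prompts typically boils down to asking an LLM to pull named fields from raw text and return structured output. The sketch below is hypothetical and does not use MuleSoft's actual APIs; the `fake_llm` callable, field names and prompt wording are all assumptions standing in for whatever model endpoint a platform exposes.

```python
import json


def build_extraction_prompt(document_text, fields):
    """Build a natural-language prompt asking an LLM to return named fields as JSON."""
    return (
        "Extract the following fields from the document below "
        "and return them as a JSON object: "
        + ", ".join(fields)
        + "\n\n---\n"
        + document_text
    )


def extract_fields(document_text, fields, llm):
    """Send the extraction prompt to an LLM callable and parse its JSON reply."""
    reply = llm(build_extraction_prompt(document_text, fields))
    return json.loads(reply)


# Stubbed LLM for illustration only -- a real workflow would call the
# platform's model endpoint with the generated prompt.
def fake_llm(prompt):
    return '{"invoice_number": "INV-1042", "total": "1250.00"}'


result = extract_fields(
    "Invoice INV-1042 ... Total due: $1,250.00",
    ["invoice_number", "total"],
    fake_llm,
)
```

The appeal of this approach is that the "logic" lives in the prompt text, which a subject matter expert can adjust without writing application code; only the thin plumbing around the model call requires a developer.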

It’s not clear to what degree AI will help organizations add new workflows or revamp existing ones more quickly, but the number of these initiatives should steadily increase as end users become more adept at using prompts to orchestrate a series of tasks with LLMs, rather than waiting for a no-code or low-code application to be developed. There will still be plenty of instances where a graphical application is required, but there will also be thousands of workflows where a series of prompts sufficiently addresses the requirement.

The challenge is keeping track of these initiatives as multiple end users make use of prompts on their own to automate a task that might have downstream implications for the rest of the business.

Each Digital CxO will need to evaluate the level of automation required for each individual project. In the meantime, there’s already no shortage of appetite for AI experimentation. Arguably, the most important thing to remember about generative AI is that its output is probabilistic: The LLM is providing its best guess at a desired outcome, and it will present that guess in a convincing manner. End users need to be careful not to assume that every output will work as expected at scale.
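One practical consequence of that probabilistic behavior is that any LLM reply feeding a downstream workflow should be treated as untrusted input and validated before use. The following sketch shows one common defensive pattern; the field names and regular-expression patterns are illustrative assumptions, not part of any Salesforce or MuleSoft product.

```python
import json
import re

# Hypothetical schema: each required field maps to a pattern its value must match.
REQUIRED_FIELDS = {
    "invoice_number": r"INV-\d+",
    "total": r"\d+(\.\d{2})?",
}


def validate_extraction(raw_reply):
    """Parse an LLM reply as JSON and verify every required field before
    passing it downstream. Returns the parsed dict, or None on any failure."""
    try:
        data = json.loads(raw_reply)
    except json.JSONDecodeError:
        return None  # the model did not return well-formed JSON
    for field, pattern in REQUIRED_FIELDS.items():
        value = data.get(field)
        if not isinstance(value, str) or not re.fullmatch(pattern, value):
            return None  # reject rather than forward a bad guess
    return data
```

Rejecting malformed output outright (and, for example, retrying the prompt or routing to a human) is usually safer at scale than trying to repair a convincing-looking but wrong answer.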

Otherwise, Digital CxOs will soon find themselves inundated by automation projects that, for one reason or another, all need to be revamped simply because too much faith was placed in an LLM.