Prefect has added ControlFlow, an open source orchestration framework for managing tasks assigned to artificial intelligence (AI) agents within the context of a larger automated workflow.

Built on the company’s namesake Python framework for orchestrating data workflows, ControlFlow makes it possible to granularly control how data is exposed to large language models (LLMs). “It keeps an eye on the AI agent and what it does,” says Prefect CEO Jeremiah Lowin.
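To illustrate the idea of scoping what a model sees, the sketch below is a minimal, hypothetical example in plain Python (it does not use ControlFlow’s actual API): each task declares an explicit allow-list of workflow data, and only those fields ever reach the LLM.

```python
# Hypothetical sketch of per-task data scoping; `run_llm` is a stand-in
# for a real model call and simply reports what it was allowed to see.
def run_llm(prompt: str, context: dict) -> str:
    return f"processed {sorted(context)}"

class ScopedTask:
    def __init__(self, objective: str, context_keys: list[str]):
        self.objective = objective
        self.context_keys = context_keys  # explicit allow-list of workflow data

    def run(self, workflow_state: dict) -> str:
        # Only the allow-listed keys are exposed to the model,
        # minimizing the data the LLM sees at any point.
        visible = {k: workflow_state[k] for k in self.context_keys}
        return run_llm(self.objective, visible)

state = {"revenue": 1_000, "raw_logs": "...", "customer_names": ["Ada", "Bo"]}
task = ScopedTask("Summarize revenue", context_keys=["revenue"])
summary = task.run(state)  # the model sees `revenue` only, never the raw logs
```

Limiting the context this way is what makes the hallucination-reduction claim below plausible: the model cannot riff on data it was never shown.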

That approach provides the added benefit of reducing hallucinations because the amount of data exposed to an LLM at any point is minimized, he adds.

In the event of an error or suspicious event, ControlFlow leverages the latest 3.0 version of the Prefect framework to roll back any assigned task to its previous state. In addition, Prefect is now also making the events and automation engine at the core of Prefect available under an open source license.
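The rollback behavior can be pictured with a small, hypothetical sketch (plain Python, not Prefect’s actual API): snapshot the workflow state before the agent’s task runs, and restore that snapshot if the task raises an error.

```python
import copy

class RollbackTask:
    """Hypothetical: checkpoint state before an agent task, restore on failure."""
    def __init__(self, agent_fn):
        self.agent_fn = agent_fn

    def run(self, state: dict) -> dict:
        snapshot = copy.deepcopy(state)  # checkpoint taken before the agent acts
        try:
            self.agent_fn(state)
            return state
        except Exception:
            return snapshot  # error or suspicious event: roll back to prior state

def flaky_agent(state: dict) -> None:
    state["summary"] = "partial draft"
    raise ValueError("suspicious output detected")

result = RollbackTask(flaky_agent).run({"records": [1, 2, 3]})
# `result` is the clean pre-task snapshot; the partial draft is discarded
```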

The core Prefect platform was originally created to make it easier for data engineers to manage workflows and is now being extended to include AI agents that have been trained to automate specific tasks. That capability makes it possible to include AI agents not just within batch-oriented and event-driven workflows, but also within embedded or interactive workflows, human-in-the-loop processes and any type of background task that might need to run synchronously or asynchronously.

Organizations of all sizes will soon be invoking multiple AI agents that they have either built themselves or that have been embedded within third-party applications or services. The challenge most of them will encounter is that there is no higher level of abstraction through which they can orchestrate workflows across processes that will soon include multiple AI agents.
With the launch of ControlFlow, Prefect is moving to provide a framework that makes it simpler to orchestrate multimodal data workflows alongside the range of legacy workflows that organizations already use Prefect to manage, says Lowin.

In general, organizations are starting to move beyond trying to understand how and when to invoke LLMs to finding ways to operationalize them within the context of a workflow. The challenge is that LLMs are probabilistic: the output they provide is a best guess, and the same prompt will not necessarily generate the exact same output twice. The workflows they are being incorporated into, however, are deterministic, consisting of tasks that need to be completed the same way every time. Being able to roll back a task assigned to an AI agent that might have generated an erroneous output is therefore crucial, notes Lowin.
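One common way to reconcile a probabilistic model with a deterministic workflow is to wrap the agent call in a deterministic validation check and retry (or ultimately fail) when the output does not pass. The sketch below is a hypothetical illustration in plain Python, not a ControlFlow feature:

```python
def run_with_validation(agent_call, validate, retries=3):
    # The agent call is probabilistic; the validator is deterministic,
    # so the surrounding workflow behaves the same way every time.
    for _ in range(retries):
        output = agent_call()
        if validate(output):
            return output
    raise RuntimeError("agent output failed validation after all retries")

# Simulated agent whose first two attempts produce unusable output.
attempts = iter(["not json", "{broken", '{"status": "ok"}'])
result = run_with_validation(
    lambda: next(attempts),
    lambda text: text.startswith("{") and text.endswith("}"),
)
```

Pairing a check like this with rollback means a bad best guess is either retried or discarded rather than propagated downstream.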

As the overall AI maturity of organizations continues to increase, responsibility for incorporating AI agents into workflows will remain with the teams that are currently responsible for those workflows. The challenge they all face now is determining which tasks can be reliably assigned to an AI agent versus being performed by a human or automated using existing engines. Regardless of approach, the way workflows are automated in the age of AI will never be the same again.