Nirankush (Kush) Panchbhai, senior vice president of platform fundamentals for ServiceNow, explains why operationalizing artificial intelligence (AI) agents needs to start with a framework for ensuring trust.

Everyone agrees “responsible AI” matters, but few teams can point to a concrete playbook. In this Digital CxO Leadership Insights chat, Panchbhai lays out a nuts-and-bolts approach that starts—not ends—with trust.

His team anchors every feature on four principles: human-centric design, transparency, inclusion and accountability. A human stays in the loop when stakes are high, model cards document version history and training data, and bias checks are baked into the software-development life cycle. By treating governance as code rather than a slide deck, the platform can prove—auditably and visibly—what an algorithm is doing and why.

The centerpiece is an AI Control Tower: a real-time dashboard that tracks both in-house and third-party agents. It surfaces metrics such as latency, hallucination rate and cost so operators can dial autonomy up or down as confidence grows. If an agent underperforms, teams see the cause (stale data, bad prompt, shaky model) instead of guessing.
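The idea of dialing autonomy up or down based on observed metrics can be sketched in a few lines. This is a minimal illustration, not ServiceNow's actual Control Tower API; the class names, thresholds and metric schema are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AgentMetrics:
    """Rolling counters for one agent (hypothetical schema)."""
    calls: int = 0
    total_latency_ms: float = 0.0
    hallucinations: int = 0
    total_cost_usd: float = 0.0

    def record(self, latency_ms: float, hallucinated: bool, cost_usd: float) -> None:
        # One entry per agent invocation, as a dashboard backend might log it.
        self.calls += 1
        self.total_latency_ms += latency_ms
        self.hallucinations += int(hallucinated)
        self.total_cost_usd += cost_usd

    @property
    def hallucination_rate(self) -> float:
        return self.hallucinations / self.calls if self.calls else 0.0

def autonomy_level(m: AgentMetrics, max_rate: float = 0.02) -> str:
    """Dial autonomy down when the hallucination rate crosses a threshold."""
    return "autonomous" if m.hallucination_rate <= max_rate else "human_in_loop"
```

An operator-facing dashboard would surface these same counters per agent, so a degraded metric points to a cause (stale data, bad prompt, shaky model) rather than a guess.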

Model choice is intentionally disposable. Panchbhai expects large language models to change quarterly, so the platform ships with hot-swappable connectors and public model cards that show why a new engine replaced the old one. That way, customers can retire a model tainted by bias or grab a faster release without rewriting workflows.
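The hot-swap pattern he describes amounts to putting a stable alias between workflows and engines. A rough Python sketch, with invented connector and registry names (not ServiceNow's interfaces):

```python
class ModelConnector:
    """Minimal connector interface (hypothetical)."""
    def __init__(self, name: str, version: str, card: str):
        # `card` stands in for a public model card explaining the choice.
        self.name, self.version, self.card = name, version, card

    def complete(self, prompt: str) -> str:
        raise NotImplementedError

class EchoConnector(ModelConnector):
    """Stub engine so the sketch runs without a real model."""
    def complete(self, prompt: str) -> str:
        return f"[{self.name} v{self.version}] {prompt}"

class ModelRegistry:
    """Swap the engine behind an alias; calling workflows never change."""
    def __init__(self):
        self._models: dict[str, ModelConnector] = {}

    def register(self, alias: str, connector: ModelConnector) -> None:
        self._models[alias] = connector

    def complete(self, alias: str, prompt: str) -> str:
        return self._models[alias].complete(prompt)

registry = ModelRegistry()
registry.register("summarizer", EchoConnector("model-a", "1.0", card="baseline"))
# Retire a tainted model: re-register the alias, document why in the card.
registry.register("summarizer",
                  EchoConnector("model-b", "2.0", card="replaced model-a after bias review"))
```

Because callers only know the "summarizer" alias, retiring a biased model or adopting a faster release is a one-line registry change rather than a workflow rewrite.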

Interoperability is equally important. An Agent Fabric understands emerging protocols like MCP and A2A, sets ACLs and logs every cross-agent request for audit. External bots get the same treatment as native ones—full visibility, scoped access and rollback paths—so enterprises can extend trust beyond the walls of a single vendor.
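The "same treatment for external bots" idea boils down to an ACL check plus an audit entry on every cross-agent request. A toy sketch under those assumptions, with hypothetical names and no real MCP or A2A wire protocol:

```python
import time

class AgentFabric:
    """Scoped access and audit logging for cross-agent calls (illustrative only)."""
    def __init__(self):
        self.acl: dict[str, set[str]] = {}  # caller -> agents it may invoke
        self.audit_log: list[dict] = []     # every request, allowed or denied

    def grant(self, caller: str, target: str) -> None:
        self.acl.setdefault(caller, set()).add(target)

    def call(self, caller: str, target: str, payload: str) -> str:
        allowed = target in self.acl.get(caller, set())
        # Log before enforcing, so denied requests are auditable too.
        self.audit_log.append({"ts": time.time(), "caller": caller,
                               "target": target, "allowed": allowed})
        if not allowed:
            raise PermissionError(f"{caller} -> {target} not in ACL")
        return f"{target} handled: {payload}"
```

Native and third-party agents go through the same `call` path, which is what gives operators full visibility and a uniform place to revoke access.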

Panchbhai’s parting advice: start with a business problem big enough to matter, define success metrics up front and let responsible-AI guardrails guide the rollout. Trust grows “in drops,” he reminds viewers; this framework makes sure the bucket never tips over.