Chief Marketing Officers (CMOs) shouldn’t manage AI systems as passive tools; they should treat them as an intelligence layer of workers that need training, guardrails, course correction when they drift from goals, and, occasionally, termination when the relationship isn’t working.
Long-time marketing and go-to-market (GTM) expert Tim McCormick has witnessed firsthand the transformation AI is driving in the traditional marketing and sales funnel. The classic funnel, once composed of separate systems and teams targeting market awareness, demand generation, sales development, and sales execution, is collapsing into something more tightly integrated and much more software‑ and intelligence-driven.
Consider this: according to Gartner’s 2026 CIO and Technology Executive Survey, only 17% of organizations have deployed AI agents to date, yet more than 60% expect to do so within the next two years. And Salesforce’s State of Sales 2026 report finds that 87% of sales organizations currently use some form of AI for tasks like prospecting, forecasting, lead scoring, or drafting emails, and 54% of sellers say they’ve used agents, with nearly 9 in 10 planning to by 2027.
Gartner projects that 95% of seller research workflows will begin with AI by 2027, up from less than 20% in 2024.
How AI Is Transforming Marketing Teams
As leads and prospects hit the funnel, for instance, agents score and re‑score accounts as new intent signals arrive. They assemble dossiers from firmographic data, review sites, and site behavior. As accounts flow deeper into the funnel, these agents monitor deal rooms and call transcripts, tracking intent signals throughout the conversations. Humans are still very much in the loop, but agentic AIs increasingly do the heavy lifting.
In a recent conversation, Mary Shea, co-founder and chief growth officer at Meerkat, Inc., argued that agentic GTM enablement isn’t just training salespeople. It’s training AI teammates: feeding them the ICP, the positioning, and the content library, then asking them to do meaningful work, like drafting proposals, writing blogs, summarizing earnings reports, even taking a first pass at outreach. “The better these agents understand the business, the more valuable they become,” says Shea.
You don’t have to squint to see what’s happening. Tasks that, until recently, were the province of interns, junior marketers, and BDRs are being assigned to non‑human actors. CMOs are giving those agents access to CRM systems, content repositories, and communication channels. And they are asking those agents to operate on behalf of the company with minimal direct supervision.
There are signs of staff reduction. When examining data from 2019 to 2024, venture capital firm SignalFire found a 50% decline in new role starts by people with less than one year of post-graduate work experience, consistent across sales, marketing, engineering, recruiting, operations, design, finance, and legal.
In sales and business development roles, Emergence Capital’s survey of more than 560 business-to-business software companies found that 36% of companies decreased their sales and business development headcount in the prior year. That’s the highest reduction rate among all sales roles. The trend is toward hybrid models in which AI manages lead qualification, initial outreach, and research. Most of the reductions in sales and business development came from not backfilling open roles rather than layoffs, with teams typically seeing a 30% to 50% reduction in related headcount within the 12 to 18 months following the adoption of AI tools.
It’s a new architectural problem that CMOs, CIOs, and CDOs can’t ignore: how to design for digital staff with the same seriousness applied to human staff.
The Need for Governance
In most organizations today, these agents exist as features within SaaS products: a copilot in the CRM, a summarizer in the call recording and transcribing software, and a prospect scoring algorithm in the marketing platform.
However, more autonomous agents are being deployed, and each has its own set of permissions (or lack thereof), its own logging behavior, and its own idea of what “good enough” means. Currently, relatively few CXOs seem to be looking across these functions and asking basic questions: Who does the agent act on behalf of? What systems and data can it touch? And who does it answer to?
As these agents get deployed, answering such governance questions becomes even more essential, and CXOs need to ensure the right levels of governance remain in place:
Audit AI agents in place: Before designing any governance frameworks, CMOs should conduct an agent inventory. Today, most organizations genuinely don’t know how many AI agents are already operating in their marketing stack. A practical first step: require every SaaS vendor to disclose what their AI features can read, write, and send on behalf of users. This is analogous to a shadow IT audit, but for agentic capability. Many vendors bury this in terms of service or release notes. Surfacing it explicitly reveals the governance surface area before controls are designed.
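A vendor-disclosure inventory like this can live as structured data rather than a spreadsheet. The Python sketch below shows one minimal way to record what each agent can read, write, and send; the agent names, vendors, and field names are hypothetical placeholders, not a standard schema.

```python
from dataclasses import dataclass

# Hypothetical inventory record; fields are illustrative, not a standard.
@dataclass
class AgentRecord:
    name: str
    vendor: str
    reads: list          # systems the agent can read
    writes: list         # systems the agent can modify
    sends_as_user: bool  # can it send messages under a human's name?

inventory = [
    AgentRecord("crm-copilot", "ExampleCRM", ["crm"], ["crm"], True),
    AgentRecord("call-summarizer", "ExampleRec", ["transcripts"], [], False),
]

# Surface the riskiest slice first: anything that writes to a system
# of record or sends communications under a person's name.
needs_review = [a.name for a in inventory if a.writes or a.sends_as_user]
print(needs_review)  # ['crm-copilot']
```

Even an inventory this simple immediately surfaces which agents deserve scrutiny first: anything with write access or the ability to act under a human’s identity.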
Design an agentic risk tiering framework: Not all agents warrant the same scrutiny. Tier agents by the consequences of failure. An agent that drafts outbound communications under a rep’s name, or that updates deal stages in the CRM, carries higher identity and data-integrity risk than a call summarizer (although even summaries need checking). An agent with access to pricing logic or contract terms is in a different category altogether. CMOs should establish tiers, such as read-only vs. write access, internal-only vs. customer-facing, and routine vs. highly sensitive or regulated data, before agents are deployed, not after something goes wrong.
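To make the tiering concrete, here is a minimal sketch of how consequence-based tiers might be encoded. The tier numbers and escalation rules are illustrative assumptions, not an established framework.

```python
# Illustrative tiering rule: the tier rises with write access,
# external reach, and exposure to sensitive or regulated data.
def risk_tier(writes: bool, customer_facing: bool, sensitive_data: bool) -> int:
    tier = 1                 # read-only, internal (e.g., a call summarizer)
    if writes:
        tier = 2             # can alter records (e.g., CRM deal stages)
    if customer_facing:
        tier = max(tier, 3)  # speaks or sends under the company's name
    if sensitive_data:
        tier = 4             # touches pricing, contracts, regulated data
    return tier

print(risk_tier(writes=False, customer_facing=False, sensitive_data=False))  # 1
print(risk_tier(writes=True, customer_facing=True, sensitive_data=False))    # 3
```

The point of encoding the rule is that every new agent gets classified the same way, rather than by ad hoc judgment at deployment time.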
Identity, observability, and ownership: Does an agent act as a named human, as a shared service account, or as itself? Which systems does it read? Which systems does it write to? And at what level of granularity can it do those tasks? Where do its prompts and outputs go? Can anyone reconstruct why the agent suggested what it did when something went wrong? When it comes to ownership, “Which business function is accountable for its purpose and behavior, and which technical function is accountable for its integration and resilience, are critical to determine,” McCormick says.
“Examine the high‑value use cases, such as an agent that prepares account briefs before executive calls, and build each one as if you were hiring a real person for that job: define its remit and limit its access,” Mark Coltharp advises.
The “human in the loop” specification: For every agentic workflow, define the review gate: which outputs require human sign-off before leaving the system, and which can fire autonomously. Customer-facing communications, pricing adjustments, and any action that creates a contractual or legal artifact should almost always require a human checkpoint. Prospect scoring and internal summarization probably don’t. Creating and enforcing this as a policy, not just a cultural norm, is what separates governance from hope.
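A review gate only separates governance from hope if it is enforced in software rather than memory. This sketch shows one hypothetical way to encode such a policy, with the action names invented for illustration; note that unrecognized actions default to human review.

```python
# Hypothetical review-gate policy. Action names are illustrative;
# a real policy would enumerate the actions each agent can take.
REQUIRES_SIGNOFF = {"customer_email", "pricing_change", "contract_draft"}
AUTONOMOUS_OK = {"lead_score_update", "internal_summary"}

def review_gate(action: str) -> str:
    if action in REQUIRES_SIGNOFF:
        return "hold_for_human"
    if action in AUTONOMOUS_OK:
        return "auto_release"
    # Default to the safe side: anything not explicitly cleared waits.
    return "hold_for_human"

print(review_gate("customer_email"))      # hold_for_human
print(review_gate("internal_summary"))    # auto_release
print(review_gate("new_unknown_action"))  # hold_for_human
```

The fail-closed default matters: when an agent gains a new capability, it should queue for human review until the policy is updated, not slip through unexamined.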
Vendor contract provisions: CMOs negotiating SaaS renewals should now be asking for specific contractual commitments: data residency for AI inputs and outputs, audit log access, the right to turn off agentic features independently of core product functionality, and clarity on whether vendor AI models are trained on company data. Most procurement teams aren’t asking these questions yet, which means CMOs are accepting significant liability by default.
Naming and accountability are essential: Every agent in production should have a named business owner, such as the CMO or a VP, who is accountable for what the agent does, and a named technical owner, such as engineering or IT, who is accountable for reliability and integration. These should be documented the same way you document data stewardship. If both owners can’t be identified, the agent isn’t ready for production.
Sunset criteria: Agents should have defined performance criteria and explicit sunset conditions. What does this agent need to do to justify its continued operation? What failure mode triggers a review or shutdown? Building those checkpoints into deployment, not retroactively, is good governance.
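Sunset conditions are easiest to enforce when written down as explicit checks. The sketch below uses made-up metric names and threshold values; a real deployment would set them per agent and per risk tier.

```python
# Illustrative sunset check: metric names and thresholds are
# placeholders, not benchmarks from any real deployment.
def should_review(metrics: dict) -> bool:
    return (
        metrics.get("error_rate", 0.0) > 0.05         # too many bad outputs
        or metrics.get("human_edit_rate", 0.0) > 0.5  # humans rewrite most drafts
        or metrics.get("usage_per_week", 0) < 10      # nobody relies on it
    )

# An agent whose drafts are mostly rewritten by humans triggers a review.
print(should_review({"error_rate": 0.02, "human_edit_rate": 0.6,
                     "usage_per_week": 40}))  # True
print(should_review({"error_rate": 0.01, "human_edit_rate": 0.1,
                     "usage_per_week": 40}))  # False
```

Running a check like this on a schedule turns "review or shutdown" from a vague intention into a recurring checkpoint.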
There’s a cultural payoff here as well. GTM transformations often fail because people don’t trust the tools put in front of them, or feel they’re being disintermediated by systems they don’t understand. When agents are black boxes, that mistrust is rational. However, when they’re visible, explainable, and clearly owned, it’s easier to ask teams to collaborate with agents rather than work around them.
The industry likes to anthropomorphize agentic systems when talking about “teammates” and “copilots.” If that language is going to be taken seriously, CXOs and CMOs need to ensure the lifecycle of these agents isn’t an afterthought. Otherwise, organizations aren’t adding teammates; they are unleashing a swarm of unsupervised interns into their most sensitive systems and hoping for the best.

