In the rapidly shifting landscape of enterprise technology, the gap between a promising AI pilot and a functional, production-ready agent is often filled with the complexities of fragmented data. While Large Language Models have become increasingly sophisticated at reasoning, they remain effectively “homeless” within the corporate environment without secure, governed access to the systems where actual business happens. This interview explores the critical infrastructure required to move past the hype, focusing on the intersection of data connectivity and rigorous administrative oversight. We delve into how organizations can bridge the silos between CRMs and ERPs, the technical risks of building manual connectors in an era of constant API updates, and the importance of establishing a “human-in-the-loop” framework to maintain trust. By examining the three pillars of connectivity, context, and control, we uncover the blueprint for building AI agents that do more than just chat—they execute.
You focus on connectivity, context, and control when linking AI to enterprise systems. How do you balance providing enough semantic intelligence for an LLM to reason over data while maintaining strict governance, and what typically happens when one of these three pillars is neglected?
Achieving this balance requires deep, intelligent connectors that go far beyond simply wrapping an API in a thin layer of code. We have spent more than a decade perfecting the “context” pillar, which ensures the LLM understands the specific nuances of source systems like NetSuite or Salesforce, including custom entities and field relationships. If you provide connectivity without control, you risk a catastrophic event: an agent performing unauthorized deletions or exposing sensitive information to the wrong user. Conversely, neglecting context means the AI may have access to the data but lack the “semantic intelligence” to interpret what a specific column or table actually represents, leading to hallucinations or incorrect reasoning. When these three pillars aren’t aligned, AI initiatives usually stall in the pilot phase, because security teams will eventually block any tool that lacks a centralized, governed framework for data access.
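One way to picture how the three pillars interact is a minimal sketch in Python. The class and field names here (`GovernedConnector`, `FieldContext`, and so on) are illustrative assumptions, not any vendor's actual API; the point is only that connectivity (the connector itself), context (semantic field descriptions), and control (an explicit operation allowlist) live in one governed object rather than being bolted on separately.

```python
from dataclasses import dataclass, field

@dataclass
class FieldContext:
    """Semantic metadata the LLM needs to reason about a field (the 'context' pillar)."""
    name: str
    description: str
    sensitive: bool = False

@dataclass
class GovernedConnector:
    """Connectivity to a source system, gated by explicit permissions (the 'control' pillar)."""
    system: str                                  # e.g. "Salesforce" or "NetSuite"
    allowed_ops: set                             # e.g. {"read", "update"}; note: no "delete"
    schema: dict = field(default_factory=dict)   # table name -> list of FieldContext

    def execute(self, op: str, table: str) -> str:
        # Governance check happens before any call reaches the source system.
        if op not in self.allowed_ops:
            raise PermissionError(f"{op} is not permitted on {self.system}")
        return f"{op} on {self.system}.{table}"

crm = GovernedConnector(
    system="Salesforce",
    allowed_ops={"read", "update"},
    schema={"Opportunity": [FieldContext("Amount", "Expected deal value in USD")]},
)

print(crm.execute("read", "Opportunity"))   # permitted by the allowlist
try:
    crm.execute("delete", "Opportunity")    # blocked: control pillar intervenes
except PermissionError as exc:
    print(exc)
```

Dropping any one pillar from this sketch reproduces the failure modes described above: remove `allowed_ops` and nothing stops a destructive call; remove the `schema` descriptions and the LLM has tables but no meaning to reason over.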
Many organizations face a choice between building custom connectors or using a unified platform. What are the specific technical risks of chasing constant API changes manually, and how does centralized oversight prevent AI initiatives from stalling in the pilot phase or being blocked by security teams?
The decision to build custom connectors often pulls vital engineering resources away from a company’s core business goals, forcing developers into a never-ending cycle of maintenance. APIs from major vendors change constantly, and a manual approach leaves your AI agents fragile; one update to a CRM’s schema can break an entire automated workflow overnight. By utilizing a unified platform with access to over 350 different systems, organizations can avoid the “point-to-point” trap that creates fragmented security holes across the enterprise. Centralized oversight is the “green light” for security leaders, providing them with a single dashboard to manage identity-based permissions and monitor usage across every department. This structure ensures that AI architects can move projects into production with the confidence that their data access is secure, governed, and fully aligned with enterprise policies.
Security leaders often worry about destructive actions or unauthorized data exposure. How do you implement identity-based permissions and SSO to ensure agents only perform approved tasks like updates rather than deletions, and why is a full audit trail essential for maintaining enterprise trust?
We address these security concerns by integrating directly with existing identity providers like Okta or Entra ID, ensuring that AI agents operate under their own authenticated identities within the source systems. This allows administrators to be extremely granular; for instance, you can grant an agent the ability to read, insert, and update records in Dynamics 365 while explicitly disabling the “delete” permission to prevent irreversible data loss. A full audit trail, in which every query, metadata retrieval, and record update is logged, gives IT teams enormous reassurance. Without this level of transparency, an LLM becomes a “black box,” and no CIO will trust an autonomous agent to touch live production data if they can’t trace exactly what happened after the fact. Maintaining trust requires that every action taken by the AI is visible and verifiable, allowing the organization to grow its AI capabilities without sacrificing its safety standards.
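The pattern described above, identity-scoped permissions plus an always-on audit trail, can be sketched in a few lines. The policy shape and function names below are assumptions for illustration, not a real identity provider's API; in practice the identity would come from an SSO token issued by Okta or Entra ID.

```python
import datetime
import json

# Hypothetical policy: which actions each authenticated identity may take
# in each system. "delete" is deliberately absent for the agent.
POLICY = {
    "agent:billing-bot": {
        "Dynamics365": {"read", "insert", "update"},
    }
}

AUDIT_LOG = []

def authorize(identity: str, system: str, action: str) -> bool:
    """Check the policy and log every attempt, allowed or not."""
    allowed = action in POLICY.get(identity, {}).get(system, set())
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "identity": identity,
        "system": system,
        "action": action,
        "allowed": allowed,
    })
    return allowed

assert authorize("agent:billing-bot", "Dynamics365", "update")       # granted
assert not authorize("agent:billing-bot", "Dynamics365", "delete")   # denied

# Every attempt, including the denied delete, is visible after the fact.
print(json.dumps(AUDIT_LOG, indent=2))
```

Note that the denied action is logged just like the granted one; auditing only successful operations would recreate the “black box” problem for anything the agent tried and failed to do.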
In complex order-to-cash workflows, agents must bridge data across CRMs, ERPs, and billing databases. How does an agent verify its own actions across these fragmented silos to ensure accuracy, and what role does human-in-the-loop oversight play in these automated sequences?
In a typical order-to-cash scenario, an agent might start by identifying a Q4 hydraulic pumps order in Salesforce, move to check customer records in Dynamics 365, and finally insert billing data into a SQL Server database housing Zuora information. Unlike older, deterministic workflows that simply fire off a command and hope for the best, a modern agent “shows its work” by verifying the success of each step, such as retrieving the specific record ID after a successful insertion. This transparency is vital because it allows a human-in-the-loop to review the step-by-step logic through interfaces like Microsoft Teams before a final transaction is committed. If the agent encounters an ambiguity—such as two similar customer accounts—it can pause and ask for clarification, ensuring that the automation remains a tool for efficiency rather than a source of errors. This collaborative process transforms fragmented silos into a cohesive, intelligent stream of data that humans can monitor and approve in real-time.
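The verify-each-step, pause-on-ambiguity flow can be made concrete with a short sketch. Everything here is stubbed and the names (`run_order_to_cash`, the `confirm` callback standing in for a Teams approval prompt) are illustrative assumptions, not a real workflow engine; the shape to notice is that each step records its evidence and an ambiguous match halts the run until a human answers.

```python
def run_order_to_cash(order_id: str, confirm) -> dict:
    """Stubbed agent run: each step appends (name, evidence) so the agent
    'shows its work'; `confirm` is a human-in-the-loop callback."""
    steps = []

    # Step 1: look up the order in the CRM (stubbed Salesforce call).
    crm_record_id = f"SFDC-{order_id}"
    steps.append(("lookup_order", crm_record_id))

    # Step 2: match the customer in the ERP. Two similar accounts is an
    # ambiguity, so the agent pauses and asks a human before proceeding.
    candidates = ["Acme Corp", "Acme Corporation"]
    if len(candidates) > 1 and not confirm(f"Multiple matches: {candidates}. Use first?"):
        return {"status": "paused", "steps": steps}
    steps.append(("match_customer", candidates[0]))

    # Step 3: insert billing data, then verify by reading back the new
    # record ID instead of firing and hoping for the best.
    billing_id = f"BILL-{order_id}"
    steps.append(("insert_billing", billing_id))
    assert billing_id.startswith("BILL-"), "verification: read-back failed"

    return {"status": "committed", "steps": steps}

result = run_order_to_cash("Q4-1042", confirm=lambda prompt: True)
print(result["status"])          # committed
for step, evidence in result["steps"]:
    print(step, "->", evidence)  # the reviewable step-by-step trail
```

Passing `confirm=lambda prompt: False` instead returns a `"paused"` status with the partial step trail intact, which is exactly the artifact a reviewer would inspect before approving the run.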
Moving from deterministic workflows to natural-language “vibe querying” allows users to analyze warehouse correlations or order values on the fly. What technical hurdles must be cleared to ensure the LLM correctly interprets field relationships, and how do specialized workspaces help curate these specific data sets?
The primary technical hurdle for “vibe querying” is ensuring the LLM doesn’t just see a list of tables, but actually understands the underlying catalog and how different data points relate to one another across 350+ systems. To solve this, we use specialized Workspaces to organize and curate specific data sets, preventing the “noise” of irrelevant tables from confusing the agent’s reasoning process. For example, when a user asks about the correlation between warehouse locations and order values, the platform provides the semantic context needed for the LLM to identify the correct fields and execute the right investigative tools. This curation allows the AI to perform complex conversation analytics, breaking down percentages and location-based trends without requiring the user to write a single line of SQL. By narrowing the scope of what the LLM can access through these workspaces, we provide a safer, more accurate environment for natural-language data exploration.
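A minimal sketch of that curation idea: a workspace whitelists tables from a larger catalog and exposes only their human-readable field descriptions to the model. The `Workspace` class and catalog contents are hypothetical, invented for illustration rather than taken from any product.

```python
# Full catalog across connected systems; "audit_tmp" is the kind of
# irrelevant table whose noise can derail the agent's reasoning.
FULL_CATALOG = {
    "orders": {
        "order_id":  "unique order key",
        "warehouse": "fulfilling warehouse code",
        "value_usd": "total order value in US dollars",
    },
    "audit_tmp": {"blob": "internal scratch data"},
}

class Workspace:
    """Curated slice of the catalog: only whitelisted tables are visible."""

    def __init__(self, catalog: dict, include: list):
        self.tables = {name: catalog[name] for name in include}

    def semantic_context(self) -> str:
        """Render field descriptions as text an LLM prompt can include."""
        lines = []
        for table, fields in self.tables.items():
            for column, description in fields.items():
                lines.append(f"{table}.{column}: {description}")
        return "\n".join(lines)

ws = Workspace(FULL_CATALOG, include=["orders"])
print(ws.semantic_context())
# A question like "correlate warehouse and order value" can now be
# grounded in orders.warehouse and orders.value_usd, while audit_tmp
# never enters the model's view at all.
```

The narrowing works in both directions mentioned above: it supplies the semantic context the LLM needs to pick the right fields, and it shrinks the attack and error surface to the tables a given team actually uses.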
What is your forecast for enterprise AI agents?
I believe we are entering an era where AI agents will move from being simple digital assistants to becoming the primary interface for all enterprise workflows. The pace of change is so intense that capabilities that don’t exist today will likely be standard features by next week, shifting the focus from “what” the AI can say to “what” the AI can actually do. We will see a massive transition away from rigid, manual integrations toward agentic workflows that can navigate complex, multi-system environments with minimal human intervention. As long as organizations prioritize the foundation of secure connectivity and semantic context, these agents will become the “connective tissue” of the modern corporation, making every employee more productive and every data point more accessible. Ultimately, the winners in this space will be those who stop treating AI as a separate silo and start treating it as a governed extension of their existing data ecosystem.


