Prakash Kota serves as the Chief Information Officer at UKG, where he spearheads enterprise technology strategy and digital transformation for a global workforce. With over two decades of experience, including a significant tenure at Autodesk, he has become a recognized leader in navigating the complex intersection of data strategy and employee experience. His approach focuses on moving beyond theoretical AI implementation toward practical, large-scale adoption that prioritizes human-centric trust and operational discipline.
In this discussion, we explore the systemic barriers to AI adoption and the strategies required to transition from isolated experimentation to a fully AI-native organization. The conversation covers the importance of “low-risk” entry points for technology, the necessity of embedding tools directly into existing workflows, and the governance frameworks that allow for organic innovation without compromising security.
Only about 5% of firms worldwide achieve AI value at scale despite heavy investment. What specific trust barriers prevent broader adoption, and how can leadership move beyond isolated experimentation to create a culture where technology feels like it is working for the employees?
The primary friction point isn’t a lack of technical capability, but rather a fundamental lack of psychological safety and clarity regarding intent. Employees often view AI with a sense of trepidation, asking whether the technology is designed to replace them and who exactly controls the data they put into it. Currently, only 38% of frontline workers use AI in their daily roles, which highlights a massive gap between executive enthusiasm and on-the-ground reality. To move beyond this, leadership must demonstrate through tangible actions that AI is a supportive partner rather than a replacement tool. By creating “safe sandboxes” where people can experiment without fear of breaking critical systems or exposing sensitive data, you shift the narrative from technology “happening to” them to technology “working for” them.
Deploying a brand-focused AI agent to thousands of workers can align a global workforce quickly. How do you select a “low-risk” entry point like brand voice, and what metrics should be tracked in the first 60 days to prove its immediate utility?
Selecting an entry point requires finding a common denominator that is highly relevant across all departments but carries minimal operational risk if a mistake occurs. We chose a brand-focused agent because every employee, from sales to customer service, needs to communicate with a consistent voice, yet brand guidance is often buried in static manuals. In our first 60 days of deploying the UKG Brand Communicator, we tracked very specific utility metrics: we saw approximately 7,300 employee sessions and 13,000 AI-assisted rewrites. Most importantly, we measured the 1,500 hours saved, which were redirected back into higher-value work and customer service. This immediate, measurable return on time proves the tool’s value to the individual user right away, turning a one-time experiment into a daily habit.
Innovation labs often fail to drive sustained adoption because tools are not embedded in daily tasks. What are the practical steps for integrating AI into existing workflows, and how can organizations ensure these tools become the path of least resistance for frontline staff?
The “lab” mentality often creates a physical or digital distance between the tool and the task, which is the death of adoption. If an employee has to leave their primary workspace to open a separate sidebar or sandbox, they simply won’t do it over the long term. Practical integration means the agent must be the easiest possible way to complete a task—it must be the “path of least resistance.” We achieved this by ensuring our agents were structured and guided, removing the need for employees to learn complex prompt engineering. When the tool is baked directly into the drafting of an email or an internal memo, it ceases to be an “extra step” and instead becomes the standard, most efficient way to work.
When 80% of a workforce adopts AI, thousands of agents are often built by the employees themselves. How do you manage this volume of organic innovation without creating chaos, and what role do functional champions play in shepherding these ideas into production?
Scaling to a point where you have over 11,500 agents built by employees, as we have, requires a centralized “Idea-to-Implementation” (I-2-I) framework. This framework acts as an internal hub where employees can collaborate and submit their ideas, which prevents duplication of effort and ensures that successful tools can be surfaced for the entire company. Functional champions are essential in this ecosystem; they act as the bridge between raw creativity and operational reality. These champions help shepherd ideas from a basic concept into a production-ready tool, ensuring that the momentum of organic innovation is channeled into structured outcomes that generate over 155,000 supported actions every month.
A portfolio approach often categorizes AI initiatives into ROI-driven scale, new growth capabilities, and 90-day exploration pilots. How do you balance these competing priorities, and what specific criteria determine whether an experimental pilot should be expanded, pivoted, or archived?
We run our AI strategy like a venture capital portfolio to ensure we are balancing immediate returns with long-term survival. The “Scale” tier focuses on prioritized use cases with very clear ROI and defined outcomes, while the “Exploration” tier consists of time-boxed, 90-day pilots. The criteria for expansion are strictly evidence-based: does the pilot show tangible velocity and user engagement within that 90-day window? We hold “AI Demo Days” where these pilots are put to the test; if a tool doesn’t prove it can simplify a workflow or save significant time, we archive it immediately. This discipline allows us to fail fast on ideas that don’t land while doubling down on those that contribute to the 24,000 hours we save every month.
Governance is frequently seen as a bottleneck that slows down tech adoption. How can security and privacy checkpoints be baked directly into the workflow, and what strategies work best to redirect teams toward validated enterprise tools without damaging the trust of the workforce?
The key to governance is making sure it is “baked in, not bolted on.” We utilize a standardized risk checklist and security checkpoints that are part of the initial creation process in our AI Hub. Instead of being a “no” department, our governance teams route employees toward validated enterprise tools that we have already vetted for privacy and legal compliance. When we find teams using unvalidated point solutions, we restrict access not as a punishment, but as a protective measure for the collective trust of the organization. By providing a clear, safe path to innovation through our internal hub, we ensure that security is a facilitator of speed rather than a barrier to it.
What is your forecast for AI adoption in the workplace?
I believe we are moving toward a reality where AI adoption will no longer be a top-down mandate but a bottom-up “pull” effect. As organizations successfully build trust through transparency and tangible time-savings, AI will stop being viewed as a separate technology and will simply become the invisible infrastructure of how work gets done. Within the next few years, the distinction between “AI tasks” and “normal tasks” will vanish; we will see an AI-native workforce where every employee is an architect of their own efficiency. The firms that win will be those that realize trust is the only currency that matters in this transition.


