The evolution of enterprise technology has brought us to a fascinating crossroads where decades of institutional memory meet the rapid-fire processing of autonomous systems. Joel Raper, Chief Commercial Officer at Unisys, stands at the center of this transformation, drawing on a career that spans from the early days of service desk operations to leading global commercial strategies. With a deep background in managing mission-critical systems—including the legacy mainframes that still power the world’s financial and energy infrastructures—he offers a pragmatic perspective on how organizations can bridge the gap between “experimental AI” and “operational reality.”
In this discussion, we explore the shift from traditional document-based knowledge management to the creation of agentic systems and digital twins. We delve into the mechanics of data cleanup, the preservation of “tribal knowledge” within legacy environments, and the critical governance frameworks required to manage autonomous agents.
Traditional knowledge management often involves thousands of outdated or duplicate documents. How can organizations use graph databases and large language models to re-evaluate this data, and what specific steps are required to transform these records into a consolidated, high-quality knowledge base?
The first stage of this transformation is admitting that we cannot clean this data manually; we recently encountered a client with over 20,000 legacy knowledge documents, many of which were redundant or conflicting. To tackle this, we ingested the entire corpus into a graph database to map the actual correlations between specific support tickets and the articles used to solve them. By applying large language models to this map, we can identify which articles are truly effective and use generative tools to rewrite them into a single, “golden” version. This process moves us away from having 300 people manually creating articles to a system where AI identifies the most complete information. It’s about creating a consolidated foundation that is actually useful for the next phase of automation rather than letting knowledge sit in a digital filing cabinet.
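To make the ticket-to-article mapping concrete, here is a minimal sketch in Python using the networkx library. The resolution records and article IDs are hypothetical stand-ins for an ITSM export; the LLM-assisted rewrite into a “golden” article is deliberately left out.

```python
import networkx as nx

# Hypothetical sample data: which knowledge article closed which ticket.
resolutions = [
    ("T-1001", "KB-17"), ("T-1002", "KB-17"), ("T-1003", "KB-42"),
    ("T-1004", "KB-17"), ("T-1005", "KB-42"), ("T-1006", "KB-99"),
]

# Build a bipartite graph: tickets on one side, knowledge articles on the other.
graph = nx.Graph()
for ticket, article in resolutions:
    graph.add_node(ticket, kind="ticket")
    graph.add_node(article, kind="article")
    graph.add_edge(ticket, article)

# Rank articles by how many tickets they actually resolved.
usage = {
    node: graph.degree(node)
    for node, data in graph.nodes(data=True)
    if data["kind"] == "article"
}
for article, count in sorted(usage.items(), key=lambda kv: -kv[1]):
    print(f"{article}: resolved {count} tickets")

# Low-usage or overlapping articles become candidates for an LLM-assisted
# merge into a single "golden" version in the next stage of the pipeline.
```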
Transitioning from human-centric documentation to “compute consumption” requires a shift in how information is structured. Could you explain the process of creating scripts for automated problem-solving and how developing digital twins of repetitive tasks allows for the effective deployment of autonomous agents?
Writing for human eyes is fundamentally different from writing for “compute consumption,” where the goal is to trigger an automated action. When we look at knowledge now, we aren’t just writing a paragraph of instructions; we are building scripts that solve the issue with at least 98% confidence. This is where we see the evolution of the “digital twin” of a process, where we document every interaction point of a task—like a specific financial transaction or a maintenance routine—and turn that into a guidebook for an agent. By creating these digital twins of repetitive tasks, we can deploy mini-agents that follow specific rule sets to execute work cheaper and more effectively than the robotic process automation of five years ago. This shift ensures that the knowledge isn’t just a reference for a person, but an executable instruction set for an autonomous system.
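A hedged sketch of the “executable instruction set” idea follows: a mini-agent that only runs a scripted fix when its confidence clears the threshold mentioned above, and otherwise escalates to a human. The script and function names are illustrative, not a real Unisys API.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class TaskScript:
    name: str
    confidence: float          # how reliably this script resolves the issue
    action: Callable[[], str]  # the automated remediation step


def password_reset_script() -> str:
    # Stand-in for the actual remediation logic.
    return "password reset completed"


def run_mini_agent(script: TaskScript, threshold: float = 0.98) -> str:
    """Execute the script only if it meets the confidence bar; otherwise escalate."""
    if script.confidence >= threshold:
        return script.action()
    return f"escalated '{script.name}' to a human engineer"


twin = TaskScript("password reset", confidence=0.99, action=password_reset_script)
print(run_mini_agent(twin))
```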
Many mission-critical applications still run on legacy languages like COBOL within mainframe environments. What are the practical strategies for using AI to interface with these secure systems without a full rewrite, and how does this help capture tribal knowledge from retiring experts?
At Unisys, we deal with systems that have been running for decades in sectors like energy and finance, and many of these are written in COBOL, a language that fewer and fewer people are learning. Rather than engaging in the risky and expensive process of a full rewrite, we use AI to build modern interfaces that can communicate with these ultra-secure mainframe environments. This allows us to capture the “tribal knowledge” of engineers who have maintained these systems for 30 years by digitizing their daily maintenance cycles and decision-making processes. By creating this layer of knowledge management around the legacy code, we ensure the business logic is preserved even as the original experts retire. It turns what was once a “black box” of legacy code into an accessible data asset that AI can help manage and query.
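One way to picture that interface layer is the sketch below: a thin wrapper that exposes a legacy transaction behind a plain function and records the operator’s reasoning so it can be queried later. The mainframe call itself is a stand-in; a real deployment would go through whatever secure gateway the environment already provides.

```python
import json
from datetime import datetime, timezone


def call_legacy_transaction(transaction_code: str, payload: dict) -> dict:
    # Stand-in for the actual mainframe call made through an existing secure gateway.
    return {"transaction": transaction_code, "status": "OK", "echo": payload}


def run_and_capture(transaction_code: str, payload: dict, rationale: str,
                    log_path: str = "tribal_knowledge.jsonl") -> dict:
    """Run a legacy transaction and append the operator's reasoning to a log."""
    result = call_legacy_transaction(transaction_code, payload)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "transaction": transaction_code,
        "rationale": rationale,   # the retiring engineer's "why", not just the "what"
        "result_status": result["status"],
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return result


run_and_capture("ACCT-BAL", {"account": "12345"},
                rationale="Balance check always runs before the nightly settlement job.")
```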
User descriptions of technical issues rarely match formal documentation tags. How do real-time voice translation and natural language processing bridge this gap, and what impact does this have on the accuracy of support agents during a live customer interaction?
There is often a massive disconnect between how an expert writes a solution and how an end-user describes a problem, which is why traditional search often fails. We use real-time voice translation and natural language processing to listen to how a user describes their issue in plain English—or any other language—and then map those specific terms to the technical tags in our knowledge base. This stage is crucial because it captures the nuance of the user’s experience, allowing the system to surface the right article for a first-level agent who might not have the deep expertise of a senior engineer. By bridging this linguistic gap in real-time, we dramatically increase the speed of resolution and ensure that the agent has the exact context they need without having to play a game of “technical translation” with the customer.
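A simplified illustration of mapping plain-language descriptions to technical tags is shown below. A production system would use embeddings or a trained classifier rather than a hand-built synonym table; the phrases and tag names here are hypothetical.

```python
# Everyday phrasing mapped to the technical tags used in the knowledge base.
SYNONYM_TAGS = {
    "frozen": "application_hang",
    "spinning": "application_hang",
    "can't log in": "authentication_failure",
    "password not working": "authentication_failure",
    "no internet": "network_connectivity",
    "wifi down": "network_connectivity",
}


def tags_for_utterance(utterance: str) -> set[str]:
    """Return the technical tags whose everyday phrasing appears in the utterance."""
    lowered = utterance.lower()
    return {tag for phrase, tag in SYNONYM_TAGS.items() if phrase in lowered}


print(tags_for_utterance("My screen is frozen and the wifi down again"))
# {'application_hang', 'network_connectivity'}
```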
Implementing agentic systems involves significant security risks regarding data sovereignty and unauthorized actions. How can organizations utilize “mini-agents” with restricted rule sets to ensure safety, and what governance frameworks are necessary to prevent agents from creating incorrect or harmful workarounds?
Security in the AI era requires us to move beyond the “Matrix” or “Terminator” fear of a single, all-powerful AI and instead focus on “mini-agents” with very narrow, restricted rule sets. For example, you might create an agent whose only permission is to look at a local travel policy; it literally has no way to access financial data or cross-border information. This role-based access control is essential for maintaining data sovereignty, ensuring that information doesn’t move across jurisdictions where it shouldn’t. We have to apply the same old-school principles of authentication and authorization to these agents that we do to humans. By keeping the agents specialized and confined within the boundaries of a sovereign AI or an enterprise firewall, we prevent them from inventing “creative” but harmful workarounds to problems.
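In the spirit of the travel-policy example above, here is a minimal sketch of role-based access control for mini-agents. The agent and resource names are hypothetical; the point is that an agent can only perform actions in its explicit permission set.

```python
# Each mini-agent gets a narrow, explicit permission set and nothing else.
AGENT_PERMISSIONS = {
    "travel_policy_agent": {"read:local_travel_policy"},
    "payroll_agent": {"read:payroll", "write:payroll"},
}


def authorize(agent: str, action: str) -> bool:
    """Allow an action only if it appears in the agent's permission set."""
    return action in AGENT_PERMISSIONS.get(agent, set())


# The travel agent can read its own policy but cannot touch financial data.
print(authorize("travel_policy_agent", "read:local_travel_policy"))  # True
print(authorize("travel_policy_agent", "read:payroll"))              # False
```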
Moving from the experimental phase to full operational status often stalls due to disorganized data. What does a rapid assessment of an organization’s knowledge state look like in practice, and which metrics best demonstrate a clear return on investment for these AI initiatives?
To break out of “proof-of-concept hell,” we utilize a rapid value assessment that can evaluate an organization’s entire ITSM data landscape in about a week. We look at how relevant and usable the current data is, identifying exactly where the gaps are between human descriptions and technical documentation. The metrics for ROI focus on the “speed to resolution” and the reduction in “manual intervention” for repetitive tasks. When you can show that an AI-driven knowledge base allows a first-level agent to solve a complex problem that previously required an expensive third-level engineer, the return on investment becomes undeniable. It’s about moving from general experimentation to a targeted operational strategy where the AI is fed high-quality, internal data that provides a distinct competitive advantage.
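The two ROI metrics mentioned above can be computed directly from ticket data, as in the rough sketch below. The sample numbers are hypothetical.

```python
from statistics import mean

# Hypothetical ticket records from an ITSM export.
tickets = [
    {"hours_to_resolve": 2.0, "escalated": False},
    {"hours_to_resolve": 6.5, "escalated": True},
    {"hours_to_resolve": 1.2, "escalated": False},
    {"hours_to_resolve": 0.8, "escalated": False},
]

# Speed to resolution and the share of tickets still needing manual escalation.
mttr = mean(t["hours_to_resolve"] for t in tickets)
escalation_rate = sum(t["escalated"] for t in tickets) / len(tickets)

print(f"Mean time to resolution: {mttr:.1f} hours")
print(f"Escalation rate: {escalation_rate:.0%}")
```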
What is your forecast for knowledge management?
I believe we are moving toward a world where knowledge management is no longer a static library, but a living, breathing “agentic economy” where information is instantly actionable. We will see a shift away from massive, generalized models toward “Sovereign AI” where companies maintain their own highly specialized internal knowledge bases that stay behind their firewalls for maximum security. Within the next few years, the role of the “knowledge worker” will evolve into a “knowledge curator,” where humans oversee the accuracy of the digital twins and mini-agents that perform the bulk of the execution. Ultimately, the organizations that successfully digitize their internal processes and tribal knowledge today will be the ones that dominate the automated landscape of tomorrow.


