Why Is Analytics Capability More Important Than IT Tools?

Jan 14, 2026
Interview

As a data protection expert specializing in privacy and governance, Vernon Yai has a unique vantage point on how organizations manage their most critical information. An established thought leader, he focuses on risk management and the innovative techniques needed to safeguard sensitive data. Today, we’re exploring a profound shift he’s observed: how true analytics capability, far more than just sophisticated AI tools, has become the defining factor for high-performing IT operations. We’ll delve into why many organizations drown in data despite their investments, how leadership behavior is the ultimate catalyst for change, and what it truly means to build an organization that learns from its failures instead of just reacting to them.

Many organizations possess advanced AIOps tools yet still struggle with alert fatigue and turning data into action. Beyond technology, what specific organizational systems distinguish an analytically capable IT function from one that is merely tool-rich? Please provide an example of this in practice.

That’s the central paradox we saw come into sharp focus last year. The issue was never a lack of data; it was the absence of a system to absorb it. An analytically capable organization builds the human and procedural wiring that connects insight to action. This wiring consists of clear governance, defined decision rights, and an operating model that has accountability baked in. It’s the difference between having a fire alarm and having a fire department. For instance, a tool-rich team sees a thousand alerts, gets overwhelmed, and prioritizes based on who’s shouting the loudest. You can feel the panic and confusion in their war rooms. In contrast, an analytically capable team has pre-defined thresholds that link specific alert patterns to a named decision-maker, who is empowered to act based on a clear standard of evidence. The intelligence from the tool isn’t just noise; it’s a trigger for a well-rehearsed, coordinated response.
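To make that “wiring” concrete, here is a minimal Python sketch of pre-wired decision rights: each alert pattern maps to a named owner and a standard of evidence agreed before any incident occurs. The alert patterns, owner names, and thresholds are all hypothetical, invented for illustration.

```python
from dataclasses import dataclass

# Sketch of pre-wired decision rights: every alert pattern is mapped,
# ahead of time, to a named decision-maker and a required standard of
# evidence. All patterns, owners, and thresholds here are hypothetical.

@dataclass
class DecisionRight:
    owner: str               # the named individual empowered to act
    evidence_standard: str   # what must be true before they act
    max_response_minutes: int

DECISION_RIGHTS = {
    "db_latency_spike":   DecisionRight("dba.on-call",   "p95 latency > 2x 7-day baseline", 15),
    "auth_error_burst":   DecisionRight("identity.lead", "error rate > 5% over 10 min",     10),
    "disk_usage_warning": DecisionRight("infra.on-call", "usage > 85% and still growing",   60),
}

def route_alert(pattern: str) -> DecisionRight:
    """Return the pre-agreed owner for an alert pattern, or fail loudly.

    An unmapped pattern is itself a governance gap: the fix is to
    assign an owner in advance, not to guess during the incident.
    """
    try:
        return DECISION_RIGHTS[pattern]
    except KeyError:
        raise LookupError(f"No decision owner defined for alert pattern '{pattern}'")

if __name__ == "__main__":
    right = route_alert("db_latency_spike")
    print(f"Page {right.owner}: act if {right.evidence_standard} "
          f"(decide within {right.max_response_minutes} min)")
```

The design point is the lookup that fails loudly: an alert with no named owner is treated as a governance defect in its own right, not something to improvise around on a conference bridge.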

High-performing teams are shifting from monitoring for visibility to enabling decisions, asking which choices their data should change. How can a CIO formalize decision ownership and evidence standards for critical choices like incident triage or capacity investment? What does this governance layer actually look like?

This is the most profound and, frankly, most difficult shift for many leaders to make. It moves analytics from a passive reporting function to an active operational discipline. To formalize it, a CIO must stop asking, “What does the dashboard show?” and start asking, “Who is the owner of the decision this data informs?” This governance layer isn’t some dusty binder on a shelf. It’s a living framework. For incident triage, it might look like a simple matrix: if performance degradation exceeds X% for Y minutes and impacts Z customers, the Tier 3 on-call lead is mandated to escalate to the application owner. The evidence standard is the dashboard itself—no need for anecdotal confirmation. For a capacity investment, the governance is more rigorous: a proposal must include historical usage data, predictive models showing a 95% confidence of resource exhaustion within six months, and a cost-benefit analysis. It anchors every significant choice to a named individual and a specific body of evidence, eliminating ambiguity when the pressure is on.
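A hedged sketch of that framework in code, using placeholder values for the X/Y/Z triage thresholds and the 95% confidence bar mentioned above; the field and function names are illustrative, not a real system’s API.

```python
from dataclasses import dataclass

@dataclass
class IncidentSignal:
    degradation_pct: float    # X: observed performance degradation
    duration_minutes: float   # Y: how long the degradation has persisted
    customers_impacted: int   # Z: customers affected

# Hypothetical stand-ins for the X/Y/Z values in the triage matrix above.
TRIAGE_RULE = {"degradation_pct": 20.0, "duration_minutes": 10.0, "customers_impacted": 500}

def must_escalate(signal: IncidentSignal) -> bool:
    """Mandated escalation from the Tier 3 on-call lead to the application
    owner once all three thresholds in the triage matrix are breached."""
    return (signal.degradation_pct >= TRIAGE_RULE["degradation_pct"]
            and signal.duration_minutes >= TRIAGE_RULE["duration_minutes"]
            and signal.customers_impacted >= TRIAGE_RULE["customers_impacted"])

@dataclass
class CapacityProposal:
    has_historical_usage: bool
    exhaustion_confidence: float  # model confidence of exhaustion within six months
    has_cost_benefit: bool

def meets_evidence_standard(p: CapacityProposal) -> bool:
    """The more rigorous standard for capacity investment: historical data,
    a predictive model at >= 95% confidence, and a cost-benefit analysis."""
    return (p.has_historical_usage
            and p.exhaustion_confidence >= 0.95
            and p.has_cost_benefit)

if __name__ == "__main__":
    sig = IncidentSignal(degradation_pct=25.0, duration_minutes=12.0, customers_impacted=800)
    print("Escalate to application owner:", must_escalate(sig))  # True
```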

AI has often amplified existing operational weaknesses rather than fixing them, leading to ignored insights or misplaced confidence. What are the key warning signs that an organization’s analytics capability is not yet mature enough to effectively operationalize AI-driven systems?

The most glaring warning sign is a binary response to AI-driven insights: they are either blindly trusted or completely ignored. There is no middle ground of healthy skepticism and critical validation. When you see teams either over-relying on an AI recommendation without understanding its basis or, conversely, dismissing an alert because “the system has been wrong before,” you’re not seeing a technology problem; you’re seeing a capability gap. It’s a clear signal that the organization lacks the analytical literacy to engage with the AI as a partner. Other signs include a lot of finger-pointing when an automated action goes wrong, with debates about whether the model or the human was at fault. This indicates a profound lack of clarity around data ownership and decision accountability. AI doesn’t create these problems, but like a powerful magnifying glass, it makes them impossible to ignore.

For analytics to become a leadership concern, executives must move beyond reviewing dashboards. What specific, regular behaviors should a CIO model to embed an evidence-based culture, particularly during high-stakes incident reviews or when prioritizing technical debt? Please give a step-by-step example.

A CIO’s behavior in these moments is the most powerful tool they have. They must shift from being a sponsor of technology to a steward of analytical discipline. Imagine a high-stakes incident review. First, the CIO sets the tone, stating the goal is not blame but systemic learning. Second, when an engineer explains a decision, the CIO gently but firmly asks, “What operational data did you have at that moment to support that choice?” This isn’t an accusation; it’s a reinforcement of expectation. Third, if someone offers an anecdote or a gut feeling, the CIO redirects the conversation by saying, “I understand the pressure you were under. Let’s find the data that can help us make that call more confidently next time.” Finally, and most critically, the CIO must protect the team’s time to act on the findings, ensuring that the insights from the data lead to real changes in process or architecture. This consistent, repeated behavior is what transforms culture far more than any new platform.

Top organizations use analytics for systemic learning, not just reactive firefighting. Can you describe the process and a few key metrics an IT team could use to build this “organizational memory” and begin designing recurring failures out of their environment?

This is the leap from being good at recovering to becoming great at being resilient. The process begins by looking beyond individual incidents. Instead of just doing a post-mortem on one outage, a mature team aggregates incident data over a quarter or a year. They stop looking for a single root cause and start hunting for patterns: recurring failures in a specific service, architectural bottlenecks that appear under certain loads, or process debt that slows down every response. To build this organizational memory, they might track metrics like the Mean Time Between Recurring Incidents for a critical application, or the percentage of engineering time consumed by unplanned work versus proactive improvements. Another powerful one is tracking the lifecycle of “process debt”—how long known procedural flaws go unaddressed. These metrics shift the focus from the heroism of firefighting to the foresight of fire prevention, using analytics to systematically design failure out of the environment.
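As an illustration, the two metrics named above could be computed from a flat incident log roughly as follows; the sample data and field names are invented for the sketch, and a real team would pull these records from its ticketing system.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident log; in practice this comes from the ticketing system.
incidents = [
    {"service": "checkout-api", "opened": datetime(2025, 1, 5),  "recurring": True},
    {"service": "checkout-api", "opened": datetime(2025, 2, 14), "recurring": True},
    {"service": "checkout-api", "opened": datetime(2025, 4, 2),  "recurring": True},
]

def mean_time_between_recurring(incidents, service):
    """Mean Time Between Recurring Incidents (in days) for one service."""
    dates = sorted(i["opened"] for i in incidents
                   if i["service"] == service and i["recurring"])
    gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
    return mean(gaps) if gaps else None

def unplanned_work_ratio(unplanned_hours, total_hours):
    """Share of engineering time consumed by unplanned (reactive) work."""
    return unplanned_hours / total_hours

print(mean_time_between_recurring(incidents, "checkout-api"))   # 43.5 days
print(f"{unplanned_work_ratio(320, 1000):.0%} of time is unplanned")  # 32%
```

Trended quarter over quarter, a rising gap between recurring incidents and a falling unplanned-work ratio are the signatures of an organization that is actually designing failure out, not just recovering from it faster.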

As intelligence is generated faster, decision latency is becoming a core operational risk. How can organizations begin to measure and manage this risk? What practical steps can leaders take to ensure their decision-making processes keep pace with their analytics capabilities?

Decision latency is the silent killer of operational performance. It’s the time elapsed between when your systems know something is wrong and when your people decide what to do about it. You can start to measure it by tracking timestamps from the moment a critical alert is generated to the moment a decisive action is executed. What you often find is that the technology is near-instantaneous, but the human decision cycle—filled with escalations, conference bridges, and consensus-building—takes hours. To manage this, leaders must treat this latency as a formal operational risk. The most practical step is to ruthlessly clarify decision rights and escalation thresholds before an incident. This means empowering the person closest to the problem to make a call within defined boundaries, without needing three levels of approval. By doing so, you’re not just speeding things up; you’re reducing the risk that conflicting interventions and delays cause more damage than the initial technology failure itself.
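Measuring it can start very simply. Here is a minimal sketch, assuming you can pull the relevant timestamps from your alerting and change systems; the event names and times are hypothetical.

```python
from datetime import datetime

# Hypothetical timeline for one incident: when the system knew versus
# when people decisively acted.
events = {
    "alert_generated":    datetime(2025, 6, 1, 2, 14, 0),  # monitoring fires
    "human_acknowledged": datetime(2025, 6, 1, 2, 41, 0),  # someone sees it
    "action_executed":    datetime(2025, 6, 1, 4, 55, 0),  # rollback begins
}

def decision_latency_minutes(events) -> float:
    """Time from the first machine signal to the first decisive human action."""
    delta = events["action_executed"] - events["alert_generated"]
    return delta.total_seconds() / 60

print(f"Decision latency: {decision_latency_minutes(events):.0f} minutes")  # 161
```

Even this crude measurement usually makes the point: the alert fired in seconds, while the decision took hours, and nearly all of that latency sits in escalation and consensus-building rather than in the technology.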

What is your forecast for how analytics capability will reshape the specific roles and skill sets required within IT operations teams over the next three years?

My forecast is that the very definition of an “operations” role will fundamentally change. The focus will shift dramatically from manual intervention to analytical interpretation. We’ll see a decline in the need for traditional system administrators who primarily react to break/fix tickets. In their place, we’ll see the rise of the “Operations Analyst” or “Reliability Strategist”—people whose core job is not to run the systems, but to analyze their performance and influence their design. The most valuable skill set will no longer be deep knowledge of a single technology stack, but the ability to synthesize data from multiple sources, identify structural patterns of risk, and communicate those findings effectively to both engineering and business leaders. In three years, the most sought-after operations professionals will be those who can use longitudinal operational data to build a compelling, evidence-based case for changing how the IT environment is built in the first place, moving analytics from a reactive tool to a primary driver of system design.
