
Anomaly Detection: Knowing Normal Is the Key to Business Trust and Success

January 16, 2014

Threats and attacks are steadily increasing, and business executives face new challenges from exploits of trust. As organizations adopt cloud computing and allow employee-owned devices onto the network, the challenge of securing company data grows exponentially. When it comes to advanced persistent threats (APTs), bad actors look for the weakest link in enterprise security systems and take advantage of every exploit to steal information.

So much emphasis in IT security today is placed on anomaly detection. Whether it is detecting abnormalities in user behavior, system states or trust relationships governed by keys and certificates, the theory is that the faster you can pinpoint anomalies, the faster you can find malicious threats and close security gaps. But making decisions based on anomalies is predicated on one very important assumption: you must understand what “normal” looks like.

In theory, anomaly detection can be a very useful tool across many IT security use cases. For example, organizations can use it to detect unusual behavior on an endpoint that would indicate malware activity. It can also prove quite useful when examining network or application behavior to sniff out account takeover within mission-critical enterprise software. Similarly, anomaly detection can play an instrumental part in finding instances when trust is compromised through stolen or fabricated keys and certificates.

But the gulf between theory and reality stands to sabotage these efforts if security departments don’t truly understand the normal state of the system being evaluated. In the first example, if the department doesn’t know which software and settings are normally installed on an endpoint, it becomes difficult to tell the difference between a legitimate application and a malicious package. Likewise, if IT has no established baseline for the time of day an account logs in or the type of data that account typically accesses, there’s no easy way to tell when the account is being used abnormally. And when no one maintains an accurate and continuous inventory of the legitimate keys and certificates deployed on the network, there’s no predictable way to tell whether the network is hiding forged, fabricated or abused keys and certificates. Meanwhile, attacks that use keys and certificates as a vector are rising dramatically.
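To make the baseline requirement concrete, here is a minimal sketch of the account example above. It assumes hypothetical log data and a deliberately crude notion of “normal” (the hours at which an account has historically logged in); it is an illustration of the idea, not any particular product’s detection logic.

```python
from collections import defaultdict
from datetime import datetime

def build_login_baseline(history):
    """Build a per-account baseline: the set of hours each account normally logs in."""
    baseline = defaultdict(set)
    for account, timestamp in history:
        baseline[account].add(timestamp.hour)
    return baseline

def flag_abnormal_logins(events, baseline):
    """Flag logins that fall outside an account's established hours.

    Without the baseline there is nothing to compare against, which is
    exactly the gap described above.
    """
    return [
        (account, timestamp)
        for account, timestamp in events
        if timestamp.hour not in baseline.get(account, set())
    ]

# Hypothetical data: business-hours history, followed by a 3 a.m. login.
history = [("alice", datetime(2014, 1, 6, 9)), ("alice", datetime(2014, 1, 7, 10))]
today = [("alice", datetime(2014, 1, 16, 3))]
print(flag_abnormal_logins(today, build_login_baseline(history)))  # flags the 3 a.m. login
```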

Cybercriminals have successfully used unsecured keys and certificates to breach trust on enterprise and government systems in order to steal valuable intellectual property and classified information. The majority of global enterprises have no ability to detect anomalies or to respond to attacks on trust that leverage compromised, stolen or fabricated keys and certificates. Since the process of conferring trust through keys and certificates is an area I know well, I’ll expand on that last example.

From professional observation, I’ve seen time and time again that most organizations currently don’t have the ability to detect when keys and certificates have been fabricated, compromised, inserted into the infrastructure, forged or otherwise tampered with. This uncertainty stems from the lack of a baseline. The real trouble is that most organizations don’t develop, let alone maintain, a current and running inventory of their keys and certificates. They don’t know where all the keys are, which relationships they govern and what normal behavior looks like when those keys mediate trusted communications.

That should be step one in the process of detecting abnormal behavior related to keys and certificates. When organizations do a better job of developing a normal baseline, it becomes much easier to spot an unexpected use, tag it as abnormal and act on it.
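As a rough sketch of that first step, assuming nothing more than a hand-maintained dictionary of known-good certificate fingerprints (the names and data below are hypothetical, not any vendor’s API), the comparison itself is simple once the inventory exists:

```python
# Hypothetical inventory of legitimate certificates, keyed by fingerprint.
known_certificates = {
    "3f5a9c...": {"subject": "CN=app.example.com", "owner": "web team"},
    "b81d02...": {"subject": "CN=db.example.com", "owner": "database team"},
}

def audit_observed_certificates(observed_fingerprints, inventory):
    """Return fingerprints seen in use that are not in the known-good inventory.

    Each hit is an anomaly candidate: possibly a forged, fabricated or rogue
    certificate, or simply an inventory gap that must be reconciled before
    anomaly detection can be trusted.
    """
    return [fp for fp in observed_fingerprints if fp not in inventory]

observed = ["3f5a9c...", "deadbe..."]  # gathered from network or endpoint scans
print(audit_observed_certificates(observed, known_certificates))  # ['deadbe...']
```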

But there’s a curveball in this situation, as there always is in security: the baseline for normal isn’t static. Not even close. The normal state of keys and certificates is constantly changing, day by day and hour by hour. To continuously maintain the baseline for normal, organizations must institute automated processes that secure and protect keys and certificates and keep up with the pace of that change. Otherwise, the baseline snapshot you take today will be invalid tomorrow.
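Sketched below is one way such an automated process could look, assuming a discovery function (scan_network, supplied by whatever tooling an organization already has) that reports the certificate fingerprints currently deployed. The point is that the baseline is reconciled on a schedule rather than captured once.

```python
import time

def refresh_baseline(known_fingerprints, scan_network):
    """Reconcile the inventory against a fresh scan instead of trusting an old snapshot.

    New fingerprints are surfaced for review rather than silently trusted, and
    fingerprints that have vanished are flagged for retirement or investigation.
    """
    observed = set(scan_network())
    newly_seen = observed - known_fingerprints
    disappeared = known_fingerprints - observed
    return newly_seen, disappeared

def run_continuously(known_fingerprints, scan_network, interval_seconds=3600):
    """A deliberately simple loop; in practice this would be a scheduled job."""
    while True:
        newly_seen, disappeared = refresh_baseline(known_fingerprints, scan_network)
        if newly_seen or disappeared:
            print("Baseline drift:", {"new": newly_seen, "gone": disappeared})
        time.sleep(interval_seconds)
```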

This is why continuous baseline building is so crucial to keep that vision of the established ‘normal’ state of keys clear at all times. Without it, any form of so-called anomaly detection for key management is just vaporware.

And, really, this is a good lesson for many other niches in IT security. The universal advice stands no matter where you look. If you don’t know what normal looks like, you can’t ever define the traits of abnormal behavior.

This hunt for abnormal behavior is critical to spotting the kind of malicious activity that has the potential to deliver dire consequences. According to a recent Ponemon report, key- and certificate-related data theft and IP loss totals up to $500 billion annually, a clear indicator that these types of attacks are not a distant threat. For businesses that fail to take action, these incidents are costly and far-reaching. They lead to a loss of competitive advantage for established businesses, a loss of venture funding at start-ups, and a loss of trust with the customers that rely on them to safeguard information.


Jeff Hudson serves as CEO of Venafi. A key executive in four successful, high-technology start-ups that have gone public, Hudson brings over 25 years of experience in IT and security management. Prior to joining Venafi, Hudson was the CEO of Vhayu Technologies, which was acquired by Thomson Reuters. Before Vhayu, Hudson held numerous executive leadership posts, including CEO and co-founder of MS2, SVP of Corporate Development at Informix Software, CEO of Visioneer, and senior executive posts at NetFRAME Systems and WYSE Technology. He started his career with IBM. Mr. Hudson earned a B.A. in communications at the University of California, Davis.


