Our IT security programmes are failing – but it’s not for want of trying.
Every year, we spend more and more on security technologies, on teams of security experts and on implementing "best practices" to comply with increasingly strict security regulations. Over the past five years, annual enterprise spending on IT security has grown by 20 per cent or more.
What kind of return are we getting on that investment? Not much, it seems. In each of the past eight years, the average organisation’s dollar losses to worms, hacking attacks and insider abuse have grown between 15 and 100 per cent, depending on the year and type of attack.
Losses from hacking alone have tripled since 2000. So in spite of ever-growing security efforts, the hackers, e-hucksters and malefactors of malicious code continue to pull ahead. Everyone else is left with growing expenditures for security that is, to a great extent, not working.
Many blame Microsoft for leaving too many openings for hackers in its software. Others point to the rapidly increasing sophistication of attacks and attackers alike: viruses have evolved into sophisticated worms. Hackers use “bot herds” of thousands of computers to launch their attacks. Con artists have refined their exploitation of the anonymity and reach of the internet into an art.
Those problems are real, but they are not the reason we are losing this battle. We are losing because we are not thinking clearly about the problem, and we are basing our efforts on several fundamentally flawed assumptions.
For example, we constantly assume that because computers are binary, protection should be binary as well – that is, protection is either 100 per cent or nothing, and we seek the 100 per cent in every countermeasure. We then mistakenly generalise the lessons learned from protecting single computers to the complex community of computers that define our businesses, which dooms us to chasing wildly after thousands of individual “micro-vulnerabilities” spread across the enterprise.
Instead of focusing on this seemingly endless stream of individual vulnerabilities – and trying to address every single “wouldn’t-it-be-horrible-if” theory – we need to step back and think in terms of business risk to the enterprise.
Business risk can be defined as annualised loss expectancy – that is, what is a given problem likely to cost the enterprise? Mathematically, we can calculate it by multiplying three factors: threat, organisational vulnerability and impact cost.
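The calculation itself is simple multiplication. As a sketch – the function name and all figures below are illustrative assumptions, not data from the article:

```python
# Annualised loss expectancy (ALE) = threat x vulnerability x impact cost.
# All numbers here are hypothetical, for illustration only.

def annualised_loss_expectancy(threat_events_per_year: float,
                               vulnerability: float,
                               impact_cost: float) -> float:
    """Expected yearly loss: event frequency, times the chance each
    event succeeds against this organisation, times cost per success."""
    return threat_events_per_year * vulnerability * impact_cost

# Hypothetical scenario: 500 attack attempts a year, 2 per cent succeed,
# and each successful attack costs $40,000 to recover from.
ale = annualised_loss_expectancy(500, 0.02, 40_000)
print(ale)  # 400000.0
```

Note that if any factor is zero – no realistic threat, no vulnerability, or no meaningful cost – the product, and hence the risk, is zero.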
Threat is the frequency of potentially adverse events. For example, the current threat rate of an insider using somebody else’s logged-in PC inappropriately to access restricted information is approximately four attempts per 1,000 users per day.
The threat rate of virus encounters for an organisation with 1,000 PCs is about 4,000 per day, while the threat rate of “attack-related scans” averages about 340 per internet address per day.
Some threats loom larger in our imaginations than in reality. Many people worry about credit card theft conducted by eavesdropping on internet traffic, but no one at Scotland Yard, the FBI, Visa or MasterCard is aware of a single case of such theft in the history of the internet. Thus, the threat rate is zero. Similarly, security experts have published about 3,700 new electronic vulnerabilities in each of the past three years – yet attack code (in most cases, a prerequisite to any threat) has materialised for fewer than 2 per cent of those vulnerabilities. Nevertheless, organisations still spend far too much time and energy on such theoretical but unlikely attacks, which only takes away from efforts to deal with pressing real-world risks.
Vulnerability is the likelihood that a particular threat will succeed against a specific organisation. Security experts tend to look for a few individual, supposedly “strong controls” to protect computers.
But in a complex networked environment, organisations may be better served by layering multiple simple, inexpensive, low-maintenance and non-infringing “synergistic” controls – such as policies, configurations, and filters – on top of the more fundamental countermeasures such as identity, firewall and anti-virus technologies. Each of these synergistic controls may be only 30 to 80 per cent effective against a particular category of risk, but collectively they can often provide extremely strong protection – with relatively modest expense and effort.
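The arithmetic behind layering is worth spelling out: an attack succeeds only if it slips past every layer, so the combined miss rate is the product of the individual miss rates. A sketch, assuming the controls act independently and using illustrative effectiveness figures from the 30-to-80-per-cent range above:

```python
# Combined effectiveness of layered, independent controls:
# an attack must evade every layer to succeed.

def combined_effectiveness(layer_effectiveness: list[float]) -> float:
    """One minus the product of each layer's miss rate.
    Assumes the layers fail independently of one another."""
    miss_rate = 1.0
    for e in layer_effectiveness:
        miss_rate *= (1.0 - e)
    return 1.0 - miss_rate

# Four modest controls, each only 30-80 per cent effective (illustrative).
layers = [0.3, 0.5, 0.6, 0.8]
print(round(combined_effectiveness(layers), 3))  # 0.972
```

Even with no single layer better than 80 per cent, the stack stops over 97 per cent of attacks in this hypothetical – which is the sense in which cheap, imperfect controls can collectively provide very strong protection.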
The third component of the formula, impact cost, refers to the hard-dollar costs that stem from lost sales, halted operations, IT-staff time devoted to repairing a breach, and so on.
It can also include soft-dollar costs such as reduced productivity, damaged reputation, and lost business opportunities.
When these factors are multiplied by one another, a clearer picture of risk emerges.
For example, when any of them is zero, there’s no immediate risk to the organisation – zero times anything is zero. In those cases, there is no need to “fix” anything. If, on the other hand, all three factors show high values, the need to act is urgent and real. This risk-oriented approach helps organisations eliminate unnecessary spending while providing equal or better protection.
It can also help them make better use of the products and resources they already have in place. For example, the typical corporation is equipped with firewalls, anti-virus and intrusion detection products.
Adding dozens of simpler, less sophisticated, synergistic controls to these technologies can make it unnecessary to rapidly implement Microsoft's and other vendors' "critical" patches more than 70 per cent of the time.
The solution to security challenges lies in understanding that this is not just a technical problem. It’s a business problem – and more fundamentally, a thinking problem.
Organisations can prevent costly attacks on their infrastructure – and bring security spending under control – when they stop following security dogma and replace flawed assumptions with a rational, pragmatic approach that focuses on real risks.
The author is chief technology officer of Cybertrust