Why Are Real IT Cyber Security Improvements So Hard to Achieve?

Advancements in IT cyber security lag all other areas of tech, which puts applications and data at unnecessary risk.

Christopher Tozzi, Technology Analyst

July 6, 2020


By most measures, information technology has advanced by leaps and bounds during the past several decades. CPUs are orders of magnitude faster, less expensive and more power-efficient than they were around 2000. Software reliability is much improved, with Blue Screens of Death fading into memory. The cloud has made our data and applications accessible from anywhere, enabling a level of convenience that folks could barely fathom 20 years ago. Yet, amid all these improvements, one facet of IT stands apart: IT cyber security.

In most senses, we are no better at securing applications and data today than we were 10 or 30 or even 50 years ago. Why is that? I don’t think there’s a single answer. But I do think it’s worth examining the myths surrounding IT security that are often cited as explanations for why systems and applications remain so insecure, even as they have improved markedly in other respects.

IT Cyber Security Trends: Things Are Getting Worse

You don’t need to be a security analyst to sense that things are not exactly great on the IT cyber security front. Anyone who reads the news knows that barely a month goes by without a major breach at one company or another.

This has been the case for years, and yet things only seem to be getting worse.


These trends have emerged despite the fact that the IT cyber security market grew by 3,500% between 2004 and 2017. Clearly, there is a lot of interest in trying to improve IT cyber security. There are, however, few tangible results to show for it.

Myths About IT Cyber Security Failures

It’s easy to point fingers in various directions to try to explain why we have done such a poor job of improving IT security over the years. Unfortunately, most of the places at which blame is typically directed bear limited, if any, responsibility for our lack of security.

Following are common myths about why IT cyber security remains so weak.

Myth 1: Software has grown too complex to secure.

It’s hard to deny that software is more complex today than it was 10 or 20 years ago. The cloud, distributed infrastructure, microservices, containers and the like have led to software environments that change faster and involve more moving pieces.

It’s reasonable to argue (as people have been doing for years) that this added complexity has made modern environments more difficult to secure. There may be some truth to this. But, on the flip side, you have to remember that the complexity brings new security benefits, too. In theory, distributed architectures, microservices and other modern models make it easier to isolate or segment workloads in ways that should mitigate the impact of a breach.
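To make the isolation point concrete, here is a minimal sketch of the kind of segmentation modern platforms support: a Kubernetes NetworkPolicy, expressed as a Python dictionary purely for illustration, that allows only API gateway pods to reach a payments service. The service names and namespace are hypothetical, not taken from any real deployment.

```python
import json

# Hypothetical sketch: a Kubernetes NetworkPolicy manifest that restricts
# ingress to pods labeled app=payments so that only pods labeled
# app=api-gateway may connect. Traffic from any other workload in the
# cluster is dropped, limiting the blast radius if another service is breached.
payments_isolation_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "isolate-payments", "namespace": "prod"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "payments"}},
        "policyTypes": ["Ingress"],
        "ingress": [
            {"from": [{"podSelector": {"matchLabels": {"app": "api-gateway"}}}]}
        ],
    },
}

# JSON is valid YAML, so the printed manifest could be applied with kubectl.
print(json.dumps(payments_isolation_policy, indent=2))
```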

Thus, I think it’s simplistic to say that the reason IT cyber security remains so poor is that software has grown more complex, and that security strategies and tools have not kept pace. You could argue just as plausibly that modern architectures should have improved security.

Myth 2: Users are bad at security.

It has long been common to blame end users for security breaches. Yet, while it’s certainly true that some users, perhaps even most users, don’t always follow security best practices, this is a poor explanation for the types of breaches that grab headlines today.

Attacks that steal millions of consumers’ personal data, or that hold billions of dollars’ worth of information for ransom, aren’t typically caused by low-level employees writing their workstation passwords on sticky notes because they don’t know any better. They’re the result of mistakes like unpatched server software or data that is accidentally left exposed on the internet for all to see.
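As one concrete illustration of the accidentally-exposed-data category, here is a minimal sketch, assuming the AWS boto3 SDK and a hypothetical bucket name, that flags an S3 bucket whose access control list grants permissions to all users, the kind of misconfiguration behind many real-world leaks.

```python
import boto3

# The "all users" group URI; any grant to this grantee makes the bucket public.
PUBLIC_GROUP_URI = "http://acs.amazonaws.com/groups/global/AllUsers"


def bucket_is_public(bucket_name: str) -> bool:
    """Return True if the bucket's ACL contains a grant to all users."""
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    return any(
        grant.get("Grantee", {}).get("URI") == PUBLIC_GROUP_URI
        for grant in acl.get("Grants", [])
    )


if __name__ == "__main__":
    # Hypothetical bucket name; requires AWS credentials with s3:GetBucketAcl.
    if bucket_is_public("example-customer-exports"):
        print("WARNING: bucket ACL is open to the entire internet")
```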

So, yes, ordinary end users could do more to help secure the systems and applications they use. But they are not the cause of the biggest problems in IT cyber security today.

Myth 3: The government doesn’t do enough to promote IT security.

Blaming the government for problems is always popular, and IT cyber security is no exception. Some folks contend that private companies can’t stop the never-ending stream of security breaches on their own, and that the government needs to do more to help.

Asking governments to do more on the IT cyber security front is reasonable. However, it’s not as if governments don’t already invest significantly in IT security initiatives. And given that government support for cyber security has only increased over the decades, you’d think that we’d be getting better at security over time if government intervention were the key. But we’re not, which suggests that government investment in cyber security is not the make-or-break factor for achieving actual progress.

Myth 4: Developers don’t care about (or understand) security.

Another scapegoat for the lack of improvement in IT cyber security is developers. Indeed, the idea that developers either don’t care about security or simply don’t understand how to write consistently secure code forms the root of the DevSecOps concept, which proposes that we’d finally improve software security if only developers and security engineers worked more closely together.

Here again, this is, at best, an exaggerated effort to explain the lack of progress in IT cyber security. There are surely some developers who don’t take security seriously, or who lack proper training in secure coding best practices. But to say that all developers fail to write secure code is a gross generalization that can’t be substantiated. More to the point, it also ignores the fact that many breaches are the result not of insecure code, but of insecure deployments or configurations.

Myth 5: Security is a cat-and-mouse game, and we’ll never get ahead.

Finally, you sometimes hear IT cyber security described as a sort of cat-and-mouse game. The implication is that it’s impossible to make real improvements to cybersecurity because the bad guys will always find ways to get around whatever new defenses you come up with.

The reality, though, is that few fundamental improvements to cybersecurity have been achieved in a long time. Buffer overflow exploits, injection attacks, denial-of-service attacks and the like have been widespread attack vectors for decades. You might think that the industry would have figured out ways of fundamentally plugging these sorts of holes by now. But it hasn’t, even though other realms of the IT industry have seen astounding leaps forward.
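Injection is a good illustration of how old these holes are and how old the fixes are. The sketch below uses Python's built-in sqlite3 module and a hypothetical users table to contrast a query assembled by string concatenation, which attacker-supplied input can subvert, with a parameterized query that treats the same input purely as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced into the SQL text, so the OR clause
# becomes part of the query and matches every row in the table.
vulnerable_rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the input is passed as a bound parameter and cannot change the query.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable_rows), "row(s) from the concatenated query")   # 1: matched everything
print(len(safe_rows), "row(s) from the parameterized query")        # 0: no user has that literal name
```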

We now have operating systems that virtually never crash, CPUs that can run on comparatively minuscule amounts of electricity and word processors that can predict what you want to write before you write it. Why, despite advances like these, are we still living with the same IT cyber security problems that have been happening since the 1980s?

Making Real Improvements to Security

Now that we know what’s not to blame for the enduringly poor security record of IT organizations today, what’s the real cause?

That’s a very complicated question. I think a mix of the factors described above is partly at play, even though no single one of them fully explains the problems. Some developers probably should work harder to write more secure code. The government could do more to support private enterprise in developing cybersecurity solutions. And so on.

However, I suspect that the single largest factor in the lack of improvement to security is that companies that suffer major breaches face few meaningful consequences for them. Organizations that report major security problems might suffer some backlash from consumers, but I’ve yet to hear of a major corporation going out of business because it failed to take IT cyber security seriously. Equifax, for example, remains as profitable as ever despite the major breach it suffered.

Nor are there usually major fines imposed by governments on companies that suffer breaches due to negligence. To fine a company for a breach, you first have to prove that it violated a specific compliance requirement. Even if you do that, the fines are typically slaps on the wrist. To use Equifax as an example again, it paid $700 million in a settlement for its data breach. That may seem like a large sum, but it represents only about 20% of Equifax’s annual revenue for most of the past several years.
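As a quick back-of-the-envelope check on that ratio: the $700 million settlement figure comes from the reporting above, while the annual revenue figure of roughly $3.4 billion is my own assumption, broadly in line with Equifax's reported results from that period.

```python
# Rough sanity check of the settlement-to-revenue ratio described above.
settlement = 700_000_000          # settlement amount cited in the article
annual_revenue = 3_400_000_000    # assumed annual revenue, not a figure from the article

ratio = settlement / annual_revenue
print(f"Settlement is roughly {ratio:.0%} of one year's revenue")  # roughly 21%
```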

In short, then, I think we’d finally see real improvements to IT security if companies faced more tangible consequences for their failure to keep systems and data secure. Twenty years ago, another writer on this site proposed allowing consumers to sue vendors for security flaws in the same way that auto manufacturers are held liable for defects in their products. Maybe that sort of solution would lead to the accountability that organizations need to take security truly seriously.

About the Author

Christopher Tozzi

Technology Analyst, Fixate.IO

Christopher Tozzi is a technology analyst with subject matter expertise in cloud computing, application development, open source software, virtualization, containers and more. He also lectures at a major university in the Albany, New York, area. His book, “For Fun and Profit: A History of the Free and Open Source Software Revolution,” was published by MIT Press.
