Monthly Archives: February 2016

Fear and Loathing in Vulnerability Management

Vulnerability management – the security program that everyone loves to hate, especially those on the receiving end of a bunch of arcane reports. Vulnerability management programs seem to monopolize more and more of a security team’s time, mostly because of the effort involved in scheduling, validating and interpreting scans. But does all this effort actually lead anywhere besides a massive time suck and interdepartmental stalemates? I would argue that if the core of your program is built on scanning, you’re doing it wrong.

The term vulnerability management is a misnomer, because “management” implies that you’re solving problems. Maybe in the beginning scanning and vulnerability tracking helped organizations, but now it’s just another method used by security leadership to justify their ever-increasing black-hole budgets. “See? I told you it was bad. Now I need more $$$$ for this super-terrific tool!”

Vulnerability management programs shouldn’t be based on scanning; they should be focused on the hard stuff: policies, standards and procedures, with scans used for validation. If you’re stuck in an endless cycle of scanning, patching, more scanning and more patching, you’ve failed.

You should be focused on building processes that bake build standards and vulnerability remediation into your deployment procedures. Work with your infrastructure team to support a DevOps model that eliminates those “pet” systems. Focus on “cattle,” or immutable systems, that can and should be continuously replaced when an application or operating system needs to be upgraded. Better yet, use containers. Have a glibc vulnerability? The infrastructure team should be creating new images that can be rolled out across the organization instead of trying to “patch and pray” after finally getting a maintenance window. You should have an environment resilient enough to tolerate change; an infrastructure that’s self-healing because it’s in a state of continuous deployment.
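If you want a concrete picture of what “replace, don’t patch” looks like, here’s a minimal sketch in Python. The registry, image name, tag and Kubernetes deployment are all hypothetical placeholders; the point is that the fix ships as a rebuilt image that replaces running instances, not as an in-place patch during a maintenance window.

```python
#!/usr/bin/env python3
"""Rough sketch: rebuild and roll out a patched base image instead of
patching hosts in place. Registry, image, tag and deployment names are
hypothetical examples, not a prescription."""
import subprocess

REGISTRY = "registry.example.com"          # hypothetical internal registry
IMAGE = f"{REGISTRY}/base/app-runtime"     # hypothetical base image
TAG = "2016.02-glibc-fix"                  # tag the rebuild after the fix it carries

def run(*cmd):
    """Echo and execute a command, failing loudly if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Rebuild the base image from a Dockerfile that pulls in the patched packages.
run("docker", "build", "--pull", "-t", f"{IMAGE}:{TAG}", ".")

# 2. Publish it so every team deploys from the same known-good artifact.
run("docker", "push", f"{IMAGE}:{TAG}")

# 3. Replace running instances instead of patching them; here, a rolling
#    update of a hypothetical Kubernetes deployment named "app".
run("kubectl", "set", "image", "deployment/app", f"app={IMAGE}:{TAG}")
```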

I recognize that most organizations aren’t there, but THIS should be your goal, not scanning and patching. Because it’s a treadmill to nowhere. Or maybe you’re happy doing the same thing over and over again with no difference in the result. If so, let me find you a very large hamster wheel.

[Image: hamster wheel meme]


The Question of Technical Debt

Not too long ago, I came across an interesting blog post by the former CTO of Etsy, Kellan Elliott-McCrea, which made me rethink my understanding of, and approach to, the concept of technical debt. In it, he opines that technical debt doesn’t really exist and that the term is overused. While he is specifically discussing code, he makes some valid points that can be applied to information security and IT infrastructure.

In the post, he credits Peter Norvig with the quote, “All code is liability.” This echoes Nicholas Carr’s argument that as infrastructure technology becomes more pervasive and less proprietary, its competitive advantage shrinks while the risk it creates grows.

When a resource becomes essential to competition but inconsequential to strategy, the risks it creates become more important than the advantages it provides. Think of electricity. Today, no company builds its business strategy around its electricity usage, but even a brief lapse in supply can be devastating…

Over the years, I’ve collected a fair amount of “war stories” about less than optimal application deployments and infrastructure configurations. Too often, I’ve seen things that make me want to curl up in a fetal position beneath my desk. Web developers failing to close connections or set timeouts to back-end databases, causing horrible latency. STP misconfigurations resulting in network core meltdowns. Data centers built under bathrooms or network hub sites using window unit air conditioners. Critical production equipment that’s end-of-life or not even under support. But is this really technical debt or just the way of doing business in our modern world?

Life is messy and always a “development” project. Maybe the main reason DevOps has gathered such momentum in the IT world is that it reflects the constantly evolving, always shifting nature of existence. In the real world, there is no greenfield. Every enterprise struggles to find the time and resources for ongoing maintenance, upgrades and improvements. As Elliott-McCrea so beautifully expresses, maybe our need to label this state of affairs as atypical is a cop-out. By turning this daily challenge into something momentous, we make it worse. We accuse the previous leadership and engineering staff of incompetence. We come to believe that the problem will be fully eradicated through the addition of the latest miracle product. Or we invite some high-priced process junkies in to provide recommendations that often result in inertia.

We end up pathologizing something that is normal, often casting an earlier team as bumbling, a characterization that easily returns to haunt us.

When we take it a step further and turn these conflations into a judgement on the intellect, professionalism, and hygiene of whomever came before us we inure ourselves to the lessons those people learned. Quickly we find ourselves in a situation where we’re undertaking major engineering projects without having correctly diagnosed what caused the issues we’re trying to solve (making recapitulating those issues likely) and having discarded the iteratively won knowledge that had allowed our organization to survive to date.

Maybe it’s time for information security teams to drop the “blame game” when evaluating our infrastructures and applications. Stop crying about technical debt, because there are legitimate explanations for the technology decisions made in our organizations, and it’s generally not because someone was inept. We need to realize that IT environments aren’t static; they will always be changing and growing, and we must transform with them.


Why You Shouldn’t Be Hosting Public DNS

As a former Unix engineer who managed my share of critical network services, one of the first things I do when evaluating an organization is to validate the health of infrastructure components such as NTP, RADIUS, and DNS. I’m often shocked by what I find. Most people barely understand how these services work, but when they break, the result can be anything from troublesome technical issues to a full meltdown. This is especially true of DNS.
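To give a flavor of what that first pass looks like, here’s a quick-and-dirty sketch in Python using only the standard library: can the resolver answer a query promptly, and does an NTP server respond with a sane clock? The hostnames are placeholders; a real assessment would point at the organization’s own resolvers and time sources, and RADIUS would need its own checks.

```python
#!/usr/bin/env python3
"""Minimal health probe for DNS and NTP, standard library only.
Hostnames are placeholders, not recommendations."""
import socket
import struct
import time

def check_dns(name="www.example.com"):
    """Can we resolve a name at all, and how long does it take?"""
    start = time.time()
    addrs = socket.getaddrinfo(name, 80, proto=socket.IPPROTO_TCP)
    print(f"DNS: {name} -> {addrs[0][4][0]} in {time.time() - start:.3f}s")

def check_ntp(server="pool.ntp.org"):
    """Send a minimal SNTP client request and report the rough clock offset."""
    NTP_EPOCH_OFFSET = 2208988800          # seconds between the 1900 and 1970 epochs
    packet = b"\x1b" + 47 * b"\0"          # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
    transmit = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_OFFSET
    print(f"NTP: {server} offset {transmit - time.time():+.3f}s")

if __name__ == "__main__":
    check_dns()
    check_ntp()
```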

Most problems with DNS implementations are caused by the fact that so few people actually understand how the protocol is supposed to work, including vendors. The kindest thing one can say about DNS is that it’s esoteric. In my IT salad days, I implemented and was responsible for managing the BIND 9.x infrastructure at an academic institution. I helped write and enforce the DNS request policy, cleaned up and policed the namespace, built and hardened the servers, compiled the BIND binaries and essentially guarded the architecture for over a decade. I ended up in this role because no one else wanted it. I took a complete mess of a BIND 4.x deployment and proceeded to untangle a ball of string the size of New Zealand. The experience was an open source rite of passage, helping to make me the engineer and architect I am today. I also admit to being a BIND fangirl, mostly because it’s the core software of most load-balancers and IPAM systems.

This history makes what I’m about to recommend even more shocking. Outside of service providers, I no longer believe that organizations should run their own public DNS servers. Most enterprises get along fine using Active Directory for internal authentication and name resolution, while relying on a DNS provider such as Neustar, Amazon or Akamai to resolve external services. They don’t need to take on the risk associated with managing external authoritative DNS servers, or even with load-balancing most public services.
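One quick way to see where that responsibility actually sits today is to look at who answers authoritatively for your zone. Here’s a small sketch that leans on dig, since that’s the tool of choice around here; the zone name is a placeholder for your own external domain.

```python
#!/usr/bin/env python3
"""Sketch: list the authoritative name servers for a zone to see whether
external DNS is already delegated to a managed provider. The zone name
is a placeholder."""
import subprocess

def authoritative_servers(zone="example.com"):
    """Return the zone's NS records, as reported by dig."""
    result = subprocess.run(
        ["dig", "+short", "NS", zone],
        capture_output=True, text=True, check=True,
    )
    return sorted(result.stdout.split())

if __name__ == "__main__":
    for ns in authoritative_servers():
        # If these point at a provider's namespace rather than your own
        # on-premises hosts, public resolution already lives outside your data center.
        print(ns)
```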

The hard truth is that external DNS is best left to the experts who have time for its care and feeding. One missed security patch, a mistyped entry, a system compromise: any of these could have a significant impact on your business. And unless you’re an IT organization, wouldn’t it be better to have someone else deal with that headache? Besides, as organizations continue to move their services to the cloud, why would you tie the name resolution of those resources to some legacy, on-premises server? But most importantly, as DDoS attacks become more prevalent, UDP-based services are an easy target, especially DNS. Personally, I’d rather have a service provider deal with the agony of DDoS mitigation; they’re better prepared, with the right (expensive) tools and plenty of bandwidth.

I write this with great sadness and it even feels like I’m relinquishing some of my nerd status. But never fear, I still have a crush on Paul Vixie and will always choose dig over nslookup.

 
