
Fear and Loathing in Vulnerability Management

Vulnerability management – the security program that everyone loves to hate, especially those on the receiving end of a bunch of arcane reports. Vulnerability management programs seem to monopolize more and more of a security team’s time, mostly because of the energy involved in scheduling, validating and interpreting scans. But does this effort actually lead anywhere besides a massive time suck and interdepartmental stalemates? I would argue that if the core of your program is built on scanning, you’re doing it wrong.

The term vulnerability management is a misnomer, because “management” implies that you’re solving problems. Maybe in the beginning scanning and vulnerability tracking helped organizations, but now it’s just another method security leadership uses to justify their ever-increasing black-hole budgets. “See? I told you it was bad. Now I need more $$$$ for this super-terrific tool!”

Vulnerability management programs shouldn’t be based on scanning; they should be focused on the hard stuff: policies, standards and procedures, with scans used for validation. If you’re stuck in an endless cycle of scanning, patching, more scanning and more patching, you’ve failed.
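One concrete way to use scans as validation rather than as the program itself is to wire them into the deployment pipeline as a gate, so a build that violates your standard never ships. A minimal sketch, assuming a GitHub Actions-style CI and the open-source Trivy scanner; the job name, registry, image tag and severity threshold are all illustrative, not prescriptive:

```yaml
# Hypothetical CI job: the pipeline fails if the candidate image
# carries known HIGH/CRITICAL vulnerabilities, so the scan validates
# the build standard instead of producing after-the-fact reports.
scan-image:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Build candidate image
      run: docker build -t registry.example.com/app:${{ github.sha }} .
    - name: Gate the release on scan results
      run: |
        trivy image --exit-code 1 --severity HIGH,CRITICAL \
          registry.example.com/app:${{ github.sha }}
```

The point is the placement, not the tool: any scanner that can return a non-zero exit code can serve as the validation step.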

You should be focused on building processes that bake build standards and vulnerability remediation into a deployment procedure. Work with your Infrastructure team to support a DevOps model that eliminates those “pet” systems. Focus on “cattle” or immutable systems that can and should be continuously replaced when an application or operating system needs to be upgraded. Better yet, use containers. Have a glibc vulnerability? The infrastructure team should be creating new images that can be rolled out across the organization instead of trying to “patch and pray” after finally getting a maintenance window. You should have an environment resilient enough to tolerate change; an infrastructure that’s self-healing because it’s in a state of continuous deployment.
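The glibc example is the whole argument in miniature: instead of patching live hosts, you rebuild the image on an updated base and redeploy it everywhere. A minimal sketch, assuming a Debian-based image; the image names, tag and application path are illustrative:

```dockerfile
# Rebuilding picks up the patched glibc from the refreshed base image;
# nothing is ever patched in place on a running host.
FROM debian:bookworm-slim

# Pull in the latest security updates at build time, then trim the
# apt cache to keep the image small.
RUN apt-get update \
 && apt-get upgrade -y \
 && rm -rf /var/lib/apt/lists/*

COPY ./app /usr/local/bin/app
CMD ["/usr/local/bin/app"]
```

Rolling the rebuilt image out through your orchestrator (for example, `kubectl set image` on Kubernetes) replaces every running container, which is exactly the “cattle” model described above.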

I recognize that most organizations aren’t there yet, but THIS should be your goal, not scanning and patching, because that’s a treadmill to nowhere. Or maybe you’re happy doing the same thing over and over again with no difference in the result. If so, let me find you a very large hamster wheel.




The Question of Technical Debt

Not too long ago, I came across an interesting blog post by the former CTO of Etsy, Kellan Elliott-McCrea, which made me rethink my understanding and approach to the concept of technical debt. In it, he opined that technical debt doesn’t really exist and it’s an overused term. While specifically referencing code in his discussion, he makes some valid points that can be applied to information security and IT infrastructure.

In the post, he credits Peter Norvig with the quote, “All code is liability.” This echoes Nicholas Carr’s belief in the increased risk that arises from infrastructure technology due to the decreased advantage as it becomes more pervasive and non-proprietary.

When a resource becomes essential to competition but inconsequential to strategy, the risks it creates become more important than the advantages it provides. Think of electricity. Today, no company builds its business strategy around its electricity usage, but even a brief lapse in supply can be devastating…

Over the years, I’ve collected a fair amount of “war stories” about less-than-optimal application deployments and infrastructure configurations. Too often, I’ve seen things that make me want to curl up in a fetal position beneath my desk. Web developers failing to close connections or set timeouts to back-end databases, causing horrible latency. STP misconfigurations resulting in network core meltdowns. Data centers built under bathrooms, or network hub sites cooled by window-unit air conditioners. Critical production equipment that’s end-of-life or not even under support. But is this really technical debt, or just the way of doing business in our modern world?

Life is messy and always a “development” project. Maybe the main reason DevOps has gathered such momentum in the IT world is because it reflects the constantly evolving, always shifting, nature of existence. In the real world, there is no greenfield. Every enterprise struggles to find the time and resources for ongoing maintenance, upgrades and improvements. As Elliott-McCrea so beautifully expresses, maybe our need to label this state of affairs as atypical is a cop-out. By turning this daily challenge into something momentous, we make it worse. We accuse the previous leadership and engineering staff of incompetence. We come to believe that the problem will be fully eradicated through the addition of the latest miracle product. Or we invite some high-priced process junkies in to provide recommendations that often result in inertia.

We end up pathologizing something that is normal, often casting an earlier team as bumbling, a characterization that easily returns to haunt us.

When we take it a step further and turn these conflations into a judgement on the intellect, professionalism, and hygiene of whomever came before us we inure ourselves to the lessons those people learned. Quickly we find ourselves in a situation where we’re undertaking major engineering projects without having correctly diagnosed what caused the issues we’re trying to solve (making recapitulating those issues likely) and having discarded the iteratively won knowledge that had allowed our organization to survive to date.

Maybe it’s time for information security teams to drop the “blame game” when evaluating our infrastructures and applications. Stop crying about technical debt, because there are legitimate explanations for the technology decisions made in our organizations, and it’s generally not because someone was inept. We need to realize that IT environments aren’t static and will always be changing and growing. We must transform with them.


Security and Ugly Babies

Recently a colleague confessed his frustration to me over the resistance he’s been encountering in a new job as a security architect. He’s been attempting to address security gaps with the operations team, but he’s being treated like a Bond villain and the paranoia and opposition are wearing him down.

It’s a familiar story for those of us in this field. We’re brought into an organization to defend against breaches and engineer solutions to reduce risk, but along the way often discover an architecture held together by bubble gum and shoestring. We point it out, because it’s part of our role, our vocation to protect and serve. Our “reward” is that we usually end up an object of disdain and fear. We become an outcast in the playground, dirt kicked in the face by the rest of IT, as we wonder what went wrong.

We forget that in most cases the infrastructure we criticize isn’t just cabling, silicon and metal. It represents the output of hundreds, sometimes thousands, of hours from a team of people, most of whom want to do good work but are hampered by tight budgets and limited resources. Maybe they aren’t the best and brightest in their field, but that doesn’t necessarily mean they don’t care. I don’t think anyone starts their day by saying, “I’m going to do the worst job possible today.”

Then the security team arrives on the scene: the perpetual critic. We don’t actually build anything. All we do is tell the rest of IT that their baby is ugly and that they should get a new one. Why are we surprised that they’re defensive and hostile? No one wants to hear that their hard work and long hours have resulted in shit.

What we fail to realize is this is our baby too and our feedback would be better received if we were less of a critic and more of an ally.
