The Question of Technical Debt

Not too long ago, I came across an interesting blog post by Kellan Elliott-McCrea, the former CTO of Etsy, that made me rethink my understanding of, and approach to, technical debt. In it, he argues that technical debt doesn’t really exist and that the term is overused. While his discussion specifically references code, he makes some valid points that apply equally to information security and IT infrastructure.

In the post, he credits Peter Norvig with the quote, “All code is liability.” This echoes Nicholas Carr’s argument that as infrastructure technology becomes more pervasive and less proprietary, the advantage it confers shrinks while the risk it creates grows:

When a resource becomes essential to competition but inconsequential to strategy, the risks it creates become more important than the advantages it provides. Think of electricity. Today, no company builds its business strategy around its electricity usage, but even a brief lapse in supply can be devastating…

Over the years, I’ve collected a fair number of “war stories” about less-than-optimal application deployments and infrastructure configurations. Too often, I’ve seen things that make me want to curl up in the fetal position beneath my desk. Web developers failing to close connections or set timeouts to back-end databases, causing horrible latency. STP misconfigurations resulting in network core meltdowns. Data centers built under bathrooms, or network hub sites cooled by window-unit air conditioners. Critical production equipment that’s end-of-life or not even under support. But is this really technical debt, or just the way of doing business in our modern world?
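That database war story, at least, has a boring and well-known fix: bound every wait and release every handle. A minimal sketch in Python, using the standard library’s sqlite3 as a stand-in for whatever back-end database applies (the table and column names are invented for illustration):

```python
import sqlite3
from contextlib import closing

# closing() guarantees the connection is released even if a query throws;
# the timeout bounds how long we block on a busy database instead of
# letting requests pile up behind a stuck handle.
with closing(sqlite3.connect(":memory:", timeout=5.0)) as conn:
    with conn:  # transaction scope: commit on success, roll back on error
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
        conn.execute("INSERT INTO orders DEFAULT VALUES")
    count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
# the connection is closed here; nothing lingers to starve the database
```

The pattern, not the library, is the point: whatever the driver, the timeout and the deterministic close are what keep one sloppy code path from becoming everyone’s latency problem.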

Life is messy and always a “development” project. Maybe the main reason DevOps has gathered such momentum in the IT world is that it reflects the constantly evolving, always-shifting nature of existence. In the real world, there is no greenfield. Every enterprise struggles to find the time and resources for ongoing maintenance, upgrades, and improvements. As Elliott-McCrea so beautifully expresses, maybe our need to label this state of affairs as atypical is a cop-out. By turning this daily challenge into something momentous, we make it worse. We accuse the previous leadership and engineering staff of incompetence. We come to believe that the problem will be fully eradicated by the addition of the latest miracle product. Or we invite some high-priced process junkies in to provide recommendations, which often result in inertia.

We end up pathologizing something that is normal, often casting an earlier team as bumbling, a characterization that easily returns to haunt us.

When we take it a step further and turn these conflations into a judgement on the intellect, professionalism, and hygiene of whomever came before us we inure ourselves to the lessons those people learned. Quickly we find ourselves in a situation where we’re undertaking major engineering projects without having correctly diagnosed what caused the issues we’re trying to solve (making recapitulating those issues likely) and having discarded the iteratively won knowledge that had allowed our organization to survive to date.

Maybe it’s time to drop the “blame game” by information security teams when evaluating our infrastructures and applications. Stop crying about technical debt, because there are legitimate explanations for the technology decisions made in our organizations and it’s generally not because someone was inept. We need to realize that IT environments aren’t static and will always be changing and growing. We must transform with them.


Why You Shouldn’t Be Hosting Public DNS

As a former Unix engineer who managed my share of critical network services, one of the first things I do when evaluating an organization is validate the health of infrastructure components such as NTP, RADIUS, and DNS. I’m often shocked by what I find. Most people barely understand how these services work, but when they break, the result can be anything from troublesome technical issues to a full meltdown. This is especially true of DNS.

Most problems with DNS implementations are caused by the fact that so few people actually understand how the protocol is supposed to work, including vendors. The kindest thing one can say about DNS is that it’s esoteric. In my IT salad days, I implemented and was responsible for managing the BIND 9.x infrastructure at an academic institution. I helped write and enforce the DNS request policy, cleaned up and policed the namespace, built and hardened the servers, compiled the BIND binaries, and essentially guarded the architecture for over a decade. I ended up in this role because no one else wanted it. I took a complete mess of a BIND 4.x deployment and proceeded to untangle a ball of string the size of New Zealand. The experience was an open source rite of passage, helping to make me the engineer and architect I am today. I also admit to being a BIND fangirl, mostly because it’s the core software of most load-balancers and IPAM systems.
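For the curious, “hardening the servers” meant, among other things, locking down named.conf. A minimal, hypothetical sketch of the kind of options an authoritative-only BIND server might carry (values are illustrative only; check the documentation for your BIND release, since some options such as rate limiting arrived in later 9.x versions):

```
options {
    recursion no;                    // authoritative-only: never recurse for strangers
    allow-transfer { none; };        // grant zone transfers explicitly, per zone
    version "not disclosed";         // don't advertise the exact BIND release
    rate-limit {
        responses-per-second 10;     // blunt UDP amplification abuse
    };
};
```

None of this is exotic; it is exactly the sort of care and feeding that, as I argue below, most organizations are better off paying someone else to do.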

This history makes what I’m about to recommend even more shocking. Outside of service providers, I no longer believe that organizations should run their own public DNS servers. Most enterprises get along fine using Active Directory for internal authentication and name resolution, while relying on a DNS provider such as Neustar, Amazon, or Akamai to resolve external services. They don’t need to take on the risk of managing external authoritative DNS servers, or even of load-balancing most public services.

The hard truth is that external DNS is best left to experts who have time for its care and feeding. One missed security patch, a mistyped entry, a system compromise; any of these could have a significant impact on your business. And unless you’re an IT organization, wouldn’t it be better to have someone else deal with that headache? Besides, as organizations continue to move their services to the cloud, why would you tie the name resolution of those resources to some legacy, on-premises server? Most importantly, as DDoS attacks become more prevalent, UDP-based services are an easy target, especially DNS. Personally, I’d rather have a service provider deal with the agony of DDoS mitigation. They’re better prepared, with the right (expensive) tools and plenty of bandwidth.

I write this with great sadness and it even feels like I’m relinquishing some of my nerd status. But never fear, I still have a crush on Paul Vixie and will always choose dig over nslookup.



Introducing: Security SOC Puppets

Gert and Bernie

Please join Gert, Bernie and friends in their wild adventures through cyberspace! In episode one, our woolen friends explore the frustrating topic of email encryption.


Security Threat Levels with a Side of FUD

Today the SANS Internet Storm Center raised its Infocon threat level to “yellow” due to the recently announced backdoor in Juniper devices. I wouldn’t even have known this if someone hadn’t pointed it out to me, and then I felt like I was in an episode of Star Trek. I kept waiting for the ship’s computer to make an announcement so I could strap myself into my chair.

While the level names are different, the colors seem to mirror the old Homeland Security color-coded advisory system, which was eliminated in 2011 due to questions over its usefulness.

[Image: the Homeland Security Advisory System color-coded threat chart]

According to a story on CNN.com:

“The old color coded system taught Americans to be scared, not prepared,” said ranking member Rep. Bennie Thompson, D-Mississippi. “Each and every time the threat level was raised, very rarely did the public know the reason, how to proceed, or for how long to be on alert. I have raised concerns for years about the effectiveness of the system and have cited the need for improvements and transparency. Many in Congress felt the system was being used as a political scare tactic — raising and lowering the threat levels when it best suited the Bush administration.”

I’ve had similar experiences with the SANS Infocon and the reactions it provokes from management.

Pointy-haired Fearless Leader: OMG, the SANS Infocon is at YELLOW!!! The end of the Internet is nigh!

Much Put-Upon Security Architect: Please calm down and take a Xanax. It’s just a color.

I’d like to propose a simpler and more useful set of threat levels with recommended actions. Let’s call it the Postmodern Security Threat Action Matrix:

Level: Tin Foil Hat
Description: Normal levels of healthy paranoia
Action: You can still check your email and watch Netflix. But remember, they’re always watching…

Level: Adult Diaper
Description: It’s damn scary out there.
Action: Trust no one. Remember to update your Tor browser. Have your “go bag” ready.

Level: Fetal Position
Description: Holy underwear Batman, it’s the end.
Action: Destroy all electronic devices and move into a bomb shelter. The Zombie Apocalypse is imminent.

Don’t Let the Grinch Ruin Your Credit

Believe it or not, I actually like to educate my friends and acquaintances about technology. It makes my skeptical, shriveled, infosec heart grow a few sizes larger when I solve even the simplest problems, making someone’s life a little easier. So I was ecstatic to create and teach a free online-safety webinar for one of my favorite programs, AARP Tek Academy. While not as exciting as chasing down hackers or fighting a DDoS attack, it was a very rewarding experience. And I didn’t have to argue with anyone about budgets or risk. So please share it with your Luddite friends this holiday season.

You can access the webinar here.



Why You’re Probably Not Ready for SDN

While it may seem as though I spend all my time inventing witty vendor snark to post on social media, it doesn’t pay the bills. So I have a day job as a Sr. Security Architect. But having come up through the ranks in IT infrastructure, I often consider myself “architect first, security second.” I’m that rare thing: an IT generalist. I actually spend quite a bit of time trying to stay current on all technology, and SDN is one of many topics of interest for me, especially since vendors are now trying to spin it as a security solution.

Software-defined networking (SDN) is still discussed as if it’s the secret sauce of the Internet. This despite Gartner placing it at the bottom of its Networking Hype Cycle due to “SDN fatigue” and the technology’s failure, thus far, to gain much traction in the enterprise.

However, the magical SDN unicorn still manages to rear its head in strategy meetings under the new guise of hyper-convergence and the software-defined data center (SDDC). This is probably due to IT leadership’s continued yearning for cost savings, improved security and the achievement of a truly agile organization. But is SDN, with its added complexity and startling licensing costs, really the answer?

You can read the rest of the article here. And yes, there’s a registration wall.

Ending the Tyranny of Expensive Security Tools

My obsession with talking about low-cost security tools started with an article for TechTarget. It morphed into a session for Interop, then a sponsored webinar (by a vendor, go figure) and finally a longer mega-webinar for ipSpace.net. Maybe it’s because I’ve spent most of my time in the non-profit realm, but I simply hate spending money unnecessarily on products that replicate the functionality of something my organization already owns. What follows is an excerpt of a post I wrote for SolarWinds on the topic.

Security tools: sometimes it seems that we never have enough to keep up with the task of protecting the enterprise. Or, at least it seems that way when walking the exhibit floor at most technology conferences. There’s a veritable smorgasbord of tools available, and you could easily spend your entire day looking for the perfect solution for every problem.

But the truth is, IT teams at most organizations simply don’t have the budget or resources to implement dedicated security tools to meet every need and technical requirement. They’re too busy struggling with cloud migrations, SaaS deployments, network upgrades, and essentially “keeping the lights on.”

Have you ever actually counted all the security tools your organization already owns? In addition to the licensing and support costs, every new product requires something most IT environments are in short supply of these days—time.

Optimism fades quickly when you’re confronted by the amount of time and effort required to implement and maintain a security tool in most organizations. As a result, these products end up either barely functional or as shelfware, leaving you to wonder if it’s possible to own too many tools.

There has to be a better way.

Maybe it’s time to stop the buying spree and consider whether you really need to implement another security tool. The fear, uncertainty, and doubt (FUD) that drives the need to increase the budget for improving IT security works for only so long. At some point, the enterprise will demand tangible results for the money spent.

Try a little experiment. Pretend that you don’t have any budget for security tools. You might discover that your organization already owns plenty of products with functionality that can be used for security purposes.

You can read the rest of my rant here.


Security Training for Cheapskates

During a recent webinar I gave, someone asked how soon I would be doing another one. I was flattered, but responded that because of a full-time job as an architect, my time was limited. “Besides,” I said, “you don’t need to wait for me, there’s plenty of free or inexpensive security training available online.”

Security professionals love to share and show off what they’ve learned. Some of us crave the warm fuzzy of helping our colleagues, while others do it to demonstrate their wicked skills or build their resume. Regardless of the motivation, that means there’s always abundant content to help you learn and grow.

Here’s a list of useful sites that I’ll try to keep updated. If you know of others and would like to contribute or if you think the training is outdated or bad, please let me know and I’ll adjust the list accordingly.

Securitytube.net – a project of security researcher Vivek Ramachandran.

Hak5.org – Online security show produced by Darren Kitchen (of Pineapple WiFi router fame) and a collection of nerds who demo security tools and hacks. Includes Metasploit Minute with the awesome @Mubix.

OWASP – The Open Web Application Security Project has lots of “how to” guides and videos.

Offensive Security’s Vimeo Channel

Metasploit Unleashed – made for Hackers for Charity, an ethical hacking course provided free of charge to the InfoSec community in an effort to raise funds and awareness for underprivileged children in East Africa.

Georgia Weidman’s Bulb Security – creator of the Smartphone Pentest Framework, researcher, and author of Penetration Testing: A Hands-On Introduction to Hacking. She offers inexpensive online training in pentesting.

Adrian Crenshaw’s site, Irongeek, with conference and training videos.

Official Black Hat Conference YouTube Channel

DEF CON YouTube Channel

Chaos Communication Congress videos

OpenSecurityTraining.info – Creative Commons-licensed security training site

Cyber Kung Fu for the Eight (8) Domains of CISSP – Training videos from Larry Greenblatt, a CISSP training guru.

Pentester Academy – video training site available for monthly or yearly subscription fee. Some free content.

Pentester Lab – Free online pentesting courses with practice images.

Penetration Testing Practice Lab – a mindmap of vulnerable applications and systems available for practicing pentesting.

ENISA (European Union Agency for Network and Information Security) incident handling training

Carnegie Mellon University Software Engineering Institute (SEI) training – low-cost security training from a research, development and training center involved in computer software and network security.

Cybrary – free online IT and security training that grew out of a Kickstarter project.

Udemy, Coursera, edX and many universities offer MOOCs in computer science and information security. You can get a list from MOOC-Online.


Security Vs. Virtualization

Recently, I wrote an article for Network Computing about the challenges of achieving visibility in the virtualized data center.

Security professionals crave data. Application logs, packet captures, audit trails; we’re vampiric in our need for information about what’s occurring in our organizations. Seeking omniscience, we believe that more data will help us detect intrusions, feeding the inner control freak that still believes prevention is within our grasp. We want to see it all, but the ugly reality is that most of us fight the feeling that we’re flying blind.

In the past, we begged for SPAN ports from the network team, frustrated with packet loss. Then we bought expensive security appliances that used prevention techniques and promised line-rate performance, but were often disappointed when they didn’t “fail open,” creating additional points of failure and impacting our credibility with infrastructure teams.

So we upgraded to taps and network packet brokers, hoping this would offer increased flexibility and insight for our security tools while easing fears of unplanned outages. We even created full network visibility layers in the infrastructure, thinking we were finally ahead of the game.

Then we came face-to-face with the nemesis of visibility: virtualization. It became clear that security architecture would need to evolve in order to keep up.

You can read and comment on the rest of the article here.


Is Your Security Architecture Default-Open or Default-Closed?

One of the most significant failures I see in organizations is an essential misalignment between Operations and Security over the default network state. Is it default-open or default-closed? And I’m talking about more than the configuration of fail-open or fail-closed on your security controls.

Every organization must make a philosophical choice regarding its default security state and the risk it’s willing to accept. For example, you may want to take a draconian approach, i.e., shoot first and ask questions later. This means that after being notified of an incident, you validate the event as benign before resuming normal operations.

But what if the security control detecting the incident negatively impacts operations through enforcement? If your business uptime is too critical to risk unnecessary outages, you may decide to continue operating until a determination is made that an event is actually malicious.

Both choices can be valid, depending upon your risk appetite. But you must make a choice, socializing that decision within your organization. Otherwise, you’re left with confusion and conflict over how to proceed during an incident.
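The choice can be made concrete in a few lines of code. Here is a hypothetical sketch (function and payload names invented for illustration) of a security control whose failure mode encodes that philosophical decision:

```python
def inspect(payload):
    """Stand-in for a security control; raises when the control itself fails."""
    if payload == "CRASH":
        raise RuntimeError("control failure")
    return payload != "malicious"  # verdict on the traffic

def allow_traffic(payload, default_open=True):
    """Pass or block traffic, honoring the organization's chosen default state."""
    try:
        return inspect(payload)
    except RuntimeError:
        # The control itself broke. Default-open keeps the business running
        # (uninspected); default-closed blocks until someone investigates.
        return default_open
```

The interesting line is the `except` clause: the code forces you to pick a value for `default_open`, which is exactly the decision that must be made, and socialized, before the incident rather than during it.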

