
Booth Babe Shame

Yesterday on Twitter I came across the following in my feed:

[Screenshot of the vendor’s tweet, June 23, 2016]

I was horrified and angry, responding with the following:

[Screenshot of my reply]

I also posted something similar on LinkedIn, Facebook and Google+. I never heard from the vendor, but I did hear back from some pretty offended men and women, both technologists and non-technologists.

[Screenshot of some of the responses]

Why are we still having this conversation? Why would ANY vendor think it’s appropriate to make the staff at a technical conference dress like employees at Hooters?

There’s a bitter irony in reading article after article about improving the diversity in STEM fields and then seeing this. I’m getting pretty tired of feeling like a second-class citizen in this industry. I’d like to be able to go to a conference and know that I’ll be treated with the respect due a senior-level technologist, not minimized because I happen to have a uterus. Clearly, I’m not the only person who feels this kind of behavior is no longer acceptable and shouldn’t be tolerated. I encourage anyone equally appalled to call out the vendor in question. If we can’t appeal to their morals, maybe they’ll think twice if it impacts their bottom line.

UPDATE: I’ve been corrected regarding the venue. This event took place at a Vegas night club, but I’m STILL offended. I don’t see any Booth Bros. Why did this vendor feel the need to have women promote their product this way?

UPDATE 6/28: Looks like people at Nutanix are embarrassed, but I’m not sure if this will actually translate to any awareness on their part.

“In a blog post Friday, Julie O’Brien, vice president of corporate marketing at Nutanix, apologized for the stunt while also offering an explanation of what went wrong.”

This was in my feed this morning: Nutanix CMO After Risque Conference Stunt: I Will Resign If It Happens Again.

 


Fixing a Security Program

I’m still unsettled by how many security programs are so fundamentally broken, even those managed and staffed by people with impressive credentials. But when I talk to some of these individuals, I discover the key issue: many seem to think the root cause is bad tools. This is like believing the only thing keeping you from writing the next Great American Novel is that you don’t have John Steinbeck’s pen or Dorothy Parker’s typewriter.

In reality, most of the problems found in security programs are caused by inferior processes, inadequate policies, non-existent documentation and insufficient standards. If buying the best tools actually fixed security problems, wouldn’t we already be done? The truth is that too many people employed in this field are in love with the mystique of security work. They don’t understand the business side, the drudgery, the grunt work necessary to build a successful program.

For those people, here’s my simple guide. I’ve broken it down into the following essential tasks:

  1. Find your crap. Everything. Inventory and categorize your organization’s physical and digital assets according to risk. If you don’t have classification standards, then you must create them. (See the sketch after this list for a scriptable starting point.)
  2. Document your crap. Build run books. Make sure you have diagrams of networks and distributed applications. Create procedure documents such as IR plans. Establish SLOs and KPIs. Create policies and procedures governing the management of your digital assets.
  3. Assess your crap. Examine current state, identify any issues with the deployment or limitations with the product(s). Determine the actual requirements and analyze whether or not the tool actually meets the organization’s needs. This step can be interesting or depressing, depending upon whether or not you’re responsible for the next step.
  4. Fix your crap. Make changes to follow “best practices.” Work with vendors to understand the level of effort involved in configuring their products to better meet your needs. The temptation will be to replace the broken tools, but these aren’t $5 screwdrivers. Your organization made a significant investment of time and money, and if you want to skip this step by replacing a tool, be prepared to provide facts and figures to back up your recommendation. Only after you’ve done this can you go to step 6.
  5. Monitor your crap. If someone else knows your crap is down or compromised before you do, then you’ve failed. The goal isn’t to be the Oracle of Delphi or some fully omniscient being, but simply more proactive. And you don’t need to have all the logs. Identify the logs that are critical and relevant and start there: Active Directory, firewalls, VPN, IDS/IPS.
  6. Replace the crap that doesn’t work. But don’t make the same mistakes. Identify requirements, design the solution carefully, build out a test environment. Make sure to involve necessary stakeholders. And don’t waste time arguing about frameworks, just use an organized method and document what you do.
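
For steps 1 and 2, even a flat, scriptable inventory beats the spreadsheet nobody updates. Here’s a minimal sketch in Python; the classification tiers, fields and scoring are illustrative assumptions, not a standard, so substitute your organization’s own.

```python
from dataclasses import dataclass

# Illustrative classification tiers; substitute your organization's own standard.
TIERS = ("public", "internal", "confidential", "restricted")

@dataclass
class Asset:
    name: str
    owner: str
    kind: str                # e.g. "server", "application", "database"
    classification: str      # one of TIERS
    internet_facing: bool = False

    def risk_score(self) -> int:
        # Crude scoring: data sensitivity plus exposure. Tune to your own risk model.
        return TIERS.index(self.classification) + (2 if self.internet_facing else 0)

# Placeholder entries; in practice this list comes from discovery scans, CMDB exports, etc.
inventory = [
    Asset("crm-db01", "sales-ops", "database", "restricted"),
    Asset("www-prod", "web-team", "server", "internal", internet_facing=True),
    Asset("wiki", "it", "application", "internal"),
]

# Surface the assets to document and assess first (steps 2 and 3).
for asset in sorted(inventory, key=lambda a: a.risk_score(), reverse=True):
    print(f"{asset.risk_score()}  {asset.name:<10} {asset.kind:<12} {asset.classification}")
```

Once something like this exists, the documentation and assessment steps have an authoritative list to work from, and the fix and monitor steps have a priority order.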

Now you have the foundation of any decent information security program. This process isn’t easy and it’s definitely not very sexy. But it will be more effective for your organization than installing new tools every 12 months.

 


Security Word of the Day: Compromise

Living in Washington DC, many of my friends and acquaintances work for the federal government in some capacity. And not all of these people are in IT. When you live in this area, politics seem to permeate everything, making House of Cards seem more like reality television than fiction.

I know a former congressional staff member, and when we run into each other, our conversations inevitably turn to the current state of the federal government. It’s obligatory here, sort of like chatting about the weather anywhere else. She recently made the point that the string of government shutdowns and standoffs could be attributed to one major failing: confusing compromise with capitulation. The government only works when both sides are willing to negotiate, and unsophisticated, inexperienced politicians generally fail to differentiate between the two concepts, causing no end of problems.

I realized this is also a major problem with many information security professionals. At least 50% of our work is focused on discussing and negotiating risk. Does Operations really need to apply that emergency patch to all 2500 servers today to mitigate the latest threat? Is there an active exploit? What about the risk of service interruption? Can it be done in phases?

Often, our relationship to the organization turns into a WWE Main Event: Information Security vs. the Business. But like effective politicians (no, it’s not always an oxymoron), good security professionals pick their battles, living to fight another day. Compromise is not the same as capitulation, and it doesn’t mean we’ve surrendered to the Dark Side when we work with the business to build reasonable timelines for remediating risk.


Splunk Funk

Recently, I was asked to evaluate an organization’s Splunk deployment. This request flummoxed me, because while I’ve always been a fan of the tool’s capabilities, I’ve never actually designed an implementation or administered it. I love the empowerment of people building their own dashboards and alerts, but this only works when there’s a dedicated Splunk-Whisperer carefully overseeing the deployment and socializing the idea of using it as a self-service, cross-functional tool. As I started my assessment, I entered what can only be called a “dark night of the IT soul” because my findings have led me to question the viability of most enterprise monitoring systems.

The original implementer recently moved on to greener pastures and (typically) left only skeletal documentation. As I started my investigation, I discovered a painfully confusing distributed deployment built with little to no understanding of “best practices” for the product. With no data normalization and almost non-existent data input management, the previous admin had created the equivalent of a Splunk Wild West, allowing most data to flow in with little oversight or control. With an obscenely large number of sourcetypes and sources, the situation horrified Splunk support and they told me my only option was to rebuild, a scenario that filled me with nerd-angst.
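
If you ever need to size up this kind of sourcetype sprawl yourself, a metadata query is a cheap way to quantify it before deciding whether a rebuild is really the only option. Here’s a minimal sketch against the Splunk REST API; I’m assuming the management port (8089) is reachable and the requests library is available, and the host, credentials and exact field names are placeholders that may vary by version.

```python
import json
import requests

# Placeholders: point these at your own search head (assumes the REST API on port 8089).
SPLUNK = "https://splunk.example.com:8089"
AUTH = ("admin", "changeme")

# The metadata command summarizes sourcetypes from index metadata rather than
# scanning raw events, so it is a cheap way to measure sourcetype sprawl.
resp = requests.post(
    f"{SPLUNK}/services/search/jobs/export",
    auth=AUTH,
    verify=False,  # lab shortcut only; use a real certificate in production
    data={"search": "| metadata type=sourcetypes index=*", "output_mode": "json"},
    stream=True,
)

sourcetypes = {}
for line in resp.iter_lines():
    if not line:
        continue
    row = json.loads(line).get("result")
    if row and "sourcetype" in row:
        sourcetypes[row["sourcetype"]] = int(row.get("totalCount", 0))

print(f"{len(sourcetypes)} sourcetypes found")
for name, count in sorted(sourcetypes.items(), key=lambda kv: -kv[1])[:20]:
    print(f"{count:>12}  {name}")
```

The same search (| metadata type=sourcetypes index=*) works from the search bar, too; the point is to walk away with a defensible number instead of a vague sense of dread.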

In the past, I’ve written about the importance of using machine data for infrastructure visibility. It’s critical for security, but also performance monitoring and troubleshooting. Log correlation and analysis is a key component of any healthy infrastructure and without it, you’re like a mariner lost at sea. So imagine my horror when confronted by a heaping pile of garbage data thrown into a very expensive application like Splunk.

Most organizations struggle with a monitoring strategy because it simply isn’t sexy. It’s hard to get business leadership excited about dashboards, pie charts and graphs without contextualizing them in a report. “Yeah baby, let me show you those LOOOOW latency times in our web app.” It’s a hard sell, especially when you see the TCO for on-premise log correlation and monitoring tools. Why not focus on improving a product that could bring in more customer dollars or a new service to make your users happier?  Most shops are so focused on product delivery and firefighting, they simply don’t have cycles left for thinking about proactive service management. So you end up with infrastructure train wrecks, with little to no useful monitoring.

While a part of me still believes in using the best tools to gain intelligence and visibility from an infrastructure, I’m tired of struggling. I’m beginning to think I’d be happy with anything, even a Perl script, that works consistently with a low LOE. I need that data now and no longer have the luxury of waiting until there’s a budget for qualified staff and the right application. Lately, I’m finding it pretty hard to resist the siren song of SaaS log management tools that promise onboarding and insight into machine data within minutes, not hours. Just picture it: no more agents or on-premise systems to manage, just immediate visibility into data. Most other infrastructure components have moved to the cloud; maybe it’s inevitable for log management and monitoring. Will I miss the flexibility and power of tools like Splunk and ELK? Probably, but I no longer have the luxury of nostalgia.

 

 


Danger: Stunt Hacking Ahead

On 4/18, Ars Technica reported on a recent 60 Minutes stunt-hacking episode by some telecom security researchers. During the episode, US Representative Ted Lieu had his cell phone calls intercepted via vulnerabilities in the SS7 network. I’m no voice expert*, but it’s clear that both the 60 Minutes story and the Ars Technica article are pretty muddled attempts to dissect the source of these vulnerabilities. This is probably because trying to understand legacy telephony protocols such as SS7 is only slightly less challenging than reading ancient Sumerian.

Since I was dubious regarding the “findings” from these reports, I reached out to my VoIP bestie, @Unregistered436.

[Screenshot of my Twitter exchange with @Unregistered436]

 

While we can argue over how useful media FUD is in getting security issues the attention they deserve, I have other problems with this story:

  • This isn’t new information. Researchers (including the ones appearing in 60 Minutes) have been reporting on problems with SS7 for years. A cursory Google search found the following articles and presentations, including some by the Washington Post.

German Researchers Discover a Flaw That Could Let Anyone Listen to Your Cell Calls 12/18/14

Locating Mobile Phones Using Signalling System #7 1/26/13

For Sale: Systems that Can Secretly Track Where Cellphone Users Go Around the Globe 8/24/14

Toward the HLR, Attacking the SS7 & SIGTRAN Applications H2HC, Sao Paulo, Brazil, December 2009

  • If you’re going to reference critical infrastructure such as the SS7 network, why not discuss how migration efforts with IP convergence in the telco industry relate to this issue and could yield improvements? There are also regulatory concerns that impact the current state of the telecommunications infrastructure. Maybe Ted Lieu should start reading all those FCC documents and reports.
  • Legacy protocols don’t get ripped out or fixed overnight (IPv4 anyone?), so the congressman’s call to have someone “fired” is spurious. If security “researchers” really want things to change, they should contribute to ITU, IEEE and IETF working groups or standards committees and help build better protocols. Or *shudder* take a job with a telecom vendor. We all need to take some ownership to help address these problems.

*If you want to learn more about telecom regulation, you should definitely follow Sherry Lichtenberg. For VoIP and SS7 security, try Patrick McNeil and Philippe Langlois.


The Endpoint Is Dead To Me

Another day, another vulnerability. This month the big non-event was Badlock, following the recent trend of using a splashy name to catch the media’s attention so they could whip the management meat puppets into a paranoid frenzy. Many in the security community seem unperturbed by this latest bug, because let’s face it, nothing can surprise us after the last couple of really grim years.

But then more Java and Adobe Flash bugs were announced, some new iOS attack using Wi-Fi and NTP that can brick a device, followed by an announcement from Apple that they’ve decided to kill QuickTime for Windows instead of fixing a couple of critical flaws. All of this is forcing me to admit the undeniable fact that trying to secure the endpoint is a waste of time. Even if you’re using a myriad of management tools, patching systems, vulnerability scanners and DLP agents, you will fail, because you can never stay ahead of the game.

The endpoint must be seen for what it truly is: a limb to be isolated and severed from the healthy body of the enterprise at the first sign of gangrene. It’s a cootie-ridden device that while necessary for our users, must be isolated as much as possible from anything of value. A smelly turd to be flushed without remorse or hesitation. No matter what the user says, it is not valuable and nothing of value to the organization should ever be stored on it.

This doesn’t mean I’m advocating removing all endpoint protection or patching agents. But it’s time to get real and accept the fact that most of this corporate malware is incapable of delivering what the vendors promise. Moreover, these applications most often negatively impact system performance, which infuriates our users. But instead of addressing this issue, IT departments layer more of this crapware onto the endpoint, which only encourages users to find ways to disable it. One more security vulnerability for the organization. A dedicated attacker will figure out a way to bypass most of it anyway, so why do we bother to trust software that is often more harmful to business productivity than the malware we’re trying to block? We might as well give out “Etch A Sketches” instead of laptops.

[Image: “Google on my Etch A Sketch” by pikajane]


Mindful Security

Recently, I was invited to participate in a podcast on the topic of mindfulness. A friend and former editor of mine thought it would be helpful to discuss how the practice could potentially help stressed-out and oversubscribed IT professionals. I’ve actually had a meditation practice on and off for the last eight years and sometimes it seems like it’s the only thing keeping me sane. Especially in the high-pressure realm of information security. While I’m not an expert on meditation, I’ve spent a considerable amount of time studying and trying to understand its impact. Hopefully, people will find the information useful as an alternative method of reducing the inevitable stress and anxiety we’re all feeling in this industry lately.


Fear and Loathing in Vulnerability Management

Vulnerability management – the security program that everyone loves to hate, especially those on the receiving end of a bunch of arcane reports. Increasingly, vulnerability management programs seem to be monopolizing more and more of a security team’s time, mostly because of the energy involved with scheduling, validating and interpreting scans. But does this effort actually lead anywhere besides a massive time suck and interdepartmental stalemates? I would argue that if the core of your program is built on scanning, then you’re doing it wrong.

The term vulnerability management is a misnomer, because “management” implies that you’re solving problems. Maybe in the beginning scanning and vulnerability tracking helped organizations, but now it’s just another method used by security leadership to justify their ever-increasing black-hole budgets. “See? I told you it was bad. Now I need more $$$$ for this super-terrific tool!”

Vulnerability management programs shouldn’t be based on scanning; they should be focused on the hard stuff: policies, standards and procedures, with scans used for validation. If you’re stuck in an endless cycle of scanning, patching, more scanning and more patching, you’ve failed.

You should be focused on building processes that bake build standards and vulnerability remediation into a deployment procedure. Work with your Infrastructure team to support a DevOps model that eliminates those “pet” systems. Focus on “cattle,” or immutable systems that can and should be continuously replaced when an application or operating system needs to be upgraded. Better yet, use containers. Have a glibc vulnerability? The infrastructure team should be creating new images that can be rolled out across the organization instead of trying to “patch and pray” after finally getting a maintenance window. You should have an environment resilient enough to tolerate change; an infrastructure that’s self-healing because it’s in a state of continuous deployment.
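
To make “rebuild, don’t patch” concrete, here’s a minimal sketch using the Docker SDK for Python. That’s an assumption on my part, since your shop might use Packer, a CI pipeline or plain docker build instead, and the registry, image name and tag below are placeholders.

```python
import docker  # assumption: the Docker SDK for Python (pip install docker)

client = docker.from_env()

# Rebuild the application image on top of the freshest (patched) base image
# instead of patching running systems in place.
image, build_logs = client.images.build(
    path=".",                                        # directory containing the Dockerfile
    tag="registry.example.com/app/web:rebuild-42",   # placeholder registry/name/tag
    pull=True,                                       # always pull the latest base layers
    nocache=True,                                    # don't reuse stale layers
)
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Push the new artifact, then let deployment tooling replace containers wholesale.
client.images.push("registry.example.com/app/web", tag="rebuild-42")
```

The tooling isn’t the point; the point is that remediation becomes “produce a new artifact and redeploy” rather than negotiating maintenance windows to patch live systems.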

I recognize that most organizations aren’t there, but THIS should be your goal, not scanning and patching. Because it’s a treadmill to nowhere. Or maybe you’re happy doing the same thing over and over again with no difference in the result. If so, let me find you a very large hamster wheel.


The Question of Technical Debt

Not too long ago, I came across an interesting blog post by the former CTO of Etsy, Kellan Elliott-McCrea, which made me rethink my understanding and approach to the concept of technical debt. In it, he opined that technical debt doesn’t really exist and it’s an overused term. While specifically referencing code in his discussion, he makes some valid points that can be applied to information security and IT infrastructure.

In the post, he credits Peter Norvig with the quote, “All code is liability.” This echoes Nicholas Carr’s argument that as infrastructure technology becomes more pervasive and less proprietary, the advantage it confers shrinks while the risk it creates grows.

When a resource becomes essential to competition but inconsequential to strategy, the risks it creates become more important than the advantages it provides. Think of electricity. Today, no company builds its business strategy around its electricity usage, but even a brief lapse in supply can be devastating…

Over the years, I’ve collected a fair amount of “war stories” about less than optimal application deployments and infrastructure configurations. Too often, I’ve seen things that make me want to curl up in a fetal position beneath my desk. Web developers failing to close connections or set timeouts to back-end databases, causing horrible latency. STP misconfigurations resulting in network core meltdowns. Data centers built under bathrooms or network hub sites using window unit air conditioners. Critical production equipment that’s end-of-life or not even under support. But is this really technical debt or just the way of doing business in our modern world?

Life is messy and always a “development” project. Maybe the main reason DevOps has gathered such momentum in the IT world is because it reflects the constantly evolving, always shifting, nature of existence. In the real world, there is no greenfield. Every enterprise struggles to find the time and resources for ongoing maintenance, upgrades and improvements. As Elliott-McCrea so beautifully expresses, maybe our need to label this state of affairs as atypical is a cop-out. By turning this daily challenge into something momentous, we make it worse. We accuse the previous leadership and engineering staff  of incompetence. We come to believe that the problem will be fully eradicated through the addition of the latest miracle product. Or we invite some high-priced process junkies in to provide recommendations which often result in inertia.

We end up pathologizing something which is normal, often casting an earlier team as bumbling. A characterization that easily returns to haunt us.

When we take it a step further and turn these conflations into a judgement on the intellect, professionalism, and hygiene of whomever came before us we inure ourselves to the lessons those people learned. Quickly we find ourselves in a situation where we’re undertaking major engineering projects without having correctly diagnosed what caused the issues we’re trying to solve (making recapitulating those issues likely) and having discarded the iteratively won knowledge that had allowed our organization to survive to date.

Maybe it’s time to drop the “blame game” by information security teams when evaluating our infrastructures and applications. Stop crying about technical debt, because there are legitimate explanations for the technology decisions made in our organizations and it’s generally not because someone was inept. We need to realize that IT environments aren’t static and will always be changing and growing. We must transform with them.


Why You Shouldn’t Be Hosting Public DNS

As a former Unix engineer who managed my share of critical network services, one of the first things I do when evaluating an organization is to validate the health of infrastructure components such as NTP, RADIUS, and DNS. I’m often shocked by what I find. Although most people barely understand how these services work, when they break, it can create some troublesome technical issues or even a full meltdown. This is especially true of DNS.

Most problems with DNS implementations are caused by the fact that so few people actually understand how the protocol is supposed to work, including vendors. The kindest thing one can say about DNS is that it’s esoteric. In my IT salad days, I implemented and was responsible for managing the BIND 9.x infrastructure at an academic institution. I helped write and enforce the DNS request policy, cleaned up and policed the namespace, built and hardened the servers, compiled the BIND binaries and essentially guarded the architecture for over a decade. I ended up in this role because no one else wanted it. I took a complete mess of a BIND 4.x deployment and proceeded to untangle a ball of string the size of New Zealand. The experience was an open source rite of passage, helping to make me the engineer and architect I am today. I also admit to being a BIND fangirl, mostly because it’s the core software of most load-balancers and IPAM systems.

This history makes what I’m about to recommend even more shocking. Outside of service providers, I no longer believe that organizations should run their own public DNS servers. Most enterprises get along fine using Active Directory for internal authentication and name resolution, with a DNS provider such as Neustar, Amazon or Akamai resolving external services. They don’t need to take on the risk associated with managing external authoritative DNS servers or even load-balancing most public services.

The hard truth is that external DNS is best left to the experts who have time for the care and feeding of it. One missed security patch, a mistyped entry, a system compromise; any of these could have a significant impact to your business. And unless you’re an IT organization, wouldn’t it be better to have someone else deal with that headache? Besides, as organizations continue to move their services to the cloud, why would you have the name resolution of those resources tied to some legacy, on-premise server? But most importantly, as DDoS attacks become more prevalent, UDP-based services are an easy target, especially DNS. Personally, I’d rather have a service provider deal with the agony of DDoS mitigation. They’re better prepared with the right (expensive) tools and plenty of bandwidth.
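
If you do hand external resolution to a provider, it’s worth periodically confirming that the delegation actually points there and not at some forgotten on-premise box. Here’s a minimal sketch using dnspython (an assumption on my part; dig answers the same question), and the domain and provider name patterns below are illustrative placeholders.

```python
import dns.resolver  # assumption: dnspython 2.x (pip install dnspython)

DOMAIN = "example.com"                            # placeholder: your public zone
PROVIDER_HINTS = ("ultradns", "awsdns", "akam")   # illustrative provider name patterns

# Ask the public DNS tree who is authoritative for the zone.
answers = dns.resolver.resolve(DOMAIN, "NS")
nameservers = sorted(str(r.target).rstrip(".").lower() for r in answers)

for ns in nameservers:
    hosted = any(hint in ns for hint in PROVIDER_HINTS)
    print(f"{ns:<40} {'provider-hosted' if hosted else 'CHECK: possibly on-premise'}")
```

Run it from outside your own network so you see the same answers the rest of the Internet does.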

I write this with great sadness and it even feels like I’m relinquishing some of my nerd status. But never fear, I still have a crush on Paul Vixie and will always choose dig over nslookup.

 
