Tag Archives: Information Security

Security Karma

The Hacking Team debacle continues to make life miserable for defenders everywhere. Any vestige of organizational goodwill I may have built up over the last year is gone after issuing five emergency patch requests in ten days. I’m exhausted and still wondering how many more 0-days are lurking around the corner.

The compromise was epic, with hackers releasing approximately 400GB of data, including thousands of internal emails and memos which were posted on Wikileaks. Reuters reported that all this mayhem was caused by six disgruntled former employees, who also released Hacking Team source code. Frankly, I don’t have much sympathy for David Vincenzetti and his circle of douchery, which includes government clients using Hacking Team’s brand of malware to spy on dissidents. As I followed the story, a Confucian proverb came to mind: “When you ride a tiger, it’s hard to get off.”

And so it has been for The Hacking Team, now bitten by that proverbial tiger and broken, a casualty of their own hubris. Whether they can recover from this disaster is questionable. Their arrogance is surpassed only by that of the security industry’s other sad sack, HBGary, taken down by Anonymous.

There is a story of a soldier who went to see a famous Buddhist monk, Ajahn Chah, to ask why he had been shot on the battlefield. Why had he been chosen to suffer? Was it something he had done in a past life? Ajahn Chah answered that it was the karma of a soldier to be wounded. The real meaning of karma isn’t punishment; it’s simple cause and effect. With the Hacking Team it’s a case of security karma: they chose to enter the arena of offensive security and use the tools of attackers for questionable purposes. By doing so, they increased the odds that they would themselves become an object of retaliation.

Security’s Bad Boys

This week’s latest stunt hacking episode seemed to cement the security community’s reputation as the industry bad boy. The Wired car hacking story demonstrated an absence of the responsible disclosure most security researchers strive to follow. While the story indicated that Miller and Valasek have been working with Chrysler for nine months and that they’re leaving out a key element of the published exploit, there’s still going to be enough left to cause some mayhem when released at Black Hat USA next month. Moreover, the story’s writer and innocent bystanders were often in harm’s way during the demonstration on a major highway in St. Louis.

The annual Black Hat conference in Vegas is an adult version of “look what I can do” for the security set, perfectly placed in the city’s carnival atmosphere. A grand spectacle where every breaker competes to get Daddy’s attention by taking apart the toaster, or car in this case. The media loves this stuff and floods outlets with paranoia-inducing stories in the weeks before and during the conference. What’s so disturbing about these events isn’t the frailty of our technology-enabled stuff, aka the “Internet of Things,” but the need of a subset of people to focus on its faults. The typical rationale many of these researchers give for their theatrical, hype-infested releases at Black Hat and other security conferences is that they can’t get any attention from manufacturers when they follow the path of responsible disclosure. I would argue that this behavior is more about ego than concern for the safety of consumers, because there are plenty of principled researchers, quiet heroes who slog along filing bugs with vendors, unknown and overlooked by the general public.

Most idiots can blow up a cathedral with enough C-4. But it takes a Bernini or a Michelangelo, with hundreds of talented, dedicated artisans, to design and build one. People who will never be remembered by tourists standing in the middle of St. Peter’s, glorying in the majesty of such an achievement.

St. Peter's

Dear Flash, It’s Over

Dear Adobe Flash,

It’s probably insensitive of me to do this in a blog post, but I can’t trust myself to be alone with you anymore. The relationship started out great. Those cute kitten and puppy videos would get me through the most stressful days, when I just needed to turn my brain off after a day of navigating the network poopfest at work. I wish we could go back and start over again, but after three patches in a week, I’m done. This just isn’t working for me anymore. Okay, I know we could still have some fun times, but I simply don’t feel safe with you anymore. So I’m going to have to end it. And to be clear, it’s not me, it’s you.

P.S. I’d just like to point out the irony of a recent Wired article, “Flash.Must.Die.” It has a Flash popup.

Mythology and the OPM Hack

Seems like every security “thought leader” on the planet has commented on the OPM hack, so I might as well join in.

Although the scope of the breach is huge, there’s nothing all that new here. In fact, it’s depressing how familiar the circumstances sound to those of us who work as defenders in an enterprise. For the moment, ignore attribution, because it’s a distraction from the essential problem. OPM was failing security kindergarten. They neglected the most rudimentary basics: patching vulnerabilities, keeping operating systems upgraded, multi-factor authentication for access to critical systems, intrusion detection.

Being on a security team in an organization often means that your cries of despair land on deaf ears, much like those of the mythical Cassandra. She was the daughter of the Trojan king Priam and greatly admired by Apollo, who gave her the gift of prophecy. When she spurned his affections, he converted the gift into a curse: her predictions were still true, but no one would believe them.

As a recent Washington Post story reminded us, many in security have been predicting this meltdown since the ’90s. Now that IT has become a critical component of most organizational infrastructures, there’s more at stake and we’re finally getting the attention we’ve been demanding. But it may be too late in the game, leaving worn-out security pros feeling like the Trojan War’s patron saint of “I told you so’s,” Cassandra.

Cassandra on TVM

The MSSP Is the New SIEM

In the last year, I’ve come to a realization about incident management: in most cases, buying a SIEM is a waste of money for the enterprise. The software and licensing cost isn’t trivial, with some vendors utilizing what I like to call the “heroin dealer,” or consumption, licensing model. The first taste is free or inexpensive, but once you’re hooked, prepare to hand over your checkbook, because the costs often spiral out of control as you add more devices. Additionally, for most small to medium organizations, the complicated configuration often requires a consulting company to assist with the initial implementation and at least one full-time employee to manage and maintain it. Even then, you won’t really have 24×7 monitoring and alerting, because most teams can’t afford enough staff to work in shifts, which means you’re dependent upon email or text alerts. That’s not very useful if your employees actually have lives outside of work. Most often, what you’ll see is an imperfectly implemented SIEM that becomes a noise machine delivering little to no value.

The SIEM’s dirty secret is that it’s a money pit. Once you add up the software and licensing cost, the professional services you spend to get it deployed and regularly upgraded, the hardware, the annual support cost, and staffing, you’re looking at a sizable investment. Now you should ask yourself, are you really reducing risk with a SIEM or just hitting some checkbox on a compliance list?
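To make the money-pit math concrete, here’s a back-of-envelope sketch of the comparison. Every figure below is a hypothetical placeholder for illustration, not a quote from any vendor or MSSP; the point is the categories you have to add up, not the numbers themselves.

```python
# Rough three-year total cost of ownership: in-house SIEM vs. an MSSP.
# All dollar figures are hypothetical, for illustration only.

YEARS = 3

siem = {
    "software_and_licensing": 80_000 * YEARS,  # consumption model grows with device count
    "deployment_consulting": 60_000,           # initial professional services engagement
    "hardware": 40_000,
    "annual_support": 20_000 * YEARS,
    "staffing": 120_000 * YEARS,               # one full-time engineer, fully loaded
}

mssp = {
    "annual_service_fee": 90_000 * YEARS,      # 24x7 outsourced SOC
    "relationship_management": 30_000 * YEARS, # partial FTE to hold them to their SLAs
}

siem_total = sum(siem.values())
mssp_total = sum(mssp.values())

print(f"SIEM 3-year TCO: ${siem_total:,}")
print(f"MSSP 3-year TCO: ${mssp_total:,}")
```

Plug in your own numbers; in my experience the staffing and consulting lines are the ones people forget, and they’re usually what tips the scale.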

Alternatively, let’s look at the managed security service provider (MSSP). For a yearly fee, this outsourced SOC will ingest and correlate your logs, set up alerts, and monitor and/or manage devices 24×7, 365 days a year. An MSSP’s level-1 and level-2 staff significantly reduce the amount of repetitive work and noise your in-house security team must deal with, making it less likely that critical incidents are missed. The downside is that the service is often mediocre, leaving one with the sneaking suspicion that these companies are happy to employ any warm body to answer the phone and put eyeballs on a screen. This means that someone has to manage the relationship, ensuring that service level agreements are met.

While there are challenges with outsourcing, the MSSP is a great lesson in economies of scale: it delivers service more efficiently because it performs the same functions for many customers. While not cutting-edge or innovative, the service is often good enough to allow a security team to focus on the incidents that matter without having to sift through the noise themselves. The caveat? While useful in the short term, security teams should still focus on building proactive controls with automation and anomaly detection for improved response. After all, the real goal is to make less garbage, not more sanitation workers.

Tootsie Roll Pop Security

Recently, it occurred to me that the security of most organizations is like a Tootsie Roll Pop. Hard and crunchy on the outside, soft and chewy on the inside. One bite and you easily get to the yummy center.

How many licks does it take to get to the crown jewels of your organization: your data?

The Security Policy’s Bad Reputation

I had a disturbing conversation with a colleague last night. He told me that he didn’t believe in compliance-only, checkbox security, so why should he waste time on policies and standards? I almost blew a gasket, but because he’s pretty junior, I thought it best to educate him. The following is a summary of what I told him.

Security policies and standards are a foundational set of requirements for your engineering, development and operations teams. Without these boundaries, the entire IT organization floats aimlessly, buying solutions and implementing controls without rhyme or reason. Generally, only oblivious technologists design solutions without referencing policies; most engineers are begging for this guidance from their security teams. Engineers aren’t mind readers. They just want us to tell them what we want: in writing. Without policies and standards, the result is reactive inefficiency, because the security team becomes a chokepoint for every implementation.

Security policies help keep organizations ahead of the risk curve. They mean that risk has been evaluated to some degree and a decision made (by someone) about the level of risk an organization is willing to accept. Any security organization that wants to achieve some level of maturity will spend the cycles to develop its policies or suffer the consequences.

Developing policies and standards isn’t an easy process. Often the right stakeholders haven’t participated in the discussion, or the documents are badly written, outdated or compiled by consultants with no organizational context. Moreover, policy debates often degenerate into arguments over semantics. But the how of getting this done isn’t as important as simply getting it done.

Ultimately, when security professionals don’t create and maintain policies and standards, they have abdicated their responsibility to the organization that employs them.

Security and Ugly Babies

Recently a colleague confessed his frustration to me over the resistance he’s been encountering in a new job as a security architect. He’s been attempting to address security gaps with the operations team, but he’s being treated like a Bond villain and the paranoia and opposition are wearing him down.

It’s a familiar story for those of us in this field. We’re brought into an organization to defend against breaches and engineer solutions to reduce risk, but along the way we often discover an architecture held together by bubble gum and shoestring. We point it out, because it’s part of our role, our vocation to protect and serve. Our “reward” is that we usually end up an object of disdain and fear. We become the outcast on the playground, dirt kicked in our faces by the rest of IT, as we wonder what went wrong.

We forget that in most cases the infrastructure we criticize isn’t just cabling, silicon and metal. It represents the output of hundreds, sometimes thousands of hours from a team of people, most of whom want to do good work but are hampered by tight budgets and limited resources. Maybe they aren’t the best and brightest in their field, but that doesn’t necessarily mean they don’t care. I don’t think anyone starts their day by saying, “I’m going to do the worst job possible today.”

Then the security team arrives on the scene: the perpetual critic. We don’t actually build anything. All we do is tell the rest of IT that their baby is ugly, that they should get a new one. Why are we surprised that they’re defensive and hostile? No one wants to hear that their hard work and long hours have resulted in shit.

What we fail to realize is that this is our baby too, and our feedback would be better received if we were less of a critic and more of an ally.

Are You Trying To Improve Security or Just Kingdom Building?

I’m a huge Seth Godin fan. He’s technically a marketing guru, but he’s so much more than that. His wisdom easily applies to all facets of business and life. A few days ago, I read a post of his, “But do you want to get better?”

…Better means change and change means risk and risk means fear. So the organization is filled with people who have been punished when they try to make things better, because the boss is afraid.

I wonder if Godin ever worked in Information Security.

Some days it seems as though the practice of Infosec is more about how it sounds and looks to outsiders and very little about actual reduction of risk. Most of the time, real improvement to an information security program doesn’t arise from exciting changes or innovative new tools. It often comes from making better policies, standards and procedures. It could mean that you really don’t need five extra staff members or a Hadoop cluster. Maybe it means you learn to operationalize controls, automate and collaborate better with your peers in apps and infrastructure. Worrying less about kingdom building and more about what helps the organization.

But this kind of change is a gargantuan shift in the way many infosec leaders operate. Often, they’re so busy cultivating FUD to get budget, they can’t or won’t stop to ask themselves, “Do I want to make it better?”

I’m a Doctor, Not a Security Expert!

While I don’t completely agree with Rob Ragan’s sentiments in a recent Dark Reading article on the limitations of security awareness training, I think the article makes some good points, especially regarding the appropriate use of technical controls in combination with training to mitigate risk. I love the quote it includes from Adrienne Porter Felt of the Google Chrome security team:

 “…users are neither stupid nor lazy. They are musicians, parents, journalists, firefighters — it isn’t fair to also expect them to become security experts too. And they have other, important things to do besides read our lovingly crafted explanations of SSL. But they still deserve to use the web safely, and it’s on us to figure out that riddle.”
This was fresh in my mind as I assisted my Luddite physical therapist last night in resetting her AOL password. She couldn’t get into her account for an entire day, all because a “security feature” locked the account for suspicious activity. Basically, she bought a new iPad and entered her complex password incorrectly multiple times. But because she used IMAP to connect to her account from her laptop, she had no way of knowing that the account had been locked, and she didn’t understand how to use the UI. So I did the unthinkable: I requested an account reset, then logged into the Gmail account she uses for account recovery and gave her the new password I created for her AOL account. She thanked me and told me how much harder my job was than hers, and that she could never do it. And this admiration was all predicated upon my resetting her password, supposedly one of the most trivial activities in IT. Any user should be able to do this, right?

Earlier this week, my team received a request to allow a user to install the Fitbit application on her company-owned system. It prompted an esoteric discussion on the security of the Internet of Things and the Quantified Self. I recommended that we approve the request and said, “Why are we even having this discussion? We’re an organization that has an employee wellness program, and we’re wasting precious resources debating whether this application increases organizational risk? We have approved applications that are more dangerous, such as Java, Adobe Flash and Internet Explorer.”

So why are we still so disconnected from our users, building user interfaces that are too complex, byzantine security procedures and arcane policies?

I'm a doctor, not a security expert!