Tag Archives: business strategy

Fear and Loathing in Security Dashboards

Recently, a colleague asked for my help in understanding why he was seeing a specific security alert on a dashboard. The message said that his database instance was “exposed to a broad public IP range.” He disagreed with this assessment because it misrepresented the configuration context. While the database had a public IP, only one port was available, and it sat behind a proxy. Access to this test instance was also restricted to “authorized” IP address ranges. I explained that this is the kind of information security practitioners like to know as they evaluate risk, but then I thought, “Is this a reasonable alert for a user, or just more noise?” When did security dashboards become like the news: more information than we can reasonably take in, overloading our cognitive faculties and creating stress?

I have a complicated relationship with security dashboards. I understand that different teams need a quick view of what to prioritize, but findings are broadly categorized as high, medium, and low without much background. This approach can create confusion and disagreements between groups because those categories are generally aligned to the Vienna Convention on Road Signs and Signals: green is good, red is bad, and yellow means caution. The problem is that a lot of findings end up red and yellow, with the categorization dependent upon how well the security team has tuned its alerts and on your organization’s risk tolerance. Most nuance is lost.
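To make the problem concrete, here’s a toy sketch in Python. The fields, thresholds, and scoring rules are all invented for illustration, not any vendor’s actual dashboard logic, but they show how a naive “public IP equals red” rule erases exactly the context my colleague was pointing to.

```python
# Minimal sketch of why "red" loses nuance. The finding fields and severity
# rules here are made up for illustration -- they are not any vendor's actual
# dashboard logic.

def naive_severity(finding: dict) -> str:
    # Typical dashboard shorthand: public IP means "exposed", full stop.
    return "HIGH" if finding["has_public_ip"] else "LOW"

def contextual_severity(finding: dict) -> str:
    # The same finding, re-scored with the context that mattered here:
    # how many ports are reachable, whether access sits behind a proxy, and
    # whether sources are restricted to approved ranges.
    if not finding["has_public_ip"]:
        return "LOW"
    broad_exposure = (
        finding["open_ports"] > 1
        or not finding["behind_proxy"]
        or not finding["source_restricted"]
    )
    return "HIGH" if broad_exposure else "MEDIUM"

test_db = {
    "has_public_ip": True,
    "open_ports": 1,
    "behind_proxy": True,
    "source_restricted": True,   # limited to "authorized" IP ranges
}

print(naive_severity(test_db))       # HIGH -- another red tile on the dashboard
print(contextual_severity(test_db))  # MEDIUM -- worth a conversation, not a fire drill
```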

The other problem is that this data categorization isn’t only seen as a prioritization technique. It can communicate danger. As humans, we have learned to associate red on a dashboard with some level of threat. This might be why some people develop fanariphobia, a fear of traffic lights. Is this an intentional design choice? Historically, Protection Motivation Theory (PMT), which explains how humans are motivated to protect themselves when threatened, has been used as a standard technique within cybersecurity to justify the use of fear appeals. But what if this doesn’t work as well as we think it does? A recent academic paper reviewed the literature in this space and found conflicting data on the value of fear appeals in promoting voluntary security behaviors. They often backfire, leading to a reduction in the desired responses. What does work? The researchers identify Stewardship Theory as a more efficacious approach, leading to improved security behaviors by employees. They define it as “a covenantal relationship between the individual and the organization” which “connects both employee and employer to work toward a common goal, characterized by moral commitment between employees and the organization.”

Am I suggesting you should throw your security dashboards away? No, but I think we can agree that they’re a limited view, which can exacerbate conflict between teams. Instead of being the end of a conversation, they should be the beginning, a dialog tool that encourages a collaborative discussion between teams about risk.


Dancing with the Cloud

Recently, I’ve written about the dangers posed by technology fallacies, and one of the most frustrating for me involves discussions of “best in class.” In my experience, this mindset causes technology teams to get wrapped up in too many pointless discussions followed by never-ending proof-of-concept work, all in search of that non-existent perfect tool. The truth is that most organizations don’t need the best, they need “good enough” so they can get on with business. But this delusion has more serious consequences in the cloud. When you choose tools solely based on the requirement of being “best in class,” you can compromise the integrity of your cloud architecture. Without considering the context, your selection could violate a sacred principle of data center design: minimizing the size of a failure domain.

For many, the cloud’s abstraction of the underlying network reduced complexity that often seemed like arcane and mystical knowledge. It also allowed development teams to work unencumbered by the fear of seemingly capricious network engineers. However, the downside of infrastructure-as-a-service (IaaS) is that this same obfuscation allows those without a background in traditional data center design to make critical errors in fault tolerance. We’ve all heard the horror stories about cloud applications failing because they were located in only a single availability zone (AZ) or region. Even worse are the partial outages caused by application dependencies that cross an AZ or region. While this could still happen with physical data centers, it was more obvious because you could see the location of a system in a rack, and your network engineers would call you out when you didn’t build in fault tolerance and thwarted their planned maintenance windows.
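Here’s a deliberately tiny sketch of the failure-domain check I wish more teams ran. The inventory is hypothetical (in practice you’d pull it from your cloud provider’s API or your CMDB), but the principle is the point: any service confined to a single AZ is one outage away from becoming the next horror story.

```python
# Toy failure-domain check over a hypothetical inventory. Real data would come
# from the cloud provider's API or a CMDB; the names below are invented.

from collections import defaultdict

inventory = [
    {"service": "orders-api", "instance": "i-001", "az": "us-east-1a"},
    {"service": "orders-api", "instance": "i-002", "az": "us-east-1b"},
    {"service": "billing-db", "instance": "i-003", "az": "us-east-1a"},
    {"service": "billing-db", "instance": "i-004", "az": "us-east-1a"},  # same AZ!
]

# Group availability zones by service so single-AZ deployments stand out.
zones_by_service = defaultdict(set)
for item in inventory:
    zones_by_service[item["service"]].add(item["az"])

for service, zones in zones_by_service.items():
    if len(zones) < 2:
        print(f"WARNING: {service} runs in a single failure domain: {sorted(zones)}")
```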

Today, it’s also common for organizations to use a variety of software-as-a-service (SaaS) applications with no knowledge of which underlying cloud providers are being used or how those services are designed. While this simplicity is often beneficial for the business because it increases velocity in service delivery, it can also create blind spots that violate the same failure-domain principles as IaaS. Only the most persistent technologists can unmask the inner workings of these applications to determine how they’re deployed and whether they align with their organization’s infrastructure design. Unfortunately, the business doesn’t always understand this nuance and can fall prey to the “best in class” fallacy, leading to brittle environments built on large failure domains. By exchanging their sluggish on-premise systems for SaaS, the organization often just trades one set of risks for another.

Ultimately, when choosing capabilities, the better recommendation is to “dance with the cloud that brought you.” Instead of worrying about “best in class,” select technologies that sit closer to your infrastructure by leveraging the services of your cloud provider where feasible. This translates to a better technology architecture for your organization because the cloud provider is making sure that its managed services are low-latency, resilient, and highly available. It may not always be possible, but by taking the failure-domain principle into consideration when selecting and implementing solutions, you’ll achieve better service delivery for your organization.


Trapped by Technology Fallacies

After working in tech at several large companies over a couple of decades, I’ve observed some of the worst fallacies that damage organizations. They don’t arise from malice, but from a scarcity of professional reflection in our field. Technologists often jump to problem solving before spending sufficient time on problem setting, which leads to the creation of inappropriate and brittle solutions. Donald A. Schön discusses this disconnect in his seminal work, The Reflective Practitioner,

…with this emphasis on problem solving, we ignore problem setting, the process by which we define the decision to be made, the ends to be achieved, the means which may be chosen.

Problem solving relies on selecting from a menu of previously established formulas. While many of these tactics can be effective, let’s examine some of the dysfunctional approaches used by technologists that lead to pain for their organizations.
  • Fallacy #1 – Hammer-Nail: Technologists often assume that all problems can be beaten into submission with a technology hammer. It’s like the bride’s father in My Big Fat Greek Wedding, who believes that Windex can cure any ill. Similarly, technologists think that every challenge is just missing a certain type of technology to resolve it. This is despite the fact that we generally speak about maturity models in terms of people, process, technology, and culture. I can’t tell you how often I’ve seen someone design and implement a seemingly elegant solution only to have it rejected because it was developed without understanding the context of the problem.
  • Fallacy #2 – Best in Class: I’ve heard this so many times in my career that I just want to stand on a chair and shake my fist in the middle of a Gartner conference. Most organizations don’t need “best in class,” they need “good enough.” The business needs fast and frugal solutions to keep it productive and efficient, but technologists are often too busy navel-gazing to listen.
  • Fallacy #3 – Information Technology is the center of the business universe: I once worked for a well-known bank that had an informal motto, “We’re a technology company that happens to be a bank.” The idea was that because they were so reliant on technology, it transformed them into a cool tech company. I used to respond with, “We also use a lot of electricity; does that make us a utility company?” Maybe a little hyperbolic, but I was trying to make the point that IT Doesn’t Matter. When Nicholas Carr used that phrase as the title of his Harvard Business Review article in 2003, he was considering technology in the historical context of other advances such as electricity and telephones, “When a resource becomes essential to competition but inconsequential to strategy, the risks it creates become more important than the advantages it provides.” In the early days of tech, it gave you an edge. Today, when a core system fails, it can sink your business. The best solutions are often invisible to the organization, letting it focus on its core competencies.
While technology can be very effective at solving technical problems, most organizational issues are adaptive challenges. In The Practice of Adaptive Leadership, the authors identify this failure to differentiate between the two as the root cause of business difficulties,

The most common cause of failure in leadership is produced by treating adaptive challenges as if they were technical problems. What’s the difference? While technical problems may be very complex and critically important (like replacing a faulty heart valve during cardiac surgery), they have known solutions that can be implemented by current know-how. They can be resolved through the application of authoritative expertise and through the organization’s current structures, procedures, and ways of doing things. Adaptive challenges can only be addressed through changes in people’s priorities, beliefs, habits, and loyalties. Making progress requires going beyond any authoritative expertise to mobilize discovery, shedding certain entrenched ways, tolerating losses, and generating the new capacity to thrive anew.

The end goals that we’re trying to reach can’t be clearly established if we don’t sufficiently reflect on the problem. When we jump to problem solving over problem setting, we’re assuming a level of confidence that hasn’t been earned. We’ve made assumptions about the way systems should work without thoroughly investigating how they actually function. When the postmodern critic Michel Foucault speaks of “an insurrection of subjugated knowledges,” he’s questioning the certainty of our perceptions when we’ve disqualified information that might be important to gaining a broader perspective. Technologists are more effective when they recognize the inherent expertise of the non-technologists in the businesses they serve and operate as trusted partners who understand change leadership. Instead of serving the “religion of tech,” we should focus on delivering what organizations really need.

Compliance As Property

In engineering, a common approach to security concerns is to address those requirements after delivery. This is inefficient for the following reasons:

  • Fails to consider how the requirements could be integrated during development, which would avoid reengineering the solution later to accommodate them.
  • Disempowers engineering teams by outsourcing compliance and the understanding of the requirements to another group.

To improve individual and team accountability, it is recommended to borrow a key concept from Restorative Justice: Conflict as Property. This concept asserts that the disempowerment of individuals in western criminal justice systems is the result of ceding ownership of conflict to a third party. Similarly, enterprise security programs often operate as “policing” systems, with engineering teams considering security requirements to be owned by a compliance group. While appearing to be efficient, this results in the siloing of compliance activities and the infantilization of engineering teams.

Does this mean that engineering teams must become deep experts in all aspects of information security? How can they own security requirements without a full grounding in these concepts? Ownership does not necessarily imply expertise. While one may own a house or vehicle and be responsible for maintenance, most owners will understand when outside expertise is required.

The ownership of all requirements by an engineering team is critical for accountability. To proactively address security concerns, a team must see these requirements as their “property” to address them efficiently during the design and development phases. It is neither effective nor scalable to hand off the management of security requirements to another group. While an information security office can and should validate that requirements have been met in support of Separation of Duties (SoD), ownership for implementation and understanding belongs to the engineering team. 
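What does owning a requirement look like in practice? One pattern is to encode the requirement in the engineering team’s own repository and pipeline, with the security office validating the results separately. The sketch below is a minimal, hypothetical example (the configuration structure and the specific requirements are invented), written as plain test functions a team might run in its CI pipeline.

```python
# A hypothetical service configuration and three security requirements the
# engineering team "owns" as tests in its own repository. The config fields
# and requirement values are illustrative, not a prescribed standard.

SERVICE_CONFIG = {
    "tls_min_version": "1.2",
    "public_ports": [443],
    "admin_interface_public": False,
}

def test_tls_minimum_version():
    # Requirement: transport encryption at TLS 1.2 or higher.
    version = tuple(int(part) for part in SERVICE_CONFIG["tls_min_version"].split("."))
    assert version >= (1, 2)

def test_only_https_exposed():
    # Requirement: nothing but 443 is reachable from outside the proxy.
    assert SERVICE_CONFIG["public_ports"] == [443]

def test_admin_interface_not_public():
    # Requirement: management interfaces stay on the internal network.
    assert SERVICE_CONFIG["admin_interface_public"] is False

if __name__ == "__main__":
    # A test runner would normally collect these; calling them directly keeps
    # the sketch self-contained.
    test_tls_minimum_version()
    test_only_https_exposed()
    test_admin_interface_not_public()
    print("security requirements met")
```

The point isn’t the specific checks; it’s that the requirements live where the team works, and failing them blocks the team’s own build rather than generating a ticket from someone else’s dashboard.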


The Five Stages of Cloud Grief

Over the last five years as a security architect, I’ve been at organizations in various phases of cloud adoption. During that time, I’ve noticed that the most significant barrier isn’t technical. In many cases, public cloud is actually a step up from an organization’s on-premise technical debt.

One of the main obstacles to migration is emotional, and it can derail a cloud strategy faster than any technical roadblock. This is because our organizations are still filled with carbon units who have messy emotions and can quietly sabotage the initiative.

The emotional trajectory of an organization attempting to move to the public cloud can be illustrated through the Five Stages of Cloud Grief, which I’ve based on the Kübler-Ross Grief Cycle.

  1. Denial – Senior leadership tells the IT organization they’re spending too much money and that they need to move everything to the cloud, because it’s cheaper. The CIO curls into the fetal position under his desk. Infrastructure staff eventually hear about the new strategy and run screaming to the data center, grabbing onto random servers and switches. Other staff hug each other and cry tears of joy, hoping that they can finally get new services deployed before they retire.
  2. Anger – IT staff shows up at all-hands meeting with torches and pitchforks calling for the CIO’s blood and demanding to know if there will be layoffs. The security team predicts a compliance apocalypse. Administrative staff distracts them with free donuts and pizza.
  3. Depression – CISO tells everyone cloud isn’t secure and violates all policies. Quietly packs a “go” bag and stocks bomb shelter with supplies. Infrastructure staff are forced to take cloud training, but continue to miss project timeline milestones while they refresh their resumes and LinkedIn pages.
  4. Bargaining – After senior leadership sets a final “drop dead” date for cloud migration, IT staff complain that they don’t have enough resources. New “cloud ready” staff is hired and enter the IT Sanctum Sanctorum like the Visigoths invading Rome. Information Security team presents threat intelligence report that shows $THREAT_ACTOR_DU_JOUR has pwned public cloud.
  5. Acceptance – 75% of the cloud migration goal is met, but since there wasn’t a technical strategy or design, the OpEx is higher and senior leadership starts wearing diapers in preparation for the monthly bill. Most of the “cloud ready” staff have moved on to the next job out of frustration, and the only people left don’t actually understand how anything works.



Fixing a Security Program

I’m still unsettled by how many security programs are fundamentally broken, even those managed and staffed by people with impressive credentials. But when I talk to some of these individuals, I discover the key issue: many seem to think the root cause is bad tools. This is like believing the only thing keeping you from writing the next Great American Novel is that you don’t have John Steinbeck’s pen or Dorothy Parker’s typewriter.

In reality, most of the problems found in security programs are caused by inferior processes, inadequate policies, non-existent documentation, and insufficient standards. If buying the best tools actually fixed security problems, wouldn’t we already be done? The truth is that too many people employed in this field are in love with the mystique of security work. They don’t understand the business side, the drudgery, the grunt work necessary to build a successful program.

For those people, here’s my simple guide. I’ve broken it down into the following essential tasks:

  1. Find your crap. Everything. Inventory and categorize your organization’s physical and digital assets according to risk. If you don’t have classification standards, then you must create them. (A bare-bones sketch of what steps 1 and 2 can look like follows this list.)
  2. Document your crap. Build run books. Make sure you have diagrams of networks and distributed applications. Create procedure documents such as IR plans. Establish SLOs and KPIs. Create policies and procedures governing the management of your digital assets.
  3. Assess your crap. Examine the current state and identify any issues with the deployment or limitations of the product(s). Determine the actual requirements and analyze whether the tools actually meet the organization’s needs. This step can be interesting or depressing, depending upon whether or not you’re responsible for the next step.
  4. Fix your crap. Make changes to follow “best practices.” Work with vendors to understand the level of effort involved in configuring their products to better meet your needs. The temptation will be to replace the broken tools, but these aren’t $5 screwdrivers. Your organization made a significant investment of time and money, and if you want to skip this step by replacing a tool, be prepared to provide facts and figures to back up your recommendation. Only after you’ve done this can you go to step 6.
  5. Monitor your crap. If someone else knows your crap is down or compromised before you do, then you’ve failed. The goal isn’t to be the Oracle of Delphi or some fully omniscient being, but simply to be more proactive. And you don’t need all the logs. Identify the logs that are critical and relevant and start there: Active Directory, firewalls, VPN, IDS/IPS.
  6. Replace the crap that doesn’t work. But don’t make the same mistakes. Identify requirements, design the solution carefully, build out a test environment. Make sure to involve necessary stakeholders. And don’t waste time arguing about frameworks, just use an organized method and document what you do.
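For steps 1 and 2, the sketch below shows roughly what I mean by finding and documenting your crap: one inventory record per asset with an owner, a classification, and a pointer to a runbook. The fields, the classification tiers, and the wiki URL are hypothetical; use whatever classification standard your organization actually defines.

```python
# Hypothetical asset inventory records for steps 1 and 2. The fields, the
# classification tiers, and the wiki URL are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    name: str
    owner: str
    data_classification: str   # e.g. "public", "internal", "confidential"
    internet_facing: bool
    runbook_url: Optional[str] = None

ASSETS = [
    Asset("payroll-db", "hr-platform", "confidential", internet_facing=False),
    Asset("marketing-site", "web-team", "public", internet_facing=True,
          runbook_url="https://wiki.example.internal/runbooks/marketing-site"),
]

# "Document your crap": surface high-risk assets that still have no runbook.
for asset in ASSETS:
    if asset.data_classification == "confidential" and asset.runbook_url is None:
        print(f"{asset.name}: confidential asset with no runbook -- fix this first")
```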

Now you have the foundation of any decent information security program. This process isn’t easy and it’s definitely not very sexy. But it will be more effective for your organization than installing new tools every 12 months.

 


Fear and Loathing in Vulnerability Management

Vulnerability management – the security program that everyone loves to hate, especially those on the receiving end of a bunch of arcane reports. Increasingly, vulnerability management programs seem to be monopolizing more and more of a security team’s time, mostly because of the energy involved with scheduling, validating and interpreting scans. But does this effort actually lead anywhere besides a massive time suck and interdepartmental stalemates? I would argue that if the core of your program is built on scanning, then you’re doing it wrong.

The term vulnerability management is a misnomer, because “management” implies that you’re solving problems. Maybe in the beginning scanning and vulnerability tracking helped organizations, but now it’s just another method used by security leadership to justify their ever-increasing black-hole budgets. “See? I told you it was bad. Now I need more $$$$ for this super-terrific tool!”

Vulnerability management programs shouldn’t be based on scanning; they should be focused on the hard stuff: policies, standards, and procedures, with scans used for validation. If you’re stuck in an endless cycle of scanning, patching, more scanning, and more patching, you’ve failed.

You should be focused on building processes that bake build standards and vulnerability remediation into the deployment procedure. Work with your infrastructure team to support a DevOps model that eliminates those “pet” systems. Focus on “cattle,” immutable systems that can and should be continuously replaced when an application or operating system needs to be upgraded. Better yet, use containers. Have a glibc vulnerability? The infrastructure team should be creating new images that can be rolled out across the organization instead of trying to “patch and pray” after finally getting a maintenance window. You should have an environment resilient enough to tolerate change: an infrastructure that’s self-healing because it’s in a state of continuous deployment.
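To illustrate the rebuild-don’t-patch model, here’s a toy sketch. The image names and the base-image mapping are hypothetical, but the idea is that a vulnerability in a shared base image becomes a list of images to rebuild and roll out, not a list of hosts to patch during some distant maintenance window.

```python
# Toy illustration of "rebuild, don't patch". The image inventory and base
# images below are hypothetical.

BASE_IMAGE_OF = {
    "orders-api:2.4.1": "debian-base:12.1",
    "billing-worker:1.9.0": "debian-base:12.1",
    "legacy-reports:0.3.7": "centos-base:7.9",
}

def images_to_rebuild(vulnerable_base: str) -> list:
    """Return every service image built on the affected base image."""
    return [image for image, base in BASE_IMAGE_OF.items() if base == vulnerable_base]

# A glibc advisory lands for the shared Debian base image:
for image in images_to_rebuild("debian-base:12.1"):
    print(f"rebuild {image} on the patched base, test, then roll it out")
```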

I recognize that most organizations aren’t there, but THIS should be your goal, not scanning and patching. Because it’s a treadmill to nowhere. Or maybe you’re happy doing the same thing over and over again with no difference in the result. If so, let me find you a very large hamster wheel.


 


Being the Security Asshole

Yes, I have become a security asshole. The one who says “no” to a technology. But I say it because of risk, and not just security risk. And I’m angry, because my “no” is a last resort after many battles with developer, engineering, and operations teams in organizations that struggle to get the basics right.

I try to work with teams to build a design. I bring my own architecture documents and diagrams, including PowerPoint presentations with talking points. I create strategy roadmaps explaining my vision for the security architecture of the organization. I detail our team’s progress and explain how we want to align with the rest of enterprise strategy and architecture. I stress that our team exists to support the business.

What do I get in return? Diagrams so crude they could be drawn in crayon or made with Legos. They usually don’t even have IP addresses or port numbers. I have to argue with sysadmins about whether Telnet is still an acceptable protocol in 2015. I’m subjected to rehashed Kool-Aid about how some product is going to rescue the organization, even though I found significant vulnerabilities during the assessment that the vendor doesn’t want to fix.

And if this means that you hate me, fine. I’ll be the asshole. I’ll embrace it. But at least I can have a clear conscience, because I’ve done my best to safeguard the organization that’s paying me.


Are You There Business? It’s Me, Information Security

Are you there Business? It’s me, Information Security. We need to talk. I know you’re busy generating revenue and keeping the lights on, but we’ve got some critical matters to discuss. I feel like everyone hates me and thinks I’m a nag. Every time I want to talk to you about patching and vulnerabilities, I’m ignored. I’m so scared, because I’m always trying to secure the network so the bad guys don’t get into it, but no one wants to help me make sure that doesn’t happen. Honestly, I don’t feel heard. It seems like everything is about you all the time. I get it. I wouldn’t have a job without you, but I really need to feel respected in this relationship. Because let’s be honest with each other for once, if you’re breached, I’m probably the one that’s getting fired.

I understand that you don’t always remember passwords, which is why you write them down or use your pet’s name. I know that it takes a lot of time to follow rules that don’t always make sense. But this is important, so could you find a way to work with me? I’d really appreciate it, because I feel so frightened and alone.

Yours Truly,

Infosec

P.S. Could you please stop using the word “cyber” in everything? I really hate that term.

P.P.S. And yes, I’m blocking porn because it really does have malware. But also because HR told me to. So please don’t get mad at me.


Meetings: the First Horseman of the Apocalypse

While browsing the Interweb for daily threat intelligence this morning*, I found an interesting research paper, “Meetings and More Meetings: The Relationship Between Meeting Load and the Daily Well-Being of Employees.” Anyone with some amount of seniority in IT is familiar with the concept of “death by meeting,” so I was excited to find scientific research (!) confirming that meetings are the soul-sucking creation of Satan.

Meetings are an integral part of organizational life; however, few empirical studies have systematically examined the phenomenon and its effects on employees. By likening work meetings to interruptions and daily hassles, the authors proposed that meeting load (i.e., frequency and time spent) can affect employee well-being. For a period of 1 week, participants maintained daily work diaries of their meetings as well as daily self-reports of their well-being. Using hierarchical linear modeling analyses, the authors found a significant positive relationship between number of meetings attended and daily fatigue as well as subjective workload (i.e., more meetings were associated with increased feelings of fatigue and workload).

No shit, Dick Tracy. Every morning I check my calendar with trepidation, wondering how much of my day will be wasted watching pointless PowerPoint presentations, the “jazz hands” of the modern workplace. How often will I be forced to feign attention as leadership drones on about strategy? Then I realized that civilization will not be destroyed by weapons of mass destruction or global warming, but by meetings. As T.S. Eliot said,

This is the way the world ends
Not with a bang but a whimper.

It seems appropriate to end with an xkcd comic on the topic.

Meeting from xkcd.com

*Who am I kidding, I was watching silly videos like “The Running of the Pugs.” I blame Adobe Flash, not just for being insecure, but for being the harbinger of time-wasting.
