Tag Archives: Information Security

Let’s Stop the Security Shaming

When I started this blog over a decade ago, my understanding of postmodernism arose from my college studies of art history and aesthetics. Like Camille Paglia, I was not a fan of the movement or the result: the soul-crushing commoditization of art. I used the title as a pretentious insider joke to highlight the deplorable state of cybersecurity, a field increasingly driven by disingenuous vendors and practitioners who valued profit over stewardship.

Only now, as I have read more about the movement and learned to appreciate the perspective of one founder, Foucault, do I realize how appropriate the title of this blog is. “The insurrection of subjugated knowledges” is Foucault’s famous quote from Society Must Be Defended, where he spoke about how long-suppressed cultural wisdom is rediscovered, challenging the dominant power structure. Originally, I chose the name as a commentary about the current state of cybersecurity because my work felt repetitive and meaningless, like being on an assembly line. Going to different organizations didn’t seem to matter. More mature, less mature, they always had the same problems, which were rarely technical. The biggest challenge I saw was how security practitioners treated the people within the organizations they claimed to serve. Anyone outside the security team was victim-shamed and blamed for their purported cluelessness, as if they were at fault for failing to make cybersecurity the central locus of their daily work. Security organizations often tyrannized the very people they were meant to serve.

This behavior seemed counterproductive and demeaning, especially as I learned more about Nonviolent Communication and other conflict resolution techniques. I considered that peacebuilding methods might be useful in creating collaboration and alignment between stakeholders. Not many security practitioners seemed interested, but as organizations transitioned to DevOps, which emphasized these attributes, I found some like-minded people.

I also observed similarities with the criminal justice system, in which the predominant punitive, shaming narrative hasn’t been shown to decrease crime or support victims. But an alternative approach, restorative justice, shows promise. Restorative justice is focused on repairing the harm from crime and restoring community, while also upholding the dignity of all parties involved. The set of practices aims to process the shame experienced by stakeholders of crime to effectively rebuild relationships in a community, which reduces recidivism. It has already been successfully expanded to educational settings and I wondered if it could be useful within the field of cybersecurity as well.

The research I found helps support this use case. The security community’s fixation on methods based on Protection Motivation Theory, or fear appeals, hasn’t demonstrated much success. Additionally, the use of shaming, highlighting how users fail in their attempts at implementing security, only seems to alienate the very people whose cooperation we need. What does encourage voluntary security behaviors from members of an organization? Feelings of being supported in a workplace community, a primary goal of restorative justice.

As the practice of cybersecurity becomes increasingly commodified, but decreasingly constructive, isn’t it time for us to re-evaluate the way we operate within organizations? Will we continue to use shame as a tactic to enforce desired behaviors even though it has been shown to be fruitless and even harmful? Isn’t it time we evolved past our FUD-infused approaches, recognizing that users are our allies?


Fear and Loathing in Security Dashboards

Recently a colleague asked for my help in understanding why he was seeing a specific security alert on a dashboard. The message said that his database instance was “exposed to a broad public IP range.” He disagreed with this assessment because it misrepresented the configuration context. While the database had a public IP, only one port was available, and it was behind a proxy. The access to this test instance was also restricted to “authorized” IP address ranges. I explained that this kind of information is what security practitioners like to know as they evaluate risk, but then thought, “is this a reasonable alert for a user or just more noise?” When did security dashboards become like the news, more information than we can reasonably take in, overloading our cognitive faculties and creating stress?

I have a complicated relationship with security dashboards. Though I understand different teams need a quick view of what to prioritize, findings are broadly categorized as high, medium, and low without much background. This approach can create confusion and disagreements between groups because those categories are generally aligned to the Vienna Convention on Road Signs and Signals: green is good, red is bad, and yellow means caution. The problem is that a lot of findings end up red and yellow, with categorization dependent upon how well the security team has tuned alerts and upon the organization’s risk tolerance. Most nuance is lost.
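To make the lost nuance concrete, here’s a small illustrative Python sketch (the finding fields are invented for this example) contrasting a naive red/yellow bucket with a categorization that accounts for compensating controls, like the proxied, IP-restricted database from earlier:

```python
# Illustrative only: the field names ("public_ip", "behind_proxy", etc.)
# are hypothetical, not taken from any real dashboard product.

def naive_severity(finding):
    """Typical dashboard logic: public exposure is always red."""
    return "high" if finding["public_ip"] else "low"

def contextual_severity(finding):
    """Downgrade when compensating controls limit real exposure."""
    if not finding["public_ip"]:
        return "low"
    compensating = (finding["behind_proxy"]
                    and finding["restricted_to_known_ranges"]
                    and finding["open_ports"] <= 1)
    return "medium" if compensating else "high"

db = {"public_ip": True, "behind_proxy": True,
      "restricted_to_known_ranges": True, "open_ports": 1}

print(naive_severity(db))       # the red alert my colleague saw
print(contextual_severity(db))  # what context would soften it to
```

Neither function is “right”; the point is that the bucket a finding lands in depends entirely on how much context the logic is allowed to see.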

The other problem is that this data categorization isn’t only seen as a prioritization technique. It can communicate danger. As humans, we have learned to associate red on a dashboard with some level of threat. This might be why some people develop fanariphobia, a fear of traffic lights. Is this an intentional design choice? Historically, Protection Motivation Theory (PMT), which explains how humans are motivated to protect themselves when threatened, has been used as a standard technique within the domain of cybersecurity to justify the use of fear appeals. But what if this doesn’t work as well as we think it does? A recent academic paper reviewed the literature in this space and found conflicting data on the value of fear appeals in promoting voluntary security behaviors; they often backfire, leading to a reduction in desired responses. What does work? The researchers identify Stewardship Theory as a more efficacious approach leading to improved security behaviors by employees. They define it as “a covenantal relationship between the individual and the organization” which “connects both employee and employer to work toward a common goal, characterized by moral commitment between employees and the organization.”

Am I suggesting you should throw your security dashboards away? No, but I think we can agree that they’re a limited view, which can exacerbate conflict between teams. Instead of being the end of a conversation, they should be the beginning, a dialog tool that encourages a collaborative discussion between teams about risk.


Supply Chain Security Jumps the Shark

Can we collectively agree that the supply chain security discussion has grown tiresome? Ten years ago, I couldn’t get anyone to pay attention to the supply chain outside of the federal government crowd, but now it continues to be the security topic du jour. And while this might seem like a good thing, it’s increasingly becoming a distraction from other topics of product security, crowding out meaningful discussions about secure software development. So like a once-loved, long-running TV show that has worn out its welcome but looks for gimmicks to keep everyone’s attention, I’m officially declaring that Supply Chain Security has jumped the shark.

First, let’s clarify the meaning of the term Supply Chain Security. Contrary to what some believe, it’s not synonymous with the software development lifecycle (SDLC). That’s right, it’s time for a NIST definition! NIST, or the National Institute of Standards and Technology, defines supply chain security broadly because this term refers to anything acquired by an organization.

…the term supply chain refers to the linked set of resources and processes between and among multiple levels of an enterprise, each of which is an acquirer that begins with the sourcing of products and services and extends through the product and service life cycle.

Given the definition of supply chain, cybersecurity risks throughout the supply chain refers to the potential for harm or compromise that may arise from suppliers, their supply chains, their products, or their services. Cybersecurity risks throughout the supply chain are the results of threats that exploit vulnerabilities or exposures within products and services that traverse the supply chain or threats that exploit vulnerabilities or exposures within the supply chain itself.

(If you’re annoyed by the US-centric discussion, I encourage you to review the ISO 28000 series on supply chain security management, which I haven’t included here because they charge more than $600 to download the standard.)

Typically, supply chain security refers to third parties, which is why the term is most often used in relation to open source software (OSS). You didn’t create the OSS you’re using, and it exists outside your own SDLC, so you need processes and capabilities in place to evaluate it for risk. But you also need to consider the commercial off-the-shelf software (COTS) you acquire as well. Consider SolarWinds: a series of attacks against the public and private sectors was caused by a breach of a commercial product, and this compromise is what allowed malicious parties into SolarWinds customers’ internal networks. This isn’t a new concept; it just gained widespread attention due to the pervasive use of SolarWinds as an enterprise monitoring system. Most organizations with procurement processes include robust third-party security programs for this reason, but they aren’t perfect.

If supply chain security isn’t a novel topic and isn’t inclusive of the entire SDLC, then why does it continue to captivate the attention of security leaders? Maybe because it presents a measurable, systematic approach to addressing application security issues. Vulnerability management is attractive because it offers the comforting illusion that if you do the right things, like updating OSS, you’ll beat the security game. Unfortunately, the truth is far more complicated. Just take a look at the following diagram that illustrates the typical elements of a product security program:

[Diagram: typical elements of a product security program]

Executives want uncomplicated answers when they ask, “Are we secure?” They often feel overwhelmed by security discussions because they want to focus on what they were hired for: to run a business. As security professionals, we need to remember this motivation as we build programs to comprehensively address security risk. We should be giving our organizations what they need, not more empty security promises based on the latest trends.


Compliance As Property

In engineering, a common approach to security concerns is to address those requirements after delivery. This is inefficient for the following reasons:

  • It misses the opportunity to integrate the requirement(s) during development, thereby avoiding the reengineering needed to accommodate them later.
  • It disempowers engineering teams by outsourcing compliance, and the understanding of the requirements, to another group.

To improve individual and team accountability, it is recommended to borrow a key concept from Restorative Justice: Conflicts as Property. This idea asserts that the disempowerment of individuals in Western criminal justice systems is the result of ceding ownership of conflict to a third party. Similarly, enterprise security programs often operate as “policing” systems, with engineering teams considering security requirements as owned by a compliance group. While appearing to be efficient, this results in the siloing of compliance activities and the infantilization of engineering teams.

Does this mean that engineering teams must become deep experts in all aspects of information security? How can they own security requirements without a full grounding in these concepts? Ownership does not necessarily imply expertise. While one may own a house or vehicle and be responsible for maintenance, most owners will understand when outside expertise is required.

The ownership of all requirements by an engineering team is critical for accountability. To proactively address security concerns, a team must see these requirements as their “property” to address them efficiently during the design and development phases. It is neither effective nor scalable to hand off the management of security requirements to another group. While an information security office can and should validate that requirements have been met in support of Separation of Duties (SoD), ownership for implementation and understanding belongs to the engineering team. 


The Five Stages of Cloud Grief

Over the last five years as a security architect, I’ve been at organizations in various phases of cloud adoption. During that time, I’ve noticed that the most significant barrier isn’t technical. In many cases, public cloud is actually a step up from an organization’s on-premise technical debt.

One of the main obstacles to migration is emotional, and it can derail a cloud strategy faster than any technical roadblock. This is because our organizations are still filled with carbon units with messy emotions who can quietly sabotage the initiative.

The emotional trajectory of an organization attempting to move to the public cloud can be illustrated through the Five Stages of Cloud Grief, which I’ve based on the Kübler-Ross Grief Cycle.

  1. Denial – Senior Leadership tells the IT organization they’re spending too much money and that they need to move everything to the cloud, because it’s cheaper. The CIO curls into fetal position under his desk. Infrastructure staff eventually hear about the new strategy and run screaming to the data center, grabbing onto random servers and switches. Other staff hug each other and cry tears of joy hoping that they can finally get new services deployed before they retire.
  2. Anger – IT staff shows up at all-hands meeting with torches and pitchforks calling for the CIO’s blood and demanding to know if there will be layoffs. The security team predicts a compliance apocalypse. Administrative staff distracts them with free donuts and pizza.
  3. Depression – CISO tells everyone cloud isn’t secure and violates all policies. Quietly packs a “go” bag and stocks bomb shelter with supplies. Infrastructure staff are forced to take cloud training, but continue to miss project timeline milestones while they refresh their resumes and LinkedIn pages.
  4. Bargaining – After senior leadership sets a final “drop dead” date for cloud migration, IT staff complain that they don’t have enough resources. New “cloud ready” staff is hired and enter the IT Sanctum Sanctorum like the Visigoths invading Rome. Information Security team presents threat intelligence report that shows $THREAT_ACTOR_DU_JOUR has pwned public cloud.
  5. Acceptance – 75% of cloud migration goal is met, but since there wasn’t a technical strategy or design, the Opex is higher and senior leadership starts wearing diapers in preparation for the monthly bill. Most of the “cloud ready” staff has moved on to the next job out of frustration and the only people left don’t actually understand how anything works.


Infrastructure-as-Code Is Still *CODE*

After working in a DevOps environment for over a year, I’ve become an automation acolyte. The future is here and I’ve seen the benefits when you get it right: improved efficiency, better control and fewer errors. However, I’ve also seen the dark side with Infrastructure-as-Code (IaC). Bad things happen because people forget that it’s still code and it should be subject to the same types of security controls you use in the rest of your SDLC.

That means including automated or manual reviews, threat modeling and architectural risk assessments. Remember, you’re not only looking for mistakes in provisioning your infrastructure or opportunities for cost control. Some of this code might introduce vulnerabilities that could be exploited by attackers. Are you storing credentials in the code? Are you calling scripts or homegrown libraries and has that code been reviewed? Do you have version control in place? Are you using open source tools that haven’t been updated recently? Are your security groups overly permissive?
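To make this concrete, here’s a deliberately minimal Python sketch of one such check: flagging hardcoded credentials in template text during review. The patterns and the sample template are illustrative only; a real pipeline would use a dedicated secret scanner alongside policy checks for the other questions above.

```python
import re

# Naive patterns for secrets that should never appear in IaC source.
# Illustrative only; real scanners (and real secrets) are far more varied.
SECRET_PATTERNS = [
    re.compile(r'(?i)(password|secret|token)\s*[:=]\s*["\'][^"\']+["\']'),
    re.compile(r'AKIA[0-9A-Z]{16}'),  # AWS access key ID format
]

def find_secrets(iac_text):
    """Return (line number, line) pairs that look like hardcoded secrets."""
    hits = []
    for lineno, line in enumerate(iac_text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# A hypothetical template with exactly the mistake described above.
template = '''
resource "aws_db_instance" "test" {
  username = "admin"
  password = "hunter2"
}
'''

for lineno, line in find_secrets(template):
    print(f"line {lineno}: {line}")
```

Even a crude check like this, run automatically on every commit, catches the class of mistake that slips past humans reviewing hundreds of lines of provisioning code.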

IaC is CODE. Why aren’t you treating it that way?


Security Group Poop

One of the most critical elements of an organization’s security posture in AWS is the configuration of security groups. In my architectural reviews, I often see rules that are confusing, overly permissive and without any clear business justification for the access allowed. Basically, the result is a big, steaming pile of security turds.

While I understand many shops don’t have dedicated network or infrastructure engineers to help configure their VPCs, AWS has created some excellent documentation to make it a bit easier to deploy services there. You can and should plow through the entirety of this information. But for those with short attention spans or very little time, I’ll point out some key principles and “best practices” that you must grasp when configuring security groups.
  • A VPC automatically comes with a default security group and each instance created in that VPC will be associated with it, unless you create a new security group.
  • “Allow” rules are explicit, “deny” rules are implicit. With no rules, the default behavior is “deny.” If you want to authorize ingress or egress access, you add a rule; if you remove a rule, you’re revoking access.
  • The default rule for a security group denies all inbound traffic and permits all outbound traffic. It is a “best practice” to remove this default rule, replacing it with more granular rules that allow outbound traffic specifically needed for the functionality of the systems and services in the VPC.
  • Security groups are stateful. This means that if you allow inbound traffic to an instance on a specific port, the return traffic is automatically allowed, regardless of outbound rules.
  • Use cases that require explicit inbound and outbound rules for application functionality include:
    • ELB/ALBs – If the default outbound rule has been removed from the security group containing an ELB/ALB, an outbound rule must be configured to forward traffic to the instances hosting the service(s) being load balanced.
    • If the instance must forward traffic to a system/service outside the configured security group.
AWS provides documentation, including security group templates, covering multiple use cases.
Security groups are more effective when layered with Network ACLs, providing an additional control to help protect your resources in the event of a misconfiguration. But there are some important differences to keep in mind, according to AWS:
  • Security groups operate at the instance level (first layer of defense); network ACLs operate at the subnet level (second layer of defense).
  • Security groups support allow rules only; network ACLs support both allow and deny rules.
  • Security groups are stateful: return traffic is automatically allowed, regardless of any rules. Network ACLs are stateless: return traffic must be explicitly allowed by rules.
  • With security groups, all rules are evaluated before deciding whether to allow traffic. With network ACLs, rules are processed in number order when deciding whether to allow traffic.
  • A security group applies to an instance only if someone specifies it when launching the instance, or associates it with the instance later on. A network ACL automatically applies to all instances in the subnets it’s associated with (a backup layer of defense, so you don’t have to rely on someone specifying the security group).
Additionally, the AWS Security Best Practices document makes the following recommendations:
  • Always use security groups: They provide stateful firewalls for Amazon EC2 instances at the hypervisor level. You can apply multiple security groups to a single instance, and to a single ENI.
  • Augment security groups with Network ACLs: They are stateless but they provide fast and efficient controls. Network ACLs are not instance-specific so they can provide another layer of control in addition to security groups. You can apply separation of duties to ACLs management and security group management.
  • For large-scale deployments, design network security in layers. Instead of creating a single layer of network security protection, apply network security at external, DMZ, and internal layers. 
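As an illustration of what comprehending your rules can look like in practice, here’s a hedged Python sketch that audits security group data (shaped like the output of the EC2 DescribeSecurityGroups API, assumed already fetched) for ingress rules open to the world. It’s a sketch of the idea, not a production audit tool:

```python
# Sketch of a rule audit over security-group data. The dict shape mirrors
# `aws ec2 describe-security-groups` JSON output; the group below is invented.

WORLD = "0.0.0.0/0"

def overly_permissive(groups):
    """Return (group id, from port, to port) for world-open ingress rules."""
    findings = []
    for group in groups:
        for perm in group.get("IpPermissions", []):
            if any(r.get("CidrIp") == WORLD for r in perm.get("IpRanges", [])):
                findings.append((group["GroupId"],
                                 perm.get("FromPort"), perm.get("ToPort")))
    return findings

groups = [
    {"GroupId": "sg-0aa", "IpPermissions": [
        {"FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},    # SSH open to the world
        {"FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "10.0.0.0/16"}]},  # fine: internal only
    ]},
]

for group_id, from_port, to_port in overly_permissive(groups):
    print(f"{group_id}: ports {from_port}-{to_port} open to {WORLD}")
```

A script like this only tells you what is open; deciding whether a rule has a business justification still requires a human who understands the environment.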

For those who believe the purchase of some vendor magic beans (i.e. a product) will instantly fix the problem, get ready for disappointment. You’re not going to be able to configure that tool properly for enforcement until you comprehend how security groups work and what the rules should be for your environment.


Fixing a Security Program

I’m still unsettled by how many security programs are so fundamentally broken, even those managed and staffed by people with impressive credentials. But when I talk to some of these individuals, I discover the key issue: many seem to think the root cause is bad tools. This is like believing the only thing keeping you from writing the Next Great American Novel is that you don’t have John Steinbeck’s pen or Dorothy Parker’s typewriter.

In reality, most of the problems found in security programs are caused by inferior processes, inadequate policies, non-existent documentation and insufficient standards. If buying the best tools actually fixed security problems, wouldn’t we already be done? The truth is that too many employed in this field are in love with the mystique of security work. They don’t understand the business side, the drudgery, the grunt work necessary to build a successful program.

For those people, here’s my simple guide. I’ve broken it down into the following essential tasks:

  1. Find your crap. Everything. Inventory and categorize your organization’s physical and digital assets according to risk. If you don’t have classification standards, then you must create them.
  2. Document your crap. Build run books. Make sure you have diagrams of networks and distributed applications. Create procedure documents such as IR plans. Establish SLOs and KPIs. Create policies and procedures governing the management of your digital assets.
  3. Assess your crap. Examine current state, identify any issues with the deployment or limitations with the product(s). Determine the actual requirements and analyze whether or not the tool actually meets the organization’s needs. This step can be interesting or depressing, depending upon whether or not you’re responsible for the next step.
  4. Fix your crap. Make changes to follow “best practices.” Work with vendors to understand the level of effort involved in configuring their products to better meet your needs. The temptation will be to replace the broken tools, but these aren’t $5 screwdrivers. Your organization made a significant investment of time and money, and if you want to skip this step by replacing a tool, be prepared to provide facts and figures to back up your recommendation. Only after you’ve done this can you go to step 6.
  5. Monitor your crap. If someone else knows your crap is down or compromised before you do, then you’ve failed. The goal isn’t to be the Oracle of Delphi or some fully omniscient being, but simply more proactive. And you don’t need to have all the logs. Identify the logs that are critical and relevant and start there: Active Directory, firewalls, VPN, IDS/IPS.
  6. Replace the crap that doesn’t work. But don’t make the same mistakes. Identify requirements, design the solution carefully, build out a test environment. Make sure to involve necessary stakeholders. And don’t waste time arguing about frameworks, just use an organized method and document what you do.
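Steps 1 and 2 don’t require fancy tooling to get started. Here’s a minimal Python sketch, with invented classification tiers and asset fields, of what categorizing an inventory by risk might look like; substitute your own classification standards:

```python
# Minimal sketch of step 1: inventory and classify by risk.
# The tiers and asset fields are placeholders, not a recommended standard.

TIERS = {"pii": "high", "financial": "high",
         "internal": "medium", "public": "low"}

def classify(assets):
    """Attach a risk tier to each asset based on the data it holds."""
    return [dict(asset, risk=TIERS.get(asset["data_type"], "medium"))
            for asset in assets]

inventory = [
    {"name": "hr-db", "data_type": "pii"},
    {"name": "wiki", "data_type": "internal"},
    {"name": "www", "data_type": "public"},
]

for asset in classify(inventory):
    print(asset["name"], asset["risk"])
```

The value isn’t in the code; it’s in forcing yourself to enumerate every asset and assign it a tier, which is exactly the drudgery most programs skip.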

Now you have the foundation of any decent information security program. This process isn’t easy and it’s definitely not very sexy. But it will be more effective for your organization than installing new tools every 12 months.


Splunk Funk

Recently, I was asked to evaluate an organization’s Splunk deployment. This request flummoxed me because, while I’ve always been a fan of the tool’s capabilities, I’ve never actually designed an implementation or administered it. I love the empowerment of people building their own dashboards and alerts, but this only works when there’s a dedicated Splunk-Whisperer carefully overseeing the deployment and socializing the idea of using it as a self-service, cross-functional tool. As I started my assessment, I entered what can only be called a “dark night of the IT soul,” because my findings have led me to question the viability of most enterprise monitoring systems.

The original implementer recently moved on to greener pastures and (typically) left only skeletal documentation. As I started my investigation, I discovered a painfully confusing distributed deployment built with little to no understanding of “best practices” for the product. With no data normalization and almost non-existent data input management, the previous admin had created the equivalent of a Splunk Wild West, allowing most data to flow in with little oversight or control. With an obscenely large number of sourcetypes and sources, the situation horrified Splunk support and they told me my only option was to rebuild, a scenario that filled me with nerd-angst.
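For contrast, the input discipline that was missing can be sketched in a few lines of Splunk configuration: assign an explicit sourcetype at ingestion and define its parsing rules, instead of letting everything flow in untyped. The names and paths below are invented for illustration:

```ini
# inputs.conf -- assign an explicit sourcetype and index at ingestion
# (the monitor path, sourcetype and index names are illustrative)
[monitor:///var/log/myapp/app.log]
sourcetype = myapp:app
index = app_prod

# props.conf -- normalize timestamping and line breaking for that sourcetype
[myapp:app]
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\[
```

A handful of stanzas like this per data source is the difference between a searchable system of record and the Wild West described above.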

In the past, I’ve written about the importance of using machine data for infrastructure visibility. It’s critical for security, but also performance monitoring and troubleshooting. Log correlation and analysis is a key component of any healthy infrastructure and without it, you’re like a mariner lost at sea. So imagine my horror when confronted by a heaping pile of garbage data thrown into a very expensive application like Splunk.

Most organizations struggle with a monitoring strategy because it simply isn’t sexy. It’s hard to get business leadership excited about dashboards, pie charts and graphs without contextualizing them in a report. “Yeah baby, let me show you those LOOOOW latency times in our web app.” It’s a hard sell, especially when you see the TCO for on-premise log correlation and monitoring tools. Why not focus on improving a product that could bring in more customer dollars or a new service to make your users happier?  Most shops are so focused on product delivery and firefighting, they simply don’t have cycles left for thinking about proactive service management. So you end up with infrastructure train wrecks, with little to no useful monitoring.

While a part of me still believes in using the best tools to gain intelligence and visibility from an infrastructure, I’m tired of struggling. I’m beginning to think I’d be happy with anything, even a Perl script, that works consistently with a low LOE. I need that data now and no longer have the luxury of waiting until there’s a budget for qualified staff and the right application. Lately, I’m finding it pretty hard to resist the siren song of SaaS log management tools that promise onboarding and insight into machine data within minutes, not hours. Just picture it: no more agents or on-premise systems to manage, just immediate visibility into data.  Most other infrastructure components have moved to the cloud, maybe it’s inevitable for log management and monitoring. Will I miss the flexibility and power of tools like Splunk and ELK? Probably, but I no longer have the luxury of nostalgia.


Danger: Stunt Hacking Ahead

On 4/18, Ars Technica reported on a recent 60 Minutes stunt-hacking episode featuring some telecom security researchers. During the episode, US Representative Ted Lieu had his cell phone calls intercepted via vulnerabilities in the SS7 network. I’m no voice expert*, but it’s clear that both the 60 Minutes story and the Ars Technica article are pretty muddled attempts to dissect the source of these vulnerabilities. This is probably because trying to understand legacy telephony protocols such as SS7 is only slightly less challenging than reading ancient Sumerian.

Since I was dubious regarding the “findings” from these reports, I reached out to my VoIP bestie, @Unregistered436.

[Screenshot: conversation with @Unregistered436, 2016-04-20]


While we can argue over how useful media FUD is in getting security issues the attention they deserve, I have other problems with this story:

  • This isn’t new information. Researchers (including the ones appearing in 60 Minutes) have been reporting on problems with SS7 for years. A cursory Google search found the following articles and presentations, including some by the Washington Post.

German Researchers Discover a Flaw That Could Let Anyone Listen to Your Cell Calls 12/18/14

Locating Mobile Phones Using Signalling System #7 1/26/13

For Sale: Systems that Can Secretly Track Where Cellphone Users Go Around the Globe 8/24/14

Toward the HLR, Attacking the SS7 & SIGTRAN Applications H2HC, Sao Paulo, Brazil, December 2009

  • If you’re going to reference critical infrastructure such as the SS7 network, why not discuss how migration efforts with IP convergence in the telco industry relate to this issue and could yield improvements? There are also regulatory concerns which impact the current state of the telecommunications infrastructure as well. Maybe Ted Lieu should start reading all those FCC documents and reports.
  • Legacy protocols don’t get ripped out or fixed overnight (IPv4 anyone?), so the congressman’s call to have someone “fired” is spurious. If security “researchers” really want things to change, they should contribute to ITU, IEEE and IETF working groups or standards committees and help build better protocols. Or *shudder* take a job with a telecom vendor. We all need to take some ownership to help address these problems.

*If you want to learn more about telecom regulation, you should definitely follow Sherry Lichtenberg. For VoIP and SS7 security, try Patrick McNeil and Philippe Langlois.
