Tag Archives: Information Security

Supply Chain Security Jumps the Shark

Can we collectively agree that the supply chain security discussion has grown tiresome? Ten years ago, I couldn’t get anyone to pay attention to the supply chain outside of the federal government crowd, but now it continues to be the security topic du jour. And while this might seem like a good thing, it’s increasingly becoming a distraction from other topics of product security, crowding out meaningful discussions about secure software development. So like a once-loved, long-running TV show that has worn out its welcome but looks for gimmicks to keep everyone’s attention, I’m officially declaring that Supply Chain Security has jumped the shark.

First, let’s clarify the meaning of the term Supply Chain Security. Contrary to what some believe, it’s not synonymous with the software development lifecycle (SDLC). That’s right, it’s time for a NIST definition! NIST, the National Institute of Standards and Technology, defines the supply chain broadly, because the term covers anything an organization acquires.

…the term supply chain refers to the linked set of resources and processes between and among multiple levels of an enterprise, each of which is an acquirer that begins with the sourcing of products and services and extends through the product and service life cycle.

Given the definition of supply chain, cybersecurity risks throughout the supply chain refers to the potential for harm or compromise that may arise from suppliers, their supply chains, their products, or their services. Cybersecurity risks throughout the supply chain are the results of threats that exploit vulnerabilities or exposures within products and services that traverse the supply chain or threats that exploit vulnerabilities or exposures within the supply chain itself.

(If you’re annoyed by the US-centric discussion, I encourage you to review the ISO 28000 series on supply chain security management, which I haven’t included here because they charge more than $600 to download the standards.)

Typically, supply chain security refers to third parties, which is why the term is most often used in relation to open source software (OSS). You didn’t create the OSS you’re using, and it exists outside your own SDLC, so you need processes and capabilities in place to evaluate it for risk. But you also need to consider the commercial off-the-shelf software (COTS) you acquire. Consider SolarWinds: a series of attacks against the public and private sectors was caused by a breach of a commercial product. That compromise is what allowed malicious parties into SolarWinds customers’ internal networks. This isn’t a new concept, it just gained widespread attention due to the pervasive use of SolarWinds as an enterprise monitoring system. Most organizations with procurement processes include robust third-party security programs for this reason, but they aren’t perfect.
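To make “evaluate it for risk” concrete, here’s a minimal sketch of one such capability: asking the public OSV.dev vulnerability database what’s already known about a dependency before it ships. The package name and version below are placeholders, and a real program would wire this into CI rather than run it ad hoc.

```python
import json
import urllib.request

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query the public OSV.dev API for known vulnerabilities in a package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Placeholder dependency: flag known issues before it ever reaches production.
for vuln in known_vulns("jinja2", "2.4.1"):
    print(vuln["id"], vuln.get("summary", ""))
```

None of this replaces a third-party security program, but it’s the kind of boring, repeatable check such a program is built from.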

If supply chain security isn’t a novel topic and isn’t inclusive of the entire SDLC, then why does it continue to captivate the attention of security leaders? Maybe because it presents a measurable, systematic approach to addressing application security issues. Vulnerability management is attractive because it offers the comforting illusion that if you do the right things, like updating OSS, you’ll beat the security game. Unfortunately, the truth is far more complicated. Just take a look at the following diagram that illustrates the typical elements of a product security program:

[Diagram: typical elements of a product security program]

Executives want uncomplicated answers when they ask, “Are we secure?” They often feel overwhelmed by security discussions because they want to focus on what they were hired for: to run a business. As security professionals, we need to remember this motivation as we build programs to comprehensively address security risk. We should be giving our organizations what they need, not more empty security promises based on the latest trends.


Compliance As Property

In engineering, a common approach to security concerns is to address those requirements after delivery. This is inefficient for the following reasons:

  • It fails to consider how the requirement(s) could have been integrated during development, which would have avoided reengineering to accommodate them later.
  • It disempowers engineering teams by outsourcing compliance, and the understanding of the requirements, to another group.

To improve individual and team accountability, it is recommended to borrow a key concept from Restorative Justice: Conflict as Property. This concept asserts that the disempowerment of individuals in Western criminal justice systems is the result of ceding ownership of conflict to a third party. Similarly, enterprise security programs often operate as “policing” systems, with engineering teams considering security requirements as owned by a compliance group. While appearing to be efficient, this results in the siloing of compliance activities and the infantilization of engineering teams.

Does this mean that engineering teams must become deep experts in all aspects of information security? How can they own security requirements without a full grounding in these concepts? Ownership does not necessarily imply expertise. While one may own a house or vehicle and be responsible for maintenance, most owners will understand when outside expertise is required.

The ownership of all requirements by an engineering team is critical for accountability. To proactively address security concerns, a team must see these requirements as their “property” in order to address them efficiently during the design and development phases. It is neither effective nor scalable to hand off the management of security requirements to another group. While an information security office can and should validate that requirements have been met in support of Separation of Duties (SoD), ownership of implementation and understanding belongs to the engineering team.


The Five Stages of Cloud Grief

Over the last five years as a security architect, I’ve been at organizations in various phases of cloud adoption. During that time, I’ve noticed that the most significant barrier isn’t technical. In many cases, public cloud is actually a step up from an organization’s on-premise technical debt.

One of the main obstacles to migration is emotional, and it can derail a cloud strategy faster than any technical roadblock. This is because our organizations are still filled with carbon units who have messy emotions and can quietly sabotage the initiative.

The emotional trajectory of an organization attempting to move to the public cloud can be illustrated through the Five Stages of Cloud Grief, which I’ve based on the Kübler-Ross Grief Cycle.

  1. Denial – Senior Leadership tells the IT organization they’re spending too much money and that they need to move everything to the cloud, because it’s cheaper. The CIO curls into the fetal position under his desk. Infrastructure staff eventually hear about the new strategy and run screaming to the data center, grabbing onto random servers and switches. Other staff hug each other and cry tears of joy, hoping they can finally get new services deployed before they retire.
  2. Anger – IT staff shows up at all-hands meeting with torches and pitchforks calling for the CIO’s blood and demanding to know if there will be layoffs. The security team predicts a compliance apocalypse. Administrative staff distracts them with free donuts and pizza.
  3. Depression – CISO tells everyone cloud isn’t secure and violates all policies. Quietly packs a “go” bag and stocks bomb shelter with supplies. Infrastructure staff are forced to take cloud training, but continue to miss project timeline milestones while they refresh their resumes and LinkedIn pages.
  4. Bargaining – After senior leadership sets a final “drop dead” date for cloud migration, IT staff complain that they don’t have enough resources. New “cloud ready” staff are hired and enter the IT Sanctum Sanctorum like the Visigoths invading Rome. The Information Security team presents a threat intelligence report showing that $THREAT_ACTOR_DU_JOUR has pwned public cloud.
  5. Acceptance – 75% of the cloud migration goal is met, but since there wasn’t a technical strategy or design, the Opex is higher and senior leadership starts wearing diapers in preparation for the monthly bill. Most of the “cloud ready” staff have moved on to the next job out of frustration, and the only people left don’t actually understand how anything works.



Infrastructure-as-Code Is Still *CODE*

After working in a DevOps environment for over a year, I’ve become an automation acolyte. The future is here and I’ve seen the benefits when you get it right: improved efficiency, better control and fewer errors. However, I’ve also seen the dark side with Infrastructure-as-Code (IaC). Bad things happen because people forget that it’s still code and it should be subject to the same types of security controls you use in the rest of your SDLC.

That means including automated or manual reviews, threat modeling and architectural risk assessments. Remember, you’re not only looking for mistakes in provisioning your infrastructure or opportunities for cost control. Some of this code might introduce vulnerabilities that could be exploited by attackers. Are you storing credentials in the code? Are you calling scripts or homegrown libraries and has that code been reviewed? Do you have version control in place? Are you using open source tools that haven’t been updated recently? Are your security groups overly permissive?
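As a concrete illustration, here’s a minimal sketch of the kind of automated review that belongs in an IaC pipeline. The patterns and file extensions are illustrative assumptions, not an exhaustive ruleset; in practice you’d reach for a purpose-built scanner, but the point is that this is just code review, automated.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only: hardcoded AWS keys, inline secrets, and wide-open CIDRs.
FINDINGS = {
    "hardcoded AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded secret/password": re.compile(r"""(?i)(secret|password)\s*[:=]\s*['"][^'"]{4,}['"]"""),
    "open-to-the-world CIDR": re.compile(r"0\.0\.0\.0/0"),
}
EXTENSIONS = {".tf", ".json", ".yaml", ".yml"}  # assumed IaC file types

def scan(root: str = ".") -> int:
    """Walk the repo, flagging suspicious lines in IaC templates. Returns issue count."""
    issues = 0
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in EXTENSIONS:
            continue
        text = path.read_text(errors="ignore")
        for label, pattern in FINDINGS.items():
            for match in pattern.finditer(text):
                line = text.count("\n", 0, match.start()) + 1
                print(f"{path}:{line}: {label}")
                issues += 1
    return issues

if __name__ == "__main__":
    # Non-zero exit fails the pipeline, the same way a broken unit test would.
    sys.exit(1 if scan() else 0)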

IaC is CODE. Why aren’t you treating it that way?



Security Group Poop

One of the most critical elements of an organization’s security posture in AWS is the configuration of security groups. In my architectural reviews, I often see rules that are confusing, overly permissive and lacking any clear business justification for the access they allow. Basically, the result is a big, steaming pile of security turds.
While I understand many shops don’t have dedicated network or infrastructure engineers to help configure their VPCs, AWS has created some excellent documentation to make it a bit easier to deploy services there. You can and should plow through the entirety of this information. But for those with short attention spans or very little time, I’ll point out some key principles and “best practices” that you must grasp when configuring security groups.
  • A VPC automatically comes with a default security group, and each instance created in that VPC will be associated with it unless you associate a different security group with the instance.
  • “Allow” rules are explicit, “deny” rules are implicit. With no rules, the default behavior is “deny.” If you want to authorize ingress or egress access, you add a rule; if you remove a rule, you’re revoking access.
  • The default rule for a security group denies all inbound traffic and permits all outbound traffic. It is a “best practice” to remove this default outbound rule, replacing it with more granular rules that allow only the outbound traffic specifically needed for the functionality of the systems and services in the VPC (see the sketch after this list).
  • Security groups are stateful. This means that if you allow inbound traffic to an instance on a specific port, the return traffic is automatically allowed, regardless of outbound rules.
  • The typical use-cases requiring explicit inbound and outbound rules for application functionality are:
    • ELB/ALBs – If the default outbound rule has been removed from the security group associated with an ELB/ALB, an outbound rule must be configured to forward traffic to the instances hosting the service(s) being load balanced.
    • Instances that must forward traffic to a system or service outside the configured security group.
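Here’s a minimal sketch of that default-egress replacement using boto3. The security group ID and destination CIDR are placeholders; the rule you actually add should reflect what your workload genuinely needs to reach.

```python
import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"  # placeholder security group ID

# Remove the default "allow all outbound" rule...
ec2.revoke_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# ...and replace it with a granular rule: HTTPS to the app tier subnet only.
ec2.authorize_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.1.0/24", "Description": "app tier only"}],
    }],
)
```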
AWS publishes documentation, including security group templates, covering multiple use-cases.
Security groups are more effective when layered with Network ACLs, providing an additional control to help protect your resources in the event of a misconfiguration. But there are some important differences to keep in mind according to AWS:
  • Scope: a security group operates at the instance level (first layer of defense); a network ACL operates at the subnet level (second layer of defense).
  • Rule types: security groups support allow rules only; network ACLs support both allow and deny rules.
  • State: security groups are stateful (return traffic is automatically allowed, regardless of any rules); network ACLs are stateless (return traffic must be explicitly allowed by rules).
  • Evaluation: all security group rules are evaluated before deciding whether to allow traffic; network ACL rules are processed in number order.
  • Association: a security group applies to an instance only if someone specifies it when launching the instance, or associates it with the instance later; a network ACL automatically applies to all instances in the subnets it’s associated with (a backup layer of defense, so you don’t have to rely on someone specifying the security group).
Additionally, the AWS Security Best Practices document makes the following recommendations:
  • Always use security groups: They provide stateful firewalls for Amazon EC2 instances at the hypervisor level. You can apply multiple security groups to a single instance, and to a single ENI.
  • Augment security groups with Network ACLs: They are stateless, but they provide fast and efficient controls. Network ACLs are not instance-specific, so they can provide another layer of control in addition to security groups. You can apply separation of duties to ACL management and security group management.
  • For large-scale deployments, design network security in layers. Instead of creating a single layer of network security protection, apply network security at external, DMZ, and internal layers. 

For those who believe the purchase of some vendor magic beans (i.e. a product) will instantly fix the problem, get ready for disappointment. You’re not going to be able to configure that tool properly for enforcement until you comprehend how security groups work and what the rules should be for your environment.
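If you want to start comprehending what you actually have, here’s a minimal sketch (assuming boto3 and read-only EC2 credentials) that flags ingress rules open to the entire internet, which is the most common turd I find in reviews.

```python
import boto3

def audit_security_groups(region: str = "us-east-1") -> None:
    """Flag ingress rules open to the whole internet (0.0.0.0/0 or ::/0)."""
    ec2 = boto3.client("ec2", region_name=region)
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg.get("IpPermissions", []):
                open_v4 = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
                open_v6 = any(r.get("CidrIpv6") == "::/0" for r in perm.get("Ipv6Ranges", []))
                if open_v4 or open_v6:
                    ports = f"{perm.get('FromPort', 'all')}-{perm.get('ToPort', 'all')}"
                    print(f"{sg['GroupId']} ({sg['GroupName']}): world-open ingress on {ports}")

if __name__ == "__main__":
    audit_security_groups()
```

It won’t tell you what the rules should be for your environment, but it will show you, in a few seconds, how far your current state is from any defensible baseline.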



Fixing a Security Program

I’m still unsettled by how many security programs are so fundamentally broken. Even those managed and staffed by people with impressive credentials. But when I talk to some of these individuals, I discover the key issue. Many seem to think the root cause is bad tools. This is like believing the only thing keeping you from writing the Next Great American novel is that you don’t have John Steinbeck’s pen or Dorothy Parker’s typewriter.

In reality, most of the problems found in security programs are caused by inferior processes, inadequate policies, non-existent documentation and insufficient standards. If buying the best tools actually fixed security problems, wouldn’t we already be done? The truth is that too many people employed in this field are in love with the mystique of security work. They don’t understand the business side, the drudgery, the grunt work necessary to build a successful program.

For those people, here’s my simple guide. I’ve broken it down into the following essential tasks:

  1. Find your crap. Everything. Inventory and categorize your organization’s physical and digital assets according to risk. If you don’t have classification standards, then you must create them. (A minimal sketch of what this can look like for the AWS slice of your estate follows this list.)
  2. Document your crap. Build run books. Make sure you have diagrams of networks and distributed applications. Create procedure documents such as IR plans. Establish SLOs and KPIs. Create policies and procedures governing the management of your digital assets.
  3. Assess your crap. Examine the current state and identify any issues with the deployment or limitations of the product(s). Determine the actual requirements and analyze whether or not the tool actually meets the organization’s needs. This step can be interesting or depressing, depending upon whether or not you’re responsible for the next step.
  4. Fix your crap. Make changes to follow “best practices.” Work with vendors to understand the level-of-effort involved in configuring their products to better meet your needs. The temptation will be to replace the broken tools, but these aren’t $5 screwdrivers. Your organization made a significant investment of time and money, and if you want to skip this step by replacing a tool, be prepared to provide facts and figures to back up your recommendation. Only after you’ve done this can you go to step 6.
  5. Monitor your crap. If someone else knows your crap is down or compromised before you do, then you’ve failed. The goal isn’t to be the Oracle of Delphi or some fully omniscient being, but simply more proactive. And you don’t need to have all the logs. Identify the logs that are critical and relevant and start there: Active Directory, firewalls, VPN, IDS/IPS.
  6. Replace the crap that doesn’t work. But don’t make the same mistakes. Identify requirements, design the solution carefully, build out a test environment. Make sure to involve necessary stakeholders. And don’t waste time arguing about frameworks, just use an organized method and document what you do.
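As an illustration of step 1, here’s a minimal sketch, assuming boto3 credentials and a tag named classification (an assumption; substitute whatever your classification standard calls it), that walks EC2 instances across regions and shows which ones nobody has bothered to classify.

```python
import boto3

def inventory_instances() -> None:
    """List EC2 instances in each available region with their classification tag, if any."""
    regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]
    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                    print(region, inst["InstanceId"], inst["State"]["Name"],
                          tags.get("classification", "UNCLASSIFIED"))

if __name__ == "__main__":
    inventory_instances()
```

The physical assets, the SaaS sprawl and the shadow IT still need the same treatment; this just shows how unglamorous the work is.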

Now you have the foundation of any decent information security program. This process isn’t easy and it’s definitely not very sexy. But it will be more effective for your organization than installing new tools every 12 months.

 


Splunk Funk

Recently, I was asked to evaluate an organization’s Splunk deployment. This request flummoxed me, because while I’ve always been a fan of the tool’s capabilities, I’ve never actually designed an implementation or administered it. I love the empowerment of people building their own dashboards and alerts, but this only works when there’s a dedicated Splunk-Whisperer carefully overseeing the deployment and socializing the idea of using it as a self-service, cross-functional tool. As I started my assessment, I entered what can only be called a “dark night of the IT soul,” because my findings have led me to question the viability of most enterprise monitoring systems.

The original implementer recently moved on to greener pastures and (typically) left only skeletal documentation. As I started my investigation, I discovered a painfully confusing distributed deployment built with little to no understanding of “best practices” for the product. With no data normalization and almost non-existent data input management, the previous admin had created the equivalent of a Splunk Wild West, allowing most data to flow in with little oversight or control. The obscenely large number of sourcetypes and sources horrified Splunk support, who told me my only option was to rebuild, a scenario that filled me with nerd-angst.
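If you want to see how deep a hole like this goes, here’s a minimal sketch of quantifying sourcetype sprawl through Splunk’s REST search API. The host, credentials and time window are placeholders, and in a real environment you’d use proper certificates and a service account rather than admin.

```python
import requests

SPLUNK = "https://splunk.example.com:8089"  # placeholder management endpoint
AUTH = ("admin", "changeme")                # placeholder credentials

# Count events per sourcetype across all indexes to see how bad the sprawl is.
resp = requests.post(
    f"{SPLUNK}/services/search/jobs/export",
    auth=AUTH,
    verify=False,  # lab shortcut only; use real certificate validation in production
    data={
        "search": "search index=* | stats count by sourcetype | sort - count",
        "output_mode": "csv",
        "earliest_time": "-24h",
    },
    stream=True,
)
for line in resp.iter_lines(decode_unicode=True):
    if line:
        print(line)
```

When that list runs into the hundreds, you understand why support reached for the word “rebuild.”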

In the past, I’ve written about the importance of using machine data for infrastructure visibility. It’s critical for security, but also for performance monitoring and troubleshooting. Log correlation and analysis is a key component of any healthy infrastructure, and without it, you’re like a mariner lost at sea. So imagine my horror when confronted by a heaping pile of garbage data thrown into a very expensive application like Splunk.

Most organizations struggle with a monitoring strategy because it simply isn’t sexy. It’s hard to get business leadership excited about dashboards, pie charts and graphs without contextualizing them in a report. “Yeah baby, let me show you those LOOOOW latency times in our web app.” It’s a hard sell, especially when you see the TCO for on-premise log correlation and monitoring tools. Why not focus on improving a product that could bring in more customer dollars or a new service to make your users happier?  Most shops are so focused on product delivery and firefighting, they simply don’t have cycles left for thinking about proactive service management. So you end up with infrastructure train wrecks, with little to no useful monitoring.

While a part of me still believes in using the best tools to gain intelligence and visibility from an infrastructure, I’m tired of struggling. I’m beginning to think I’d be happy with anything, even a Perl script, that works consistently with a low LOE. I need that data now and no longer have the luxury of waiting until there’s a budget for qualified staff and the right application. Lately, I’m finding it pretty hard to resist the siren song of SaaS log management tools that promise onboarding and insight into machine data within minutes, not hours. Just picture it: no more agents or on-premise systems to manage, just immediate visibility into data. Most other infrastructure components have moved to the cloud; maybe it’s inevitable for log management and monitoring. Will I miss the flexibility and power of tools like Splunk and ELK? Probably, but I no longer have the luxury of nostalgia.

 

 


Danger: Stunt Hacking Ahead

On 4/18, Ars Technica reported on a recent 60 Minutes stunt-hacking episode by some telecom security researchers. During the episode, US Representative Ted Lieu had his cell phone conversations intercepted via vulnerabilities in the SS7 network. I’m no voice expert*, but it’s clear that both the 60 Minutes story and the Ars Technica article are pretty muddled attempts to dissect the source of these vulnerabilities. This is probably because trying to understand legacy telephony protocols such as SS7 is only slightly less challenging than reading ancient Sumerian.

Since I was dubious regarding the “findings” from these reports, I reached out to my VoIP bestie, @Unregistered436.

[Screenshot, 2016-04-20]

While we can argue over how useful media FUD is in getting security issues the attention they deserve, I have other problems with this story:

  • This isn’t new information. Researchers (including the ones appearing in 60 Minutes) have been reporting on problems with SS7 for years. A cursory Google search found the following articles and presentations, including some by the Washington Post.

German Researchers Discover a Flaw That Could Let Anyone Listen to Your Cell Calls 12/18/14

Locating Mobile Phones Using Signalling System #7 1/26/13

For Sale: Systems that Can Secretly Track Where Cellphone Users Go Around the Globe 8/24/14

Toward the HLR, Attacking the SS7 & SIGTRAN Applications H2HC, Sao Paulo, Brazil, December 2009

  • If you’re going to reference critical infrastructure such as the SS7 network, why not discuss how migration efforts with IP convergence in the telco industry relate to this issue and could yield improvements? There are also regulatory concerns that impact the current state of the telecommunications infrastructure. Maybe Ted Lieu should start reading all those FCC documents and reports.
  • Legacy protocols don’t get ripped out or fixed overnight (IPv4 anyone?), so the congressman’s call to have someone “fired” is spurious. If security “researchers” really want things to change, they should contribute to ITU, IEEE and IETF working groups or standards committees and help build better protocols. Or *shudder* take a job with a telecom vendor. We all need to take some ownership to help address these problems.

*If you want to learn more about telecom regulation, you should definitely follow Sherry Lichtenberg. For VoIP and SS7 security, try Patrick McNeil and Philippe Langlois.


The Endpoint Is Dead To Me

Another day, another vulnerability. This month the big non-event was Badlock, following the recent trend of using a splashy name to catch the media’s attention so they could whip the management meat puppets into a paranoid frenzy. Many in the security community seem unperturbed by this latest bug, because let’s face it, nothing can surprise us after the last couple of really grim years.

But then more Java and Adobe Flash bugs were announced, some new iOS attack using Wi-Fi and NTP that can brick a device, followed by an announcement from Apple that they’ve decided to kill QuickTime for Windows instead of fixing a couple of critical flaws. All of this is forcing me to admit the undeniable fact that trying to secure the endpoint is a waste of time. Even if you’re using a myriad of management tools, patching systems, vulnerability scanners, and DLP agents, you will fail, because you can never stay ahead of the game.

The endpoint must be seen for what it truly is: a limb to be isolated and severed from the healthy body of the enterprise at the first sign of gangrene. It’s a cootie-ridden device that, while necessary for our users, must be isolated as much as possible from anything of value. A smelly turd to be flushed without remorse or hesitation. No matter what the user says, it is not valuable, and nothing of value to the organization should ever be stored on it.

This doesn’t mean I’m advocating removing all endpoint protection or patching agents. But it’s time to get real and accept the fact that most of this corporate malware is incapable of delivering what the vendors promise. Moreover, these applications most often negatively impact system performance, which infuriates our users. But instead of addressing this issue, IT departments layer more of this crapware onto the endpoint, which only encourages users to find ways to disable it. One more security vulnerability for the organization. A dedicated attacker will figure out a way to bypass most of it anyway, so why do we bother to trust software that is often more harmful to business productivity than the malware we’re trying to block? We might as well give out “Etch A Sketches” instead of laptops.



Mindful Security

Recently, I was invited to participate in a podcast on the topic of mindfulness. A friend and former editor of mine thought it would be helpful to discuss how the practice could potentially help stressed-out and oversubscribed IT professionals. I’ve actually had a meditation practice on and off for the last eight years and sometimes it seems like it’s the only thing keeping me sane. Especially in the high-pressure realm of information security. While I’m not an expert on meditation, I’ve spent a considerable amount of time studying and trying to understand its impact. Hopefully, people will find the information useful as an alternative method of reducing the inevitable stress and anxiety we’re all feeling in this industry lately.
