
Infosec Riot Grrrl Manifesto*

BECAUSE us girls crave respect and authority in our chosen field of Information Security.

BECAUSE we wanna make it easier for girls to see/hear each other’s work so that we can share strategies and criticize-applaud each other.

BECAUSE we must infiltrate the Infosec field in order to create our own destiny.

BECAUSE I am not your mother, your sister, your wife or your girlfriend. So when I speak with authority, keep your emotional baggage and neuroses to yourself.

BECAUSE we recognize fantasies of a macho security dictatorship as a set of impractical lies meant to keep us simply dreaming instead of creating the revolution in Information Security by envisioning and creating alternatives to the bullshit military-posturing way of doing things.

BECAUSE we want and need to encourage and be encouraged in the face of all our own insecurities, in the face of beergutboyinfosec that tells us we can’t play in their sandbox, in the face of “authorities” who say our skills are the worst.

BECAUSE we don’t wanna assimilate to someone else’s (boy) standards of what is or isn’t.

BECAUSE we are unwilling to falter under claims that we are reactionary “reverse sexists” AND NOT THE TRUEINFOSECCRUSADERS THAT WE KNOW we really are.

BECAUSE we know that information security is much more than just reactivity and are patently aware that the punk rock “you can do anything” idea is crucial to the coming angry infosec grrrl revolution which seeks to promote the psychic and cultural lives of girls and women in our profession everywhere, according to their own terms.

BECAUSE we are interested in creating non-hierarchical ways of being, collaborating and working, based on communication + understanding, instead of competition + good/bad categorizations.

BECAUSE doing/reading/seeing/hearing cool things that validate and challenge the status quo can help us gain the strength and sense of community that we need in order to figure out how bullshit like racism, able-bodieism, ageism, speciesism, classism, thinism, sexism, anti-semitism and heterosexism figures in our professional and personal lives.

BECAUSE we see fostering and supporting girl infosec professionals of all kinds as integral to this process.

BECAUSE we see our main goal as sharing information and supporting allies over making profits according to traditional standards.

BECAUSE we are angry at a society that tells us Girl = Dumb, Girl = Bad, Girl = Weak, Girl = Not technical.

BECAUSE we are unwilling to let our real and valid anger be diffused and/or turned against us via the internalization of sexism as witnessed in girl/girl jealousism and self defeating girltype behaviors.

BECAUSE I have run out of time, patience and f*#&s in pandering to egos.

BECAUSE I believe with my wholeheartmindbody that girls constitute a revolutionary force in information security that can, and will, revolutionize our profession and the world.

*Based on the original Riot Grrrl Manifesto by Kathleen Hanna and Bikini Kill.

When Compliance Goes Bad

You may laugh at the image above, but for many of us, similar absurdities can be found in our own policy frameworks. Governance matters, because badly written, confusing policies and standards will drain the productivity of your technical teams as they run around trying to figure out what’s actually required. Cormac Herley expressed this lunacy best in his paper, So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users:

“Given a choice between dancing pigs and security, users will pick dancing pigs every time.” While amusing, this is unfair: users are never offered security, either on its own or as an alternative to anything else. They are offered long, complex and growing sets of advice, mandates, policy updates and tips. These sometimes carry vague and tentative suggestions of reduced risk, never security. We have shown that much of this advice does nothing to make users more secure, and some of it is harmful in its own right. Security is not something users are offered and turn down. What they are offered and do turn down is crushingly complex security advice that promises little and delivers less.

As security and governance professionals, we are trusted stewards for our organizations. We have an obligation to ensure that individuals make good choices by clearly communicating our expectations. Otherwise, we just come off like institutional bullies.

Security Policy RTFM

When I start a new position with an organization, the very first thing I do is review the policy framework and its contents. I don’t dig into the network diagrams. I don’t pester security engineers for current vulnerability findings or pentesting reports. I don’t even look at the strategy content first. Why would I spend time reading documents that are basically the digital equivalent of a sleeping pill? Because policies, standards and procedures represent the manual of an organization. Spend time reviewing it and you’ll soon discover how mature the security program really is.

Maybe I developed this habit from my time as a Unix engineer. In the days before Google and ubiquitous wireless, you had to know how to read man pages and use them to solve problems quickly. There were many times I was sitting in an icy server room at midnight without a network connection, trying to figure out why a volume wouldn’t mount or a NIC wasn’t working, and apropos or man -k saved me. The CLI carried me through those troubleshooting sessions by surfacing the arguments and switches buried in the man pages. It made me a better technologist, because I learned that good engineering is as much about documentation as it is about delivering a solution.

And yes, I was that person who, when asked by a junior tech how to do something in *nix, would respond with, “Man $insert_command_here.” I even threatened to change my middle name to RTFM at one point. While there was a part of me that reveled in the superiority of having pierced the highest levels of esoteric knowledge, I also genuinely wanted people to appreciate the elegance of a system that gave you all the tools you needed to troubleshoot it.

Recently, I realized that an organization’s policy framework and its contents function in a similar way. You can learn how leadership prioritizes risk and empowers its governance team (or doesn’t). You can uncover processes and the inner workings of different business units. You’ll also find out quickly how dysfunctional the security program is based on the breadth of the content and how well it’s organized. Tedious, circuitous and often bloated, policy documents can be a challenging source to mine for intelligence, but they’re the best place to start. So, RTFM your organization by reviewing its policies and standards; otherwise, you’ll struggle to separate the valuable elements of your program from pure security theater.

Cloud-Native Consumption Principles

The promises of Cloud are alluring. Organizations are told they can reduce costs through a flexible consumption-based model, which minimizes waste in over-provisioning, while also achieving velocity in the development of new digital products without the dependence on heavy, centralized IT processes. This aligns closely with the goals of a DevOps transformation, which seeks to empower developers to build better software through a distributed operational model that delivers solutions more quickly with less overhead. However, most enterprise cloud journeys begin with a “lift and shift” from the on-premise data center to an IaaS provider. This seems like the easiest and fastest way to begin acclimating to the new environment by finding and leveraging similarities in deployment and consumption of digital assets. While this path may initially seem to expedite adoption, the migration is soon bogged down by the very issues that prompted the organization to adopt cloud: cumbersome, centralized processes that don’t support developers’ need for automation and speed.

Startups don’t have existing processes and an organizational hierarchy that must be realigned to a new way of working, so their applications face no barriers to becoming Cloud-Native. They begin that way. Enterprises weren’t initially built around a Cloud model, so the implementation often follows Conway’s Law, with the design and provisioning mirroring the existing organizational hierarchy. The only difference is that instead of a server team deploying bare-metal or on-premise virtual machines, they build an instance in the cloud. While there are some incremental gains, much of the latency from human middleware and legacy processes remains. After the short honeymoon based on a PoC or pilot projects, the realities of misaligned business processes grind progress to a halt. This also results in higher spend, because cloud resources are not meant to be long-running snowflakes, but ephemeral and immutable. Cloud is made for cattle, not pets.

The source of this friction becomes clear. While cloud is referred to as “Infrastructure as a service,” many assume this is equivalent to data center hosting. However, Cloud is an evolution in the digital delivery model, where bare-metal is abstracted away from the customer, who now consumes resources through web interfaces and APIs. Cloud should be thought of and consumed as a software platform, i.e., Cloud-Native.  As defined by the Cloud Native Computing Foundation (CNCF):

Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.

Therefore, to maximize the value of cloud adoption at scale, it is necessary to become Cloud-Native, and the effort must be tightly coupled to DevOps automation efforts.

In 1967, Paul Baran discussed the creation of a “national computer public utility system,” a metered, time-sharing model. Cloud, and by extension Cloud-Native, is the manifestation of that prediction and, as with other utilities, the consumption of “compute as utility” must be distributed and self-service in order to achieve cost benefits. What about governance and security concerns? Cloud Service Providers (CSPs) have built-in capabilities to establish policy restrictions at the organization, account, resource and/or identity level. Native security controls can be embedded to function seamlessly, providing the automated monitoring, alerting and enforcement needed to minimize risk and meet audit requirements. These capabilities decouple compliance from control and are most efficiently consumed through the platform via policy-as-code integrated into declarative Infrastructure-as-Code (IaC). Conversely, organizational risk increases when manual provisioning, abstraction layers or traditional controls that are not cloud-ready or Cloud-Native are bolted onto this environment.
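
To make the policy-as-code idea concrete, here is a minimal sketch, in Python, of a compliance check evaluated against a declarative IaC artifact (a Terraform plan exported as JSON) before anything is provisioned. The rule, the attribute name and the plan layout follow common conventions but are illustrative assumptions, not a prescribed standard, and the exact S3 encryption attribute varies by provider version.

```python
import json
import sys


def load_planned_resources(plan_path):
    """Parse a Terraform plan exported with `terraform show -json`."""
    with open(plan_path) as f:
        plan = json.load(f)
    # planned_values.root_module.resources holds the resources Terraform
    # intends to create or change.
    return (plan.get("planned_values", {})
                .get("root_module", {})
                .get("resources", []))


def check_s3_encryption(resource):
    """Flag any S3 bucket declared without server-side encryption (attribute name is version-dependent)."""
    if resource.get("type") != "aws_s3_bucket":
        return None
    values = resource.get("values", {})
    if not values.get("server_side_encryption_configuration"):
        return f"{resource['address']}: bucket declared without server-side encryption"
    return None


def main(plan_path):
    violations = [f for r in load_planned_resources(plan_path)
                  if (f := check_s3_encryption(r))]
    if violations:
        print("Policy violations found:")
        for v in violations:
            print(f"  - {v}")
        sys.exit(1)  # non-zero exit fails the pipeline stage
    print("All planned resources passed policy checks.")


if __name__ == "__main__":
    main(sys.argv[1])
```

Because the check runs against the plan rather than the live environment, the policy decision happens before a single resource exists, which is the point of embedding it in the declarative IaC workflow.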

In an attempt to ease organizations’ struggle with cloud adoption, Azure and AWS have developed Well-Architected Frameworks to promote better cloud consumption and design. Both consist of five pillars to evaluate the quality of a solution delivery: 

  • Operational excellence
  • Security
  • Reliability
  • Performance (Efficiency)
  • Cost optimization

While helpful, these frameworks fail to communicate the urgent need for automation and tight coupling to the application development lifecycle in order to achieve a successful cloud migration. For example, from the AWS Operational Excellence Pillar, “operations as code” is only listed as a design principle to “limit human error and enable consistent responses to events.”

Ultimately, Cloud at scale is best consumed as a software platform through the automated development processes essential to DevOps; otherwise, the costs of side-channel pipeline provisioning and long-running, inefficiently sized workloads soon outweigh the initial benefits.

To summarize, the principles of a Cloud-Native consumption model include:

  • Automated provisioning of all resources as code through pipelines owned by product teams
  • Distributed self-service to achieve velocity and empower business segments
  • “Shift Everywhere” security through Policy-as-code embedded into the Infrastructure-as-Code
  • Decoupling of compliance from operational control through the use of CSP native capabilities to automate governance, monitoring, alerting and enforcement

To be effective, these principles are best operationalized through the unification of any cloud initiative with a DevOps effort. Otherwise, the cloud effort will be crippled by the existing technology bureaucracy.
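
As an illustration of decoupling compliance from operational control, a governance team can consume the CSP’s native evaluation results without ever touching the resources themselves. The sketch below, in Python with boto3, pulls non-compliant resources from an AWS Config rule; the rule name shown is one of AWS’s managed rules, but substitute whatever your governance team has actually enabled.

```python
import boto3


def noncompliant_resources(rule_name):
    """Return resource IDs an AWS Config rule currently flags as NON_COMPLIANT."""
    config = boto3.client("config")
    # Results may be paginated via NextToken; a single call keeps the sketch short.
    resp = config.get_compliance_details_by_config_rule(
        ConfigRuleName=rule_name,
        ComplianceTypes=["NON_COMPLIANT"],
    )
    resource_ids = []
    for result in resp.get("EvaluationResults", []):
        qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
        resource_ids.append(qualifier["ResourceId"])
    return resource_ids


if __name__ == "__main__":
    # Assumed rule name; swap in the rules enabled in your accounts.
    for rid in noncompliant_resources("s3-bucket-server-side-encryption-enabled"):
        print(f"non-compliant: {rid}")
```

The security team defines and owns the rules; the platform does the monitoring, alerting and (optionally) remediation, and reporting like this becomes a read-only consumption of the platform’s own state.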


Assume No Intent

Lately, possibly as an outcome of the neuroscience-lite infusing our mainstream culture, corporate-speak has started folding some disturbing expressions into its jargon. One of the most egregious, “assume positive intent,” has become ubiquitous among both leadership and human resources departments. The underlying notion is that in our interactions with others, we shouldn’t assume negative intentions, only positive ones. But given the difficulties surrounding diversity and inclusion efforts across corporate America, isn’t this notion of assuming intent dangerously ingenuous and possibly conflict-avoidant?

The roots of this notion of assuming intent likely lie in the well-publicized concept of the psychological negativity bias. It’s part of human evolution, this self-protection mechanism in our biology that seeks to identify threat and push it away. While our threat response system is useful in life-or-death situations, it’s not always helpful in our day jobs. How do we learn to circumvent strong emotional protection responses so that we can make better choices in the corporate world? The thought is that if we can scrub the interaction clean of any negative emotions, then we can propel a more positive conversation. This is based on a cognitive bias called the “framing effect” and the psychological technique of “cognitive reframing.”

However, humans’ emotions and their interactions are far more complex and nuanced. While this reframing can be useful, it doesn’t take into account factors such as individuals’ predisposition for positive or negative affect, socio-economic context, how the negativity bias tends to attenuate with age or even the limitations of reframing. Moreover, when speaking to a disenfranchised person or group, ideas like “assume positive intent” can quickly turn into gaslighting and conflict-avoidance, a way to short circuit an important conversation about real inequity that could resolve and prevent a bigger conflict. It’s imprudent to think reframing alone can be used as a shortcut around the critical practice of conflict resolution.

What we really need in our interactions are cognitive placeholders in our conflict resolution playbook. Cues that allow us to stop, evaluate and reconsider the emotional undercurrent in an exchange with another person. Reminders for when it’s time to step back and recalibrate. Maybe the better approach is to assume no intent, but when hearing something that gives rise to concern, be ready for a professionally intimate conversation rooted in authenticity.

DevSecOps Decisioning Principles

I know you’ve heard this before, but DevOps is not about tools. At its core, DevOps is really a supply chain for efficiently delivering software. At various stages of the process, you need testing and validation to ensure the delivery of a quality product. With that in mind, DevSecOps should adhere to certain principles to best support the automated SDLC process. To this end, I’ve developed a set of fundamental propositions for the practice of good DevSecOps.

  • Security tools should integrate as decision points in a DevOps pipeline aka DevSecOps.
  • DevSecOps tool(s) should have a policy engine that can respond with a pass/fail decision for the pipeline. 
    • This optimizes response time.
    • Supports separation of duties (SoD) by externalizing security decisions outside the pipeline.
    • “Fast and frugal” decisioning is preferred over customized scoring to better support velocity and consistency. 
    • Does not exclude the need for detailed information provided as pipeline output.
  • Full inspection of the supply chain element to be decisioned, aka “slow path,” should be used when an element is unknown to the pipeline decisioner. 
  • Minimal or incremental inspection of the supply chain element to be decisioned, aka “fast path,” should be used when an element is recognized (e.g. hash) by the pipeline decisioner.
  • Decision points should have a “fast path” available, where possible, to minimize any latency introduced from security decisioning.
  • There should be no attempt to use customized risk scores in the pipeline. While temporal and contextual elements are useful in reporting and judging how to mitigate operational risk, attempts to use custom scores in a pipeline could unnecessarily complicate the decisioning process, create inconsistency and decrease performance of the pipeline.  
  • Security policy engines should not be managed by the pipeline team, but externally by a security SME, to comply with SoD and reduce opportunities for subversion of security policy decisions during automation.

Using a master policy engine, such as the Open Policy Agent (OPA), is an ideal way to “shift left” by providing a validation capability-as-a-service that can be integrated at different phases into the development and deployment of applications. Ideally, this allows the decoupling of compliance from control, reducing bottlenecks and inconsistency in the process from faulty security criteria integrated into pipeline code. By using security policy-as-code that is created and managed by security teams, DevSecOps will align more closely with the rest of the SDLC. Because at the end of the day, the supply chain is only as good as the product it delivers.
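
Here is a minimal sketch of such a decision point in Python, calling OPA’s REST Data API and showing the fast path/slow path distinction from the principles above. The policy package path, input fields and in-memory cache are my own illustrative choices, not a prescribed standard; in practice the recognized-artifact store would be shared across pipeline runs.

```python
import hashlib
import json
import sys

import requests

# Package path under OPA's /v1/data endpoint; the path itself is illustrative.
OPA_URL = "http://localhost:8181/v1/data/pipeline/artifact/allow"
DECISION_CACHE = {}  # sha256 -> bool; stands in for a shared decision store


def artifact_hash(path):
    """Content hash used to recognize artifacts the decisioner has already seen."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def decide(path, metadata):
    digest = artifact_hash(path)

    # Fast path: a previously decisioned artifact skips full inspection.
    if digest in DECISION_CACHE:
        return DECISION_CACHE[digest]

    # Slow path: send the full context to the external policy engine,
    # which answers with a simple pass/fail decision.
    payload = {"input": {"sha256": digest, **metadata}}
    resp = requests.post(OPA_URL, json=payload, timeout=5)
    resp.raise_for_status()
    allowed = bool(resp.json().get("result", False))

    DECISION_CACHE[digest] = allowed
    return allowed


if __name__ == "__main__":
    ok = decide(sys.argv[1], {"team": "payments", "stage": "deploy"})
    print(json.dumps({"allowed": ok}))
    sys.exit(0 if ok else 1)  # a deny fails the pipeline stage
```

Note that the pipeline only ever sees a boolean: the policy itself lives with the security SME in OPA, which keeps the separation of duties intact while the detailed findings can still be emitted as pipeline output for developers.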


DevSecOps Myths

I’ve been working in the DevOps space for a few years now and there’s no better way to send me into an episode of nerd rage than to encounter tragically misguided attempts to implement DevSecOps. Generally, this is due to security people who are uneducated in the basics of good DevOps practices and who misunderstand the collaboration necessary to successfully integrate security into the process.

The following is a list of some common DevSecOps myths I’ve encountered:

  • Special tools or software. While DevOps-friendly tools are nice, you generally don’t have to go out and buy a lot of new security software. Often, you can integrate existing controls or even (gasp) use open source. No amount of vendor hype is going to magically add that dash of DevSecOps to your pipelines. To the people going on buying sprees, I have one thing to say: Please.Stop.
  • A separate software development initiative. One of the core values of DevOps is COLLABORATION. If the security team isn’t working with and actively engaged with the DevOps platform team, then you won’t get DevSecOps. These should be aligned and complementary initiatives.
  • Models, frameworks, lingo or buzzwords. Security people love this stuff, but DevOps is a practical effort to make software delivery faster to support the business. The goal of DevSecOps is to add validation to this process. Period. Stop overcomplicating things.
  • Agilefall, i.e. Waterfall dressed up as DevOps/Agile. Agile and DevOps are reality. Both approaches are more than a decade old, and for security professionals to remain in denial about or unfamiliar with their workings is delusional.

What is DevSecOps? It’s an approach that focuses on Security as a Product Feature, not a checkbox. An initiative that includes people, processes and technology to align with Agile/DevOps efforts to match its delivery methods and desired velocity. But let’s take a cue from the DevSecOps Manifesto.

Through Security as Code, we have and will learn that there is simply a better way for security practitioners, like us, to operate and contribute value with less friction. We know we must adapt our ways quickly and foster innovation to ensure data security and privacy issues are not left behind because we were too slow to change.

Your Pets Don’t Belong in the Cloud

At too many organizations, I’ve seen a dangerous pattern when trying to migrate to public Infrastructure as a Service (IaaS), i.e., Cloud. It’s often approached like a colo or a data center hosting service, and the result is eventual failure of the initiative due to massive cost overruns and terrible performance. Essentially, this can be attributed to inexperience on the side of the organization and a cloud provider business model based on consumption. The end result is usually layoffs and reorgs while senior leadership shakes its head: “But it worked for Netflix!”

Based on my experience with various public and hybrid cloud initiatives, I can offer the following advice.

  1. Treat public cloud like an application platform, not traditional infrastructure. That means you should have reference models and Infrastructure-as-Code (IaC) templates for the deployment of architecture and application components that have undergone security and peer reviews in advance. Practice “policy as code” by working with cloud engineers to build security requirements into IaC.
  2. Use public cloud like an ephemeral ecosystem with immutable components. Translation: your “pets” don’t belong there, only cattle. Deploy resources to meet demand and establish expiration dates (see the sketch after this list). Don’t attempt to migrate your monolithic application without significant refactoring to make it cloud-friendly. If you need to change a configuration or resize, then redeploy. Identify validation points in your cloud supply chain where you can catch vulnerable systems/components prior to deployment, because it reduces your attack surface AND it’s cheaper. You should also have monitoring in place (AWS Config or a 3rd-party app) that catches any deviation and automatically remediates. You want cloud infrastructure that is standardized, secure and repeatable.
  3. Become an expert in understanding the cost of services in public cloud. Remember, it’s a consumption model and the cloud provider isn’t going to lose any sleep over customers hemorrhaging money due to bad design.
  4. Hybrid cloud doesn’t mean creating inefficient design patterns based on dependencies between public cloud and on-premise infrastructure. You don’t do this with traditional data centers, so why would you do it with hybrid cloud?
  5. Hire experienced automation engineers/developers to lead your cloud migration and train staff who believe in the initiative. Send the saboteurs home early on or you’ll have organizational chaos.
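
Here is a rough sketch of the “expiration date” idea from point 2, in Python with boto3. It assumes instances are tagged with an expires-on date by the IaC template at deploy time; the tag name, date format and the choice to terminate rather than merely alert are my own assumptions for illustration.

```python
from datetime import date, datetime

import boto3

EXPIRY_TAG = "expires-on"  # assumed tag, written by the IaC template at deploy time


def expired_instance_ids(ec2):
    """Find running instances whose expires-on tag is in the past."""
    expired = []
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag-key", "Values": [EXPIRY_TAG]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    for reservation in resp["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            expires = datetime.strptime(tags[EXPIRY_TAG], "%Y-%m-%d").date()
            if expires < date.today():
                expired.append(instance["InstanceId"])
    return expired


if __name__ == "__main__":
    ec2 = boto3.client("ec2")
    doomed = expired_instance_ids(ec2)
    if doomed:
        # Cattle, not pets: past their date, they get redeployed, not nursed.
        ec2.terminate_instances(InstanceIds=doomed)
        print(f"Terminated expired instances: {doomed}")
```

Run on a schedule (or wired into an auto-remediation rule), a check like this keeps long-running snowflakes from quietly accumulating on the monthly bill.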

If software ate the world, it burped out the Cloud. If you don’t approach this initiative with the right architecture, processes and people, there aren’t enough fancy tools in the world to help you clean up the result: organizational indigestion.


The Five Stages of Cloud Grief

Over the last five years as a security architect, I’ve been at organizations in various phases of cloud adoption. During that time, I’ve noticed that the most significant barrier isn’t technical. In many cases, public cloud is actually a step up from an organization’s on-premise technical debt.

One of the main obstacles to migration is emotional, and it can derail a cloud strategy faster than any technical roadblock. This is because our organizations are still filled with carbon units with messy emotions, who can quietly sabotage the initiative.

The emotional trajectory of an organization attempting to move to the public cloud can be illustrated through the Five Stages of Cloud Grief, which I’ve based on the Kübler-Ross Grief Cycle.

  1. Denial – Senior Leadership tells the IT organization they’re spending too much money and that they need to move everything to the cloud, because it’s cheaper. The CIO curls into fetal position under his desk. Infrastructure staff eventually hear about the new strategy and run screaming to the data center, grabbing onto random servers and switches. Other staff hug each other and cry tears of joy hoping that they can finally get new services deployed before they retire.
  2. Anger – IT staff shows up at all-hands meeting with torches and pitchforks demanding the CIO’s blood and demanding to know if there will be layoffs. The security team predicts a compliance apocalypse. Administrative staff distracts them with free donuts and pizza.
  3. Depression – CISO tells everyone cloud isn’t secure and violates all policies. Quietly packs a “go” bag and stocks bomb shelter with supplies. Infrastructure staff are forced to take cloud training, but continue to miss project timeline milestones while they refresh their resumes and LinkedIn pages.
  4. Bargaining – After senior leadership sets a final “drop dead” date for cloud migration, IT staff complain that they don’t have enough resources. New “cloud ready” staff is hired and enter the IT Sanctum Sanctorum like the Visigoths invading Rome. Information Security team presents threat intelligence report that shows $THREAT_ACTOR_DU_JOUR has pwned public cloud.
  5. Acceptance – 75% of cloud migration goal is met, but since there wasn’t a technical strategy or design, the Opex is higher and senior leadership starts wearing diapers in preparation for the monthly bill. Most of the “cloud ready” staff has moved on to the next job out of frustration and the only people left don’t actually understand how anything works.


Moving Appsec To the Left

After spending the last year in a Product Security Architect role for a software company, I learned an important lesson:

Most application security efforts are misguided and ineffective.

While many security people have a good understanding of how to find application vulnerabilities and exploit them, they often don’t understand how software development teams work, especially in Agile/DevOps organizations. This leads to inefficiencies and a flawed program. If you really want to build secure applications, you have to meet developers where they are, by understanding how to embed security into their processes.

In some very mature Agile organizations, application security teams have started adding automated validation and testing points into their DevOps pipelines as DevSecOps (or SecDevOps, there seems to be a religious war over the proper terminology) to enforce the release of secure code. This is a huge improvement, because it ensures that you can eliminate the manual “gates” that block rapid deployment. My personal experience with this model is that it’s a work-in-progress, but a necessary aspirational goal for any application security program. Ultimately, if you can’t integrate your security testing into a CI/CD pipeline, the development process will circumvent security validation and introduce software vulnerabilities into your applications.
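
As a concrete (if simplified) illustration of what such a validation point can look like, the sketch below assumes a scanner that emits findings as JSON with a severity field; the file name, schema and threshold are hypothetical, and any real integration would map to whatever your scanner actually produces. The build fails whenever anything at or above the threshold shows up.

```python
import json
import sys

SEVERITY_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}


def blocking_findings(report_path, threshold="high"):
    """Return findings at or above the severity threshold from a scanner's JSON report."""
    with open(report_path) as f:
        findings = json.load(f)  # assumed schema: a list of {"title", "severity"}
    floor = SEVERITY_ORDER[threshold]
    return [f for f in findings
            if SEVERITY_ORDER.get(f.get("severity", "low"), 0) >= floor]


if __name__ == "__main__":
    blockers = blocking_findings(sys.argv[1])
    for finding in blockers:
        print(f"[{finding['severity']}] {finding['title']}")
    # The non-zero exit code is what actually stops the deployment stage.
    sys.exit(1 if blockers else 0)
```

The important part isn’t the script itself but where it sits: inside the pipeline, running on every build, so security validation happens without a manual gate for developers to route around.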

However, this is only one part of the effort. In Agile software development, there’s an expression, “shifting to the left,” which means moving validation to earlier parts of the development process.  While I could explain this in detail, DevSecOps.org already has an excellent post on the topic. In my role, I partnered with development teams by acting as a product manager and treating security as a customer feature, because this seemed more effective than the typical tactic of adding a bunch of non-functional requirements into a product backlog.

A common question I would receive from scrum teams is whether a security requirement should be written as a user story or simply as acceptance criteria. The short answer is, “it depends.” If the requirement translates into direct functional requirements for a user, i.e. for a public service, it is better suited as a user story with its own acceptance criteria. If the requirement is concerned with a back-end service or feature, this is better expressed as acceptance criteria in existing stories. One technique I found useful was to create a set of user security stories derived from the OWASP Application Security Verification Standard (ASVS) version 3.0.1 that could be used as a template to populate backlogs and referenced during sprint planning. I’m not talking about “evil user stories,” because I don’t find those particularly useful when working with a group of developers.

Another area where product teams struggle is whether a release should have a dedicated sprint for security or add the requirements as acceptance criteria to user stories throughout the release cycle. I recommend having a security sprint for all new or major releases due to the inclusion of time-intensive tasks such as manual penetration testing, architectural risk assessments and threat modeling. But this should be a collaborative process with the product team, and I met regularly with product owners to assist with sprint planning and backlog grooming. I also found it useful to add a security topic to the MVS (minimum viable solution) contract.

I don’t pretend to have all the answers when it comes to improving software security, but spending time in the trenches with product development teams was an enlightening experience. The biggest takeaway: security teams have to grok the DevOps principle of collaboration if we want more secure software. To further this aim, I’m posting the set of user security stories and acceptance criteria I created here. Hopefully, it will be the starting point for a useful dialogue with your own development teams.
