
Supply Chain Security Jumps the Shark

Can we collectively agree that the supply chain security discussion has grown tiresome? Ten years ago, I couldn’t get anyone outside the federal government crowd to pay attention to the supply chain, but now it’s the security topic du jour. And while this might seem like a good thing, it’s increasingly a distraction from other areas of product security, crowding out meaningful discussions about secure software development. So, like a once-loved, long-running TV show that has worn out its welcome but reaches for gimmicks to keep everyone’s attention, I’m officially declaring that Supply Chain Security has jumped the shark.

First, let’s clarify what the term Supply Chain Security means. Contrary to what some believe, it’s not synonymous with the software development lifecycle (SDLC). That’s right, it’s time for a NIST definition! NIST, the National Institute of Standards and Technology, defines supply chain security broadly, because the term covers anything an organization acquires:

…the term supply chain refers to the linked set of resources and processes between and among multiple levels of an enterprise, each of which is an acquirer that begins with the sourcing of products and services and extends through the product and service life cycle.

Given the definition of supply chain, cybersecurity risks throughout the supply chain refers to the potential for harm or compromise that may arise from suppliers, their supply chains, their products, or their services. Cybersecurity risks throughout the supply chain are the results of threats that exploit vulnerabilities or exposures within products and services that traverse the supply chain or threats that exploit vulnerabilities or exposures within the supply chain itself.

(If you’re annoyed by the US-centric discussion, I encourage you to review the ISO 28000 series on supply chain security management, which I haven’t quoted here because ISO charges more than $600 to download the standards.)

Typically, supply chain security refers to third parties, which is why the term is most often used in relation to open source software (OSS). You didn’t create the OSS you’re using, and it exists outside your own SDLC, so you need processes and capabilities in place to evaluate it for risk. But you also need to consider the commercial off-the-shelf software (COTS) you acquire. Consider SolarWinds: a series of attacks against the public and private sectors was caused by a breach of a commercial product, and that compromise is what let malicious parties into SolarWinds customers’ internal networks. This isn’t a new concept; it just gained widespread attention because of the pervasive use of SolarWinds as an enterprise monitoring system. Most organizations with procurement processes include robust third-party security programs for this reason, but those programs aren’t perfect.
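As a sketch of what “evaluating it for risk” can look like in practice, here is a minimal intake check that flags acquired components against an advisory list. Everything here is a hypothetical placeholder, not any particular tool: a real program would pull advisories from a vulnerability feed such as OSV or an internal risk register rather than a hard-coded dictionary.

```python
# Minimal sketch of a third-party component intake check.
# The advisory entries and severity policy below are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class Component:
    name: str
    version: str


# Hypothetical internal advisory data: (name, affected_version) -> severity
ADVISORIES = {
    ("log4j-core", "2.14.1"): "critical",
    ("openssl", "1.0.2"): "high",
}


def evaluate_intake(components, blocked_severities=("critical", "high")):
    """Return the components (with severity) that procurement should block."""
    flagged = []
    for c in components:
        severity = ADVISORIES.get((c.name, c.version))
        if severity in blocked_severities:
            flagged.append((c, severity))
    return flagged


manifest = [Component("log4j-core", "2.14.1"), Component("requests", "2.31.0")]
print(evaluate_intake(manifest))
```

The same check applies equally to OSS and COTS: the point is that acquisition, not development, is the trust boundary being evaluated.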

If supply chain security isn’t a novel topic and isn’t inclusive of the entire SDLC, then why does it continue to captivate the attention of security leaders? Maybe because it presents a measurable, systematic approach to addressing application security issues. Vulnerability management is attractive because it offers the comforting illusion that if you do the right things, like updating OSS, you’ll beat the security game. Unfortunately, the truth is far more complicated. Just take a look at the following diagram that illustrates the typical elements of a product security program:

[Diagram: typical elements of a product security program]

Executives want uncomplicated answers when they ask, “Are we secure?” They often feel overwhelmed by security discussions because they want to focus on what they were hired to do: run a business. As security professionals, we need to remember this motivation as we build programs to comprehensively address security risk. We should be giving our organizations what they need, not more empty security promises based on the latest trends.


Moving Appsec To the Left

After spending the last year in a Product Security Architect role for a software company, I learned an important lesson:

Most application security efforts are misguided and ineffective.

While many security people have a good understanding of how to find application vulnerabilities and exploit them, they often don’t understand how software development teams work, especially in Agile/DevOps organizations. This leads to inefficiencies and a flawed program. If you really want to build secure applications, you have to meet developers where they are, by understanding how to embed security into their processes.

In some very mature Agile organizations, application security teams have started adding automated validation and testing points into their DevOps pipelines as DevSecOps (or SecDevOps; there seems to be a religious war over the proper terminology) to enforce the release of secure code. This is a huge improvement, because it lets you eliminate the manual “gates” that block rapid deployment. My personal experience with this model is that it’s a work in progress, but a necessary aspirational goal for any application security program. Ultimately, if you can’t integrate your security testing into a CI/CD pipeline, the development process will circumvent security validation and introduce software vulnerabilities into your applications.
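To illustrate the kind of automated validation point described above, here is a minimal sketch of a pipeline security gate. The finding format, severity scale, and policy threshold are all assumptions for illustration, not the output of any real scanner; in a CI step you would exit non-zero when the gate fails so the pipeline stops.

```python
# Hypothetical sketch: fail the build when scanner findings exceed a
# policy threshold. Adapt the finding schema to your SAST/SCA tool.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}


def gate(findings, fail_at="high"):
    """Return True if the build may proceed, False if it must fail."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [
        f for f in findings
        if SEVERITY_RANK.get(f.get("severity", "low"), 0) >= threshold
    ]
    for f in blocking:
        print(f"BLOCKING: {f['id']} ({f['severity']})")
    return not blocking


findings = [
    {"id": "SQLI-12", "severity": "critical"},
    {"id": "INFO-3", "severity": "low"},
]
ok = gate(findings)
print("proceed" if ok else "fail build")
# In a real CI step: sys.exit(0 if ok else 1)
```

Because the gate is code, the policy is versioned, reviewable, and applied identically to every release, which is exactly what a manual gate can’t guarantee.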

However, this is only one part of the effort. In Agile software development, there’s an expression, “shifting to the left,” which means moving validation to earlier parts of the development process. While I could explain this in detail, DevSecOps.org already has an excellent post on the topic. In my role, I partnered with development teams by acting as a product manager and treating security as a customer feature, because this seemed more effective than the typical tactic of adding a bunch of non-functional requirements to a product backlog.

A common question I received from scrum teams was whether a security requirement should be written as a user story or simply as acceptance criteria. The short answer is, “it depends.” If the requirement translates into direct functional requirements for a user, e.g., for a public-facing service, it’s better suited as a user story with its own acceptance criteria. If the requirement concerns a back-end service or feature, it’s better expressed as acceptance criteria on existing stories. One technique I found useful was to create a set of user security stories derived from the OWASP Application Security Verification Standard (ASVS) version 3.0.1 that could be used as a template to populate backlogs and referenced during sprint planning. I’m not talking about “evil user stories,” because I don’t find those particularly useful when working with a group of developers.
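To make the template idea concrete, here is a hedged sketch of how an ASVS-derived user security story might be structured so it can be dropped into a backlog. The story text is illustrative, written in the standard story narrative form; it is not a quotation from ASVS 3.0.1.

```python
# Illustrative sketch of a backlog-ready security user story template.
# The story content below is an example, not actual ASVS text.

from dataclasses import dataclass, field


@dataclass
class SecurityStory:
    title: str
    narrative: str  # "As a <role>, I want <goal> so that <benefit>"
    acceptance_criteria: list = field(default_factory=list)

    def render(self):
        lines = [f"Story: {self.title}", self.narrative, "Acceptance criteria:"]
        lines += [f"  - {c}" for c in self.acceptance_criteria]
        return "\n".join(lines)


session_story = SecurityStory(
    title="Session timeout",
    narrative=(
        "As a user, I want my session to expire after a period of "
        "inactivity so that an unattended browser cannot be abused."
    ),
    acceptance_criteria=[
        "Sessions are invalidated server-side after the idle timeout.",
        "Re-authentication is required to resume work.",
    ],
)
print(session_story.render())
```

The same structure answers the “story vs. acceptance criteria” question: a user-facing requirement gets its own story object, while a back-end requirement becomes another entry in an existing story’s criteria list.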

Another area where product teams struggle is whether a release should have a dedicated sprint for security or add security requirements as acceptance criteria to user stories throughout the release cycle. I recommend a security sprint for all new or major releases because of time-intensive tasks such as manual penetration testing, architectural risk assessments, and threat modeling. But this should be a collaborative process with the product team, and I met regularly with product owners to assist with sprint planning and backlog grooming. I also found it useful to add a security topic to the MVS (minimum viable solution) contract.

I don’t pretend to have all the answers when it comes to improving software security, but spending time in the trenches with product development teams was an enlightening experience. The biggest takeaway: security teams have to grok the DevOps principle of collaboration if we want more secure software. To further this aim, I’m posting the set of user security stories and acceptance criteria I created here. Hopefully, it will be the starting point for a useful dialogue with your own development teams.
