Application security practitioners often preach the importance of shifting security left in the software development lifecycle (SDLC). The reason this catchphrase so easily resonates with leadership is simple: if security vulnerabilities can be identified and remediated earlier in an application’s development process, they are easier and cheaper to address. The logic checks out.

Consider this: a banking web application has been in development for nine months and is finally ready to be reviewed by the product security team before being pushed to production. During the review, the security team quickly identifies multiple high-risk vulnerabilities:

  • The application’s identity provider is the same one the organization uses for its internal systems.
  • Role-based access controls (RBAC) weren’t implemented properly.
  • Hard-coded credentials are stored as comments in source-code.
  • Missing anti-CSRF nonces in the application’s forms allow cross-site request forgery (CSRF).

Oops.

Now, what should’ve been a quick review just before shipping the latest release has turned into any product owner’s worst nightmare. The business must now sink weeks into retroactively patching, re-architecting, and debugging code fixes, only to face another round of security reviews to validate the remediations’ efficacy. This is usually where product security teams are made out to be “the bad guys”.

Shifting security left aims to solve this problem by introducing two critical functions into the SDLC:

  1. Embedding security architects in the Requirements and the Planning and Design phases of the SDLC.
  2. Integrating automated security checks into the developer’s build pipeline.

In a previous blog post, we discussed the purpose of the Requirements and the Planning and Design phases. More specifically, we explained that having security architects’ input early on can surface security issues, allowing proactive security controls to be baked in rather than bolted on at the end.

In this post, I’ll explain how we at Abricto Security practice security in our own SDLC for our Cloud Security and Reporting Automation product. Here’s our current architecture:

[Architecture diagram]

It’s pretty straightforward, but here’s a quick run-down of how a build typically occurs:

  1. A developer modifies or adds code in our codebase and pushes it to a GitHub repository.
  2. Jenkins polls the repository, detects the newly pushed code, and triggers a build.
  3. Once Jenkins has built the new code, it creates the latest Docker images and pushes them to Docker Hub.

The automated nature of this build process allows us to embed security checks that will “break” the build if security vulnerabilities are detected.
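To make this concrete, here’s a minimal, illustrative Jenkinsfile sketch of such a flow. The stage names, image name, and the single Trivy check shown are assumptions for illustration, not our actual configuration:

```groovy
// Illustrative sketch only -- "example/app" is a placeholder image name.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t example/app:latest .'
            }
        }
        stage('Security Checks') {
            steps {
                // Any non-zero exit code here fails ("breaks") the build
                // before the image is ever pushed.
                sh 'trivy image --exit-code 1 --severity HIGH,CRITICAL example/app:latest'
            }
        }
        stage('Push') {
            steps {
                sh 'docker push example/app:latest'
            }
        }
    }
}
```

The key design point is ordering: the security stages sit between build and push, so a vulnerable image never reaches the registry.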

A broken build indicates that an automated security check has failed. The failure is then inspected, and the identified vulnerability must be either remediated or formally accepted.

Below are the individual steps of our Jenkins project configuration.

Container Vulnerability Scanning

To understand whether any new vulnerabilities affect our latest containers, we leverage Trivy to scan each container and its layers for unpatched software, services, and libraries.
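A build-step fragment for this check might look like the following (the image name and severity threshold are illustrative choices, not our exact settings):

```shell
# Scan the freshly built image and its layers; exit non-zero
# (breaking the build) if any HIGH or CRITICAL vulnerabilities
# are found. "example/app:latest" is a placeholder image name.
trivy image --exit-code 1 --severity HIGH,CRITICAL example/app:latest
```

The `--exit-code 1` flag is what turns a scan report into a build gate: Jenkins treats the non-zero exit status as a failed step.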

Static Application Security Testing (SAST)

We’re huge fans of the open-source community, so it comes as no surprise that we leverage SonarQube for our static code analysis. SonarQube is easily added to Jenkins via a community-supported plugin, and it quickly identifies low-hanging fruit like hard-coded credentials, weak hashing algorithms, and unsafe functions.
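With the SonarQube Scanner plugin installed, a pipeline fragment along these lines wires the analysis into the build. The server name and project key below are placeholders:

```groovy
// 'MySonarServer' is a placeholder for the SonarQube server name
// configured under Manage Jenkins; 'example-app' is a placeholder key.
stage('SAST') {
    steps {
        withSonarQubeEnv('MySonarServer') {
            sh 'sonar-scanner -Dsonar.projectKey=example-app'
        }
    }
}
stage('Quality Gate') {
    steps {
        // Fail the build if the SonarQube quality gate reports an error
        waitForQualityGate abortPipeline: true
    }
}
```

The `waitForQualityGate` step is what lets SonarQube findings break the build rather than just generate a report.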

We also run our code through Semgrep to find context-dependent vulnerabilities and use OWASP Dependency-Check to track vulnerable software components.
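Both tools can act as build gates in the same way. The fragment below is a sketch; the project name and CVSS threshold are illustrative choices:

```shell
# Semgrep: --error makes the scan exit non-zero when findings are
# reported, so the build breaks.
semgrep scan --config auto --error .

# OWASP Dependency-Check: fail the step when any component carries a
# CVE with a CVSS score of 7.0 or higher (threshold is a choice,
# tune it to your risk appetite).
dependency-check.sh --project example-app --scan . --failOnCVSS 7
```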

Dynamic Application Security Testing (DAST)

Once our codebases have been scanned and validated, the containers are built and ready for DAST scanning. Dynamic application scanning is a precursor to penetration testing: the purpose is not to find every vulnerability, but rather those well-suited to detection by automatable checks. Such vulnerabilities include:

  • Missing cookie flags
  • Missing or misconfigured HTTP security headers
  • Reflected cross-site scripting
  • SQL injection
  • Command injection
  • Server-side request forgery
  • Open directory listing
  • Verbose error messaging
  • Vulnerable software components

OWASP ZAP is an open-source DAST scanner that can be configured to run in headless mode and auto-generate a findings report. We schedule ZAP to conduct nightly scans against the latest image builds to detect new vulnerabilities. Scans are run nightly rather than as in-line build steps to keep our build times fast, since these scans can often take thirty minutes or more.
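A scheduled job along these lines runs ZAP’s baseline scan from the official Docker image. The target URL is a placeholder for wherever the latest build is deployed:

```shell
# Nightly ZAP baseline scan in headless mode via the official image.
# "http://app.example.internal" is a placeholder for the staging URL;
# the HTML report lands in the current directory.
docker run --rm -v "$(pwd)":/zap/wrk:rw ghcr.io/zaproxy/zaproxy:stable \
    zap-baseline.py -t http://app.example.internal -r zap-report.html
```

The baseline scan spiders the target and runs passive checks only, which keeps it safe to point at a running environment on a schedule.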

Thanks for reading. Subscribe to our newsletter below to be notified when we publish future blog posts where we’ll dive deeper into each component of this CI/CD pipeline.