Despite the rapid growth of DevSecOps practices across industries, there is still a fair amount of trepidation, particularly among security practitioners and auditors. One of the first concerns that pops up is a blurted-out “You can’t do DevSecOps here! It violates separation of duties!” Interestingly, this assertion is generally incorrect and stems from a misunderstanding of DevSecOps, automation, and the continuous integration/continuous deployment (CI/CD) pipeline.

Myth – DevSecOps violates separation of duties.

What is “Separation of Duties”?

First, it may be helpful to understand what “separation of duties” (aka SoD or “segregation of duties”) is and what purpose it serves. You can read various write-ups defining separation of duties from Wikipedia, SANS, and the AICPA. Separation of duties is an internal control intended to reduce the incidence of errors and fraud in a system. At its core is the belief that having 2 or more people involved in creating and reviewing changes (whether to code or configs) is a net positive. I like the AICPA’s definition:

“The principle of SOD is based on shared responsibilities of a key process that disperses the critical functions of that process to more than one person or department. Without this separation in key processes, fraud and error risks are far less manageable.”

In general, having an extra set of eyes on system changes is helpful. However, there’s a glaring hole in separation of duties: it doesn’t (and cannot) account for collusion. So, in smaller team environments, it’s still possible for errors and fraud to survive to production, despite the internal control being in place.

It’s important to understand the core value proposition of this internal control, because that understanding will enable you to explain to auditors how an automated CI/CD pipeline still supports the control’s goals.

2 Myths of Separation of Duties with DevSecOps

Myth 1: DevOps + CI/CD Means Pushing Straight to Production

First and foremost, if you drill into concerns about meeting separation of duties requirements in DevSecOps, you’ll often find that security and audit people are simply misinformed. There is a misconception that having a CI/CD pipeline in place means developers are pushing code straight from their IDE to production with no oversight or testing. Ironically, nothing could be further from the truth. In fact, most organizations with a CI/CD pipeline do not have a fully automated system free of human intervention. Moreover, in most organizations it’s exceedingly rare for a single person to manage everything from dev to test to ops and deployment. All but the smallest startups will typically have at least 1-2 engineers writing code and an ops person handling environment management (and deployment).
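To make the point concrete, here is a minimal sketch of the kind of gate a pipeline can enforce before a deploy: the change must be approved by someone other than its author. The function name and workflow are illustrative assumptions, not any specific CI product’s API.

```python
# Minimal sketch (assumed workflow, not a specific CI product's API):
# a merge/deploy gate that enforces separation of duties by requiring
# approval from someone who is not the change author.
def can_deploy(author, approvers, min_approvals=1):
    """Allow deployment only if enough people other than the author approved."""
    independent = {a for a in approvers if a != author}
    return len(independent) >= min_approvals

print(can_deploy("alice", ["alice"]))  # author self-approval does not count
print(can_deploy("alice", ["bob"]))    # independent review satisfies the control
```

In a real pipeline this check runs automatically on every change, which is exactly why CI/CD tends to strengthen, rather than weaken, the control: the rule cannot be skipped on a busy day.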

Myth 2: Separation of Duties Is Effective at Stopping Fraud and Errors

One thing we know beyond a reasonable doubt is that errors in systems continue to occur no matter how much separation of duties is in place. As for fraud, DevSecOps and CI/CD actually make it easier to detect, because every change flows through a logged, repeatable pipeline. As such, while an extra set of eyes absolutely *is beneficial* for reducing errors, there will always be a point of diminishing returns. Moreover, human review alone cannot achieve the velocity necessary to keep up with the modern pace of business.

Separation of Duties Compliance in DevSecOps+CI/CD

The typical referenceable architecture for DevSecOps and CI/CD


At New Context, we come across many customers who have a great-looking CI/CD pipeline but still require some level of human intervention. Organizations that engage with New Context through our LS/IQ platform receive guidance on how to implement a successful, referenceable CI/CD pipeline:


  • The IDE should have integrated lint-like checks for code quality and code security.
  • The repository should be scanned on a regular, recurring basis by both a static application security testing (SAST) tool and a software composition analysis (SCA) tool (SCA checks libraries and functions/methods for versions with known vulnerabilities).
  • Later in the pipeline, use dynamic application security testing (DAST) in addition to standard code quality testing.
  • User acceptance testing (UAT) can also be heavily automated, especially when leveraging a test-driven development (TDD) methodology.
  • Infrastructure configurations can also be automated and checked using tools like Terraform and kitchen-terraform. Additionally, images or containers should be pre-hardened, with appropriate security tools integrated into the images or the hosting environment (such as sidecars for containers).
  • All of these tools and tests will generate output that must be fed into your issue tracker (e.g., JIRA, Pivotal).
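To show how the scan-and-gate steps above might hang together, here is a hypothetical pipeline gate sketched in Python: it merges findings from SAST and SCA reports and fails the build when any finding meets a severity threshold. The report shape, rule IDs, and threshold are illustrative assumptions, not any specific scanner’s actual output schema.

```python
# Hypothetical CI gate: merge SAST/SCA findings and decide pass/fail.
# The Finding shape and severity ladder below are illustrative
# assumptions, not a specific tool's real output format.
from dataclasses import dataclass

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

@dataclass
class Finding:
    source: str    # e.g. "sast" or "sca"
    rule_id: str   # scanner rule or CVE identifier
    severity: str  # low | medium | high | critical

def gate(findings, fail_at="high"):
    """Return (passed, blocking) -- blocking findings meet the threshold."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK.get(f.severity, 0) >= threshold]
    return (len(blocking) == 0, blocking)

if __name__ == "__main__":
    findings = [
        Finding("sast", "SQLI-001", "high"),
        Finding("sca", "CVE-2019-0001", "medium"),
    ]
    passed, blocking = gate(findings, fail_at="high")
    print("PASS" if passed else f"FAIL: {len(blocking)} blocking finding(s)")
```

In keeping with the last bullet above, each blocking finding would also be pushed to the issue tracker so a human can triage it; the gate itself just keeps known-bad changes from progressing automatically.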

In closing, conflicts like the one described here between DevSecOps initiatives and security or audit teams can be reasonably addressed, but only if all parties are willing to have open, respectful, and mindful conversations. Our Lean Security approach puts a strong emphasis on creating these conditions, which helps elevate the level of professionalism across organizations while improving efficiency, effectiveness, and security. We’re excited to apply our insights on improving business management and organizational culture, drawing on lessons learned from Lean, DevOps, TDD, and more.

This post was updated on October 17, 2019.
