I'm an old-school Orange Book person who has been working with NIAP Common Criteria, well, since we wrote it, and with Ron Ross and the NIST controls since v3 (you'll see I'm listed as part of the Joint Task Force). Recently, I've been thinking about the older notions of assurance (what we have captured as the SA-8 enhancements, the SC-3 enhancement, and of course AC-25 and Reference Monitors). These notions were great in a Waterfall Model world, but how does the notion of assurance fit in an Agile world?
I'm also involved with the Annual Computer Security Applications Conference; see https://www.acsac.org (week of Dec. 9, 2024, in Honolulu, HI). I'm coordinating a panel to discuss this issue: "Where Does Developmental Assurance and SSE Fit in an Agile DevSecOps World?". I'm trying to scare up some panelists, especially from the Agile side of the house (I think I've got some folks on the more traditional side). I'll paste the abstract and questions below. If you might be interested, or have a suggestion for a panelist, email me at faigin -at acsac -dot org (excuse the Multics syntax; it stymies email address scrapers).
Thanks. Here's the abstract:
When we did the TCSEC, the focus was on assurance through engineering. That's what the system architecture requirements were doing as one moved from B1 through A1. Elements of this were expressed in NIST SP 800-160, and in the SA-8 enhancements, where security engineering was emphasized. But these lofty notions of yore are crashing onto the cliffs of reality. We see efforts such as NIAP focusing on essentially EAL1 -- developer and user documentation plus a security target, because that's what is being done commercially -- and combining that with some level of specified testing. We're seeing the DOD moving to agile acquisition, exploring checkout pipeline testing, and lacking the time to put in detailed design efforts and development standards (instead relying on modeling and maybe some correspondence to reality). Are we back to "better, faster, cheaper -- pick any two"? Are the tried-and-true notions of doing system security engineering and having disciplined development and design of code dead? Will the buzzwords of "AI" and "Zero Trust" save us?
This panel dovetails with the recent establishment of Sandia's Digital Assurance for High Consequence Systems (DAHCS) Mission Campaign. This campaign (with an advisory board chaired by Dr. Gene Spafford) invests in research that develops generalizable scientific foundations to safeguard high-consequence systems such as satellites, hypersonic vehicles, nuclear weapons, and critical infrastructure like nuclear power generators. It aims to reshape the scientific domain from one driven by expert-dependent pockets of excellence (through techniques like red teaming, security-by-design, and formal analysis) into a sustainable, scalable, and rigorous discipline. Yet in many of these disciplines, the push has been toward agile development and DevSecOps, so how are these two divergent approaches to be reconciled? Formal methods and security-by-design are often time-consuming and measured; this is the opposite of the quick pace of agile.
Ron Ross argues that "Consumers need transparency, especially when hardware, software, and firmware components are being used in many systems that are part of the U.S. critical infrastructure. We know a lot about the food we eat and the medicines we take. It might be time to use the assurance concepts that have been developed over the past four decades to increase the trustworthiness of the components and systems that we depend on to protect individuals and the Nation." Absent that, is there a way to provide consumers of software and systems with an "Assurance Label" that accurately reflects the confidence they can have in the correctness of the design and implementation?
Panel Questions
Can the traditional notions of Development Assurance (Security Architectures, Detailed Design Decomposition and Review, Security Engineering Principles) be incorporated into Agile and Rapid Development methodologies?
What approach should be used to build highly trustworthy software in an Agile world? Are formal methods truly dead?
How can we ever gain confidence with all the frameworks and glueware in use behind the scenes? Have our systems gotten so complex that we can no longer understand or assess them (and AI, I’m looking at you)?
Is the battle lost: Have our systems become so distributed and complicated with so many pieces that an engineered security architecture has become impossible?
Is there a way to accurately label software so that consumers and acquisition agencies can gauge the level of assurance provided, or request the level of assurance required?