Marcus, you are constructing a straw man and then ripping it apart.
This is fun and maybe it's good exercise. But this has nothing to
do with formal assurance as applied to computer security.
I agree that over the decades many clueless academics have developed
inadequate approaches for applying formal assurance to systems. If
you wish to beat that straw man, the best source of material is an
old article by De Millo, Lipton, and the late Alan Perlis, published
in the CACM in 1979, called "Social Processes and Proofs of Theorems
and Programs." I saw Perlis give a talk on it, too, and he was a riot.
Incidentally, I don't endorse the "Subject" line of this message.
Assurance (the more the better) reduces the risk of "oops" but can
never eliminate it entirely. Computer systems are engineering
artifacts as you pointed out, not idealized mathematical entities; no
formal assurance or other design analysis will find everything. The
amount of assurance to buy is a decision between vendor and customer,
based on the customer's willingness to pay for more design analysis
to reduce uncertainty. It ain't cheap.
mjr> The mistake the formal verification folks make is when they
mjr> confuse the model with reality. Just because you can model
mjr> something doesn't mean you have reproduced or defined the
The Orange Book doesn't do this. We don't do this. Nobody I've ever
talked to who tried to seriously do Orange Book formal assurance
has done this.
If there's anything "wrong" with formal assurance it's the simple fact
that it's easy for careless practitioners to do it wrong, and expend a
lot of effort for nothing. That's why it's still the province of
people with special training and not something taught routinely like
circuit analysis or Karnaugh maps or bridge stress analysis.
mjr> In Rick's observation ..., the word "proof" can be used
mjr> meaningfully, since he is referring to a proof of one set of
mjr> definitions that are artificial in terms of another. :) That's
mjr> pretty believable. You're "proving" that the interface to the
mjr> system meets its requirements. Big Deal -- you're not able to
mjr> *prove* that the system actually *works* as specified, though
mjr> you can use this process to obtain a massive body of evidence
mjr> that points to its working the way you hope it does.
Ahh but it *IS* a Big Deal. Without that "proof" you have nothing
but oratory to support your claim that the design achieves any of
the system's requirements. The short history of computer security
has many, many examples of how convincing oratory has proven
insufficient. Pasting security levels into an existing OS looks
convincing but consistently yields major holes.
It's *hard* to design a system that achieves its security
requirements in the face of a broad class of attacks. Formal
assurance is a tool to find design weaknesses related to formally
modeled security properties. It does its job.
mjr> In practical terms what you're doing is writing your
mjr> assumptions, proving they are correct, and hand-waving about
mjr> the details. :)
You use "hand waving" and *proofs* and *testing* and *inspection*,
and together they take you from real-world requirements to
implementation.
You use "hand waving" (or "proof through oratory") to establish the
correspondence between the formal requirements and the real-world
requirements (i.e. the practical equivalence of noninterference
with the intent of multilevel security requirements).
You use *proofs* to demonstrate that the formal requirements
(noninterference) are achieved by your system design.
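To make "noninterference" concrete: in the Goguen-Meseguer
formulation, purging the high-level commands from any input sequence
must leave a low-level observer's view of the machine unchanged.
Here's a toy sketch in Python that exhaustively checks that property
on a trivial, made-up two-level state machine -- an illustration of
what the formal statement says, not any real TCB model:

```python
from itertools import product

# Toy two-level machine. State = (low_counter, high_counter).
# Commands are ("L", n) for low-level inputs, ("H", n) for high-level.
ALPHABET = [("L", 1), ("L", 2), ("H", 1), ("H", 2)]

def step(state, cmd):
    low, high = state
    level, n = cmd
    if level == "L":
        return (low + n, high)
    return (low, high + n)      # high inputs never touch the low half

def run(cmds):
    state = (0, 0)
    for cmd in cmds:
        state = step(state, cmd)
    return state

def low_view(state):
    return state[0]             # all a low-level observer can see

def purge_high(cmds):
    return [c for c in cmds if c[0] == "L"]

def noninterference_holds(max_len=4):
    # For every input sequence up to max_len, the low view with and
    # without the high-level commands must agree.
    for n in range(max_len + 1):
        for cmds in product(ALPHABET, repeat=n):
            if low_view(run(cmds)) != low_view(run(purge_high(cmds))):
                return False
    return True
```

In a real effort you *prove* this for all sequences rather than
enumerate short ones; the enumeration is only there to show the shape
of the requirement the design proof discharges.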
Then you use *testing* to demonstrate the relationship between the
design and the implementation. You also use code inspection
("implementation analysis") to show correspondence to design.
Without the formal proofs, you have nothing but "hand waving" to
show that your security requirements are fulfilled by your design.
Mere oratory like this has consistently failed in practice to yield
secure system designs. Formal assurance is to security design what
analytical tools are to designing a large network: same strengths,
same weaknesses.
Roseville, Minnesota