Great Circle Associates Firewalls
(November 1994)
 


Subject: Re: Assurance means never having to say "oops".
From: mjr@tis.com
Date: Wed, 9 Nov 94 13:41:21 -0500
To: padgett@tccslr.dnet.mmc.com (A. Padgett Peterson, P.E. Information Security)
Cc: "firewalls@greatcircle.com"@uvs1.dnet.mmc.com

A. Padgett Peterson, P.E. Information Security writes:
> mjr> I'm an experimental computer scientist, and here's where I have
> mjr> to (politely) part company with the formal methods folks, who
> mjr> appear to believe that software can be proven to be correct.
> mjr> This amounts to proving that software does not contain an
> mjr> error, which seems to be an attempt to prove nonexistence. :)
> 
> No (and have been certified by the late great State of Florida to
> say so). Formal methods can be satisfied by a state diagram that
> covers every possible condition. Given this *there are no errors*,
> only states (of course sometimes there are undiscovered states, and
> the fact that such a state exists constitutes the error, not the state).

	I'm not licensed or certified to think, and barely to drive,
but if formal methods can be satisfied by a state diagram that
covers "every possible condition" while at the same time one admits
that there may be undiscovered states, then the "proof" isn't
worth the paper it is printed on, and you're doing nothing more
than wrapping a layer of formal-sounding obfuscation around
what is really an intelligent, informed set of assumptions.

	That's fine, but it's pseudoscience, and while it plays
well to academics and to people who are easily intimidated by fancy
graphs, it doesn't actually advance the state of anything, except
by causing people to enumerate their assumptions and to think
about the problem. THAT IS VERY VALUABLE. I believe we should
do this whenever possible, but I'd rather do it without all the
theoretical handwaving.

	An "undiscovered state" is a nice word for "a bug we didn't
think of."

	Any software engineer who has worked on real code can
tell you that "undiscovered states" are like roaches. Not only
are there more of them than you'd expect, but there are more
of them than you even want to think about. :)
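
	To make that concrete, here's a contrived C sketch (the
state machine and its states are made up for illustration, not
anybody's real protocol). The switch handles every state the
designer thought of; the default case is where the roaches live:

    /* A tiny connection state machine. The enum lists the states
     * the designer thought of; the default case stands in for the
     * ones he didn't. */
    #include <stdio.h>

    enum conn_state { CLOSED, LISTENING, OPEN, CLOSING };

    const char *next_action(enum conn_state s)
    {
        switch (s) {
        case CLOSED:    return "accept nothing";
        case LISTENING: return "accept connection";
        case OPEN:      return "pass traffic";
        case CLOSING:   return "drain and close";
        default:        /* "can't happen" -- until it does */
            return "undiscovered state: a bug we didn't think of";
        }
    }

    int main(void)
    {
        /* Memory corruption, a flipped bit, or a hostile peer can
         * hand us a value no state diagram ever showed. */
        printf("%s\n", next_action((enum conn_state)42));
        return 0;
    }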
 
> Truth tables, max & min terms, and boolean logic are part of what was
> once taught in schools before DOOM took over the classroom. The fact
> is that a computer is *by definition* predictable given an accurate
> model and therein lies the rub - accurate models are few and far
> between.

	The mistake the formal verification folks make is
confusing the model with reality. Just because you can model
something doesn't mean you have reproduced or defined the thing.

	When you say "accurate models are few and far between"
you're making a pretty whopping understatement. A "model" is
necessarily limited, *EVEN* if you restrict it to just what
you think you understand about the system. Formal models
survive only by carefully leaving whole parts of the problem
out, so that they can attempt a reasonable degree of
completeness.

	People who work with real computers can tell you that they
are *NOT* by definition predictable. They are *intended* to be
predictable, but in fact they do all manner of weird stuff. An
instruction that adds two numbers may give a wrong answer when
the machine gets hot. A bus that is supposed to give you one
chunk of data may give you another because the computer's clock
chip is having an off nanosecond.

	The only thing you can prove from a model is the
existence of the model, and that the model is self-consistent.
Circular reasoning is circular reasoning, whether it is
formal, theological, or political. At least the Marxists
and Freudian psychologists don't try to support their
positions with huge reams of boring proofs. They just tell you
you can't admit you're wrong because you're repressing
your anxiety or you're a brainwashed capitalist tool.
(I am both)

> Undocumented calls, proprietary algorithms, & race states all tend to
> muddy the waters so that a fully accurate model becomes somewhat
> difficult to produce, just as Alan Turing predicted. As a consequence
> we have taken to IV&V & massive "beta tests" of software to try to 
> discover those states that we did not expect *because we do not
> fully understand the systems we are dealing with*.
> 
> Since "formal methods" today means essentially a SWAG, no wonder
> there is confusion but it stems from our lack, not the method's.

	I have to assume you're kidding here. Certainly, it is
not the fault of the formal method that it cannot accurately
model reality to a useful degree. Intellectual false starts
don't need an apology -- it's not the Flat Earth Hypothesis'
fault that Earth appears to be unrelentingly spherical, but
that doesn't mean it should somehow be taken more seriously
just because reality was so inconsiderate as to refuse to
acknowledge it.


> >Formal assurance (as we're doing on the SNS Mail Guard) consists of
> >several separate tasks. The objective isn't to prove the system is
> >flawless, but to prove that the design is consistent with the
> >security policy we claim to enforce. 
> 
> This concept has bothered me since the first time I ran into it eons
> ago. Doesn't say it works in the real world, just that it is consistent
> with their model of it.

	Bingo. Now, I agreed with 99% of the things Rick wrote.
They are Good Common Sense. It seems (Rick, correct me if I am
stepping on your words here) that he advocates:

	1) Think about the problem methodically
	2) Try to define the correct behaviour of the system, so
		that we can have a better chance of *recognizing*
		a correctly behaving system if/when we see one
	3) Use the results from #2 to develop empirical tests that
		we can apply to see if we have a system that meets
		our design goal

	The only quibble I have with all this is when the word
"proof" comes into the picture. :) I'd say that one can use
these methods to amass a large amount of *EVIDENCE* that the
system is consistent with the policy it is designed to enforce,
but - *prove* it? Ha.
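
	As a minimal sketch of what I mean by amassing evidence
(the policy, the port numbers, and the filter here are all made
up for illustration): write the correct behaviour down as a
predicate, build the system separately, and hammer the two
against each other. Every agreeing case is evidence; none of
it is proof.

    #include <stdio.h>

    #define DENY  0
    #define ALLOW 1

    /* Step 2: the behaviour we define as correct -- deny inbound
     * connections to privileged ports, except SMTP (25). */
    int policy_says(int port)
    {
        return (port >= 1024 || port == 25) ? ALLOW : DENY;
    }

    /* Toy stand-in for the system under test; in real life this
     * is a separate artifact, which is the whole point. */
    int filter_decision(int port)
    {
        if (port == 25)  return ALLOW;   /* SMTP */
        if (port < 1024) return DENY;    /* privileged ports */
        return ALLOW;
    }

    int main(void)
    {
        int port, failures = 0;

        /* Step 3: empirical tests derived from the definition. */
        for (port = 0; port < 65536; port++)
            if (filter_decision(port) != policy_says(port)) {
                printf("port %d disagrees with policy\n", port);
                failures++;
            }

        printf("%d failures in 65536 cases -- evidence, not proof\n",
               failures);
        return 0;
    }

	Point the harness at the real filter instead of the toy and
sixty-five thousand agreeing cases tell you something useful --
and still don't *prove* a thing.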
 
> >There are three parts to the formal assurance task: 1) develop a
> >formal specification of the security policy, 2) develop a formal
> >specification of the interface to the system's security related
> >services, and 3) do a proof that the interface does not contradict
> >the policy.
> 
> IV&V - the designers were supposed to do this up front also.

	In Rick's observation above, the word "proof" can be used
meaningfully, since he is referring to a proof about one set of
artificial definitions in terms of another. :) That's
pretty believable. You're "proving" that the interface to the
system meets its requirements. Big Deal -- you're not able to
*prove* that the system actually *works* as specified, though
you can use this process to obtain a massive body of evidence
that points to its working the way you hope it does.
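
	Here's a minimal sketch of the kind of "proof" that does
mean something (the levels and the no-read-up / no-write-down
policy are my stand-ins, not necessarily what Rick's project
uses): exhaustively checking one set of definitions -- an
interface spec -- against another -- a policy. Over a finite
space that check really is a proof, and it is a proof about
the definitions only:

    #include <stdio.h>

    #define LEVELS 4   /* 0 = unclassified .. 3 = top secret, say */

    /* Interface spec: what the system's services claim to permit.
     * These are definitions, not the code that ships. */
    int spec_can_read(int subj, int obj)  { return subj >= obj; }
    int spec_can_write(int subj, int obj) { return subj <= obj; }

    int main(void)
    {
        int s, o, violations = 0;

        for (s = 0; s < LEVELS; s++)
            for (o = 0; o < LEVELS; o++) {
                if (spec_can_read(s, o) && s < o)
                    violations++;                 /* read up */
                if (spec_can_write(s, o) && s > o)
                    violations++;                 /* write down */
            }

        printf("%d contradictions between spec and policy -- "
               "the running code is another story\n", violations);
        return 0;
    }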

	In practical terms what you're doing is writing your
assumptions, proving they are correct, and hand-waving about
the details. :)

	That's fine, and I support that, because I'm an engineer
and I've done hand-waving about the details all my career. What
gets my goat is when someone tries to *prove* that he can write
good code without dusting off a keyboard. :)
 
> Occasionally, I am called into a program to make an independent examination.
> First the team disappears for a lengthy period with all of the design data
> and source code and does not emerge until we understand the design completely.
> Then we start "playing with" the system until we feel thoroughly comfortable
> with it. Only then does the real testing start. It is a long, slow process for
> complex systems, but again IMHO it is necessary - anything else is a
> modern equivalent of monkeys banging on typewriters: occasionally you
> will get something useful, particularly if the environment is target rich,
> but it is hardly exhaustive.

	This is what I call "experimental computer science" and I
am all in favor of it.
 
mjr.
