mjr> I'm an experimental computer scientist, and here's where I have
mjr> to (politely) part company with the formal methods folks, who
mjr> appear to believe that software can be proven to be correct.
mjr> This amounts to proving that software does not contain an
mjr> error, which seems to be an attempt to prove nonexistence. :)
No (and I have been certified by the late great State of Florida to
say so). Formal methods can be satisfied by a state diagram that
covers every possible condition. Given this, *there are no errors*,
only states (of course sometimes there are undiscovered states, and
the fact that such exist constitutes the error, not the state itself).
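The "no errors, only states" claim amounts to saying the transition function is total: every (state, input) pair has a defined next state. A minimal sketch, using an invented turnstile example (all names here are illustrative, not from any real system):

```python
# A state diagram "covers every possible condition" when its transition
# table is total over states x inputs.  Toy turnstile model:
STATES = {"locked", "unlocked"}
INPUTS = {"coin", "push"}

TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

def is_total(states, inputs, transitions):
    """True iff every (state, input) pair has a defined next state."""
    return all((s, i) in transitions for s in states for i in inputs)

print(is_total(STATES, INPUTS, TRANSITIONS))  # True: no undiscovered states
```

An "undiscovered state" in this framing is exactly a missing table entry: delete one transition and `is_total` reports the model incomplete, which is the error, not the state itself.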
Truth tables, max & min terms, and boolean logic are part of what was
once taught in schools before DOOM took over the classroom. The fact
is that a computer is *by definition* predictable given an accurate
model, and therein lies the rub - accurate models are few and far between.
Undocumented calls, proprietary algorithms, & race conditions all tend to
muddy the waters so that a fully accurate model becomes somewhat
difficult to produce just as Alan Turing predicted. As a consequence
we have taken to IV&V & massive "beta tests" of software to try to
discover those states that we did not expect *because we do not
fully understand the systems we are dealing with*.
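The truth-table machinery mentioned above can be sketched in a few lines: any boolean function over a finite number of inputs has a canonical sum-of-minterms form, and agreement can be checked exhaustively because the table is finite. The function below is an arbitrary toy, chosen only for illustration:

```python
from itertools import product

# An arbitrary boolean function of three variables.
def f(a, b, c):
    return (a and not b) or c

# Minterm indices: the truth-table rows (a,b,c as a 3-bit number)
# where f evaluates to 1.
minterms = [i for i, (a, b, c) in enumerate(product([0, 1], repeat=3))
            if f(a, b, c)]

def sum_of_minterms(a, b, c):
    # Each minterm is an AND of literals; f is the OR of its minterms,
    # which here reduces to membership of the row index in the list.
    row = (a << 2) | (b << 1) | c
    return int(row in minterms)

# Exhaustive check: the canonical form agrees with f on all 8 rows.
assert all(sum_of_minterms(a, b, c) == int(bool(f(a, b, c)))
           for a, b, c in product([0, 1], repeat=3))
```

The point of the classroom exercise is the exhaustiveness: eight rows, eight checks, nothing left undiscovered - which is precisely what stops scaling once the state space is no longer small and fully known.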
Since "formal methods" today means essentially a SWAG, no wonder
there is confusion but it stems from our lack, not the method's.
Rick Smith from the Secure Computing Technology Center (in case you
wondered why SCTC) writes:
>Formal assurance (as we're doing on the SNS Mail Guard) consists of
>several separate tasks. The objective isn't to prove the system is
>flawless, but to prove that the design is consistent with the
>security policy we claim to enforce.
This concept has bothered me since the first time I ran into it eons
ago. It doesn't say the system works in the real world, just that it is consistent
with their model of it.
>That's the bottom line -- you find the problems so you can fix them before
>they cause a security problem.
Sounds like what we used to call "defining the envelope" but that was
supposed to happen *before* the first line of code was written. True, it
needs to be dynamic and accept changes since the world may not be so nice
as to follow your model.
>There are three parts to the formal assurance task: 1) develop a
>formal specification of the security policy, 2) develop a formal
>specification of the interface to the system's security related
>services, and 3) do a proof that the interface does not contradict
>the security policy.
IV&V - the designers were supposed to do this up front also.
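The shape of that three-part obligation can be shown on a finite model. This is a hypothetical sketch, not SCTC's method - real formal assurance uses machine-checked proof over specifications, while this just checks a toy policy against a toy interface exhaustively (all names invented):

```python
from itertools import product

LEVELS = ["unclass", "secret"]          # security levels in the toy model
DOMINATES = {("secret", "unclass"), ("secret", "secret"),
             ("unclass", "unclass")}    # (subject, object) read permissions

# 1) Formal specification of the security policy.
def policy_ok(subject, obj):
    """A subject may read an object only if it dominates it."""
    return (subject, obj) in DOMINATES

# 2) Formal specification of the security-related interface.
def interface_read(subject, obj):
    """The guard's read call: returns data or refuses with None."""
    if (subject, obj) in DOMINATES:
        return "data"
    return None

# 3) "Proof" that the interface does not contradict the policy --
# on a finite model, an exhaustive check over every case.
for subject, obj in product(LEVELS, LEVELS):
    granted = interface_read(subject, obj) is not None
    assert not granted or policy_ok(subject, obj)
print("interface consistent with policy on all cases")
```

Note what this does and does not establish, which is Smith's point: the interface is consistent with the stated policy, full stop - nothing is proved about whether the policy itself matches the real world.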
>Formal assurance costs more since you have to do more engineering work
>to make things _really_ tight. It's worth it if you need to ensure a
>really strong separation between security domains. That requires a lot
>more caution than most Internet subscribers show today.
Well, it costs more since you need *different* people on the review team
and they have to be as good as the people who designed it in the first
place - often small organizations do not have this sort of staffing. Here
it is a good idea to present something you think is good to outsiders
for them to bang on. Fallibility is human and it is also good to recognize
it. Unfortunately, a request for free IV&V issued as a challenge while
holding back design information as "proprietary" sounds, IMHO, more like arrogance.
Occasionally, I am called into a program to make an independent examination.
First the team disappears for a lengthy period with all of the design data
and source code, and we do not emerge until we understand the design completely.
Then we start "playing with" the system until we feel thoroughly comfortable
with it. Only then does the real testing start. It is a long, slow process for
complex systems, but, again IMHO, it is necessary - anything else is a
modern equivalent of monkeys banging on typewriters: occasionally you
will get something useful, particularly if the environment is target rich,
but it is hardly exhaustive.
Unfortunately, venture capitalists usually want something *now* and few
engineers expect to remain in an organization for their whole career so
glitz and glamour (in the original meaning) are the norm. Some protections
even work - against amateurs - but professionals do not play by the rules
(nor do they know or care what security policy a system was designed to,
except as a clue to its vulnerabilities); they play to win.
If a professional were to be given a target of a certain system, (s)he would
not knock on the front door; a duplicate would be purchased and
disassembled, poked, prodded, and turned every which way but loose (the third
of Clint Eastwood's comedies). At the same time, all of the people surrounding
the system would be investigated (there is a very real reason why many
biometric access devices also have a heartbeat monitor). Sometimes a
very good security system places *personnel* at risk unless this aspect is considered.
Now in truth I see a lot of good thinking coming out of Minnesota these
days (guess those winters give them more time to think 8*) wrapped in
some strange packages. Occasionally it is a good idea to return to basics
and think about what you want your image to be. Sometimes a review of
Dogen Zenji's "Instructions for the Zen Cook" is in order.