Information security is based on secrets; no surprise there.
I think, however, that there is a useful distinction to be made
between limited-scope (user or account-specific) secrets, and what I call
structural secrets. Structural secrets are fundamental system weaknesses
intentionally left unguarded in the original design. Structural secrets, if
and when revealed, utterly undermine the security and integrity of the
whole system, or a class of devices, or an array of data files.
The 2600 hertz tone is the classic example of a structural secret
designed into a network control system. When the phone phreaks discovered
that whistling a 2600 tone gave them control of a telephone switch, they
had discovered Aladdin's Lamp. Everything was possible. And beyond the
secret of the 2600 tone, there were apparently few defensible barriers or
controls built into The Mother's Network. When the structural secret about
the telephone net's control channel was revealed, the whole security
paradigm (such as it was) collapsed and Ma Bell was left naked and open for
exploitation and exploration.
Isn't this what most people mean when they refer to "security
through obscurity"?
(Admittedly, system bugs can have the same effect, but because they
are not part of the conscious design, they don't draw the same scorn --
except perhaps on this list;-)
STO is, after all, a slur -- typically a term of disgust and
professional embarrassment. (Ironically, it was a slur even among the
sneering phone phreaks of yore, and it remains a scornful description over
on alt.2600.) STO is much more than a mere descriptive phrase, or some sly
acronym for secrecy. For at least 20 years it has been a phrase used to
bitterly describe and condemn a security-blind design.
Narrow-focus (user or account-specific) secrets -- like a
traditional account password, a SecurID secret key, or even a crypto
session key -- escape the STO condemnation because (even if they are
somehow compromised) the design of the larger system, to a greater or
lesser degree, limits the damage associated with their revelation.
Thankfully, few secrets revealed can create a legend like 2600.
Robert Bonomi <bonomi @ edu> tried to sort out the
classes of secrets that are used in authentication systems:
>*All* secure authentication systems, with the _possible_ exception of bio-
>metrically based ones, are based on 'secrets', in one form or another. The
>user 'proves' identity or authorization, by demonstrating knowledge of the
>'secret'. In broad, there are two methods of demonstrating such knowledge:
>1) disclosing the secret to the opposite party, who *also* has knowledge of
>the secret, and who makes a 'simple' equality test. 2) -using- that 'secret',
>in a manner which "proves" (to some arbitrary 'confidence limit') possession
>of that knowledge, *without* disclosing the secret itself. <snip>
>'Type 1' authentication includes things like re-usable passwords, a phone
>number that is known only to 'authorized' users, knowing what 'port' a
>specific service is on, etc.
>'Type 2' authentication includes things like HHA, one-time passwords,
>biometric systems, true challenge-response systems, etc.
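Bonomi's two classes can be sketched in a few lines of code. This is a
minimal illustration, not anyone's production protocol; all the names here
(SHARED_SECRET, the function names) are invented for the sketch. Type 1
ships the secret across and does an equality test; Type 2 proves
possession of the secret by HMAC-ing a fresh challenge, so the secret
itself never travels.

```python
import hashlib
import hmac
import secrets

# A secret known to both the claimant and the verifier.
# (Invented value, purely for illustration.)
SHARED_SECRET = b"correct horse battery staple"

# Type 1: the claimant discloses the secret; the verifier makes a
# 'simple' equality test. The secret crosses the wire every time.
def type1_authenticate(submitted: bytes) -> bool:
    return hmac.compare_digest(submitted, SHARED_SECRET)

# Type 2: challenge-response. The verifier issues a fresh random
# challenge; the claimant returns an HMAC over it, demonstrating
# knowledge of the secret without ever disclosing it.
def make_challenge() -> bytes:
    return secrets.token_bytes(16)

def type2_respond(challenge: bytes) -> bytes:
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def type2_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Note the asymmetry: a wiretapper who records a Type 1 exchange owns the
secret; one who records a Type 2 exchange owns only a response that is
useless against the next (fresh) challenge.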
An obvious but important footnote: secrets are not the only
devices for ID authentication. I suggest we can more productively use the
classic
and hoary description of the three ways in which a computer can
(automatically) authenticate a claimed identity: something known, something
held, something one is.
To overlook the value of a physical security token, be it a brass
key or an OTP card, is to overlook the logic behind the most widely used
schemes for robust (two-factor) ID authentication today.
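The two-factor logic above can be sketched as code: something known (a
memorized password) must pass *and* something held (an event-based
one-time-password token) must pass. The OTP computation below follows the
HOTP algorithm of RFC 4226; the surrounding names and the verification
wrapper are invented for this sketch.

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """Event-based one-time password per RFC 4226 (HMAC-SHA1 + dynamic
    truncation of the counter's MAC)."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def two_factor_ok(password: str, otp: str,
                  stored_password: str, token_key: bytes,
                  counter: int) -> bool:
    """Both factors must verify: the secret known and the token held."""
    return (hmac.compare_digest(password, stored_password)
            and hmac.compare_digest(otp, hotp(token_key, counter)))
```

Compromise of either factor alone -- a shoulder-surfed password or a
stolen token -- is not enough, which is exactly the damage-limiting design
that keeps narrow-focus secrets out of the STO doghouse.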
Vin McLellan +The Privacy Guild+ <vin @
53 Nichols St., Chelsea, Ma., USA Tel: (617) 884-5548