[Some definitions for the unwary:
Evaluated system: one that has been tested (evaluated) by the
NCSC (NSA) at a specific level of security. It appears
on the EPL (Evaluated Products List).
System under evaluation: one that is not yet on the EPL. Note
that it may never get evaluated or complete evaluation.
Orange Book: the TCSEC (Trusted Computer System Evaluation
Criteria) sets out a number of features that are checked
in evaluation. Systems are evaluated at different classes,
denoted by digraphs such as the ones you've seen here:
D = no security (e.g., Windows, etc.)
C2 = basic features (e.g., UNIX, VMS)
B = more complicated features including labelling
A = *simpler* with often fewer features but with
associated formal design, etc.
Trusted systems: systems designed with an eye towards the TCSEC.
Assurance: the property of knowing that what you *think* is,
actually *is*, applied to computer security.
CMW: compartmented mode workstation. A B1 system with some B2
features. Includes labelling but not covert channel
analysis or some of the really wizzo stuff.]
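For those keeping score, the "labelling" the B classes require is mandatory access control in the Bell-LaPadula style. A toy sketch of the two core rules (the level names and numbers are illustrative; real systems also add compartments/categories on top of the levels):

```python
# Toy Bell-LaPadula labelling: "no read up, no write down".
# Levels and their ordering are illustrative, not from any real system.
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}

def dominates(a: str, b: str) -> bool:
    """True if label a is at or above label b in the lattice."""
    return LEVELS[a] >= LEVELS[b]

def may_read(subject: str, obj: str) -> bool:
    """Simple security property: a subject may not read up."""
    return dominates(subject, obj)

def may_write(subject: str, obj: str) -> bool:
    """*-property: a subject may not write down (leak to a lower level)."""
    return dominates(obj, subject)
```

The point of the sketch is only that the rules themselves are trivial; the hard (and expensive) part is making every device and process in the system actually honor them.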
Ray Kaplan writes:
>> The rigor of trusted system design is a market disaster and will
>>never succeed. When you talk to the trust engineers it's like talking to
>>a Freudian psychologist. The logic isn't entirely circular, but if you've
>>bought into it, then it's inescapable.
>No argument there. However, I confess to confusion that is beyond this.
>The more security work I do, the less some of it makes sense. On one hand,
>there are people trying to solve hard problems. Many are mired in
>organizational politics, have no resources, and can't seem to even deal
>with the need for a security policy that is actually implemented and
>enforced. On the other hand, the problems that they confront can't be
>solved with the simple technology that they are using. In the face of
>this, I've tried to balance the confusion that I feel by seeking out
They are definitely hard problems. It's one thing to have
a system that's full of holes; it's another to have a system that's
full of holes and which is used for electronic commerce. It's yet
another thing to have a system that's full of holes and which
is used for launching H-bombs.
On my machine here (switchblade.iwi.com) I'm not running
any security and it's *GREAT*! I'm not even behind a firewall! I
refuse to firewall off my own home. :) *BUT* my business papers
and processing are all done on a different machine and the only
thing you can steal from my server here is a bunch of source code
I've mostly posted to the 'net years ago. I believe that is the
*TYPICAL* Internet connection and I believe this is a perfectly
good approach. It doesn't scale real well, though.
Now, if I were going to make my machine here be a server
for electronic funds transfer, you *betcha* I would not be running
X on it! And I'd probably strip it down to a state of near uselessness
in order to secure it.
If my machine were the launch console for H-bombs I'd
strip it to a point of beyond uselessness, to secure it! :)
The point here is that the solutions need to match the
problems. IF people who buy their computing solutions do it with
that in mind (they don't!) it's not too bad - you buy an ordinary
box for ordinary purposes and a CompaqLaunchPro for your H-bomb
console and suit the engine to the task. Most people use pliers
to drive nails, too; I know I've done it in the past.
Where trusted systems get bad is when word comes from on
high that all computing must be secure, and that everything needs
to be as secure as the launch console and suddenly nobody can get
their work done. At one customer site, we started engineering a
firewall and they wanted to make *SURE* (in the sense of assurance)
that Web-based viruses could not get in. Suddenly, all the solutions
become complex, draconian, expensive -- and WORST OF ALL -- you can't
use the Netscape browser anymore.
I submit to you that any computing system that is a
general purpose one (not a dedicated launch console or whatever)
that can't run Netscape is going to be a marketing disaster.
> For instance, I find some comfort in strict security
>engineering perspectives that demonstrate:
> 1) That C level TCSEC security features can't keep different
> classifications separated - something else is necessary.
> 2) *Someone* needs to look at designs / code / deployment / operations
> and measure its ability to meet *some* standard.
But, Ray, that's my *point* -- People have been saying this
for *years*! The Association of Computer Security Greybeards have
been talking about item #2 above for 10-12 years now and no real
progress has been made. That's why it's been a complete rout.
My take is that the ACSG have been too "hard core" and
basically called for "if it's not perfect, don't do it" which
caused the market to say, "ok." and go someplace else.
Something is necessary, but I don't know what it is (actually
I think I do but I'm keeping that idea for myself) and whatever it
is, it's not what's currently out there.
The problem with computers is that they're so absolute. In
engineering, when you build a bridge, you figure out how strong the
materials need to be, multiply by two or three, and away you go.
There's nothing you can do like that in computing. The computer
security analogy of "engineering overhead" would go something like:
-> First, run C2 security
-> Then, to be sure, shut off all processes except one
-> Lastly, power it off
There's just no way, with computers, to build in the
invisible redundancy that you can in a bridge. Or maybe there is?
*THAT* is my challenge to the ACSG: make the security an invisible
part of the infrastructure, like an engineer can when building a bridge.
> 1) Most trusted systems are very hard to use. However, I think
> this has more to do with the difficulty of the problems that
> they address from a strict security engineering perspective.
Yep. They *ARE* hard problems!
For the readers of this list who are not steeped in orange,
let me give an example. CMWs aren't able to really hit B2 because they
don't enforce unique access to devices. Sounds like a small matter?
Well, the problem is that the X display and its frame buffer are a
nasty problem if you're trying to keep data from here from leaking to
there. So, in orange land, you have to make the X server a "trusted"
process that itself enforces unique access. Making the X server a
trusted process means doing a security analysis (from a data leaking
perspective) of X. This is a problem. Can you imagine the cost and
effort? Can you imagine the impact on time to market? Yet the orange
book dogma is that it must be done or you run no windows. Using a
SPARC in console mode is not fun.
Anyhow, I agree with Ray. They are hard problems. Sometimes,
the manly thing to do is to say, "WOW! THAT IS A KILLER PROBLEM!"
and bag it, try to think of another way to do it, or just give
up entirely. That's what most of the world has done with secure
computing. Microsoft has not been hurt by the fact that Windows
has no security. I'm surprised they tout it in NT. They sure hide
it where nobody will bother to use it, but at least they made it
easy(er) to use.
> 2) The only ones who use them are those who are forced (or force
> themselves) to adhere to strict security engineering
I can think of a number of really crude responses I'd
love to make here. :) I know several people who like to be forced
to do painful, humiliating, or just plain uncomfortable things.
But even the masochists I know couldn't eroticize using a B2 system.
>Bottom line is this "cheap, easy, everybody else is doing it" dynamic can
>be shown to be flawed.
I don't agree. I absolutely do not. The annals of industry
are full of large companies that ignored the "cheap, easy, everybody
else is doing it" dynamic, and they're either out of business or
they're no longer large.
>In my view, boiling the mess down to cold
>reasoning reveals the inescapable conclusion that first order security
>engineering principles are simply not being followed by most
>security-related efforts - including most contemporary internetworking
Fundamental principles can be hard to follow, though. Take
one of my favorites:
"buy low, sell high."
Sometimes it's hard to implement. Sounds easy, doesn't it?
Formal computer security is a lot like that. It's full of easy sounding
stuff that is insanely hard to do. Covert channels? Great idea. You
can spend a million bucks a year thinking about covert channels
in your toaster oven and that's not even touching a network.
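For readers who haven't met the term: a covert channel is any shared resource that two processes can modulate and observe to smuggle bits past the access controls. A toy illustration (the shared state here, a scratch file's existence, is purely for demonstration; the channels a real analysis has to chase are far subtler: timing, disk-arm position, CPU load):

```python
# Toy covert *storage* channel: two processes that may not communicate
# directly can still leak bits through any state both can observe --
# here, whether a scratch file exists. File name is arbitrary.
import os
import tempfile

CHANNEL = os.path.join(tempfile.gettempdir(), "covert-bit")  # shared state

def send_bit(bit: int) -> None:
    """High-side sender: encode one bit as file presence/absence."""
    if bit:
        open(CHANNEL, "w").close()
    elif os.path.exists(CHANNEL):
        os.remove(CHANNEL)

def recv_bit() -> int:
    """Low-side receiver: recover the bit by probing the shared state."""
    return 1 if os.path.exists(CHANNEL) else 0
```

Plugging this one channel is easy; the million bucks a year goes into proving you found them all.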
> By some measures, the *real* absurdity is the notion that you can
>simply connect two disparate security perimeters together with cheap, easy
>solutions and expect everything to be OK.
But the alternatives are:
1) Doing nothing (which is proven to have problems)
2) Doing something formal and very complex (which is
proven to take forever, cost a mint, and
do something ridiculous like give you an
Internet connection that only accepts
That leaves "Do the best you can within cost constraints, time
to market, and reasonable effort."
>I ended up at
>lunch with a table of developers from several major vendors that were all
>working on building trusted X for their respective CMWs. Not thinking, I
>made a remark about how easy it must be to grab MIT code and bash it into
>shape. Spoons dropped into soup and there was laughter and choking.
>"Err, should I move to a different table?", I said. Naw, these folks just
>pulled me up short. After regaining his composure, one guy told me that -
>of course - this was their first idea. However, in reading the MIT code,
>he found comments buried deep in an impossibly convoluted case statement
>in X code that said "hey, I know this is ugly, but I'm a graduate student
>and I don't have to care." True or not, it makes the point that building
>something you can trust means starting from scratch. Givens in this
>process seem to include everything you pointed out - and more!
That's a true story. I suspect that millions of dollars have
been spent trying to security-engineer X. The real answer is:
"Under our guidelines, you really can't DO that."
The orange book game seems to be all about trying to get
around the orange book. It's this set of simple rules that are
hard to implement (at A1 you have to have frictionless disk drives)
(just kidding) that basically keep you from doing much of the stuff
you want to do, like being on a network, writing files, playing
on the Web. Trust engineering seems to be the process of saying, "ok,
given these constraints, how can we still manage to do it?"
The correct answer is: "give up, bag formal security,
get an account on AOL"
At least that's what a lot of people seem to do.
A number of times I have talked to folks who really should
not be on the 'net. I've listened to their firewall requirements,
reviewed their designs, and recommended that they cancel the T1,
and buy everyone at the facility an account on AOL, a modem at
home, and an extended work policy that lets them spend an hour a
day at home Internetworking.
For some reason, this recommendation shocks people because
I guess they think I'm supposed to sell firewalls.
>Nothing against MIT (I don't
>want to malign them in the least), but be it X or Kerberos - you can't even
>expect public domain code to be supportable, let alone trustworthy - except
>by the loosest of business and technical standards.
Ray, Ray, Ray...
You're lapsing into orange book think again!! Rewriting
everything from scratch only works if you're Rob Pike. And it
doesn't work if you need to use other people's products!
Whenever I hear someone say the kind of thing you're
saying in the paragraph above, I know I am talking with
someone who has never had to put a product out under deadline,
on 4 different platforms, next week, for customers who want
to pay half what you're charging for it.
I hate to break it to you, but a lot of commercial code
is complete, unmitigated crap, too. You just don't get to see
it because it's proprietary. Look at Windows internals and then
look at 4.4BSDlite and it's like the difference between finger
paintings and a painting by Meissonier. [I am implying that
4.4BSD is really nice stuff, and Windows is, well, a successful
commercial product that is cheap, fast, easy.]
>> At this point the trusted system mavens usually raise their
>>hands and say, "Time to market isn't everything! I'd rather have
>>security." The problem is that most of their users would rather have
>>Windows 95, Photoshop, the latest version of MSword, BSD4.4, etc.
>Yes, indeed. I propose that this is the problem. While trusted system
>mavens are not free from blame, the *real problem* is the notion that you
>can actually run Windows 95, Photoshop, the latest version of MSword,
>BSD4.4, etc without *some* discipline unless you don't care about security.
Thank you for playing.
Go tell your commercial customers to bag Windows and guess
who they'll bag.
It's not a question of not caring about security; it's a
question of caring about getting the job done. With 99% of the world
that takes top priority. Before you even think about security, make
sure it helps get the job done BETTER than the way it's being done
now, and if the answer is "it doesn't" then stick with the Government
customers who have different success criteria.
>>Look at the evaluated systems out there: they are all obsolete and
>>you can hardly run anything interesting on them. So the mission critical
>>systems get built on foundations of sand (no security) because the
>>secure systems suck too much to contemplate using. Give most users
>>a choice between CompuServe and DOCKMASTER and see which wins.
>Indeed. However, to heap all of the blame on DOCKMASTER and its ilk is not
>fair. It's the foundations of sand that most organizations' mission
>critical systems stand on that are the problem!
I'm not heaping blame. I'm just pointing out that DOCKMASTER
is literally unusable when compared with, say, CompuServe. They both
fulfill the same PURPOSE. If secure systems are going to win they
have to be usable. I was just using DOCKMASTER as an example. That
it has its adherents is expected. MULTICS was a commercial flop.
>Yes - but, only from the perspective of those who insist on running mission
>critical systems on foundations of sand! I think that a strict security
>engineering-based evaluation of this reveals that *an answer* IS available
>now. The real question is NOT if "the best answer" is available.
"an answer" that is not "the best answer" is going to be
a commercial failure. It may be "the right thing" in some people's
eyes but all that means is that you'll have a line of mourners
at your funeral.
You keep dancing around it - say it outright. What you're
hinting at is that everyone should run multilevel systems and that
they should replace their installed base and applications base,
eat the retraining cost, and use slow, crufty software that
costs 3X what the other stuff costs. Is that what you're saying?
If so, you need to have a MUCH stronger case that it'll help my
business to *COMPETE* and achieve world marketplace domination.
If you think that covert channel analysis was tough, try
selling evaluated systems as a way of achieving a productivity gain.
>> More trusted system philosophy: "if you don't use trusted
>>systems you are clearly not concerned about security." That's nonsense.
>Ummm, how about if we soften this to say "if you don't use first order
>security engineering principles, you are clearly not concerned about
>security."
Noooo, how about let's tell the truth: "if you don't use first
order security engineering principles it's probably because you had
other work to do, that took a higher priority."
I am *CONCERNED* about my roof developing a leak. I am not
*DOING* anything about it. That doesn't mean I don't care, or that
I am clueless about roofing, or that I am an evolutionary dead end.
>can pick and choose the pieces that we like and bend them into shape until
>they DO get usable systems built.
That's exactly what I see happening. The user community has
flatly rejected evaluated systems. They picked the pieces they liked.
They're waiting until usable systems get built. In the meantime
they have work to do. They WILL trade.
>BTW, I'd sure love to hear the details about exactly how
>your SMG's configuration to run TCP/IP over Email and do NFS works. Or,
>was this a joke?
It wasn't a joke. You simply encapsulate IP packets in
uuencoded Email messages, manually create the label and mail
it out. It's *slow* and the latency will kill you, but it will
work. There's a tunnel driver for UNIX (Jeff Onions') that sets
up a virtual network interface. All packets routing to the interface
appear in /dev/tun0 for read. You simply read each packet at an
application level, uuencode, and mail. On the other end, you reverse
the process. It requires a collaborator.
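The sender side of the scheme above can be sketched roughly as follows, under loose assumptions: the device path, peer address, and local mail server are all illustrative, and I'm using base64 where the original used uuencode (same idea, mail-safe text encoding). The receiving collaborator just reverses the steps and writes the packet back into its own tunnel device.

```python
# Sketch of IP-over-Email: read raw packets from a tunnel device,
# encode them as mail-safe text, and mail each one to a collaborator.
import base64
import smtplib
from email.message import EmailMessage

TUN_DEV = "/dev/tun0"              # tunnel device node (name illustrative)
PEER = "collaborator@example.com"  # hypothetical collaborator address

def encode_packet(packet: bytes) -> str:
    """Turn one raw IP packet into a mail-safe text body."""
    return base64.encodebytes(packet).decode("ascii")

def decode_packet(body: str) -> bytes:
    """Reverse the encoding on the receiving side."""
    return base64.decodebytes(body.encode("ascii"))

def mail_packet(packet: bytes) -> None:
    """Wrap one packet in an Email message and hand it to a local MTA."""
    msg = EmailMessage()
    msg["To"] = PEER
    msg["Subject"] = "ip-tunnel packet"
    msg.set_content(encode_packet(packet))
    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)

def tunnel_loop() -> None:
    """Pull packets off the tunnel device and mail each one out."""
    with open(TUN_DEV, "rb", buffering=0) as tun:
        while True:
            packet = tun.read(2048)  # tun devices return one packet per read
            if packet:
                mail_packet(packet)
```

As noted above, the latency is horrible, but nothing in IP says the link layer has to be fast.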
You *can* run TCP/IP over anything that supports the
bandwidth, even serial lines, DNS packets, Email...