>>The rigor of a
>>trusted system design can be (and regularly is) destroyed by misapplication,
>>improper operation, and slipshod management. This certainly includes
>>firewalls based on this technology.
Marcus J Ranum <mjr @
> [I'm going to do an evil thing, here, and completely slide by
>the rest of Ray's well-written and thought-out mail, because I think
>that the paragraph above cuts directly to the heart of the matter.
No problem - thanks. And, here I even flubbed the reply to the list by not
attributing you (e.g., I omitted "Marcus J Ranum writes" in my reply ;) Hey,
were it not for the turns that discussions take, this might as well be a
book instead of a mailing list, eh? The electronic publishing model that I'm
birthing has this free-flowing style at its core. It's also nice to hear
that there still appears to be some sanity left amid my confusion ;)
>I'm not picking on Ray here - but I'm going to use the paragraph above
>to explain and illustrate why the multilevel security philosophy has
>not, and never will, catch on, unless it's reformulated and remarketed.]
> The rigor of trusted system design is a market disaster and will
>never succeed. When you talk to the trust engineers it's like talking to
>a Freudian psychologist. The logic isn't entirely circular, but if you've
>bought into it, then it's inescapable.
No argument there. However, I confess to a confusion that goes beyond this.
The more security work I do, the less some of it makes sense. On one hand,
there are people trying to solve hard problems. Many are mired in
organizational politics, have no resources, and can't seem to even deal
with the need for a security policy that is actually implemented and
enforced. On the other hand, the problems that they confront can't be
solved with the simple technology that they are using. In the face of
this, I've tried to balance the confusion that I feel by seeking out
reasonable answers. For instance, I find some comfort in strict security
engineering perspectives that demonstrate:
1) That C level TCSEC security features can't keep different
classifications separated - something else is necessary (see the
toy sketch after this list).
2) *Someone* needs to look at designs / code / deployment / operations
and measure their ability to meet *some* standard.
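To put some meat on point 1, here is a toy sketch - entirely my own, in
Python, and nothing to do with any real TCSEC implementation - of why
discretionary controls can't keep classifications separated while a
mandatory label check can:

    # A toy model (mine) - not any real TCSEC system.

    # Discretionary (C level) world: the owner decides who reads a file.
    class DACFile:
        def __init__(self, owner, classification):
            self.owner = owner
            self.classification = classification
            self.readers = {owner}

        def grant(self, who):
            # Nothing stops the owner from handing SECRET data to an
            # uncleared user - separation is simply unenforceable.
            self.readers.add(who)

    # Mandatory (B level) world: the system checks labels, not the owner.
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2}

    def mac_read_allowed(subject_clearance, object_classification):
        # Bell-LaPadula "no read up": read only at or below your
        # clearance, whatever the data's owner would like to allow.
        return LEVELS[subject_clearance] >= LEVELS[object_classification]

    f = DACFile("alice", "SECRET")
    f.grant("mallory")                                # DAC happily leaks it
    print(mac_read_allowed("UNCLASSIFIED", "SECRET")) # False - MAC refuses

The owner's good intentions are beside the point; only the second model
takes the decision out of his hands.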
While there are other security engineering ideas that I find interesting,
these illustrate why trusted system ideas are "comfort food" amid my
confusion. Were it not for the fact that I seem to keep coming back to
them in the harsh light of retrospect, I'd just keep them on the shelf
along with the other cold / flu / "need a boost" remedies.
> The view Ray presents above: "trusted systems are great, but are
>just not being used right" is, in my view, a complete cop-out. The reason
>trusted systems are not being used right is because the way they are
>written they are UNUSABLE. Only someone who is forced to use them would
>even consider touching them!
In general, I agree. Only two points:
1) Most trusted systems are very hard to use. However, I believe that
this has more to do with the difficulty of the problems that
they address from a strict security engineering perspective.
2) The only ones who use them are those who are forced (or force
themselves) to adhere to strict security engineering principles.
>How has this happened? Well, it's a number of things:
> 1) Technology moves too fast for formal, dogmatic paradigms
> 2) Market-driven forces will not wait for formal methods
> 3) Time to market is everything - vendors throw everything
> overboard (especially security!) to get it out the
> door on time
Absolutely no argument there. However, I hasten to add that the drivers for
your list of causes are all based on a more fundamental dynamic:
organizations seem hell-bent on going down paths that seem - at first blush
- to be the easiest and cheapest. Perhaps only because everyone else is
doing it, perhaps only because it's the only apparent way to do things.
The bottom line is that this "cheap, easy, everybody else is doing it"
dynamic can be shown to be flawed. While this idea may be better discussed
in a forum like Risks, I believe that there is something practical for
firewallers (and other security types) here. In my view, boiling the mess
down to cold reasoning reveals the inescapable conclusion that first order
security engineering principles are simply not being followed by most
security-related efforts - including most contemporary internetworking
efforts. By some measures, the *real* absurdity is the notion that you can
simply connect two disparate security perimeters together with cheap, easy
solutions and expect everything to be OK.
> What's happened is that in order to meet the insanely complex
>design criteria of trusted systems, vendors have to design systems that
>are obsolete before they even go into evaluation. For example, by the
>time a vendor has rewritten their X server for CMW, it's 2 revs out
>of date, obsolete, buggy, and lacking support for the latest device
>drivers. In a technological market like the one we're in, if your
>product cycle is greater than 8 months-to-market you are TOAST.
Absolutely. However, here is a story that illustrates the more basic
problem with all of this. A few years ago I was at one of those "sort of
public" security conferences that recently opened its previously classified
sessions to the likes of me - you know, places where most attendees share
the kinship of clearances and obscure security-related work. I ended up at
lunch with a table of developers from several major vendors that were all
working on building trusted X for their respective CMWs. Not thinking, I
made a remark about how easy it must be to grab MIT code and bash it into
shape. Spoons dropped into soup and there was laughter and choking.
"Err, should I move to a different table?", I said. Naw, these folks just
pulled me up short. After regaining his composure, one guy told me that -
of course - this was their first idea. However, in reading the MIT code,
he found comments buried deep in an impossibly convoluted case statement
in X code that said "hey, I know this is ugly, but I'm a graduate student
and I don't have to care." True or not, it makes the point that building
something you can trust means starting from scratch. Givens in this
process seem to include everything you pointed out - and more!
I just left CyberSAFE (the dominant supplier of commercial Kerberos). After
two years of absolute insanity and pain, they actually have a brand new,
spanking clean code base that plays with the world while giving them a shot
at actually building things on top of it. Nothing against MIT (I don't
want to malign them in the least), but be it X or Kerberos - you can't even
expect public domain code to be supportable, let alone trustworthy - except
by the loosest of business and technical standards.
> At this point the trusted system mavens usually raise their
>hands and say, "Time to market isn't everything! I'd rather have
>security." The problem is that most of their users would rather have
>Windows 95, Photoshop, the latest version of MSword, BSD4.4, etc.
Yes, indeed. I propose that this is the problem. While trusted system
mavens are not free from blame, the *real problem* is the notion that you
can actually run Windows 95, Photoshop, the latest version of MSword,
BSD4.4, etc. without *some* discipline unless you don't care about security.
All by itself, object linking and embedding is enough to curl my hair -
internetworked or not.
>Look at the evaluated systems out there: they are all obsolete and
>you can hardly run anything interesting on them. So the mission critical
>systems get built on foundations of sand (no security) because the
>secure systems suck too much to contemplate using. Give most users
>a choice between CompuServe and DOCKMASTER and see which wins.
Indeed. However, to heap all of the blame on DOCKMASTER and its ilk is not
fair. It's the foundations of sand that most organizations' mission-critical
systems stand on that are the problem!
> The reason trusted systems are mis-deployed is because they
>are terrible for real world use.
> Trusted system guys have to stop telling the people who are
>trying to get real work done "nononono! you're using it WRONG!" and
>should spend their time trying to make trusted systems that are easy
>to use, with the security features completely hidden from the user.
Amen! If you run for office, I'll vote for you! Fact of the matter is
that if we are ever to have technology that we can trust, we'll have to
build in security from the get-go. At the risk of setting off a debate
about crypto, I hasten to point out that we might never see this given the
politics of secrets ;)
> But don't bother - it's too late. The installed base of
>insecure systems and practices is too large to be replaced and
>the demon of backwards compatibility rides all our backs. There
>might have been a time when secure computing could have become
>the norm but now it's too little, too late. I've had the pleasure
>of addressing the Association Of Computer Security Greybeards and
>when I've said this sort of thing their reaction is one of horror.
>"Trustworthy computing is almost a reality! Don't throw the baby
>out with the bathwater!" -- the sad fact is that it's been 10 years
>of effort and all that's come of it is obsolete software that is
>5 years behind the market curve. The baby never even got close
>to the bath.
Yes - but, only from the perspective of those who insist on running mission
critical systems on foundations of sand! I think that a strict security
engineering-based evaluation of this reveals that *an answer* IS available
now. The real question is NOT whether "the best answer" is available. Given
the politics of defense spending, I'm glad that CMW got out and deployed so
we could see its problems BEFORE things got ugly for those who built
businesses based on their belief that foundations of sand needed to give
way to sound security engineering.
>>The only thing I'd add is that the "make them be treated differently" be
>>stiffened to something like "provably force them to be treated according
>>to the security policy."
> Nope. Forget proofs. Come on. The proof guys have been
>ploughing that field for years and have come up empty. The reality
>is that proofs don't scale well with complexity, and in case you
>haven't noticed, every release of every program is 10% larger and
>more complex than the previous. The proofnicks have had their turn
>and it's been a dead loss.
Yep, but I think I may have misled you. The only "proof" that I require
when I do a security assessment is that the security policy is actually
implemented in a way that actually supports the business goals that are
supposed to underpin the thing in the first place! I'm not a math or
crypto or trusted system or defense guy. I'm only asking how a given mix
of security features meets the problems that it was deployed to solve. I
know, most security status quos are de facto conditions, and I freely admit
my purist perspective. It's this tension that makes me want to go get a
job mowing lawns rather than continue my security career ;) The local
McDonald's even has some openings ;) Or, maybe the universe is just playing
a joke on me: there ain't ever gonna be no real security. 'course, this
might ensure my income for a while ;)
>>>Why aren't more people doing
>>>this? Because of compatibility and installed base and support
>>>issues. I've seen grown men start to cry when they even THINK about
>>>running a labelled network...
>>Indeed. However, those who are serious about this do it - painful as it
>>is. Yoda (Star Wars Jedi Master) was right: "There is only do or not do,
>>there is no try."
> More trusted system philosophy: "if you don't use trusted
>systems you are clearly not concerned about security." That's nonsense.
Ummm, how about if we soften this to say "if you don't use first order
security engineering principles, you are clearly not concerned about
security"? The fact that many think that the only apparent contemporary
examples of security engineering principles are found in broken-down
evaluated systems is a consideration. I *do* know of vast internetworked
systems that meet their security policy without a shred of evaluation -
save the security engineering principles that were used to design / deploy
/ manage them.
> Only people with a lot of money and a lot of time can afford to
>bother and they usually do their important computing (where the work
>REALLY gets done!) on PCs at home! Sometimes trusted system think
>reminds me of those guys who'd rather ride a Harley Davidson hardtail
>than anything else in the world. "Sure it's slow, corners like a hippo,
>brakes like a banana on a greased cookie tray, drips oil, and sounds
>like a trainwreck - BUT IT'S A HARLEY" -- "Sure, it's Version 6 UNIX
>with no TCP/IP and no windows and it's slower than mud and I can
>only run it on hardware that is slower than my toaster oven's clock
>chip but it's A1!"
Or, more to the point for me: I love my own 1961 double door microbus. I
know it well, I can fix it when it breaks, I know its limitations, and I
know just how far I can trust it. When I need something different, I'll go
get it (e.g., a big, nasty, 4 wheel drive, five ton truck with a blade for
plowing the driveway after a blizzard). Like waiting for the city to
plow the street, one could wait to improve the security of access to
outside networks. However, just as I'd abandon my microbus for a nasty five
ton (that may not even have a heater) to make a path until the pros from
city snow removal arrive - I gotta do *something* to open and maintain a
trustworthy path to external networks. This has to be done - even in the
face of the fact that the five ton is too cumbersome to do a good driveway
job in all but the simplest cases. As the driveway is a mess (even after I
hack at it with the 5 ton and its blade), a private network will still be a
mess by some standards after it's internetworked. Hey, that Harley is a
mess. But, it sure beats the hell out of sitting on the side of the road
waiting for a more perfect ride to happen by. *Given the choice*, I'll
pick the Harley for lots of reasons - despite the fact that it's an old,
slow mess.
> I've been working with a lot of people lately and I haven't
>actually run into anyone in the commercial space IN MY ENTIRE CAREER
>who has been actually deploying trusted system technology. Perhaps
>you have, but from my viewpoint it looks like a total rout.
> It's not that people are not serious about trusted systems,
>it's that trusted system designers aren't serious about producing
>usable systems. [Actually, they are, they just haven't succeeded]
Yep, good point. I'd only point out that in the wake of their efforts we
can pick and choose the pieces that we like and bend them into shape until
we DO get usable systems built.
>> 1) The attempt to invent a new evaluation criteria (in the form of
>> a remake of the U.S. Trusted Computer System Evaluation Criteria
>> (TCSEC) into The Federal Criteria) seems to have failed - the Orange
>> book looks like the status quo for a while.
> Yep. The attempt appears (from here) to have been to write
>an envelope criteria that could be stretched to cover ANYTHING so that
>way people could actually get what they want to use in the door. It
>failed but I think it was mostly because the documentation was so
>big and arcane that nobody except the authors had time to read it all. :(
As it seems with most serious efforts to codify basic principles in a way
that accommodates everyone, eh? In my view, the only hope is that we can
simplify everything back to first order security engineering principles
that *can* actually be deployed.
>> 2) It looks like the attempt to have a nice compromise in the form
>> of the mix of security features found in the Compartmented Mode
>> Workstation (CMW) has fallen into disfavor.
> That was an interesting effort. From where I stand, the CMW
>effort was an internal revolution against trusted systems, playing
>within the rules. If you look at some of the things in CMWs they
>were anathema to the hardcore trust engineers. My take on it was that
>the users were Sick and Tired of having workstations with no windows.
>CMW seems to have been an elaborate maskirovka to get NFS and X-windows
>into DOD computing.
> I suspect it's failed because of time to market. Even the vendors
>have got to hate having to maintain stuff that's 2 revs out of date
>because of the evaluation.
Yep. And, as a result of it, we now have some interesting experience to
build on - not all of which has been very nice.
>>Smail is great. However, if
>>you are REALLY going to wall off mail, it takes trusted technology that
>>actually implements a security policy to control the upgrade/downgrade
>>issues between compartments/levels.
> The SMG (if that's what you're referring to) is a case in
>point. Give me one of the current SMGs and I can configure it to
>run TCP/IP over Email, and do NFS into and out of a classified
>environment. I believe this little loophole is being fixed but the
>whole problem is one of those "emperors new clothes" type deals.
>If you allow ANY large amounts of data in or out, I can run IP
>over it. Period. All you can do is make it slow and expensive. The
>long and short of the story is that it's a wasted effort. If the
>data needs to be absolutely secure: isolate it.
Or, use IP pipes that can actually enforce the security of the two security
perimeters that are being connected. Some of the RFC 1108 stuff seems to
be as viable as vendor offerings that put DES in their IP kernels for
packets. The trick is that ALL of the channels that connect a security
perimeter with another of its ilk must do it in a way that does not break
their respective security policy.
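For the flavor of what I mean by pipes that enforce the perimeter, here is
a toy sketch - my own invention, with only the classification codes taken
from RFC 1108 itself - of a gateway check that drops anything not labeled
within policy:

    # Toy gateway check - a simplification, not a real RFC 1108 parser.
    # The classification codes are RFC 1108's; everything else here
    # (names, the SECRET ceiling policy) is invented for illustration.

    RFC1108_LEVELS = {
        0x3D: "TOP SECRET",
        0x5A: "SECRET",
        0x96: "CONFIDENTIAL",
        0xAB: "UNCLASSIFIED",
    }
    RANK = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2,
            "TOP SECRET": 3}
    CEILING = "SECRET"    # this perimeter passes SECRET and below only

    def label_of(ip_options: bytes):
        # Walk the IP options looking for a Basic Security Option
        # (type 130) and return its classification level, if any.
        i = 0
        while i < len(ip_options):
            if ip_options[i] == 130:
                return RFC1108_LEVELS.get(ip_options[i + 2])
            if ip_options[i] in (0, 1):     # end-of-list / no-op
                i += 1
            else:
                i += ip_options[i + 1]      # skip by option length
        return None

    def forward_ok(ip_options: bytes) -> bool:
        level = label_of(ip_options)
        # Unlabeled or unrecognized traffic is dropped - fail closed.
        return level is not None and RANK[level] <= RANK[CEILING]

    # Option bytes: type=130, length=3, classification=SECRET
    print(forward_ok(bytes([130, 3, 0x5A])))   # True
    print(forward_ok(b""))                     # False - no label, drop

The point is the fail-closed default: a channel that can't show its label
doesn't get to cross the perimeter at all.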
>>I propose that this is because the "99% of the things that
>>people want to do" are not given due consideration in the light of a
>>rigorous risk analysis.
> Of course not! It's usually considered in terms of time to
>market and productivity gains.
>> As examples, consider that object linking and MIME
>>are wonderful new things that everyone wants to do. However, the lack of
>>security in the current designs that are seeing widespread implementation
>>is profound. That is, no one stopped to do a risk analysis. Had they done
>>so, I believe that the rigor of using a trusted system to enforce a
>>security policy that was targeted at reducing these risks would have become
>>common place by now.
> Hell no!
> Run trusted systems just so I could do MIME? No WAY! I'll
>just do MIME and bash some stuff into the interpreter to make it a
>little better and ride the tiger. That's what 99% of the people out
>there will do.
Indeed, this is the classic approach that I know. However, I'm not saying
put an A level system up to deal with MIME. I'm saying that some trusted
technology can be used to keep the MIME that is coming from untrusted
sources off systems that can't defend themselves. I propose that the way
you do this is to use your approach in conjunction with an isolation
mechanism that separates trusted Email from that which is suspect. For
instance, the internal network's critical partitions are walled off by
mandatory controls that only allow them to interoperate with others of
their ilk. I can see a departmental PC that is reserved to such a
partition, in the face of the realization that a process engineer's PC
outside of that partition will never cut it, since they are downloading
freeware drivers. BTW, I'd sure love to hear the details about exactly how
your SMG configuration to run TCP/IP over Email and do NFS works. Or,
was this a joke?
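While I wait to hear, here is my own guess at the skeleton of such a
tunnel - pure speculation, every name in it invented - just to make the
point that any channel moving bulk data both ways can carry IP:

    # Pure speculation - invented names, nothing to do with a real SMG.
    import base64

    def packet_to_mail(ip_packet: bytes, seq: int) -> str:
        # Wrap a raw IP packet in an innocent-looking mail message.
        body = base64.b64encode(ip_packet).decode("ascii")
        return "Subject: weather report %d\n\n%s\n" % (seq, body)

    def mail_to_packet(message: str) -> bytes:
        # The far end strips the headers and decodes the payload back
        # into a packet to hand to its IP stack.
        body = message.split("\n\n", 1)[1].strip()
        return base64.b64decode(body)

    fake_packet = bytes(range(64))
    msg = packet_to_mail(fake_packet, seq=1)
    assert mail_to_packet(msg) == fake_packet   # round trip intact

Slow and expensive, as you say - but a channel is a channel.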
> Rigor is nice but every time rigor gets put up against
>technological progress, it loses. :(
> I'm not saying to trash rigor, but if you're going to be
>doing in-depth risk analysis all the time, you're going to have
>to make it fast or you'll get left behind. I've seen too many cases
>where an organization has been thinking real hard about a firewall
>and found out that while they were thinking the guys in the research
>lab put in a T1 line. :(
Yep. The cure is for the bosses to say three things:
1) Hey, guys - listen up. This new lab now holds our golden eggs.
*All* traffic in and out of it will subscribe to the following
rules (bla, bla, bla...)
2) Here is some money, a body, and my franchise for you to pay
3) I'll be back from time to time to check how well my rules are
being followed. If I catch one (or all) of you messing around
with the rules, I'll own your first born male child
In my experience, this is really ugly, expensive, and hard - and it takes
the discipline of hard-headed security engineering to make it work. Hang
evaluations, except for the parts that are needed: the ability to *actually*
force certain traffic to be separated from certain other traffic. A B
level system as part of an isolation mechanism to solve this problem - yes.
A B level system for all of the isolation mechanism - no.
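The narrow, well defined scope I have in mind can be as small as this toy
default-deny table (partitions and names invented, of course) - the one
piece worth buying B level enforcement for:

    # Toy default-deny partition policy - all names invented.
    # The "golden eggs" partition interoperates only with its own ilk;
    # any flow not explicitly listed is forcibly separated.

    ALLOWED = {
        ("lab", "lab"),            # golden eggs stay among themselves
        ("office", "office"),
        ("office", "internet"),    # ordinary traffic takes the cheap path
    }

    def flow_permitted(src_partition: str, dst_partition: str) -> bool:
        # Default deny: if the pair isn't in the table, block the flow.
        return (src_partition, dst_partition) in ALLOWED

    print(flow_permitted("lab", "internet"))     # False - separated
    print(flow_permitted("office", "internet"))  # True

Everything else can live on cheaper, ordinary mechanisms.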
Fact of the matter is that I usually only see this happen *after* the
organization has had the hell scared out of it by an incident. Sadly, when
it happens it's usually devastating. I have as many fear grenades as any
security consultant. However, I prefer to get back to first order security
engineering principles and ignore the ugly mess that surrounds trying to
make everything on a private network flow through a convoluted evaluated
mess that is impossible to use. Slice off a chunk of the work that has
been done in trusted systems, trim off the bad parts, and use the rest in
the narrow scope of a well defined problem. I've a friend who travels the
world's back woods widely on a shoestring. Rather than starve, since he
can't stomach the half-rotten meat in open markets, he takes his
sharp knife, trims off the bad parts, and lives on a reduced level of
sustenance. Most times, the piece of meat with rotten edges is the only
alternative to having nothing to eat. Why don't we see
anyone allowing their firewall builders to slice off a little chunk 'o B
level to interconnect their mission-critical applications?
RayK 8) - Better Living Through Authentication - I usually only speak for myself
Ray Kaplan - Security Services - P.O. Box 23210 - Richfield, MN 55423
Phone / FAX (612) 861-7198 - currently: kaplan @
But, as with everything else in life, this will change.