> > To be honest, aren't we really discussing a security model for Unix
> > that's been extracted in retrospect?
Peter Da Silva replied:
> Nope. It was pretty obvious to me at UCB in 1980. I believe that CSRG
> understood it too... after all I was just a freshman, they were grad
But *today* we're interpreting the model in terms of mandatory access
control. At least, that's the distinction between Type Enforcement and
chroot(): TE has always been an explicit mechanism that unconditionally
blocks accesses according to stated permissions.
While they may have had some security goals in 1980, we can't possibly
claim they were trying to do *mandatory* access control. They may have
recognized some notion that access control in Unix should focus on the
directory structure, but this did not represent a *mandatory* access
control objective. The only truly mandatory access control appears in
the kernel mode/user mode distinction at the processor level -- not
even "root" can make arbitrary code transition into kernel mode during
normal system operation (well, not directly).
One reason chroot() is clearly *not* mandatory access control is
that the networking code isn't the only place where chroot()-style
protection breaks down. What about access to raw devices? A fellow here
who's been looking at chroot() likes to talk about how it only
protects access to the "cooked" file system while allowing backdoor
access via the "raw" devices, mknod() and such. At least, he's run
some experiments that access supposedly protected data that way.
Now, I know we can trot out another bandaid to fix mknod(), but don't
we have a pattern here? This is beginning to look like the problem of
keeping naughty users from achieving root -- there's always one more
hole hidden under some subdirectory or an unexpectedly transitive
trust relationship. This comes from not having a set of mandatory
protection objectives *up* *front* and designing to achieve them.
Secure Computing Corporation