thompson jeffrey w writes:
> Actually, the problem here is that if an operating system is not secure
>then there is no guarantee that the applications running on it can be "trusted"
I don't think you read my posting carefully enough.
There's a chain of trust, from the solder on the boards all
the way up to the application in its virtual address space. If there
is any weakness in any foundation, at any level, there is no
guarantee of anything. Add to that the fact that everything, from
the solder, to the silicon, to the instruction set, to the
operating system, to the compiler, to the application, is
human-built, and you expect "secure"?! I'll settle for "damn good" -
anything better than "it seems to work" is a pretty strong assertion. :)
>This has little to do with whether a program was coded "well", but whether
>you can trust that the program remains "well". Actually, it is very important
>to secure the base, as without it even the most secure application will not
>hold up. (Beware the power of uid 0)
I disagree. Indeed, I disagree violently. Looking at the
history of UNIX software security flaws, the applications are
far and away the biggest culprits. Securing the base is nice, but
it's kind of silly if the application you're running is sendmail!
Most of the cool O/S bugs require you to be logged into the system
to exploit them, whereas most of the network daemon bugs let an
attacker wander in without a login. I can tell you which I worry
about more.
>There is a difference between having a program run functionally well, and
>it being secure.
You should run for office! :)
The sentence above combines a practical measure of
reality with a cosmic ideal, as if the two can be meaningfully
compared. "Secure" is the ideal. "Appears to be OK" is the
reality. It doesn't *GET* any better than "I think it works."
>> At every point, each part of the system relies on the
>> guarantees made to it by the ones below it. Sometimes the
>> guarantees are false, often they are not. But at every point
>> in the system you're relying on human-generated code.
>This is why you have other humans verify your work. Preferably those with
>some knowledge of the issues which will affect your code.
That's called "common sense." No argument there. But
having someone review your work doesn't make it "secure"; it
makes it a little bit closer to "I think it works."