Matthew Thompson inscribed thusly:
> > >PC hardware doesn't have this sort of support. Remember, it was
> > >designed with the DOS FAT filesystem in mind, which isn't sensitive to
> > >system states the way UNIX filesystems are. So whenever power is lost,
> > >the system loses state and the filesystems get horked.
> >> Ha ha ha ha... Many of the people I know now run Norton or do some
> >>other check-disk on boot-up in the office because of FAT file system
> >>corruption! That was at the very heart of the MS-DOS 6.0 debacle over
> Remember... The DOS 6 debacle was about users not understanding write-behind
> caching properly, and turning their machines off before the system had
> finished flushing data to disk, not surprisingly resulting in file system
> corruption. This was no doubt compounded by years of people telling users it
> was safe to turn their machines off if they were sitting at the DOS prompt.
> The right answer was of course to run smartdrv /c before shutting down to
> flush the cache. Later versions addressed this by 1. disabling write-behind
> caching, and its performance gains, by default, and 2. always flushing the
> cache before returning to the DOS prompt.
Hmmm, perhaps my point there was a bit too subtle. The deal with
smartdrv was merely a causal factor; it wasn't the fatal flaw. Yes, the
lazy writes did "cause" the failures, which in turn corrupted the
DoubleSpaced drives right into oblivion. My point was that the DOS FAT file
system (and by extension the Win95 VFAT file system) is not more stable
than the UNIX file systems - quite the contrary. It was never designed to
detect and correct the kinds of problems which have since been encountered.
If it's not smartdrv, it's some other ill-behaved application (Windows? :-) )
that ends up leaving the file system in a bad state. At least the UNIX file
systems were designed with the idea that - well, maybe something might go
wrong, and the system had better be able to detect it and recover.
Viruses have been a major contributing factor, but all too many times
I've had users come to me with corrupted DOS hard drives. Sometimes Norton
(or another disk editor) works wonders, often not. Sometimes it was a virus;
sometimes it was just failing to detect that "cross-linked chain" which
included a directory, for just a few too many power-offs and power-ons. It
really is amazing how corruption accumulates in those file systems once it
gets started - whatever the source! This is just not a robust file system.
It depends TOO much on ensuring that all of the data got written out
and nothing did anything wrong before power-off. As soon as that's not
guaranteed, it becomes brittle and corruption starts to accumulate!
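That "cross-linked chain" failure mode is easy to illustrate. Below is a deliberately simplified sketch (in Python, with an invented toy FAT layout - real FAT on-disk structures differ) of what a checker like scandisk is looking for: two files whose allocation chains end up claiming the same cluster.

```python
# Toy model of a FAT: fat[n] holds the next cluster in a file's chain,
# or EOC (end of chain). This is NOT the real on-disk format, just an
# illustration of how a checker detects cross-linked chains.
EOC = -1

def find_cross_links(fat, start_clusters):
    """Return the set of clusters claimed by more than one file's chain."""
    owner = {}      # cluster -> name of the first file that claimed it
    crossed = set()
    for name, start in start_clusters.items():
        cluster = start
        while cluster != EOC:
            if cluster in owner:
                if owner[cluster] != name:
                    crossed.add(cluster)   # two chains share this cluster
                break                      # already-walked territory (or a loop)
            owner[cluster] = name
            cluster = fat[cluster]
    return crossed

# FILE_A: 0 -> 1 -> 2 -> EOC; FILE_B: 3 -> 2 -> EOC (cross-linked at 2)
fat = {0: 1, 1: 2, 2: EOC, 3: 2}
print(find_cross_links(fat, {"FILE_A": 0, "FILE_B": 3}))   # {2}
```

Once cluster 2 is shared, writing through either file silently trashes the other - and if the shared cluster holds a directory, everything under it goes along for the ride.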
That's why a number of people I know now run scandisk or chkdsk or
Norton Utilities whenever they boot up their systems. The file system
can't even record whether it was shut down clean or dirty, so now they
"fsck" their DOS and Windows file systems on every reboot! It's fascinating
how fast attitudes change after a few corrupted disks! Oh, the IRONY!
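The mechanism FAT is missing here is trivial. A sketch of the clean/dirty flag idea, as UNIX file systems keep in the superblock (names below are invented; real superblock layouts vary by file system):

```python
# Sketch of a superblock clean/dirty flag: set dirty at mount, cleared
# only after a clean unmount has flushed everything. At boot, the checker
# reads the flag and only does a full fsck when it's still set.
class Superblock:
    def __init__(self):
        self.dirty = False

    def mount(self):
        self.dirty = True    # persisted to disk before any other writes

    def unmount(self):
        self.dirty = False   # cleared only once all data is safely flushed

def needs_fsck(sb):
    """A crash leaves the flag set, so the boot-time check is conditional."""
    return sb.dirty

sb = Superblock()
sb.mount()
# ...power fails here: unmount() never runs, the flag stays set on disk...
print(needs_fsck(sb))   # True
```

FAT has no such flag, so DOS users get the worst of both worlds: no way to skip the check when the shutdown was clean, and no guarantee the check catches everything when it wasn't.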
But - back to the original poster's point - it is not the hardware.
It is most definitely the software. But of course you should always go
for the best, most reliable, most serviceable hardware you can get. It
doesn't help to have that nice, robust, workstation-grade firewall down for
a day or two while a service rep obtains a replacement board. Sometimes
availability of parts and replacements is an issue too.
If you can't afford that BIG FAT service contract which guarantees
on-site service and less-than-24-hour replacement of all components, maybe
a PC IS the better choice. You can stock replacements yourself, and in
many cases have an entire standby PC, for less than the cost of one year's
service contract on the higher-priced box. Redundancy versus reliability -
another trade-off consideration...
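The redundancy-versus-reliability arithmetic is worth a back-of-the-envelope look. The numbers below are invented purely for illustration, and the model is idealized (independent failures, instant and perfect failover - which real standby setups never achieve):

```python
# Availability of n independent units where any single working unit
# suffices: the system is down only when ALL of them are down.
def redundant_availability(a, n):
    return 1 - (1 - a) ** n

one_big_box = 0.999                              # hypothetical 99.9% box
two_cheap_pcs = redundant_availability(0.99, 2)  # two hypothetical 99% PCs
print(f"{two_cheap_pcs:.4f}")                    # 0.9999
```

On paper the pair of cheap boxes wins; in practice, failover time, common-mode failures, and the human swapping the cables eat into that - which is exactly why it's a judgment call and not a formula.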
If you're really, truly worried about downtime and outages, then
even workstation-grade systems don't cut it. Time for fault-tolerant and
high-availability systems. And don't forget those hot standbys. What!
Too expensive, you say? Yes. Of course. It's always a balance of value
versus investment. I don't get these for my firewalls because the value
just doesn't justify the investment to me, my customers, or my management.
But each case is different. For some, PC-grade "Best Buy" is sufficient
(not for me). Others will demand the fault tolerance. That's why there
IS NO one answer for all of this. All you can do is judge.
> What does this mean? Simply, if you're using a system which stores critical
> file system structures in memory, and you care about corruption, you should
> protect the integrity of its power supply. You just don't run mission
> critical Unix, Netware, NT, VMS, MVS, DOS, OS/2, Win95, YourOsHere systems
> without UPS backup. If the systems aren't properly configured to shut
> themselves down in the event of prolonged mains failure, you damn well
> better hope there's a human around who will do it for them... Smart UPSes
> just aren't that expensive.
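The shutdown-on-prolonged-outage logic a UPS monitoring daemon implements boils down to a timer. A minimal sketch of the decision (the sampling interface is a stand-in for whatever your UPS actually reports over serial or SNMP):

```python
# Decide whether to shut down cleanly, given a series of on-battery
# readings taken at a fixed polling interval. True = mains is out.
# The grace period rides out short blips without a needless shutdown.
def should_shut_down(on_battery_samples, poll_interval, grace_seconds):
    outage = 0.0
    for on_battery in on_battery_samples:
        outage = outage + poll_interval if on_battery else 0.0
        if outage >= grace_seconds:
            return True    # outage outlasted the grace period: shut down NOW,
    return False           # ...while there's still battery left to do it cleanly

# 10-minute grace, polling once a minute: a 5-minute blip is ridden out...
print(should_shut_down([True] * 5 + [False], 60, 600))   # False
# ...but a sustained outage triggers a clean shutdown before the battery dies.
print(should_shut_down([True] * 12, 60, 600))            # True
```

The whole point is that the shutdown happens while the battery still has enough charge to flush the caches and unmount cleanly - the exact failure FAT can't survive.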
Absolutely agree. Especially on security/mission critical firewalls
and bastion hosts!
Actually - you should do all of the above. Protect your data.
Make sure it gets written out. Make frequent backups of critical data.
Make sure critical systems HAVE THEIR OWN backup power supply, even when
you have a room- or building-wide UPS. The truly BIG UPSes (motor/generator
types with BIG flywheels) can create a false sense of security. Fact is,
they introduce a single point of failure for mission critical systems - often
a LOT of mission critical systems. Is this wise? Isn't a distributed
system of backups and power supplies safer? I prefer a UPS on my firewalls,
dedicated to that firewall, no matter what other power is provided.
Michael H. Warfield | (770) 985-6132 | mhw @
(The Mad Wizard) | (770) 925-8248 | http://www.wittsend.com/mhw/
NIC whois: MHW9 | An optimist believes we live in the best of all
PGP Key: 0xDF1DD471 | possible worlds. A pessimist is sure of it!