On Sat, 25 Oct 1997, Billy Verreynne wrote:
> years now. Sure there's been problems but most of them were resolved. Safe
> and secure? Well, if you have proper security policies and software in
> place and properly trained staff then why not?
What's proper software if the OS isn't safe?
> Aw come on! Since when has the actual implementation of a protocol stack
> played a huge part in response times? Response times have more to do with
Having seen many Unix stacks go down in flames on big Web sites, and
having seen stack code changes by vendors raise their thresholds for
that by several hundred hits per second, I'd say pretty often. Moving
from serially getting the next available local port number to a hashed
table increased the throughput on a couple of stacks by over 100%.
> the physical network (bridges, routers, etc), buggy network drivers,
> network service software and so on. And you expect sub-second response from
> an OLTP system running across a WAN with 300+ users! - I really doubt that.
300 users is an awfully small network. I've had subnets with almost
half as many users on them.
> A single bad SQL statement from a dumb user can trash db performance. Or
> some wise guy doing an FTP across the WAN overloading the bandwidth!
If an FTP session kills your WAN, I'd suggest you buy some consulting in
network design and capacity planning.
> > I would argue that NT still has much more flak to go as fortune 1000
> > companies start trying to take it out of pilot and into production for
> > certain 'mission critical' applications.
> The flak NT has been receiving in many cases is IMHO just because some
> Unix lovers dislike Bill Gates (who doesn't?) and hate the idea of another
> operating system addressing the same server market. Agreed, NT is by far
> not as mature as UNIX, but to simply disregard it as buggy and u/s contradicts
> _many_ companies that are using NT as the standard departmental server
> platform. And as I mentioned, NT is used to run mission critical systems
> and _has_ proved to be robust and stable enough.
Most companies have simply lowered the bar of what 'robust and stable
enough' means to them. Also, most of those NT servers are replacing
Netware servers, which weren't exactly the best for mission-critical and
robust either. Robust for a departmental server is *much* different than
robust for a firewall handling traffic for *all* the departments as
well. Scalable for 300 users is trivial compared to scalable for 30,000,
or over 300,000 in the case of high-volume Web servers. Think you could
get anyone to run a full-feed news *peer* on NT at the moment? Even more
than databases, which traditionally raised the computing bar, the Internet
has given new meaning to scalability, especially for interactive use.
> Agreed. But AFAIK only Microsoft's marketing engine is spouting the crap
> that NT is the only o/s to fill the need. Personally, I'd rather be running
> database engines on Unix than on NT because of hardware scalability, but
> that does not eliminate NT as a good alternative.
This is 'firewalls', not 'databases', and while NT may be ok for your
database environment, we've got some where it wouldn't even come close to
fulfilling the requirements. Any chance of getting an NT system to give
sub-second access to a table with 1.8 billion rows? Care to add a fully
clustered environment to that? Oh, that's right, you can't do shared
memory segments... I can play the database game pretty well, and would
be happy to continue it off-line from the firewalls list.
> Hehehe. Why not? Unix is not that high and mighty! :-) SVR4 has only
But compared to NT it's on the top of Everest. Anti-Unix posts are as
bad as Anti-NT posts where there is no technical substance. I'm quite
capable of picking holes in about 15 OSes. Perhaps we can topic drift
back to firewalls, or at least the qualities of an OS which give it good
or bad firewalling traits?
Paul D. Robertson "My statements in this message are personal opinions
net which may have no basis whatsoever in fact."