> i have a question concerning scalability of firewalls,
> specifically as an intranet firewall between LAN's, WAN's
> at high speeds.
define high speed. is this 1.544mbit, 10mbit, 45mbit, 100mbit, 600mbit,
1 gbit? more? full duplex for media which support this?
short take: if we are talking T1's and 10mbit ethernet as 'high' speed,
then i wouldn't worry about whether or not the OS can handle it on any
US$1500+ computer purchased after 1-dec-1997 at US pricing.
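to put some rough numbers behind that: a quick back-of-envelope sketch (python, figures are the usual ethernet framing numbers; the T1 line is just divided by the same frame size for a ballpark) of the worst-case packets per second a firewall has to filter at those speeds:

```python
# worst-case pps at line rate, assuming minimum-size ethernet frames.
# overhead = preamble (8 bytes) + inter-frame gap (12 bytes).
def max_pps(link_bps, frame_bytes=64, overhead_bytes=20):
    return link_bps / ((frame_bytes + overhead_bytes) * 8)

print(round(max_pps(1_544_000)))   # T1: ~2298 pps
print(round(max_pps(10_000_000)))  # 10mbit ethernet: ~14881 pps
```

a few thousand to ~15k packet decisions per second is nothing for any post-1997 pentium-class box, which is the point.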
> I know most of u out there will respond that solaris is the
> preferred OS for this configuration.
It depends on our mood and how long it has been since we were on hold for
a patch from sun to fix a production problem for a system which we could
have fixed if given source code. note: same argument holds for all non-source OSes.
> But I would like to hear from people who use NT or other OS
> for heavy-duty firewall's, and what they use as a platform.
> The main issue is, can NT scale up to Unix, given the right
> hardware ?
long take. actually....
define the right hardware.
I'll bet $1000 that an NT machine with a PCI card which had a set of
custom ASICs and a few FPGAs which implemented a firewall in hardware and
used the PCI bus/NT OS purely for configuration/management purposes would
be able to keep up with whatever you could hook up to the card.
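what such a card would burn into silicon is, in software form, just first-match rule lookup. a toy sketch (all rules and names here are illustrative, not any real product's config):

```python
# toy model of the rule matching a firewall ASIC would do in hardware:
# first matching rule wins, default deny. prefixes/ports are made up.
RULES = [
    # (src_prefix, dst_port or None for any, action)
    ("10.1.", 80,   "allow"),
    ("10.",   None, "deny"),
]

def filter_packet(src_ip, dst_port):
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and (port is None or port == dst_port):
            return action
    return "deny"  # default deny

print(filter_packet("10.1.2.3", 80))     # allow
print(filter_packet("10.9.9.9", 80))     # deny (second rule)
print(filter_packet("192.168.1.1", 80))  # deny (default)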
Given this, perhaps we should redefine 'right hardware' to be what you can
go down to compUSA/microcenter/fry's/etc and purchase. call it a
high end pentium/ppro/p2.
Now we have to define scaling. IBM, sun, and m$ have all thrown it around like
an unwanted mother-in-law, but nobody has gotten off their arse to define
what it *really* is. My working definition is that a scalable environment
is one where the application is not machine specific, and has the relevant
data/location abstractions to allow 1+ machines to serve the application.
This would *REQUIRE* some 'middleware' agent which can properly broker
client requests so that the system can allow dynamic increase/decrease of
capacity as more hardware is brought into service (or taken down for
service/failures). a (IMHO) proper environment allows the services
provided to be sufficiently abstract such that ANY system running any
supported OS (some services, e.g. RDBMS may have hw/os constraints) can
assist with providing additional capacity for that service.
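that 'middleware' broker, at its absolute simplest, looks something like this sketch (round-robin dispatch over a pool that can grow and shrink at runtime; class and host names are made up for illustration):

```python
# minimal sketch of the broker described above: machines can be brought
# into or out of service at runtime, and client requests get spread
# across whatever is currently in the pool.
class Broker:
    def __init__(self):
        self.backends = []

    def add(self, host):       # bring new hardware into service
        self.backends.append(host)

    def remove(self, host):    # take a machine down for service/failure
        self.backends.remove(host)

    def dispatch(self, request):
        if not self.backends:
            raise RuntimeError("no capacity in service")
        host = self.backends.pop(0)   # round-robin rotation
        self.backends.append(host)
        return f"{host} handles {request!r}"

b = Broker()
b.add("unix-1")
b.add("nt-1")   # mixed-OS pool, per the definition above
print(b.dispatch("req-a"))  # unix-1 handles 'req-a'
print(b.dispatch("req-b"))  # nt-1 handles 'req-b'
```

a real broker also needs health checks, session affinity, and failure detection, but the point stands: once the service is abstracted behind the broker, the OS underneath each backend stops mattering.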
With that above definition of scalable in mind, it becomes immediately
obvious that software design as well as the design of the entire computing
environment needs to be considered, right down to, and including, the hardware itself.
Ok, now that we have scalable defined, can NT scale as well as unix? I'd
hazard that it has or can have the relevant abstraction layers to do a
decent job of it. I'm *sure* that the MS website, despite its numerous
security holes and other assorted shortcomings, could be held up as an
example of where a web system can be distributed across god only knows how
many NT machines and, for the most part, actually work.
Personally, i'd be a little skittish about using NT in a production
environment until *I* understood it as well as unix and could accurately
predict how it would behave under what software/user load so that i could
size the server pool appropriately.
I'm not sure, though, that this is what you *meant* by scalable.
There are two questions which come to mind that likely are more
accurate representations of what you meant.
First an assumption. From your letter, i'll assume that to you unix ==
solaris. Common usage, from what i've seen, is that unix represents a
class of operating systems. solaris is a member of that class, and
inherits most of the properties of ATT sysv, and some of the properties of BSD.
quickie other defs: solaris = solaris 2.5.1/2.6. NT=4.0. both patched.
Now what i think that your questions may have been:
1) "Can the hardware which NT supports be sized so as to support as much
capacity as unix hardware can within prespecified response time constraints?"
Again, assuming that unix == solaris, and by hardware, you mean PC
hardware vs. sparc hardware, the answer is "No. NT doesn't scale as well
as unix." Admittedly, solaris on x86 is also capable of handling more
users/whatever per unit cpu/memory than NT on intel iron, making this true
even without the hw assumption.
2) "Can NT, the OS, scale to support some of the larger hardware platforms that unix runs on?"
I'm *sure* that it could, if altered. right now, stock NT server supports
what, 4 cpus? and 8 with the enterprise server release. Solaris, last i
checked, could happily suck up 64 cpus on the galaxy.
 hmm, product idea, i said it first for those patent pissants
 my NT cpu scaling numbers are from memory, not from looking at facts;
check with your friendly neighborhood assimilation agent for the
truth about collectiveOS (NT).
 My guess with JavaCheater's OS is based upon what i've heard their
high end boxes are. Look at their web page to verify both this
and to see what other benchmarks they are cheating on. perhaps
they will merely label their cpu's as 600mhz, when only one part
(the clock) actually runs that fast.
Craig I. Hagan "It's a small world, but I wouldn't want to back it up"
hagan(at)cih.com "True hackers don't die, their ttl expires"
"It takes a village to raise an idiot, but an idiot can raze a village"
Stop the spread of spam, use a sendmail condom!
In Bandwidth we trust