>>>>> "Paul" == Paul Sangster <sangster @
Paul> All performance tests need to be taken with a grain of salt.
Paul> Its real important that people read the environment that the
Paul> test was performed and determine how similar that is to
Paul> their environment. Doing performance tests on routers,
Unfortunately, that is often beyond the skills of many people reading
the article. It's the "Dilbert Manager SQL Server" problem.
Paul> speeds/congestion, local configurations (size of rulesbase,
Paul> user configuration, number of routes), and most importantly
Paul> IMHO the mix of protocols used during the test.
Some firewalls care about the mix of protocols, others do
not. However, since the mix was never varied, one can't tell which.
Paul> We have found that throughput numbers derived from ftp
Paul> aren't very good indications of how fast other protocols can
Paul> go through a box. Testing protocols like HTTP is an
Again, it depends on the firewall. Lumping a bunch of numbers
together doesn't reveal this.
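To illustrate the point (a toy model, with invented numbers, not any
vendor's measurements): per-connection setup cost is amortized over one
large ftp transfer, but dominates when the same number of bytes is moved
as many small HTTP fetches, so a single bulk-transfer number hides the
transactional case entirely.

```python
# Toy model (all numbers invented) of why bulk-transfer throughput does
# not predict transactional throughput: per-connection setup overhead is
# amortized over one big ftp transfer but dominates many small HTTP
# fetches moving the same total bytes.

def effective_throughput(bytes_per_conn, num_conns,
                         wire_mbps=10.0, setup_ms=5.0):
    """Return achieved Mb/s once per-connection setup cost is included."""
    total_bits = bytes_per_conn * num_conns * 8
    transfer_s = total_bits / (wire_mbps * 1_000_000)
    setup_s = num_conns * setup_ms / 1000.0
    return total_bits / (transfer_s + setup_s) / 1_000_000

# One 100 MB ftp-style transfer vs 10,000 HTTP-style fetches of 10 KB each
ftp = effective_throughput(100_000_000, 1)
http = effective_throughput(10_000, 10_000)
print(round(ftp, 2))   # close to the 10 Mb/s wire speed
print(round(http, 2))  # well below it, despite the same total bytes
```

The wire speed and setup cost are placeholders; the shape of the result,
not the numbers, is the point.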
>> 2. The DataCommunications magazine tests do no test to verify
>> that the machine isn't in fact a router. Had they done this,
>> they would have discovered in their first round in November
>> 1995, that neither Raptor or TIS or ANS could implement their
>> policy. With new versions from these vendors, the story is now
Paul> What "policy" did you have in mind?
Well, in their first round of tests, they requested that the service
network not be able to communicate with the rest of the
network. "able" was not defined, and in particular not tested.
It isn't fair to compare a machine that allows non-SYN packets
to pass through an interface against one that prevents
them; the two implement different security policies.
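A minimal sketch of that difference (illustrative only, not any vendor's
implementation): a router-like filter passes any segment that matches an
allow rule, flags unexamined, while a stateful filter only accepts a
non-SYN segment that belongs to a connection whose SYN it has already
seen.

```python
# Illustrative sketch of the policy difference argued above: unsolicited
# non-SYN packets pass a stateless, router-like filter but are dropped
# by a filter that tracks connection state.

SYN = 0x02
ACK = 0x10

class StatefulFilter:
    """Accept a segment only if it is a bare SYN (new connection) or
    belongs to a connection whose SYN was already seen."""
    def __init__(self):
        self.connections = set()  # (src, dst, sport, dport) tuples

    def accept(self, src, dst, sport, dport, flags):
        key = (src, dst, sport, dport)
        if flags == SYN:                     # new connection attempt
            self.connections.add(key)
            return True
        return key in self.connections       # non-SYN must match state

class StatelessFilter:
    """Router-like behaviour: a segment matching an allow rule passes;
    the flags are never inspected."""
    def accept(self, src, dst, sport, dport, flags):
        return True

# A bare ACK with no prior SYN: passed by the router-like box,
# dropped by the stateful one -- a different security policy.
probe = ("10.0.0.1", "192.168.1.5", 40000, 80, ACK)
print(StatelessFilter().accept(*probe))   # True
print(StatefulFilter().accept(*probe))    # False
```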
Additionally, in the first round, as I mentioned, finger was
required. Some firewalls cannot cope with allowing finger (mostly
this is ancient news, but a remarkable number of new entries on the
market don't support it without SOCKS or something).
>> 4. The DataCommunications magazine tests do not measure
>> performance as a function of complexity of policy. BorderWare
>> has a very simple policy, Firewall-1 and Raptor and BlackHole
>> have much more complicated, much more flexible policies. TIS
>> Gauntlet allows a simple policy with their UI, and a more
Paul> Not sure why the policy UI effects performance. As long as
The complexity of the UI determines how complex the average policy
implemented can be. Gauntlet 3.0, for instance, has some extensions to
the good-old-fwtk (1.x) netperm table that allow it to be less than
linear (in number of lines in the netperm table) for additional
services. The UI uses this. However, if you implement a policy that
the UI can't do for you (by editing the netperm table directly), you can
end up back with a netperm table whose size depends linearly on rule
complexity. (However, each individual line is sometimes exponential in
complexity.)
Paul> Or even better add in user authentication to the HTTP
Paul> stream. This is also a very real world test. During the
Paul> last Data Comm testing they explicitly didn't use user
Paul> authentication. Our product version wouldn't allow public
Well, in my opinion, you therefore failed the test. You couldn't
implement the policy, and should have received a score of zero throughput.
A new test, including user authentication, should have been done.
] It isn't that sun never sets; rather dawn and dusk are united | one quark [
] Michael Richardson, Sandelman Software Works, Ottawa, ON | two quark [
] mcr @
ca http://www.sandelman.ottawa.on.ca/ | red q blue q[
] panic("Just another NetBSD/notebook using, kernel hacking, security guy"); [