> On Mon, 20 May 1996 20:21:39 +0100 Security Responsible wrote:
> >I'm evaluating two firewalls and have been asked to do performance
> >tests. I've been looking around in the archive, but haven't seen much
> >on the topic. I went to DATACOM's web site, which offers some
> >interesting info, but I have to perform the tests in house.
> >Does anyone have a "scientific" method or tool to do these tests?
> This field of study is still in its infancy: lots of PhD theses
> have to be written.
> Here is a bibliography that may be helpful:
> Ranum, Marcus. "Firewalls Performance Project",
> Ranum, Marcus. "Firewall Performance Measurement Techniques:
> A Scientific Approach",
> Bradner, S. & McQuaid, J. "Benchmarking Methodology for Network
> Interconnect Devices", Request For Comment # 1944,
> Newman, David. "Can Firewalls Take the Heat?", Data Communications,
> also see the "Firewalls-performance @ com" mailing list
> archives for a summary of a study I conducted a few months ago.
> If anybody has some more references, please share with all of us.
> Denis Valois
> Internetworking Technologies, Security Team (NCEPWXS)
> SITA (Societe Internationale de Telecommunications Aeronautiques)
> HERAKLION - 1041 route des Dolines
> 06560 Valbonne
> tel: (+33) 22.214.171.124
> fax: (+33) 126.96.36.199
> net: Denis .
A lot depends on what you really mean by testing firewalls.
If the firewall is treated as an isolated item of equipment (which
obviously it isn't) then normal performance tests for throughput etc. can
be undertaken. It's also possible to carry out tests with home-brew test
kits or test suites obtained from external sources.
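A home-brew throughput test can be as simple as timing a bulk TCP
transfer between two hosts with the firewall in the path. The sketch
below runs client and server over loopback just so it is self-contained;
the host, port, and transfer sizes are arbitrary assumptions, not part
of any standard methodology.

```python
# Minimal throughput probe: push a fixed payload through a TCP socket
# and report the achieved rate. Over loopback here; in a real test the
# firewall sits between the client and the server host.
import socket
import threading
import time

PAYLOAD = b"x" * 65536          # 64 KiB per send (assumed chunk size)
TOTAL_BYTES = 16 * 1024 * 1024  # 16 MiB test transfer (assumed size)

def sink(server_sock, counter):
    # Accept one connection and count every byte received.
    conn, _ = server_sock.accept()
    received = 0
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        received += len(chunk)
    counter.append(received)
    conn.close()

def measure_throughput(host="127.0.0.1"):
    server = socket.socket()
    server.bind((host, 0))      # ephemeral port
    server.listen(1)
    port = server.getsockname()[1]
    counter = []
    t = threading.Thread(target=sink, args=(server, counter))
    t.start()

    client = socket.create_connection((host, port))
    start = time.monotonic()
    sent = 0
    while sent < TOTAL_BYTES:
        client.sendall(PAYLOAD)
        sent += len(PAYLOAD)
    client.close()              # EOF lets the sink finish counting
    t.join()
    elapsed = time.monotonic() - start
    server.close()
    mbits_per_s = counter[0] * 8 / 1e6 / elapsed
    return mbits_per_s, counter[0]

if __name__ == "__main__":
    rate, received = measure_throughput()
    print(f"{received} bytes, {rate:.1f} Mbit/s")
```

For meaningful numbers you would repeat the run at several packet and
payload sizes and report the range, much as RFC 1944 prescribes for
interconnect devices.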
If your objective is to manage risk, there are three areas of activity.
Products and systems can be built to trusted-systems criteria. These are
published documents intended to lead into independent evaluation and
certification. However, you could use the criteria as a 'design to meet'
exercise if you are happy to do that, want current technology, and want to
avoid the very heavy costs of, for example, an ITSEC evaluation.
You might want to use Mil-Std 2167A as a build methodology. If you are
employing some existing commercial product, you will have to use a
modified Mil-Std 2167A. The logic of using this particular methodology is
that the Orange Book draws heavily on it, and both ITSEC and the Common
Criteria were developed up from the Orange Book.
You might well be advised to employ a metaCASE tool which includes a
structured method. That will allow all sorts of testing during development
and afterwards for maintenance. Which structured method you employ depends
on personal taste and maybe on industry standards. SSADM is pretty good
but if you are in aerospace you may have to use HOOD or local labour may
only be familiar with some other methodology.
You might want to use a formal method like Z or VDM, and you will have to
if you aim to achieve a high assurance level like A1 or E6.
If you are taking a custom engineered path, you really should use a
project control methodology and that might be something like PRINCE but
again there will be local flavours of project management methodologies and
tools which may be most appropriate.
OTOH you could employ the industry standard methodology of pencil and
cigarette pack, but make sure you file the packs for future use. If you
are in a smoke free zone maybe you will have to use paper napkins which
are a little harder to handle and the pencil can smudge.
That just relates to product and information systems.
To really get benefit, you do need to carry out a risk survey and produce
a risk policy. You could use statistical or fault-tree methodologies to do
this, or even a combination of both. You may find that a methodology like
CRAMM in an electronic application may help you get there faster. There
are many risk analysis tools available, some of them just a few $ to run
on a PC, although they don't tend to be that user friendly, so you may
need to hire someone who knows how to drive them, and that consultant will
have his own ideas about which tools are best (or maybe which ones he is
most familiar with).
So if you have done a risk analysis and produced a risk policy, you have a
specification which you can use for procurement and/or development. If you
employ a criteria-and-development-methodology suite, you can ensure that
the developing solution can be tested against the risk policy.
Then you reach the implementation phase and should carry out an
accreditation. That involves making sure that what got delivered meets
your specification and that the resulting system works as you need and
expect - like it handles the traffic volumes, provides X% uptime, meets
all the security and risk targets and that risk has not been introduced
during implementation through human error or whatever.
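The accreditation step is essentially a checklist comparison of measured
figures against the targets in the risk policy. A minimal sketch, where
the target names, thresholds, and measurements are all assumptions made
up for the example:

```python
# Sketch of an accreditation check: compare measured figures against
# the targets recorded in the risk policy. All numbers are invented.

def uptime_percent(total_hours, downtime_hours):
    return 100.0 * (total_hours - downtime_hours) / total_hours

def accredit(measured, targets):
    """Return the list of targets the delivered system fails to meet."""
    failures = []
    for name, required in targets.items():
        if measured.get(name, 0.0) < required:
            failures.append(name)
    return failures

targets = {
    "uptime_percent": 99.5,    # availability target from the policy
    "throughput_mbps": 10.0,   # traffic-volume target
}
measured = {
    "uptime_percent": uptime_percent(720, 2),  # 2 h down in a 30-day month
    "throughput_mbps": 12.3,
}
print(accredit(measured, targets))  # an empty list means the system passes
```

Anything the check flags goes back to the supplier before sign-off; a
clean run is the evidence that what got delivered meets the
specification.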
Once everything is running you need to keep testing your risk policy to
make sure it still adequately addresses your risk management needs. You
have to enforce the policy, and you also need to modify your systems to
meet any changing needs. In security, that is not only testing the
countermeasures to identified risks, but also making sure that your nice
new systems are not being subverted by the people who use them or operate
them.
OTOH if that's all too much to handle, you can do what many do: just wave
a hand over the system with a few basic, and maybe inappropriate, crude
tests of parts of the system, respond to the latest fashions and hyped
fears, and pray twice daily.