On Wed, 16 Aug 1995, Darren Reed wrote:
> In some mail from Marcus J. Ranum, sie said:
> > The problem is that testing a firewall is best done based
> > on a design review, followed by an implementation walkthrough,
> > and then some practical testing to see if the implementation
> > appears to be doing what it's supposed to. This is time consuming,
> > expensive in terms of expertise, and unpredictable.
> Is it acceptable to automate parts of this testing, not by poking at it,
> but using methods similar to those in tiger/cops (kuang) ?
I don't know about firewalls, but for formal certification and
accreditation, one of the important criteria is repeatability - and that
almost mandates automation.
There is also a question of regression - has this fix I have implemented
for problem A introduced a new problem B? You need to have a baseline
against which to measure deltas.
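(A hypothetical sketch of that baseline/delta idea, assuming scan results
are stored as a map of host to open-port set; the data format, hostnames,
and function name are my own illustration, not anything from this thread:)

```python
def delta(baseline, current):
    """Compare a fresh scan against the stored baseline.

    Returns {host: (newly_open, newly_closed)} for every host whose
    exposure changed - i.e. the regressions to investigate.
    """
    report = {}
    for host in set(baseline) | set(current):
        before = baseline.get(host, set())
        after = current.get(host, set())
        opened = after - before   # open now, not in the baseline
        closed = before - after   # in the baseline, closed now
        if opened or closed:
            report[host] = (opened, closed)
    return report

# Did the fix for problem A quietly open a new port (problem B)?
baseline = {"gw.example.com": {25, 53}}
current = {"gw.example.com": {25, 53, 8080}}
print(delta(baseline, current))
```

Run after every change: an empty report means the fix for problem A did
not visibly alter the baseline; anything else is a delta to explain.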
And yes, it is expensive and time-consuming and not as deterministic as we
would like, but (barring mjr's dead chicken) it's the only game in town,
even if you do SEI/CMM Level 5 management.