Last weekend, Fred Cohen spun the following off from the performance
thread:
> I am interested in devising a simple, but thorough, test for the
> secure W3 server provided on this server. Perhaps a good way to look at
> the issue of testing is to, as a group, consider the issues in testing
> this relatively small and easily understood program. I have often
> found that by starting with simple examples such as this one, we can
> learn about more complex issues of testing big things like firewalls.
Systems tend to collect two interesting types of behavior: those that
reflect specific functional elements of the design (like exercising
all the stuff on Unix "man" pages) and those that reflect higher
level, often system wide objectives (like security requirements).
Neither can be tested exhaustively.
What you _can_ do is apply traditional testing concepts to both sets
of requirements: for example, test all boundaries, key values,
interesting values, and a set of randomly chosen values. Other
standard techniques apply as well, and making the tests repeatable is
obviously a plus.
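A tiny sketch of what that might look like in practice. The checker
below is purely hypothetical (a real harness would drive the W3 server
itself, and the 255-character limit and path rules are invented for
illustration), but it shows boundary values, key values, and seeded
random values in one repeatable run:

```python
import random

# Hypothetical request-path checker standing in for the server under
# test; the rules (absolute path, max 255 chars, no "..") are assumed.
def is_path_safe(path: str) -> bool:
    if not path or len(path) > 255:
        return False
    if ".." in path.split("/"):
        return False
    return path.startswith("/")

# Boundary and key values: empty, minimal, at the limit, past the
# limit, and a classic traversal attempt.
boundary_cases = {
    "": False,
    "/": True,
    "/" + "a" * 254: True,      # exactly 255 chars: allowed
    "/" + "a" * 255: False,     # 256 chars: rejected
    "/../etc/passwd": False,    # traversal attempt
}
for path, expected in boundary_cases.items():
    assert is_path_safe(path) == expected, repr(path)

# Randomly chosen values: a fixed seed makes the run repeatable.
rng = random.Random(42)
alphabet = "abc/._"
for _ in range(100):
    path = "".join(rng.choice(alphabet)
                   for _ in range(rng.randint(0, 20)))
    # Invariant: anything accepted must be absolute and traversal-free.
    if is_path_safe(path):
        assert path.startswith("/") and ".." not in path.split("/")

print("all checks passed")
```

The random loop checks an invariant rather than exact outputs, which
is how random-value testing usually stays useful against a security
requirement.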
To be really thorough, you should also develop a convincing set of
arguments as to why the design is sufficient to fulfill the stated
security requirements. This "convincing argument" process sometimes
uncovers assumptions you may have forgotten during development and
will help you identify further security or design tests that more
thoroughly exercise important pieces of the system.
This is a lot of work, but perhaps not too much for a small piece of
code. Unfortunately, the resulting test suite is usually very specific
to the artifact being tested. But it's about as thorough as you can
get.