"Pearce, Danny" thus spake unto me:

> http://www.iss.net - RealSecure/Internet Security Scanner (set of)
> http://www.wheelgroup.com - NetRanger/NetSonar
> http://www.nai.com - CyberCop
> http://www.axent.com - NetRecon
>
> Plus a few others that are not so good
>
> Abirnet SessionWall
> NFR Network Flight Recorder (www.nfr.org)

What are your criteria for saying which of these are and aren't good?
Are you considering only the scope of the vulnerability database? That scope is of somewhat decreased value in the face of the packet-manipulation attacks described in the SNI paper: some versions of the above systems cannot detect attacks that are in their own vulnerability database when an attacker fragments or otherwise manipulates the traffic.
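To make the fragmentation point concrete, here is a minimal sketch (all names and the attack string are illustrative, not taken from any of the products above) of why a monitor that matches signatures per packet misses an attack that the end host still receives intact:

```python
# Hypothetical sketch: a naive per-packet signature matcher misses a
# signature an attacker has split across two fragments, while the
# reassembled stream -- what the victim host actually sees -- matches.

SIGNATURE = b"/cgi-bin/phf"  # classic attack string of the era

def naive_match(packets):
    """Scan each packet payload independently, as a weak monitor might."""
    return any(SIGNATURE in p for p in packets)

def reassembled_match(packets):
    """Scan the reassembled stream, as the destination host sees it."""
    return SIGNATURE in b"".join(packets)

# The attacker splits the request so no single fragment holds the signature.
fragments = [b"GET /cgi-bi", b"n/phf?Qalias=x HTTP/1.0\r\n"]

print(naive_match(fragments))        # the monitor sees nothing
print(reassembled_match(fragments))  # the victim is still hit
```

A monitor that does not reassemble (and resolve overlapping fragments the same way the target's IP stack does) can be blind to everything in its database delivered this way.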
Are you considering how well the products scale in terms of managing them in a large, heterogeneous, distributed environment? Some of these are limited in the number of monitors that can be deployed per management console, the range of physical media types and network protocols supported, and the bandwidth that the monitor can keep up with.
Are they extensible by the end user or does the customer have to rely on the vendor to release new attack signatures? I would hate to have a window of time where a known and understood attack can get by because I am waiting for the next product release. I have yet to see a vendor release updates more frequently than once a month. In some environments that window is too large of an exposure.
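For what end-user extensibility could look like, here is a hypothetical sketch (the file format and names are mine for illustration, not any vendor's) of a monitor that loads signatures from an operator-maintained file, so a known, understood attack can be covered the day it is understood rather than at the next release:

```python
# Hypothetical sketch of user extensibility: signatures live in a
# plain-text file the operator can edit, one "name<TAB>pattern" per
# line, instead of being hard-coded into a vendor binary.

SIG_FILE_CONTENTS = """\
# name<TAB>pattern -- operator-maintained signatures
phf-probe\t/cgi-bin/phf
test-cgi-probe\t/cgi-bin/test-cgi
"""

def load_signatures(text):
    """Parse the signature file, skipping blanks and comments."""
    sigs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, pattern = line.split("\t", 1)
        sigs[name] = pattern.encode()
    return sigs

def scan(payload, sigs):
    """Return the names of all signatures found in the payload."""
    return [name for name, pat in sigs.items() if pat in payload]

sigs = load_signatures(SIG_FILE_CONTENTS)
print(scan(b"GET /cgi-bin/phf?Qalias=x HTTP/1.0", sigs))  # ['phf-probe']
```

The point is the closed/open distinction: with a closed signature set, the exposure window is the vendor's release cycle; with an editable one, it is the operator's reaction time.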
The worst thing we can do as security professionals is to declare a product good or bad without giving the context in which that judgement is made.