> At 02:15 PM 6/25/96 GMT, Ian Johnstone-Bryden wrote:
> <snip of stuff regarding good security data>
> >I would be interested to see any suggestions - on list or privately -
> >as to how this experiment can be developed.
> Could a set of logging rules be developed as a "standard"? It might include:
> -Telnet attempts to Server X
> -Port Scans
> Then sites could volunteer to make the log records available in a big
> database. Some initial conclusions could be made and, most importantly,
> trends could be identified over time.
> Any thoughts?
> Richard Stiennon richards @
> Director, Business Development www.netrex.com/richard
> Netrex, Inc. Voice: 810-352-9643
> 3000 Town Center, Suite 1100 Fax: 810-352-2375
> Southfield, MI 48075
An interesting thought and some interesting challenges.
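For concreteness, the kind of standard record Richard suggests might look
something like the sketch below. All the field names are invented for
illustration; nothing here is an existing standard.

```python
# Hypothetical sketch of a standardized incident record for a shared
# database. Every field name is illustrative, not an agreed format.
import json
from datetime import datetime, timezone

def make_record(site_id, event_type, target_port, source_net):
    """Build one shareable incident record.

    site_id     -- an opaque identifier, so contributors stay anonymous
    event_type  -- e.g. "telnet-attempt" or "port-scan"
    target_port -- service port probed on the contributor's host
    source_net  -- coarse origin (e.g. masked address), not the full source
    """
    return {
        "site": site_id,
        "time": datetime.now(timezone.utc).isoformat(),
        "type": event_type,
        "port": target_port,
        "origin": source_net,
    }

record = make_record("site-0042", "telnet-attempt", 23, "192.168.*.*")
print(json.dumps(record))
```

A record this coarse would still let the database owner count attack types
and ports over time without collecting contributor-identifying detail.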
The owner of the database would collect some large volumes of data, and
there would be a potential for increased risk if the database information
were itself compromised or disclosed.
Right now there seem to be 2 primary challenges to surveying Internet
security incidents.
The first challenge is to encourage a large sample of respondents to
devote a very small amount of time every month to contributing data.
The second challenge is being able to draw representative conclusions from
the responses received.
One major issue is that only people with an existing security system
equipped with monitoring and logging facilities KNOW that they are being
attacked and can reliably identify the form of attack and its origin.
Outside of this relatively small group of users, some will know they have
been attacked because:
1. the attempt was inept,
2. they can see the damage resulting from the attack,
3. they learn of the attack outside of their MIS systems. (that might
be that the attacker is discovered in some other way, possibly by
accident, or the attack results in a series of effects which can
be traced back by common policing/investigation practices)
Beyond those 2 groups, others will suspect or guess that they have been
attacked in some way. The majority of incidents may well affect this much
larger group, and the even larger group who haven't got a clue.
This introduces an analysis challenge.
Do you take the data from a small sample of people who KNOW they have been
attacked and are able to qualify the incident(s), and then list the
guesstimates and guesses to produce a set of statistics?
If you do, the results are likely to be very unrepresentative of the
community as a whole because the majority of users are not even in a
position to guess. That distorted picture becomes even less representative
when the sample is a small % of the total number of users who do have the
capability to identify and qualify incidents.
Any automatic feed of log data is likely to come from a small % of the
total number of users who have good security monitoring and log
capabilities, and that number is in turn a small % of the total number of
users in the community.
Another factor in the equation is risk probabilities.
Some users who have implemented security systems may have done so because
someone convinced them that the Internet is a hot-bed of information crime
and closed the sale through the use of FUD. OTOH many protected users may
well have implemented risk reduction simply because they are potentially
very high risk targets across the board.
That high risk probability may be because the user has control of large
sums of money, highly attractive trade information, or is engaged in
political/ecological activities which are a target for militants.
Therefore protection of information systems is just one group of risk
reduction activities in an enterprise-wide programme. Those risks may also
include geographic factors.
The result could be that most of the users who have no protection actually
have very little need for protection and are not regularly targeted by
attackers.
Therefore it might be that only 10% of all users have risk reduction
measures in operation, while only 12% of the community faces any high
probability of attack. If 90% of the protected users are drawn from that
high-risk minority, any attempt to extrapolate risk identification across
the whole community will be seriously flawed.
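To make that arithmetic concrete, here is a toy calculation. The population
sizes and attack probabilities are invented purely for illustration; only
the 10%/12%/90% proportions come from the paragraph above.

```python
# Toy calculation of the sampling bias described above.
# All absolute numbers and attack probabilities are invented.

community = 100_000                     # total users
protected = int(0.10 * community)       # 10% run security monitoring
high_risk = int(0.12 * community)       # 12% are genuinely high-risk targets

# Suppose 90% of the protected users come from the high-risk group.
protected_high = int(0.90 * protected)  # 9,000
protected_low = protected - protected_high

# Assumed per-year attack probabilities (illustrative only).
p_attack_high = 0.50
p_attack_low = 0.02

# Attack rate observed within the protected (reporting) sample:
sample_rate = (protected_high * p_attack_high +
               protected_low * p_attack_low) / protected

# True attack rate across the whole community:
low_risk = community - high_risk
true_rate = (high_risk * p_attack_high +
             low_risk * p_attack_low) / community

print(f"observed in sample: {sample_rate:.1%}")   # ~45.2%
print(f"true community-wide: {true_rate:.1%}")    # ~7.8%
```

Under these made-up numbers, naively projecting the sample's attack rate
onto the whole community would overstate it by roughly a factor of six.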
I can only see meaningful figures being produced by analyzing large
samples and implementing a series of control groups and peripheral
sampling operations. One group could of course be volunteers
automatically uploading data/extracts from security logs (provided of
course that this could be done in a secure manner without exposing
contributors to new risks).
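On that last caveat, one possible way to avoid exposing contributors is to
strip or obscure identifying fields before upload. The salted-hash scheme
below is purely my own illustration, not a proposal from the thread.

```python
# Sketch: anonymize a log entry before uploading it to a shared database.
# The per-site salted hash keeps tokens stable (so repeat attacks on the
# same host can be correlated) without revealing the real hostname.
import hashlib

SITE_SALT = b"per-site secret"   # kept private by each contributor

def anonymize(entry):
    """Return a copy of entry with the identifying field replaced
    by an opaque, salted token."""
    out = dict(entry)
    digest = hashlib.sha256(SITE_SALT + entry["target_host"].encode())
    out["target_host"] = digest.hexdigest()[:16]
    return out

entry = {"target_host": "mail.example.com",
         "type": "telnet-attempt", "port": 23}
print(anonymize(entry))
```

Without the per-site salt an outsider could hash guessed hostnames and
match them against the tokens, so the salt is what actually protects the
contributor here.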