> Well, take a sample of sites that all have the same logging capability for
> something as benign as "door knob rattling" and pool the results. Even a
> few contributors would identify some interesting trends:
> - Popular domains from which attacks originate
> - Time of day/week variations
> - Sophistication of tools used
> - Trends (wow, a 20% increase in failed su attempts this month!)
I do think the idea is interesting and most of it is technically feasible.
I think that a rolling small-sample survey of this kind is of value in the
context of its sample size.
The first issue is how many sites would be prepared to participate in this
type of exercise and, of those, how many could be easily included for
technical reasons. If 100 major users participated, they would account for
a much larger number of sites and networks, which should produce useful
trend information, at least for similar users.
Beyond that there are potentially some legal issues and the matter of
accuracy. Legal issues could be addressed by not including details of
sites believed to be the base for any attack (other perhaps than grouping
by site type - like academic sites, etc.), or any other possibly
contentious data to which someone could take legal exception.
Many subscribers to this list might find details of attack launch pads of
great interest, but I believe that the better approach is for the person
suffering an attack to contact the site owner at what appears to be the
launch site. In most cases, that site will be keen to take action and may
be completely unaware that they are being used in this way. What might be
very useful information is the % of launch sites which agreed to
co-operate, and the % of sites which dealt effectively with the problem,
but that is not going to come from any automated log report system.
Some of this depends on what you hope to get out of the information.
Someone running a firewall usually wants to know when there is a hole and
to plug it. Blacklisting sites which have hosted an attacker is a
short-term technical solution which may introduce other risks, like
unacceptable availability or even legal action by the blacklisted site.
OTOH, identification that X number of sites were used to launch attacks
this month, that this is a Y% increase over last month or over the annual
average, and that Z% of launch sites refused to co-operate, is data which
identifies an issue requiring community attention - which could mean codes
of practice, new legislation, or many other generic actions. It is also
data with low legal risks.
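The X, Y and Z figures are simple aggregate arithmetic, which is part of why
the legal exposure is low - no individual site is named. A sketch, with all
numbers made up for illustration:

```python
# Aggregate-only monthly summary -- all figures here are hypothetical.
launch_sites_this_month = 42   # X: distinct sites seen launching attacks
launch_sites_last_month = 35
refused_cooperation = 9        # launch sites contacted that declined to act

# Y: percentage increase over last month
y_pct_increase = 100.0 * (launch_sites_this_month
                          - launch_sites_last_month) / launch_sites_last_month

# Z: percentage of launch sites refusing to co-operate
z_pct_refused = 100.0 * refused_cooperation / launch_sites_this_month

print(f"X = {launch_sites_this_month} launch sites, "
      f"Y = {y_pct_increase:.0f}% over last month, "
      f"Z = {z_pct_refused:.1f}% refused to co-operate")
```

Note that the co-operation figure (Z) would have to be reported manually, as
argued above - it cannot come out of an automated log feed.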
Accuracy may be an issue because each respondent site would be reliant on
the efficiency of its monitoring and logging systems.
> A frightening thing happened at one of our web sites the other day. We went
> live with the site by parking it on a web server under www.itsdomain.com and
> made the DNS updates to point to it. Before we had even notified the
> customer that it was ready, Inktomi had been into it and snarfed it - it
> must have been watching the DNS network for updates! It is possible that
> *every* network will be hacked the minute it goes "on-line".
> (Here's a data point. We have a customer that runs several web sites for a
> Fortune 100 company. We log over 150 attempts a day to telnet to the web
> domain. Granted, these are clueless doorknob-rattling attempts, but it
> seems high.)
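Counting attempts like those is straightforward wherever refused connections
reach a log. A minimal Python sketch, assuming tcpd/syslog-style lines - the
log format and addresses here are assumptions for illustration, not the
poster's actual logs:

```python
import re
from collections import Counter

# Hypothetical tcpd/syslog-style refusal lines.
log = """\
Mar  2 14:10:01 web1 in.telnetd[211]: refused connect from 10.0.0.5
Mar  2 14:12:44 web1 in.telnetd[215]: refused connect from 10.0.0.9
Mar  3 02:01:09 web1 in.telnetd[302]: refused connect from 10.0.0.5
"""

# Capture the date prefix of each refused telnet connection
pattern = re.compile(r"^(\w+\s+\d+).*in\.telnetd.*refused connect", re.M)
attempts_per_day = Counter(m.group(1) for m in pattern.finditer(log))
print(attempts_per_day)
```

A daily count like this, pooled across sites, is exactly the kind of
low-risk aggregate the survey idea needs.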
This is interesting. We probably all know of some attacks either from
direct experience or from a known reliable source with direct experience.
By searching the growing number of discussion and user groups, it is
possible to see some evidence of current threats, but that is a
time-consuming process which may not provide the basis for reliable trend
plotting. OTOH, there are many people who have never heard of any threats
except in unsubstantiated % claims - that could well be because they and
their peer group have no ability to monitor for the risks.
There is still reluctance by victims to publish information on the
incidents which have affected them. Some of this reluctance is due to
pressures of time, and an automated reporting system feeding an
intelligence database would remove the problem of making time available to
produce a report.
What seems to be a more common situation is deliberate suppression of
information. There may be several reasons for this but the 3 most common
appear to be:
1. Fear of censure for not preventing the incident,
2. Fear of loss of customer/supplier/stock holder confidence,
3. Fear that other attackers may target the site while the new risks
remain unaddressed.
Last month there was a serious incident affecting a bank. The probability
is that several banks suffered in the same way but have so far been 100%
successful in suppressing the information. It was a complex series of
actions which included some action through a gateway to a public network
and may have involved inside help. A police organization became aware of
the incident but the bank was not keen to co-operate by providing details
and making a formal report of a crime. It is not known if the bank made
any information available privately to other banks, but it appears that
all employees with access to information were told not to discuss the
matter with anyone internally or externally.
The main motivation in keeping quiet seems to be a fear that customers
will lose confidence in the bank and that further similar attacks might be
encouraged before a satisfactory counter can become operational. It might
also be fear of legal actions on the grounds of damages due to the bank
failing to take reasonable precautions.
In the meantime other banks and organizations may be unnecessarily
vulnerable to these risks through lack of information. What we don't know
is how widespread this problem is. It's also entirely possible that this
form of attack has been going on for some time because no prior victims
were prepared to share any information on incidents.
Rolling risk surveys will not provide a direct answer, but they would
provide core information to enable analysis tools to be populated and for
the community to put risk into a better perspective. They might also
encourage other information exchange systems and reduce risks.