At 09:29 AM 8/29/94 +1000, KIDSTOJ @
>I have heard many concerns expressed about the security of http in
>general and the CERN and NCSA daemons in particular. Would anyone
>like to comment?
>I'd be interested to hear from anyone who is handling http traffic through a
>firewall. (We are using a double screened subnet with an application relay
Well, while I don't know of any specific daemon problems, the fact that neither
has a chroot mode makes me think that security was a secondary issue for
the developers. You can run them chrooted by firing them up from inetd with
a wrapper, but this means the configuration file has to live inside the chroot
environment. You can also modify the daemon to chroot itself (it's a simple
change for the CERN daemon; I haven't tried it for NCSA), which lets you
keep the config file outside of the environment. Both daemons are far too large
for me to feel "warm and fuzzy" about their security.
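For what it's worth, the inetd wrapper is only a few lines of C. Here is a
rough sketch; the jail path, daemon path, and the uid/gid it drops to are
all made up for illustration:

    /* Hypothetical wrapper: inetd starts this as root; it locks the
     * daemon into a jail and sheds privileges before exec'ing it.
     * Remember the config file must be present inside the jail.
     */
    #include <stdio.h>
    #include <unistd.h>

    #define JAIL   "/usr/local/www-jail"   /* chroot directory       */
    #define DAEMON "/bin/httpd"            /* path *inside* the jail */

    int main(void)
    {
        if (chroot(JAIL) != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        if (setgid(65534) != 0 || setuid(65534) != 0) {  /* "nobody" */
            perror("setuid");
            return 1;
        }
        execl(DAEMON, "httpd", (char *)0);
        perror("execl");   /* reached only if the exec fails */
        return 1;
    }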
We use a configuration similar to yours, with clients run on a bastion host
outside our main firewall (and inside another), using xforward to relay
the displays to internal machines. We run a proxy server on the same host to
provide caching, and we don't trust either the clients or the server.
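If it helps, the caching-proxy side of that setup is only a handful of
directives in the CERN daemon's configuration file. From memory it looks
roughly like the following (paths and sizes are examples; check the CERN
documentation for the exact spellings):

    # Illustrative CERN httpd proxy/cache setup (not our real config).
    # Relay HTTP and FTP requests on behalf of the clients:
    Pass  http:*
    Pass  ftp:*
    # Cache what comes back:
    Caching    On
    CacheRoot  /usr/local/www-cache
    CacheSize  50 M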
Relaying HTTP through a firewall (e.g., with SOCKS) would expose you to any
misbehavior which someone could convince a client to perform. That would
probably be limited to actions the user of the client was authorized to do
(but how many UNIX systems have all of the patches installed that keep users
from becoming root?). Convincing the client to misbehave could be as simple
as the "Telnet URL" or "ghostscript" problems of last year (both fixed in
current software): ordinary features providing functions that an outsider
could exploit in an unintended manner. An alternative attack could take the
form of "social engineering" (just convince the user to drop a modified
.mailcap, for UNIX, or a "neat INIT", for Macintosh, onto their client
system).
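To make the .mailcap case concrete: the file is nothing more than a mapping
from document types to viewer commands, so one doctored line is all an
attacker needs. The first entry below is legitimate; the second (paths
illustrative) feeds the fetched "document" straight to a shell:

    # Legitimate entry: hand PostScript documents to a previewer.
    application/postscript; ghostview %s

    # Trojaned replacement: anything claiming to be PostScript is
    # now executed as a shell script instead of being displayed.
    application/postscript; /bin/sh %s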
As I see it, the risks of HTTP stem less from actual problems in the server
or client software (although the Telnet URL problem was a client problem)
than from the open framework for expansion inherent in any system that maps
document types to viewer applications, which exposes you to any problems in
the viewers (like ghostscript). This is not just a WWW problem; Gopher would
potentially have the same sorts of problems in an intelligent client which
deals with external viewers based on document type (such as xgopher).
The only way a firewall could protect you against those problems would be a
relay which could selectively pass or deny document transfers based on type.
Even then, how do you distinguish valid PostScript from malicious PostScript?
Or do you just explain to your users that PostScript isn't supported because
someone may not have updated their client yet? I don't see that as a
reasonable course of action in my environment, which is why we don't allow
HTTP through our firewall.
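Just to be clear about what such a relay can and can't do: the pass/deny
decision itself is trivial to write (a sketch follows, with a made-up deny
list), but it tests only the type the server claims the document has, not
what is actually inside it:

    /* Sketch of a type-based filter for an HTTP relay.  The deny list
     * is illustrative.  MIME types are case-insensitive, so a real
     * filter would fold case first; and since this trusts the
     * Content-Type header, it cannot tell good PostScript from a trap.
     */
    #include <string.h>

    static const char *denied[] = { "application/postscript", 0 };

    int pass_document(const char *content_type)
    {
        int i;
        for (i = 0; denied[i] != 0; i++)
            if (strncmp(content_type, denied[i], strlen(denied[i])) == 0)
                return 0;   /* deny: type is on the list */
        return 1;           /* pass everything else */
    }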
Ken Shores, Sr. Network Analyst
The Charles Stark Draper Laboratory, Inc.
555 Technology Square, Mail Stop 33
Cambridge, MA 02139-3563
(617) 258-2529