I have a Web server. This web server needs to get some data off
another machine, inside my firewall. My programmer wants to write
his own little home-grown TCP/IP application to communicate between
his program on my web server and his program on my internal machine.
He tests it internally, and it works fine. He puts his application on the
web server, I poke a hole in the router, and the application doesn't work.
Here's what I've found. I am opening up one TCP port for him to use,
but the unix function calls that he's using (the only ones he really knows
of) operate like this: the client calls the server on port X. The server
responds to the client on port X, handing it an available high port number.
The client then calls the server on that high port number to communicate.
This allows multiple client programs to be fired off.
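If I've read the man pages right, plain TCP shouldn't need that second port at all: accept() hands back a new file descriptor for each client, but the server's end of the accepted connection is still the one well-known port the client dialed, so multiple clients can share it. Here's a little loopback sketch I put together to convince myself (the kernel picks the listening port here, and the function name is my own):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Demo: accept() returns a NEW descriptor, but the server side of that
 * connection keeps the SAME well-known port the client connected to.
 * Returns 0 on success and fills in the listening port and the accepted
 * connection's local port (they come out identical). Loopback only, so
 * it's self-contained. */
int demo_one_port(unsigned short *listen_port, unsigned short *conn_port) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    if (lfd < 0) return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                      /* let the kernel pick a port */
    if (bind(lfd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(lfd, 1) < 0)
        return -1;

    socklen_t len = sizeof addr;
    getsockname(lfd, (struct sockaddr *)&addr, &len);
    *listen_port = ntohs(addr.sin_port);

    /* The "client" connects to the well-known port. */
    int cfd = socket(AF_INET, SOCK_STREAM, 0);
    if (cfd < 0) return -1;
    if (connect(cfd, (struct sockaddr *)&addr, sizeof addr) < 0) return -1;

    /* accept() returns a new descriptor for this conversation... */
    int afd = accept(lfd, NULL, NULL);
    if (afd < 0) return -1;

    /* ...but its local (server-side) port is still the well-known port. */
    struct sockaddr_in conn;
    len = sizeof conn;
    getsockname(afd, (struct sockaddr *)&conn, &len);
    *conn_port = ntohs(conn.sin_port);

    close(afd); close(cfd); close(lfd);
    return 0;
}
```

If that's right, only the one hole in the router should be needed, and the high-port handoff his code does is something extra his library is doing, not something TCP requires.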
My reaction is, hey, I can poke a hole at some known port number for
your application, but I can't just allow some random port number through
the router (can I?). His reaction is, hey, I can't just choose the port
that the client is going to use once it's connected.
How are these things normally accomplished? I need to have my Web
server serve up data that is on an internal machine. What data I need
from the internal machine depends upon the search criteria that the Web
user entered on their form. Is there some range of port numbers that
connect() and accept() are going to use that it's safe for me to allow
through my firewall, or better yet, is there a way I can control what port
number is assigned to the client so that I can only poke holes for the
clients I expect, etc. The web server and the internal machine are
both unix boxes.
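For the "control what port the client uses" part: I gather a client can pin its own source port by calling bind() before connect(), so both ends of the connection are known in advance. A sketch of what I mean (the function name and the idea of passing the port in are mine, not his code):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: open a TCP connection whose SOURCE port is fixed, by binding
 * the client socket before connecting. Returns the connected descriptor,
 * or -1 on failure (e.g. the local port is already in use). */
int connect_from_port(const char *server_ip, unsigned short server_port,
                      unsigned short local_port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    /* Pin the client's end to a known port instead of letting the
     * kernel pick an ephemeral one. */
    struct sockaddr_in local;
    memset(&local, 0, sizeof local);
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(local_port);
    if (bind(fd, (struct sockaddr *)&local, sizeof local) < 0) {
        close(fd);
        return -1;
    }

    struct sockaddr_in remote;
    memset(&remote, 0, sizeof remote);
    remote.sin_family = AF_INET;
    remote.sin_port = htons(server_port);
    if (inet_pton(AF_INET, server_ip, &remote.sin_addr) != 1 ||
        connect(fd, (struct sockaddr *)&remote, sizeof remote) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

The catch, as far as I can tell, is that a fixed source port means only one connection at a time from that port, which may be why he wants the kernel to assign them.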
Am I barking up the wrong tree? Is there a better way to be doing this?
I've considered a NULL modem cable between the 2 machines, but
I'm not sure it can handle the load like TCP/IP can.