> The basic problem, as I see it, is that Majordomo has no concept of
> system resources. I mean, it assumes everyone has enough CPU, memory
> and I/O capacity to handle any job, no matter how large.
This isn't a problem with Majordomo specifically. Lots of unix
programs make the same assumption. And how would you restrict it,
anyway?
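One answer to "how would you restrict it" is per-process resource
limits, the same mechanism as the shell's ulimit. A minimal sketch in
Python (a hypothetical wrapper, not anything Majordomo does itself):

```python
import resource
import subprocess

def run_limited(cmd, cpu_seconds=60, mem_bytes=64 * 1024 * 1024):
    """Run cmd with per-process CPU and address-space caps.
    Hypothetical helper: values here are illustrative, not tuned."""
    def set_limits():
        # Runs in the child just before exec; soft == hard means the
        # kernel kills the process once it exceeds the cap.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(cmd, preexec_fn=set_limits)
```

A runaway delivery job then dies at the cap instead of dragging the
whole machine down with it.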
> 1. a separate copy of majordomo is started for each incoming message
> 2. at least 1 sendmail is forked to respond to each incoming message
> 3. outgoing mail (to a list) is sent either as 1 message containing all
> recipients, or as M messages with N recipients.
You suggest queueing later, and that is the only thing that really
solves your problem, since the real issue is the size of each
processing job. The forked sendmail you can't really avoid, because
Majordomo uses `sendmail -t' to send outgoing mail. But that sendmail
should go away once the local deliveries have happened and the message
is queued up. Besides, sendmail will just queue the message anyway if
the load average rises past a threshold (see `O QueueLA=' in the
sendmail.cf).
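For reference, the load-average knobs look like this in a V8
sendmail.cf; the values are examples, not recommendations:

```
# Stop immediate delivery and just queue above load average 8;
# refuse incoming SMTP connections entirely above 12.
O QueueLA=8
O RefuseLA=12
```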
Finally, most modern machines share code segments between processes,
so the extra sendmail and perl instances should not take up that much
additional memory.
Basically the trade-off here is speed versus bandwidth. It sounds like
you are running on a machine with too little bandwidth (internal
bandwidth: CPU, memory, and I/O) to do what you want; you don't care
how fast the e-mail gets out, just that it doesn't take up too many
resources. I suggest you get a bigger machine for your mail host. You
may also have a problem with your whole configuration.
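One way to bound the size of each job is the M-messages-of-N-recipients
scheme from point 3 of the quote above. A sketch in Python for
illustration (Majordomo itself is Perl; `batch` is a hypothetical
helper, not its actual code):

```python
def batch(recipients, n):
    """Split a recipient list into batches of at most n addresses,
    so outgoing mail goes out as several modest sendmail runs
    instead of one huge one."""
    return [recipients[i:i + n] for i in range(0, len(recipients), n)]

rcpts = ["a@example.org", "b@example.org", "c@example.org",
         "d@example.org", "e@example.org"]
print(batch(rcpts, 2))   # three batches: 2 + 2 + 1 recipients
```

Smaller batches mean each sendmail invocation holds a shorter
recipient list in memory, at the cost of more invocations overall.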
I see so many people who are basically leaf nodes of the net trying to
do all their mail processing locally. There should be someone upstream
of you, better connected, that you should be able to ship your mail to.
Look at it this way: FedEx doesn't try to sort out every package at
the local drop-off. They just take it all in bulk, ship it to one
(or is it up to two now?) site, and sort it all there. The same
should be true of e-mail. Ship it all to your ISP and let them sort
it out.