I just had a rather unfortunate incident where one of my list admins
sent through something like 300 individual subscribe/unsubscribe
messages as well as 4 or 5 postings to a 20000 user mailing list, all at
once.
Suffice it to say, my mail server didn't appreciate it at all :-)
The basic problem, as I see it, is that Majordomo has no concept of
system resources. I mean, it assumes everyone has enough CPU, memory
and I/O capacity to handle any job, no matter how large.
1. a separate copy of majordomo is started for each incoming message
2. at least 1 sendmail is forked to respond to each incoming message
3. outgoing mail (to a list) is sent either as 1 message containing all
recipients, or as M messages with N recipients.
If 1 message is sent, the related sendmail process ends up being
enormous.  If M messages are sent, the sendmail processes stay a
respectable size but all the bulk mailers I've seen fork a lot of
sendmails to get the job done.
If you have more than 1 low-traffic/high-user (or high-traffic/low-user)
list, either approach can take a serious toll on your system.
I'm sure any of us can sit here all day and come up with fault upon
fault in Majordomo, but to be fair it is ideal for what it was designed
for -- small lists.  The reality today is that more people are running
exponentially larger lists, and this is where a package like
Majordomo is very appealing because commercial packages all charge on a
per-user basis.
So, the question is: can anything be done to minimise the stress
Majordomo causes on a system?  I have been writing this
message for the last 2 hours and I still haven't come up with any ideas
I really like. The best one so far works something like this:
There is a majordomo daemon running at all times.
Majordomo has an input and an output queue.
When a new message comes in, it is written to the input queue.
Majordomo picks the new message out of the queue and processes it.
The output message is written to the output queue.
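To make the idea concrete, here is a rough sketch of that resident
majordomo daemon (Python for illustration only; the queue-directory
layout and the process() hook are my assumptions, not anything in
Majordomo itself):

```python
import os
import time

def drain_queue(in_dir, out_dir, process):
    """Handle every spooled message once; return how many were handled."""
    handled = 0
    for name in sorted(os.listdir(in_dir)):
        src = os.path.join(in_dir, name)
        with open(src) as f:
            reply = process(f.read())       # the real Majordomo work
        with open(os.path.join(out_dir, name), "w") as f:
            f.write(reply)                  # write to the output queue
        os.remove(src)                      # done; clear the input queue
        handled += 1
    return handled

def run_daemon(in_dir, out_dir, process):
    while True:                             # the single resident process
        if drain_queue(in_dir, out_dir, process) == 0:
            time.sleep(1)                   # queue empty; wait for mail
```

However many messages pile up, only this one process ever does the
heavy lifting.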
There is a delivery agent running at all times.
When a new outgoing message is written, the delivery agent will:
1. open an SMTP session to the mail server
2. open the message
3. send the message
4. check to see if there is another message waiting in the queue
if so, go to 2.
5. close the SMTP session
(basically like bulk_mailer, but with a little more smarts to do things
like reuse SMTP sessions, etc.)
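The delivery agent's loop might look something like this (the spool
format and the deliver_queue/parse_spool names are made up for
illustration; a real agent would open one smtplib.SMTP session to the
mail server beforehand -- step 1 -- and call smtp.quit() afterwards --
step 5):

```python
import os

def parse_spool(text):
    """Assumed spool format: sender line, recipient line, blank, body."""
    header, body = text.split("\n\n", 1)
    sender, rcpt_line = header.splitlines()[:2]
    return sender, rcpt_line.split(), body

def deliver_queue(queue_dir, smtp):
    """Send every spooled message over one already-open SMTP session."""
    sent = 0
    while True:
        names = sorted(os.listdir(queue_dir))  # step 4: anything waiting?
        if not names:
            break
        for name in names:
            path = os.path.join(queue_dir, name)
            with open(path) as f:              # step 2: open the message
                sender, rcpts, body = f.read(), None, None
                sender, rcpts, body = parse_spool(f.seek(0) or f.read())
            smtp.sendmail(sender, rcpts, body) # step 3: send it
            os.remove(path)
            sent += 1
    return sent
```

The point is that the SMTP session is opened once and reused for the
whole queue, instead of forking a sendmail per message.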
In this situation, there is never more than 1 majordomo running nor is
there ever more than 1 sendmail running. Of course, on a heavily loaded
system, that may not be enough; just pump up the number of parallel
majordomos and delivery agents until you get the right mix. Or you
could have 1 majordomo, 1 delivery agent and you allow that delivery
agent to open multiple SMTP sessions to process the same message (i.e.,
split a 20000 user mailing list into 10x2000 user lists), etc.  The
details here are very flexible.
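The splitting itself is trivial; a sketch of carving one big recipient
list into fixed-size batches, one batch per parallel SMTP session
(batch size is whatever keeps each session a respectable size):

```python
def split_recipients(recipients, batch_size):
    """Yield successive batches of at most batch_size recipients."""
    for i in range(0, len(recipients), batch_size):
        yield recipients[i:i + batch_size]

# e.g. a 20000-user list becomes 10 batches of 2000:
# batches = list(split_recipients(rcpts, 2000))
```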
If I go back to my list of sample problems, the above resolves each. In
a simplified view:
1. a lightweight queuing program is invoked for each incoming message
rather than the heavy majordomo. Only 1 majordomo is ever invoked
irrespective of the number of messages received.
2. only 1 sendmail is invoked irrespective of the number of deliveries
that need to be made.
3. only 1 sendmail is invoked irrespective of the list size or the
number of lists.
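For completeness, the lightweight queuer in point 1 could be as small
as this (the time+pid naming scheme and the write-then-rename trick are
my assumptions; the rename keeps the daemon from ever seeing a
half-written file, provided it skips dot-prefixed names):

```python
import os
import sys
import time

def enqueue(text, queue_dir):
    """Spool one message atomically; return the final queue filename."""
    name = "%d.%d" % (time.time(), os.getpid())
    tmp = os.path.join(queue_dir, "." + name)  # temporary dot-name
    with open(tmp, "w") as f:
        f.write(text)
    final = os.path.join(queue_dir, name)
    os.rename(tmp, final)   # atomic within one filesystem
    return final

if __name__ == "__main__":
    # invoked from the MTA alias in place of the full majordomo
    enqueue(sys.stdin.read(), sys.argv[1])
```

That is all the per-message work: spool and exit.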
Anyway, if anyone has any thoughts on the above (besides "go stuff
yourself" :-) I'd like to hear them. I'm sure I'm not the first person
to run into Majordomo's scalability problems so I'm hoping there will
be a few interested parties in here to perhaps get something going.
Evan Champion * Director, Network Operations
mailto:email@example.com * Directeur, Exploitation du reseau
http://www.synapse.net/ * Synapse Internet