At 5:10 PM -0700 7/7/98, John Sechrest wrote:
> Specifically, one list has 130,000 addresses on it.
> another has 10,000 addresses on it.
> All of my current lists are below 1000, so I have no
> experience with large lists.
I run multiple lists > 50,000 users now, plus a few hundred lists
between 10,000 and 50,000 users, all on the same machine.
With majordomo, large lists are pretty straightforward. There are a
couple of places that'll bite you:
1) delivery side. make sure you use bulk_mailer or some other delivery
optimizer. I have very good results with bulk_mailer doing the batching.
2) delivery side. You can overwhelm your system with delivery attempts.
you'll have to figure out how many simultaneous sendmail (or whatever)
connections you can handle, and then set up your system to stay below
that. That is easier said than done, since 99% of sendmail's load
balancing stuff is to keep INCOMING connections from overwhelming
servers, and is of small use outgoing.
2a) Hint one: set up all of your -outgoing aliases through bulk_mailer,
and feed bulk_mailer the "-ODeliveryMode=queueonly" option (I'm assuming
sendmail 8.x here. You *definitely* want sendmail 8.8 or later for
fixes to queue/load interactions). That sends outgoing mail into the
queue but doesn't immediately try to deliver it, so you don't magically
find 300 sendmails running after a large batch.
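A sketch of what that aliasing might look like (the list name, owner
address, and paths here are made up, not from Chuq's setup; check your
bulk_mailer install for the exact invocation):

```
# /etc/aliases fragment (hypothetical list "biglist"; adjust paths).
# Mail to biglist-outgoing is piped into bulk_mailer, which sorts and
# batches the recipient file and resubmits it to sendmail; the
# -ODeliveryMode=queueonly option queues those batches instead of
# attempting immediate delivery.
biglist-outgoing: "|/usr/local/bin/bulk_mailer -ODeliveryMode=queueonly email@example.com /usr/local/majordomo/lists/biglist"
```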
2b) hint two: use a cron job to spawn sendmails. I have a shell script
that goes off every minute. It checks to see how many sendmails are
currently in the process table, and if it's below a given number,
spawns a bunch (for my big machine, I allow 100 sendmails at once, and
under 90 sendmails running, spawn five a minute). Don't try to tune
this from the sendmail command line; you'll never get it clean. You
want the system, when mail starts flowing, to jump into high gear and
then level off once it hits your useful maximum, so it won't thrash.
That'll maximize your throughput.
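A minimal sketch of that per-minute cron job. The thresholds mirror
the "100 max / under 90, spawn five" numbers above; the
process-counting pipeline and sendmail path in the comments are
assumptions about a typical system, not Chuq's actual script:

```shell
#!/bin/sh
# Decide how many extra sendmail queue runners to start, given how
# many sendmail processes are already running.
LOW=90      # below this many running sendmails...
SPAWN=5     # ...start this many more per minute

runners_to_start() {
    # $1 = current number of sendmail processes
    if [ "$1" -lt "$LOW" ]; then
        echo "$SPAWN"
    else
        echo 0
    fi
}

# From cron, every minute, something like (paths are assumptions):
#   count=`ps ax | grep -c '[s]endmail'`
#   n=`runners_to_start "$count"`
#   while [ "$n" -gt 0 ]; do
#       /usr/sbin/sendmail -q &
#       n=`expr "$n" - 1`
#   done
```

Keeping the spawn decision in one place like this makes it easy to
tune the ceiling per machine without touching the rest of the job.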
2c) use a moderately short but noticeable delay before retry. I use "O
MinQueueAge=20m". That keeps all those hundreds of sendmails from
thrashing over trying to redeliver mail, but doesn't hold mail too long.
2d) use multiple mail queues. After mail's sat for a while (on my site,
12 hours), it goes to a second queue that's processed a lot less often
(about once an hour instead of constant). After 3 days, to a third
queue that's processed half a dozen times a day. I use a 7 day bounce.
If it hasn't been delivered in 12 hours, it's not urgent, so you don't
need to waste cycles retrying, but you still want it to go out once the
system returns. After 3-4 days of downtime, you're basically just
hoping the system comes back... This keeps the main queue from growing
too large, minimizing the thrashing of all of those sendmails.
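The aging step above might be scripted roughly like this. The queue
paths, the qf/df file naming, and GNU find's -mmin test are
assumptions about a typical sendmail spool, not the actual script:

```shell
#!/bin/sh
# Move queue entries older than a given age from one queue directory
# to a slower one. Each sendmail queue entry is a qf* control file
# plus a df* data file; both must move together.
age_queue() {
    # $1 = source queue dir, $2 = destination queue dir,
    # $3 = minimum age in minutes (720 = 12 hours)
    find "$1" -name 'qf*' -type f -mmin "+$3" |
    while read qf; do
        id=`basename "$qf" | sed 's/^qf//'`
        mv "$1/qf$id" "$1/df$id" "$2/" 2>/dev/null
    done
}

# From cron (example schedule; a second job would age 3-day-old mail
# into a third queue, each drained at its own, slower rate):
#   age_queue /var/spool/mqueue /var/spool/mqueue2 720
#   /usr/sbin/sendmail -q -OQueueDirectory=/var/spool/mqueue2
```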
2e) tune majordomo so you don't have too many parallel admin requests
running. I have mine cut out at load average 4. You can get lock failures
and other problems if too many of these go off at once, because once
you get a really large list, most of your admin requests (sub/unsub)
are aimed at that list. and...
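One way to implement that cutoff is a wrapper around the majordomo
request handler. The cutoff of 4 is from above; the uptime parsing,
paths, and EX_TEMPFAIL convention are assumptions about a typical
setup:

```shell
#!/bin/sh
# Defer majordomo work when the 1-minute load average is at or above
# a cutoff; deferred requests just sit in the mail queue and retry.
CUTOFF=4

load_too_high() {
    # $1 = integer part of the 1-minute load average
    [ "$1" -ge "$CUTOFF" ]
}

# Wrapper sketch (assumed uptime output format and majordomo path):
#   load=`uptime | sed 's/.*load average[s]*: *\([0-9]*\).*/\1/'`
#   if load_too_high "$load"; then
#       exit 75    # EX_TEMPFAIL: the MTA will retry the request later
#   fi
#   exec /usr/local/majordomo/majordomo "$@"
```

Exiting with a temporary-failure code rather than dropping the
request means nothing is lost; it just shifts the work to a calmer
moment.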
3) incoming mail: since incoming and non-bulk_mailer mail is delivered
immediately, it takes priority over outgoing mail that's merely queued
(in 2a). That's good, in general. But administrative updates start
taking longer, and longer, and longer.... A 50,000-user mailing list has a
subscriber list file about a megabyte long. 130,000 will be around
2.5-3 meg. Majordomo, when it updates, has to sequentially read/write
these files to add or delete addresses, so it slows badly as it scales
to large address lists. It's not uncommon for me to see 50-90 second
transactions on my larger lists (this is why 2e exists -- and set your
shlock timeouts to HUGE values in majordomo.cf. Really huge. But use
load averages to keep parallel access down, to minimize thrashing, so
you don't often need the long timeouts. But you will...).
This is one weakness in majordomo. I've looked at replacing the
subscriber list with a dbm file, and then teaching bulk_mailer and other
routines like which/who to read from it, so that updates are closer to
simultaneous, but haven't done that yet. That's a future.... but since
I expect some lists to be pushing 300,000 by year end, I think I'll
need it. Sequential reading and writing of a flat file just doesn't
scale, and there's only so much you can do with faster hardware.
4) Plan on some kind of bounce automation, or you'll slowly go crazy.
Look into it before you wake up to 1,500 bounces a day, not after....
The only gotcha I think isn't obvious is the issue surrounding the
update time for add/drop on a large subscriber list. The outgoing stuff
is pretty straightforward. And until you live with huge bounce lists
every day, it's easy to think bounce processing can wait...
Chuq Von Rospach (Hockey fan? <http://www.plaidworks.com/hockey/>)
Apple Mail List Gnome (mailto:email@example.com)
Plaidworks Consulting (mailto:firstname.lastname@example.org)
<http://www.plaidworks.com/> + <http://www.lists.apple.com/>