On 29 Sep 1998, Jason L Tibbitts III wrote:
: >>>>> "RSW" == Randall S Winchester <rsw@Glue.umd.edu> writes:
: RSW> I did not mention last week, but there was this interesting
: RSW> *performance* page on http://www.lsoft.com/listserv-perf.html
: RSW> comparing how glaringly slow the original vanilla majordomo is
: RSW> compared to listserv.
: Listserv had damn well better be fast. But it does a lot of things that I
: don't really want to do (i.e. it IS an MTA). One major problem is Perl;
: when you write your system in an interpreted language, it's obvious that
: absolute speed isn't #1 on your feature list.
They say in their comparison that they stubbed out the delivery phase so the
MTA part was not in the equation, but they were not very specific.
: However, there are plenty of good fast MTAs and so we can do very well by
: availing ourselves of them. So we're free to worry about things other than
: final delivery.
For me, the biggest cost is the time spent going through majordomo itself.
: RSW> However I have compared my mj1 server to my mj2 server for a very
: RSW> small list, and the mj2 baseline is quite a bit faster, which is a
: RSW> nice thing.
: This is actually quite surprising to me. Mj2 compile times are bound to be
: much longer, even with all of the autoloading, because we have to pull in
: MIME::Tools and other things (some of which we have no control over,
: because modules we use use them). MIME::Tools is not small, and it's not fast.
: And for small lists, the delivery_rules stuff can't help that much; I
: figured that compile time would dominate.
: RSW> However I am curious how it scales with large lists.
: Probably not terribly well right now; I expect that the text database stuff
: will start to hurt as list sizes break a couple of thousand. One thing I
: can do is either get rid of the high priority class or optimize the way the
: database is stored so it doesn't require a search to extract the classes.
: Right now every post requires two searches through the database, which is
: obviously going to hurt. (I'm thinking of letting the backends take a
: small list of fields that they will 'index' for better access speed. The
: text database could then put that field as part of the filename, so
: extracting all of the 'EACH' members would require just opening
: 'subscribers+EACH' and reading.) But I can optimize later once I've found
: out what the bottlenecks are.
Hmm, I'm not sure what the high-priority class is, but I would just as soon run
them all at the same "Precedence=Bulk" and skip the extra search (unless it is
a per-list flag that defaults to off).
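To make the field-indexing idea concrete, here is a rough sketch (Python for
brevity; mj2 itself is Perl, so treat the names as hypothetical and this as
pseudocode for the approach): one flat file per indexed class, so pulling the
'EACH' members is a single file read instead of a search through the whole
database.

```python
import os

class ClassIndexedDB:
    """Toy subscriber store that 'indexes' on delivery class by keeping
    one flat file per class, e.g. 'subscribers+EACH' and
    'subscribers+DIGEST'.  Extracting all EACH members is then a single
    open-and-read, not a scan of every record."""

    def __init__(self, directory, base="subscribers"):
        self.directory = directory
        self.base = base
        os.makedirs(directory, exist_ok=True)

    def _path(self, cls):
        # The class becomes part of the filename, as suggested above.
        return os.path.join(self.directory, "%s+%s" % (self.base, cls))

    def add(self, address, cls="EACH"):
        with open(self._path(cls), "a") as f:
            f.write(address + "\n")

    def members(self, cls):
        try:
            with open(self._path(cls)) as f:
                return [line.strip() for line in f if line.strip()]
        except FileNotFoundError:
            return []
```

With this layout a post needs no searches at all for the common case: the
delivery pass just reads 'subscribers+EACH' straight through.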
: RSW> One thought I have had was instead of doing the bulk mailer equivalent
: RSW> at deliver time, that it might be better to sort the addresses at
: RSW> subscription time.
: Note that it doesn't sort unless you ask it to. Thus you don't get the
: domain-clustering of bulk_mailer unless you ask for it. I plan to play a
: bit with BerkDB because it can keep the database sorted; perhaps that will
: make a difference. I'm not running any large lists at the moment so it's
: difficult to make any real predictions, but note that the debugging system
: collects performance statistics.
Ahh, that is part of why it is faster. Mj1 calls Bulk_Mailer as another
pass through sendmail and perl.
: Of course, for the text database we can always sort it at intervals. The
: additions that sit at the end won't make much difference at all.
This is interesting. So you could either have TLB split the envelopes up with
an "assume ordered" flag, or just have mj2 do it itself. Then cron could do
the sorting with whatever sorting algorithm, or it could run after N changes.
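The "sort on a schedule" idea is simple enough to sketch (Python; the
reversed-domain key is my assumption about what bulk_mailer-style clustering
looks like, not anyone's actual code):

```python
def domain_sort_key(address):
    """Key that compares domains label-by-label from the right, so all
    *.umd.edu hosts cluster together regardless of hostname."""
    local, _, domain = address.partition("@")
    return (list(reversed(domain.lower().split("."))), local.lower())

def resort(addresses):
    """The periodic pass a cron job (or an 'after N changes' trigger)
    would run: rewrite the list in clustered order so that delivery can
    assume it is already grouped."""
    return sorted(addresses, key=domain_sort_key)
```

Additions appended since the last pass just sit unsorted at the end, as noted
above, and get folded in on the next run.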
: RSW> A database indexed by MX records might make for a swifter sending
: RSW> phase also.
: I played with this a bit when doing TLB and found that it made so little
: difference that it wasn't worth all of the extra DNS traffic. Sorting by
: reverse domain tends to cluster hosts which would generally fall under the
: same MX together anyway.
I guess I really agree. I had bulk_mailer sort down to the n-1 parts of the
domain when the number of parts was greater than two. This helped in a
similar fashion, as things tended to get grouped by domains and not hosts,
which works better in an MX world.
Note where MIT ends up. You can see, though, that it would be better still to
catch wam.umd.edu && po1.wam.umd.edu together. If the sorting is done before
delivery, then it makes sense to spend more time doing it right...
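Here is my reconstruction of that n-1 trick as a sketch (Python, hypothetical
names; not the actual bulk_mailer code): drop the leftmost label when a domain
has more than two parts, sort on what remains, then cut one envelope per
group.

```python
from itertools import groupby

def bulk_key(address):
    """The 'n-1' trick: when a domain has more than two labels, drop the
    leftmost (host) label before sorting, so cs.mit.edu lands next to
    mit.edu.  Labels are compared from the right to cluster domains."""
    domain = address.rpartition("@")[2].lower()
    labels = domain.split(".")
    if len(labels) > 2:
        labels = labels[1:]
    return list(reversed(labels))

def envelopes(addresses):
    """Sort by the clustered key, then split one envelope per group."""
    ordered = sorted(addresses, key=bulk_key)
    return [list(grp) for _, grp in groupby(ordered, key=bulk_key)]
```

Note that wam.umd.edu and po1.wam.umd.edu still land in different envelopes
under this scheme, which is exactly the missed case mentioned above.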
: One final note: if your MTA is sendmail, you're going to spend most of your
: time waiting for it to do DNS lookups during the SMTP transactions. Anyone
: interested in managing multiple outbound SMTP connections in parallel?
I have been doing that with my sendmail since Sendmail-8.9.0.Beta6. I have
done a few things to improve performance, and there is still more I would like
to do:
1) I do parallel deliveries. (Still want slow startup.)
# maximum number of parallel queue jobs at one time
2) I added the Bulk_Mailer sort algorithm to the sort order.
# shall we sort the queue by hostname first?
3) I have multiple queue directories, where each dir is a day's worth of delay,
slowed down by (MinQueueAge*(days_old+1)).
# multiple queue directories
Mail queue: /var/spool/mqueue is empty
Mail queue: /var/spool/mqueue/0 is empty
Mail queue: /var/spool/mqueue/1 is empty
Mail queue: /var/spool/mqueue/2 is empty
Mail queue: /var/spool/mqueue/3 is empty
Mail queue: /var/spool/mqueue/4 is empty
Mail queue: /var/spool/mqueue/5 is empty
4) I limit the number of MX records per address. (aol is rude)
# maximum number of MX records per host
5) I limit the number of A records per MX. (aol is still rude)
# maximum number of A records per MX
6) The client does not wait after a QUIT.
7) I do not wait quite as long to connect.
Not much of this will work with a stock sendmail. I still have to send my
patches back for consideration...
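On the parallel-connection question above, the scheduling idea itself is easy
to sketch (Python here; the function names are mine and the actual sending is
injected as a callback, so this shows only the fan-out, not a real mailer):

```python
from concurrent.futures import ThreadPoolExecutor

def deliver_parallel(envelopes, send, max_conns=4):
    """Run one outbound SMTP transaction per envelope, several at a
    time, so one slow host's DNS lookups or SMTP chatter do not
    serialize the whole run.  In a real mailer 'send' would wrap an
    SMTP client (e.g. smtplib.SMTP); here it is passed in so the
    scheduling can be shown without a network."""
    with ThreadPoolExecutor(max_workers=max_conns) as pool:
        # map() preserves envelope order in the results even though
        # the transactions themselves overlap in time.
        return list(pool.map(send, envelopes))
```

The same shape works whether the envelopes come from the n-1 grouping above
or straight from the subscriber list.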
As an aside, from the security side:
I have also configured my sendmail not to need to run setuid. It is a bit
more involved, but the daemon (sendmail server) runs a different cf file
(/etc/sendmail.sf) than what gets run by clients in user land. The client
sendmail knows to talk SMTP to the localhost daemon which has been taught to
listen for these. Anything run from root (i.e. cron) runs as
"sendmail.sendmail". I am still having fun with this, but it would have
stopped several past holes. This way I do not need to change any unix
binaries that want to exec /usr/lib/sendmail. There are separate client and
daemon configurations.