However, despite the big difference in compile times, the run times of Perl's test suite aren't dramatically affected. The worst performer, Perl on Solaris compiled with Sun's cc with optimisation, is 6% slower than the best performer, Perl on FreeBSD compiled with gcc with optimisation. That test involved a great deal of IO and process creation, and I thought that might account for part of the difference. So I've been using a Perl-based application, SpamAssassin, to test whether or not there are big differences between the run times of the various Perl interpreters.
SpamAssassin
SpamAssassin is probably the best known open source anti-spam system. It's a Perl-based engine for running hundreds or thousands of rules over e-mail messages. Each rule has an associated score (which may be positive or negative), and the scores of all the rules that match are summed to generate a final score for the message. If the message scores over a certain (user-defined) threshold it is deemed to be spam, and can be dealt with appropriately (rejected, dropped, quarantined, and so on).
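The scoring model is simple enough to sketch. Here's a toy illustration of it (the rule names and scores below are invented for illustration, and the real engine is Perl, not awk):

```shell
# Toy sketch of SpamAssassin's scoring model: sum the scores of the rules
# that matched a message, then compare the total against the threshold.
# Rule names and scores are invented, not real SpamAssassin rules.
awk 'BEGIN {
    threshold = 5.0
    score["EXAMPLE_HTML_ONLY"]   = 1.1   # hypothetical matched rules
    score["EXAMPLE_FORGED_FROM"] = 2.5
    score["EXAMPLE_BAYES_HIGH"]  = 3.5
    total = 0
    for (rule in score) total += score[rule]
    printf "score=%.1f verdict=%s\n", total,
        ((total >= threshold) ? "spam" : "ham")
}'
# prints: score=7.1 verdict=spam
```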
SpamAssassin's rules are (broadly) of two kinds: rules that execute custom code, and rules that perform regular expression matches on the text of the message.
SpamAssassin's a good example of an Internet application that needs lots of CPU grunt -- running all those rules is computationally expensive, so the more CPU that can be thrown at it, and the better the compiled code is, the better SpamAssassin will perform.
Methodology
I collected 178 messages from my Inbox, and saved them as individual files, one per message. The mean file size was 11,719 bytes, the maximum was 410,014 bytes, and the standard deviation was 47,745.262 bytes.
I downloaded the code for SpamAssassin 3.1.5. SpamAssassin depends on a few other Perl modules: Digest::SHA1 2.11, HTML::Tagset 3.10, and HTML::Parser 3.55. I downloaded and saved these modules as well.
I wrote sa-speed.pl. This simple program just reports on the time taken to carry out various activities with SpamAssassin.
Each test was then carried out as follows.
First, Perl 5.8.8 was built and installed into a subdirectory of my home directory. This was achieved by running:
sh Configure -Dprefix='/home/nik/perl' [other options]
make
make install
make distclean
The [other options] consisted of -Dcc=gcc for the gcc builds, and either -Doptimize='-O2' or -Doptimize='-fast'
to specify the optimisation level appropriate for each compiler. Note that this was the same code built yesterday.

Then SpamAssassin and the dependent modules were built. The following commands were carried out in each module's directory:
/home/nik/perl/bin/perl Makefile.PL
make
make test
make install
make distclean
I had to make a small post-install change to the SpamAssassin config file. By default, SpamAssassin tries to use a Bayesian classifier, and will automatically add messages that score very low to a whitelist. Both of these features can change how SpamAssassin behaves, and mean that the results from different runs may not be comparable. Accordingly, to disable these features I added the following lines to /home/nik/perl/share/spamassassin/10_misc.cf:

use_bayes 0
use_auto_whitelist 0
The following command was then run 20 times:
/home/nik/perl/bin/perl sa-speed.pl msgs/*
and the output redirected to a file.
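The whole run can be driven by a small loop. The following is a reconstruction, not the script actually used; the real command is shown in a comment, with a placeholder stand-in so the sketch runs anywhere:

```shell
# Reconstructed benchmark driver: run the scan 20 times, appending each
# run's comma-separated timings to a per-build results file (name assumed).
OUT=freebsd-sa-gcc-O2.scan
: > "$OUT"                # start with an empty results file
i=1
while [ "$i" -le 20 ]; do
    # Real command: /home/nik/perl/bin/perl sa-speed.pl msgs/* >> "$OUT"
    echo "0.0,0.0,0.0,0.0" >> "$OUT"   # placeholder so the sketch is runnable
    i=$((i + 1))
done
wc -l < "$OUT"            # one line per run: 20
```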
Results
The output consists of four comma-separated columns, each expressing a time in seconds.
Column | Meaning |
---|---|
1 | The time taken to use the SpamAssassin module. |
2 | The time taken for the SpamAssassin constructor method to complete. |
3 | The time taken for SpamAssassin to 'compile' the rules into an internal form. Normally SpamAssassin does this on demand, but that would have created an artificial increase in the time taken for the first message to be processed, so it is split into a separate step. |
4 | The time taken for SpamAssassin to scan all the messages. |
For the purpose of these results I'm only going to consider the fourth column. Columns 1 through 3 account for only a small fraction of a typical SpamAssassin run, and any differences there will be lost in the noise.
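Pulling that fourth column out of the CSV output is a one-liner; something like the following (shown here on a fabricated line of timings) would prepare the data for a statistics tool such as ministat(1), which is what the plots and tables below appear to be:

```shell
# Extract the fourth (scan time) field from one line of sa-speed.pl-style
# CSV output; the timing values here are fabricated for illustration.
echo "0.52,0.19,1.87,7.36" | cut -d, -f4
# prints: 7.36
```

Run over a whole results file (`cut -d, -f4 freebsd-sa-gcc-O2.scan`), this yields the 20 scan times for one build.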
The test results from running make test over the Perl distribution showed that the ordering of the binaries produced was (in decreasing order of speed):

- FreeBSD, gcc, -O2
- FreeBSD, gcc, -O
- Solaris, gcc, -O2
- Solaris, cc
- Solaris, gcc, -O
- Solaris, cc, -fast
It is perhaps not entirely surprising that the results from this test show the same pattern. Specifically:
x freebsd-sa-gcc-O2.scan
+ freebsd-sa-gcc.scan
* sol8-sa-gcc-O2.scan
% sol8-sa-cc.scan
- sol8-sa-gcc.scan
@ sol8-sa-cc-fast.scan
: = Mean
M = Median
+----------------------------------------------------------+
| xx + * - @ |
| xx + ** %% -- @ @ |
|xxx ++ ** %%% -- @@@@ |
|xxx +++ *** %%%% ---- @@@@ |
|xxx + ++++ ***** %%%%% ---- @@@@ |
|xxx ++ +++++ ******%%%%%% ------- @@@@ @|
||:| |
| |_:M_| |
| |M:| |
| |:_| |
| |:| |
| |_:| |
+----------------------------------------------------------+
Set | N | Min | Max | Median | Mean | Stddev |
---|---|---|---|---|---|---|
x | 20 | 7.34126 | 7.394828 | 7.3663825 | 7.3681312 | 0.01648673 |
+ | 20 | 7.500831 | 7.717314 | 7.6855305 | 7.6599823 | 0.066550837 |
* | 20 | 7.854177 | 7.975812 | 7.8991435 | 7.9065558 | 0.03201506 |
% | 20 | 7.996386 | 8.119153 | 8.065116 | 8.0631314 | 0.032877573 |
- | 20 | 8.275332 | 8.408569 | 8.3421195 | 8.3357303 | 0.032013371 |
@ | 20 | 8.567225 | 8.695052 | 8.612206 | 8.6122193 | 0.030636802 |

Difference relative to x, at 99.5% confidence:

Set | Difference | | Relative | |
---|---|---|---|---|
+ | 0.291851 | +/- 0.0508838 | 3.96099% | +/- 0.690593% |
* | 0.538425 | +/- 0.0267254 | 7.30748% | +/- 0.362717% |
% | 0.695 | +/- 0.0272961 | 9.43252% | +/- 0.370462% |
- | 0.967599 | +/- 0.0267243 | 13.1322% | +/- 0.362701% |
@ | 1.24409 | +/- 0.0258203 | 16.8847% | +/- 0.350432% |
FreeBSD is the clear winner here, taking the top two positions, with a theoretical throughput of 24.18 messages processed per second. The worst performing Perl, compiled with Sun's compiler and the -fast optimisation option, manages a (still respectable) 20.67 messages per second.

What's perhaps most surprising here is that, at least with this workload, Solaris and Sun's commercial compiler perform the worst. Not only are the binaries it generates slower at getting the job done (in this case, 16.8% slower), but it takes over twice as long to generate them (FreeBSD + gcc -O2 took 95 seconds to compile Perl, Solaris + cc -fast took 194 seconds).
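The relative slowdowns quoted above follow directly from the means in the results table; for the worst case:

```shell
# Percent slowdown of the slowest build (@, Solaris cc -fast) relative to
# the fastest (x, FreeBSD gcc -O2), from the means in the results table.
awk 'BEGIN {
    fast = 7.3681312    # mean scan time, FreeBSD gcc -O2
    slow = 8.6122193    # mean scan time, Solaris cc -fast
    printf "%.4f%%\n", (slow - fast) / fast * 100
}'
# prints: 16.8847%, matching the table
```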
Note: There is an optimisation opportunity that I have not reviewed yet, and given that this is day 59, I'm unlikely to get the chance to review it before the server has to be returned. Specifically, this is a multi-CPU machine, and the tests only exercised one message at a time. It is conceivable that if the tests were re-written to be either multi-process or multi-threaded, Solaris, by dint of (perhaps) having better support for scheduling and load balancing multiple tasks across multiple CPUs, might perform better than FreeBSD on the same hardware.