
Re: [Nmh-workers] nmh Speed Measured by ltrace(1).

From: Ralph Corderoy
Subject: Re: [Nmh-workers] nmh Speed Measured by ltrace(1).
Date: Sat, 29 Apr 2017 19:16:19 +0100

Hi Howard,

> Silly question. I don't see any mention of buffer cache effects being
> worked around.

No, good question...

> Are you running a scan, tossing the results, then rerunning the scan?

In effect, yes.  I get the ltrace command running just as I want, LC_ALL
set, scan(1) with the right options, e.g. -width, and so that's primed
RAM with the executables, libraries, and the folder's contents.  du(1)
says the folder is about 44 MiB, BTW; it has no sub-folders.  So it's
not large, but it's big enough to be typical of my folders, actually one
of the smaller main ones, and that tickles readdir(3) enough to be the
main hog.
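For concreteness, the prime-then-measure procedure above might look like
this sketch.  The folder name `+foo' and the -width value are
placeholders, and `true' stands in for scan so the sketch runs even
without nmh installed:

```shell
#!/bin/sh
# Sketch of the two-pass procedure: one throwaway run to prime the page
# cache with the executable, libraries, and folder contents, then a
# measured run under ltrace's -c summary mode.
# On a machine with nmh, replace `true' with: scan +foo -width 80
cmd=true

LC_ALL=C $cmd >/dev/null            # priming run; results tossed
primed=yes

if command -v ltrace >/dev/null 2>&1; then
    # -c prints a per-function call-count summary on stderr.
    LC_ALL=C ltrace -c $cmd >/dev/null 2>summary.txt
    cat summary.txt
else
    echo "ltrace not available; install it to see the call summary"
fi
```

Because the summary counts calls rather than seconds, it's the part of
the measurement that a warm or cold cache shouldn't move.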

Also, ltrace is counting library calls so cache effects shouldn't affect
that statistic.  (Barring some zany alarm(3) triggering due to slow disk
access.  :-)  The time would be affected.

The `\time -v' output I gave for scanning the whole folder included

    \time -v scan +foo >/dev/null           -636-    -1309-
    File system inputs                      0        0
    File system outputs                     0        0

and I take that to show it didn't need to hit spinning rust.  If I run
it on a rarely visited folder, it shows 73,424 inputs and takes 33 s the
first time, then 0 inputs and 0.6 s the second time, immediately
afterwards.

Given we have a test suite, I was wondering if it would also be worth
having some benchmark commands and a standard corpus of mail to run them
on.  Rather than clog the repository with that corpus, it could be
consistently generated on the fly.
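Generating such a corpus on the fly could be a few lines of shell.
This is only a hypothetical sketch, not an nmh convention: the directory
name, message count, and header values are all invented, and the fixed
Date keeps runs comparable.

```shell
#!/bin/sh
# Hypothetical sketch: generate a throwaway corpus of minimal RFC
# 822-style messages, one message per numbered file, laid out like an
# nmh folder.  Directory, count, and header values are invented.
corpus=${CORPUS_DIR:-./bench-corpus}
count=${CORPUS_COUNT:-500}

mkdir -p "$corpus"
i=1
while [ "$i" -le "$count" ]; do
    {
        printf 'From: sender%d@example.invalid\n' "$i"
        printf 'To: bench@example.invalid\n'
        printf 'Subject: benchmark message %d\n' "$i"
        printf 'Date: Sat, 29 Apr 2017 12:00:00 +0100\n'
        printf '\n'
        printf 'Deterministic body line for message %d.\n' "$i"
    } > "$corpus/$i"
    i=$((i + 1))
done
```

A benchmark could then point scan at that directory and get the same
folder contents on every run, on every machine.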

Cheers, Ralph.
