Re: [Chicken-users] two minor tweaks to runtime.c


From: Alaric Snell-Pym
Subject: Re: [Chicken-users] two minor tweaks to runtime.c
Date: Thu, 29 Sep 2011 13:02:24 +0100
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.18) Gecko/20110617 Thunderbird/3.1.11

On 09/29/2011 12:38 PM, Jörg F. Wittenberger wrote:

> I don't have benchmarks, for a reason: they would cost me too much
> time to do right.  Personally I don't believe too much in benchmarks
> anyway.  I believe in fast execution and source code review.

Ah, but how can you measure fast execution without a benchmark?

> How should the community ever be able to improve over the current state
> of affairs, if each suggestion is upfront required to come with a
> benchmark, which is then probably first taken apart to show how flawed
> it is?

If the benchmark is flawed, it should be fixed. I am getting the
impression you have encountered some terrible benchmarks!

> Given how small the difference to the code is: wouldn't it be reasonable
> to just give it a try?

Yes. But trying out some code involves reviewing it, then testing it -
both for correctness and, in this case, for a performance improvement;
and (the evil case...) for not worsening performance elsewhere. Which
needs a test suite and some benchmarks!

> Or let me take the threading problem I solved ages ago.  I did NOT want
> to get into that business.  All I wanted was to have my prog run on
> chicken as it did on rscheme.  Benchmarks said chicken was faster at
> that time.  What a lie a benchmark can be!  It was crawling slow.
> Tracked that down to the timeout queue.  Fixed the complexity issue.
> Problem solved.  Hm.  So how would I devise a benchmark case for that one?

If the supposed performance improvement can't be benchmarked, then it's
pointless, as nobody will actually benefit from it. Any case where
somebody can benefit from a performance improvement can be turned into a
benchmark that consists of running the code that is sped up, and timing it.
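
For instance (purely as a sketch, not something I've run, with the
thread count and sleep times plucked out of the air): the timeout-queue
case could be exercised by spawning a pile of srfi-18 threads that each
sleep on a short, staggered timeout, and timing how long the scheduler
takes to get through them, before and after the change:

  ;; rough benchmark sketch for the scheduler's timeout queue
  ;; (CHICKEN 4 style; thread count and sleep times are arbitrary)
  (use srfi-1 srfi-18)

  (define (bench-timeout-queue n)
    (let ((start   (current-milliseconds))
          (threads (list-tabulate
                    n
                    (lambda (i)
                      (make-thread
                       (lambda ()
                         ;; staggered sub-second timeouts, so the timeout
                         ;; queue holds many distinct deadlines at once
                         (thread-sleep! (* 0.001 (add1 (modulo i 100))))))))))
      (for-each thread-start! threads)
      (for-each thread-join! threads)
      (print "elapsed ms: " (- (current-milliseconds) start))))

  (bench-timeout-queue 10000)

Run that against the old and new runtime and you have your numbers.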

Benchmarks are like unit tests; they are snippets of code that perform
some operation but, rather than testing correct responses, their
emphasis is on testing resource usage. We could work on a system by
iteratively hacking it, then measuring performance by hand, but in doing
so, we will only measure the kinds of performance we personally care
about, and may well do things that reduce performance in other areas of
the system. Decent benchmarks can be put into the test suite, so future
performance tinkerers can see the consequences of their changes for
previous uses.

And just like unit tests, performance benchmarks should be chosen
carefully for what they test. Unit tests are often easier to write, as
they have clearly-defined (sometimes in specifications, sometimes in
common sense) goals. Performance benchmarks are trickier. A system that
aggressively caches everything it reads might perform very well on read
latency and throughput, but terribly on memory consumption and latency
of noticing changes to source data. So the best benchmarks are derived
directly from applications, and include representative mixes of
operations to test overall performance as well as low-level
per-operation benchmarks!
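
To make that concrete (again just a sketch, nothing from the actual
CHICKEN test suite): the simplest form is a helper that times a thunk
and prints a labelled figure, so it can sit next to the ordinary
correctness tests and successive runs can be compared:

  ;; time a thunk, print a labelled wall-clock figure, return its result
  (define (benchmark label thunk)
    (let* ((start  (current-milliseconds))
           (result (thunk))
           (stop   (current-milliseconds)))
      (print label ": " (- stop start) " ms")
      result))

  ;; e.g. alongside a unit test of the same code:
  (benchmark "naive fib 25"
             (lambda ()
               (let fib ((n 25))
                 (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))))

(CHICKEN's built-in (time ...) form does much the same thing
interactively, if you just want a one-off figure.)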

ABS

--
Alaric Snell-Pym
http://www.snell-pym.org.uk/alaric/