
From: Joris van der Hoeven
Subject: [Texmacs-dev] TeXmacs cache behaviour [ was Re: Introduction to caches]
Date: Sat, 29 May 2004 13:22:30 +0200 (CEST)

> The best reference for getting up to speed quickly is "Elements of
> Cache Programming Style"
> by Chris Sears:
>
> http://www.usenix.org/publications/library/proceedings/als00/2000papers/papers/full_papers/sears/sears_html/

Thanks for this useful reference, which refreshes my knowledge
about hardware. In particular, I deduce that the current memory
management system based on linked lists for small object sizes
should actually be quite efficient.

Indeed, it seems that keeping data in consecutive memory is not that
important for good cache behaviour. What matters more is doing as many
computations as possible on the same data. The cache will handle this
efficiently even if the data is scattered across memory, except when
you are unlucky and the locations of the data differ by large powers
of two (FFT implementors: beware!).

Now coming back to the current memory management system: the linked
list has precisely the effect that, when an object is destroyed and
another object of the same size is allocated just afterwards, the new
object is allocated at the same location as the destroyed one.
Compared to a program like Guile, we indeed observe ten times fewer
cache misses. Assuming that Guile spends no more than half of its
time on cache misses, it follows that TeXmacs-Guile spends no more
than 5% of its time on cache misses.

So I am getting pretty convinced that the TeXmacs memory allocation
scheme is actually quite good for our purpose. Several improvements
can/should still be made though:

  1) Make it more reliable by providing better debugging facilities
     for detecting leaks (or even eliminate them automatically).

  2) Implement the T -> const T& -> T::in optimization,
     at least for the core library routines.

  3) Implement compaction, not so much in order to increase
     speed, but in order to reduce memory usage.




