On 2 Aug 2011, at 9:38PM, Miguel de Benito Delgado wrote:
First, I would change the default location for the cache to something inside .TeXmacs. I would also suggest we/you think of another naming scheme which avoided creating thousands of entries in the same directory, since many filesystems (all?) suck at managing this.
You are right. In the attached script I moved it to .TeXmacs/system/cache/images (system/cache already exists...), and each directory will now contain only 16 files.
I also changed 'cp' to 'ln -s', so the result is a link to the image in the cache, which is even faster.
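For the record, the sharding-plus-symlink idea could look roughly like the sketch below. This is not the attached script: the function names (`cache_path`, `get_cached`), the use of `md5sum` for the key, and `convert` as the expensive render step are all my assumptions. Taking one hex digit of the hash per directory level is one way to get the 16-way split per directory mentioned above.

```shell
#!/bin/sh
# Hypothetical sketch of the image cache, NOT the actual attached script.
CACHE="$HOME/.TeXmacs/system/cache/images"

# Map a source image to its sharded cache path: the first two hex digits
# of the md5 become nested directory levels, 16 possible entries each.
cache_path() {
    hash=$(md5sum "$1" | cut -c1-32)
    echo "$CACHE/$(echo "$hash" | cut -c1)/$(echo "$hash" | cut -c2)/$hash.png"
}

# Render once, then symlink the result instead of copying it.
get_cached() {
    src="$1"; out="$2"
    cached=$(cache_path "$src")
    mkdir -p "$(dirname "$cached")"
    [ -f "$cached" ] || convert "$src" "$cached"   # expensive render, done once
    ln -sf "$cached" "$out"                        # link, not cp: no data copied
}
```

The symlink means a second request for the same image at the same resolution costs one `ln` call instead of a file copy.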
Yet another suggestion, more in the realm of wouldn't-it-be-great: why not ditch the one-file-per-resolution idea and use some format like JPEG 2000 that allows multiple resolutions in one file (or so I gather from the fact that it's wavelet-based; please correct me if I'm wrong).
I think it should be whatever the frontend supports - I don't know what format images have to be in to be rendered by Qt. Yes, something scalable would be best.
For now, it works nicely for me. Sadly, there isn't really a facility to redraw an image when something goes wrong, so you might have to erase ~/.imagecache.
I do not understand what you mean by "a facility to redraw ... wrong". And actually, a management interface for the cache system would have to be implemented in order for this solution to be used.
I mean if something went wrong in the rendering... shouldn't happen, but could. Then you'd get the same misshapen image every time. Happened to me once because I had a bug.
This whole thing is a horrible hack. But since I work with images that take 20 seconds or more to render (just a simple 50x100 density plot from R, nothing fancy), and a document will have many of those, scrolling through the document was really maddening.
So it's already way better than the original situation, which makes the horrible hack a lovely horrible hack. If you have some time to spend in it and can make it less horrible and more lovely, that would be just great.
By the way, one could also consider embedding the data into the .tm file (using \raw-data?), if it's not too much weight. This would mean duplicate thumbnails for different .tm files that use the same image, though, so it might not be such a good idea; on the other hand, it would also discard stale thumbnails of images that change often (changing the image file would trigger a cache update that replaces the old data directly in the .tm file).
I think that would be best. Then we can also store the original image, instead of converting
png->eps->png. But I know nothing about how efficiently TeXmacs stores data in the .tm file. Maybe the file would need to be some kind of directory, or a compressed directory.
Another possibility could be creating cache directories for each .tm file or, better, project: one item which has been in my personal to-do list for ages is (better) project management (with configurable image/style/script/aux folders and any other niceties), and this would fit well into that.
The rationale against the global cache is that it could get huge, but then, it would just be a matter of automatically checking and cleaning upon startup, so it might just be best. Hmm... yes, I think your idea is better.
I think in today's world, the cache will not get huge, because these png files are tiny. 30k? You'd fit 30,000 of them in a gigabyte. Which means 100 a day for a year. Hmmm... ok, maybe they do get pretty huge. But many are just 4k big. One could easily erase files where the last access time is more than X in the past.
And in theory, one could cache only those where the generation takes more than a certain time.
I could easily add that to the script if anyone thought that's a good idea... But what you suggested is probably better - just prune what hasn't been used.
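If one did want the "only cache slow renders" variant, it would amount to timing the render and skipping the cache copy below a threshold. A sketch under my own naming assumptions (`render_and_maybe_cache` is hypothetical, and one-second timer resolution via `date +%s` is crude but enough here):

```shell
#!/bin/sh
# Hypothetical helper: run a render command, and only copy its output into
# the cache when the render itself took at least $3 seconds.
render_and_maybe_cache() {
    out="$1"; cached="$2"; min_secs="$3"; shift 3
    start=$(date +%s)
    "$@"                                  # the render command, e.g. convert ...
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -ge "$min_secs" ]; then
        mkdir -p "$(dirname "$cached")"
        cp "$out" "$cached"               # slow render: worth keeping
    fi
}
```

A 20-second R density plot would always clear any sensible threshold, while trivially fast conversions would never touch the cache.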