From: Rob Mahurin
Subject: Re: Octave suddenly slow
Date: Mon, 17 Nov 2008 18:39:16 -0500
On Nov 17, 2008, at 3:32 PM, Scott A. McDermott wrote:
> Michael Goffioul, kudos sir, you came the closest:
>
> > Sometimes, this kind of problem is triggered by having a large number
> > of m-files in the current directory. But it does not look to be the
> > case for you.
>
> It turns out that a large number of files of /any/ kind in the current
> directory induces this problem. Pretty amazing --- even an operation
> that has absolutely nothing to do with files (like "for x=1:100,x,end")
> sits there for second after second before starting up, if there are a
> large number of files in the current directory. (My accidental "test
> case" had just over 8000 data files.)
I can reproduce this bug using octave 3.0.2 on Mac OSX and Linux.
$ time echo "printf('hi\n')" | octave -q
hi
real    0m4.468s
$ time seq 1e4 | xargs touch    # make ten thousand files
real    0m3.579s
$ time echo "printf('hi\n')" | octave -q
hi
real    0m6.692s
$ time seq 1e5 | xargs touch    # make 100,000 files
real    0m41.446s
$ time echo "printf('hi\n')" | octave -q
hi
real    8m34.034s
Adding 10^4 empty files to a directory makes octave start a couple of seconds slower; adding 10^5 empty files makes it take over eight minutes. I have heard of filesystem bugs that cause this kind of problem, but that doesn't seem to be the case here, since "ls | wc" is fast in the shell.
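That kind of scaling is what you would expect if the file list is grown one entry (or one fixed-size chunk) at a time, since every resize recopies the whole array. A small sketch of the arithmetic (my own illustration, not Octave code):

```cpp
#include <cstdint>

// If an array grows to n entries one slot at a time, each resize copies
// the existing contents, so the total work is 0 + 1 + ... + (n-1)
// = n*(n-1)/2 element copies: quadratic in n.
std::uint64_t copies_linear_growth(std::uint64_t n)
{
    return n * (n - 1) / 2;
}

// With capacity doubling, the array is recopied only at each doubling,
// so the total stays below 2*n element copies: linear in n.
std::uint64_t copies_doubling_growth(std::uint64_t n)
{
    std::uint64_t copies = 0;
    for (std::uint64_t cap = 1; cap < n; cap *= 2)
        copies += cap;  // whole array copied when capacity doubles
    return copies;
}

// copies_linear_growth(10000)    -> 49995000   (~5.0e7)
// copies_linear_growth(100000)   -> 4999950000 (~5.0e9, 100x more)
// copies_doubling_growth(100000) -> 131071     (~1.3e5)
```

Going from 10^4 to 10^5 files multiplies the copying work by 100 under one-at-a-time growth, which is at least the right order of magnitude for "a couple of seconds" turning into "minutes".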
The filesystem bug I remember (or maybe an ls bug? or both?) was an inefficient sort in the code that lists the names of files in a directory. Poking through the code, I don't see anything like that.
Hmmmm ... dir_entry::read() seems to guess that it will live in a small directory. For the case I've outlined above, the attached patch should reduce the number of calls to Array::resize() from ~1000 to ~10. Not tested, though.
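The shape of the fix, as I understand it, looks something like the following standalone sketch (illustrative names only, not the actual Octave internals or the attached patch): grow the destination geometrically so the number of resizes is O(log n) instead of O(n).

```cpp
#include <dirent.h>
#include <string>
#include <vector>

// Read all entry names from a directory, growing the result
// geometrically. std::vector reallocates its storage geometrically on
// emplace_back, so even 100,000 entries trigger only a handful of
// reallocations rather than one resize per entry.
std::vector<std::string> read_dir_names(const char *path)
{
    std::vector<std::string> names;
    names.reserve(64);  // initial guess that the directory is small

    if (DIR *dir = opendir(path)) {
        while (dirent *ent = readdir(dir))
            names.emplace_back(ent->d_name);
        closedir(dir);
    }
    return names;
}
```

The same idea applied inside dir_entry::read() --- doubling the Array's size when it fills instead of bumping it by a fixed chunk --- is what gets the resize count down from ~1000 to ~10 for a 100,000-file directory.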
Rob

--
Rob Mahurin
Department of Physics and Astronomy
University of Tennessee          865 207 2594
Knoxville, TN 37996              address@hidden
patch.txt
Description: Text document