Re: [bug #31961] Out-of-control memory usage when run against a large directory
Fri, 24 Dec 2010 18:08:29 -0200
The directory has no subdirectories. I have been unable to count the number
of files in the directory because I can't do the usual find . | wc -l.
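For what it's worth, here is a minimal C sketch of one way to count without
the pipeline: it reads one entry at a time with opendir/readdir and keeps
only a counter, so memory stays flat no matter how many files the directory
holds (the path argument is just a placeholder):

    /* count_entries.c - count directory entries one at a time so that
       memory use stays constant regardless of the directory's size. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : ".";
        DIR *d = opendir(path);
        if (d == NULL) {
            perror("opendir");
            return 1;
        }
        unsigned long count = 0;
        struct dirent *ent;
        while ((ent = readdir(d)) != NULL) {
            /* Skip the "." and ".." entries. */
            if (strcmp(ent->d_name, ".") && strcmp(ent->d_name, ".."))
                count++;
        }
        closedir(d);
        printf("%lu entries\n", count);
        return 0;
    }

Build and run with something like cc -O2 count_entries.c -o count_entries
&& ./count_entries /path/to/dir (the path here is hypothetical).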
I just tried the latest stable release, 4.4.2, downloaded from the official
site and compiled from source, and ran into the same problem.
Will try oldfind.
I know for a fact that the opendir call takes a long time to get a response
from the system: my PHP script only starts outputting anything after a
couple of minutes. I don't know why find would be using up memory while it
waits for the response.
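To check where the time actually goes, a small sketch like the following
(an illustration only; it assumes the stall is in the directory-reading
syscalls rather than in find itself) times opendir separately from the
first readdir, which is what first pulls entries from the kernel:

    /* time_opendir.c - measure opendir() latency versus the first
       readdir(), to see which call is responsible for the long wait. */
    #include <dirent.h>
    #include <stdio.h>
    #include <time.h>

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : ".";
        double t0 = now();
        DIR *d = opendir(path);
        double t1 = now();
        if (d == NULL) {
            perror("opendir");
            return 1;
        }
        struct dirent *ent = readdir(d); /* triggers the first getdents() */
        double t2 = now();
        printf("opendir: %.3f s, first readdir: %.3f s\n",
               t1 - t0, t2 - t1);
        if (ent == NULL)
            printf("directory appears to be empty\n");
        closedir(d);
        return 0;
    }

(On older glibc you may need to link with -lrt for clock_gettime.)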
On Thu, Dec 23, 2010 at 6:46 PM, James Youngman <address@hidden> wrote:
> Follow-up Comment #1, bug #31961 (project findutils):
> The sizes of the files (as opposed to the directories) are of course
> irrelevant; find never opens them, let alone reads them.
> What is the depth of the directory hierarchy under this directory? How
> many entries are there in each? Do you get the same characteristics if you
> use the "oldfind" binary that's also generated when you build find (from
> source)?
> Also, findutils-4.4.0 is quite old now; could you try 4.4.2 (from
> ftp.gnu.org) or 4.5.9 (from alpha.gnu.org)?