On 2009-02-15 at 10:05 -0500, Yaron Minsky wrote:
Ari is right: there's nothing inherent about the algorithm that should require an ever-growing use of memory. OCaml itself is very careful about reclaiming unreferenced memory, but that of course does not preclude a memory leak in the code.
So far, I have no real clue as to what is going wrong. I could imagine that the caching at some level is overly aggressive. There are a number of configuration variables that control how much caching there is. Some of these are explicit caching numbers used by the actual DB, and some of it is caching that the prefix-tree datastructure does on its own. For instance, there is a bound (defaulting to 1000) on the number of in-memory nodes of the prefix-tree.
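To make the idea concrete, here is a minimal sketch of such a bound: a cache that evicts an entry whenever it would exceed a fixed node limit. The names and the eviction policy are hypothetical, purely for illustration; the actual SKS PTree code is different.

```ocaml
(* Illustrative sketch only: a bounded in-memory node cache. The
   max_nodes default mirrors the 1000-node bound mentioned above;
   everything else (names, eviction policy) is hypothetical. *)
let max_nodes = 1000

let cache : (string, string) Hashtbl.t = Hashtbl.create max_nodes

(* Before inserting, evict an arbitrary entry if the bound is reached,
   so the cache never holds more than max_nodes nodes. *)
let add_node key node =
  if Hashtbl.length cache >= max_nodes then begin
    match Hashtbl.fold (fun k _ _ -> Some k) cache None with
    | Some victim -> Hashtbl.remove cache victim
    | None -> ()
  end;
  Hashtbl.replace cache key node

let () =
  (* Insert more nodes than the bound allows; memory use stays flat. *)
  for i = 1 to 1500 do
    add_node (string_of_int i) "node"
  done;
  assert (Hashtbl.length cache <= max_nodes);
  print_endline "cache stays bounded"
```

The point is that with a correct bound like this, memory use should plateau rather than grow without limit; if it grows anyway, either the bound isn't being enforced somewhere or the leak is elsewhere.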
The idea that some weird query or a server in an unusual state is exercising some bug that blows up the memory utilization seems possible as well. Has anyone confirmed whether it's the db or the recon process that is blowing up in memory? That would help figure out what's going on. For instance, it's pretty unlikely that a query from a web-crawler would cause the recon process to explode in size.
It's recon, and the problem has stopped since I took keys.nayr.net out of my config; that was the most recent change before things went ballistic.
Ryan, sorry to name you and point fingers publicly while still investigating, but since at least one other person has seen the same failure, warning trumps politeness. :(
-Phil