From: John D. Shortridge
Subject: Request for information on stack usage
Date: Fri, 26 Mar 2004 10:59:56 +1100
Dear gprolog users,
I have a question which I hope someone may be able to help with. I am using gprolog 1.2.16 under Linux to decode meteorological messages, and a wonderful tool it is. However, in scaling up from test volumes of data to real-life volumes I have run into a problem with stack sizes. For example, my program will process 4000 records OK, but if I duplicate those 4000 records and try to process the resulting 8000 I get a fatal error due to local stack overflow. If I then increase LOCALSZ and try again, I get a global stack overflow instead; if I also increase GLOBALSZ I can process the 8000 records OK, but when I duplicate them again and try to process 16000 it's back to local stack overflow. Clearly this is a problem which should be solved properly rather than worked around by enlarging the stacks indefinitely.
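For what it's worth, the "adjusting" I mention above is just the documented LOCALSZ/GLOBALSZ environment variables (sizes in KB), set before the engine is started. A minimal sketch of my driver's startup, written with the Pl_-prefixed names used in recent manuals (1.2.x spells some of these without the prefix, e.g. Start_Prolog), and with sizes that are only the values I happen to be experimenting with:

    #include <stdlib.h>
    #include <gprolog.h>

    int main(int argc, char *argv[])
    {
        /* Stack sizes (in KB) are read from the environment when the
         * engine starts, so they can be set here or in the shell.
         * These particular figures are arbitrary examples. */
        setenv("LOCALSZ",  "32768", 1);   /* local (control) stack */
        setenv("GLOBALSZ", "65536", 1);   /* global stack          */

        Pl_Start_Prolog(argc, argv);
        /* ... read records and decode each one (see below) ... */
        Pl_Stop_Prolog();
        return 0;
    }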
The program is written in C and Prolog, with the C side simply reading records from a flat file and passing each individual message to Prolog for decoding (roughly as in the sketch below). In general terms I would have thought that each invocation of Prolog would start with a clean slate, but obviously that isn't the case and some resource is being chewed up behind the scenes.
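In case it helps, the per-record call on the C side is essentially the following. This is a minimal sketch: decode_message/1 stands in for my real entry predicate, the record is passed as a code list, and the names follow the manual's Pl_Query_* foreign-interface example (older releases spell some of the term functions without the Pl_ prefix):

    #include <stdio.h>
    #include <gprolog.h>

    /* Sketch of one decode call; decode_message/1 is a placeholder
     * for the real entry predicate. */
    static void decode_one(const char *record)
    {
        PlTerm arg[1];
        int functor = Pl_Find_Atom("decode_message");
        int res;

        Pl_Query_Begin(PL_TRUE);              /* recoverable query */
        arg[0] = Pl_Mk_Codes(record);         /* record as code list */
        res = Pl_Query_Call(functor, 1, arg); /* first solution only */
        if (res == PL_EXCEPTION)
            fprintf(stderr, "decode raised an exception\n");

        /* My understanding of the manual is that PL_RECOVER should
         * undo the query's bindings and release the stack space it
         * used, so the next record starts from a clean slate. */
        Pl_Query_End(PL_RECOVER);
    }

In particular, am I right that PL_RECOVER (rather than PL_CUT or PL_KEEP_FOR_PROLOG) is what is needed to get the stack space back between messages?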
In the archive, http://mail.gnu.org/archive/html/bug-prolog/2001-11/msg00005.html seems to describe my problem more or less exactly, but unfortunately following "Thread next" just leads to a joke about a blonde :-(.
I'd be grateful for any general guidance anyone can offer on what I'm doing wrong.
Computing Support Unit,
National Climate Centre,
Australian Bureau of Meteorology,
GPO Box 1289K,
Melbourne, Vic 3001