
Re: Problem with NSAutoreleasePool in GNUstep


From: ramana rao
Subject: Re: Problem with NSAutoreleasePool in GNUstep
Date: Mon, 3 Dec 2001 20:31:48 +0530

Richard,

Thanks for the response.

We tried +freeCache, but the memory used by the program still does not go down.

I still do not understand why the memory usage does not come down after
[pool release]. Does the memory usage come down on OpenStep?

This problem is not specific to any particular machine; it occurs on all machines.

thanks

Ramana Rao




----- Original Message -----
From: "Richard Frith-Macdonald" <richard@brainstorm.co.uk>
To: "ramana rao" <nalluri@orillion.com>
Cc: "discuss GNUstep" <discuss-gnustep@gnu.org>; "Vijaya Bhaskar Reddy K"
<bhaskar@orillion.com>; "Ravindra K S" <ravindra@orillion.com>; "Kotesh M V"
<kotesh@orillion.com>; "Krishna Kumar" <krishna@orillion.com>; "jpyoung"
<jpyoung@orillion.com>
Sent: Monday, December 03, 2001 6:37 PM
Subject: Re: Problem with NSAutoreleasePool in GNUstep


>
> On Monday, December 3, 2001, at 12:00 PM, ramana rao wrote:
>
> >
> > JP,
> >
> > It seems there is a problem with GNUstep's NSAutoreleasePool,
> > unless I am very much mistaken.
>
> The perceived problem is really an artifact of the simplistic test used.
> In real systems you:
> a. wouldn't put huge numbers of objects in a single autorelease pool
> b. have lots of swap space, so don't care about the peak memory size
>
> Where there is a genuine problem with this behavior (i.e. your machine is
> very short of swap space) you can cure it by making better use of your
> autorelease pools (creating and destroying pools more frequently) or by
> using the +freeCache method (see the GNUstep NSAutoreleasePool
> documentation).
>
>
> _______________________________________________
> Discuss-gnustep mailing list
> Discuss-gnustep@gnu.org
> http://mail.gnu.org/mailman/listinfo/discuss-gnustep
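
A minimal sketch (not from the thread itself) of the two approaches Richard
describes: the problematic pattern of letting huge numbers of objects pile up
in a single autorelease pool, the per-iteration pool that keeps peak memory
small, and the GNUstep-specific +freeCache call. The loop count and the use
of NSString here are illustrative assumptions only.

#import <Foundation/Foundation.h>

int
main(void)
{
  unsigned              i;
  NSAutoreleasePool     *big;

  /* Problematic pattern: huge numbers of autoreleased objects accumulate
   * in a single pool, so peak memory keeps growing until the one pool is
   * finally released. */
  big = [NSAutoreleasePool new];
  for (i = 0; i < 100000; i++)
    {
      [NSString stringWithFormat: @"object %u", i];    /* autoreleased */
    }
  [big release];

  /* Better pattern: create and destroy a pool each time round the loop,
   * so only the objects from the current iteration are ever pending. */
  for (i = 0; i < 100000; i++)
    {
      NSAutoreleasePool *pool = [NSAutoreleasePool new];

      [NSString stringWithFormat: @"object %u", i];    /* autoreleased */
      [pool release];
    }

  /* GNUstep extension mentioned above: discard the cached pool
   * structures as well (see the NSAutoreleasePool documentation). */
  [NSAutoreleasePool freeCache];

  return 0;
}

Releasing the pool inside the loop bounds how many objects are pending at any
moment, and +freeCache asks GNUstep to give back the internal pool cache it
would otherwise keep around for reuse.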



