From: Dan Klebanov
Subject: Re: Problem with malloc on very large blocks of memory
Date: Mon, 24 Mar 2003 09:20:27 -0500
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.1) Gecko/20021003
> What else do you expect? You also have an error message in the kernel log
> saying that the system ran out of memory and killed the biggest memory
> hog to stay alive. Just do not do that - disable memory overcommit if
> your working set may be larger than available memory and you cannot
> accept a killed process.
>
> Petr Vandrovec address@hidden

What I expect is that malloc should behave according to spec. In other words, if there's not enough memory left in the system, it should return NULL. Otherwise, what's the point of checking for NULL? The way it works now, malloc hands you the memory, but the program barfs as soon as you try to use it. The little test program I wrote was just an extreme way of proving my point. I only became aware of this issue when our software was mysteriously crashing, and it turned out the cause was a broken swap partition on the offending machine. The program was trying to allocate a 40 MB chunk of memory, and it would die during memset. (Granted, our software was written by monkeys typing at random, and there are all sorts of memory leaks to show for it.)

Isn't it the operating system's job to tell me when I'm out of memory?