bug-bash

Re: need explanation ulimit -c for limiting core dumps


From: Chet Ramey
Subject: Re: need explanation ulimit -c for limiting core dumps
Date: Fri, 20 Oct 2006 21:58:17 -0400
User-agent: Thunderbird 1.5.0.7 (Macintosh/20060909)

Matthew Woehlke wrote:
> Chet Ramey wrote:
>> jason.roscoe@gmail.com wrote:
>>> I'm trying to limit the size of coredumps using 'ulimit -c'.  Can
>>> someone please explain why a core file gets generated from the coretest
>>> program (source is below)?
>>>
>>> Thanks for any help or suggestions.
>>>
>>> % ulimit -H -c
>>> 512
>>> % ./coretest 2048
>>> rlim_cur,rlim_max = 524288,524288
>>> malloced 2097152 bytes my pid is 21255
>>> Segmentation fault (core dumped)
>>> % ls -l core
>>> -rw-------  1 jacr swdvt 2265088 2006-10-19 14:24 core
>>
>> Are you sure that's not an old core file?  My Linux testing indicates
>> that the coredump bit is set in the exit status, but no core file is
>> actually created:
>>
>> $ ulimit -c 512
>> $ ./xcore 2048
>> rlim_cur,rlim_max = 524288,524288
>> malloced 2097152 bytes my pid is 7661
>> Segmentation fault (core dumped)
>> $ ls -ls core
>> /bin/ls: core: No such file or directory
> 
> You sure your Linux makes 'core' and not 'core.<pid>', right? You might
> want to do 'ls -ls core*' instead...
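
The coretest source itself is not reproduced in this excerpt.  For context,
here is a minimal stand-in consistent with the transcript above; the name,
argument handling, and the memset are guesses, not the poster's actual
program:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    struct rlimit rl;
    size_t kb = (argc > 1) ? (size_t)strtoul(argv[1], NULL, 10) : 1024;
    size_t nbytes = kb * 1024;
    char *p;

    /* Report the RLIMIT_CORE values inherited from the shell (ulimit -c). */
    if (getrlimit(RLIMIT_CORE, &rl) == 0)
        printf("rlim_cur,rlim_max = %lu,%lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

    /* Allocate and touch enough memory that a full core image would
       exceed the limit. */
    p = malloc(nbytes);
    if (p == NULL) {
        perror("malloc");
        return 1;
    }
    memset(p, 0xff, nbytes);
    printf("malloced %lu bytes my pid is %ld\n",
           (unsigned long)nbytes, (long)getpid());
    fflush(stdout);     /* stdio buffers are not flushed on SIGSEGV */

    /* Dereference a null pointer to raise SIGSEGV and trigger a core dump. */
    *(volatile int *)0 = 0;
    return 0;
}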

You are correct.  Man, I'm having a bad day.  The sparse core file
has fewer blocks than the limit, though, so it's not truncated.

Chet

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
                       Live Strong.  No day but today.
Chet Ramey, ITS, CWRU    chet@case.edu    http://cnswww.cns.cwru.edu/~chet/
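
Chet's closing point -- that the core file is sparse, so its apparent size
from 'ls -l' overstates what was actually written -- can be checked with
'ls -ls core*' as Matthew suggests, or programmatically.  A minimal sketch
of such a check, assuming the dump is named core and that the shell's core
limit has not changed since the crash:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "core";
    struct stat st;
    struct rlimit rl;

    if (stat(path, &st) != 0) {
        perror(path);
        return 1;
    }

    /* Apparent length vs. blocks actually allocated on disk; on Linux
       st_blocks counts 512-byte units, so a sparse file can report a
       large st_size while occupying far fewer allocated bytes. */
    printf("apparent size: %lld bytes\n", (long long)st.st_size);
    printf("allocated:     %lld bytes (%lld 512-byte blocks)\n",
           (long long)st.st_blocks * 512, (long long)st.st_blocks);

    /* Current RLIMIT_CORE, assuming the same limit is still in effect. */
    if (getrlimit(RLIMIT_CORE, &rl) == 0)
        printf("RLIMIT_CORE:   %lu bytes\n", (unsigned long)rl.rlim_cur);

    return 0;
}

If Chet's observation holds, the allocated figure stays at or below the
524288-byte limit even though the apparent size is 2265088 bytes.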
