qemu-devel

Re: [Qemu-devel] [PATCH] fix the memory leak for share hugepage


From: Linhaifeng
Subject: Re: [Qemu-devel] [PATCH] fix the memory leak for share hugepage
Date: Sat, 18 Oct 2014 11:20:13 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; rv:31.0) Gecko/20100101 Thunderbird/31.1.0


On 2014/10/17 21:26, Daniel P. Berrange wrote:
> On Fri, Oct 17, 2014 at 04:57:27PM +0800, Linhaifeng wrote:
>>
>>
>> On 2014/10/17 16:33, Daniel P. Berrange wrote:
>>> On Fri, Oct 17, 2014 at 04:27:17PM +0800, address@hidden wrote:
>>>> From: linhaifeng <address@hidden>
>>>>
>>>> A VM started with shared hugepages should close its hugepage file
>>>> descriptors on exit, because those fds may have been sent to another
>>>> process, e.g. vhost-user. If QEMU does not close them, the other
>>>> process cannot free the hugepages without exiting itself, which is
>>>> ugly, so QEMU should close all shared fds when it exits.
>>>>
>>>> Signed-off-by: linhaifeng <address@hidden>
>>>
>>> Err, all file descriptors are closed automatically when a process
>>> exits. So manually calling close(fd) before exit can't have any
>>> functional effect on a resource leak.
>>>
>>> If QEMU has sent the FD to another process, that process has a
>>> completely separate copy of the FD. Closing the FD in QEMU will
>>> not close the FD in the other process. You need the other process
>>> to exit for the copy to be closed.
>>>
>>> Regards,
>>> Daniel
>>>
>> Hi, Daniel
>>
>> QEMU sends the fd over a unix domain socket. Passing the fd this way
>> installs a copy of it in the other process and increments the file's
>> f_count; if QEMU does not close its own fd, that f_count is never
>> decremented. Then even if the other process closes its copy, the
>> hugepages are not freed unless that process exits.
> 
> The kernel always closes all FDs when a process exits. So if this FD is
> not being correctly closed then it is a kernel bug. There should never
> be any reason for an application to do close(fd) before exiting.
> 
> Regards,
> Daniel
> 
Hi, Daniel

I don't think this is a kernel bug; it may be a problem of usage.
If you open a file you should close it too.

The Linux man page describes how a file's resources are freed:
http://linux.die.net/man/2/close
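The close(2) rule can be seen in a short sketch. This is only an illustration, with two assumptions: a regular temp file stands in for the hugetlbfs file, and os.dup() stands in for the fd copy that fd passing creates in the other process; the refcounting is the same in both cases.

```python
import os
import tempfile

# A regular temp file stands in for the hugepage file (assumption, so
# the sketch runs anywhere). The close(2) rule is the same: the file's
# storage is released only when the last fd referring to it is closed,
# even after unlink().
fd1, path = tempfile.mkstemp()
os.write(fd1, b"hugepage data")

fd2 = os.dup(fd1)   # a second reference, like the copy sent to vhost-user
os.unlink(path)     # no name left; storage is freed only at the last close

os.close(fd1)       # one reference dropped ...
os.lseek(fd2, 0, os.SEEK_SET)
data_after_close = os.read(fd2, 13)   # ... but fd2 still holds it open
print(data_after_close)

os.close(fd2)       # last reference gone; now the kernel frees the file
```

Even after fd1 is closed and the name is unlinked, the data is still reachable through fd2; only closing every reference releases it.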


I'm trying to describe my problem.

For example, two VMs run with hugepages, and the hugepages are used only
by QEMU.

Before running the VMs, the meminfo is:
HugePages_Total:    4096
HugePages_Free:     4096
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Run the two VMs. QEMU handles the hugepages in the following steps:
1. open
2. unlink
3. mmap
4. use the hugepage memory. After this step the meminfo is:
HugePages_Total:    4096
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
5. shut down the VMs with signal 15, without close(fd). After this step the meminfo is:
HugePages_Total:    4096
HugePages_Free:     4096
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Yes, it works well; as you said, the kernel reclaims all the resources.
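Steps 1 to 5 above can be sketched as follows. One assumption: a regular temp file stands in for the hugetlbfs file, so the sketch runs on a machine without hugepages configured; the open/unlink/mmap pattern is the same.

```python
import mmap
import os
import tempfile

SIZE = 4096  # one page stands in for a 2048 kB hugepage (assumption)

fd, path = tempfile.mkstemp()   # step 1: open
os.unlink(path)                 # step 2: unlink -- the file now has no name
os.ftruncate(fd, SIZE)
mem = mmap.mmap(fd, SIZE)       # step 3: mmap
mem[:5] = b"guest"              # step 4: use the memory
wrote = bytes(mem[:5])
print(wrote)

# Step 5: on SIGTERM/exit the kernel unmaps and closes every fd, so the
# pages return to the pool even without an explicit close(fd); the
# explicit cleanup below just makes the sketch tidy.
mem.close()
os.close(fd)
```

Because the file is unlinked and QEMU holds the only reference, everything is reclaimed as soon as that one process goes away.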

For another example, two VMs run with hugepages and share them with vapp
(a vhost-user application).

Before running the VMs, the meminfo is:
HugePages_Total:    4096
HugePages_Free:     4096
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Run the first VM. QEMU handles the hugepages in the following steps:
1. open
2. unlink
3. mmap
4. use the hugepage memory and send the fd to vapp over a unix domain
socket. After this step the meminfo is:
HugePages_Total:    4096
HugePages_Free:     2048
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Run the second VM. After this step the meminfo is:
HugePages_Total:    4096
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Then I want to shut down the first VM and run a third one. After closing
the first VM and closing the fd in vapp, the meminfo is:
HugePages_Total:    4096
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
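The QEMU-to-vapp handoff can be sketched with SCM_RIGHTS fd passing. Two assumptions: a socketpair inside one process stands in for the two processes, and a regular temp file stands in for the hugepage file. The sketch shows why both sides matter: the receiver gets an independent duplicate of the fd, so the unlinked file is released only when every copy is closed.

```python
import array
import os
import socket
import tempfile

# A socketpair stands in for the QEMU<->vapp unix domain socket.
qemu_side, vapp_side = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

fd, path = tempfile.mkstemp()          # stand-in for the hugepage file
os.write(fd, b"guest ram")
os.unlink(path)                        # no name; freed at the last close

# "QEMU" sends its fd as SCM_RIGHTS ancillary data.
qemu_side.sendmsg([b"fd"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                             array.array("i", [fd]))])

# "vapp" receives a *copy* of the fd, with its own entry in the fd table.
msg, ancdata, flags, addr = vapp_side.recvmsg(16, socket.CMSG_SPACE(4))
fds = array.array("i")
fds.frombytes(ancdata[0][2][:4])
vapp_fd = fds[0]

os.close(vapp_fd)                      # vapp closes its copy ...
os.lseek(fd, 0, os.SEEK_SET)
still_open = os.read(fd, 9)            # ... but QEMU's fd keeps it alive
print(still_open)

os.close(fd)                           # only now is the storage freed
qemu_side.close()
vapp_side.close()
```

This mirrors the scenario above: vapp closing its copy is not enough while the QEMU-side fd is still open, and the converse also holds, which is why each process should close the fds it no longer needs.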

So the third VM fails to start, because the first VM has not freed its
hugepages. After applying this patch, the meminfo is:
HugePages_Total:    4096
HugePages_Free:     2048
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
So the third VM can start successfully.

-- 
Regards,
Haifeng



