Re: [Qemu-devel] Fwd: [Users] oVirt Node (HyperVisor) - Memory Usage


From: Anthony Liguori
Subject: Re: [Qemu-devel] Fwd: [Users] oVirt Node (HyperVisor) - Memory Usage
Date: Fri, 18 Jan 2013 07:49:36 -0600
User-agent: Notmuch/0.13.2+93~ged93d79 (http://notmuchmail.org) Emacs/23.3.1 (x86_64-pc-linux-gnu)

Stefan Hajnoczi <address@hidden> writes:

> On Thu, Jan 17, 2013 at 10:10:48AM +0000, Alex Leonhardt wrote:
>> I don't have the original VM running anymore - but here is another one,
>> with the full command line:
>> 
>> qemu     23663  6.2  0.6 *4131312* 641092 ?      Sl   Jan16 109:49
>> /usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Conroe -enable-kvm -m 1024 -smp
>> 2,sockets=1,cores=2,threads=1 -name VMNAME -uuid
>> 859efb65-9b27-460a-92eb-19be6ca57017 -smbios type=1,manufacturer=Red
>> Hat,product=RHEV
>> Hypervisor,version=6-3.el6.centos.9,serial=EE720BB5-44FD-331A-AEAB-A371127DC672_e4:1f:13:b3:07:78,uuid=859efb65-9b27-460a-92eb-19be6ca57017
>> -nodefconfig -nodefaults -chardev
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/VMNAME.monitor,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>> base=2013-01-16T03:57:52,driftfix=slew -no-shutdown -device
>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
>> if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=
>> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
>> file=/rhev/data-center/eb01e934-6413-4ef6-8736-7e9e56af8ed2/9bd4735c-a02b-403a-8e66-c5679b70e137/images/e7f9c304-9111-4cfd-a32b-f0034878f731/19c61206-38b6-4470-827e-e6a549b08dc3,if=none,id=drive-virtio-disk0,format=raw,serial=e7f9c304-9111-4cfd-a32b-f0034878f731,cache=none,werror=stop,rerror=stop,aio=threads
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>> -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=52 -device
>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:10:3c:1a,bus=pci.0,addr=0x3
>> -chardev
>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/VMNAME.com.redhat.rhevm.vdsm,server,nowait
>> -device
>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>> -chardev spicevmc,id=charchannel1,name=vdagent -device
>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0
>> -chardev pty,id=charconsole0 -device
>> virtconsole,chardev=charconsole0,id=console0 -spice
>> port=5952,tls-port=5953,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record
>> -k en-us -vga cirrus -device
>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
>> 
>> 
>> This example has a VSZ allocation of ~4 GB although its max memory is set
>> to 1024 MB?
>> 
>> 
>> The maps file content for this one is:
>
> I sorted the maps file.  The biggest single map is 1024 MB of guest RAM (as 
> expected):
>
> [40000000] > 7f9777e00000-7f97b7e00000 rw-p 00000000 00:00 0
>
> The following regions are suspicious.  They are ~63 MB each.  In total they
> make up around 2709 MB.  Notice they are non-readable, non-writeable,
> non-executable private memory.
>
> To track them down you could try reducing the qemu-kvm command-line
> until they no longer appear.  For example, start by disabling spice.
>
> Another approach is to use tools like gdb or perf to find who is mapping
> 63 MB regions.
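
For anyone who wants to repeat that on another guest, both the sort and
the total of those no-access private regions can be pulled straight out
of /proc/<pid>/maps.  A rough, untested sketch (assumes gawk for
strtonum(); 23663 is the PID from the ps line above):

  # largest mappings first, size in bytes prepended
  awk '{ split($1, a, "-")
         print strtonum("0x" a[2]) - strtonum("0x" a[1]), $0 }' \
      /proc/23663/maps | sort -rn | head

  # total size of the ---p (no-access, private) mappings
  awk '$2 == "---p" {
         split($1, a, "-")
         sum += strtonum("0x" a[2]) - strtonum("0x" a[1])
       }
       END { printf "%.0f MB\n", sum / 1048576 }' /proc/23663/maps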
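
On the gdb/perf idea: if the kernel has the syscall tracepoints enabled,
perf can record the mmap calls with call graphs, which should point at
whoever is reserving those ~63 MB chunks.  Again only a sketch, not
tested against this exact setup:

  perf record -e syscalls:sys_enter_mmap -g -p 23663
  # let it run for a bit, ^C, then:
  perf report

gdb works too: attach to the process, "catch syscall mmap", and look at
the backtrace whenever the length argument is in the 64 MB range.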


It's glibc:

https://www.ibm.com/developerworks/mydeveloperworks/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en

It's nothing to be concerned about.  VSZ has nothing to do with actual
memory usage in practice.
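
Those ~63 MB blocks are the per-thread malloc arenas described in that
article: on 64-bit, glibc reserves 64 MB of address space per arena up
front and only faults in what actually gets used, which is why most of
each block shows up as ---p in the maps file.  If the large VSZ is still
annoying (monitoring, ulimits, etc.), the arena count can usually be
capped via the environment - a sketch, assuming the glibc build in
question honours it:

  MALLOC_ARENA_MAX=1 /usr/libexec/qemu-kvm ...

The figure actually worth watching is RSS, e.g.:

  ps -o pid,vsz,rss,comm -p 23663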

Regards,

Anthony Liguori


