qemu-discuss

Re: [Qemu-discuss] Too much memory overhead?


From: Shuichiro MAKIGAKI
Subject: Re: [Qemu-discuss] Too much memory overhead?
Date: Thu, 25 Jun 2015 19:48:06 +0900
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Thunderbird/40.0a2

This thread may need to move to the OpenStack mailing list.

Did you set reserved_host_memory_mb in nova.conf to reserve 32GB for the hypervisor?
If so, keep in mind that the option only affects the scheduler's accounting; it does not limit or reflect actual memory usage on the host:
http://lists.openstack.org/pipermail/openstack/2014-November/010548.html
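
For reference, a minimal sketch of the setting as it would appear on the compute node (assuming the Icehouse layout, where the option sits in the [DEFAULT] section; the value is in MB):

# /etc/nova/nova.conf
[DEFAULT]
# Hold 32 GB back from what the scheduler treats as available on this host.
# This is bookkeeping only; it does not cap what qemu or the host actually consume.
reserved_host_memory_mb = 32768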

Regards,
Makkie

On 2015/06/25 2:52, Mike Leong wrote:
We use OpenStack.  OpenStack is configured to reserve 32GB of memory for
the hypervisor OS, day-to-day operation, etc.  However, even with 32G
reserved, the system has very little free memory available.  Can someone
help me figure out where my memory is being used?  I'm suspecting
qemu-system-x86_64 is using a lot more memory than is allocated to it.

Here's my setup:
OpenStack Release: Icehouse
Server mem: 256G
Qemu version: 2.0.0+dfsg-2ubuntu1.1
Networking: Contrail 1.20
Block storage: Ceph 0.80.7
Hypervisor OS: Ubuntu 12.04
memory over-provisioning is disabled
kernel version: 3.11.0-26-generic

Info on instances:
- root volume is file backed (qcow2) on the hypervisor local storage
- each instance has a rbd volume mounted from Ceph
- no swap file/partition

Each hypervisor hosts about 45-50 instances.

address@hidden:/etc/libvirt/qemu# free -g
              total       used       free     shared    buffers     cached
Mem:           251        250          1          0          0          1
-/+ buffers/cache:        248          2  <------------ only 2G free (248G used) after buffers/cache
Swap:           82         25         56

RSS sum of all the qemu processes:
address@hidden:/etc/libvirt/qemu# ps -eo rss,cmd|grep qemu|awk '{ sum+=$1} END {print sum}'
204191112

RSS sum of the non qemu processes:
address@hidden:/etc/libvirt/qemu# ps -eo rss,cmd|grep -v qemu|awk '{ sum+=$1} END {print sum}'
2017328

As you can see, the RSS total is only 196G.
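
For reference, ps reports rss in kilobytes, so 204191112 KB is roughly 195 GB for qemu alone.  A sketch of the same sum printed directly in GB (the bracketed pattern is just so grep does not count its own process):

ps -eo rss,cmd | grep '[q]emu' | awk '{ sum+=$1 } END { printf "%.1f GB\n", sum/1048576 }'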

slabtop usage:
  Active / Total Objects (% used)    : 473924562 / 480448557 (98.6%)
  Active / Total Slabs (% used)      : 19393475 / 19393475 (100.0%)
  Active / Total Caches (% used)     : 87 / 127 (68.5%)
  Active / Total Size (% used)       : 10482413.81K / 11121675.57K (94.3%)
  Minimum / Average / Maximum Object : 0.01K / 0.02K / 15.69K

   OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
420153856 420153856   7%    0.02K 18418442      256  73673768K kmalloc-16
55345344 49927985  12%    0.06K 864771       64   3459084K kmalloc-64
593551 238401  40%    0.55K  22516       28    360256K radix_tree_node
1121400 1117631  99%    0.19K  26700       42    213600K dentry
680784 320298  47%    0.10K  17456       39     69824K buffer_head
  10390   9998  96%    5.86K   2078        5     66496K task_struct
1103385 901181  81%    0.05K  12981       85     51924K shared_policy_node
  48992  48377  98%    1.00K   1531       32     48992K ext4_inode_cache
   4856   4832  99%    8.00K   1214        4     38848K kmalloc-8192
  58336  33664  57%    0.50K   1823       32     29168K kmalloc-512
  13552  11480  84%    2.00K    847       16     27104K kmalloc-2048
146256  81149  55%    0.18K   3324       44     26592K vm_area_struct
113424 109581  96%    0.16K   2667       48     21336K kvm_mmu_page_header
  18447  13104  71%    0.81K    473       39     15136K task_xstate
  26124  26032  99%    0.56K    933       28     14928K inode_cache
   3096   3011  97%    4.00K    387        8     12384K kmalloc-4096
106416 102320  96%    0.11K   2956       36     11824K sysfs_dir_cache
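
The kernel's share can also be read straight from /proc/meminfo; for example, a quick way to pull the slab, page-table and kernel-stack totals:

grep -E 'Slab|SReclaimable|SUnreclaim|KernelStack|PageTables|VmallocUsed' /proc/meminfo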


According to virsh dommemstat, the instances are only using 194GB:
rss:
address@hidden:/etc/libvirt/qemu# for i in instance-0000*.xml; do inst=$(echo $i|sed s,\.xml,,); virsh dommemstat $inst; done|awk '/rss/ { sum+=$2} END {print sum}'
204193676

allocated:
address@hidden:/etc/libvirt/qemu# for i in instance-0000*.xml; do inst=$(echo $i|sed s,\.xml,,); virsh dommemstat $inst; done|awk '/actual/ { sum+=$2} END {print sum}'
229111808

Basically, the math doesn't add up.  The qemu processes are using less
than what's allocated to them.  In the example above, node-2 has 250G,
with 2G free.  qemu has been allocated 218G, with 194G of that resident
(RSS).  That means 24G has not even been touched yet (218 - 194), and
still I only have 2G free.  You can guess what would happen if the
instances decided to use that 24G...
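
One way to see that gap broken down per instance is to compare the two counters dommemstat already reports; a rough sketch (assuming virsh list --name is available here):

for i in $(virsh list --name); do virsh dommemstat $i | awk -v d="$i" '/actual/ { a=$2 } /rss/ { r=$2 } END { printf "%s actual-rss: %.1f GB\n", d, (a-r)/1048576 }'; done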

thx


