
From: Acewind
Subject: [Qemu-discuss] When using an rbd block device, VM migration downtime increases to over 2000ms, much more than expected!
Date: Tue, 15 Jan 2019 14:30:44 +0800

I use a Ceph Luminous pool as the OpenStack Cinder backend.
When I live-migrate an instance, the final downtime is always above
2000ms:

virsh qemu-monitor-command instance-00000015 --hmp info migrate
...
capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks:
off compress: off events: on postcopy-ram: off
Migration status: active
total time: 3443 milliseconds
expected downtime: 10 milliseconds
setup: 16 milliseconds
transferred ram: 173340 kbytes
throughput: 419.47 mbps
remaining ram: 0 kbytes
total ram: 4326224 kbytes
duplicate: 1041002 pages
skipped: 0 pages
normal: 42246 pages
normal bytes: 168984 kbytes
dirty sync count: 3
dirty pages rate: 76 pages

capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks:
off compress: off events: on postcopy-ram: off
Migration status: completed
total time: 5463 milliseconds
downtime: 2020 milliseconds
setup: 16 milliseconds
transferred ram: 173348 kbytes
throughput: 260.44 mbps
remaining ram: 0 kbytes
total ram: 4326224 kbytes
duplicate: 1041002 pages
skipped: 0 pages
normal: 42248 pages
normal bytes: 168992 kbytes
dirty sync count: 4
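
For comparison with the 2020ms actually observed, the configured maximum downtime limit can be inspected and adjusted with standard libvirt/QEMU commands. A minimal sketch (the instance name instance-00000015 is taken from the output above; the 500ms value is only an illustrative choice, and migrate-getmaxdowntime needs a reasonably recent libvirt):

virsh migrate-getmaxdowntime instance-00000015
# prints the current allowed downtime, in milliseconds

virsh migrate-setmaxdowntime instance-00000015 500
# raises the allowed downtime to 500 ms for the next migration

virsh qemu-monitor-command instance-00000015 --hmp "info migrate_parameters"
# QEMU-side view of the migration parameters, including the downtime limit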

But when I live-migrate another instance, whose block device is on an NFS
backend, between the same two compute hosts, the final downtime is always fine:

virsh qemu-monitor-command instance-0000000f --hmp info migrate
...
capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks:
off compress: off events: on postcopy-ram: off
Migration status: active
total time: 3442 milliseconds
expected downtime: 10 milliseconds
setup: 17 milliseconds
transferred ram: 173432 kbytes
throughput: 415.32 mbps
remaining ram: 0 kbytes
total ram: 4326224 kbytes
duplicate: 1040792 pages
skipped: 0 pages
normal: 40991 pages
normal bytes: 163964 kbytes
dirty sync count: 3
dirty pages rate: 66 pages

capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks:
off compress: off events: on postcopy-ram: off
Migration status: completed
total time: 3448 milliseconds
downtime: 7 milliseconds
setup: 17 milliseconds
transferred ram: 173440 kbytes
throughput: 412.85 mbps
remaining ram: 0 kbytes
total ram: 4326224 kbytes
duplicate: 1040792 pages
skipped: 0 pages
normal: 40993 pages
normal bytes: 163972 kbytes
dirty sync count: 4
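
Since the events capability is on in both runs, one way to see where the extra two seconds go is to watch timestamped MIGRATION state-change events while repeating the RBD-backed migration. A sketch, assuming a libvirt new enough to support --timestamp on qemu-monitor-event:

virsh qemu-monitor-event instance-00000015 --event MIGRATION --loop --timestamp
# prints each migration status change with a timestamp, so the point
# where the long pause actually happens shows up directly and can be
# compared between the RBD-backed and NFS-backed instances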

These are my host kernel and Ceph versions:
[address@hidden /]# uname -a
Linux host01 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016
x86_64 x86_64 x86_64 GNU/Linux
[address@hidden /]# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)
[address@hidden /]# rpm -qa | grep librbd
librbd1-12.2.8-0.el7.x86_64
[address@hidden /]# rpm -qa | grep ceph
ceph-base-12.2.8-0.el7.x86_64
ceph-12.2.8-0.el7.x86_64
ceph-selinux-12.2.8-0.el7.x86_64
python-cephfs-12.2.8-0.el7.x86_64
ceph-mds-12.2.8-0.el7.x86_64
ceph-common-12.2.8-0.el7.x86_64
ceph-mgr-12.2.8-0.el7.x86_64
ceph-osd-12.2.8-0.el7.x86_64
ceph-ansible-3.1.6-4.1.el7.noarch
libcephfs2-12.2.8-0.el7.x86_64
ceph-mon-12.2.8-0.el7.x86_64

Is there any known issue with Ceph RBD that could explain this? Thanks!
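
One thing that may be worth checking (purely an assumption on my part, not something established by the output above) is the cache mode of the RBD disk, since a writeback cache has to be flushed while the guest is paused at the end of migration. The cache settings are visible with standard commands:

virsh dumpxml instance-00000015 | grep -i cache
# shows the cache= attribute on each <driver> element in the disk definitions

virsh qemu-monitor-command instance-00000015 --hmp "info block"
# the "Cache mode:" line per block device, as seen by QEMU

grep -i "rbd cache" /etc/ceph/ceph.conf
# client-side librbd cache settings on the compute host, if any are set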

