qemu-devel



From: LIZHAOXIN1 [李照鑫]
Subject: Issue Report: When VM memory is extremely large, downtime for RDMA migration is high. (64G mem --> extra 400ms)
Date: Thu, 15 Apr 2021 01:54:19 +0000

Hi:
When I tested RDMA live migration, I found that the downtime increased as the 
VM's memory increased.

My Mellanox network card is a ConnectX-4 LX and the driver is MLNX-5.2. My VM's
memory size is 64 GB, and the downtime is 430 ms when I migrate using the
following parameters:
virsh migrate --live --p2p --persistent --copy-storage-inc --auto-converge 
--verbose --listen-address 0.0.0.0 --rdma-pin-all --migrateuri 
rdma://192.168.0.2 [VM] qemu+tcp://192.168.0.2/system

The extra time, about 400 ms, is how long RDMA takes to deregister memory
(via ibv_dereg_mr) after the memory migration is complete. This happens
before qmp_cont, so it is counted as part of the downtime.
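As a rough sanity check on these numbers (assuming 4 KiB pages; the per-page cost below is derived from the reported measurement, not a known constant of the driver), the implied per-page deregistration cost is on the order of tens of nanoseconds, which is why the total scales with guest memory size:

```python
# Back-of-the-envelope check: 64 GiB of pinned guest memory,
# deregistered in ~400 ms, implies a small fixed cost per page.
GIB = 1 << 30
mem_bytes = 64 * GIB
page_size = 4096                      # assumed 4 KiB pages
pages = mem_bytes // page_size        # number of pinned pages
dereg_ms = 400                        # measured extra downtime
ns_per_page = dereg_ms * 1e6 / pages  # implied cost per page

print(pages)                 # 16777216 pages
print(round(ns_per_page, 1)) # ~23.8 ns per page
```

Since this cost grows linearly with the number of pinned pages, larger guests will see proportionally longer downtime unless the deregistration is moved out of the stopped phase.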

How can we reduce this downtime? For example, could the memory be deregistered somewhere else?

If anything is wrong, please point it out.
Thanks!
