Issue Report: When VM memory is extremely large, downtime for RDMA migration is high. (64G mem --> extra 400ms)
Thu, 15 Apr 2021 01:54:19 +0000
When I tested RDMA live migration, I found that the downtime increased as the
VM's memory increased.
My Mellanox network card is a ConnectX-4 Lx and the driver is MLNX-5.2. My VM's
memory size is 64 GB, and the downtime is 430 ms when I migrate with the following command:
virsh migrate --live --p2p --persistent --copy-storage-inc --auto-converge
--verbose --listen-address 0.0.0.0 --rdma-pin-all --migrateuri
rdma://192.168.0.2 [VM] qemu+tcp://192.168.0.2/system
The extra time, about 400 ms, is spent deregistering memory with RDMA (in the
function ibv_dereg_mr) after the memory migration is complete. This happens
before qmp_cont, so it counts toward downtime.
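To show what I am measuring, here is a minimal standalone sketch (not QEMU code) that
registers a large pinned region the way --rdma-pin-all pins guest RAM and then times the
ibv_dereg_mr call; the device index, buffer size, and access flags are just assumptions
for illustration:

/* Minimal sketch: measure how long ibv_dereg_mr() takes on a large pinned region.
 * Assumed build line: gcc dereg_time.c -libverbs -o dereg_time */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

int main(void)
{
    size_t len = 8ULL << 30;                 /* 8 GiB test region, adjust to taste */
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) {
        fprintf(stderr, "no RDMA device found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    void *buf = malloc(len);
    if (!buf) {
        perror("malloc");
        return 1;
    }
    memset(buf, 0, len);                     /* fault pages in before pinning */

    /* Pin and register the whole region, as rdma-pin-all does for guest RAM */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }

    double t0 = now_ms();
    ibv_dereg_mr(mr);                        /* the call that shows up in downtime */
    double t1 = now_ms();
    printf("ibv_dereg_mr(%zu GiB) took %.1f ms\n", len >> 30, t1 - t0);

    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}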
How can we reduce this downtime? For example, could the memory be deregistered somewhere else?
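Something like the following is what I mean by "somewhere else" (purely a hypothetical
sketch, not QEMU's actual migration code; the deferred_dereg_* names are invented for
illustration): hand the registered MRs to a worker thread so the slow ibv_dereg_mr calls
run after qmp_cont instead of before it.

/* Hypothetical sketch: defer ibv_dereg_mr() to a worker thread so it no longer
 * sits on the downtime-critical path.  Structure and function names are made up. */
#include <infiniband/verbs.h>
#include <pthread.h>
#include <stdlib.h>

struct deferred_dereg {
    struct ibv_mr **mrs;
    int nb_mrs;
};

static void *deferred_dereg_worker(void *opaque)
{
    struct deferred_dereg *d = opaque;
    for (int i = 0; i < d->nb_mrs; i++) {
        ibv_dereg_mr(d->mrs[i]);    /* slow part, now off the critical path */
    }
    free(d->mrs);
    free(d);
    return NULL;
}

/* Called where the migration code currently deregisters memory inline:
 * queue the MRs and return immediately so qmp_cont is not delayed. */
void deferred_dereg_start(struct ibv_mr **mrs, int nb_mrs)
{
    struct deferred_dereg *d = malloc(sizeof(*d));
    pthread_t tid;

    d->mrs = mrs;
    d->nb_mrs = nb_mrs;
    pthread_create(&tid, NULL, deferred_dereg_worker, d);
    pthread_detach(tid);
}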
If anything here is wrong, please point it out.