[Qemu-discuss] [Problem] RDMA live migration failed with Emulex rdma nic


From: Shi, Xiao-Lei (Bruce, HP Servers-PSC-CQ)
Subject: [Qemu-discuss] [Problem] RDMA live migration failed with Emulex rdma nics
Date: Mon, 12 Jan 2015 02:52:02 +0000

Hi,

 

I’m trying to do RDMA live migration in QEMU with Emulex OneConnect NICs, but it fails.

Here is my environment:

OS – RHEL 6.5

QEMU – 2.2.0

Libvirt – 1.2.11

 

First, I believe my RDMA NICs are configured correctly; I verified the RDMA connection with the following test:

address@hidden qemu]# ib_send_bw -d ocrdma0 -i 1 -F --report_gbits 192.168.6.58
---------------------------------------------------------------------------------------
                    Send BW Test
Dual-port       : OFF          Device         : ocrdma0
Number of qps   : 1            Transport type : IB
Connection type : RC           Using SRQ      : OFF
TX depth        : 128
CQ Moderation   : 100
Mtu             : 1024[B]
Link type       : Ethernet
Gid index       : 0
Max inline data : 0[B]
rdma_cm QPs     : OFF
Data ex. method : Ethernet
---------------------------------------------------------------------------------------
local address: LID 0000 QPN 0x00a0 PSN 0x1e6324
GID: 254:128:00:00:00:00:00:00:198:52:107:255:254:254:252:48
remote address: LID 0000 QPN 0x008d PSN 0x343bbc
GID: 254:128:00:00:00:00:00:00:198:52:107:255:254:254:236:224
---------------------------------------------------------------------------------------
#bytes     #iterations    BW peak[Gb/sec]    BW average[Gb/sec]   MsgRate[Mpps]
65536      1000           9.16               9.16                 0.017474
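On the peer at 192.168.6.58 I first started the matching server side of the test, roughly the same command without a destination address (assuming the device is also named ocrdma0 on that host):

ib_send_bw -d ocrdma0 -i 1 -F --report_gbits

With no destination IP given, ib_send_bw simply waits for the client to connect.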

 

Then I followed the guidelines on the wiki page: http://wiki.qemu.org/Features/RDMALiveMigration
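In outline, the procedure from that wiki page is roughly as follows (the destination IP and port below are placeholders, not necessarily the exact values from my setup):

Destination host, start QEMU listening for an incoming RDMA migration:

qemu-system-x86_64 [guest options] -incoming rdma:192.168.6.58:4444

Source host, in the QEMU monitor:

(qemu) migrate_set_capability rdma-pin-all on
(qemu) migrate -d rdma:192.168.6.58:4444

(rdma-pin-all is optional; it pins all guest memory up front instead of registering it on demand.)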

When I performed the migration, I got this error on the destination machine:

dest_init RDMA Device opened: kernel name ocrdma0 uverbs device name uverbs0, infiniband_verbs class device path /sys/class/infiniband_verbs/uverbs0, infiniband class device path /sys/class/infiniband/ocrdma0, transport: (2) Ethernet
qemu-kvm: Length mismatch: pc.ram: 0x80000000 in != 0x8000000
qemu: warning: error while loading state for instance 0x0 of device 'ram'
Segmentation fault (core dumped)

 

I also tried live migration without RDMA in the same environment, and it succeeded without any error.
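(That run used the ordinary TCP transport, i.e. roughly -incoming tcp:0:4444 on the destination and migrate -d tcp:192.168.6.58:4444 in the source monitor, with the same placeholder address and port as above.)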

 

Are there any faults in my configuration or operations? Or does QEMU support Emulex RDMA NICs at all? (Emulex NICs use the ocrdma driver, and I’m not sure whether QEMU currently supports it.)

If you need more logs and information, please let me know.

 

Thanks & Best Regards

Shi, Xiao-Lei (Bruce)

 

Hewlett-Packard Co., Ltd.
HP Servers Core Platform Software China

Telephone +86 23 65683093

Mobile +86 18696583447

Email address@hidden

 

