qemu-devel
From: Michael R. Hines
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v4: 03/10] more verbose documentation of the RDMA transport
Date: Mon, 18 Mar 2013 19:23:53 -0400
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130106 Thunderbird/17.0.2

On 03/18/2013 05:26 PM, Michael S. Tsirkin wrote:

Probably but I haven't mentioned ballooning at all.

memory overcommit != ballooning

Sure, then setting ballooning aside for the moment,
then let's just consider regular (unused) virtual memory.

In this case, what's wrong with the destination mapping
and pinning all the memory if it is not being ballooned?

If the guest touches all the memory during normal operation
before migration begins (which would be the common case),
then overcommit is irrelevant, no?

This is already handled by the RDMA connection manager (librdmacm).

The library already has functions like listen() and accept() the same
way that TCP does.

Once these functions return success, we have a guarantee that both
sides of the connection have already posted the appropriate work
requests sufficient for driving the migration.
Not if you don't post anything. librdmacm does not post requests.  So
everyone posts 1 buffer on the RQ during connection setup?
OK, though this is not what the document said; I was under the impression
this was done after connection setup.

Sorry, I wasn't being clear. Here's the existing sequence
that I've already coded and validated:

1. Receiver and Sender are started (command line):
     (The receiver has to be running before QMP migrate
      can connect, of course, or this all falls apart.)

2. Both sides post an RQ work request (or several)
3. Receiver does listen()
4. Sender does connect()
        At this point both sides have already posted
        work requests as stated before.
5. Receiver accept() => issue first SEND message

At this point the sequence of events I describe in the
documentation for put_buffer() / get_buffer() kicks
in and everything is normal.

I'll be sure to post an extra few work requests as suggested.


So the # of posted buffers goes 0 -> 1 -> 0 -> 1.
What I am saying is you should have an extra buffer
so it goes 1 -> 2 -> 1 -> 2;
otherwise you keep hitting the slow path in RQ processing:
each time you consume the last buffer, IIRC the receiver sends
an ACK to the sender saying "hey, this is the last buffer, slow down".
You don't want that.

No problem - I'll take care of it.




