qemu-devel

Re: [Qemu-devel] [RFC PATCH v2] Replication agent module


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC PATCH v2] Replication agent module
Date: Tue, 10 Apr 2012 12:29:28 +0100

On Wed, Apr 4, 2012 at 2:23 PM, Ori Mamluk <address@hidden> wrote:
> Hi Stefan,
> Thanks again for the thorough and detailed review.
> Some answers embedded below, generally I took all the points you mentioned.
> Main changes were:
> - Use qemu-thread
>    * WRT this - I didn't find any recv() function there. Am I missing
> anything? Should I call the native recv?
>        Same for send, even though there's a send_all - should I use it?
> - Use qemu_socket
> - Use QTAILQ for volumes list
>
> I'll shortly send patch V3 with the changes.

Sorry for the late reply, I was on vacation.

> I also attached a PDF with a design sketch following your request to
> describe the main flows.

Great, thanks for sharing.

>>> * I wrote a stand-alone Rephub for actively reading the protected volume
>>> and copying it - for test purposes.
>>
>> Cool, would be great to see a rephub implementation.
>
> It's still very basic and only for testing. Once I establish the repagent
> design I'll add it.
> Should it go into the same repository?

I suggest putting it into the QEMU repository for now.  During code
review it will be useful to have the patches easily available.  It may
also be necessary to keep it in QEMU to make testing easy.  But if
people ask that you maintain the sample rephub externally that should
work too.

>>> +#ifdef CONFIG_REPAGENT
>>> +    if (bs->device_name[0] != '\0') {
>>> +        /* We split the IO only at the highest stack driver layer.
>>> +           Currently we know that by checking device_name - only
>>> +           highest level (closest to the guest) has that name.
>>> +           */
>>> +/*           repagent_handle_protected_write(bs, sector_num,
>>> +                nb_sectors, qiov, ret);
>>
>> The top-level BlockDriverState requirement can be handled if you do
>> something like this in blockdev.c:drive_init():
>>
>>   if (use_repagent) {
>>       repagent_start(bs);
>>   }
>>
>> Only the top-level BlockDriverState (bs) would be tracked.
>
> I see. Sounds good.
> There's still the problem of additional drivers being added above - e.g.
> live snapshot, but as Kevin advised I'll worry about that later on.

Yes, that can be addressed in bdrv_append().

>>> +        RepCmdDataStartProtect *pcmd_dat
>>>
>>> +static void repagent_client_read(void *opaque)
>>> +{
>>> +    printf("repagent_client_read\n");
>>> +    int bytes_read =
>>> +        repcmd_listener_socket_read_next_buf(g_client_state.hsock);
>>
>> Did I miss where the socket is set to nonblocking?  This function should
>> not block if used as an fd handler in QEMU's main loop.
>
> You didn't miss it. Good point. You're referring to the risk of the read
> from the socket blocking, right?

Or the send becoming blocked due to missing TCP acks from the rephub.

>>> +    if (g_cl
>>> +            printf("Connected (%d)...\n", c++);
>>>
>>> +            usleep(1 * 1000 * 1000);
>>> +        }
>>
>> This thread isn't actually doing anything!  I think this function could
>> be integrated into the QEMU event loop instead of having a thread.
>
> Right. The thread is just a heartbeat responsible for reconnecting the
> socket.
> Any periodic event would suffice.
> Can you give me a pointer in the code that registers a similar event?

I'm not sure you even need a connect/disconnect heartbeat loop.  If a
socket read or write fails, the code could initiate a reconnect.  This
would all be event-driven and no explicit loop would be necessary.

Stefan


