qemu-block

Re: [PATCH 0/7] block/nbd: decouple reconnect from drain


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH 0/7] block/nbd: decouple reconnect from drain
Date: Wed, 7 Apr 2021 13:13:07 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.9.0

07.04.2021 10:45, Roman Kagan wrote:
On Wed, Mar 17, 2021 at 11:35:31AM +0300, Vladimir Sementsov-Ogievskiy wrote:
15.03.2021 09:06, Roman Kagan wrote:
The reconnection logic doesn't need to stop while in a drained section.
Moreover, it has to be active during the drained section, as requests
that were caught in flight when the connection to the server broke can
only usefully be drained if the connection is restored.  Otherwise such
requests can only either stall, resulting in a deadlock (before
8c517de24a), or be aborted, defeating the purpose of the reconnection
machinery (after 8c517de24a).

This series aims to just stop messing with the drained section in the
reconnection code.

While doing so, it undoes the effect of 5ad81b4946 ("nbd: Restrict
connection_co reentrance"); as I may have missed the point of that
commit, I'd appreciate extra scrutiny in this area.

Roman Kagan (7):
    block/nbd: avoid touching freed connect_thread
    block/nbd: use uniformly nbd_client_connecting_wait
    block/nbd: assert attach/detach runs in the proper context
    block/nbd: transfer reconnection stuff across aio_context switch
    block/nbd: better document a case in nbd_co_establish_connection
    block/nbd: decouple reconnect from drain
    block/nbd: stop manipulating in_flight counter

   block/nbd.c  | 191 +++++++++++++++++++++++----------------------------
   nbd/client.c |   2 -
   2 files changed, 86 insertions(+), 107 deletions(-)



Hmm. The huge source of problems for this series is the weird logic around
drain and aio context switching in the NBD driver.

Why do we have all this overly complicated logic, with its abuse of the
in_flight counter, in NBD? The answer is connection_co. NBD differs from
other drivers in that it has a coroutine independent of the request
coroutines, and we have to move that coroutine carefully to the new aio
context. We can't just enter it from the new context; we have to be sure
that connection_co is at one of the yield points that support reentering.

I have an idea of how to avoid all this: drop connection_co altogether.

1. NBD negotiation moves into the connection thread and becomes independent
of any aio context.

2. Waiting for the server's reply moves into the request code. So, instead
of always reading replies from the socket in connection_co, we read in the
request coroutine, after sending the request. We'll need a CoMutex for this
(as only one request coroutine should read from the socket at a time), and
we must be prepared for the incoming reply not being for _this_ request (in
which case we should wake the request it belongs to and continue reading
from the socket).
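
To make the dispatch concrete, here is a minimal self-contained model of
that scheme, with plain pthreads standing in for the request coroutines and
the CoMutex, and a canned reply order standing in for the socket (everything
here is illustrative, not the actual block/nbd.c code):

/* reply_dispatch.c -- build: cc -pthread -o reply_dispatch reply_dispatch.c
 *
 * Model of the proposed scheme: any request may become "the reader";
 * a reader that receives a reply belonging to someone else marks it
 * ready, wakes the owner, and keeps reading until its own reply arrives.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NREQ 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool reader_busy;           /* models the proposed CoMutex */
static bool reply_ready[NREQ];     /* reply arrived for request i  */

/* Stand-in for the server socket: replies come back out of order.
 * Only the current reader calls this, so no extra locking is needed. */
static int read_one_reply(void)
{
    static const int order[NREQ] = { 2, 0, 3, 1 };
    static int next;

    return order[next++];
}

static void *request(void *arg)
{
    int me = *(int *)arg;

    /* The request itself was already sent; now wait for our reply. */
    pthread_mutex_lock(&lock);
    while (!reply_ready[me]) {
        if (reader_busy) {
            /* Someone else is reading; sleep until woken. */
            pthread_cond_wait(&cond, &lock);
            continue;
        }
        reader_busy = true;               /* take the reader role   */
        pthread_mutex_unlock(&lock);
        int owner = read_one_reply();     /* blocking "socket" read */
        pthread_mutex_lock(&lock);
        reader_busy = false;
        reply_ready[owner] = true;
        pthread_cond_broadcast(&cond);    /* wake the owner; if the
                                             reply wasn't ours, we
                                             loop and read again    */
    }
    pthread_mutex_unlock(&lock);
    printf("request %d completed\n", me);
    return NULL;
}

int main(void)
{
    pthread_t th[NREQ];
    int id[NREQ];

    for (int i = 0; i < NREQ; i++) {
        id[i] = i;
        pthread_create(&th[i], NULL, request, &id[i]);
    }
    for (int i = 0; i < NREQ; i++) {
        pthread_join(th[i], NULL);
    }
    return 0;
}

Each thread that finds the reader role free reads the next reply; if the
reply belongs to someone else, it marks it ready, wakes the owner, and
reads again, exactly as described above.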

The problem with this approach is that it would change the reconnect
behavior.

Currently connection_co's purpose is three-fold:

1) receive the header of the server response, identify the request it
    pertains to, and wake the respective request coroutine

2) take on the responsibility to reestablish the connection when it's
    lost

3) monitor the idle connection and initiate the reconnect as soon as the
    connection is lost

Points 1 and 2 can indeed be moved to the request coroutines.  However, I
don't see how to do 3 without an extra ever-running coroutine.
Sacrificing it would mean that a connection loss wouldn't be noticed and
recovery wouldn't be attempted until a request arrived.

This change looks to me like a degradation compared to the current
state.


For 3, we can check the connection on a timeout:

 - getsockopt(.. SO_ERROR ..), which could be done from the bs aio context,
or even from the reconnect thread's context (see the sketch after this list)

 - or, we can create a PING request: just use some request with parameters
for which we are sure the NBD server will take no action but report some
expected error. We can issue such a request on a timeout when there are no
other requests in flight, just to check that the connection still works.
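
For the first option, the check itself is tiny; a sketch, where the helper
name and the timer that would periodically call it are hypothetical:

/* Poll a connected socket for a pending error.  Returns 0 if the socket
 * looks healthy, otherwise the pending errno (e.g. ECONNRESET, ETIMEDOUT). */
#include <errno.h>
#include <sys/socket.h>

static int sock_check_error(int fd)
{
    int err = 0;
    socklen_t len = sizeof(err);

    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0) {
        return errno;   /* the getsockopt call itself failed */
    }
    return err;
}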

Note that none of these (including the current [3], which is just an endless
read from the socket) will work without keep-alive set, which is not the
default for now.
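
For reference, arming keep-alive on the socket would look roughly like this
(Linux-specific TCP options; the function name and the timing values are
illustrative, not recommendations):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable TCP keep-alive with fairly aggressive probing, so a silently
 * dead peer is detected in about idle + cnt * intvl = 60 seconds
 * instead of the kernel default of over two hours. */
static int sock_enable_keepalive(int fd)
{
    int on = 1, idle = 30, intvl = 10, cnt = 3;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) < 0) {
        return -1;
    }
    return 0;
}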

Anyway, I think the first step is to split the connect thread out of nbd.c,
which is overcomplicated now; I'm going to send a refactoring series for this.

--
Best regards,
Vladimir


