
From: Wouter Verhelst
Subject: Re: [Qemu-devel] [Nbd] [PATCH v2 3/3] doc: Propose Structured Read extension
Date: Wed, 30 Mar 2016 23:26:14 +0200
User-agent: Mutt/1.5.24 (2015-08-30)

On Wed, Mar 30, 2016 at 02:54:41PM -0600, Eric Blake wrote:
> On 03/30/2016 01:51 PM, Wouter Verhelst wrote:
> > (side note: this seems to be mostly an NBD discussion at this point.
> > Does qemu-devel need to be in the loop before we've finished that? I
> > don't care either way, but then I don't want to bore or annoy people...)
> Well, it stemmed out of qemu's desires to implement more efficient ways
> to both push and pull sparse files across NBD; and qemu will ultimately
> want to implement what we discuss.  But I'm okay doing most of the churn
> off of the qemu list, and only adding qemu back in the loop when it is
> actually time to implement the final design, unless someone else speaks
> up asking to remain in on the conversation.

Sure. I was just asking the question...

> Well, I'm worried about the opposite - if the client does not negotiate
> structured replies, but does negotiate NBD_CMD_GET_LBA_STATUS (which has
> a variable-length payload to reply), then the server has three choices:
> 0) refuse the client (we already said this is a bit undesirable, as it
> may lead to the client retrying in an infloop - having a way to return
> an error message is better than dropping the connection); 1) the server
> must find a way to shoehorn the same data that would be sent in a
> structured reply to fit in the old-style unstructured reply (one nice
> thing about the structured reply is that we get a length for free; that
> length would have to be shoehorned into the old-style reply, different
> from CMD_READ where a length was implied from the request), 2) the
> server must fail the message
> Actually, having typed that, maybe choice 2 is not all that bad.

It isn't.

> It's fairly easy for a server to ALWAYS fail NBD_CMD_GET_LBA_STATUS
> with EINVAL if structured replies were not negotiated, and to document
> that a client that negotiates GET_LBA_STATUS MUST also negotiate
> structured replies for the command to be useful.  For that matter, we
> just documented that servers SHOULD use EINVAL for an unrecognized (or
> out-of-context) command.  The client can enable the two options in
> either order.  And we'd have the following table:
> enabled       enabled
> structured    GET_LBA            result of:
> replies                     read         GET_LBA       write
> ----------------------------------------------------------------
> no            no            old reply    EINVAL        old reply
> yes           no            new reply    EINVAL        [*]
> no            yes           old reply    EINVAL        old reply
> yes           yes           new reply    new reply     [*]


It would also be reasonable to say that if you don't negotiate an option
but do end up using it, you end up squarely in undefined behaviour-land.
The server could send EINVAL, it could honour your request in ways that
you may not expect (including structured replies when you didn't ask for
them), or it could cause nasal demons.

> [*] Here, we're still debating whether it makes sense to allow/require a
> server to send new replies everywhere (and clients must handle both
> styles if they negotiate structured replies), or to require a server to
> send old replies (so that read is the only command where clients have to
> handle two styles, and the results of negotiation pinpoint which style
> is used).

I'm still in favour of having it be "old reply" for the whole of that
"write" column. I'm just not convinced there's a downside to that, while
there is an upside.

> > Since only the read reply has a payload at this point in time, you don't
> > really need to special-case it, anyway.
> Okay, so in v1 I tried to negotiate STRUCTURED_REPLY; in v2 I was
> specific to STRUCTURED_READ; it sounds like we are leaning back towards
> STRUCTURED_REPLY and just a caveat that any new command that sends
> payload SHOULD/MUST fail if STRUCTURED_REPLY was not also negotiated.

You could formulate it that way. Alternatively, you could formulate it
so that any command that sends a payload may fail if STRUCTURED_REPLY was
not also negotiated, with the caveat that there is a backwards
compatibility exception for READ.

(i.e., make READ be the exception rather than the rule)

> I guess that also makes it easier to argue for a server that uses a
> structured reply for ALL messages (REPLY_TYPE_NONE for success changes
> 16 bytes into 20, and REPLY_TYPE_ERROR for errors changes 16 bytes
> into 24).


> > The client will already need to keep state to reassemble a fragmented
> > and out-of-order read reply, anyway. If that's already the case, adding
> > in the requirement to *also* keep track of error state (which could in
> > the simplest case be a single bit for a client which doesn't care about
> > offsets or error count) isn't that much more of an issue.
> For a client that is copying NBD_CMD_READ into a local file, dumping
> directly via pwrite() as each chunk comes in doesn't require the client
> to track any state (the client can just assume that by the end of the
> command, all the bytes will have been covered); while a client using
> pwritev() will have to construct an iovec that visits the chunks in the
> correct order (not necessarily in the order received).

Ah yes, good point.

> Clients that really don't want to do much work have the DF flag to
> forbid fragmentation.  But I think you've swayed me - I will make sure
> v3 allows an error at any point in the chain of chunks, and that the
> wording on the final TYPE_NONE chunk does NOT imply success unless no
> earlier error chunks were sent.

Okay, thanks.

< ron> I mean, the main *practical* problem with C++, is there's like a dozen
       people in the world who think they really understand all of its rules,
       and pretty much all of them are just lying to themselves too.
 -- #debian-devel, OFTC, 2016-02-12
