
From: Eric Blake
Subject: Re: [Qemu-devel] [Nbd] [PATCH v2] doc: Add NBD_CMD_BLOCK_STATUS extension
Date: Tue, 5 Apr 2016 08:14:01 -0600

On 04/05/2016 03:24 AM, Markus Pargmann wrote:

>> +        requested.
>> +
>> +    The client SHOULD NOT read from an area that has both
>> +    `NBD_STATE_HOLE` set and `NBD_STATE_ZERO` clear.
> Why not? If we don't execute CMD_BLOCK_STATUS we wouldn't know about the
> situation and would simply read directly. To fulfill this statement we
> would need to receive the block status before every read operation.

Because we already state that for NBD_CMD_TRIM, the client SHOULD NOT
read an area where a trim was requested without an intervening write,
precisely because the server MAY (but need not) cause reads of that
area to return bogus data until another write happens.  I was just
trying to explain that the combination of `NBD_STATE_HOLE` set and
`NBD_STATE_ZERO` clear represents the state created by a successful
NBD_CMD_TRIM that the server honors without being able to guarantee
zero reads.
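To make the flag combinations concrete, here is a sketch of how a client might interpret them. The bit positions (bit 0 for HOLE, bit 1 for ZERO) follow the proposed extension and should be treated as assumptions until the patch is finalized; `describe_extent` is an illustrative helper, not part of any real NBD client.

```python
# Proposed status bits (assumed values from the patch under discussion).
NBD_STATE_HOLE = 1 << 0  # extent is not allocated on the backend
NBD_STATE_ZERO = 1 << 1  # extent is guaranteed to read as zeroes

def describe_extent(flags):
    hole = bool(flags & NBD_STATE_HOLE)
    zero = bool(flags & NBD_STATE_ZERO)
    if hole and not zero:
        # The post-TRIM state discussed above: the server honored the
        # trim but cannot guarantee zero reads, so reading is unwise.
        return "trimmed; contents undefined, avoid reading"
    if hole and zero:
        return "unallocated hole that reads as zeroes"
    if zero:
        return "allocated but reads as zeroes"
    return "allocated data"
```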

> Also something that is kind of missing from the document so far is
> concurrency with other NBD clients. Certainly most users do not use NBD
> for concurrent access to the backend storage. But for example the
> sentence above ignores the fact that another client may work on the
> backend and that the state may change after some time so that it may
> still be necessary to read from an area with NBD_STATE_HOLE and

That's missing from NBD in general, and I don't think this is the patch
to add it.  We already have concurrency issues within a single client,
because the NBD server can handle requests out of order (within the
bounds of the FLUSH and FUA modifiers).
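The out-of-order point can be sketched in miniature. `FakeServer` below is invented for illustration (it is not a real NBD implementation): a client that needs one command to happen before another must wait for the first reply, because the server is free to complete in-flight commands in any order unless FLUSH/FUA constrains it.

```python
import random

class FakeServer:
    """Toy stand-in for an NBD server that reorders in-flight commands."""
    def __init__(self, size=64):
        self.disk = bytearray(size)
        self.pending = []          # in-flight write commands

    def submit_write(self, offset, data):
        self.pending.append((offset, bytes(data)))

    def drain(self):
        # The server may complete queued writes in any order.
        random.shuffle(self.pending)
        for offset, data in self.pending:
            self.disk[offset:offset + len(data)] = data
        self.pending.clear()

    def read(self, offset, length):
        return bytes(self.disk[offset:offset + length])

def write_then_read(server, offset, data):
    # Safe pattern: wait for the write to complete before issuing a
    # dependent read, rather than relying on submission order.
    server.submit_write(offset, data)
    server.drain()
    return server.read(offset, len(data))
```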

> Also it is uncertain if these status bits may change over time through
> reorganization of backend storage, for example holes may be removed in
> the backend and so on. Is it safe to cache this stuff?

If the client is the only thing modifying the drive, maybe we want to
make that an additional constraint on the server.  But how best to word
it?  Or is that too tight a specification?
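Under the assumption debated here (this client is the sole writer), client-side caching could look like the sketch below. The class and its methods are hypothetical; the key point is that even a sole-writer cache must invalidate entries covering ranges the client itself writes or trims, since those operations change allocation state.

```python
class BlockStatusCache:
    """Hypothetical client-side cache of block-status replies,
    valid only if this client is the sole writer of the export."""
    def __init__(self):
        self.extents = {}  # offset -> (length, flags)

    def record(self, offset, length, flags):
        # Remember one extent from a block-status reply.
        self.extents[offset] = (length, flags)

    def invalidate(self, offset, length):
        # Drop any cached extent overlapping [offset, offset + length),
        # e.g. after our own WRITE or TRIM touches that range.
        self.extents = {
            off: (ln, fl)
            for off, (ln, fl) in self.extents.items()
            if off + ln <= offset or off >= offset + length
        }
```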

> Until now something like READ and WRITE were somewhat atomic operations
> in the protocol.

Not really.

Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

