qemu-devel



From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-devel] [Nbd] [PATCH v3] doc: Add NBD_CMD_BLOCK_STATUS extension
Date: Tue, 29 Nov 2016 14:34:25 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.0

29.11.2016 13:18, Kevin Wolf wrote:
On 27.11.2016 at 20:17, Wouter Verhelst wrote:
3. Q: selecting of dirty bitmap to export
    A: several variants:
       1: id of bitmap is in flags field of request
           pros: - simple
           cons: - it's a hack. flags field is for other uses.
                 - we'll have to map bitmap names to these "ids"
       2: introduce extended nbd requests with variable length and exploit this
          feature for BLOCK_STATUS command, specifying bitmap identifier.
          pros: - looks like a true way
          cons: - we have to create additional extension
                - possible we have to create a map,
                  {<QEMU bitmap name> <=> <NBD bitmap id>}
       3: external tool should select which bitmap to export. So, in case
          of Qemu it should be something like the qmp command
          block-export-dirty-bitmap.
          pros: - simple
                - we can extend it to behave like (2) later
          cons: - additional qmp command to implement (possibly, the
                  lesser evil)
          note: Hmm, an external tool can choose between allocated/dirty
                data too, so we could remove the 'NBD_FLAG_STATUS_DIRTY'
                flag entirely.
Downside of 3, though, is that it moves the definition of what the
different states mean outside of the NBD protocol (i.e., the protocol
messages are not entirely defined anymore, and their meaning depends on
the clients and servers in use).
Another point to consider is that option 3 doesn't allow you to access
two (or more) different bitmaps from the client without using the side
channel all the time to switch back and forth between them and having to
drain the request queue each time to avoid races.

In general, if we have something that "looks like the true way", I'd
advocate to choose this option. Experience tells that we'd regret
anything simpler in a year or two, and then we'll have to do the real
thing anyway, but still need to support the quick hack for
compatibility.
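To make option 2 above concrete, here is a minimal sketch of what an extended, variable-length request carrying a bitmap identifier might look like on the wire. The 28-byte fixed header matches the existing NBD request layout; the length-prefixed bitmap name appended after it is purely an assumption for illustration, not part of any spec, and the command number used is hypothetical.

```python
import struct

NBD_REQUEST_MAGIC = 0x25609513  # request magic from the NBD protocol

# Hypothetical value for illustration only; a real extension would get a
# number assigned in the NBD spec.
NBD_CMD_BLOCK_STATUS = 7

def pack_extended_request(flags, cmd, handle, offset, length, bitmap_name):
    """Pack the fixed 28-byte NBD request header, then append a
    length-prefixed bitmap identifier (hypothetical wire layout)."""
    name = bitmap_name.encode("utf-8")
    header = struct.pack(">IHHQQI", NBD_REQUEST_MAGIC, flags, cmd,
                         handle, offset, length)
    return header + struct.pack(">I", len(name)) + name
```

A layout like this would avoid the bitmap-name-to-id map of option 1, at the cost of the extension negotiation discussed above.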

5. The number of status descriptors sent by the server should be restricted
    variants:
    1: just allow server to restrict this as it wants (which was done in v3)
    2: (not excluding 1). The client somehow specifies the maximum number
       of descriptors.
       2.1: add command flag, which will request only one descriptor
            (otherwise, no restrictions from the client)
       2.2: again, introduce extended nbd requests, and add field to
            specify this maximum
I think having a flag which requests just one descriptor can be useful,
but I'm hesitant to add it unless it's actually going to be used; so in
other words, I'll leave the decision on that bit to you.
The native qemu block layer interface returns the status of only one
contiguous chunk, so the easiest implementation of the NBD block driver
in qemu would always use that bit.

On the other hand, it would be possible for the NBD block driver in qemu
to cache the rest of the response internally and answer the next request
from the cache instead of sending a new request over the network. Maybe
that's what it should be doing anyway for good performance.
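The caching idea might look like the following sketch (class and method names are made up for illustration): keep the extents from the last reply and answer later queries from them when the offset falls inside a cached extent, going back to the network only on a miss.

```python
class BlockStatusCache:
    """Client-side cache of status descriptors from the last reply."""

    def __init__(self):
        self.extents = []  # list of (offset, length, status) tuples

    def store(self, extents):
        """Remember the descriptors returned by the server."""
        self.extents = list(extents)

    def lookup(self, offset):
        """Return (remaining_length, status) for a cached offset, or
        None on a miss, in which case a new NBD_CMD_BLOCK_STATUS
        request would have to go over the network."""
        for start, length, status in self.extents:
            if start <= offset < start + length:
                return (start + length - offset, status)
        return None
```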

Also, an idea on 2-4:

     As we said, dirtiness is unknown to NBD, and an external tool
     should specify, manage and understand which data is actually
     transmitted, so why not just call it user_data and leave the status
     field of the reply chunk unspecified in this case?

     So, I propose one flag for NBD_CMD_BLOCK_STATUS:
     NBD_FLAG_STATUS_USER. If it is clear, then the behaviour is defined
     by Eric's 'Block provisioning status' paragraph.  If it is set, we
     just leave the status field for some external... protocol? Who
     knows what this user data is.
Yes, this sounds like a reasonable approach.
Makes sense to me, too.

However, if we have use for a single NBD_FLAG_STATUS_USER bit, we also
have use for a second one. If we go with one of the options where the
bitmap is selected per command, we're fine because you can simply move
the second bit to a different bitmap and do two requests. If we have
only a single active bitmap at a time, though, this feels like an actual
problem.
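The flag proposal could be sketched like this; the bit value and the provisioning-bit meanings are assumptions here, since the v3 draft would assign the real numbers:

```python
# Hypothetical bit value; a real spec would assign it.
NBD_FLAG_STATUS_USER = 1 << 0

def interpret_status(command_flags, status_field):
    """With NBD_FLAG_STATUS_USER set, the status field is opaque user
    data whose meaning is owned by the external tool; otherwise it is
    read per the 'Block provisioning status' paragraph (hole/zero bit
    assignment assumed for illustration)."""
    if command_flags & NBD_FLAG_STATUS_USER:
        return ("user", status_field)  # meaning defined outside NBD
    return ("provisioning", {"hole": bool(status_field & 1),
                             "zero": bool(status_field & 2)})
```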

Another idea, about backups themselves:

     Why do we need allocated/zero status for backup? IMHO we don't.
Well, I've been thinking so all along, but then I don't really know what
it is, in detail, that you want to do :-)
I think we do. A block can be dirtied by discarding/write_zero-ing it.
Then we want the dirty bit to know that we need to include this block in
the incremental backup, but we also want to know that we don't actually
have to transfer the data in it.

And we will know that automatically by using the structured read command, so a separate call to get_block_status is not needed.


Kevin


--
Best regards,
Vladimir



