From: Kevin Wolf
Subject: Re: [Qemu-devel] BDRV request fragmentation and virtio-blk write submission guarantees (2nd try)
Date: Thu, 18 Jul 2019 16:59:31 +0200
User-agent: Mutt/1.11.3 (2019-02-01)

On 18.07.2019 at 15:52, Евгений Яковлев wrote:
> Hi everyone,
> 
> My previous message was misformatted, so here's another one. Sorry about
> that.
> 
> We're currently working on implementing a qemu BDRV format driver which we
> are using with virtio-blk devices.
> 
> I have a question concerning BDRV request fragmentation and virtio-blk write
> request submission which is not entirely clear to me from reading the virtio
> spec alone. Could you please consider the following case and give some
> additional guidance?
> 
> 1. Our BDRV format driver has a notion of a maximum supported transfer size,
> so we implement BlockDriver::bdrv_refresh_limits, where we fill in the
> BlockLimits::max_transfer and opt_transfer fields.
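> 
> Roughly, the hook looks like this (just a sketch: "mydrv" and the two
> constants are placeholders for our driver, while the hook and the
> BlockLimits fields are the real ones):
> 
>     static void mydrv_refresh_limits(BlockDriverState *bs, Error **errp)
>     {
>         /* hard cap on the size of a single request handed to the driver */
>         bs->bl.max_transfer = MYDRV_MAX_TRANSFER;
>         /* preferred request size */
>         bs->bl.opt_transfer = MYDRV_OPT_TRANSFER;
>     }
> 
>     static BlockDriver bdrv_mydrv = {
>         .format_name         = "mydrv",
>         .bdrv_refresh_limits = mydrv_refresh_limits,
>         /* ... */
>     };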
> 
> 2. virtio-blk exposes max_transfer as the virtio_blk_config::opt_io_size
> field, which (according to the 1.1 spec) is a *suggested* maximum. We read
> "suggested" as "the guest driver may still send requests that don't fit into
> opt_io_size and we should handle those"...
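> 
> For reference, opt_io_size sits in the topology part of the device config
> space; roughly (layout as we read it in the 1.1 spec, sizes in blocks,
> please double-check against the spec text):
> 
>     struct virtio_blk_topology {
>         u8   physical_block_exp;
>         u8   alignment_offset;
>         le16 min_io_size;
>         le32 opt_io_size;   /* suggested maximum I/O size, in blocks */
>     };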
> 
> 3. ... and judging by the code in block/io.c, the qemu block layer handles
> such requests by fragmenting them into several BDRV requests if the request
> size is > max_transfer.
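> 
> The relevant part is the splitting loop in bdrv_aligned_pwritev() (a rough
> paraphrase with the arguments abbreviated, not the literal code):
> 
>     max_transfer = QEMU_ALIGN_DOWN(MIN_NON_ZERO(bs->bl.max_transfer, INT_MAX),
>                                    align);
>     bytes_remaining = bytes;
>     while (bytes_remaining) {
>         int num = MIN(bytes_remaining, max_transfer);
> 
>         /* Each chunk goes to the driver as a separate request, and the
>          * coroutine may yield inside this call before the next chunk is
>          * submitted. */
>         ret = bdrv_driver_pwritev(bs, offset + bytes - bytes_remaining,
>                                   num, ... /* qiov slice, flags */);
>         if (ret < 0) {
>             break;
>         }
>         bytes_remaining -= num;
>     }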
> 
> 4. The guest will see request completion only after all fragments are
> handled. However, each fragment submission path can call
> qemu_coroutine_yield and move on to submitting the next request available in
> the virtq before the rest of the fragments have been submitted. This means
> the following situation is possible, where BDRV sees 2 write requests in the
> virtq, both of which are larger than max_transfer:
> 
> Blocks:       -----------------------------
> 
> Write1:       ------xxxxxxxx---------------
> Write2:       ------yyyyyyyy---------------
> 
> Write1Chunk1: ------xxxx-------------------
> Write2Chunk1: ------yyyy-------------------
> Write2Chunk2: ----------yyyy---------------
> Write1Chunk2: ----------xxxx---------------
> 
> Blocks:       ------yyyyxxxx---------------
> 
> 
> In the above scenario the guest virtio-blk driver decided to submit 2
> intersecting write requests, both of which are larger than max_transfer, and
> then notified the hypervisor.
> 
> I understand that virtio-blk may handle requests out of order, so the guest
> must not make any assumptions about the relative order in which those
> requests will be handled.
> 
> However, can the guest driver expect that, whatever the submission order
> turns out to be, the intersecting writes will each be applied atomically?
> 
> In other words, would it be correct for a conforming virtio-blk driver to
> expect only "xxxxxxxx" or "yyyyyyyy", but nothing else in between, after
> both requests are reported as completed?
> 
> Because I think that is something that may happen in qemu right now, if I
> understood correctly.

I don't think atomicity is promised anywhere in the virtio
specification, and I agree with you that this case can happen (it
probably happens much more frequently when you use image formats instead
of raw files).

On the other hand, there is no good reason for a guest OS to submit two
write requests to the same blocks in parallel. Even if it could expect
that one of the requests wins, the end result would still be undefined,
so I don't think this could ever be a useful thing to do. (Well, I guess
it could replace flipping a coin...)

Kevin


