qemu-devel
Re: [Qemu-devel] [PATCH V2 0/4] virtio-blk: add multiread support


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH V2 0/4] virtio-blk: add multiread support
Date: Thu, 18 Dec 2014 11:34:45 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On 16.12.2014 at 17:00, Peter Lieven wrote:
> On 16.12.2014 16:48, Kevin Wolf wrote:
> >On 16.12.2014 at 16:21, Peter Lieven wrote:
> >>this series adds the long-missing multiread support to virtio-blk.
> >>
> >>some remarks:
> >>  - I introduced rd_merged and wr_merged block accounting stats in
> >>    blockstats as a generic interface which can be set by any
> >>    driver that introduces multirequest merging in the future.
> >>  - the knob to disable request merging is not there yet. I would
> >>    add it to the device properties, also as a generic interface,
> >>    so that any driver that introduces request merging in the
> >>    future has the same switch. As there has been no knob in
> >>    the past, I would post this as a separate series, as it needs
> >>    some mangling in parameter parsing which might lead to further
> >>    discussions.
> >>  - the old multiwrite interface is still there and might be removed.
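The merging-plus-accounting idea can be pictured roughly like this (a minimal Python illustration, not QEMU's actual C code; the function name and request representation are invented for the sketch):

```python
# Hypothetical sketch: merge requests whose sector ranges are
# back-to-back, and count how many merges happened -- the count is
# what a stat like rd_merged/wr_merged would accumulate.
def merge_requests(reqs):
    """Merge back-to-back requests.

    Each request is a (start_sector, nb_sectors) tuple. Returns the
    merged request list and the number of merge operations performed.
    """
    merged = []
    num_merged = 0
    for start, nb in sorted(reqs):
        if merged and merged[-1][0] + merged[-1][1] == start:
            # Previous request ends exactly where this one begins:
            # extend it instead of issuing a second I/O.
            prev_start, prev_nb = merged[-1]
            merged[-1] = (prev_start, prev_nb + nb)
            num_merged += 1
        else:
            merged.append((start, nb))
    return merged, num_merged
```

Here two contiguous 8-sector reads would collapse into one 16-sector read and bump the merged counter by one.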
> >>
> >>v1->v2:
> >>  - add overflow checking for nb_sectors [Kevin]
> >>  - do not change the name of the macro for max mergeable requests. [Fam]
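The overflow check mentioned in the v2 changelog could look something like this (a hypothetical sketch, not the actual patch; the limit name and value are invented for illustration):

```python
# Hypothetical sketch of an nb_sectors overflow check: refuse to merge
# when the combined request would exceed the maximum transfer size.
# The constant below is invented for illustration, not QEMU's value.
MAX_MERGE_SECTORS = (2**31 - 1) // 512

def nb_sectors_ok(a_nb, b_nb, limit=MAX_MERGE_SECTORS):
    # In Python the sum cannot wrap, so a simple bound check suffices;
    # in C the same test must be written to avoid integer overflow.
    return a_nb + b_nb <= limit
```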
> >Diff to v1 looks good. Now I just need to check what it does to
> >performance. Did you run any benchmarks yourself?
> 
> I ran several installs of Debian/Ubuntu and booted Windows and Linux
> systems. I looked at rd_total_time_ns and wr_total_time_ns and saw
> no increase. Often I even saw a decrease.
> 
> {rd,wr}_total_time_ns measures the time from virtio_blk_handle_request
> to virtio_blk_rw_complete. So it seems to be a good indicator of the
> time spent on I/O.
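What that measurement amounts to can be sketched like this (a Python illustration of the idea only; the real counters live in QEMU's C code between virtio_blk_handle_request and virtio_blk_rw_complete):

```python
# Hypothetical sketch: accumulate per-direction wall-clock time from
# request submission to request completion, like {rd,wr}_total_time_ns.
import time

total_time_ns = {"rd": 0, "wr": 0}

def account_done(kind, start_ns):
    """Completion path: add the elapsed time to the running total."""
    total_time_ns[kind] += time.monotonic_ns() - start_ns

def handle_request(kind, do_io):
    """Submission path: timestamp, do the I/O, then account on completion."""
    start = time.monotonic_ns()
    do_io()
    account_done(kind, start)
```

If merging reduces the number of I/Os actually issued, this accumulated time should drop, which matches the observation above.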
> 
> What you could do is put it on top of your fio testing stack and
> look at the throughput. Sequential reads should be faster, and the
> rest no worse.
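A fio job along these lines could drive the sequential-read comparison (a sketch only; the device path, size, and I/O engine are placeholders to adapt to the test setup):

```ini
; Hypothetical fio job for comparing sequential 4k reads before and
; after the series; filename/size/ioengine are placeholders.
[seqread]
rw=read
bs=4k
size=1G
filename=/dev/vdb
direct=1
ioengine=libaio
iodepth=16
```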

So I finally ran some fio benchmarks on the series. The result for small
sequential reads (4k) is quite noisy, but seems slightly improved.
Larger sequential reads (64k) and random reads seem to be mostly
unaffected.

For writes, however, I can see a degradation. Perhaps running multiple
jobs in parallel means that we don't detect and merge sequential
requests any more when they are interleaved with another sequential job.
Or do you have an idea what else could have changed for writes?
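The hypothesis about interleaved jobs can be illustrated concretely (a Python sketch under the assumption that merging only considers requests adjacent in submission order, which may or may not match what the series does):

```python
# Hypothetical illustration: a merger that only joins requests adjacent
# *in submission order* finds nothing to merge once two sequential
# streams from parallel jobs are interleaved.
def merge_adjacent(reqs):
    """Merge only consecutive submissions with back-to-back sectors."""
    merged = []
    count = 0
    for start, nb in reqs:            # note: no sorting of the queue
        if merged and merged[-1][0] + merged[-1][1] == start:
            s, n = merged[-1]
            merged[-1] = (s, n + nb)
            count += 1
        else:
            merged.append((start, nb))
    return merged, count

# One sequential job: every request merges into its predecessor.
single = [(0, 8), (8, 8), (16, 8), (24, 8)]
# Two interleaved sequential jobs: no consecutive pair is contiguous,
# so nothing merges even though each stream alone is fully mergeable.
interleaved = [(0, 8), (1000, 8), (8, 8), (1008, 8)]
```

Under this model a single sequential writer merges everything, while two interleaved writers merge nothing, which would explain the write degradation with parallel jobs.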

Kevin


