From: Nir Soffer
Subject: Re: [Qemu-devel] Change in qemu 2.12 causes qemu-img convert to NBD to write more data
Date: Sun, 11 Nov 2018 17:25:21 +0200

On Wed, Nov 7, 2018 at 6:42 PM Eric Blake <address@hidden> wrote:

> On 11/7/18 6:13 AM, Richard W.M. Jones wrote:
> > (I'm not going to claim this is a bug, but it causes a large, easily
> > measurable performance regression in virt-v2v).
>
> I haven't closely looked at this email thread yet, but a quick first
> impression:
>
>
> > In qemu 2.12 this behaviour changed:
> >
> >    $ nbdkit --filter=log memory size=6G logfile=/tmp/log \
> >        --run './qemu-img convert ./fedora-28.img -n $nbd'
> >    $ grep '\.\.\.$' /tmp/log | sed 's/.*\([A-Z][a-z]*\).*/\1/' | uniq -c
> >        193 Zero
> >       1246 Write
> >
> > It now zeroes the whole disk up front and then writes data over the
> > top of the zeroed blocks.
> >
> > The reason for the performance regression is that in the first case we
> > write 6G in total.  In the second case we write 6G of zeroes up front,
> > followed by the amount of data in the disk image (in this case the
> > test disk image contains 1G of non-sparse data, so we write about 7G
> > in total).
>
> There was talk on the NBD list a while ago about the idea of letting the
> server advertise to the client when the image is known to start in an
> all-zero state, so that the client doesn't have to waste time writing
> zeroes (or relying on repeated NBD_CMD_BLOCK_STATUS calls to learn the
> same).  This may be justification for reviving that topic.
>
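To make the proposed optimization concrete: if the server could advertise that the export starts all-zero, a convert-style client would only issue writes for allocated data and skip explicit zeroing of holes. A minimal sketch of that decision logic in Python; the names here (Extent, plan_copy) are illustrative, not qemu or NBD API:

```python
from dataclasses import dataclass

@dataclass
class Extent:
    offset: int
    length: int
    data: bool  # True = allocated data in the source, False = hole/zero

def plan_copy(extents, target_starts_zero):
    """Return the list of (op, offset, length) requests the client issues."""
    ops = []
    for e in extents:
        if e.data:
            ops.append(("write", e.offset, e.length))
        elif not target_starts_zero:
            # Without the all-zero advertisement, the client must zero
            # every hole explicitly to guarantee the target's contents.
            ops.append(("zero", e.offset, e.length))
    return ops

# A 6G disk with 1G of data: with the advertisement, only the write is sent.
src = [Extent(0, 1 << 30, True), Extent(1 << 30, 5 << 30, False)]
print(plan_copy(src, target_starts_zero=True))
print(plan_copy(src, target_starts_zero=False))
```

With the advertisement the 5G zero request disappears entirely; without it, the client sends both, which is exactly the extra traffic measured above.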

This is a good idea in general, since in some cases we know that
a volume is already zeroed (e.g. a new file on NFS/Gluster storage). But with
block storage we typically have no guarantee about the storage contents,
so qemu needs to zero or write the entire device; this does not solve the
issue discussed in this thread.

Nir

