From: Kevin Wolf
Subject: Re: [PATCH] qcow2: Reduce write_zeroes size in handle_alloc_space()
Date: Tue, 9 Jun 2020 17:18:10 +0200

On 09.06.2020 16:46, Eric Blake wrote:
> On 6/9/20 9:28 AM, Vladimir Sementsov-Ogievskiy wrote:
> > 09.06.2020 17:08, Kevin Wolf wrote:
> > > Since commit c8bb23cbdbe, handle_alloc_space() is called for newly
> > > allocated clusters to efficiently initialise the COW areas with zeros if
> > > necessary. It skips the whole operation if both start_cow and end_cow
> > > are empty. However, it requests zeroing the whole request size (possibly
> > > multiple megabytes) even if only one end of the request actually needs
> > > this.
> > > 
> > > This patch reduces the write_zeroes request size in this case so that we
> > > don't unnecessarily zero-initialise a region that we're going to
> > > overwrite immediately.
> > > 
> 
> > 
> > Hmm, I'm afraid that this may make things worse in some cases: with
> > one big write-zero request we preallocate the data region in the
> > protocol file, so we get better locality for the clusters we are
> > going to write. And at the same time, with the BDRV_REQ_NO_FALLBACK
> > flag, a write-zero request must be fast anyway (especially in
> > comparison with the following write request).
> > 
> > >           /*
> > >            * instead of writing zero COW buffers,
> > >            * efficiently zero out the whole clusters
> > >            */
> > > -        ret = qcow2_pre_write_overlap_check(bs, 0, m->alloc_offset,
> > > -                                            m->nb_clusters * s->cluster_size,
> > > -                                            true);
> > > +        ret = qcow2_pre_write_overlap_check(bs, 0, start, len, true);
> > >           if (ret < 0) {
> > >               return ret;
> > >           }
> > >           BLKDBG_EVENT(bs->file, BLKDBG_CLUSTER_ALLOC_SPACE);
> > > -        ret = bdrv_co_pwrite_zeroes(s->data_file, m->alloc_offset,
> > > -                                    m->nb_clusters * s->cluster_size,
> > > +        ret = bdrv_co_pwrite_zeroes(s->data_file, start, len,
> > >                                      BDRV_REQ_NO_FALLBACK);
> 
> Good point.  If we weren't using BDRV_REQ_NO_FALLBACK, then avoiding a
> pre-zero pass over the middle is essential.  But since we are insisting that
> the pre-zero pass be fast or else immediately fail, the time spent in
> pre-zeroing should not be a concern.  Do you have benchmark numbers stating
> otherwise?
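
(For reference, the start/len used in the new code above are derived from
the COW areas at either end of the allocation. Below is a minimal
standalone sketch of the idea; the function name, variable names and the
rounding logic are hypothetical, not the actual patch code.)

    #include <stdint.h>

    /*
     * Illustrative sketch (hypothetical names, not the actual patch):
     * given a cluster-aligned allocation [alloc_offset, alloc_offset +
     * alloc_len) and a guest write [write_offset, write_offset +
     * write_len) inside it, zero only the cluster(s) that actually
     * contain a COW area instead of the whole allocation.
     */
    static void reduced_zero_range(uint64_t alloc_offset, uint64_t alloc_len,
                                   uint64_t write_offset, uint64_t write_len,
                                   uint64_t cluster_size,
                                   uint64_t *start, uint64_t *len)
    {
        uint64_t write_end = write_offset + write_len;
        uint64_t alloc_end = alloc_offset + alloc_len;

        /* Default: zero the whole allocated region (the old behaviour) */
        *start = alloc_offset;
        *len = alloc_len;

        if (write_offset == alloc_offset && write_end < alloc_end) {
            /* No COW at the head: zero only from the start of the
             * cluster containing the end of the write */
            *start = write_end / cluster_size * cluster_size;
            *len = alloc_end - *start;
        } else if (write_end == alloc_end && write_offset > alloc_offset) {
            /* No COW at the tail: zero only the head cluster(s), up to
             * the cluster boundary at or above the write start */
            *len = (write_offset - alloc_offset + cluster_size - 1)
                   / cluster_size * cluster_size;
        }
    }

When both ends have a COW area, the whole range is still zeroed, as
before.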

I stumbled across this behaviour (write_zeroes for 2 MB, then overwrite
almost everything) in the context of a different bug, and it just didn't
make much sense to me. Is there really a file system where fragmentation
is introduced by not zeroing the area first and then overwriting it?

I'm not insisting on making this change because the behaviour is
harmless if odd, but if we think that writing twice to some blocks is an
optimisation, maybe we should actually measure and document this.


Anyway, let's talk about the reported bug that made me look at the
strace that showed this behaviour, because I feel it supports my last
point. It's a bit messy, but here it is:

    https://bugzilla.redhat.com/show_bug.cgi?id=1666864

So initially, bad performance on a fragmented image file was reported.
Not much to do there, but then in comment 16, QA reported a performance
regression in this case between 4.0 and 4.2, and this regression was
caused by c8bb23cbdbe, i.e. the commit that introduced
handle_alloc_space().

Turns out that BDRV_REQ_NO_FALLBACK doesn't always guarantee that it's
_really_ fast: fallocate(FALLOC_FL_ZERO_RANGE) causes some kind of flush
on XFS, while buffered writes don't. So with the old code, qemu-img
convert to a file on a very full filesystem (which causes fragmentation)
was much faster with writing a zero buffer than with write_zeroes,
because it didn't flush the result.

I don't fully understand why this is and hope that XFS can do something
about it. I also don't really think we should revert the change in QEMU,
though I'm not completely sure. But I just wanted to share this to show
that "obvious" characteristics of certain types of requests aren't
always true, and that doing obscure optimisations based on what we think
filesystems may do can actually achieve the opposite in some cases.

Kevin



