
From: Richard Laager
Subject: Re: [Qemu-devel] [RFC PATCH 06/17] block: use bdrv_{co, aio}_discard for write_zeroes operations
Date: Tue, 13 Mar 2012 14:13:10 -0500

On Mon, 2012-03-12 at 10:34 +0100, Paolo Bonzini wrote:
> > To be completely correct, I suggest the following behavior:
> >      1. Add a discard boolean option to the disk layer.
> >      2. If discard is not specified:
> >               * For files, detect a true/false value by comparing
> >                 stat.st_blocks != stat.st_size>>9.
> >               * For devices, assume a fixed value (true?).
> >      3. If discard is true, issue discards.
> >      4. If discard is false, do not issue discards.
> 
> The problem is, who will use this interface?

I'm a libvirt and virt-manager user; virt-manager already differentiates
between thin and thick provisioning. So I'm envisioning passing that
information to libvirt, which would save it in a config file and use
that to set discard=true vs. discard=false when using QEMU.

On Mon, 2012-03-12 at 13:27 +0100, Paolo Bonzini wrote:
> Il 10/03/2012 19:02, Richard Laager ha scritto:
> > I propose adding the following behaviors in any event:
> >       * If a QEMU block device reports a discard_granularity > 0, it
> >         must be equal to 2^n (n >= 0), or QEMU's block core will change
> >         it to 0. (Non-power-of-two granularities are not likely to exist
> >         in the real world, and this assumption greatly simplifies
> >         ensuring correctness.)
> 
> Yeah, I was considering this to be simply a bug in the block device.
> 
> >       * For SCSI, report an unmap_granularity to the guest as follows:
> >       max(logical_block_size, discard_granularity) / logical_block_size
> 
> This is more or less already in place later in the series.

I didn't see it. Which patch number?
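As an aside, the two rules quoted above (the power-of-two constraint on discard_granularity and the derived SCSI unmap granularity) reduce to a couple of one-liners; the helper names below are illustrative, not QEMU's:

```c
/* Illustrative helpers for the two granularity rules quoted above. */
#include <stdbool.h>
#include <stdint.h>

/* 2^n check: a power of two has exactly one bit set. */
static bool is_power_of_two(uint32_t g)
{
    return g != 0 && (g & (g - 1)) == 0;
}

/* max(logical_block_size, discard_granularity) / logical_block_size,
 * i.e. the unmap granularity reported to the guest, in blocks. */
static uint32_t scsi_unmap_granularity(uint32_t logical_block_size,
                                       uint32_t discard_granularity)
{
    uint32_t g = discard_granularity > logical_block_size
               ? discard_granularity : logical_block_size;
    return g / logical_block_size;
}
```

For example, with 512-byte logical blocks and a 4096-byte discard granularity, the guest would see an unmap granularity of 8 blocks.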

> > Note, I'm assuming fallocate() actually
> > guarantees that it zeros the data when punching holes.
> 
> It does, that's pretty much the definition of a hole.

Agreed. I verified this fact after sending that email. At the time, I
just wanted to be very clear on what I knew for sure vs. what I had not
yet verified.

> If you have a new kernel that supports SEEK_HOLE/SEEK_DATA, it can also
> be done by skipping the zero write on known holes.
> 
> This could even be done at the block layer level using bdrv_is_allocated.

Would we want to make all write_zeroes operations check for and skip
holes, or is write_zeroes different from discard in that it SHOULD/MUST
allocate space?
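For reference, the SEEK_HOLE/SEEK_DATA check you describe might look roughly like this (range_is_hole is a hypothetical helper, not bdrv_is_allocated; Linux >= 3.1 with a supporting filesystem is assumed):

```c
/* Sketch: decide whether [offset, offset+len) lies entirely inside a
 * hole, so a zero write over it could be skipped. Illustrative only. */
#define _GNU_SOURCE  /* for SEEK_DATA/SEEK_HOLE on glibc */
#include <unistd.h>
#include <stdbool.h>
#include <sys/types.h>

static bool range_is_hole(int fd, off_t offset, off_t len)
{
    /* Find the first data byte at or after offset. */
    off_t data = lseek(fd, offset, SEEK_DATA);

    if (data < 0) {
        /* ENXIO (offset past EOF), SEEK_DATA unsupported, or bad fd:
         * be conservative and do the write. */
        return false;
    }
    /* If the next data starts beyond our range, the range is a hole. */
    return data >= offset + len;
}
```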

> > If we could probe for FALLOC_FL_PUNCH_HOLE support, then we could avoid
> > advertising discard support based on FALLOC_FL_PUNCH_HOLE when it is not
> > going to work. This would side step these problems. 
> 
> ... and introduce others when migrating if your datacenter doesn't have
> homogeneous kernel versions and/or filesystems. :(

I hadn't thought of the migration issues. Thanks for bringing that up.
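For illustration, a probe along the lines I suggested might look like this (probe_punch_hole is a hypothetical name, Linux-specific). Note that probing a temp file tests /tmp's filesystem rather than the image file's, which is itself an instance of the heterogeneity problem you raise:

```c
/* Sketch: probe whether the kernel/filesystem accepts
 * FALLOC_FL_PUNCH_HOLE, using a throwaway temp file so no image data
 * is harmed. Illustrative only; not the patch series' code. */
#define _GNU_SOURCE  /* for fallocate() and FALLOC_FL_* */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <stdbool.h>

static bool probe_punch_hole(void)
{
    char tmpl[] = "/tmp/punch-probe-XXXXXX";
    int fd = mkstemp(tmpl);
    bool ok = false;

    if (fd < 0) {
        return false;
    }
    unlink(tmpl);  /* file vanishes when fd is closed */

    /* KEEP_SIZE is mandatory with PUNCH_HOLE; punch a small hole in
     * a freshly truncated file and see if the kernel refuses. */
    if (ftruncate(fd, 65536) == 0) {
        ok = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                       0, 4096) == 0;
    }
    close(fd);
    return ok;
}
```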

Worst case, you end up doing a bunch of zero writing if and only if you
migrate from a discard_zeros_data host to one that doesn't zero (or
doesn't support discard at all). But this only lasts until the guest
reboots (assuming we also add re-probing on guest reboot; otherwise,
until it shuts down). As far as I can see, this is unavoidable, though.
And it's no worse than writing zeros ALL of the time that fallocate()
fails, which is the behavior of your patch series, right?

This might be another use case for a discard option on the disk. If some,
but not all, of one's hosts support discard, a system administrator might
want to set discard=false to avoid this.

> Do you know if non-Linux operating systems have something similar to
> BLKDISCARDZEROES?

As far as I know, no. The SunOS one is only on Illumos (the open-source
kernel forked from the now-dead OpenSolaris) and only implemented for
ZFS zvols. So currently, it's roughly equivalent to fallocate() on Linux
in that it's happening at the filesystem level. (It doesn't actually
reach the platters yet. But even if it did, that's unrelated to the
guarantees provided by ZFS.) Thus, it always zeros, so we could set
discard_zeros_data = 1 unconditionally there. I should probably run that
by the Illumos developers, though, to ensure they're comfortable with
that ioctl() guaranteeing zeroing.

I haven't looked into the FreeBSD one as much yet. Worst case, we
unconditionally set discard_zeros_data = 0.

-- 
Richard
