Re: [PATCH v2 2/6] block: block-status cache for data regions
From: Kevin Wolf
Subject: Re: [PATCH v2 2/6] block: block-status cache for data regions
Date: Tue, 6 Jul 2021 19:04:07 +0200
On 23.06.2021 at 17:01, Max Reitz wrote:
> As we have attempted before
> (https://lists.gnu.org/archive/html/qemu-devel/2019-01/msg06451.html,
> "file-posix: Cache lseek result for data regions";
> https://lists.nongnu.org/archive/html/qemu-block/2021-02/msg00934.html,
> "file-posix: Cache next hole"), this patch seeks to reduce the number of
> SEEK_DATA/HOLE operations the file-posix driver has to perform. The
> main difference is that this time it is implemented as part of the
> general block layer code.
>
> The problem we face is that on some filesystems or in some
> circumstances, SEEK_DATA/HOLE is unreasonably slow. Given the
> implementation is outside of qemu, there is little we can do about its
> performance.
>
> We have already introduced the want_zero parameter to
> bdrv_co_block_status() to reduce the number of SEEK_DATA/HOLE calls
> unless we really want zero information; but sometimes we do want that
> information, because for files that consist largely of zero areas,
> special-casing those areas can give large performance boosts. So the
> real problem is with files that consist largely of data, so that
> inquiring the block status does not gain us much performance, but where
> such an inquiry itself takes a lot of time.
>
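(As a quick illustration of how want_zero enters the picture, here is a
sketch of a hypothetical driver honoring the parameter; the function name
is made up and this is not actual qemu code, but the callback signature
matches the one used in the diff below:)

    static int coroutine_fn sketch_co_block_status(BlockDriverState *bs,
                                                   bool want_zero,
                                                   int64_t offset, int64_t bytes,
                                                   int64_t *pnum, int64_t *map,
                                                   BlockDriverState **file)
    {
        if (!want_zero) {
            /* The caller does not care about zero information, so skip
             * the potentially slow SEEK_DATA/SEEK_HOLE queries and just
             * report everything as data. */
            *pnum = bytes;
            *map = offset;
            *file = bs;
            return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
        }
        /* ... otherwise, perform the expensive lseek() queries ... */
    }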
> To address this, we want to cache data regions. Most of the time, when
> bad performance is reported, it is in places where the image is iterated
> over from start to end (qemu-img convert or the mirror job), so a simple
> yet effective solution is to cache only the current data region.
>
> (Note that only caching data regions but not zero regions means that
> returning false information from the cache is not catastrophic: Treating
> zeroes as data is fine. While we try to invalidate the cache on zero
> writes and discards, such incongruences may still occur when there are
> other processes writing to the image.)
>
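(To make the shape of that cache concrete: conceptually it is just one
remembered data region per node. A minimal sketch, with illustrative
struct and field names that need not match what the patch actually uses:)

    typedef struct BlockStatusCacheSketch {
        bool valid;          /* do we have a cached data region at all? */
        int64_t data_start;  /* inclusive start of the cached region */
        int64_t data_end;    /* exclusive end of the cached region */
    } BlockStatusCacheSketch;

    /* A query hits the cache iff the offset lies within the region: */
    static bool sketch_covers(const BlockStatusCacheSketch *c,
                              int64_t offset, int64_t *pnum)
    {
        if (c->valid && offset >= c->data_start && offset < c->data_end) {
            *pnum = c->data_end - offset; /* bytes of cached data left */
            return true;
        }
        return false;
    }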
> We only use the cache for nodes without children (i.e. protocol nodes),
> because that is where the problem is: Drivers that rely on block-status
> implementations outside of qemu (e.g. SEEK_DATA/HOLE).
>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/307
> Signed-off-by: Max Reitz <mreitz@redhat.com>
Since you indicated that you'll respin the patch, I'll add my minor
comments:
> @@ -2442,9 +2445,58 @@ static int coroutine_fn bdrv_co_block_status(BlockDriverState *bs,
>          aligned_bytes = ROUND_UP(offset + bytes, align) - aligned_offset;
>  
>      if (bs->drv->bdrv_co_block_status) {
> -        ret = bs->drv->bdrv_co_block_status(bs, want_zero, aligned_offset,
> -                                            aligned_bytes, pnum, &local_map,
> -                                            &local_file);
> +        bool from_cache = false;
> +
> +        /*
> +         * Use the block-status cache only for protocol nodes: Format
> +         * drivers are generally quick to inquire the status, but protocol
> +         * drivers often need to get information from outside of qemu, so
> +         * we do not have control over the actual implementation. There
> +         * have been cases where inquiring the status took an unreasonably
> +         * long time, and we can do nothing in qemu to fix it.
> +         * This is especially problematic for images with large data areas,
> +         * because finding the few holes in them and giving them special
> +         * treatment does not gain much performance. Therefore, we try to
> +         * cache the last-identified data region.
> +         *
> +         * Second, limiting ourselves to protocol nodes allows us to assume
> +         * the block status for data regions to be DATA | OFFSET_VALID, and
> +         * that the host offset is the same as the guest offset.
> +         *
> +         * Note that it is possible that external writers zero parts of
> +         * the cached regions without the cache being invalidated, and so
> +         * we may report zeroes as data. This is not catastrophic,
> +         * however, because reporting zeroes as data is fine.
> +         */
> +        if (QLIST_EMPTY(&bs->children)) {
> +            if (bdrv_bsc_is_data(bs, aligned_offset, pnum)) {
> +                ret = BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
> +                local_file = bs;
> +                local_map = aligned_offset;
> +
> +                from_cache = true;
> +            }
> +        }
> +
> +        if (!from_cache) {
Is having a separate variable from_cache really useful? This looks like
it could just be:
    if (QLIST_EMPTY(&bs->children) &&
        bdrv_bsc_is_data(bs, aligned_offset, pnum))
    {
        // The cache-hit code above
    } else {
        // The code below
    }
> +            ret = bs->drv->bdrv_co_block_status(bs, want_zero, aligned_offset,
> +                                                aligned_bytes, pnum, &local_map,
> +                                                &local_file);
> +
> +            /*
> +             * Note that checking QLIST_EMPTY(&bs->children) is also done
> +             * when the cache is queried above. Technically, we do not need
> +             * to check it here; the worst that can happen is that we fill
> +             * the cache for non-protocol nodes, and then it is never used.
> +             * However, filling the cache requires an RCU update, so double
> +             * check here to avoid such an update if possible.
> +             */
> +            if (ret == (BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID) &&
> +                QLIST_EMPTY(&bs->children))
> +            {
Would it be worth asserting that local_map == aligned_offset, because
otherwise with a buggy protocol driver, the result from the cache could
be different from the first call without us noticing?
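(Concretely, I am thinking of something along these lines right before the
fill; just a sketch of the idea, not tested:)

    /* The cache replays DATA | OFFSET_VALID with local_map == aligned_offset
     * and local_file == bs, so a driver that reported a different mapping
     * here would silently change results once the cache kicks in: */
    assert(local_map == aligned_offset);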
> +                bdrv_bsc_fill(bs, aligned_offset, *pnum);
> +            }
> +        }
>      } else {
>          /* Default code for filters */
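(As an aside, for anyone wondering what the "RCU update" in the comment
amounts to: I imagine bdrv_bsc_fill() doing roughly the following. This is
a sketch only; the struct layout and the field names on BlockDriverState
are assumptions, not necessarily what the patch uses:)

    /* Assumed supporting definitions:
     *   typedef struct BdrvBlockStatusCache {
     *       struct rcu_head rcu;
     *       bool valid;
     *       int64_t data_start;
     *       int64_t data_end;
     *   } BdrvBlockStatusCache;
     * plus a bs->block_status_cache pointer and a bs->bsc_modify_lock
     * mutex serializing writers.
     */
    void bdrv_bsc_fill(BlockDriverState *bs, int64_t offset, int64_t bytes)
    {
        BdrvBlockStatusCache *new_bsc = g_new(BdrvBlockStatusCache, 1);
        BdrvBlockStatusCache *old_bsc;

        *new_bsc = (BdrvBlockStatusCache) {
            .valid      = true,
            .data_start = offset,
            .data_end   = offset + bytes,
        };

        /* Readers only take the RCU read lock; writers swap in the new
         * object and defer freeing the old one until readers are done. */
        QEMU_LOCK_GUARD(&bs->bsc_modify_lock);
        old_bsc = qatomic_rcu_read(&bs->block_status_cache);
        qatomic_rcu_set(&bs->block_status_cache, new_bsc);
        if (old_bsc) {
            g_free_rcu(old_bsc, rcu);
        }
    }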
Kevin