Re: [Qemu-block] [PATCH] file-posix: Cache lseek result for data regions

From: Eric Blake
Subject: Re: [Qemu-block] [PATCH] file-posix: Cache lseek result for data regions
Date: Tue, 29 Jan 2019 15:03:07 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0

On 1/29/19 4:56 AM, Kevin Wolf wrote:

>> gluster copies heavily from file-posix's implementation; should it also
>> copy this cache of known-data?  Should NBD also cache known-data when
>> NBD_CMD_BLOCK_STATUS is available?
> This almost suggests that we should do the caching in generic block
> layer code.
> It would require that we can return a *pnum from the block driver that
> is larger than the requested bytes, but it looks like
> raw_co_block_status() already handles this? We just don't seem to do
> this yet in the block drivers.

The code in io.c bdrv_co_block_status() currently does one final clamp
to limit the answer to the caller's maximum request, but I don't know
whether any drivers actually take advantage of passing back a larger
value. I _do_ know that the NBD protocol took pains to permit
NBD_CMD_BLOCK_STATUS to return a value beyond the caller's request when
that information is easily obtained, precisely so that the caller can
cache the knowledge of a data section extending beyond the current
query's area of interest and thereby minimize the need for future block
status calls.

> If we want to cache for all drivers, however, the question is whether
> there are drivers that can transition a block from data to hole without
> a discard operation, so that we would have to invalidate the cache in
> more places. One thing that comes to mind is loading an internal
> snapshot for qcow2.

Oh, good point - switching to a different L1 table (due to loading an
internal snapshot) can indeed make a hole appear that used to read as
data, so if the block layer caches data ranges, it also needs to provide
a hook for drivers to invalidate the cache when doing unusual actions.
Still, I can't think of any place where a hole spontaneously appears
unless a specific driver action is taken (so the driver should have the
opportunity to invalidate the cache during that action), or unless the
image is in active use by more than just the qemu process.  And if the
driver knows that an image might be shared with external processes
modifying it, then yes, maybe having a way to opt out of caching
altogether is also appropriate.

> Or maybe we need to make this opt-in for drivers, with a bool flag in
> BlockDriver?
> Kevin

Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org

