On 15.03.21 15:40, Vladimir Sementsov-Ogievskiy wrote:
> 15.03.2021 12:58, Max Reitz wrote:
>> [...]
>> The question is whether it really makes sense to even have a
>> seqcache_read() path when in reality it’s probably never accessed.
>> I mean, besides the fact that it seems based purely on chance
>> whether a read might fetch something from the cache even while we’re
>> writing, in practice I don’t know any case where we’d write to and
>> read from a compressed qcow2 image at the same time. (I don’t know
>> what you’re doing with the 'compress' filter, though.)
> Note that for the user this is not a parallel write and read of the
> same cluster:
>
> 1. The user writes cluster A; the request succeeds and the data is in
>    the cache.
> 2. The user writes some other clusters; the cache fills up and a flush
>    starts.
> 3. In parallel to [2], the user reads cluster A. From the user's point
>    of view, cluster A has already been written and should be readable.
Yes, but when would that happen?
> And seqcache_read() gives us a simple non-blocking way to support the
> read operation.
OK, that makes sense. We’d need to flush the cache before we can read
anything from the disk, so we should have a read-from-cache branch here.
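To make the intent concrete, here is a minimal sketch of such a read-from-cache branch. This is a toy, not the actual seqcache API from the series: a single append-only buffer over sequentially written data, where seqcache_read() reports a hit for still-unflushed bytes and a miss (so the caller falls through to the disk read) for everything else. All names and the fixed-capacity layout are assumptions for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of a sequential write cache (NOT the real
 * seqcache implementation): written data stays in buf[] until the
 * caller flushes it; reads check the cache first and fall through
 * to the backing file on a miss. */

#define CACHE_CAP 4096

typedef struct SeqCache {
    int64_t start;            /* guest offset of the first cached byte */
    size_t  len;              /* bytes currently cached (unflushed)    */
    uint8_t buf[CACHE_CAP];
} SeqCache;

/* Append bytes at the current tail.  Returns 0 on success, -1 if the
 * write is non-sequential or the cache is full (caller must flush). */
static int seqcache_write(SeqCache *c, int64_t offset,
                          const uint8_t *data, size_t n)
{
    if (c->len == 0) {
        c->start = offset;
    }
    if (offset != c->start + (int64_t)c->len || c->len + n > CACHE_CAP) {
        return -1;
    }
    memcpy(c->buf + c->len, data, n);
    c->len += n;
    return 0;
}

/* Copy cached bytes covering [offset, offset + n) into dst.
 * Returns 1 on a full hit, 0 on a miss (caller reads from disk). */
static int seqcache_read(SeqCache *c, int64_t offset, uint8_t *dst, size_t n)
{
    if (offset < c->start ||
        offset + (int64_t)n > c->start + (int64_t)c->len) {
        return 0;
    }
    memcpy(dst, c->buf + (offset - c->start), n);
    return 1;
}
```

This matches the scenario above: a cluster written in step [1] is served from buf[] in step [3] even while a flush of other clusters is in flight, and anything not in the unflushed tail simply misses and goes to the image file.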
> But rewriting compressed clusters only makes sense when we run a real
> guest on a compressed image... Can that be helpful? Maybe for
> scenarios with a low disk usage ratio...
I’m not sure, but the point is that rewrites are currently not
supported. The whole compression implementation is mainly tailored
towards just writing a complete image (e.g. by qemu-img convert or the
backup job), so that’s where my question is coming from: It’s
difficult for me to see a currently working use case where you’d read
from and write to a compressed image at the same time.