On Fri, Jun 12, 2015 at 12:58:35PM +0300, Denis V. Lunev wrote:
> On 11/06/15 23:06, Stefan Hajnoczi wrote:
>> The load/store API is not scalable when bitmaps are 1 MB or larger.
>> For example, a 500 GB disk image with 64 KB granularity requires a 1 MB
>> bitmap. If a guest has several disk images of this size, then multiple
>> megabytes must be read to start the guest and written out to shut down
>> the guest.
>>
>> By comparison, the L1 table for the 500 GB disk image is less than 8 KB.
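
To put numbers on that (rough back-of-the-envelope arithmetic, treating GB
as GiB, 1 bit per 64 KiB cluster, standard 64 KiB qcow2 clusters):

  500 GiB / 64 KiB  = 8,192,000 clusters -> 8,192,000 bits ~= 1 MiB of bitmap
  one L2 table maps 8192 entries * 64 KiB = 512 MiB
  500 GiB / 512 MiB = 1,000 L1 entries * 8 bytes ~= 8 KB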
>> I think something like qcow2-cache.c or metabitmaps should be used to
>> lazily read/write persistent bitmaps. That way only small portions need
>> to be read/written at a time.
>>
>> Stefan
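
To make "lazily read/write" concrete, something along these lines could
work (purely a sketch, all names are made up; a real implementation could
reuse the caching/eviction machinery in qcow2-cache.c):

/* Sketch only: the stored bitmap is split into fixed-size chunks and a
 * chunk is read from the image the first time a request touches it. */
typedef struct LazyBitmap {
    BlockDriverState *bs;        /* image holding the stored bitmap */
    uint64_t chunk_size;         /* bytes of bitmap data per chunk */
    uint64_t nb_chunks;
    uint64_t *chunk_offsets;     /* on-disk offset of each chunk */
    uint8_t **chunks;            /* NULL until the chunk is loaded */
    unsigned long *dirty_chunks; /* loaded chunks that need write-back */
} LazyBitmap;

static int lazy_bitmap_get_chunk(LazyBitmap *lb, uint64_t idx, uint8_t **p)
{
    int ret;

    if (!lb->chunks[idx]) {
        /* First access: read just this chunk instead of the whole bitmap */
        lb->chunks[idx] = g_malloc(lb->chunk_size);
        ret = bdrv_pread(lb->bs, lb->chunk_offsets[idx],
                         lb->chunks[idx], lb->chunk_size);
        if (ret < 0) {
            g_free(lb->chunks[idx]);
            lb->chunks[idx] = NULL;
            return ret;
        }
    }
    *p = lb->chunks[idx];
    return 0;
}

Write-back would be the mirror image: only chunks marked in dirty_chunks
get flushed, so startup and shutdown touch a few KB instead of megabytes.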
> For the first iteration we could open the image, start tracking, read the
> bitmap as one entity in the background, and OR the read data with the
> data collected while tracking. A partial read could be done in the next
> step.
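
If I follow, the first iteration would look roughly like this (just a
sketch with made-up names; the real code would go through the dirty bitmap
API):

/* Background loader: OR each stored chunk into the bitmap that has been
 * tracking writes since the image was opened.  Bits are only ever set,
 * never cleared, so bits recorded by in-flight writes are preserved. */
static void merge_stored_chunk(BdrvDirtyBitmap *live, const uint8_t *chunk,
                               uint64_t first_bit, uint64_t nb_bits)
{
    uint64_t i;

    for (i = 0; i < nb_bits; i++) {
        if (chunk[i / 8] & (1u << (i % 8))) {
            live_bitmap_set_bit(live, first_bit + i);   /* made-up helper */
        }
    }
}

Because the merge only sets bits, it is safe to run while the guest is
already writing.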
Making bitmap load/store fully lazy will require changes to the
load/store API, so it's worth thinking about a little upfront.
Otherwise there will be a lot of code churn when the fully lazy patches
are posted. As a reviewer it's in my interest to only spend time
reviewing the final version instead of code that gets thrown out :-),
but I understand.
If you can make the read lazy to some extent, that's a good start.
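
A range-granular load/store interface might already be enough for that,
something along these lines (hypothetical, nothing like this exists today):

typedef struct BitmapStoreOps {
    /* Read nb_bits of stored bitmap data starting at first_bit into buf */
    int (*load_range)(void *opaque, uint64_t first_bit, uint64_t nb_bits,
                      uint8_t *buf);
    /* Write the same range back; only called for ranges that changed */
    int (*store_range)(void *opaque, uint64_t first_bit, uint64_t nb_bits,
                       const uint8_t *buf);
} BitmapStoreOps;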