From: Peter Xu
Subject: Re: [PATCH] kvm: Take into account the unaligned section size when preparing bitmap
Date: Tue, 8 Dec 2020 10:16:54 -0500

Hi, Zenghui,

On Tue, Dec 08, 2020 at 07:40:13PM +0800, Zenghui Yu wrote:
> The kernel KVM_CLEAR_DIRTY_LOG interface has align requirement on both the
> start and the size of the given range of pages. We have been careful to
> handle the unaligned cases when performing CLEAR on one slot. But it seems
> that we forget to take the unaligned *size* case into account when
> preparing bitmap for the interface, and we may end up clearing dirty status
> for pages outside of [start, start + size).

Thanks for the patch, though my understanding is that this is not a bug.

Please have a look at kvm_memslot_init_dirty_bitmap(), where we allocate the
dirty bitmap aligned to 8 bytes (assuming that's the maximum possible value of
sizeof(unsigned long)).  That exactly covers 64 pages.

So here, as long as start_delta == 0 (in which case the value of
"bmap_npages - size / psize" won't really matter much, imho), KVMSlot.dirty_bmap
is definitely long enough to cover the range we'd like to clear.

Note that the size of KVMSlot.dirty_bmap can be bigger than the actual size of
the kvm memslot; however, since kvm_memslot_init_dirty_bitmap() initializes it
to all zeros, the extra bits will always be zero for the whole lifecycle of the
vm/bitmap.

Thanks,

> 
> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
> ---
>  accel/kvm/kvm-all.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index bed2455ca5..05d323ba1f 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -747,7 +747,7 @@ static int kvm_log_clear_one_slot(KVMSlot *mem, int as_id, uint64_t start,
>      assert(bmap_start % BITS_PER_LONG == 0);
>      /* We should never do log_clear before log_sync */
>      assert(mem->dirty_bmap);
> -    if (start_delta) {
> +    if (start_delta || bmap_npages - size / psize) {
>          /* Slow path - we need to manipulate a temp bitmap */
>          bmap_clear = bitmap_new(bmap_npages);
>          bitmap_copy_with_src_offset(bmap_clear, mem->dirty_bmap,
> @@ -760,7 +760,10 @@ static int kvm_log_clear_one_slot(KVMSlot *mem, int as_id, uint64_t start,
>          bitmap_clear(bmap_clear, 0, start_delta);
>          d.dirty_bitmap = bmap_clear;
>      } else {
> -        /* Fast path - start address aligns well with BITS_PER_LONG */
> +        /*
> +         * Fast path - both start and size align well with BITS_PER_LONG
> +         * (or the end of memory slot)
> +         */
>          d.dirty_bitmap = mem->dirty_bmap + BIT_WORD(bmap_start);
>      }
>  
> -- 
> 2.19.1
> 
> 

-- 
Peter Xu