qemu-devel

Re: [Qemu-devel] [Qemu-ppc] [PATCH] kvm: fix incorrect length in a loop


From: Alexander Graf
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH] kvm: fix incorrect length in a loop over kvm dirty pages map
Date: Tue, 20 Nov 2012 10:06:02 +0100

On 20.11.2012, at 02:40, Alexey Kardashevskiy wrote:

> QEMU allocates a dirty bitmap with enough room for 4K pages. However, the
> host page size can be 64K (for example on POWER), in which case the host
> kernel uses only a small part of the bitmap: one bit stores the dirty flag
> for a whole 64K host page, i.e. for 16 target pages of 4K each. The
> hpratio variable stores this ratio, and the kvm_get_dirty_pages_log_range
> function handles it correctly.
> 
> However, kvm_get_dirty_pages_log_range still iterates beyond the data
> provided by the host kernel, which is not correct. It does not cause
> errors at the moment because the whole bitmap is zeroed before the KVM
> ioctl is made.
> 
> The patch reduces the number of iterations over the map accordingly.
> 
> Signed-off-by: Alexey Kardashevskiy <address@hidden>

While at it, could you please also double-check whether the coalesced mmio code 
does the right thing? It also uses TARGET_PAGE_SIZE, which looks bogus to me. 
Since we don't support coalesced mmio (yet), it's not too big of a deal, but 
it'd be nice to get right.

Thanks, applied to ppc-next.


Alex

> ---
> kvm-all.c |    2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kvm-all.c b/kvm-all.c
> index b6d0483..c99997f 100644
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -364,7 +364,7 @@ static int kvm_get_dirty_pages_log_range(MemoryRegionSection *section,
>     unsigned int i, j;
>     unsigned long page_number, c;
>     hwaddr addr, addr1;
> -    unsigned int len = ((section->size / TARGET_PAGE_SIZE) + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
> +    unsigned int len = ((section->size / getpagesize()) + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
>     unsigned long hpratio = getpagesize() / TARGET_PAGE_SIZE;
> 
>     /*
> -- 
> 1.7.10.4
> 
> 
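For reference, the size difference is easy to check standalone. Below is a
minimal C sketch of the same arithmetic, with purely illustrative constants
(it assumes a 64K host page size, 4K target pages, a 64-bit host long, and a
16MB memory section; it is not QEMU code):

    #include <stdio.h>

    int main(void)
    {
        unsigned long section_size     = 1UL << 24; /* 16MB section (illustrative) */
        unsigned long target_page_size = 4096;      /* stands in for TARGET_PAGE_SIZE */
        unsigned long host_page_size   = 65536;     /* stands in for getpagesize() on a 64K host */
        unsigned long host_long_bits   = 64;        /* stands in for HOST_LONG_BITS */

        /* ratio of host page to target page, as in the patch */
        unsigned long hpratio = host_page_size / target_page_size;

        /* old length: one bit per 4K target page */
        unsigned long old_len = ((section_size / target_page_size)
                                 + host_long_bits - 1) / host_long_bits;

        /* new length: one bit per 64K host page, matching what the kernel fills */
        unsigned long new_len = ((section_size / host_page_size)
                                 + host_long_bits - 1) / host_long_bits;

        /* prints: hpratio=16 old_len=64 new_len=4 */
        printf("hpratio=%lu old_len=%lu new_len=%lu\n", hpratio, old_len, new_len);
        return 0;
    }

With these numbers the old loop bound is hpratio times (16x) larger than the
portion of the bitmap the kernel actually fills, which is the overrun the
patch removes.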



