qemu-devel

Re: [PATCH v2] migration: Count new_dirty instead of real_dirty


From: Dr. David Alan Gilbert
Subject: Re: [PATCH v2] migration: Count new_dirty instead of real_dirty
Date: Tue, 16 Jun 2020 10:35:51 +0100
User-agent: Mutt/1.14.0 (2020-05-02)

* Keqian Zhu (zhukeqian1@huawei.com) wrote:
> real_dirty_pages becomes equal to the total ram size after the dirty
> log sync in ram_init_bitmaps. The reason is that the bitmap of the
> ramblock is initialized to all set, so the old path counts every page
> as "real dirty" at the beginning.
> 
> This causes a wrong dirty rate and false-positive throttling at the
> end of the first ram save iteration.
> 
> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>

Since this function already returns num_dirty, why not just change the
caller to increment a counter based on the return value?

Can you point to the code which is using this value that triggers the
throttle?

Dave


> ---
> Changelog:
> 
> v2:
>  - use new_dirty_pages instead of accu_dirty_pages.
>  - adjust commit messages.
> 
> ---
>  include/exec/ram_addr.h | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 7b5c24e928..a95e2e7c25 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -443,7 +443,7 @@ static inline
>  uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>                                                 ram_addr_t start,
>                                                 ram_addr_t length,
> -                                               uint64_t *real_dirty_pages)
> +                                               uint64_t *new_dirty_pages)
>  {
>      ram_addr_t addr;
>      unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
> @@ -469,7 +469,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>              if (src[idx][offset]) {
>                  unsigned long bits = atomic_xchg(&src[idx][offset], 0);
>                  unsigned long new_dirty;
> -                *real_dirty_pages += ctpopl(bits);
>                  new_dirty = ~dest[k];
>                  dest[k] |= bits;
>                  new_dirty &= bits;
> @@ -502,7 +501,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>                          start + addr + offset,
>                          TARGET_PAGE_SIZE,
>                          DIRTY_MEMORY_MIGRATION)) {
> -                *real_dirty_pages += 1;
>                  long k = (start + addr) >> TARGET_PAGE_BITS;
>                  if (!test_and_set_bit(k, dest)) {
>                      num_dirty++;
> @@ -511,6 +509,7 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>          }
>      }
>  
> +    *new_dirty_pages += num_dirty;
>      return num_dirty;
>  }
>  #endif
> -- 
> 2.19.1
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
