
Re: [Qemu-devel] [PATCH 2/4] migration: set dirty_pages_rate before autoconverge logic


From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH 2/4] migration: set dirty_pages_rate before autoconverge logic
Date: Thu, 25 May 2017 08:40:07 +0800
User-agent: Mutt/1.5.24 (2015-08-30)

On Wed, May 24, 2017 at 05:10:01PM +0100, Felipe Franciosi wrote:
> Currently, a "period" in the RAM migration logic is at least a second
> long and accounts for what happened since the last period (or the
> beginning of the migration). The dirty_pages_rate counter is calculated
> at the end of this logic.
> 
> If the auto convergence capability is enabled from the start of the
> migration, it won't be able to use this counter the first time around.
> This calculates dirty_pages_rate as soon as a period is deemed over,
> which allows for it to be used immediately.
> 
> Signed-off-by: Felipe Franciosi <address@hidden>

You fixed the indents as well, but imho it's okay.

Reviewed-by: Peter Xu <address@hidden>

> ---
>  migration/ram.c | 17 ++++++++++-------
>  1 file changed, 10 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 36bf720..495ecbe 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -694,6 +694,10 @@ static void migration_bitmap_sync(RAMState *rs)
>  
>      /* more than 1 second = 1000 millisecons */
>      if (end_time > rs->time_last_bitmap_sync + 1000) {
> +        /* calculate period counters */
> +        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
> +            / (end_time - rs->time_last_bitmap_sync);
> +
>          if (migrate_auto_converge()) {
>              /* The following detection logic can be refined later. For now:
>                 Check to see if the dirtied bytes is 50% more than the approx.
> @@ -702,15 +706,14 @@ static void migration_bitmap_sync(RAMState *rs)
>                 throttling */
>              bytes_xfer_now = ram_bytes_transferred();
>  
> -            if (rs->dirty_pages_rate &&
> -               (rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
> +            if ((rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
>                     (bytes_xfer_now - rs->bytes_xfer_prev) / 2) &&
> -               (rs->dirty_rate_high_cnt++ >= 2)) {
> +                (rs->dirty_rate_high_cnt++ >= 2)) {
>                      trace_migration_throttle();
>                      rs->dirty_rate_high_cnt = 0;
>                      mig_throttle_guest_down();
> -             }
> -             rs->bytes_xfer_prev = bytes_xfer_now;
> +            }
> +            rs->bytes_xfer_prev = bytes_xfer_now;
>          }
>  
>          if (migrate_use_xbzrle()) {
> @@ -723,8 +726,8 @@ static void migration_bitmap_sync(RAMState *rs)
>              rs->iterations_prev = rs->iterations;
>              rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
>          }
> -        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
> -            / (end_time - rs->time_last_bitmap_sync);
> +
> +        /* reset period counters */
>          rs->time_last_bitmap_sync = end_time;
>          rs->num_dirty_pages_period = 0;
>      }
> -- 
> 1.9.5
> 

-- 
Peter Xu


