

From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH] stop the iteration when too many pages is transferred
Date: Fri, 19 Nov 2010 20:23:55 -0600
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.15) Gecko/20101027 Lightning/1.0b1 Thunderbird/3.0.10

On 11/17/2010 08:32 PM, Wen Congyang wrote:
> When the total size of the pages sent is larger than
> max_factor times the size of the guest OS's memory,
> stop the iteration. The default value of max_factor is 3.
>
> This is similar to Xen.
>
>
> Signed-off-by: Wen Congyang
>   

I'm strongly opposed to doing this. I think Xen gets this totally wrong.

Migration is a contract. When you set the stop time, you're saying that
you want the guest to experience only a fixed amount of downtime.
Stopping the guest after some arbitrary number of iterations makes the
downtime non-deterministic. With a very large guest, this could wreak
havoc, causing dropped network connections, etc.

It's totally unsafe.

If a management tool wants this behavior, they can set a timeout and
explicitly stop the guest during the live migration. IMHO, such a
management tool is not doing its job properly, but it still can be
implemented.

Regards,

Anthony Liguori

> ---
>  arch_init.c |   13 ++++++++++++-
>  1 files changed, 12 insertions(+), 1 deletions(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index 4486925..67e90f8 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -212,6 +212,14 @@ uint64_t ram_bytes_total(void)
>      return total;
>  }
>  
> +static uint64_t ram_blocks_total(void)
> +{
> +    return ram_bytes_total() / TARGET_PAGE_SIZE;
> +}
> +
> +static uint64_t blocks_transferred = 0;
> +static int max_factor = 3;
> +
>  int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>  {
>      ram_addr_t addr;
> @@ -234,6 +242,7 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>          bytes_transferred = 0;
>          last_block = NULL;
>          last_offset = 0;
> +        blocks_transferred = 0;
>  
>          /* Make sure all dirty bits are set */
>          QLIST_FOREACH(block, &ram_list.blocks, next) {
> @@ -266,6 +275,7 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>  
>          bytes_sent = ram_save_block(f);
>          bytes_transferred += bytes_sent;
> +        blocks_transferred += !!bytes_sent;
>          if (bytes_sent == 0) { /* no more blocks */
>              break;
>          }
> @@ -295,7 +305,8 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>  
>      expected_time = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;
>  
> -    return (stage == 2) && (expected_time <= migrate_max_downtime());
> +    return (stage == 2) && ((expected_time <= migrate_max_downtime())
> +            || (blocks_transferred > ram_blocks_total() * max_factor));
>  }
>  
>  static inline void *host_from_stream_offset(QEMUFile *f,
>   



