From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
Date: Fri, 15 Jan 2016 12:39:11 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.4.0


On 15/01/2016 10:48, Liang Li wrote:
> Since the VM's RAM pages are initialized to zero (the VM's RAM is
> allocated with mmap() and the MAP_ANONYMOUS option, or with mmap()
> without MAP_SHARED if hugetlbfs is used), there is no need to send
> zero page headers to the destination.
> 
> For a guest that uses only a small portion of its RAM, this change
> avoids allocating all of the guest's RAM pages on the destination
> node after live migration. Another benefit is that the destination
> QEMU can save a lot of CPU cycles on zero page checking.
> 
> Signed-off-by: Liang Li <address@hidden>
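
The zero-fill premise itself is easy to check in isolation; a minimal
standalone sketch (assuming Linux; not part of the patch) showing that
a fresh anonymous mapping reads back as zeros:

#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(p != MAP_FAILED);

    /* Fresh anonymous pages are zero-filled by the kernel. */
    unsigned char zeros[4096] = { 0 };
    printf("fresh page is %szeroed\n",
           memcmp(p, zeros, len) == 0 ? "" : "NOT ");

    munmap(p, len);
    return 0;
}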

This does not work.  Depending on the board, some pages are written by
QEMU before the guest starts.  If the guest rewrites them with zeroes,
this change breaks migration.
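
A toy sketch of that failure mode, with a hypothetical board_init()
standing in for the board setup writes (toy page size, no real
migration code):

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 16   /* toy page size, just for illustration */

/* Hypothetical stand-in for QEMU board setup, which writes to some
 * guest RAM pages before the guest starts running. */
static void board_init(unsigned char *ram)
{
    memset(ram, 0xAB, PAGE_SIZE);
}

int main(void)
{
    unsigned char src[PAGE_SIZE], dst[PAGE_SIZE];

    board_init(src);   /* the same board setup runs on both sides */
    board_init(dst);

    /* The guest rewrites the page with zeroes on the source. */
    memset(src, 0, PAGE_SIZE);

    /* Source-side zero check, as in save_zero_page(). */
    if (memcmp(src, (unsigned char[PAGE_SIZE]){ 0 }, PAGE_SIZE) == 0) {
        /* Bulk stage with the patch applied: nothing is sent, so the
         * destination never learns that the page became zero.  Without
         * the patch, a zero-page header would arrive and we would do:
         *     memset(dst, 0, PAGE_SIZE);
         */
    }

    printf("pages %s\n",
           memcmp(src, dst, PAGE_SIZE) ? "DIFFER -- migration broken"
                                       : "match");
    return 0;
}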

Paolo

> ---
>  migration/ram.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 4e606ab..c4821d1 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
>  
>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>          acct_info.dup_pages++;
> -        *bytes_transferred += save_page_header(f, block,
> -                                               offset | RAM_SAVE_FLAG_COMPRESS);
> -        qemu_put_byte(f, 0);
> -        *bytes_transferred += 1;
> +        if (!ram_bulk_stage) {
> +            *bytes_transferred += save_page_header(f, block, offset |
> +                                                   RAM_SAVE_FLAG_COMPRESS);
> +            qemu_put_byte(f, 0);
> +            *bytes_transferred += 1;
> +        }
>          pages = 1;
>      }
>  
> 


