Re: [Qemu-devel] [PATCHv3 7/9] migration: do not send zero pages in bulk stage


From: Eric Blake
Subject: Re: [Qemu-devel] [PATCHv3 7/9] migration: do not send zero pages in bulk stage
Date: Thu, 21 Mar 2013 13:26:35 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130311 Thunderbird/17.0.4

On 03/21/2013 09:57 AM, Peter Lieven wrote:
> During the bulk stage of RAM migration, if a page is a
> zero page, do not send it at all.
> The memory at the destination reads as zero anyway.
> 
> Even if there is a madvise with QEMU_MADV_DONTNEED
> at the target upon receipt of a zero page, I have observed
> that the target starts swapping if the memory is overcommitted.
> It seems that the pages are dropped asynchronously.
> 
> Signed-off-by: Peter Lieven <address@hidden>
> ---
>  arch_init.c |   10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)

>              if (is_zero_page(p)) {
>                  acct_info.dup_pages++;
> -                bytes_sent = save_block_hdr(f, block, offset, cont,
> -                                            RAM_SAVE_FLAG_COMPRESS);
> -                qemu_put_byte(f, *p);
> -                bytes_sent += 1;
> +                if (!ram_bulk_stage) {
> +                    bytes_sent = save_block_hdr(f, block, offset, cont,
> +                                                RAM_SAVE_FLAG_COMPRESS);
> +                    qemu_put_byte(f, 0);
> +                }
> +                bytes_sent++;

Logic is STILL wrong.  I pointed out in v2 that bytes_sent should not be
incremented if you are not sending the page, so the increment needs to be
inside the 'if (!ram_bulk_stage)' block.
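
In other words, something along these lines (just a sketch, assuming the
surrounding ram_save_block() context in arch_init.c that this series touches):

    if (is_zero_page(p)) {
        acct_info.dup_pages++;
        if (!ram_bulk_stage) {
            /* only account for bytes we actually put on the wire */
            bytes_sent = save_block_hdr(f, block, offset, cont,
                                        RAM_SAVE_FLAG_COMPRESS);
            qemu_put_byte(f, 0);
            bytes_sent++;
        }
    }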

Do we want to add a new migration statistic counter of how many zero
pages we omitted sending during the bulk stage?
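
If so, a minimal sketch could be an 'else' branch on the hunk above (the
'skipped_pages' field is purely illustrative here, not an existing member
of AccountingInfo):

        } else {
            /* zero page omitted entirely during the bulk stage */
            acct_info.skipped_pages++;
        }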

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
