From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH v2 6/8] migration: move handle of zero page to the thread
Date: Mon, 23 Jul 2018 13:03:25 +0800
User-agent: Mutt/1.10.0 (2018-05-17)

On Thu, Jul 19, 2018 at 08:15:18PM +0800, address@hidden wrote:

[...]

> @@ -1950,12 +1971,16 @@ retry:
>              set_compress_params(&comp_param[idx], block, offset);
>              qemu_cond_signal(&comp_param[idx].cond);
>              qemu_mutex_unlock(&comp_param[idx].mutex);
> -            pages = 1;
> -            /* 8 means a header with RAM_SAVE_FLAG_CONTINUE. */
> -            compression_counters.reduced_size += TARGET_PAGE_SIZE -
> -                                                 bytes_xmit + 8;
> -            compression_counters.pages++;
>              ram_counters.transferred += bytes_xmit;
> +            pages = 1;

(The move of this line seems unrelated to the patch; meanwhile there is now
more duplicated code, so it would be even better to introduce a helper.)

> +            if (comp_param[idx].zero_page) {
> +                ram_counters.duplicate++;
> +            } else {
> +                /* 8 means a header with RAM_SAVE_FLAG_CONTINUE. */
> +                compression_counters.reduced_size += TARGET_PAGE_SIZE -
> +                                                     bytes_xmit + 8;
> +                compression_counters.pages++;
> +            }
>              break;
>          }
>      }

[...]

> @@ -2249,15 +2308,8 @@ static int ram_save_target_page(RAMState *rs, 
> PageSearchStatus *pss,
>          return res;
>      }
>  
> -    /*
> -     * When starting the process of a new block, the first page of
> -     * the block should be sent out before other pages in the same
> -     * block, and all the pages in last block should have been sent
> -     * out, keeping this order is important, because the 'cont' flag
> -     * is used to avoid resending the block name.
> -     */
> -    if (block != rs->last_sent_block && save_page_use_compression(rs)) {
> -            flush_compressed_data(rs);
> +    if (save_compress_page(rs, block, offset)) {
> +        return 1;

It's a bit tricky (though it seems to be a good idea too) to move the
zero-page detection into the compression thread. However, I noticed that
we also do something else for zero pages:

    res = save_zero_page(rs, block, offset);
    if (res > 0) {
        /* Must let xbzrle know, otherwise a previous (now 0'd) cached
         * page would be stale
         */
        if (!save_page_use_compression(rs)) {
            XBZRLE_cache_lock();
            xbzrle_cache_zero_page(rs, block->offset + offset);
            XBZRLE_cache_unlock();
        }
        ram_release_pages(block->idstr, offset, res);
        return res;
    }

I'd guess that the xbzrle update of the zero page is not needed for
compression, since after all xbzrle is not enabled when compression is
enabled; however, do we still need to do the ram_release_pages() somehow?

>      }
>  
>      res = save_zero_page(rs, block, offset);
> @@ -2275,18 +2327,10 @@ static int ram_save_target_page(RAMState *rs, 
> PageSearchStatus *pss,
>      }
>  
>      /*
> -     * Make sure the first page is sent out before other pages.
> -     *
> -     * we post it as normal page as compression will take much
> -     * CPU resource.
> -     */
> -    if (block == rs->last_sent_block && save_page_use_compression(rs)) {
> -        res = compress_page_with_multi_thread(rs, block, offset);
> -        if (res > 0) {
> -            return res;
> -        }
> -        compression_counters.busy++;
> -    } else if (migrate_use_multifd()) {
> +    * do not use multifd for compression as the first page in the new
> +    * block should be posted out before sending the compressed page
> +    */
> +    if (!save_page_use_compression(rs) && migrate_use_multifd()) {
>          return ram_save_multifd_page(rs, block, offset);
>      }
>  
> -- 
> 2.14.4
> 

Regards,

-- 
Peter Xu
