Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments


From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments
Date: Fri, 24 Mar 2017 17:55:17 +0800
User-agent: Mutt/1.5.24 (2015-08-30)

Hi, Juan,

Got several nitpicks below... (along with some questions)

On Thu, Mar 23, 2017 at 09:44:54PM +0100, Juan Quintela wrote:

[...]

>  static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>  {
> @@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t 
> current_addr)
>   *          -1 means that xbzrle would be longer than normal
>   *
>   * @f: QEMUFile where to send the data
> - * @current_data:
> - * @current_addr:
> + * @current_data: contents of the page

Since current_data is a double pointer, maybe "pointer to the address
of the page content"?
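
To double check my reading of the double pointer, here is a toy sketch
of the pattern (names invented for illustration, not the actual QEMU
code):

    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    static uint8_t cache_copy[PAGE_SIZE];

    /* Copy the page into the cache, then redirect the caller's pointer
     * so the caller keeps working on the cached (stable) copy instead
     * of the original page, which may still be changing while the
     * guest runs. */
    static void cache_and_redirect(uint8_t **current_data)
    {
        memcpy(cache_copy, *current_data, PAGE_SIZE);
        *current_data = cache_copy;
    }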

Btw, a question not related to this series... why do we need to update
*current_data in save_xbzrle_page() to point at the newly cached page?
I see that we have:

    /* update *current_data when the page has been
       inserted into cache */
    *current_data = get_cached_data(XBZRLE.cache, current_addr);

What would be the difference if we just use the old pointer in
RAMBlock.host?

[...]

> @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState *ms, 
> PageSearchStatus *pss,
>  }
>  
>  /**
> - * flush_page_queue: Flush any remaining pages in the ram request queue
> - *    it should be empty at the end anyway, but in error cases there may be
> - *    some left.
> + * flush_page_queue: flush any remaining pages in the ram request queue

Here the comment says (just like the function name suggests) that we
will "flush any remaining pages in the ram request queue", but the
implementation only frees everything left in src_page_requests. The
problem is that "flush" makes me think of "flushing the rest of the
pages to the other side"... which is not what happens.

Would it be nicer to just rename the function to something like
migration_page_queue_free()? We could tune the comment accordingly as
well.
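
Just to illustrate the semantics I would expect from such a name, a
minimal sketch (all names invented, not the real data structures):

    #include <stdlib.h>

    struct page_request {
        struct page_request *next;
    };

    /* Drop every queued request without sending anything; only the
     * bookkeeping memory is released. */
    static void page_queue_free(struct page_request **head)
    {
        while (*head) {
            struct page_request *req = *head;
            *head = req->next;
            free(req);
        }
    }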

[...]

> -/*
> - * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
> - *   the two bitmaps, that are similar, but one is inverted.
> +/**
> + * postcopy_chuck_hostpages_pass: canocalize bitmap in hostpages
                  ^ should be n?     ^^^^^^^^^^ canonicalize?

>   *
> - * We search for runs of target-pages that don't start or end on a
> - * host page boundary;
> - * unsent_pass=true: Cleans up partially unsent host pages by searching
> - *                 the unsentmap
> - * unsent_pass=false: Cleans up partially dirty host pages by searching
> - *                 the main migration bitmap
> + * Helper for postcopy_chunk_hostpages; it's called twice to
> + * canonicalize the two bitmaps, that are similar, but one is
> + * inverted.
>   *
> + * Postcopy requires that all target pages in a hostpage are dirty or
> + * clean, not a mix.  This function canonicalizes the bitmaps.
> + *
> + * @ms: current migration state
> + * @unsent_pass: if true we need to canonicalize partially unsent host pages
> + *               otherwise we need to canonicalize partially dirty host pages
> + * @block: block that contains the page we want to canonicalize
> + * @pds: state for postcopy
>   */
>  static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool 
> unsent_pass,
>                                            RAMBlock *block,

[...]
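
Not a comment on the logic itself, just restating the "all dirty or
all clean" idea in a toy sketch for my own understanding (invented
names, not the real bitmap helpers):

    #include <stdbool.h>

    /* If any target page inside a host page is dirty, mark the whole
     * host page dirty, so no host page ends up half dirty and half
     * clean. */
    static void canonicalize_hostpage(bool *dirty, int pages_per_hostpage)
    {
        bool any_dirty = false;
        int i;

        for (i = 0; i < pages_per_hostpage; i++) {
            any_dirty |= dirty[i];
        }
        for (i = 0; i < pages_per_hostpage; i++) {
            dirty[i] = any_dirty;
        }
    }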

> +/**
> + * ram_save_setup: iterative stage for migration
      ^^^^^^^^^^^^^^ should be ram_save_iterate()?

> + *
> + * Returns zero to indicate success and negative for error
> + *
> + * @f: QEMUFile where to send the data
> + * @opaque: RAMState pointer
> + */
>  static int ram_save_iterate(QEMUFile *f, void *opaque)
>  {
>      int ret;
> @@ -2091,7 +2169,16 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>      return done;
>  }

[...]

> -/*
> - * Allocate data structures etc needed by incoming migration with 
> postcopy-ram
> - * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
> +/**
> + * ram_postococpy_incoming_init: allocate postcopy data structures
> + *
> + * Returns 0 for success and negative if there was one error
> + *
> + * @mis: current migration incoming state
> + *
> + * Allocate data structures etc needed by incoming migration with
> + * postcopy-ram postcopy-ram's similarly names
> + * postcopy_ram_incoming_init does the work

This sentence is slightly hard to parse... but I think the function
name explains itself well enough. :)

Thanks,

--
peterx


