From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH RESEND v2 08/18] ram/COLO: Record the dirty pages that SVM received
Date: Mon, 24 Apr 2017 20:29:38 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.1 (gnu/linux)

zhanghailiang <address@hidden> wrote:
> We record the addresses of the dirty pages that are received;
> this will help us flush the pages that are cached into the SVM.
>
> The trick here is that we record the dirty pages by re-using the
> migration dirty bitmap. In a later patch, we will start dirty logging
> for the SVM, just as migration does; this way, we can record the
> dirty pages caused by both the PVM and the SVM, and we only flush
> those dirty pages from the RAM cache when doing a checkpoint.
>
> Cc: Juan Quintela <address@hidden>
> Signed-off-by: zhanghailiang <address@hidden>
> Reviewed-by: Dr. David Alan Gilbert <address@hidden>
> ---
>  migration/ram.c | 29 +++++++++++++++++++++++++++++
>  1 file changed, 29 insertions(+)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 05d1b06..0653a24 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2268,6 +2268,9 @@ static inline void *host_from_ram_block_offset(RAMBlock *block,
>  static inline void *colo_cache_from_block_offset(RAMBlock *block,
>                                                   ram_addr_t offset)
>  {
> +    unsigned long *bitmap;
> +    long k;
> +
>      if (!offset_in_ramblock(block, offset)) {
>          return NULL;
>      }
> @@ -2276,6 +2279,17 @@ static inline void *colo_cache_from_block_offset(RAMBlock *block,
>                       __func__, block->idstr);
>          return NULL;
>      }
> +
> +    k = (memory_region_get_ram_addr(block->mr) + offset) >> TARGET_PAGE_BITS;
> +    bitmap = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
> +    /*
> +     * During a COLO checkpoint, we need the bitmap of these migrated
> +     * pages; it helps us decide which pages in the RAM cache should
> +     * be flushed into the VM's RAM later.
> +     */
> +    if (!test_and_set_bit(k, bitmap)) {
> +        ram_state.migration_dirty_pages++;
> +    }
>      return block->colo_cache + offset;
>  }
>  
> @@ -2752,6 +2766,15 @@ int colo_init_ram_cache(void)
>          memcpy(block->colo_cache, block->host, block->used_length);
>      }
>      rcu_read_unlock();
> +    /*
> +     * Record the dirty pages that were sent by the PVM; we use this dirty
> +     * bitmap to decide which pages in the cache should be flushed into the
> +     * SVM's RAM. Here we use the same name 'ram_bitmap' as for migration.
> +     */
> +    ram_state.ram_bitmap = g_new0(RAMBitmap, 1);
> +    ram_state.ram_bitmap->bmap = bitmap_new(last_ram_page());
> +    ram_state.migration_dirty_pages = 0;
> +
>      return 0;
>  
>  out_locked:
> @@ -2770,6 +2793,12 @@ out_locked:
>  void colo_release_ram_cache(void)
>  {
>      RAMBlock *block;
> +    RAMBitmap *bitmap = ram_state.ram_bitmap;
> +
> +    atomic_rcu_set(&ram_state.ram_bitmap, NULL);
> +    if (bitmap) {
> +        call_rcu(bitmap, migration_bitmap_free, rcu);
> +    }
>  
>      rcu_read_lock();
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
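
For reference, the checkpoint-time flush that consumes this bitmap could
look roughly like the sketch below; colo_flush_ram_cache() and the loop
shape are my guesses from the rest of the series, not code from this
patch:

    static void colo_flush_ram_cache(void)
    {
        unsigned long *bitmap = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
        RAMBlock *block;

        rcu_read_lock();
        QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
            unsigned long first = memory_region_get_ram_addr(block->mr)
                                  >> TARGET_PAGE_BITS;
            unsigned long last = first
                                 + (block->used_length >> TARGET_PAGE_BITS);
            unsigned long page;

            /* Walk every page of this block that was recorded as dirty. */
            for (page = find_next_bit(bitmap, last, first);
                 page < last;
                 page = find_next_bit(bitmap, last, page + 1)) {
                ram_addr_t offset = (page - first) << TARGET_PAGE_BITS;

                clear_bit(page, bitmap);
                ram_state.migration_dirty_pages--;
                /* Copy the cached page back into the SVM's memory. */
                memcpy(block->host + offset, block->colo_cache + offset,
                       TARGET_PAGE_SIZE);
            }
        }
        rcu_read_unlock();
    }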

You can see my split-bitmap patches: I am splitting the dirty bitmap per
RAMBlock. I don't think it should make your life more difficult, but
please take a look.
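
On top of that series, each RAMBlock carries its own 'bmap', so the hunk
above would reduce to something like this (a sketch; the per-block field
name and the indexing are assumptions based on that series):

    static inline void *colo_cache_from_block_offset(RAMBlock *block,
                                                     ram_addr_t offset)
    {
        if (!offset_in_ramblock(block, offset)) {
            return NULL;
        }
        if (!block->colo_cache) {
            error_report("%s: colo_cache is NULL in block :%s",
                         __func__, block->idstr);
            return NULL;
        }

        /*
         * The per-block bitmap is indexed by the page offset inside the
         * block, so no translation through memory_region_get_ram_addr()
         * is needed.
         */
        if (!test_and_set_bit(offset >> TARGET_PAGE_BITS, block->bmap)) {
            ram_state.migration_dirty_pages++;
        }
        return block->colo_cache + offset;
    }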

I am wondering if it is faster/easier to use the page_cache.c that
xbzrle uses to store the dirty pages instead of copying the whole
RAMBlocks, but I don't really know.
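
If someone wants to try that, here is a rough sketch of how it could
look. The cache_insert()/cache_is_cached()/get_cached_data() calls
follow include/migration/page_cache.h as I remember it, and the
'colo_page_cache' variable and both helpers are made up for the example:

    #include "migration/page_cache.h"

    /*
     * Made-up global: one cache keyed by ram_addr, instead of keeping a
     * colo_cache copy of every whole RAMBlock.
     */
    static PageCache *colo_page_cache;

    /* Store a page received from the PVM; cache_insert() copies the data. */
    static int colo_cache_received_page(RAMBlock *block, ram_addr_t offset,
                                        const uint8_t *data,
                                        uint64_t checkpoint)
    {
        uint64_t addr = memory_region_get_ram_addr(block->mr) + offset;

        return cache_insert(colo_page_cache, addr, data, checkpoint);
    }

    /*
     * At checkpoint time, flush one page back into the SVM's RAM if we
     * have a copy cached for the current checkpoint generation.
     */
    static void colo_flush_cached_page(RAMBlock *block, ram_addr_t offset,
                                       uint64_t checkpoint)
    {
        uint64_t addr = memory_region_get_ram_addr(block->mr) + offset;

        if (cache_is_cached(colo_page_cache, addr, checkpoint)) {
            memcpy(block->host + offset,
                   get_cached_data(colo_page_cache, addr),
                   TARGET_PAGE_SIZE);
        }
    }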


Thanks, Juan.
