
Re: [Qemu-devel] [RFC 3/4] A separate thread for the VM migration


From: Marcelo Tosatti
Subject: Re: [Qemu-devel] [RFC 3/4] A separate thread for the VM migration
Date: Wed, 20 Jul 2011 16:02:46 -0300
User-agent: Mutt/1.5.21 (2010-09-15)

On Wed, Jul 20, 2011 at 12:00:44AM -0400, Umesh Deshpande wrote:
> This patch creates a separate thread for the guest migration on the source 
> side. The migration routine is called from the migration clock.
> 
> Signed-off-by: Umesh Deshpande <address@hidden>
> ---
>  arch_init.c      |    8 +++++++
>  buffered_file.c  |   10 ++++-----
>  migration-tcp.c  |   18 ++++++++---------
>  migration-unix.c |    7 ++----
>  migration.c      |   56 +++++++++++++++++++++++++++++--------------------------
>  migration.h      |    4 +--
>  6 files changed, 57 insertions(+), 46 deletions(-)
> 
> diff --git a/arch_init.c b/arch_init.c
> index f81a729..6d44b72 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -260,6 +260,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>          return 0;
>      }
>  
> +    if (stage != 3) {
> +        qemu_mutex_lock_iothread();
> +    }
> +
>      if (cpu_physical_sync_dirty_bitmap(0, TARGET_PHYS_ADDR_MAX) != 0) {
>          qemu_file_set_error(f);
>          return 0;
> @@ -267,6 +271,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>  
>      sync_migration_bitmap(0, TARGET_PHYS_ADDR_MAX);
>  
> +    if (stage != 3) {
> +        qemu_mutex_unlock_iothread();
> +    }
> +

Many data structures shared by the vcpus/iothread and the migration
thread are accessed simultaneously without protection. Instead of
simply moving the entire migration routine into a thread, I'd suggest
moving only the time-consuming work in ram_save_block (dup_page and
put_buffer) there, after properly auditing for shared access. And send
more than one page at a time, of course. Roughly what I have in mind
is sketched below.
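
(A sketch only, with made-up names: queue_dirty_page, migration_writer
and send_page are not existing QEMU functions. The point is that the
iothread side only memcpys a dirty page into a bounded queue while
holding its lock, and a writer thread drains the queue and does the
expensive part outside the lock.)

/* Sketch only -- hypothetical names, not QEMU API. */
#include <pthread.h>
#include <stdint.h>
#include <string.h>

#define QUEUE_DEPTH 64
#define PAGE_SIZE   4096

typedef struct {
    uint64_t addr;                /* guest physical address of the page */
    uint8_t  data[PAGE_SIZE];     /* private copy, safe to use unlocked */
} QueuedPage;

static QueuedPage queue[QUEUE_DEPTH];
static int q_head, q_tail, q_len;
static pthread_mutex_t q_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_notempty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  q_notfull  = PTHREAD_COND_INITIALIZER;

/* Stand-in for the real work (dup-page detection plus the buffer
 * write); hypothetical, does nothing here. */
static void send_page(uint64_t addr, const uint8_t *data)
{
    (void)addr;
    (void)data;                   /* real code would dup-check and write */
}

/* Called from the save path with the iothread lock held: only the
 * memcpy happens under that lock, the slow work is deferred. */
void queue_dirty_page(uint64_t addr, const uint8_t *host)
{
    pthread_mutex_lock(&q_lock);
    while (q_len == QUEUE_DEPTH) {
        pthread_cond_wait(&q_notfull, &q_lock);
    }
    queue[q_tail].addr = addr;
    memcpy(queue[q_tail].data, host, PAGE_SIZE);
    q_tail = (q_tail + 1) % QUEUE_DEPTH;
    q_len++;
    pthread_cond_signal(&q_notempty);
    pthread_mutex_unlock(&q_lock);
}

/* Migration writer thread: drains queued pages and sends them,
 * never touching guest memory or the ram_list. */
void *migration_writer(void *opaque)
{
    (void)opaque;
    for (;;) {
        QueuedPage p;

        pthread_mutex_lock(&q_lock);
        while (q_len == 0) {
            pthread_cond_wait(&q_notempty, &q_lock);
        }
        p = queue[q_head];        /* struct copy, page leaves the queue */
        q_head = (q_head + 1) % QUEUE_DEPTH;
        q_len--;
        pthread_cond_signal(&q_notfull);
        pthread_mutex_unlock(&q_lock);

        send_page(p.addr, p.data);
    }
    return NULL;
}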

A separate lock for the ram_list is probably necessary, so that it can
be safely accessed from the migration thread; see the second sketch
below.
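
(Again a sketch; ram_list_lock/ram_list_unlock are made-up helpers,
and the real change would have to convert every existing reader and
writer of the block list to take the lock.)

/* Sketch: a dedicated mutex guarding the ram_list, so the migration
 * thread can walk the RAM blocks without the global iothread lock. */
#include <pthread.h>

static pthread_mutex_t ram_list_mutex = PTHREAD_MUTEX_INITIALIZER;

void ram_list_lock(void)
{
    pthread_mutex_lock(&ram_list_mutex);
}

void ram_list_unlock(void)
{
    pthread_mutex_unlock(&ram_list_mutex);
}

/* The migration thread would then bracket its block walk:
 *
 *     ram_list_lock();
 *     QLIST_FOREACH(block, &ram_list.blocks, next) {
 *         ...
 *     }
 *     ram_list_unlock();
 *
 * and the allocation/free paths that modify the list would take the
 * same lock before touching it.
 */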



