
From: Umesh Deshpande
Subject: Re: [Qemu-devel] [RFC 3/4] A separate thread for the VM migration
Date: Thu, 21 Jul 2011 19:28:08 -0400 (EDT)


----- Original Message -----
From: "Marcelo Tosatti" <address@hidden>
To: "Umesh Deshpande" <address@hidden>
Cc: address@hidden, address@hidden
Sent: Wednesday, July 20, 2011 3:02:46 PM
Subject: Re: [RFC 3/4] A separate thread for the VM migration

On Wed, Jul 20, 2011 at 12:00:44AM -0400, Umesh Deshpande wrote:
> This patch creates a separate thread for the guest migration on the source 
> side. The migration routine is called from the migration clock.
> 
> Signed-off-by: Umesh Deshpande <address@hidden>
> ---
>  arch_init.c      |    8 +++++++
>  buffered_file.c  |   10 ++++-----
>  migration-tcp.c  |   18 ++++++++---------
>  migration-unix.c |    7 ++----
>  migration.c      |   56 +++++++++++++++++++++++++++++--------------------------
>  migration.h      |    4 +--
>  6 files changed, 57 insertions(+), 46 deletions(-)
> 
> diff --git a/arch_init.c b/arch_init.c
> index f81a729..6d44b72 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -260,6 +260,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>          return 0;
>      }
>  
> +    if (stage != 3) {
> +        qemu_mutex_lock_iothread();
> +    }
> +
>      if (cpu_physical_sync_dirty_bitmap(0, TARGET_PHYS_ADDR_MAX) != 0) {
>          qemu_file_set_error(f);
>          return 0;
> @@ -267,6 +271,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>  
>      sync_migration_bitmap(0, TARGET_PHYS_ADDR_MAX);
>  
> +    if (stage != 3) {
> +        qemu_mutex_unlock_iothread();
> +    }
> +

Many data structures shared by vcpus/iothread and the migration thread are
accessed simultaneously without protection. Instead of simply moving the
entire migration routine into a thread, I'd suggest moving only the
time-consuming work in ram_save_block (dup_page and put_buffer), after
properly auditing for shared access. And send more than one page at a time,
of course.
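
For illustration, a minimal sketch of that split, using plain pthreads with a
single producer (the iothread) and a single consumer (a migration worker).
The queue, page_desc, enqueue_page() and write_page() here are placeholders,
not QEMU's actual API:

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define BATCH 64

struct page_desc { void *host_addr; size_t len; };

static struct page_desc queue[BATCH];
static int q_count;
static bool q_done;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_cond = PTHREAD_COND_INITIALIZER;

/* Stand-in for the expensive work (dup_page + put_buffer). */
static void write_page(struct page_desc *p) { (void)p; }

/* Called with the iothread lock held; only queues work, never blocks on I/O. */
static void enqueue_page(void *addr, size_t len)
{
    pthread_mutex_lock(&q_lock);
    while (q_count == BATCH) {
        pthread_cond_wait(&q_cond, &q_lock);   /* queue full, wait for drain */
    }
    queue[q_count].host_addr = addr;
    queue[q_count].len = len;
    q_count++;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

static void *migration_worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (q_count == 0 && !q_done) {
            pthread_cond_wait(&q_cond, &q_lock);
        }
        if (q_count == 0) {                    /* q_done set, nothing left */
            pthread_mutex_unlock(&q_lock);
            return NULL;
        }
        struct page_desc batch[BATCH];
        int n = q_count;
        for (int i = 0; i < n; i++) {
            batch[i] = queue[i];               /* drain a whole batch, in order */
        }
        q_count = 0;
        pthread_cond_signal(&q_cond);          /* producer may refill */
        pthread_mutex_unlock(&q_lock);

        for (int i = 0; i < n; i++) {
            write_page(&batch[i]);             /* slow part, no lock held */
        }
    }
}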

The group of migration routines moved into the thread needs to be executed
sequentially, because of the way the protocol is designed. Currently,
migration is performed in sections, and we cannot proceed to the next section
until the current section has been written to the QEMUFile. A thread for any
sub-part would introduce parallelism, breaking these sequential semantics.
(Condition variables would have to be used to ensure sequentiality between
the new thread and the iothread.)
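
Roughly, such a handshake could look like this (submit_section() runs in the
iothread, drain_section() in the migration thread; the section counters and
function names are illustrative only):

#include <pthread.h>

static pthread_mutex_t sec_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t sec_ready = PTHREAD_COND_INITIALIZER;    /* section handed over */
static pthread_cond_t sec_flushed = PTHREAD_COND_INITIALIZER;  /* section written out */
static int prepared, flushed;   /* per-section sequence counters */

/* iothread: may not prepare section N+1 while section N is in flight */
static void submit_section(void)
{
    pthread_mutex_lock(&sec_lock);
    while (flushed < prepared) {
        pthread_cond_wait(&sec_flushed, &sec_lock);
    }
    prepared++;
    pthread_cond_signal(&sec_ready);
    pthread_mutex_unlock(&sec_lock);
}

/* migration thread: writes sections to the QEMUFile strictly in order */
static void drain_section(void)
{
    pthread_mutex_lock(&sec_lock);
    while (flushed == prepared) {
        pthread_cond_wait(&sec_ready, &sec_lock);
    }
    pthread_mutex_unlock(&sec_lock);

    /* ... expensive put_buffer work for this section goes here ... */

    pthread_mutex_lock(&sec_lock);
    flushed++;
    pthread_cond_signal(&sec_flushed);
    pthread_mutex_unlock(&sec_lock);
}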

Secondly, put_buffer is called from iohandlers and timers, both of which
currently run in the iothread. With a separate thread for dup_page and
put_buffer, put_buffer would also be called from inside that thread.

Another option with the current implementation would be to hold the
qemu_mutex inside the thread for most of the work, releasing it only for the
time-consuming part in ram_save_block.
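
That pattern would look roughly as follows (qemu_mutex_lock_iothread() and
qemu_mutex_unlock_iothread() are the same global-lock helpers used in the
patch above; the two other helpers are stand-ins):

void qemu_mutex_lock_iothread(void);
void qemu_mutex_unlock_iothread(void);

static void sync_bitmap_and_bookkeeping(void) { }  /* cheap, touches shared state */
static void send_dirty_pages(void) { }             /* dup_page + put_buffer, slow */

static void migration_iteration(void)
{
    qemu_mutex_lock_iothread();      /* shared state protected here */
    sync_bitmap_and_bookkeeping();
    qemu_mutex_unlock_iothread();    /* drop the lock for the slow part */

    send_dirty_pages();              /* vcpus/iothread keep running meanwhile */
}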

A separate lock for ram_list is probably necessary, so that it can
be accessed from the migration thread.
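
Something along these lines, where RAMBlock is reduced to the two fields the
sketch needs and ram_blocks stands in for QEMU's real ram_list:

#include <pthread.h>
#include <stddef.h>

struct RAMBlock {
    struct RAMBlock *next;
    size_t length;
};

static struct RAMBlock *ram_blocks;
static pthread_mutex_t ram_list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Migration thread: walk the list without holding the global iothread
   lock.  Anyone adding or removing blocks (e.g. memory hotplug) must
   take ram_list_lock as well. */
static size_t total_ram_bytes(void)
{
    size_t total = 0;
    pthread_mutex_lock(&ram_list_lock);
    for (struct RAMBlock *b = ram_blocks; b != NULL; b = b->next) {
        total += b->length;
    }
    pthread_mutex_unlock(&ram_list_lock);
    return total;
}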



