Re: [Qemu-devel] [PATCH] migration: flush the bdrv before stopping VM


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH] migration: flush the bdrv before stopping VM
Date: Thu, 19 Mar 2015 14:40:18 +0000
User-agent: Mutt/1.5.23 (2014-03-12)

* Li, Liang Z (address@hidden) wrote:
> > * Li, Liang Z (address@hidden) wrote:
> > > > > > First, an explanation of why I think this doesn't fix the full
> > > > > > problem.  With this patch, we fix the case where we have a dirty
> > > > > > block layer but basically nothing dirtying the memory on the
> > > > > > guest (we move the 20-second block-layer flush out of the
> > > > > > max_downtime window to the point where we have decided that the
> > > > > > amount of dirty memory is small enough to be transferred during
> > > > > > max_downtime).  But it is still going to take 20 seconds to flush
> > > > > > the block layer, and during those 20 seconds the amount of memory
> > > > > > that can be dirtied is HUGE.
> > > > >
> > > > > It's true.
> > > >
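(For context: the max_downtime referred to above is the migration downtime
limit, which is set from the QEMU monitor.  The commands below are only an
illustration with an arbitrary value, not part of the original report.)

  (qemu) migrate_set_downtime 0.3
  (qemu) info migrate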
> > > > What kind of cache is it actually that takes 20s to flush here?
> > > >
> > >
> > > I ran a script in the guest which does a dd operation, like this:
> > >
> > > #!/bin/bash
> > > # repeatedly write an ~800 MB file and delete it to keep dirtying the cache
> > > for i in {1..1000000}
> > > do
> > >   time dd if=/dev/zero of=/time.bdf bs=4k count=200000
> > >   rm /time.bdf
> > > done
> > >
> > > It's an extreme case.
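(Purely as an illustration of where that data sits: with QEMU's default
cache=writeback, the writes from a workload like this land in the host page
cache for the image file, and that is what the block-layer flush has to push
out at the end of migration.  A rough way to watch it build up on the host:)

  # run on the host while the guest dd loop is going
  watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'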
> > 
> > With what qemu options for the device, and what was your device backed by?
> 
> Very simple:
> ./qemu-system-x86_64 -enable-kvm -smp 4 -m 4096  -net none rhel6u5.img 
> -monitor stdio
> 
> And it's a local migration.  I will do the test between two physical machines 
> later.
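(For completeness, a local migration of this sort would typically be driven
from the monitor roughly as below; the TCP port is just a placeholder:)

  # destination QEMU on the same host, listening for the incoming migration
  ./qemu-system-x86_64 -enable-kvm -smp 4 -m 4096 -net none rhel6u5.img \
      -monitor stdio -incoming tcp:0:4444

  # on the source monitor
  (qemu) migrate -d tcp:localhost:4444
  (qemu) info migrate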

OK, but for shared storage you would have to add cache=none (or something
like that), so that would change the behaviour anyway.
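(A purely illustrative variant of the command line above using an explicit
-drive option; cache=none opens the image with O_DIRECT and so bypasses the
host page cache:)

  ./qemu-system-x86_64 -enable-kvm -smp 4 -m 4096 -net none \
      -drive file=rhel6u5.img,cache=none \
      -monitor stdio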

Dave
> 
> 
> Liang
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


