qemu-devel


From: Peter Xu
Subject: Re: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Tue, 6 Jul 2021 14:37:11 -0400

On Sun, Jul 04, 2021 at 04:14:57PM +0200, Lukas Straub wrote:
> On Sat, 3 Jul 2021 18:31:15 +0200
> Lukas Straub <lukasstraub2@web.de> wrote:
> 
> > On Wed, 30 Jun 2021 16:08:05 -0400
> > Peter Xu <peterx@redhat.com> wrote:
> > 
> > > Taking the mutex for every single dirty bit to clear is too slow, especially
> > > since we take/release it even when the dirty bit is already clear.  So far
> > > it's only used to sync the special case of qemu_guest_free_page_hint()
> > > against the migration thread, nothing really that serious yet.  Let's move
> > > the lock up a level.
> > > 
> > > There're two callers of migration_bitmap_clear_dirty().
> > > 
> > > For migration, move it into ram_save_iterate().  With the help of the
> > > MAX_WAIT logic, we'll only run ram_save_iterate() for roughly 50ms at a
> > > time, so we take the lock once there at the entry.  It also means any call
> > > site of qemu_guest_free_page_hint() can be delayed; but that should be very
> > > rare, only during migration, and I don't see a problem with it.
> > > 
> > > For COLO, move it up to colo_flush_ram_cache().  I think COLO forgot to
> > > take that lock even when calling ramblock_sync_dirty_bitmap(), whereas
> > > migration_bitmap_sync(), for example, takes it correctly.  So let the mutex
> > > cover both the ramblock_sync_dirty_bitmap() and
> > > migration_bitmap_clear_dirty() calls.
> > 
> > Hi,
> > I don't think COLO needs it: colo_flush_ram_cache() only runs on the
> > secondary (incoming) side, and AFAIK the bitmap is only set in
> > ram_load_precopy(), and the two don't run in parallel.
> > 
> > That said, I'm not sure what ramblock_sync_dirty_bitmap() does.  I guess
> > it's only there to make the rest of the migration code happy?
> 
> To answer myself: it syncs the guest's own dirty bitmap into the ramblock.
> Not only do changed pages on the primary need to be overwritten from the
> cache, but also changed pages on the secondary, so that the RAM content
> exactly matches the primary's.
> 
> Now, I still don't know what would run concurrently there since the
> guest is stopped when colo_flush_ram_cache() runs.

Indeed I know little about COLO, so I don't know whether it's needed in
practice.  It's just easier to always take the mutex whenever the protected
fields are modified; mutexes work fine in single-threaded code anyway.

Or would you prefer me to drop it?  I'll need to rely on your COLO knowledge to
know whether it's safe...  I don't think common migration code runs during
COLO; would qemu_guest_free_page_hint() then be called for a COLO SVM?
If not, it indeed looks safe to drop the mutex.

Thanks,

-- 
Peter Xu



