


From: Lukas Straub
Subject: Re: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Sun, 4 Jul 2021 16:14:57 +0200

On Sat, 3 Jul 2021 18:31:15 +0200
Lukas Straub <lukasstraub2@web.de> wrote:

> On Wed, 30 Jun 2021 16:08:05 -0400
> Peter Xu <peterx@redhat.com> wrote:
> 
> > Taking the mutex every time for each dirty bit to clear is too slow,
> > especially since we'll take/release it even if the dirty bit is already
> > cleared.  So far it's only used to sync the special case of
> > qemu_guest_free_page_hint() against the migration thread, nothing
> > really that serious yet.  Let's move the lock up a level.
> > 
> > There are two callers of migration_bitmap_clear_dirty().
> > 
> > For migration, move it into ram_save_iterate().  With the help of the
> > MAX_WAIT logic, we'll only run ram_save_iterate() for no more than
> > roughly 50ms at a time, so take the lock once there at the entry.  It
> > also means any call site of qemu_guest_free_page_hint() can be delayed;
> > but that should be very rare, only during migration, and I don't see a
> > problem with it.
> > 
> > For COLO, move it up to colo_flush_ram_cache().  I think COLO forgot to
> > take that lock even when calling ramblock_sync_dirty_bitmap(), whereas
> > migration_bitmap_sync() takes it correctly.  So let the mutex cover both
> > the ramblock_sync_dirty_bitmap() and migration_bitmap_clear_dirty()
> > calls.
> 
> Hi,
> I don't think COLO needs it: colo_flush_ram_cache() only runs on the
> secondary (incoming) side, and AFAIK the bitmap is only set in
> ram_load_precopy(), and the two don't run in parallel.
> 
> Although I'm not sure what ramblock_sync_dirty_bitmap() does. I guess
> it's only there to make the rest of the migration code happy?

To answer myself: it syncs the guest's own dirty bitmap into the
ramblock's bitmap. Of course, not only pages changed on the primary need
to be overwritten from the cache, but also pages changed on the
secondary, so that the RAM content exactly matches the primary's.

Now, I still don't know what could run concurrently there, since the
guest is stopped when colo_flush_ram_cache() runs.
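For illustration, here's a rough sketch (not the actual patch hunk) of
what the proposed locking in colo_flush_ram_cache() would look like per
the quoted commit message, with bitmap_mutex taken once around both the
bitmap sync and the per-page clearing; helper names are as in
migration/ram.c, and the page-copy loop is elided:

/*
 * Rough sketch only -- not the actual patch hunk.  The idea from the
 * commit message: take bitmap_mutex once, covering both the bitmap
 * sync and all migration_bitmap_clear_dirty() calls, instead of
 * locking per dirty bit.
 */
void colo_flush_ram_cache(void)
{
    RAMBlock *block;

    qemu_mutex_lock(&ram_state->bitmap_mutex);   /* taken once, up front */

    WITH_RCU_READ_LOCK_GUARD() {
        RAMBLOCK_FOREACH_NOT_IGNORED(block) {
            ramblock_sync_dirty_bitmap(ram_state, block);
        }

        /*
         * ... then walk the dirty bits, copy each dirty page from the
         * COLO cache into guest RAM and clear its bit via
         * migration_bitmap_clear_dirty(), which no longer locks
         * internally ...
         */
    }

    qemu_mutex_unlock(&ram_state->bitmap_mutex);
}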

Regards,
Lukas Straub

> Regards,
> Lukas Straub
> 
> > It's even possible to drop the lock entirely and use atomic operations
> > on rb->bmap and the variable migration_dirty_pages.  I didn't do that,
> > just to stay safe; it's also not predictable whether the frequent
> > atomic ops could themselves bring overhead, e.g. on huge VMs where
> > this happens very often.  When that really becomes a problem, we can
> > keep a local counter and only periodically use atomic ops.  Keep it
> > simple for now.
> > 
> > Cc: Wei Wang <wei.w.wang@intel.com>
> > Cc: David Hildenbrand <david@redhat.com>
> > Cc: Hailiang Zhang <zhang.zhanghailiang@huawei.com>
> > Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > Cc: Juan Quintela <quintela@redhat.com>
> > Cc: Leonardo Bras Soares Passos <lsoaresp@redhat.com>
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  migration/ram.c | 13 +++++++++++--
> >  1 file changed, 11 insertions(+), 2 deletions(-)
> > 
> > ...
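
For completeness, a rough sketch (again not the actual hunk) of the
sender-side half of the change quoted above: in ram_save_iterate(), the
lock would be held once across the send loop that the MAX_WAIT logic
already bounds to roughly 50ms, so migration_bitmap_clear_dirty() no
longer needs to lock per dirty bit:

/*
 * Rough sketch only -- not the actual patch hunk.  Sender side: in
 * ram_save_iterate(), hold bitmap_mutex across the whole send loop
 * (already bounded to ~50ms by the MAX_WAIT logic), instead of
 * locking inside migration_bitmap_clear_dirty() for every dirty bit.
 */
qemu_mutex_lock(&rs->bitmap_mutex);
while (qemu_file_rate_limit(f) == 0 ||
       !QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
    int pages = ram_find_and_save_block(rs, false);

    if (pages <= 0) {
        break;          /* nothing left to send (or an error) */
    }
    /* byte accounting and the MAX_WAIT time check elided */
}
qemu_mutex_unlock(&rs->bitmap_mutex);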



