From: Wang, Wei W
Subject: RE: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Fri, 2 Jul 2021 02:29:41 +0000

On Thursday, July 1, 2021 8:51 PM, Peter Xu wrote:
> On Thu, Jul 01, 2021 at 04:42:38AM +0000, Wang, Wei W wrote:
> > On Thursday, July 1, 2021 4:08 AM, Peter Xu wrote:
> > > Taking the mutex for every single dirty bit to clear is too slow,
> > > especially since we take/release it even when the dirty bit is already
> > > clear.  So far it's only used to sync the special case of
> > > qemu_guest_free_page_hint() against the migration thread, nothing really
> > > that serious yet.  Let's move the lock up to the callers.
> > >
> > > There're two callers of migration_bitmap_clear_dirty().
> > >
> > > For migration, move it into ram_save_iterate().  With the help of the
> > > MAX_WAIT logic, ram_save_iterate() only runs for roughly 50ms at a time,
> > > so take the lock once at its entry.  It also means any caller of
> > > qemu_guest_free_page_hint() can be delayed; but that should be very
> > > rare, only during migration, and I don't see a problem with it.
> > >
> > > For COLO, move it up into colo_flush_ram_cache().  I think COLO forgot
> > > to take that lock even when calling ramblock_sync_dirty_bitmap(),
> > > whereas migration_bitmap_sync(), for example, takes it correctly.  So
> > > let the mutex cover both the ramblock_sync_dirty_bitmap() and
> > > migration_bitmap_clear_dirty() calls.
> > >
> > > It's even possible to drop the lock and use atomic operations on
> > > rb->bmap and the variable migration_dirty_pages.  I didn't do that,
> > > just to stay on the safe side; it's also hard to predict whether the
> > > frequent atomic ops could bring overhead too, e.g. on huge VMs when it
> > > happens very often.  When that really becomes an issue, we can keep a
> > > local counter and periodically call atomic ops.  Keep it simple for now.
> > >
> >
> > If free page opt is enabled, a 50ms wait might be too long for handling
> > just one hint (via qemu_guest_free_page_hint)?
> > How about making the lock conditional?
> > e.g.
> > #define QEMU_LOCK_GUARD_COND(lock, cond) { \
> >     if (cond)                              \
> >         QEMU_LOCK_GUARD(lock);             \
> > }
> > Then in migration_bitmap_clear_dirty:
> > QEMU_LOCK_GUARD_COND(&rs->bitmap_mutex, rs->fpo_enabled);
> 
> Yeah, that's indeed the kind of comment I'd like to get from either you or
> David when I add the cc list.. :)
> 
> I was curious how that would affect the guest when the free page hint
> helper can get stuck for a while.  Per my understanding it's fully async,
> as the blocked thread here runs asynchronously from the guest, since both
> virtio-balloon and virtio-mem are fully async.  If so, would it really
> affect the guest a lot?  Is it still tolerable if it only happens during
> migration?

Yes, it is async and won't block the guest. But it keeps the optimization
from working as expected.
The intention is to have the migration thread skip the transfer of free
pages, but now the migration thread holds the lock for up to 50ms, which
blocks the clearing of free pages while it is likely sending those very
free pages inside the lock.
(The reported free pages should be cleared from the bitmap promptly, before
they end up being sent.)
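
For reference, the hint side cannot drop a single bit until it gets
bitmap_mutex, so a ~50ms hold in ram_save_iterate() delays every hint by up
to that much.  A pared-down sketch of qemu_guest_free_page_hint() in
migration/ram.c (the real function also loops over RAMBlock boundaries and
checks the migration state, all omitted here):

/*
 * Simplified sketch, not the exact upstream code: the guest-reported free
 * range can only be dropped from the dirty bitmap after bitmap_mutex is
 * taken, so while the migration thread holds the mutex it may keep sending
 * exactly the pages we are trying to clear here.
 */
void qemu_guest_free_page_hint(void *addr, size_t len)
{
    RAMBlock *block;
    ram_addr_t offset;
    size_t start, npages;

    block = qemu_ram_block_from_host(addr, false, &offset);
    start = offset >> TARGET_PAGE_BITS;
    npages = len >> TARGET_PAGE_BITS;

    qemu_mutex_lock(&ram_state->bitmap_mutex);
    ram_state->migration_dirty_pages -=
        bitmap_count_one_with_offset(block->bmap, start, npages);
    bitmap_clear(block->bmap, start, npages);
    qemu_mutex_unlock(&ram_state->bitmap_mutex);
}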

> 
> Taking that mutex for each dirty bit is still overkill to me, regardless of
> whether it's "conditional" or not.

With that, if free page opt is off, the mutex is skipped, isn't it?
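
Just to make the idea concrete, roughly what I have in mind (a sketch only,
reusing the existing rs->fpo_enabled flag; the clear_bmap / clear-dirty-log
handling of the real migration_bitmap_clear_dirty() is omitted):

/* Sketch: take bitmap_mutex per bit only when free page hinting is active;
 * otherwise there is no concurrent clearer and the lock can be skipped. */
static inline bool migration_bitmap_clear_dirty(RAMState *rs,
                                                RAMBlock *rb,
                                                unsigned long page)
{
    bool ret;

    if (rs->fpo_enabled) {
        qemu_mutex_lock(&rs->bitmap_mutex);
    }
    ret = test_and_clear_bit(page, rb->bmap);
    if (ret) {
        rs->migration_dirty_pages--;
    }
    if (rs->fpo_enabled) {
        qemu_mutex_unlock(&rs->bitmap_mutex);
    }
    return ret;
}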

> If I were the cloud admin, I would prefer that migration finishes earlier,
> imho, rather than freeing some more pages on the host (after migration all
> pages will be gone!).  If it still blocks the guest in some unhealthy way I
> would still prefer to take the lock here, but maybe hold it for less than
> 50ms.
> 

Yes, with the optimization, migration will finish earlier.
Why would it need to free pages on the host?
(It just skips sending the page.)
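
That is, once a hint clears a page's bit in time, the send loop simply never
picks that page up.  The relevant check in the send path (pared down from
the ram_save_host_page() loop; postcopy and error handling omitted) looks
roughly like:

    /* Pared-down sketch: a page whose bit was already cleared by the hint
     * is treated as clean and nothing is put on the wire for it. */
    if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
        /* still dirty: transfer it */
        tmppages = ram_save_target_page(rs, pss, last_stage);
    } else {
        /* bit already clear (e.g. reported free): skip the page */
        tmppages = 0;
    }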

Best,
Wei



