qemu-devel

From: Wang, Wei W
Subject: RE: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Thu, 8 Jul 2021 02:49:51 +0000

On Thursday, July 8, 2021 12:44 AM, Peter Xu wrote:
> > > Not to mention the hard migration issues are mostly with non-idle
> > > guest, in that case having the balloon in the guest will be
> > > disastrous from this pov since it'll start to take mutex for each
> > > page, while balloon would hardly report anything valid since most
> > > guest pages are being used.
> >
> > If no pages are reported, migration thread wouldn't wait on the lock then.
> 
> Yes I think this is the place I didn't make myself clear.  It's not about
> sleeping, it's about the cmpxchg being expensive already when the vm is huge.

OK.
How did you root-cause it to the cmpxchg itself, rather than to lock contention
(i.e. the syscall and sleep) or to some other code inside pthread_mutex_lock()?
Do you have cycle numbers for the cmpxchg vs. pthread_mutex_lock()?
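For reference, the kind of numbers I mean could come from a small standalone
test like the sketch below (not QEMU code; __rdtsc() so x86-only, and it only
measures the uncontended fast paths):

/* Rough, standalone comparison of the uncontended fast paths:
 * one successful cmpxchg vs. one pthread_mutex_lock()/unlock() pair.
 * x86-only (__rdtsc); build with: gcc -O2 -lpthread
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

#define ITERS 1000000

int main(void)
{
    atomic_int word = 0;
    pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    uint64_t t0, t1;

    /* cmpxchg: toggle the word 0 <-> 1 so every attempt succeeds */
    t0 = __rdtsc();
    for (int i = 0; i < ITERS; i++) {
        int expected = i & 1;
        atomic_compare_exchange_strong(&word, &expected, (i & 1) ^ 1);
    }
    t1 = __rdtsc();
    printf("cmpxchg:           %.1f cycles/op\n", (double)(t1 - t0) / ITERS);

    /* uncontended mutex lock + unlock pair */
    t0 = __rdtsc();
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&mtx);
        pthread_mutex_unlock(&mtx);
    }
    t1 = __rdtsc();
    printf("mutex lock+unlock: %.1f cycles/op\n", (double)(t1 - t0) / ITERS);

    return 0;
}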

I checked the implementation of pthread_mutex_lock(). The code path for
acquiring the lock is long; QemuSpin looks more efficient.
(We probably also don't want the migration thread to sleep in any case.)
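To make it concrete, below is only a sketch of what I mean, not a tested
change (based on my reading of migration/ram.c; the bitmap_spin field is
made up here, and the clear_bmap handling is elided):

/* Sketch only: a per-page QemuSpin in place of the QemuMutex.
 * "bitmap_spin" is a hypothetical RAMState field (the current code
 * takes the QemuMutex rs->bitmap_mutex here); clear_bmap handling elided.
 * qemu_spin_lock() busy-waits in user space, so the migration thread
 * never goes through the futex slow path or sleeps on this lock.
 */
static inline bool migration_bitmap_clear_dirty(RAMState *rs,
                                                RAMBlock *rb,
                                                unsigned long page)
{
    bool ret;

    qemu_spin_lock(&rs->bitmap_spin);    /* was qemu_mutex_lock(&rs->bitmap_mutex) */
    ret = test_and_clear_bit(page, rb->bmap);
    if (ret) {
        rs->migration_dirty_pages--;
    }
    qemu_spin_unlock(&rs->bitmap_spin);  /* was qemu_mutex_unlock(&rs->bitmap_mutex) */

    return ret;
}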

I think it would also be better to compare migration throughput (i.e. pages
per second) in the following cases before we make a decision:
- per-page mutex
- per-page spinlock
- 50-ms mutex (roughly sketched after this list)
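
For the third case, what I have in mind is roughly the sketch below (names
other than bitmap_mutex are invented for illustration; the point is just that
the migration thread holds the mutex across the scan and drops it periodically
so the free-page-hint path can take it):

/* Sketch only: "50-ms mutex" variant.  The migration thread keeps
 * rs->bitmap_mutex held while walking the dirty bitmap, but releases
 * and re-acquires it about every 50 ms so qemu_guest_free_page_hint()
 * is not starved.  The loop shape and the _locked helper are
 * hypothetical, for illustration only.
 */
int64_t last_drop = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);

qemu_mutex_lock(&rs->bitmap_mutex);
for (page = 0; page < rb->used_length >> TARGET_PAGE_BITS; page++) {
    migration_bitmap_clear_dirty_locked(rs, rb, page);  /* hypothetical helper */

    if (qemu_clock_get_ms(QEMU_CLOCK_REALTIME) - last_drop > 50) {
        qemu_mutex_unlock(&rs->bitmap_mutex);   /* window for the hint thread */
        qemu_mutex_lock(&rs->bitmap_mutex);
        last_drop = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
    }
}
qemu_mutex_unlock(&rs->bitmap_mutex);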

Best,
Wei

