
Re: [PATCH v2] migration: Count new_dirty instead of real_dirty


From: Dr. David Alan Gilbert
Subject: Re: [PATCH v2] migration: Count new_dirty instead of real_dirty
Date: Tue, 16 Jun 2020 10:58:46 +0100
User-agent: Mutt/1.14.0 (2020-05-02)

* zhukeqian (zhukeqian1@huawei.com) wrote:
> Hi Dave,
> 
> On 2020/6/16 17:35, Dr. David Alan Gilbert wrote:
> > * Keqian Zhu (zhukeqian1@huawei.com) wrote:
> >> real_dirty_pages becomes equal to the total RAM size after the dirty
> >> log sync in ram_init_bitmaps. The reason is that the bitmap of the
> >> ramblock is initialized to all set, so the old path counts those pages
> >> as "real dirty" at the beginning.
> >>
> >> This causes a wrong dirty rate and false-positive throttling at the
> >> end of the first RAM save iteration.
> >>
> >> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
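
For context, here is a minimal, self-contained C sketch of the counting
problem described in that commit message. The names, and the one byte per
page standing in for a bitmap, are simplified illustrations rather than the
actual QEMU code: when everything starts out set at the first sync, the old
path's "real dirty" count adds up to all of RAM, while the count of pages
that newly become dirty stays at zero.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NPAGES 8 /* toy "RAM" of 8 pages, one byte per page */

/* Simplified stand-in for the per-RAMBlock dirty bitmap sync: it returns
 * the number of pages whose entry newly flips to set in dest (num_dirty)
 * and, on the old path, also accumulates every set source entry into
 * *real_dirty_pages. */
uint64_t sync_dirty_bitmap(const uint8_t *src, uint8_t *dest,
                           uint64_t *real_dirty_pages)
{
    uint64_t num_dirty = 0;

    for (int i = 0; i < NPAGES; i++) {
        if (src[i]) {
            (*real_dirty_pages)++;   /* old path: counts already-set pages too */
            if (!dest[i]) {
                dest[i] = 1;
                num_dirty++;
            }
        }
    }
    return num_dirty;
}

int main(void)
{
    /* First sync right after init: everything starts out set, mirroring
     * the "initialized to all set" situation described above. */
    uint8_t log[NPAGES];
    uint8_t bmap[NPAGES];
    uint64_t real_dirty_pages = 0;

    memset(log, 1, sizeof(log));
    memset(bmap, 1, sizeof(bmap));

    uint64_t num_dirty = sync_dirty_bitmap(log, bmap, &real_dirty_pages);

    /* Prints real_dirty=8 new_dirty=0: deriving the dirty rate from
     * real_dirty_pages counts all of RAM as dirtied in the first period. */
    printf("real_dirty=%" PRIu64 " new_dirty=%" PRIu64 "\n",
           real_dirty_pages, num_dirty);
    return 0;
}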
> > 
> > Since this function already returns num_dirty, why not just change the
> > caller to increment a counter based off the return value?
> Yes, that would be better :-) .
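
A rough sketch of what that caller-side change could look like, building on
the toy sync_dirty_bitmap() helper above. RAMStateSketch and the surrounding
wiring are illustrative only, not the actual migration/ram.c caller (which
uses RAMState and the rs->num_dirty_pages_period counter mentioned further
down):

#include <stdint.h>

/* Toy sync helper from the previous sketch (prototype only). */
uint64_t sync_dirty_bitmap(const uint8_t *src, uint8_t *dest,
                           uint64_t *real_dirty_pages);

/* Illustrative caller: accumulate the helper's return value (pages that
 * newly became dirty) instead of having the helper bump a counter
 * through a pointer. */
struct RAMStateSketch {
    uint64_t migration_dirty_pages;   /* pages still to be sent */
    uint64_t num_dirty_pages_period;  /* feeds the dirty-rate calculation */
};

void ramblock_sync_sketch(struct RAMStateSketch *rs,
                          const uint8_t *src, uint8_t *dest)
{
    uint64_t unused = 0;
    uint64_t new_dirty = sync_dirty_bitmap(src, dest, &unused);

    rs->migration_dirty_pages += new_dirty;
    rs->num_dirty_pages_period += new_dirty;  /* new_dirty, not real_dirty */
}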
> 
> > 
> > Can you point to the code which uses this value to trigger the
> > throttle?
> > 
> In migration_trigger_throttle(), rs->num_dirty_pages_period is used,
> and it corresponds to real_dirty_pages here.
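
For context, the throttle decision is roughly of the following shape; this
is a simplified sketch with made-up names and an illustrative 50% threshold,
not the exact QEMU condition or tunables. With a check of this shape, an
inflated num_dirty_pages_period in the first period can trip the throttle
even though the guest dirtied very little memory.

#include <stdbool.h>
#include <stdint.h>

#define SKETCH_PAGE_SIZE 4096
#define SKETCH_THRESHOLD_PCT 50   /* illustrative threshold */

/* Simplified sketch of the trigger: throttle the guest when the pages
 * dirtied in the last period amount to more bytes than a fraction of
 * the bytes actually transferred in that period. */
bool should_throttle_sketch(uint64_t num_dirty_pages_period,
                            uint64_t bytes_xfer_period)
{
    uint64_t bytes_dirty_period = num_dirty_pages_period * SKETCH_PAGE_SIZE;

    return bytes_dirty_period >
           bytes_xfer_period * SKETCH_THRESHOLD_PCT / 100;
}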

OK; so isn't the problem the same as the one covered by the existing
blk_mig_bulk_active() check? Don't we need to do the same trick for RAM
bulk migration (i.e. the first pass)?

Dave
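
A hedged sketch of that idea, reusing the illustrative
should_throttle_sketch() above. The ram_first_pass flag and its wiring are
hypothetical; this only shows the shape of "skip the throttle decision while
the bulk/first pass is still sending the initially set bitmap":

#include <stdbool.h>
#include <stdint.h>

/* Illustrative trigger from the previous sketch (prototype only). */
bool should_throttle_sketch(uint64_t num_dirty_pages_period,
                            uint64_t bytes_xfer_period);

/* Hypothetical gate: don't throttle while block migration's bulk phase
 * or the first RAM pass is in progress, analogous to the existing
 * blk_mig_bulk_active() exemption. */
bool maybe_throttle_sketch(bool blk_bulk_active, bool ram_first_pass,
                           uint64_t num_dirty_pages_period,
                           uint64_t bytes_xfer_period)
{
    if (blk_bulk_active || ram_first_pass) {
        return false;  /* initial copy would inflate the dirty numbers */
    }
    return should_throttle_sketch(num_dirty_pages_period, bytes_xfer_period);
}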

> Thanks,
> Keqian
> 
> > Dave
> > 
> > 
> [...]
> >>
> >>
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > 
> > .
> > 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



