From: Wang, Wei W
Subject: RE: [PATCH v2] migration: clear the memory region dirty bitmap when skipping free pages
Date: Fri, 16 Jul 2021 06:15:06 +0000

On Thursday, July 15, 2021 5:29 PM, David Hildenbrand wrote:
> On 15.07.21 09:53, Wei Wang wrote:
> > When skipping free pages to send, their corresponding dirty bits in
> > the memory region dirty bitmap need to be cleared. Otherwise the
> > skipped pages will be sent in the next round after the migration
> > thread syncs dirty bits from the memory region dirty bitmap.
> >
> > Cc: David Hildenbrand <david@redhat.com>
> > Cc: Peter Xu <peterx@redhat.com>
> > Cc: Michael S. Tsirkin <mst@redhat.com>
> > Reported-by: David Hildenbrand <david@redhat.com>
> > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > ---
> >   migration/ram.c | 72 ++++++++++++++++++++++++++++++++++++-------------
> >   1 file changed, 54 insertions(+), 18 deletions(-)
> >
> > v1->v2 changelog:
> > - move migration_clear_memory_region_dirty_bitmap under bitmap_mutex as
> >   we lack confidence to have it outside the lock for now.
> > - drop the unnecessary subproject commit.
> >
> > diff --git a/migration/ram.c b/migration/ram.c
> > index b5fc454b2f..69e06b55ec 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -789,6 +789,51 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
> >       return find_next_bit(bitmap, size, start);
> >   }
> >
> > +static void migration_clear_memory_region_dirty_bitmap(RAMState *rs,
> > +                                                        RAMBlock *rb,
> > +                                                        unsigned long page)
> > +{
> > +    uint8_t shift;
> > +    hwaddr size, start;
> > +
> > +    if (!rb->clear_bmap || !clear_bmap_test_and_clear(rb, page)) {
> > +        return;
> > +    }
> > +
> > +    shift = rb->clear_bmap_shift;
> 
> You could initialize this right at the beginning of the function without
> doing any harm.
> 
> > +    /*
> > +     * CLEAR_BITMAP_SHIFT_MIN should always guarantee this... this
> > +     * can make things easier sometimes since then start address
> > +     * of the small chunk will always be 64 pages aligned so the
> > +     * bitmap will always be aligned to unsigned long. We should
> > +     * even be able to remove this restriction but I'm simply
> > +     * keeping it.
> > +     */
> > +    assert(shift >= 6);
> > +
> > +    size = 1ULL << (TARGET_PAGE_BITS + shift);
> > +    start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);
> 
> these as well.

Is there any coding style requirement for this?
My thought was that those computations can mostly be avoided when the above if
condition takes the early return, so they only run roughly once per chunk
(e.g. once per 1GB chunk).
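
To spell out the cost argument with a toy model (nothing below is QEMU API,
just made-up names to show the control flow): the early return runs once per
page, while the arithmetic after it runs only once per chunk.

#include <stdbool.h>
#include <stdio.h>

#define CHUNK_PAGES (1UL << 18)        /* 1GB chunk of 4KB pages, as discussed */

static bool chunk_pending[2] = { true, true };

/* stand-in for clear_bmap_test_and_clear(): true only once per chunk */
static bool test_and_clear_chunk(unsigned long page)
{
    unsigned long chunk = page / CHUNK_PAGES;
    bool was_pending = chunk_pending[chunk];

    chunk_pending[chunk] = false;
    return was_pending;
}

int main(void)
{
    unsigned long arithmetic_runs = 0;

    for (unsigned long page = 0; page < 2 * CHUNK_PAGES; page++) {
        if (!test_and_clear_chunk(page)) {
            continue;                  /* common case: skip the computation */
        }
        arithmetic_runs++;             /* stands in for the shift/size/start work */
    }
    /* prints: 2 runs for 524288 pages */
    printf("%lu runs for %lu pages\n", arithmetic_runs, 2 * CHUNK_PAGES);
    return 0;
}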

> 
> > +    trace_migration_bitmap_clear_dirty(rb->idstr, start, size, page);
> > +    memory_region_clear_dirty_bitmap(rb->mr, start, size);
> > +}
> > +
> > +static void
> > +migration_clear_memory_region_dirty_bitmap_range(RAMState *rs,
> > +                                                 RAMBlock *rb,
> > +                                                 unsigned long start,
> > +                                                 unsigned long npages)
> > +{
> > +    unsigned long page_to_clear, i, nchunks;
> > +    unsigned long chunk_pages = 1UL << rb->clear_bmap_shift;
> > +
> > +    nchunks = (start + npages) / chunk_pages - start / chunk_pages + 1;
> 
> Wouldn't you have to align the start and the end range up/down to properly
> calculate the number of chunks?

No, the integer division rounds down, giving the chunk index (i.e. the
beginning of the chunk to clear).
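
To put made-up numbers on it (assuming 4KB target pages and the default 1GB
clear-bmap chunk, i.e. 262144 pages per chunk):

#include <stdio.h>

int main(void)
{
    /* hypothetical values, only to show the truncating division */
    unsigned long chunk_pages = 1UL << 18;           /* 262144 pages = 1GB */
    unsigned long start = 300000, npages = 400000;   /* pages [300000, 700000) */

    /* start / chunk_pages = 1, (start + npages) / chunk_pages = 2 */
    unsigned long nchunks = (start + npages) / chunk_pages - start / chunk_pages + 1;

    /* prints 2: the chunks starting at pages 262144 and 524288 */
    printf("nchunks = %lu\n", nchunks);
    return 0;
}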

> 
> The following might be better and a little easier to grasp:
> 
> unsigned long chunk_pages = 1ULL << rb->clear_bmap_shift;
> unsigned long aligned_start = QEMU_ALIGN_DOWN(start, chunk_pages);
> unsigned long aligned_end = QEMU_ALIGN_UP(start + npages, chunk_pages);
> 
> /*
>  * Clear the clear_bmap of all covered chunks. It's sufficient to call it
>  * for one page within a chunk.
>  */
> for (start = aligned_start; start != aligned_end; start += chunk_pages) {

What if "aligned_end == start + npages"?
i.e the above start + npages is aligned by itself without QEMU_ALIGN_UP().
For example, chunk size is 1GB, and start+npages=2GB, which is right at the 
beginning of [2GB,3GB) chunk.
Then aligned_end is also 2GB, but we need to clear the [2GB, 3GB) chunk, right?
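
For concreteness (made-up numbers, same 1GB-chunk/4KB-page assumption as
above; the ALIGN_UP macro below just mimics QEMU_ALIGN_UP):

#include <stdio.h>

/* same rounding behaviour as QEMU's QEMU_ALIGN_UP() */
#define ALIGN_UP(n, m) (((n) + (m) - 1) / (m) * (m))

int main(void)
{
    unsigned long chunk_pages = 1UL << 18;           /* 262144 pages per 1GB chunk */
    unsigned long start = 262144, npages = 262144;   /* pages [1GB, 2GB) */
    unsigned long aligned_end = ALIGN_UP(start + npages, chunk_pages);
    unsigned long nchunks = (start + npages) / chunk_pages - start / chunk_pages + 1;

    /*
     * start + npages = 524288 is already chunk aligned, so aligned_end stays
     * at 524288 and the "start != aligned_end" loop never visits the chunk
     * that begins at page 524288, while nchunks = 2 - 1 + 1 = 2 would also
     * cover that chunk.
     */
    printf("aligned_end = %lu, nchunks = %lu\n", aligned_end, nchunks);
    return 0;
}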

Best,
Wei
