From: Peter Xu
Subject: Re: [PATCH v3 6/7] migration/postcopy: Handle RAMBlocks with a RamDiscardManager on the destination
Date: Thu, 5 Aug 2021 08:52:01 -0400
On Thu, Aug 05, 2021 at 10:10:38AM +0200, David Hildenbrand wrote:
> On 05.08.21 02:04, Peter Xu wrote:
> > On Fri, Jul 30, 2021 at 10:52:48AM +0200, David Hildenbrand wrote:
> > > Currently, when someone (i.e., the VM) accesses discarded parts inside a
> > > RAMBlock with a RamDiscardManager managing the corresponding mapped memory
> > > region, postcopy will request migration of the corresponding page from the
> > > source. The source, however, will never answer, because it refuses to
> > > migrate such pages with undefined content ("logically unplugged"): the
> > > pages are never dirty, and get_queued_page() will consequently skip
> > > processing these postcopy requests.
> > >
> > > In particular, reading discarded ("logically unplugged") ranges is supposed
> > > to work in some setups (for example, with current virtio-mem), although it
> > > barely ever happens: still, not placing a page would currently stall the
> > > VM, as it cannot make forward progress.
> > >
> > > Let's check the state via the RamDiscardManager (the state e.g.,
> > > of virtio-mem is migrated during precopy) and avoid sending a request
> > > that will never get answered. Place a fresh zero page instead to keep
> > > the VM working. This is the same behavior that would happen
> > > automatically without userfaultfd being active, when accessing virtual
> > > memory regions without populated pages -- "populate on demand".
> > >
> > > For now, there are valid cases (as documented in the virtio-mem spec) where
> > > a VM might read discarded memory; in the future, we will disallow that.
> > > Then, we might want to handle that case differently, e.g., by warning the
> > > user that the VM seems to be misbehaving.
> > >
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> > > ---
> > > migration/postcopy-ram.c | 31 +++++++++++++++++++++++++++----
> > > migration/ram.c | 21 +++++++++++++++++++++
> > > migration/ram.h | 1 +
> > > 3 files changed, 49 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
> > > index 2e9697bdd2..38cdfc09c3 100644
> > > --- a/migration/postcopy-ram.c
> > > +++ b/migration/postcopy-ram.c
> > > @@ -671,6 +671,29 @@ int postcopy_wake_shared(struct PostCopyFD *pcfd,
> > >      return ret;
> > >  }
> > > 
> > > +static int postcopy_request_page(MigrationIncomingState *mis, RAMBlock *rb,
> > > +                                 ram_addr_t start, uint64_t haddr)
> > > +{
> > > +    void *aligned = (void *)(uintptr_t)(haddr & -qemu_ram_pagesize(rb));
> > > +
> > > +    /*
> > > +     * Discarded pages (via RamDiscardManager) are never migrated. On unlikely
> > > +     * access, place a zeropage, which will also set the relevant bits in the
> > > +     * recv_bitmap accordingly, so we won't try placing a zeropage twice.
> > > +     *
> > > +     * Checking a single bit is sufficient to handle pagesize > TPS as either
> > > +     * all relevant bits are set or not.
> > > +     */
> > > +    assert(QEMU_IS_ALIGNED(start, qemu_ram_pagesize(rb)));
> >
> > Is this check for ramblock_page_is_discarded()?  If so, shall we move this
> > into it, e.g., after memory_region_has_ram_discard_manager() returned true?
> >
>
> It also has to hold true when calling migrate_send_rp_req_pages().
> 
> Both callers -- postcopy_request_shared_page() and
> postcopy_ram_fault_thread() -- properly align the offset down (but not
> the host address). This check is just to make sure we don't mess up in
> the future.
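
In code, the caller-side contract described above is roughly the following
(a minimal sketch with an illustrative wrapper name, not the literal patch
code; ROUND_DOWN and qemu_ram_pagesize() are the existing QEMU helpers):

/*
 * Sketch only: callers align the RAMBlock offset down to the block's page
 * size, while the host address is passed through unaligned.
 */
static int request_faulted_page(MigrationIncomingState *mis, RAMBlock *rb,
                                ram_addr_t rb_offset, uint64_t haddr)
{
    /*
     * Align the offset down, as postcopy_request_shared_page() and
     * postcopy_ram_fault_thread() do before requesting the page.
     */
    rb_offset = ROUND_DOWN(rb_offset, qemu_ram_pagesize(rb));

    /* postcopy_request_page() asserts exactly this alignment on entry. */
    return postcopy_request_page(mis, rb, rb_offset, haddr);
}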
OK.
Reviewed-by: Peter Xu <peterx@redhat.com>
--
Peter Xu
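
The quoted hunk is trimmed at the assert. Based on the commit message and
the helpers named in this thread (ramblock_page_is_discarded(),
migrate_send_rp_req_pages(); postcopy_place_page_zero() exists elsewhere in
postcopy-ram.c), the remainder of postcopy_request_page() plausibly
dispatches along these lines -- a reconstruction under those assumptions,
not the verbatim patch:

static int postcopy_request_page(MigrationIncomingState *mis, RAMBlock *rb,
                                 ram_addr_t start, uint64_t haddr)
{
    void *aligned = (void *)(uintptr_t)(haddr & -qemu_ram_pagesize(rb));

    assert(QEMU_IS_ALIGNED(start, qemu_ram_pagesize(rb)));

    if (ramblock_page_is_discarded(rb, start)) {
        /*
         * The source refuses to migrate discarded pages, so a request
         * would never be answered. Place a zeropage locally instead;
         * doing so also sets the page's recv_bitmap bits, so a second
         * fault on the same page won't try to place it again.
         */
        return postcopy_place_page_zero(mis, aligned, rb);
    }

    /* Normal case: ask the source to migrate this page. */
    return migrate_send_rp_req_pages(mis, rb, start, haddr);
}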