

From: David Hildenbrand
Subject: Re: [PATCH v3 6/7] migration/postcopy: Handle RAMBlocks with a RamDiscardManager on the destination
Date: Thu, 5 Aug 2021 10:20:58 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

On 05.08.21 10:17, Philippe Mathieu-Daudé wrote:
On 8/5/21 10:07 AM, David Hildenbrand wrote:
On 05.08.21 09:48, Philippe Mathieu-Daudé wrote:
On 7/30/21 10:52 AM, David Hildenbrand wrote:
Currently, when someone (i.e., the VM) accesses discarded parts inside a
RAMBlock with a RamDiscardManager managing the corresponding mapped memory
region, postcopy will request migration of the corresponding page from the
source. The source, however, will never answer, because it refuses to
migrate such pages with undefined content ("logically unplugged"): the
pages are never dirty, and get_queued_page() will consequently skip
processing these postcopy requests.

Especially reading discarded ("logically unplugged") ranges is supposed to
work in some setups (for example with current virtio-mem), although it
barely ever happens: still, not placing a page would currently stall the
VM, as it cannot make forward progress.

Let's check the state via the RamDiscardManager (the state, e.g., of
virtio-mem, is migrated during precopy) and avoid sending a request that
will never get answered. Place a fresh zero page instead to keep the VM
working. This is the same behavior that would happen automatically without
userfaultfd being active, when accessing virtual memory regions without
populated pages -- "populate on demand".

For now, there are valid cases (as documented in the virtio-mem spec)
where a VM might read discarded memory; in the future, we will disallow
that. Then, we might want to handle that case differently, e.g., warning
the user that the VM seems to be mis-behaving.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
   migration/postcopy-ram.c | 31 +++++++++++++++++++++++++++----
   migration/ram.c          | 21 +++++++++++++++++++++
   migration/ram.h          |  1 +
   3 files changed, 49 insertions(+), 4 deletions(-)

diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 2e9697bdd2..38cdfc09c3 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -671,6 +671,29 @@ int postcopy_wake_shared(struct PostCopyFD *pcfd,
     return ret;
 }
 
+static int postcopy_request_page(MigrationIncomingState *mis, RAMBlock *rb,
+                                 ram_addr_t start, uint64_t haddr)
+{
+    void *aligned = (void *)(uintptr_t)(haddr & -qemu_ram_pagesize(rb));
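
(The hunk is cut off here. Roughly, the rest of the approach described in the
commit message might look like the sketch below; the helper name
ramblock_page_is_discarded() is only a guess suggested by the diffstat, while
ram_discard_manager_is_populated(), postcopy_place_page_zero() and
migrate_send_rp_req_pages() are existing QEMU helpers. Details may well
differ from the actual patch.)

    /* In migration/ram.c (sketch): query the RamDiscardManager state. */
    bool ramblock_page_is_discarded(RAMBlock *rb, ram_addr_t start)
    {
        if (rb->mr && memory_region_has_ram_discard_manager(rb->mr)) {
            RamDiscardManager *rdm = memory_region_get_ram_discard_manager(rb->mr);
            MemoryRegionSection section = {
                .mr = rb->mr,
                .offset_within_region = start,
                .size = int128_make64(qemu_ram_pagesize(rb)),
            };

            /* Discarded means "not populated" from the RDM's point of view. */
            return !ram_discard_manager_is_populated(rdm, &section);
        }

        return false;
    }

    /* Back in postcopy_request_page() (sketch): */
    if (ramblock_page_is_discarded(rb, start)) {
        /*
         * The source will never send this page; place a zero page so the
         * VM can make forward progress ("populate on demand").
         */
        return postcopy_place_page_zero(mis, aligned, rb);
    }

    return migrate_send_rp_req_pages(mis, rb, start, haddr);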

    void *aligned = QEMU_ALIGN_PTR_DOWN(haddr, qemu_ram_pagesize(rb));


Does not compile as haddr is not a pointer.

I suppose the typeof() fails?

/* n-byte align pointer down */
#define QEMU_ALIGN_PTR_DOWN(p, n) \
     ((typeof(p))QEMU_ALIGN_DOWN((uintptr_t)(p), (n)))


void *aligned = QEMU_ALIGN_PTR_DOWN((void *)haddr, qemu_ram_pagesize(rb));

should work.

What about

void *aligned = QEMU_ALIGN_DOWN(haddr, qemu_ram_pagesize(rb));

or else

void *aligned = (void *)QEMU_ALIGN_DOWN(haddr, qemu_ram_pagesize(rb));

That works as well, and I'll use that for now.

At some point we should just pass a pointer instead of a uint64_t for the host address.


I don't much mind which style you prefer, as long as it compiles :p
But these QEMU_ALIGN_DOWN() macros make the review easier than (a & -b).


Yes, absolutely. I'll add a patch converting a bunch of such alignments in migration code.
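
For reference, a minimal standalone illustration of the alignment variants
discussed above (the macro bodies are copied from QEMU's
include/qemu/osdep.h; the address value is made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Macro bodies as in QEMU's include/qemu/osdep.h */
    #define QEMU_ALIGN_DOWN(n, m) ((n) / (m) * (m))
    #define QEMU_ALIGN_PTR_DOWN(p, n) \
        ((typeof(p))QEMU_ALIGN_DOWN((uintptr_t)(p), (n)))

    int main(void)
    {
        uint64_t haddr = 0x7f0000001234ULL;   /* made-up host address */
        uint64_t pagesize = 4096;

        /*
         * QEMU_ALIGN_PTR_DOWN(haddr, pagesize) expands to
         * (typeof(haddr))..., i.e. a uint64_t, so assigning the result to
         * a "void *" is rejected -- "does not compile as haddr is not a
         * pointer".
         */

        /* Variant 1: make it a pointer first, then align the pointer. */
        void *a1 = QEMU_ALIGN_PTR_DOWN((void *)(uintptr_t)haddr, pagesize);

        /* Variant 2: align the integer, then cast the result. */
        void *a2 = (void *)(uintptr_t)QEMU_ALIGN_DOWN(haddr, pagesize);

        printf("%p %p\n", a1, a2);
        return 0;
    }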

--
Thanks,

David / dhildenb



