From: Pankaj Gupta
Subject: [RFC] virtio_pmem: enable live migration support
Date: Fri, 31 Dec 2021 13:01:27 +0100

From: Pankaj Gupta <pankaj.gupta.linux@gmail.com>

Enable live migration support for the virtio-pmem device.
Tested with live migration on the same host.

Need suggestions on the points below to support virtio-pmem live migration
between two separate host systems:

- There is still the possibility of stale page cache pages at the destination
  host, which we currently cannot invalidate as done in 1] for write-back mode,
  because the virtio-pmem memory backend file is mmaped into the guest address
  space and invalidating the corresponding page cache pages would also fault all
  the other userspace process mappings of the same file. Or should we make it
  strict that no other process may mmap this backing file? (A simplified sketch
  of the invalidation mechanism in 1] follows the reference below.)

  -- In commit 1] we first fsync and then invalidate all the pages from the
     destination page cache. fsync would sync the stale dirty page cache pages;
     is this the right thing to do, as we might end up with a data discrepancy?

- Alternatively, we could transfer information about the guest's corresponding
  active page cache pages (from the active LRU list) from the source to the
  destination host and refault those pages there. This would also help enable a
  hot page cache on the destination host for the guest and would solve the stale
  page cache issue as well. How can we achieve this so that we make sure we get
  rid of all the stale page cache pages on the destination host? (A rough sketch
  of one possible approach also follows the reference below.)

  Looking for suggestions on a recommended and feasible solution we can
  implement. Thank you!

1] dd577a26ff ("block/file-posix: implement bdrv_co_invalidate_cache() on Linux")

Signed-off-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
---
 hw/virtio/virtio-pmem.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
index d1aeb90a31..a19619a387 100644
--- a/hw/virtio/virtio-pmem.c
+++ b/hw/virtio/virtio-pmem.c
@@ -123,6 +123,7 @@ static void virtio_pmem_realize(DeviceState *dev, Error **errp)
     }
 
     host_memory_backend_set_mapped(pmem->memdev, true);
+    vmstate_register_ram(&pmem->memdev->mr, DEVICE(pmem));
     virtio_init(vdev, TYPE_VIRTIO_PMEM, VIRTIO_ID_PMEM,
                 sizeof(struct virtio_pmem_config));
     pmem->rq_vq = virtio_add_queue(vdev, 128, virtio_pmem_flush);
@@ -133,6 +134,7 @@ static void virtio_pmem_unrealize(DeviceState *dev)
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
     VirtIOPMEM *pmem = VIRTIO_PMEM(dev);
 
+    vmstate_unregister_ram(&pmem->memdev->mr, DEVICE(pmem));
     host_memory_backend_set_mapped(pmem->memdev, false);
     virtio_delete_queue(pmem->rq_vq);
     virtio_cleanup(vdev);
@@ -157,6 +159,16 @@ static MemoryRegion *virtio_pmem_get_memory_region(VirtIOPMEM *pmem,
     return &pmem->memdev->mr;
 }
 
+static const VMStateDescription vmstate_virtio_pmem = {
+    .name = "virtio-pmem",
+    .minimum_version_id = 1,
+    .version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_VIRTIO_DEVICE,
+        VMSTATE_END_OF_LIST()
+    },
+};
+
 static Property virtio_pmem_properties[] = {
     DEFINE_PROP_UINT64(VIRTIO_PMEM_ADDR_PROP, VirtIOPMEM, start, 0),
     DEFINE_PROP_LINK(VIRTIO_PMEM_MEMDEV_PROP, VirtIOPMEM, memdev,
@@ -171,6 +183,7 @@ static void virtio_pmem_class_init(ObjectClass *klass, void *data)
     VirtIOPMEMClass *vpc = VIRTIO_PMEM_CLASS(klass);
 
     device_class_set_props(dc, virtio_pmem_properties);
+    dc->vmsd = &vmstate_virtio_pmem;
 
     vdc->realize = virtio_pmem_realize;
     vdc->unrealize = virtio_pmem_unrealize;
-- 
2.25.1



