Re: [Qemu-devel] [PATCH] virtio pmem: user document
From: Cornelia Huck
Subject: Re: [Qemu-devel] [PATCH] virtio pmem: user document
Date: Tue, 30 Jul 2019 11:45:48 +0200
On Tue, 30 Jul 2019 12:16:57 +0530
Pankaj Gupta <address@hidden> wrote:
> This patch documents the steps to use virtio pmem.
> It also documents other useful information about
> virtio pmem e.g use-case, comparison with Qemu NVDIMM
> backend and current limitations.
>
> Signed-off-by: Pankaj Gupta <address@hidden>
> ---
> docs/virtio-pmem.txt | 65 ++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 65 insertions(+)
> create mode 100644 docs/virtio-pmem.txt
>
> diff --git a/docs/virtio-pmem.txt b/docs/virtio-pmem.txt
Maybe make this ReST from the start? Should be trivial enough.
> new file mode 100644
> index 0000000000..fc61eebb20
> --- /dev/null
> +++ b/docs/virtio-pmem.txt
> @@ -0,0 +1,65 @@
> +
> +QEMU virtio pmem
> +===================
> +
> + This document explains the usage of virtio pmem device
"setup and usage" ?
> + which is available since QEMU v4.1.0.
> +
> + The virtio pmem is paravirtualized persistent memory device
"The virtio pmem device is a paravirtualized..."
> + on regular(non-NVDIMM) storage.
> +
> +Usecase
> +--------
> + Allows to bypass the guest page cache and directly use host page cache.
> + This reduces guest memory footprint as host can make efficient memory
s/as host/, as the host/
> + reclaim decisions under memory pressure.
> +
> +o How does virtio-pmem compare to the nvdimm emulation supported by QEMU?
> +
> + NVDIMM emulation on regular(non-NVDIMM) host storage does not persists
s/regular(non-NVDIMM)/regular (i.e. non-NVDIMM)/ ?
s/persists/persist/
> + the guest writes as there are no defined semantecs in the device specification.
s/semantecs/semantics/
> + With virtio pmem device, guest write persistence on non-NVDIMM storage is
> + supported.
"The virtio pmem device provides a way to support guest write
persistence on non-NVDIMM storage." ?
> +
> +virtio pmem usage
> +-----------------
> + virtio pmem device is created with a memory-backend-file with the below
> + options:
"A virtio pmem device backed by a memory-backend-file can be created on
the QEMU command line as in the following example:" ?
> +
> + -machine pc -m 8G,slots=$N,maxmem=$MAX_SIZE
I'm not sure you should explicitly specify the machine type in this
example. I think it is fine to say that something is only supported on
a subset of machine types, but it should not make its way into an
example on how to configure a device and its backing.
Also, maybe fill in more concrete values here? Or split it into a part
specifying the syntax (where I'd use <max_size> instead of $MAX_SIZE
etc.), and a more concrete example?
> + -object memory-backend-file,id=mem1,share,mem-path=$PATH,size=$SIZE
> + -device virtio-pmem-pci,memdev=mem1,id=nv1
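For the more concrete example suggested above, maybe something like this
(paths and sizes invented just for illustration):

  -m 8G,slots=2,maxmem=16G
  -object memory-backend-file,id=mem1,share=on,mem-path=/tmp/virtio_pmem1.img,size=4G
  -device virtio-pmem-pci,memdev=mem1,id=nv1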
> +
> + where:
> + - "object
> memory-backend-file,id=mem1,share,mem-path=$PATH,size=$VIRTIO_PMEM_SIZE"
> + creates a backend storage of size $SIZE on a file $PATH. All
> + accesses to the virtio pmem device go to the file $PATH.
> +
> + - "device virtio-pmem-pci,id=nvdimm1,memdev=mem1" creates a virtio pmem
> + device whose storage is provided by above memory backend device.
"a virtio pmem PCI device" ?
> +
> + Multiple virtio pmem devices can be created if multiple pairs of "-object"
> + and "-device" are provided.
> +
> +Hotplug
> +-------
> +Accomplished by two monitor commands "object_add" and "device_add".
Hm... what about the following instead:
"Virtio pmem devices can be hotplugged via the QEMU monitor. First, the
memory backing has to be added via 'object_add'; afterwards, the virtio
pmem device can be added via 'device_add'."
> +
> +For example, the following commands add another 4GB virtio pmem device to
> +the guest:
> +
> + (qemu) object_add memory-backend-file,id=mem2,share=on,mem-path=virtio_pmem2.img,size=4G
> + (qemu) device_add virtio-pmem-pci,id=virtio_pmem2,memdev=mem2
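Maybe also mention how this shows up in the guest? IIUC, with the guest
virtio_pmem driver loaded the device appears as a /dev/pmemX block
device, so the hotplug could be verified with something like

  (guest) # lsblk /dev/pmem1

(the exact device name depends on the guest kernel's enumeration, of
course).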
> +
> +Guest Data Persistence
> +----------------------
> +Guest data persistence on non-NVDIMM requires guest userspace application to
s/application/applications/ ?
> +perform fsync/msync. This is different than real nvdimm backend where no
> additional
s/than/from a/ ?
> +fsync/msync is required for data persistence.
Should we be a bit more verbose on what guest applications are
supposed to do? I.e., how do they know they need to do fsync/msync,
when should they do it, and what are the consequences if they don't?
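Maybe a small guest-side sketch would help here; e.g. (assuming the
device shows up as /dev/pmem0 and the guest kernel supports DAX):

  (guest) # mkfs.ext4 /dev/pmem0
  (guest) # mount -o dax /dev/pmem0 /mnt
  (guest) # cp some-file /mnt/
  (guest) # sync   # or fsync()/msync() from within the application

plus a sentence saying that without the explicit flush, data still
sitting in the host page cache may be lost if the host crashes.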
> +
> +Limitations
> +------------
> +- Real nvdimm device backend is not supported.
> +- virtio pmem hotunplug is not supported.
> +- ACPI NVDIMM features like regions/namespaces are not supported.
> +- ndctl command is not supported.