Re: [RFC QEMU] docs: vhost-user: Add custom memory mapping support
From: Alex Bennée
Subject: Re: [RFC QEMU] docs: vhost-user: Add custom memory mapping support
Date: Fri, 24 Feb 2023 18:20:31 +0000
User-agent: mu4e 1.9.21; emacs 29.0.60
Viresh Kumar <viresh.kumar@linaro.org> writes:
> The current model of memory mapping at the back-end works fine with
> QEMU, where a standard call to mmap() for the respective file
> descriptor, passed from the front-end, is generally all we need to do
> before the back-end can start accessing the guest memory.
>
> There are more complex cases though, where we need extra information
> at the back-end and need to do more than just an mmap() call. For
> example, Xen, a type-1 hypervisor, currently supports memory mapping
> via two different methods: foreign mapping (via /dev/privcmd) and
> grant mapping (via /dev/gntdev). In both cases, the back-end needs to
> call mmap() followed by an ioctl() (or vice versa), and needs to pass
> extra information via the ioctl(), such as the Xen domain-id of the
> guest whose memory we are trying to map.
>
> Add a new protocol feature, 'VHOST_USER_PROTOCOL_F_CUSTOM_MMAP', which
> lets the back-end know about the additional memory mapping requirements.
> When this feature is negotiated, the front-end can send the
> 'VHOST_USER_CUSTOM_MMAP' message type to provide the additional
> information to the back-end.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
> docs/interop/vhost-user.rst | 32 ++++++++++++++++++++++++++++++++
> 1 file changed, 32 insertions(+)
>
> diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
> index 3f18ab424eb0..f2b1d705593a 100644
> --- a/docs/interop/vhost-user.rst
> +++ b/docs/interop/vhost-user.rst
> @@ -258,6 +258,23 @@ Inflight description
>
> :queue size: a 16-bit size of virtqueues
>
> +Custom mmap description
> +^^^^^^^^^^^^^^^^^^^^^^^
> +
> ++-------+-------+
> +| flags | value |
> ++-------+-------+
> +
> +:flags: 64-bit bit field
> +
> +- Bit 0 is the Xen foreign memory access flag - needs Xen foreign memory mapping.
> +- Bit 1 is the Xen grant memory access flag - needs Xen grant memory mapping.
> +
> +:value: a 64-bit hypervisor-specific value.
> +
> +- For Xen foreign or grant memory access, this is set to the guest's Xen
> +  domain id.
> +
> C structure
> -----------
>
> @@ -867,6 +884,7 @@ Protocol features
> #define VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS 14
> #define VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS 15
> #define VHOST_USER_PROTOCOL_F_STATUS 16
> + #define VHOST_USER_PROTOCOL_F_CUSTOM_MMAP 17
>
> Front-end message types
> -----------------------
> @@ -1422,6 +1440,20 @@ Front-end message types
> query the back-end for its device status as defined in the Virtio
> specification.
>
> +``VHOST_USER_CUSTOM_MMAP``
> + :id: 41
> + :equivalent ioctl: N/A
> + :request payload: Custom mmap description
> + :reply payload: N/A
> +
> + When the ``VHOST_USER_PROTOCOL_F_CUSTOM_MMAP`` protocol feature has been
> + successfully negotiated, this message is submitted by the front-end to
> + notify the back-end of the special memory mapping requirements that the
> + back-end needs to take care of while mapping any memory regions sent
> + over by the front-end. The front-end must send this message before any
> + memory regions are sent to the back-end via the ``VHOST_USER_SET_MEM_TABLE``
> + or ``VHOST_USER_ADD_MEM_REG`` message types.
> +
>
> Back-end message types
> ----------------------
This looks good enough for me. We will see how it works in the prototype.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
--
Alex Bennée
Virtualisation Tech Lead @ Linaro