From: Eugenio Perez Martin
Subject: Re: [PATCH v6 07/10] vdpa: Add asid parameter to vhost_vdpa_dma_map/unmap
Date: Fri, 11 Nov 2022 14:02:09 +0100
On Fri, Nov 11, 2022 at 8:41 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> > On 2022/11/10 21:22, Eugenio Perez Martin wrote:
> > On Thu, Nov 10, 2022 at 6:51 AM Jason Wang <jasowang@redhat.com> wrote:
> >> On Wed, Nov 9, 2022 at 1:08 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >>> So the caller can choose which ASID to use.
> >>>
> >>> No need to update the batch functions as they will always be called from
> >>> memory listener updates at the moment. Memory listener updates will
> >>> always update ASID 0, as it's the passthrough ASID.
> >>>
> >>> All vhost devices' ASIDs are 0 at this moment.
> >>>
> >>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> >>> ---
> >>> v5:
> >>> * Solve conflict, now vhost_vdpa_svq_unmap_ring returns void
> >>> * Change comment on zero initialization.
> >>>
> >>> v4: Add comment specifying behavior if device does not support _F_ASID
> >>>
> >>> v3: Deleted unneeded space
> >>> ---
> >>> include/hw/virtio/vhost-vdpa.h | 8 +++++---
> >>> hw/virtio/vhost-vdpa.c | 29 +++++++++++++++++++----------
> >>> net/vhost-vdpa.c | 6 +++---
> >>> hw/virtio/trace-events | 4 ++--
> >>> 4 files changed, 29 insertions(+), 18 deletions(-)
> >>>
> >>> diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> >>> index 1111d85643..6560bb9d78 100644
> >>> --- a/include/hw/virtio/vhost-vdpa.h
> >>> +++ b/include/hw/virtio/vhost-vdpa.h
> >>> @@ -29,6 +29,7 @@ typedef struct vhost_vdpa {
> >>> int index;
> >>> uint32_t msg_type;
> >>> bool iotlb_batch_begin_sent;
> >>> + uint32_t address_space_id;
> >> So the trick is let device specific code to zero this during allocation?
> >>
> > Yes, but I don't see how that is a trick :). All the other members are
> > also trusted to be 0 at allocation.
> >
> >>> MemoryListener listener;
> >>> struct vhost_vdpa_iova_range iova_range;
> >>> uint64_t acked_features;
> >>> @@ -42,8 +43,9 @@ typedef struct vhost_vdpa {
> >>> VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
> >>> } VhostVDPA;
> >>>
> >>> -int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
> >>> - void *vaddr, bool readonly);
> >>> -int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size);
> >>> +int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
> >>> + hwaddr size, void *vaddr, bool readonly);
> >>> +int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
> >>> +                         hwaddr size);
> >>>
> >>> #endif
> >>> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> >>> index 23efb8f49d..8fd32ba32b 100644
> >>> --- a/hw/virtio/vhost-vdpa.c
> >>> +++ b/hw/virtio/vhost-vdpa.c
> >>> @@ -72,22 +72,24 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
> >>> return false;
> >>> }
> >>>
> >>> -int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
> >>> - void *vaddr, bool readonly)
> >>> +int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
> >>> + hwaddr size, void *vaddr, bool readonly)
> >>> {
> >>> struct vhost_msg_v2 msg = {};
> >>> int fd = v->device_fd;
> >>> int ret = 0;
> >>>
> >>> msg.type = v->msg_type;
> >>> + msg.asid = asid; /* 0 if vdpa device does not support asid */
> >> The comment here is confusing. If this is a requirement, we need either
> >>
> >> 1) doc this
> >>
> >> or
> >>
> >> 2) perform necessary checks in the function itself.
> >>
> > I only documented it in vhost_vdpa_dma_unmap, and I only realized it
> > now. Would it work to just copy that comment here?
>
>
> Probably, and let's move the comment above the function definition.
>
>
> >
> >>> msg.iotlb.iova = iova;
> >>> msg.iotlb.size = size;
> >>> msg.iotlb.uaddr = (uint64_t)(uintptr_t)vaddr;
> >>> msg.iotlb.perm = readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW;
> >>> msg.iotlb.type = VHOST_IOTLB_UPDATE;
> >>>
> >>> -    trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.iotlb.iova, msg.iotlb.size,
> >>> -                             msg.iotlb.uaddr, msg.iotlb.perm, msg.iotlb.type);
> >>> +    trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.asid, msg.iotlb.iova,
> >>> +                             msg.iotlb.size, msg.iotlb.uaddr, msg.iotlb.perm,
> >>> +                             msg.iotlb.type);
> >>>
> >>> if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
> >>> error_report("failed to write, fd=%d, errno=%d (%s)",
> >>> @@ -98,18 +100,24 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
> >>> return ret;
> >>> }
> >>>
> >>> -int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size)
> >>> +int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
> >>> +                         hwaddr size)
> >>> {
> >>> struct vhost_msg_v2 msg = {};
> >>> int fd = v->device_fd;
> >>> int ret = 0;
> >>>
> >>> msg.type = v->msg_type;
> >>> +    /*
> >>> +     * The caller must set asid = 0 if the device does not support asid.
> >>> +     * This is not an ABI break since it is set to 0 by the initializer
> >>> +     * anyway.
> >>> +     */
> >>> +    msg.asid = asid;
> >>> msg.iotlb.iova = iova;
> >>> msg.iotlb.size = size;
> >>> msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
> >>>
> >>> - trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.iotlb.iova,
> >>> + trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.asid, msg.iotlb.iova,
> >>> msg.iotlb.size, msg.iotlb.type);
> >>>
> >>> if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
> >>> @@ -229,7 +237,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
> >>> }
> >>>
> >>> vhost_vdpa_iotlb_batch_begin_once(v);
> >>> - ret = vhost_vdpa_dma_map(v, iova, int128_get64(llsize),
> >>> + ret = vhost_vdpa_dma_map(v, 0, iova, int128_get64(llsize),
> >> Can we use v->address_space_id here? Then we don't need to modify this
> >> line when we support multiple asids logic in the future.
> >>
> > The registered memory listener belongs to the last vhost_vdpa, the
> > one that handles the last queue.
> >
> > If the data virtqueues are not shadowed but CVQ is,
> > v->address_space_id is 1 with the current code.
>
>
> Ok, right. So let's add a comment here. It would be even better to
> define the macro for data vq asid in this patch.
>
I agree that it must be changed to a macro.
However, the _ASID macros currently belong to net/ . Maybe declare
VHOST_VDPA_GPA_ASID in include/hw/virtio/vhost-vdpa.h and keep
VHOST_VDPA_NET_CVQ_ASID in net/vhost-vdpa.c?
Thanks!
>
> Thanks
>
>
> > But the listener is
> > actually mapping the ASID 0, not 1.
> >
> > Another alternative is to register it to the last data virtqueue, not
> > the last queue of vhost_vdpa. But that is hard to express in a
> > generic way at virtio/vhost-vdpa.c . Should we add a boolean indicating
> > which vhost_vdpa should register its memory listener?
> >
> > It seems easier to me to simply assign 0 at GPA translations. If SVQ
> > is enabled for all queues, then 0 is GPA to qemu's VA + SVQ stuff. If
> > it is not, 0 is always GPA to qemu's VA.
> >
> > Thanks!
> >
> >> Thanks
> >>
> >>> vaddr, section->readonly);
> >>> if (ret) {
> >>> error_report("vhost vdpa map fail!");
> >>> @@ -303,7 +311,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
> >>> vhost_iova_tree_remove(v->iova_tree, *result);
> >>> }
> >>> vhost_vdpa_iotlb_batch_begin_once(v);
> >>> - ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
> >>> + ret = vhost_vdpa_dma_unmap(v, 0, iova, int128_get64(llsize));
> >>> if (ret) {
> >>> error_report("vhost_vdpa dma unmap error!");
> >>> }
> >>> @@ -884,7 +892,7 @@ static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v, hwaddr addr)
> >>> }
> >>>
> >>> size = ROUND_UP(result->size, qemu_real_host_page_size());
> >>> - r = vhost_vdpa_dma_unmap(v, result->iova, size);
> >>> + r = vhost_vdpa_dma_unmap(v, v->address_space_id, result->iova, size);
> >>> if (unlikely(r < 0)) {
> >>> error_report("Unable to unmap SVQ vring: %s (%d)",
> >>> g_strerror(-r), -r);
> >>> return;
> >>> @@ -924,7 +932,8 @@ static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
> >>> return false;
> >>> }
> >>>
> >>> - r = vhost_vdpa_dma_map(v, needle->iova, needle->size + 1,
> >>> + r = vhost_vdpa_dma_map(v, v->address_space_id, needle->iova,
> >>> + needle->size + 1,
> >>> (void *)(uintptr_t)needle->translated_addr,
> >>> needle->perm == IOMMU_RO);
> >>> if (unlikely(r != 0)) {
> >>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> >>> index fb35b17ab4..ca1acc0410 100644
> >>> --- a/net/vhost-vdpa.c
> >>> +++ b/net/vhost-vdpa.c
> >>> @@ -258,7 +258,7 @@ static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
> >>> return;
> >>> }
> >>>
> >>> - r = vhost_vdpa_dma_unmap(v, map->iova, map->size + 1);
> >>> +    r = vhost_vdpa_dma_unmap(v, v->address_space_id, map->iova, map->size + 1);
> >>> if (unlikely(r != 0)) {
> >>> error_report("Device cannot unmap: %s(%d)", g_strerror(r), r);
> >>> }
> >>> @@ -298,8 +298,8 @@ static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
> >>> return r;
> >>> }
> >>>
> >>> -    r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
> >>> -                           !write);
> >>> +    r = vhost_vdpa_dma_map(v, v->address_space_id, map.iova,
> >>> +                           vhost_vdpa_net_cvq_cmd_page_len(), buf, !write);
> >>> if (unlikely(r < 0)) {
> >>> goto dma_map_err;
> >>> }
> >>> diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> >>> index 820dadc26c..0ad9390307 100644
> >>> --- a/hw/virtio/trace-events
> >>> +++ b/hw/virtio/trace-events
> >>> @@ -30,8 +30,8 @@ vhost_user_write(uint32_t req, uint32_t flags) "req:%d flags:0x%"PRIx32""
> >>> vhost_user_create_notifier(int idx, void *n) "idx:%d n:%p"
> >>>
> >>> # vhost-vdpa.c
> >>> -vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
> >>> -vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
> >>> +vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
> >>> +vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
> >>> vhost_vdpa_listener_begin_batch(void *v, int fd, uint32_t msg_type, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
> >>> vhost_vdpa_listener_commit(void *v, int fd, uint32_t msg_type, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
> >>> vhost_vdpa_listener_region_add(void *vdpa, uint64_t iova, uint64_t llend, void *vaddr, bool readonly) "vdpa: %p iova 0x%"PRIx64" llend 0x%"PRIx64" vaddr: %p read-only: %d"
> >>> --
> >>> 2.31.1
> >>>
>