
From: Philippe Mathieu-Daudé
Subject: Re: [PATCH-for-8.0? v1] hw/pvrdma: Protect against buggy or malicious guest driver
Date: Tue, 11 Apr 2023 12:33:49 +0200
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0) Gecko/20100101 Thunderbird/102.9.1

On 1/3/23 15:29, Yuval Shaia wrote:
Guest driver allocates and initializes page tables to be used as a ring
of descriptors for CQ and async events.
The page table that represents the ring, along with the number of pages
in the page table, is passed to the device.
Currently our device supports only one page table per ring.

Let's make sure that the number of page table entries the driver
reports does not exceed the size of a single page table.

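Concretely, with a typical 4 KiB TARGET_PAGE_SIZE and an 8-byte dma_addr_t
(illustrative values only), a single page table holds:

    TARGET_PAGE_SIZE / sizeof(dma_addr_t) = 4096 / 8 = 512 entries,

one of which is reserved for the ring-state, leaving at most 511 data pages.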

Fixes: CVE-2023-1544

Reported-by: Soul Chen <soulchen8650@gmail.com>
Signed-off-by: Yuval Shaia <yuval.shaia.ml@gmail.com>
---
v0 -> v1:
        * Take ring-state into account
        * Add Reported-by
---
  hw/rdma/vmw/pvrdma_main.c | 16 +++++++++++++++-
  1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/hw/rdma/vmw/pvrdma_main.c b/hw/rdma/vmw/pvrdma_main.c
index 4fc6712025..55b338046e 100644
--- a/hw/rdma/vmw/pvrdma_main.c
+++ b/hw/rdma/vmw/pvrdma_main.c
@@ -91,19 +91,33 @@ static int init_dev_ring(PvrdmaRing *ring, PvrdmaRingState **ring_state,
                           dma_addr_t dir_addr, uint32_t num_pages)
  {
      uint64_t *dir, *tbl;
-    int rc = 0;
+    int max_pages, rc = 0;

      if (!num_pages) {
          rdma_error_report("Ring pages count must be strictly positive");
          return -EINVAL;
      }

+    /*
+     * Make sure we can satisfy the requested number of pages in a single
+     * TARGET_PAGE_SIZE sized page table (taking into account that first entry
+     * is reserved for ring-state)
+     */

Worth a definition? Or maybe better an enum.

+    max_pages = TARGET_PAGE_SIZE / sizeof(dma_addr_t) - 1;

Possibly clearer as a #define in pvrdma_dev_ring.h.
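
Something like the following, maybe (only a sketch, the macro name is
invented here):

    /* hw/rdma/vmw/pvrdma_dev_ring.h */
    /*
     * A ring uses a single TARGET_PAGE_SIZE page table whose first entry is
     * reserved for the ring-state, so it can describe at most this many
     * data pages.
     */
    #define PVRDMA_MAX_RING_PAGES  (TARGET_PAGE_SIZE / sizeof(dma_addr_t) - 1)

Then init_dev_ring() would only have to check num_pages > PVRDMA_MAX_RING_PAGES.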

+    if (num_pages > max_pages) {
+        rdma_error_report("Maximum pages on a single directory must not exceed %d\n",
+                          max_pages);
+        return -EINVAL;
+    }
+
      dir = rdma_pci_dma_map(pci_dev, dir_addr, TARGET_PAGE_SIZE);
      if (!dir) {
          rdma_error_report("Failed to map to page directory (ring %s)", name);
          rc = -ENOMEM;
          goto out;
      }
+
+    /* We support only one page table for a ring */
      tbl = rdma_pci_dma_map(pci_dev, dir[0], TARGET_PAGE_SIZE);
      if (!tbl) {
          rdma_error_report("Failed to map to page table (ring %s)", name);

Now looking at the following pvrdma_ring_init() call, I see too many
magic values for my taste, so I don't feel safe reviewing it:

    /* RX ring is the second */
    (*ring_state)++;
    rc = pvrdma_ring_init(ring, name, pci_dev,
                          (PvrdmaRingState *)*ring_state,
                          (num_pages - 1) * TARGET_PAGE_SIZE /
                          sizeof(struct pvrdma_cqne),
                          sizeof(struct pvrdma_cqne),
                          (dma_addr_t *)&tbl[1],
                          (dma_addr_t)num_pages - 1);
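
Something along these lines (untested sketch, the num_data_pages local is
invented) would at least make the intent explicit:

    uint32_t num_data_pages = num_pages - 1;  /* tbl[0] is the ring-state page */

    /* RX ring state is the second entry in the state page */
    (*ring_state)++;
    rc = pvrdma_ring_init(ring, name, pci_dev,
                          (PvrdmaRingState *)*ring_state,
                          num_data_pages * TARGET_PAGE_SIZE /
                          sizeof(struct pvrdma_cqne),  /* elements that fit in the data pages */
                          sizeof(struct pvrdma_cqne),  /* element size */
                          (dma_addr_t *)&tbl[1],       /* data pages follow the ring-state page */
                          num_data_pages);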


