From: Markus Armbruster
Subject: [PATCH 04/16] multi-process: Fix pci_proxy_dev_realize() error handling
Date: Tue, 20 Jul 2021 14:53:56 +0200

The Error ** argument must be NULL, &error_abort, &error_fatal, or a
pointer to a variable containing NULL.  Passing an argument of the
latter kind twice without clearing it in between is wrong: if the
first call sets an error, it no longer points to NULL for the second
call.
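
For illustration only (not part of this patch), a minimal sketch of
the anti-pattern and its fix, with hypothetical callees foo() and
bar() that follow the usual QEMU convention of returning false and
setting *errp on failure:

    #include "qapi/error.h"   /* Error, error_setg(), ... */

    /* Hypothetical callees, assumed to set *errp and return false
     * on failure. */
    bool foo(Error **errp);
    bool bar(Error **errp);

    /* Wrong: if foo() fails and sets *errp, a later failure in bar()
     * trips error_setv()'s assertion, because *errp no longer points
     * to NULL. */
    void broken(Error **errp)
    {
        foo(errp);
        bar(errp);
    }

    /* Right: check for failure and bail out before reusing @errp. */
    void fixed(Error **errp)
    {
        if (!foo(errp)) {
            return;
        }
        bar(errp);
    }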

pci_proxy_dev_realize() is wrong that way: it passes @errp to
qio_channel_new_fd() without checking for failure.  If it runs into
another failure, it trips error_setv()'s assertion.

Fix it to check for failure properly.

Fixes: 9f8112073aad8e485ac012ee18809457ab7f23a6
Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Cc: Jagannathan Raman <jag.raman@oracle.com>
Cc: John G Johnson <john.g.johnson@oracle.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 hw/remote/proxy.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/hw/remote/proxy.c b/hw/remote/proxy.c
index 6dda705fc2..499f540c94 100644
--- a/hw/remote/proxy.c
+++ b/hw/remote/proxy.c
@@ -102,10 +102,18 @@ static void pci_proxy_dev_realize(PCIDevice *device, Error **errp)
     }
 
     dev->ioc = qio_channel_new_fd(fd, errp);
+    if (!dev->ioc) {
+        close(fd);
+        return;
+    }
 
     error_setg(&dev->migration_blocker, "%s does not support migration",
                TYPE_PCI_PROXY_DEV);
-    migrate_add_blocker(dev->migration_blocker, errp);
+    if (migrate_add_blocker(dev->migration_blocker, errp) < 0) {
+        error_free(dev->migration_blocker);
+        object_unref(dev->ioc);
+        return;
+    }
 
     qemu_mutex_init(&dev->io_mutex);
     qio_channel_set_blocking(dev->ioc, true, NULL);
-- 
2.31.1