Re: [PATCH RFC v2 04/16] vfio-user: connect vfio proxy to remote server


From: Stefan Hajnoczi
Subject: Re: [PATCH RFC v2 04/16] vfio-user: connect vfio proxy to remote server
Date: Tue, 24 Aug 2021 15:15:20 +0100

On Mon, Aug 16, 2021 at 09:42:37AM -0700, Elena Ufimtseva wrote:
> @@ -3361,13 +3362,35 @@ static void vfio_user_pci_realize(PCIDevice *pdev, Error **errp)
>      VFIOUserPCIDevice *udev = VFIO_USER_PCI(pdev);
>      VFIOPCIDevice *vdev = VFIO_PCI_BASE(pdev);
>      VFIODevice *vbasedev = &vdev->vbasedev;
> +    SocketAddress addr;
> +    VFIOProxy *proxy;
> +    Error *err = NULL;
>  
> +    /*
> +     * TODO: make option parser understand SocketAddress
> +     * and use that instead of having scaler options

s/scaler/scalar/

> +VFIOProxy *vfio_user_connect_dev(SocketAddress *addr, Error **errp)
> +{
> +    VFIOProxy *proxy;
> +    QIOChannelSocket *sioc;
> +    QIOChannel *ioc;
> +    char *sockname;
> +
> +    if (addr->type != SOCKET_ADDRESS_TYPE_UNIX) {
> +        error_setg(errp, "vfio_user_connect - bad address family");
> +        return NULL;
> +    }
> +    sockname = addr->u.q_unix.path;
> +
> +    sioc = qio_channel_socket_new();
> +    ioc = QIO_CHANNEL(sioc);
> +    if (qio_channel_socket_connect_sync(sioc, addr, errp)) {
> +        object_unref(OBJECT(ioc));
> +        return NULL;
> +    }
> +    qio_channel_set_blocking(ioc, true, NULL);
> +
> +    proxy = g_malloc0(sizeof(VFIOProxy));
> +    proxy->sockname = sockname;

sockname is addr->u.q_unix.path, so there's an assumption that the
lifetime of the addr argument is at least as long as the proxy object's
lifetime. This doesn't seem to be the case in vfio_user_pci_realize()
since the SocketAddress variable is declared on the stack.

I suggest making SocketAddress *addr const so it's obvious that this
function just reads it (doesn't take ownership of the pointer) and
copying the UNIX domain socket path with g_strdup() to avoid the
dangling pointer.
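
Something along these lines (untested sketch, just to illustrate the
suggestion; names taken from the quoted patch):

    /* copy the path so it outlives the caller's stack variable */
    proxy->sockname = g_strdup(addr->u.q_unix.path);

with a matching g_free(proxy->sockname) when the proxy is torn down in
vfio_user_disconnect().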

> +    proxy->ioc = ioc;
> +    proxy->flags = VFIO_PROXY_CLIENT;
> +    proxy->state = VFIO_PROXY_CONNECTED;
> +    qemu_cond_init(&proxy->close_cv);
> +
> +    if (vfio_user_iothread == NULL) {
> +        vfio_user_iothread = iothread_create("VFIO user", errp);
> +    }

Why is a dedicated IOThread needed for VFIO user?

> +
> +    qemu_mutex_init(&proxy->lock);
> +    QTAILQ_INIT(&proxy->free);
> +    QTAILQ_INIT(&proxy->pending);
> +    QLIST_INSERT_HEAD(&vfio_user_sockets, proxy, next);
> +
> +    return proxy;
> +}
> +

/* Called with the BQL */
> +void vfio_user_disconnect(VFIOProxy *proxy)
> +{
> +    VFIOUserReply *r1, *r2;
> +
> +    qemu_mutex_lock(&proxy->lock);
> +
> +    /* our side is quitting */
> +    if (proxy->state == VFIO_PROXY_CONNECTED) {
> +        vfio_user_shutdown(proxy);
> +        if (!QTAILQ_EMPTY(&proxy->pending)) {
> +            error_printf("vfio_user_disconnect: outstanding requests\n");
> +        }
> +    }
> +    object_unref(OBJECT(proxy->ioc));
> +    proxy->ioc = NULL;
> +
> +    proxy->state = VFIO_PROXY_CLOSING;
> +    QTAILQ_FOREACH_SAFE(r1, &proxy->pending, next, r2) {
> +        qemu_cond_destroy(&r1->cv);
> +        QTAILQ_REMOVE(&proxy->pending, r1, next);
> +        g_free(r1);
> +    }
> +    QTAILQ_FOREACH_SAFE(r1, &proxy->free, next, r2) {
> +        qemu_cond_destroy(&r1->cv);
> +        QTAILQ_REMOVE(&proxy->free, r1, next);
> +        g_free(r1);
> +    }
> +
> +    /*
> +     * Make sure the iothread isn't blocking anywhere
> +     * with a ref to this proxy by waiting for a BH
> +     * handler to run after the proxy fd handlers were
> +     * deleted above.
> +     */
> +    proxy->close_wait = 1;

Please use true. '1' is shorter but it's less obvious to the reader (I
had to jump to the definition to check whether this field was bool or
int).
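
i.e. (assuming the field is, or becomes, a bool):

    proxy->close_wait = true;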

> +    aio_bh_schedule_oneshot(iothread_get_aio_context(vfio_user_iothread),
> +                            vfio_user_cb, proxy);
> +
> +    /* drop locks so the iothread can make progress */
> +    qemu_mutex_unlock_iothread();

Why does the BQL need to be dropped so vfio_user_iothread can make
progress?

> +    qemu_cond_wait(&proxy->close_cv, &proxy->lock);
