Re: [Qemu-devel] [PATCH 0/3] Add dbus-vmstate

From: Marc-André Lureau
Subject: Re: [Qemu-devel] [PATCH 0/3] Add dbus-vmstate
Date: Tue, 9 Jul 2019 14:47:32 +0400


On Tue, Jul 9, 2019 at 1:02 PM Daniel P. Berrangé <address@hidden> wrote:
> On Tue, Jul 09, 2019 at 12:26:38PM +0400, Marc-André Lureau wrote:
> > Hi
> >
> > On Mon, Jul 8, 2019 at 8:04 PM Daniel P. Berrangé <address@hidden> wrote:
> > > > The D-Bus protocol can be made to work peer-to-peer, but the most
> > > > common and practical way is through a bus daemon. This also has the
> > > > advantage of increased debuggability (you can eavesdrop on the bus and
> > > > introspect it).
> > >
> > > The downside of using the bus daemon is that we have to spawn a new
> > > instance of dbus-daemon for every QEMU VM that's running on the host,
> > > which is yet more memory overhead for each VM & another process to
> > > manage, and yet another thing to go wrong.
> >
> > dbus-daemon (or dbus-broker) has been optimized to fit on many devices
> > and use cases; it doesn't take much memory (3 MB for my session bus
> > right now).
> >
> > More processes to manage is inevitable. In the near future, we may have
> > 5-10 processes running around qemu. I think dbus-daemon will be one of
> > the easiest to deal with (as can be seen in the dbus-vmstate test, it
> > is very simple to start a private dbus-daemon).
> The increase in processes per-QEMU is a significant concern I have
> around complexity & manageability in general, hence a desire to avoid
> requiring processes unless they have a compelling reason to exist.

Fair enough, although when the job of a bus ends up being done by some
other process anyway (libvirt, QEMU or another external process), I would
much rather have dbus-daemon doing it.

> > > QEMU already has a direct UNIX socket connection to the helper
> > > processes in question. I'd much rather we just had another direct
> > > UNIX socket  connection to that helper, using D-Bus peer-to-peer.
> > > The benefit of debugging doesn't feel compelling enough to justify
> > > running an extra daemon for each VM.
> >
> > I wouldn't minimize the need for easier debugging. Debugging multiple
> > processes talking to each other is really hard. Having a bus is
> > awesome (if not required) in this case.
> >
> > There are other advantages of using a bus, those come to my mind:
> >
> > - less connections (bus topology)
> That applies to general use of DBus, but doesn't really apply to
> the proposed QEMU usage, as every single helper is talking to the
> same QEMU endpoint. So if we have 10 helpers, in p2p mode, we
> get 10 sockets open between the helper & QEMU. In bus mode, we
> get 10 sockets open between the helper & dbus and another socket
> open between dbus & QEMU. The bus is only a win in connections
> if you have a mesh-like connection topology not hub & spoke.

The mesh already exists: it's not just QEMU that wants to talk to the
helpers, but also the management layer and third parties (debug tools,
audit, other management tools, etc.). There are also cases where helpers
may want to talk to each other. Taking networking as an example, two
slirp interfaces may want to share the same DHCP, bootp/TFTP or
filter/service provider. Redirection/forwarding may be provided on
demand (chardev-like services). The same is probably true for the block
layer, security, GPU/display, etc. In this case, the bus topology makes
more sense than hiding the mesh behind point-to-point connections.
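To make the connection-count tradeoff concrete, here is a quick sketch
(my own counting, not from the patch series): with a pure hub-and-spoke
layout the bus only costs one extra socket, but as soon as the peers form
a mesh the bus wins clearly.

```python
def hub_and_spoke(n_helpers):
    # each helper holds exactly one socket, straight to QEMU
    return n_helpers

def bus(n_peers):
    # each peer (helpers, QEMU, management, tools) holds one socket to the bus daemon
    return n_peers

def full_mesh(n_peers):
    # every pair of peers needs its own direct connection: n*(n-1)/2
    return n_peers * (n_peers - 1) // 2

# 10 helpers talking only to QEMU: p2p needs 10 sockets, a bus needs 11
print(hub_and_spoke(10), bus(10 + 1))   # 10 11

# 12 peers that may all need to reach each other: 66 direct links vs 12
print(full_mesh(12), bus(12))           # 66 12
```

The crossover comes fast: the mesh grows quadratically while the bus
grows linearly, which is the point being made about management tools and
helper-to-helper traffic above.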

> > - configuring/enforcing policies & limits
> I don't see that as an advantage. Rather it is addressing the
> decreased security that the bus model exposes. In peer2peer
> mode, the helpers can only talk to QEMU, so can't directly
> interact with each other. In bus mode, the helpers have a
> direct communications path to attack each other over, so we
> absolutely need policy to mitigate this increased risk. It
> would be better to remove that risk at any architectural
> level by not having a bus at all.

You can enforce security/policy at the bus level, in a single place
(including with SELinux/AppArmor contexts, although I am not sure how
much that gives you). If each helper process implements its own
protocol, you will probably never get that kind of central enforcement.
And if such policies exist at all, the libvirt/management layer, QEMU
and the helpers will each have to implement them for every case...
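As an illustration of that central enforcement, a private per-VM bus
could ship a deny-by-default dbus-daemon policy and punch holes per
helper. The fragment below is hypothetical (socket path, user and bus
name invented for the example), modeled on the deny-by-default stanzas in
the stock system bus configuration:

```xml
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <!-- hypothetical private per-VM bus socket -->
  <listen>unix:path=/run/qemu/vm1/dbus.sock</listen>

  <policy context="default">
    <!-- deny-by-default, as the stock system bus config does -->
    <deny own="*"/>
    <deny send_type="method_call"/>
  </policy>

  <!-- hypothetical: only this user may own the vmstate name and call it -->
  <policy user="qemu-vm1">
    <allow own="org.qemu.VMState1"/>
    <allow send_destination="org.qemu.VMState1"/>
  </policy>
</busconfig>
```

With p2p connections, each of QEMU, libvirt and every helper would have
to re-implement an equivalent check itself.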

> > - on-demand service activation & discoverability
> Again useful for dbus in general, but I don't see any clear scenario
> in which this is relevant to QEMU's usage.

Perhaps not to QEMU itself, but the helpers could benefit from it; see
the examples I listed above.
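For reference, on-demand activation is just a small service file visible
to the bus daemon: the daemon spawns the helper the first time someone
calls its well-known name. A hypothetical example (name and binary path
invented):

```ini
# e.g. dropped into the bus's services directory as
# org.qemu.VmstateHelper1.service
[D-BUS Service]
Name=org.qemu.VmstateHelper1
Exec=/usr/libexec/qemu-vmstate-helper
```

There is no equivalent in p2p mode; something (QEMU or the management
layer) has to spawn and track each helper explicitly.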

> > I also think D-Bus is the IPC of choice for multi-process. It's easier
> > to use than many other IPC due to the various tools and language
> > bindings available. Having a common bus is a good incentive to use a
> > common IPC, instead of a dozen half-baked protocols.
> As I said, I don't have any objection to DBus as a protocol. I think it
> would serve our needs well, most especially because GIO has decent API
> bindings to using it, so we avoid having to depend on another 3rd party
> library for something else.
> I think from QEMU's POV, the only real alternative to DBus would be to
> build something on QMP. I prefer DBus, because JSON is a disaster for
> integer type handling, and DBus is more accessible for the helper apps
> which can easily use a DBus API of their choice.

I am glad we can agree on that!
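The JSON integer problem mentioned above is easy to demonstrate with a
small sketch: JSON has a single "number" type, and any consumer that
parses numbers as IEEE-754 doubles (as JavaScript does) silently corrupts
64-bit values that QEMU routinely deals in:

```python
import json

# a 64-bit value that is not exactly representable as a double
addr = 2**53 + 1  # 9007199254740993

encoded = json.dumps({"addr": addr})

# Python's json keeps it exact, because it parses integers as int...
assert json.loads(encoded)["addr"] == addr

# ...but a consumer mapping JSON numbers to doubles (like JavaScript,
# simulated here with parse_int=float) silently loses the low bit
corrupted = json.loads(encoded, parse_int=float)["addr"]
print(int(corrupted))  # 9007199254740992 -- off by one
```

D-Bus avoids this entirely by having distinct wire types for signed and
unsigned 16/32/64-bit integers.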

> > Nevertheless, I also think we could use D-Bus in peer-to-peer mode,
> > and I did some investigation. The slirp-helper supports it. We could
> > teach dbus-vmstate to establish peer-to-peer connections. Instead of
> > receiving a bus address and a list of Ids, it could take a list of
> > D-Bus peer socket paths. The two approaches are not incompatible, but I think
> > the bus benefits outweigh the downside of running an extra process.
> As above I'm not seeing the compelling benefits of using a bus, so
> think we should stick to dbus in p2p mode.

As you can see, there are benefits in having a bus. But if there are
strong concerns about it, I can also work on the p2p mode.
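In configuration terms the difference between the two modes is small:
D-Bus connections are set up from an address string either way, so p2p
mode would mostly mean handing dbus-vmstate one peer address per helper
instead of one bus address plus a list of Ids (paths below are invented
for illustration):

```
# bus mode: one address, the daemon routes by well-known name
unix:path=/run/qemu/vm1/dbus.sock

# p2p mode: one D-Bus peer address per helper, no daemon in between
unix:path=/run/qemu/vm1/helper-net.sock
unix:path=/run/qemu/vm1/helper-vmstate.sock
```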
