Re: [Qemu-devel] top(1) utility implementation in QEMU


From: Fam Zheng
Subject: Re: [Qemu-devel] top(1) utility implementation in QEMU
Date: Sat, 1 Oct 2016 20:12:56 +0800
User-agent: Mutt/1.7.0 (2016-08-17)

On Fri, 09/30 19:08, Markus Armbruster wrote:
> Fam Zheng <address@hidden> writes:
> 
> > On Mon, 09/26 17:28, Daniel P. Berrange wrote:
> >> On Mon, Sep 26, 2016 at 07:14:33PM +0530, prashanth sunder wrote:
> >> > Hi All,
> >> > 
> >> > Summary of the discussion and the different approaches we had on IRC
> >> > regarding a top(1) tool in QEMU:
> >> > 
> >> > Implement unique naming for all event loop resources.  Sometimes a
> >> > string literal can be used but other times the unique name needs to be
> >> > generated at runtime (e.g. filename for an fd).
> >> > 
> >> > Approach 1)
> >> > For a built-in QMP implementation:
> >> > We have callbacks from fds, BHs and Timers
> >> > So every time one of them is registered we add it to the list (what
> >> > we see through QMP), and when it is unregistered we remove it from
> >> > the list.
> >> > Ex: aio_set_fd_handler(fd, NULL, NULL, NULL) - unregistering an fd -
> >> > will remove the fd from the list.
> >> > 
> >> > QMP API:
> >> > set-event-loop-profiling enable=on/off
> >> > [interval=seconds] [iothread=name] and it emits a QMP event with
> >> > [{name, counter, time_elapsed}]
> >> > 
> >> > Pros:
> >> > It works on all systems.
> >> > Cons:
> >> > Information present inside glib is exposed only via systemtap
> >> > tracing, so it will not be available via QMP.
> >> > For example: I/O in chardevs, network I/O, etc.
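
A rough sketch, just to make the proposal above concrete -- everything
below (struct, function names, fields) is invented for illustration and
is not an actual QEMU API.  The real hooks would live in
aio_set_fd_handler() and the BH/timer registration paths, and the QMP
side would walk this list every interval and emit the
{name, counter, time_elapsed} tuples as an event:

    /* Hypothetical sketch -- not QEMU code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <inttypes.h>

    typedef struct LoopResource {
        char name[64];              /* unique name, e.g. built from the fd */
        uint64_t counter;           /* how often the callback fired */
        uint64_t time_elapsed_ns;   /* time spent inside the callback */
        struct LoopResource *next;
    } LoopResource;

    static LoopResource *resources;

    /* Called wherever a handler, BH or timer is registered. */
    static LoopResource *resource_register(const char *name)
    {
        LoopResource *r = calloc(1, sizeof(*r));

        snprintf(r->name, sizeof(r->name), "%s", name);
        r->next = resources;
        resources = r;
        return r;
    }

    /* Called when the handler is unregistered, e.g. on
     * aio_set_fd_handler(fd, NULL, NULL, NULL). */
    static void resource_unregister(LoopResource *r)
    {
        LoopResource **p;

        for (p = &resources; *p; p = &(*p)->next) {
            if (*p == r) {
                *p = r->next;
                free(r);
                return;
            }
        }
    }

    int main(void)
    {
        LoopResource *r = resource_register("fd:/dev/net/tun");

        r->counter++;                    /* the callback fired once */
        r->time_elapsed_ns += 1200;
        printf("%s counter=%" PRIu64 " time_elapsed_ns=%" PRIu64 "\n",
               r->name, r->counter, r->time_elapsed_ns);
        resource_unregister(r);
        return 0;
    }
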
> >> 
> >> 
> >> There are other downsides to the QMP approach:
> >> 
> >>  - Emitting data via QMP will change the behaviour of the system
> >>    itself, since QMP will trigger usage of the main event loop
> >>    which is the thing being traced. The degree of disturbance
> >>    will depend on the interval for emitting events
> >
> > Yes, but compared to a guest that is busy enough to be analyzed with
> > qemu-top, I don't think this can be a high degree, even if it's at a
> > few events per second.
> >
> >> 
> >>  - If the interval is small and you're monitoring more than one
> >>    guest at a time, then the overhead of QMP could start to get
> >>    quite significant across the host as a whole. This was
> >>    mentioned at the summit wrt existing I/O stats exposed by
> >>    QEMU for block / net device backends.
> >
> > qemu-top is supposed to run only in the foreground when a human
> > attends, so I'm not concerned about the system-wide overall overhead.
> >
> >> 
> >>  - The 'top' tool does not actually have direct access to
> >>    QMP for any libvirt guests, and we're unlikely to want to
> >>    expose such QMP events via libvirt in any kind of supported
> >>    API, as they're very use-case specific in design. So at best
> >>    the app would have to use libvirt QMP passthrough, which is
> >>    acceptable for developer / test environments but not
> >>    something that's satisfactory for production deployments.
> >
> > Just another idea: my original thought on how to send statistics to
> > 'qemu-top' was a specialized channel like a socket with a minimized
> > protocol (e.g. a mini-QMP with only whitelisted commands, an event-only
> > QMP, or simply an ad-hoc format).
> 
> What's the advantage over simply using another QMP monitor?  Naturally,
> injecting arbitrary QMP commands behind libvirt's back isn't going to
> end well, but "don't do that then".  Information queries and listening
> to events should be safe.

To avoid a libvirt "tainted" state in a production environment, of course
assuming qemu-top is useful there at all.
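
Concretely (and purely as a sketch -- the record format and field names
below are made up), the event-only channel could be as simple as one
newline-delimited record per resource per reporting interval, pushed over
a dedicated socket so qemu-top only ever reads:

    /* Hypothetical sketch of an ad-hoc, event-only stats channel; the
     * format and field names are invented for illustration.  In QEMU the
     * fd would be a dedicated socket; stdout is used here so the sketch
     * is runnable on its own. */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>
    #include <unistd.h>

    struct stat_record {
        const char *name;
        uint64_t counter;
        uint64_t time_elapsed_ns;
    };

    /* Emit one record as a single line on the stats channel. */
    static void emit_record(int fd, const struct stat_record *r)
    {
        dprintf(fd, "{\"name\": \"%s\", \"counter\": %" PRIu64 ", "
                    "\"time_elapsed_ns\": %" PRIu64 "}\n",
                r->name, r->counter, r->time_elapsed_ns);
    }

    int main(void)
    {
        struct stat_record sample[] = {
            { "fd:/dev/net/tun", 1234, 5678000 },
            { "bh:virtio-blk",   42,   910000  },
        };
        size_t i;

        for (i = 0; i < sizeof(sample) / sizeof(sample[0]); i++) {
            emit_record(STDOUT_FILENO, &sample[i]);
        }
        return 0;
    }

qemu-top would just read lines and redraw; no commands need to flow in
the other direction.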

> Note that we could have a QMP command to spawn monitors.  Fun!

Cool, and how hard is it to implement a QMP command to kill monitors? :)

Fam


