Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Date: Tue, 19 Sep 2017 10:19:21 +0100
User-agent: Mutt/1.8.3 (2017-05-23)

* Peter Xu (address@hidden) wrote:
> On Mon, Sep 18, 2017 at 06:09:29PM +0200, Marc-André Lureau wrote:
> > On Mon, Sep 18, 2017 at 1:26 PM, Dr. David Alan Gilbert
> > <address@hidden> wrote:
> > > * Marc-André Lureau (address@hidden) wrote:
> > >> Hi
> > >>
> > >> On Mon, Sep 18, 2017 at 12:55 PM, Dr. David Alan Gilbert
> > >> <address@hidden> wrote:
> > >> > * Marc-André Lureau (address@hidden) wrote:
> > >> >> Hi
> > >> >>
> > >> >> On Mon, Sep 18, 2017 at 10:37 AM, Peter Xu <address@hidden> wrote:
> > >> >> > On Fri, Sep 15, 2017 at 01:14:47PM +0200, Marc-André Lureau wrote:
> > >> >> >> Hi
> > >> >> >>
> > >> >> >> On Thu, Sep 14, 2017 at 9:46 PM, Peter Xu <address@hidden> wrote:
> > >> >> >> > On Thu, Sep 14, 2017 at 07:53:15PM +0100, Dr. David Alan Gilbert 
> > >> >> >> > wrote:
> > >> >> >> >> * Marc-André Lureau (address@hidden) wrote:
> > >> >> >> >> > Hi
> > >> >> >> >> >
> > >> >> >> >> > On Thu, Sep 14, 2017 at 9:50 AM, Peter Xu <address@hidden> 
> > >> >> >> >> > wrote:
> > >> >> >> >> > > This series was born from this one:
> > >> >> >> >> > >
> > >> >> >> >> > >   
> > >> >> >> >> > > https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
> > >> >> >> >> > >
> > >> >> >> >> > > The design comes from Markus, and also from the whole bunch of
> > >> >> >> >> > > discussions in the previous thread.  My heartfelt thanks to
> > >> >> >> >> > > Markus, Daniel, Dave, Stefan, etc. for discussing the topic
> > >> >> >> >> > > (...again!) and providing shiny ideas and suggestions.  Finally
> > >> >> >> >> > > we have a solution that seems to satisfy everyone.
> > >> >> >> >> > >
> > >> >> >> >> > > I re-started the versioning since this series is totally
> > >> >> >> >> > > different from the previous one.  Now it's version 1.
> > >> >> >> >> > >
> > >> >> >> >> > > In case new reviewers come along without having read the
> > >> >> >> >> > > previous discussions, I will try to summarize what this is all
> > >> >> >> >> > > about.
> > >> >> >> >> > >
> > >> >> >> >> > > What is OOB execution?
> > >> >> >> >> > > ======================
> > >> >> >> >> > >
> > >> >> >> >> > > It's short for Out-Of-Band execution; the name was given by
> > >> >> >> >> > > Markus.  It's a way to quickly execute a QMP request.  Say,
> > >> >> >> >> > > originally QMP goes through these steps:
> > >> >> >> >> > >
> > >> >> >> >> > >       JSON Parser --> QMP Dispatcher --> Respond
> > >> >> >> >> > >           /|\    (2)                (3)     |
> > >> >> >> >> > >        (1) |                               \|/ (4)
> > >> >> >> >> > >            +---------  main thread  --------+
> > >> >> >> >> > >
> > >> >> >> >> > > The requests are executed by the so-called QMP-dispatcher 
> > >> >> >> >> > > after the
> > >> >> >> >> > > JSON is parsed.  If OOB is on, we run the command directly in
> > >> >> >> >> > > the parser and return quickly.
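The shortcut described above could be sketched roughly as follows. This is a hypothetical Python model of the two paths, not QEMU's actual C implementation; the names `parse_and_route`, `qmp_dispatch`, and the command table are illustrative only:

```python
import json

def qmp_dispatch(cmd_table, request):
    # Normal path, steps (2)+(3): look up the handler and build a response.
    handler = cmd_table[request["execute"]]
    return {"return": handler(request.get("arguments", {}))}

def parse_and_route(cmd_table, oob_cmds, line):
    # Step (1): the JSON parser receives one request.
    request = json.loads(line)
    if request["execute"] in oob_cmds:
        # OOB shortcut: run the handler directly in the parser and
        # return immediately, skipping the dispatcher round-trip.
        handler = cmd_table[request["execute"]]
        return {"return": handler(request.get("arguments", {}))}
    # Step (4): respond via the normal dispatcher path.
    return qmp_dispatch(cmd_table, request)

cmds = {"query-status": lambda args: {"status": "running"}}
reply = parse_and_route(cmds, {"query-status"}, '{"execute": "query-status"}')
# reply == {"return": {"status": "running"}}, produced without the
# dispatcher round-trip because "query-status" was marked oob here
```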
> > >> >> >> >> >
> > >> >> >> >> > In this case, the "id" field should be mandatory for all
> > >> >> >> >> > commands; otherwise the client cannot tell whether a reply
> > >> >> >> >> > belongs to the last/oob command or to a previous one.
> > >> >> >> >> >
> > >> >> >> >> > This should probably be enforced upfront by client capability 
> > >> >> >> >> > checks,
> > >> >> >> >> > more below.
> > >> >> >> >
> > >> >> > Hmm yes, since the oob commands are actually run in an async way, a
> > >> >> > request ID should be needed here.  However I'm not sure whether
> > >> >> > enabling the whole "request ID" thing is too big for this
> > >> >> > "try-to-be-small" oob change... And IMHO it suits better as part of
> > >> >> > the whole async work (no matter which implementation we'll use).
> > >> >> >> >
> > >> >> > How about this: we make "id" mandatory for "run-oob" requests only.
> > >> >> > Oob commands will then always have an ID, so there is no ordering
> > >> >> > issue and we can run them async; the rest of the non-oob commands
> > >> >> > can still go without an ID, and since they are not oob, they'll
> > >> >> > always be done in order as well.  Would this work?
> > >> >> >>
> > >> >> This mixed mode is imho more complicated to deal with than enforcing
> > >> >> the protocol one way or the other, but it should work.
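Peter's mixed-mode proposal above could be sketched like this. It is a hypothetical Python sketch only; the exact wire syntax for marking a request as oob was still undecided in this thread, so a `control`/`run-oob` flag is assumed here:

```python
def check_request(request):
    """Hypothetical mixed-mode rule from the thread: "id" is mandatory
    for run-oob requests, optional for everything else.  Returns an
    error object on rejection, or None when the request is accepted."""
    is_oob = request.get("control", {}).get("run-oob", False)
    if is_oob and "id" not in request:
        return {"error": {"class": "GenericError",
                          "desc": "'id' is mandatory for run-oob requests"}}
    return None
```

Non-oob commands without an "id" still pass, since they are answered strictly in order and need no correlation key.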
> > >> >> >>
> > >> >> >> >
> > >> >> >> >> >
> > >> >> >> >> > > Yeah I know in the current code the parser calls the
> > >> >> >> >> > > dispatcher directly (please see handle_qmp_command()).
> > >> >> >> >> > > However that's no longer true after this series (the parser
> > >> >> >> >> > > will have its own IO thread, and the dispatcher will still
> > >> >> >> >> > > run in the main thread).  So this OOB does bring something
> > >> >> >> >> > > different.
> > >> >> >> >> > >
> > >> >> >> >> > > There are more details on why OOB exists and on the
> > >> >> >> >> > > difference/relationship between OOB, async QMP,
> > >> >> >> >> > > block/general jobs, etc., but IMHO that's slightly off topic
> > >> >> >> >> > > (and believe me, it's not easy for me to summarize that).
> > >> >> >> >> > > For more information, please refer to [1].
> > >> >> >> >> > >
> > >> >> >> >> > > Summary ends here.
> > >> >> >> >> > >
> > >> >> >> >> > > Some Implementation Details
> > >> >> >> >> > > ===========================
> > >> >> >> >> > >
> > >> >> >> >> > > Again, I mentioned that the old QMP workflow is this:
> > >> >> >> >> > >
> > >> >> >> >> > >       JSON Parser --> QMP Dispatcher --> Respond
> > >> >> >> >> > >           /|\    (2)                (3)     |
> > >> >> >> >> > >        (1) |                               \|/ (4)
> > >> >> >> >> > >            +---------  main thread  --------+
> > >> >> >> >> > >
> > >> >> >> >> > > What this series does is, firstly:
> > >> >> >> >> > >
> > >> >> >> >> > >       JSON Parser     QMP Dispatcher --> Respond
> > >> >> >> >> > >           /|\ |           /|\       (4)     |
> > >> >> >> >> > >            |  | (2)        | (3)            |  (5)
> > >> >> >> >> > >        (1) |  +----->      |               \|/
> > >> >> >> >> > >            +---------  main thread  <-------+
> > >> >> >> >> > >
> > >> >> >> >> > > And further:
> > >> >> >> >> > >
> > >> >> >> >> > >                queue/kick
> > >> >> >> >> > >      JSON Parser ======> QMP Dispatcher --> Respond
> > >> >> >> >> > >          /|\ |     (3)       /|\        (4)    |
> > >> >> >> >> > >       (1) |  | (2)            |                |  (5)
> > >> >> >> >> > >           | \|/               |               \|/
> > >> >> >> >> > >         IO thread         main thread  <-------+
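The queue/kick split in the last diagram could be modeled roughly like this. This is a hypothetical Python sketch, not QEMU's actual C implementation; the thread, queue, and handler names are illustrative:

```python
# Hypothetical model of the final diagram: an IO thread parses requests
# and queues them (steps 1-3); the main thread is "kicked" to dispatch
# and respond in order (steps 4-5).
import json
import queue
import threading

req_queue = queue.Queue()   # step (3): queue shared between the threads
responses = []

def io_thread_parse(lines):
    # Steps (1)+(2): parse each incoming line in the IO thread.
    for line in lines:
        req_queue.put(json.loads(line))
    req_queue.put(None)     # sentinel: no more requests

def main_thread_dispatch(cmd_table):
    # Steps (4)+(5): dispatch queued requests in order and respond.
    while True:
        req = req_queue.get()
        if req is None:
            break
        handler = cmd_table[req["execute"]]
        responses.append({"return": handler(), "id": req.get("id")})

cmds = {"query-status": lambda: {"status": "running"}}
t = threading.Thread(target=io_thread_parse,
                     args=(['{"execute": "query-status", "id": 1}'],))
t.start()
main_thread_dispatch(cmds)
t.join()
```

The key property the diagram conveys survives in the sketch: parsing no longer blocks on dispatch, and the queue is the only coupling between the two threads.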
> > >> >> >> >> >
> > >> >> >> >> > Is the queue per monitor or per client?
> > >> >> >> >
> > >> >> > The queue is currently global.  I think yes, maybe we can at least
> > >> >> > make it per-monitor, but I am not sure whether that is urgent or
> > >> >> > can be postponed.  After all, QMPRequest (please refer to patch 11)
> > >> >> > is now defined as a (mon, id, req) tuple, so at least the "id"
> > >> >> > namespace is per-monitor.
> > >> >> >> >
> > >> >> >> >> > And is the dispatching going
> > >> >> >> >> > to be processed even if the client is disconnected, and are 
> > >> >> >> >> > new
> > >> >> >> >> > clients going to receive the replies from previous clients
> > >> >> >> >> > commands?
> > >> >> >> >
> > >> >> >> > [1]
> > >> >> >> >
> > >> >> >> > (will discuss together below)
> > >> >> >> >
> > >> >> >> >> > I
> > >> >> >> >> > believe there should be a per-client context, so there won't 
> > >> >> >> >> > be "id"
> > >> >> >> >> > request conflicts.
> > >> >> >> >
> > >> >> > I'd say I am not familiar with this "client" idea, since after all
> > >> >> > IMHO one monitor is currently designed to work mostly with a single
> > >> >> > client.  Say, unix sockets, telnet, all these backends are only
> > >> >> > single-channeled, and one monitor instance can only work with one
> > >> >> > client at a time.  Then do we really need to add this client layer
> > >> >> > on top of it?  IMHO the user can just provide more monitors if they
> > >> >> > want more clients (and at least these clients should know of the
> > >> >> > existence of the others, or there might be problems, e.g. user2
> > >> >> > fails a migration only to notice that user1 has already triggered
> > >> >> > one), and the user should manage them well.
> > >> >> >>
> > >> >> qemu should support a management layer / libvirt restart/reconnect.
> > >> >> Afaik, it mostly works today.  There might be cases where libvirt
> > >> >> can be confused if it receives a reply to a command from a previous
> > >> >> connection, but due to the sync processing of the chardev, I am not
> > >> >> sure you can get into this situation.  By adding "oob" commands and
> > >> >> queuing, the client will have to remember which "id" was last used,
> > >> >> or it will create more conflicts after a reconnect.
> > >> >>
> > >> >> Imho we should introduce the client/connection concept to avoid
> > >> >> this confusion (unexpected replies & per-client id space).
> > >> >> >
> > >> >> > Hmm I agree that the reconnect feature would be nice, but if so
> > >> >> > IMHO instead of throwing responses away when the client
> > >> >> > disconnects, we should really keep them, and when the client
> > >> >> > reconnects, we queue the responses again.
> > >> >> >
> > >> >> > I think we have other quite simple ways to solve the "unexpected
> > >> >> > reply" and "per-client-id duplication" issues you have mentioned.
> > >> >> >
> > >> >> > Firstly, when a client gets an unexpected reply ("id" field not in
> > >> >> > its own request queue), it should just ignore that reply, which
> > >> >> > seems natural to me.
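The ignore-unexpected-replies idea could look like this on the client side. This is a hypothetical sketch of the rule from the thread; `filter_replies` and the pending-id set are illustrative, not part of any real client:

```python
def filter_replies(pending_ids, replies):
    """Hypothetical client-side rule: keep only replies whose "id" is in
    our own outstanding-request set; drop anything else as stale (e.g. a
    leftover reply from a previous connection)."""
    accepted = []
    for reply in replies:
        if reply.get("id") in pending_ids:
            pending_ids.discard(reply["id"])  # request is now answered
            accepted.append(reply)
        # otherwise: unexpected "id" -- silently ignore the reply
    return accepted
```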
> > >> >>
> > >> >> The trouble is that it may legitimately use the same "id" value for
> > >> >> new requests. And I don't see a simple way to handle that without
> > >> >> races.
> > >> >
> > >> > Under what circumstances can it reuse the same ID for new requests?
> > >> > Can't we simply tell it not to?
> > >>
> > >> I don't see any restriction in the protocol today on a new client
> > >> connecting without knowing anything about a previous client.
> > >
> > > Well, it knows it's doing a reconnection.
> > 
> > If you assume the "same client" reconnects to the monitor, I agree.
> > But this is a restriction of monitor usage.
> 
> In monitor_qmp_event(), we can empty the request queue when we get
> CHR_EVENT_CLOSED.  Would that be a solution?
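Peter's suggestion might look roughly like this. It is a hypothetical sketch: `CHR_EVENT_CLOSED` stands in for QEMU's chardev event constant, and the queue here models only requests that are queued but not yet dispatched:

```python
from collections import deque

CHR_EVENT_CLOSED = "closed"   # stand-in for QEMU's chardev event constant

class Monitor:
    """Hypothetical per-monitor state: a queue of parsed-but-not-yet-
    dispatched QMP requests."""
    def __init__(self):
        self.qmp_requests = deque()

    def qmp_event(self, event):
        # On disconnect, forget everything still waiting in the queue;
        # requests already handed to the dispatcher are NOT covered.
        if event == CHR_EVENT_CLOSED:
            self.qmp_requests.clear()

mon = Monitor()
mon.qmp_requests.append({"execute": "query-status", "id": 1})
mon.qmp_requests.append({"execute": "query-block", "id": 2})
mon.qmp_event(CHR_EVENT_CLOSED)
# mon.qmp_requests is now empty
```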

What happens to commands that are in flight?

Dave

> -- 
> Peter Xu
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


