From: Peter Xu
Subject: Re: [Qemu-devel] [RFC v5 16/26] monitor: separate QMP parser and dispatcher
Date: Mon, 18 Dec 2017 13:26:49 +0800
User-agent: Mutt/1.9.1 (2017-09-22)

On Sat, Dec 16, 2017 at 09:23:22AM +0000, Stefan Hajnoczi wrote:
> On Sat, Dec 16, 2017 at 02:37:03PM +0800, Peter Xu wrote:
> > On Wed, Dec 13, 2017 at 08:09:38PM +0000, Stefan Hajnoczi wrote:
> > > On Tue, Dec 05, 2017 at 01:51:50PM +0800, Peter Xu wrote:
> > > > @@ -3956,12 +3968,122 @@ static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens,
> > > >          }
> > > >      }
> > > >  
> > > > -err_out:
> > > > -    monitor_qmp_respond(mon, rsp, err, id);
> > > > +    /* Respond if necessary */
> > > > +    monitor_qmp_respond(mon, rsp, NULL, id);
> > > > +
> > > > +    /* This pairs with the monitor_suspend() in handle_qmp_command(). */
> > > > +    if (!qmp_oob_enabled(mon)) {
> > > > +        monitor_resume(mon);
> > > 
> > > monitor_resume() does not work between threads: if the event loop is
> > > currently blocked in poll() it won't notice that the monitor fd should
> > > be watched again.
> > > 
> > > Please add aio_notify() to monitor_resume() and monitor_suspend().  That
> > > way the event loop is forced to check can_read() again.
> > 
> > Ah, yes.  Though I think monitor_suspend() does not need the notify?
> > If the IOThread is sleeping, it won't miss the next check in
> > can_read() after all?
> 
> No, that would be a bug.  Imagine the IOThread is blocked in poll
> monitoring the chardev file descriptor when the main loop calls
> monitor_suspend().  If the file descriptor becomes readable then the
> handler function executes even though the monitor is supposed to be
> suspended!

When you say "the handler function executes", do you mean the handler
for a request that has already been added to the QMP request queue, or
for one that hasn't yet been parsed by the parser?

For the former case (a request that is already queued): IMHO that is
the behavior we expect.  When we call monitor_suspend(), we only stop
accepting and parsing new input from the user, but requests already on
the queue should still be processed.

For the latter (the handler of a newly typed command):
monitor_suspend() should already have suspended the parser, so the
monitor_can_read() check should fail, and that command should never be
queued until we call monitor_resume() again.

> 
> > Regarding monitor_resume(), I noticed that I had missed the fact
> > that it was tailored only for HMP, which seems to be a bigger
> > problem.  Do you agree with a change like this?
> > 
> > diff --git a/monitor.c b/monitor.c
> > index 9970418d6f..8f96880ad7 100644
> > --- a/monitor.c
> > +++ b/monitor.c
> > @@ -4244,10 +4244,12 @@ int monitor_suspend(Monitor *mon)
> >  
> >  void monitor_resume(Monitor *mon)
> >  {
> > -    if (!mon->rs)
> > -        return;
> >      if (atomic_dec_fetch(&mon->suspend_cnt) == 0) {
> > -        readline_show_prompt(mon->rs);
> > +        if (monitor_is_qmp(mon)) {
> > +            aio_notify(mon_global.mon_iothread->ctx);
> 
> Please use iothread_get_aio_context() instead of accessing struct
> members.

Ah, yes!

> 
> > +        } else {
> > +            assert(mon->rs);
> > +            readline_show_prompt(mon->rs);
> 
> I haven't studied the HMP and ->rs code.  I'll do that when reviewing
> the next revision of this series.

Sure.

> 
> > +        }
> >      }
> >  }
> > 
> > I'm even wondering whether I should introduce iothread_notify() now
> > to mask out the IOThread.ctx details.
> 
> aio_notify() is a low-level AioContext operation that is somewhat
> bug-prone.  I think it's better to leave it directly exposed so callers
> have to think about what they are doing.

Sure.  Thanks!

-- 
Peter Xu


