qemu-devel

Re: [Qemu-devel] [PATCH] monitor: postpone monitor_qmp_cleanup_queues


From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH] monitor: postpone monitor_qmp_cleanup_queues
Date: Fri, 8 Jun 2018 17:11:54 +0800
User-agent: Mutt/1.9.5 (2018-04-13)

On Fri, Jun 08, 2018 at 10:18:25AM +0200, Markus Armbruster wrote:
> Stefan Hajnoczi <address@hidden> writes:
> 
> > On Fri, Jun 08, 2018 at 12:42:35PM +0800, Peter Xu wrote:
> >> On Thu, Jun 07, 2018 at 01:53:01PM +0200, Markus Armbruster wrote:
> >> > Peter Xu <address@hidden> writes:
> >> > 
> >> > > Previously we cleanup the queues when we got CLOSED event.  It was used
> >> > 
> >> > we clean up
> >> > 
> >> > > to make sure we won't leftover replies/events of a old client to a new
> >> > 
> >> > we won't send leftover replies/events of an old client
> >> > 
> >> > > client.  Now this patch postpones that until OPENED.
> >> > 
> >> > What if OPENED never comes?
> >> 
> >> Then we postpone the cleanup until destruction of the monitor.  IMHO
> >> it's fine, but I'm not sure whether that's an overall acceptable
> >> approach.
> >
> > I think this patch fixes the problem at the wrong level.  Marc-André's
> > fix seemed like a cleaner solution.

Actually I like Marc-Andre's solution too.

However, I left a comment there; I'm not sure whether that approach is
workable.  My feeling is that currently our chardev layer can't really
work very nicely when the chardev backend is composed of two different
channels, say, when IN and OUT are different fds underneath.

Btw, I just noticed that fd_chr_add_watch() will only add a watch to
the ioc_out object; I'm not sure why (note that we have both ioc_in and
ioc_out for the fd chardev):

static GSource *fd_chr_add_watch(Chardev *chr, GIOCondition cond)
{
    FDChardev *s = FD_CHARDEV(chr);
    return qio_channel_create_watch(s->ioc_out, cond);
}

(this is not related to the current problem at all, it's a pure
 question; feel free to ignore)

> 
> Is it the right solution?
> 
> I proposed another one:
> 
> [...]
> >> > Here's what I think we should do on such EOF:
> >> > 
> >> > 1. Shut down input
> >> > 
> >> >    Stop reading input.  If input has its own file descriptor (as opposed
> >> >    to a read/write fd shared with output), close it.
> >> > 
> >> > 2. Flush output
> >> > 
> >> >    Continue to write output until the queue becomes empty or we run into
> >> >    an error (such as EPIPE, telling us the output's consumer went away).
> >> >    Works exactly as before, except we proceed to step 3 when the queue
> 
> exactly as before EOF

Yeah this seems reasonable.

I did a quick test with the change below and it fixes the problem too:

diff --git a/monitor.c b/monitor.c
index 5eb4630215..b76cab5022 100644
--- a/monitor.c
+++ b/monitor.c
@@ -4366,6 +4366,8 @@ static void monitor_qmp_event(void *opaque, int event)
         mon_refcount++;
         break;
     case CHR_EVENT_CLOSED:
+        /* Force flush all events */
+        monitor_qmp_bh_responder(NULL);
         monitor_qmp_cleanup_queues(mon);
         json_message_parser_destroy(&mon->qmp.parser);
         json_message_parser_init(&mon->qmp.parser, handle_qmp_command);

(Though maybe I can do it a bit "prettier" when I post a formal
 patch... for example, by only flushing the response queue of that
 specific monitor)

Frankly speaking, I'm not sure this is an ideal fix either.  For
example, what if we are in the middle of executing a command's
dispatcher when we receive the CLOSED event?  If so, the dispatcher
will put the response onto the response queue after the CLOSED event,
and ideally we should deliver that to the filter_output process as
well.

> 
> >> >    becomes empty.
> >> > 
> >> > 3. Shut down output
> >> > 
> >> >    Close the output file descriptor.

For this one I don't know whether it'll be necessary.  That's managed
by the chardev backends now, and I think the fd won't be closed until
QEMU quits (in our bash pipeline scenario).

Regards,

-- 
Peter Xu


