Re: [Qemu-devel] [PATCH v3] monitor: let cur_mon be per-thread


From: Markus Armbruster
Subject: Re: [Qemu-devel] [PATCH v3] monitor: let cur_mon be per-thread
Date: Thu, 24 May 2018 13:13:43 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.3 (gnu/linux)

Peter Xu <address@hidden> writes:

> On Thu, May 24, 2018 at 10:22:48AM +0200, Markus Armbruster wrote:
>> Peter Xu <address@hidden> writes:
>> 
>> > On Wed, May 23, 2018 at 03:13:07PM +0200, Markus Armbruster wrote:
>> >> Peter Xu <address@hidden> writes:
>> >> 
>> >> > On Wed, May 23, 2018 at 10:23:22AM +0200, Markus Armbruster wrote:
>> >> >> Peter Xu <address@hidden> writes:
>> >> >> 
>> >> >> > In the future the monitor iothread may be accessing the cur_mon as
>> >> >> > well (via monitor_qmp_dispatch_one()).  Before we introduce a real
>> >> >> > Out-Of-Band command,
>> >> >> 
>> >> >> Uh, in what way are the commands marked allow-oob: true now not real?
>> >> >> These are migrate-recover, migrate-pause, x-oob-test.
>> >> >
>> >> > x-oob-test is unreal; the rest are real.
>> >> 
>> >> Sounds like the commit message needs tweaking then.
>> >
> Ah yes.  When I drafted this patch we didn't have the other two
> commands yet.  They only appeared with the postcopy recovery series.
>> >
>> > Do you want me to repost one?
>> 
>> Repost is fine, but I'm also happy to edit commit messages when I apply
>> to qapi-next.
>
> If this is the only thing to touch up, I would appreciate it if you
> could do that, to avoid message bounces.

Okay.

> [...]
>
>> >> > When there is a better suggestion we can remove x-oob-test, but I
>> >> > can't see any so far.
>> >> 
>> >> The bit we actually need is having a command block on something we
>> >> control.  You picked a QemuSemaphore, internal to QEMU.  What about some
>> >> blocking system call?  Here's my idea:
>> >> 
>> >> 0. Create a named pipe with mkfifo(1)
>> >> 
>> >> 1. Issue some (non-OOB) QMP command that opens this pipe.  open()
>> >>    blocks.
>> >> 
>> >> 2. Issue the OOB-command, verify it completes
>> >> 
>> >> 3. If the non-OOB command writes to the pipe, suck the pipe dry, say
>> >>    with cat pipe >/dev/null.  If it reads, give it something to read,
>> >>    say echo >pipe.
>> >> 
>> >> To make 3. complete quickly with a command that writes, you need one
>> >> that writes only a small amount of data.  Candidates: memsave, pmemsave,
>> >> screendump.
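>> >> 
>> >> To make the blocking concrete, here is a minimal standalone sketch
>> >> (not QEMU code; the path and the missing error handling are purely
>> >> illustrative).  Opening a fifo for writing blocks until a reader
>> >> opens the other end, which is exactly what step 1 relies on:
>> >> 
>> >>     #include <fcntl.h>
>> >>     #include <sys/stat.h>
>> >>     #include <unistd.h>
>> >> 
>> >>     int main(void)
>> >>     {
>> >>         const char *path = "/tmp/test.fifo";   /* illustrative */
>> >> 
>> >>         mkfifo(path, 0600);            /* step 0: create the fifo */
>> >>         /* Blocks until a reader appears - just like the non-OOB
>> >>          * command blocking in its open() during step 1. */
>> >>         int fd = open(path, O_WRONLY);
>> >>         if (fd >= 0) {
>> >>             write(fd, "x", 1);
>> >>             close(fd);
>> >>         }
>> >>         unlink(path);
>> >>         return 0;
>> >>     }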
>> >
>> > I am not sure I have fully understood above, please correct me if I
>> > haven't.
>> >
>> > IMHO they are similar ideas, just like when we use semaphores.
>> > Semaphores are QEMU-internal, but underneath they also make
>> > syscalls (I believe we use futex() for semaphores, though that
>> > happens in libc code, not in QEMU).  I believe fifos are as easy to
>> > use as semaphores, but I don't see much difference here - a fifo
>> > just uses its own way to do a similar thing.  On the kernel side,
>> > both of them basically do:
>> >
>> > (1) schedule() the process out to wait for the event
>> > (2) wake_up() by another process
>> >
>> > From that point of view, they are the same to me.
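>> >
>> > For illustration, the semaphore variant boils down to roughly this
>> > shape (hypothetical names, not the actual QEMU code; QemuSemaphore
>> > with qemu_sem_wait()/qemu_sem_post() are the real primitives from
>> > qemu/thread.h):
>> >
>> >     static QemuSemaphore test_sem;  /* qemu_sem_init(&test_sem, 0) */
>> >
>> >     void qmp_block_test(bool unblock, Error **errp)
>> >     {
>> >         if (unblock) {
>> >             qemu_sem_post(&test_sem);  /* the wake_up() side */
>> >         } else {
>> >             qemu_sem_wait(&test_sem);  /* the schedule()-out side */
>> >         }
>> >     }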
>> 
>> The difference is we don't need a special command just for the test.
>
> Ah, I see the point now - an existing command that opens a fifo.
> Yes, that works too, though we'll need to find a good candidate, do
> some work to prepare the testbed, and make sure there are no side
> effects.  Anyway, I am totally fine with it if we really want it that
> way.

Separate patch, not blocking this series.

>> > In other words, since we don't allow OOB-capable command handlers to
>> > take risky locks, we should also extend that idea to any operation
>> > that might block for some time.  Reading a fifo here is the same as
>> > taking a risky lock - we need to make sure the fifo read won't block
>> > for long, or we should never do it in an OOB command handler.
>> 
>> Absolutely.  Any code that can run in an OOB command is realtime code,
>> and that means it must not block for an indeterminate time.  Synchronous
>> read() is right out.
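>> 
>> For example, none of this belongs in an OOB command handler (an
>> illustrative anti-example with placeholder identifiers, not real
>> QEMU code):
>> 
>>     void qmp_some_oob_command(Error **errp)
>>     {
>>         char buf[64];
>> 
>>         read(fd, buf, sizeof(buf));   /* BAD: may block indefinitely */
>>         qemu_mutex_lock(&big_lock);   /* BAD: waits on current holder */
>>         qemu_sem_wait(&some_sem);     /* BAD: same problem */
>>     }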
>> 
>> >> >> >                      let's convert the cur_mon variable to be a
>> >> >> > per-thread variable to make sure there won't be a race between
>> >> >> > threads.
>> >> >> >
>> >> >> > Note that thread variables are not initialized to a valid value
>> >> >> > when a new thread is created.  However, in our case we don't need
>> >> >> > to set it up, since the cur_mon variable is only used in such a
>> >> >> > pattern:
>> >> >> >
>> >> >> >   old_mon = cur_mon;
>> >> >> >   cur_mon = xxx;
>> >> >> >   (do something, read cur_mon if necessary in the stack)
>> >> >> >   cur_mon = old_mon;
>> >> >> >
>> >> >> > It plays the role of a stack variable, so it does not need to be
>> >> >> > initialized at all.  We only need to make sure the variable won't
>> >> >> > be changed unexpectedly by other threads.
>> >> >> 
>> >> >> Do we plan to keep switching cur_mon forever?  Or do we intend to work
>> >> >> towards a 1:1 association between Monitor struct and monitor thread?
>> >> >
>> >> > I still don't see a good way to remove the cur_mon switching... E.g.,
>> >> > in qmp_human_monitor_command() we'll switch no matter what.
>> >> 
>> >> That one's a hack.  We can make exceptions for hacks.  Where else do we
>> >> switch?
>> >
>> > Yeah, the major one should be monitor_qmp_dispatch_one().  If we can
>> > have one thread per monitor, it seems we could at least omit the
>> > switching for that one.
>> 
>> Yes, please.
>
> Note that currently it's still not 1:1 - we only have one thread for
> all the monitors, so it seems we still cannot omit it.  And I'm not
> sure whether we should go 1:1 only to omit this cur_mon fiddling,
> since that might be overkill.

If we decide we want 1:1, we should make 1:1 as obvious as we can in its
implementation.  That means getting rid of cur_mon switching.  Anyway,
not an immediate worry.
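
For reference, the essence of the patch under discussion is making the
variable thread-local, so the save/restore pattern quoted above can only
be disturbed from within a thread, never by another one (a sketch of the
core change, not the full patch):

    /* monitor.c, before: one global shared by all threads */
    Monitor *cur_mon;

    /* after: each thread gets its own copy */
    __thread Monitor *cur_mon;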


