qemu-block

Re: [RFC] thread-pool: Add option to fix the pool size


From: Kevin Wolf
Subject: Re: [RFC] thread-pool: Add option to fix the pool size
Date: Fri, 11 Feb 2022 12:32:57 +0100

Am 03.02.2022 um 15:19 hat Stefan Hajnoczi geschrieben:
> On Thu, Feb 03, 2022 at 10:56:49AM +0000, Daniel P. Berrangé wrote:
> > On Thu, Feb 03, 2022 at 10:53:07AM +0000, Stefan Hajnoczi wrote:
> > > On Wed, Feb 02, 2022 at 06:52:34PM +0100, Nicolas Saenz Julienne wrote:
> > > > The thread pool regulates itself: when idle, it kills threads until
> > > > empty, when in demand, it creates new threads until full. This behaviour
> > > > doesn't play well with latency sensitive workloads where the price of
> > > > creating a new thread is too high. For example, when paired with qemu's
> > > > '-mlock', or using safety features like SafeStack, creating a new thread
> > > > has been measured to take multiple milliseconds.
> > > > 
> > > > In order to mitigate this let's introduce a new option to set a fixed
> > > > pool size. The threads will be created during the pool's initialization,
> > > > remain available during its lifetime regardless of demand, and destroyed
> > > > upon freeing it. A properly characterized workload will then be able to
> > > > configure the pool to avoid any latency spike.
> > > > 
> > > > Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
> > > > 
> > > > ---
> > > > 
> > > > The fix I propose here works for my specific use-case, but I'm pretty
> > > > sure it'll need to be a bit more versatile to accommodate other
> > > > use-cases.
> > > > 
> > > > Some questions:
> > > > 
> > > > - Is uniformly setting these parameters for every pool instance too
> > > >   limiting? It'd make sense to move the options into the AioContext the
> > > >   pool belongs to. IIUC, for the general block use-case, this would be
> > > >   'qemu_aio_context' as initialized in qemu_init_main_loop().
> > > 
> > > Yes, qemu_aio_context is the main loop's AioContext. It's used unless
> > > IOThreads are configured.
> > > 
> > > It's nice to have global settings that affect all AioContexts, so I
> > > think this patch is fine for now.
> > > 
> > > In the future IOThread-specific parameters could be added if individual
> > > IOThread AioContexts need tuning (similar to how poll-max-ns works
> > > today).
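
The fallback from a per-IOThread value to a global default could be sketched like this. This is illustrative C only, not QEMU's actual code; all names here are made up, and the default of 64 is borrowed from the current pool policy:

```c
/* Illustrative sketch (not QEMU code) of the semantics described
 * above: a global default that every AioContext inherits, with an
 * optional per-IOThread override, similar in spirit to how
 * poll-max-ns is tuned per iothread today. */
static int global_thread_pool_max = 64;  /* set once at startup */

typedef struct {
    int thread_pool_max_override;        /* -1 means "not overridden" */
} AioCtxTuning;

static int aio_ctx_thread_pool_max(const AioCtxTuning *ctx)
{
    return ctx->thread_pool_max_override >= 0
        ? ctx->thread_pool_max_override
        : global_thread_pool_max;
}
```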
> > > 
> > > > - Currently I'm setting two pool properties through a single qemu
> > > >   option: the pool's size and its dynamic behaviour, or lack
> > > >   thereof. I think it'd be better to split them into separate
> > > >   options. I thought of
> > > >   different ways of expressing this (min/max-size where static happens
> > > >   when min-size=max-size, size and static/dynamic, etc.), but you might
> > > >   have ideas on what could be useful to other use-cases.
> > > 
> > > Yes, "min" and "max" is more flexible than fixed-size=n. fixed-size=n is
> > > equivalent to min=n,max=n. The current default policy is min=0,max=64.
> > > If you want more threads you could do min=0,max=128. If you want to
> > > reserve 1 thread all the time use min=1,max=64.
> > > 
> > > I would go with min and max.
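
As a rough illustration of those semantics (not QEMU's actual thread-pool code; all names here are hypothetical), min/max sizing boils down to two checks:

```c
#include <stdbool.h>

/* Illustrative sketch of min/max pool sizing -- field names are
 * made up for this example, not QEMU's ThreadPool struct. */
typedef struct {
    int min_threads;   /* workers kept alive even when idle */
    int max_threads;   /* hard cap on concurrent workers */
    int cur_threads;   /* workers currently alive */
    int pending_reqs;  /* queued requests with no free worker */
} PoolSizing;

/* Spawn a new worker only when there is demand and we are under the cap. */
static bool pool_should_spawn(const PoolSizing *p)
{
    return p->pending_reqs > 0 && p->cur_threads < p->max_threads;
}

/* An idle worker may exit only while the pool stays above the floor,
 * so min=n,max=n pins the pool at a fixed size. */
static bool pool_idle_worker_may_exit(const PoolSizing *p)
{
    return p->cur_threads > p->min_threads;
}
```

With min=0,max=64 the pool can shrink to empty, matching the current default policy; with min=1,max=64 one worker is always kept around.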
> > 
> > This commit also exposes this as a new top level command line
> > argument. Given our aim to eliminate QemuOpts and use QAPI/QOM
> > properties for everything I think we need a different approach.
> > 
> > I'm not sure which existing QAPI/QOM option is most appropriate
> > to graft these tunables onto. -machine? -accel? Or is there
> > no good fit yet?

I would agree that it should be QAPI, but just like QemuOpts doesn't
require that you shoehorn it into an existing option, QAPI doesn't
necessarily either if that's the interface that we want. You could just
create a new QAPI struct for it and parse the new option into that. This
would already be an improvement over this RFC.

Now, whether we actually want a new top-level option is a different
question (we usually try to avoid it), but it's not related to QAPI vs.
QemuOpts.
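
A minimal sketch of what such a struct might look like on the C side, assuming a hypothetical 'ThreadPoolOptions' QAPI type and the min=0,max=64 default policy mentioned in this thread (the 'has_' flags mirror how generated QAPI code marks optional members):

```c
#include <stdbool.h>
#include <stdint.h>

/* Rough shape of the C struct a new QAPI definition might generate --
 * illustrative only, not actual qapi-gen output. */
typedef struct ThreadPoolOptions {
    bool has_min;
    int64_t min;
    bool has_max;
    int64_t max;
} ThreadPoolOptions;

/* Fill in the current default policy (min=0, max=64) for any
 * member the user left unset. */
static void thread_pool_options_apply_defaults(ThreadPoolOptions *opts)
{
    if (!opts->has_min) {
        opts->min = 0;
        opts->has_min = true;
    }
    if (!opts->has_max) {
        opts->max = 64;
        opts->has_max = true;
    }
}
```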

> Yep, I didn't comment on this because I don't have a good suggestion.
> 
> In terms of semantics I think we should have:
> 
> 1. A global default value that all new AioContext take. The QEMU main
>    loop's qemu_aio_context will use this and all IOThread AioContext
>    will use it (unless they have been overridden).
> 
>    I would define it on --machine because that's the "global" object for
>    a guest, but that's not very satisfying.

Semantically, -machine is about the virtual hardware whereas iothreads
are about the backend, so I agree it's not a good fit.

For the main thread, you may want to configure all the same options that
you can configure for an iothread. So to me that sounds like we would
want to allow using an iothread object for the main thread, too.

That would still require us to tell QEMU which iothread object should be
used for the main thread, though.

> 2. (Future patch) --object iothread,thread-pool-min=N,thread-pool-max=M
>    just like poll-max-ns and friends. This allows the values to be set
>    on a per-IOThread basis.

And to be updated with qom-set. (Which is again something that you'll
want for the main thread, too.)
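
A runtime update over QMP might then look like this. The qom-set command and its path/property/value arguments exist today; the thread-pool-max property name is the one proposed in this thread, not an existing interface, and /objects/iothread0 is a hypothetical object path:

```json
{ "execute": "qom-set",
  "arguments": { "path": "/objects/iothread0",
                 "property": "thread-pool-max",
                 "value": 128 } }
```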

Kevin
