
Re: [Qemu-devel] [PATCH 1/3] iothread: provide helpers for internal use


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 1/3] iothread: provide helpers for internal use
Date: Fri, 22 Sep 2017 12:26:16 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.3.0

On 22/09/2017 12:20, Daniel P. Berrange wrote:
> On Fri, Sep 22, 2017 at 12:18:44PM +0200, Paolo Bonzini wrote:
>> On 22/09/2017 12:16, Stefan Hajnoczi wrote:
>>> I suggest adding internal IOThreads alongside user-created IOThreads
>>> instead of hiding them.  IOThread also needs a bool user_created field
>>> and a UserCreatableClass->can_be_deleted() function:
>>>
>>>   static bool iothread_can_be_deleted(UserCreatable *uc)
>>>   {
>>>       return IOTHREAD(uc)->user_created;
>>>   }
>>>
>>> This way users cannot delete internal IOThreads.
>>>
>>> But how should object ids be handled?  In theory existing -object
>>> iothread,id=<id> users could use any name.  How can QEMU generate ids
>>> for internal IOThreads without conflicting with existing users's ids?
>>
>> I would add an 'internal' boolean to query-iothreads' response and a new
>> 'show-internal' boolean to the command.  This way, applications that
>> request internal iothreads would know that the "primary key" is
>> (internal, id) rather than just the id.
> 
> What is the app going to do with iothreads if it sees "internal" flag
> set ? They have no way of knowing what part of QEMU internally is using
> this iothread, so I don't see that they can do anything intelligent
> once they find out they exist.

The application could apply default settings for scheduler policy
or CPU affinity to them.
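For instance, a management application could pin an internal iothread (found via its thread-id in query-iothreads' response) onto a housekeeping CPU set. A minimal sketch on Linux using Python's os.sched_setaffinity; the function name, the TID argument, and the CPU set are illustrative, not part of any QEMU or libvirt API:

```python
import os

def apply_internal_iothread_policy(tid, housekeeping_cpus):
    """Pin a thread (by Linux TID; 0 means the calling thread)
    to the given set of CPUs, e.g. {0} for CPU 0 only."""
    os.sched_setaffinity(tid, housekeeping_cpus)
    # return the resulting mask so callers can verify it took effect
    return os.sched_getaffinity(tid)

# demo: restrict the calling thread itself to CPU 0
print(apply_internal_iothread_policy(0, {0}))
```

A real manager would walk query-iothreads' output, pick the entries it considers internal, and call something like this per thread-id instead of hard-coding the mask.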

Unlike the main thread or the I/O threads, the monitor thread doesn't
interrupt the CPU, so it need not run at SCHED_FIFO even in real-time
setups.  Alternatively, the application could ensure that such threads
do not get in the way of VCPU or I/O threads, providing slightly more
stable performance.
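Concretely, with the 'internal' flag suggested above for query-iothreads' response, a client would key iothreads on (internal, id) rather than id alone, so a generated internal id can never collide with a user-created one. A sketch in Python; the response data, ids, and thread-ids are made up for illustration:

```python
# Hypothetical query-iothreads response including the proposed
# 'internal' field; an internal iothread may happen to reuse an id
# that a user also chose for -object iothread,id=<id>.
iothreads = [
    {"id": "iothread0", "internal": False, "thread-id": 1234},
    {"id": "iothread0", "internal": True,  "thread-id": 1235},
]

# Treat (internal, id) as the primary key, not id alone.
by_key = {(t["internal"], t["id"]): t for t in iothreads}
print(len(by_key))  # both entries survive despite identical ids
```

Keying on id alone would silently drop one of the two entries; the compound key keeps user-created and internal iothreads distinct.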

Paolo


