From: Jan Kiszka
Subject: Re: [Qemu-devel] [RFC] [PATCHv10 00/31] aio / timers: Add AioContext timers and use ppoll
Date: Tue, 13 Aug 2013 14:57:33 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2013-08-13 14:44, Alex Bligh wrote:
> 
> On 13 Aug 2013, at 13:22, Jan Kiszka wrote:
> 
>> With tweaking I mean:
>>
>> bool aio_poll(AioContext *ctx, bool blocking,
>>              void (*blocking_cb)(bool, void *),
>>              void *blocking_cb_opaque);
>>
>> i.e. adding a callback that aio_poll will invoke before and right after
>> waiting for events/timeouts. This allows dropping/reacquiring locks that
>> protect data structures used both by the timer thread and other threads
>> running the device model.
> 
> That's interesting. I didn't give a huge amount of thought
> to thread extensibility (not least as the locking needed
> fixing first), but the model I had in my head was not that
> the locks were taken on exit from qemu_poll_ns and
> released on entry to it, but rather that the individual
> dispatch functions and timer functions called only took whatever
> locks they needed, as and when they needed them. I.e. everything
> would already be unlocked prior to calling qemu_poll_ns.
> I suppose both would work.

Well, all the timer machinery requires some locking as well. So one
option is to add this to the core; the other, the one that I'm
following, is to push the locking to the timer users. The advantage of
the latter approach is that you can often reuse existing locks instead
of adding many new ones, which easily leads to lock-ordering issues.
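
(For the first option, the hook from the quoted prototype would be used
roughly like this on the caller side. This is only a sketch: apart from
the proposed aio_poll() signature, the callback and the names here are
invented for illustration.)

#include "qemu/thread.h"

static void drop_lock_while_blocked(bool entering_wait, void *opaque)
{
    QemuMutex *lock = opaque;   /* e.g. an existing device model lock */

    if (entering_wait) {
        /* About to block in ppoll(): release the lock so the timer
         * thread and other device-model threads can make progress. */
        qemu_mutex_unlock(lock);
    } else {
        /* Woken up again: re-take the lock before handlers and timer
         * callbacks get dispatched. */
        qemu_mutex_lock(lock);
    }
}

    ...
    aio_poll(ctx, true, drop_lock_while_blocked, &dev_lock);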

With the latter approach, the locks to be reused are, of course, the BQL
or device model locks, as in my RTC scenario. Or think of a networking
backend like slirp: the TCP timers could run under the same lock that
also protects the rest of a slirp instance's state machine. Granted, I'm
not sure we gain a lot by threading slirp, but the concept remains the same.
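
As a sketch only (slirp has no per-instance lock today, so the lock
field below is an assumption), a TCP timer callback under that scheme
could look like:

static void slirp_tcp_timer_cb(void *opaque)
{
    Slirp *slirp = opaque;

    /* Reuse the same hypothetical lock that would protect the rest of
     * the slirp instance's state machine. */
    qemu_mutex_lock(&slirp->lock);
    tcp_slowtimo(slirp);    /* existing periodic TCP timer processing */
    qemu_mutex_unlock(&slirp->lock);
}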

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux


