Re: [Qemu-devel] Using aio_poll for timer carrier threads


From: liu ping fan
Subject: Re: [Qemu-devel] Using aio_poll for timer carrier threads
Date: Wed, 14 Aug 2013 08:48:24 +0800

On Tue, Aug 13, 2013 at 10:13 PM, Jan Kiszka <address@hidden> wrote:
> On 2013-08-13 15:45, Stefan Hajnoczi wrote:
>> On Tue, Aug 13, 2013 at 09:56:17AM +0200, Jan Kiszka wrote:
>>> in the attempt to use Alex' ppoll-based timer rework for decoupled,
>>> real-time capable timer device models I'm now scratching my head over
>>> the aio_poll interface. I'm looking at dataplane/virtio-blk.c, just finding
>>>
>>> static void *data_plane_thread(void *opaque)
>>> {
>>>     VirtIOBlockDataPlane *s = opaque;
>>>
>>>     do {
>>>         aio_poll(s->ctx, true);
>>>     } while (!s->stopping || s->num_reqs > 0);
>>>     return NULL;
>>> }
>>>
>>> wondering where the locking is. Or doesn't this use need any at all? Are
>>> all data structures that this thread accesses exclusively used by it, or
>>> are they all accessed in a lock-less way?
>>
>> Most of the data structures in dataplane upstream are not shared.
>> Virtio, virtio-blk, and Linux AIO raw file I/O are duplicated for
>> dataplane and do not rely on QEMU infrastructure.
>>
>> I've been working on undoing this duplication over the past months but
>> upstream QEMU still mostly does not share data structures and therefore
>> does not need much synchronization.  For the crude synchronization that
>> we do need we simply start/stop the dataplane thread.
>>
>>> Our iothread mainloop more or less open-codes aio_poll and is thus
>>> able to drop its lock before falling asleep while still holding it
>>> during event dispatching. Obviously, I need the same when processing
>>> timer lists of an AioContext, protecting them against concurrent
>>> modifications from VCPUs or other threads. So I'm thinking of adding a
>>> block notification callback to aio_poll, to be called before/after
>>> qemu_poll_ns, so that any locks can be dropped/reacquired as needed. Or
>>> am I missing some magic interface/pattern?
>>
>> Upstream dataplane does not use timers, so the code there cannot serve
>> as an example.
>>
>> If you combine Alex Bligh, Ping Fan, and my latest timer series, you get
>> support for QEMUTimer in AioContexts where qemu_timer_mod_ns() and
>> qemu_timer_del() are thread-safe.  vm_clock (without icount) and
>> rt_clock are thread-safe clock sources.
>
> To which series of yours and Ping Fan are you referring? [1] and [2]?
>
Stefan's [1] has been rebased onto Alex's v10.
My part is [2'] http://thread.gmane.org/gmane.comp.emulators.qemu/227751,
rebased onto v10 as well.

>>
>> This should make timers usable in another thread for clock device
>> emulation if only your iothread uses the AioContext and its timers
>> (besides the thread-safe mod/del interfaces).
>
> As argued in the other thread, I don't think we need (and want) locking
> in the timer subsystem; rather, we should push this to its users. But I'll
> look at your patches again to see whether they are usable as well.
>
>>
>> The details depend on your device, do you have a git repo I can look at
>> to understand your device model?
>
> Pushed my hacks here:
>
> git://git.kiszka.org/qemu.git queues/rt.new3
>
> Jan
>
> [1] http://thread.gmane.org/gmane.comp.emulators.qemu/227590
> [2] http://thread.gmane.org/gmane.comp.emulators.qemu/226369
>
> --
> Siemens AG, Corporate Technology, CT RTC ITP SES-DE
> Corporate Competence Center Embedded Linux
