Re: [Qemu-devel] [PATCH 0/5] Spread the use of QEMU threading & locking API
From: Jan Kiszka
Subject: Re: [Qemu-devel] [PATCH 0/5] Spread the use of QEMU threading & locking API
Date: Thu, 05 Apr 2012 16:01:14 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666
On 2012-04-05 15:40, Paolo Bonzini wrote:
> On 05/04/2012 15:00, Jan Kiszka wrote:
>>>> But QemuEvent takes away the best name for a useful concept (a
>>>> cross-platform implementation of Win32 events; you can see that in the
>> The concept is not lost; it perfectly fits this incarnation. Just the
>> special futex version for Linux is not feasible.
>
> It's not just about the futex version. Can you implement a
> userspace-only fast path? Perhaps with EFD_SEMAPHORE you can:
>
>   x = state of the event
>     bit 0 = set/reset
>     bits 1..31 = waiter count
>
>   set:
>     y = xchg(&x, 1)
>     if y > 1
>       write y >> 1 to eventfd
>
>   wait:
>     do {
>       y = x
>       if (y & 1) return;
>     } while (fail to cmpxchg x from y to y + 2)
>     read from eventfd
>
>   reset:
>     cmpxchg x from 1 to 0
>
> but what if you are falling back to pipes?
Either you signal via the fd or via a variable. Doing both won't work as
the state can only be in the eventfd/pipe (for external triggers). We
could switch the mode of our QemuEvent on init, but that will become
ugly I'm afraid.
>
> 2) It's much more heavyweight since (like Windows primitives) you need
> to set aside OS resources for each QemuEvent. With mutexes and condvars
> the kernel-side waitqueues come and go as they are used.
>
>>>> RCU patches which were even posted on the list). We already have a
>>>> perfectly good name for EventNotifiers, and there's no reason to break
>>>> the history of event-notifier.c.
>> Have you measured if the futex optimization is actually worth the
>> effort, specifically compared to the fast path of mutex/cond loop?
>
> A futex is 30% faster than the mutex/cond combination. It's called on
> fast paths (call_rcu and, depending on how you implement RCU,
> rcu_read_unlock) so it's important.
If RCU is the only user for this optimized signaling, then I would vote
for doing it in the RCU layer directly. If there are also other users in
sight that could benefit (because of mostly-set-rarely-reset patterns),
I agree that a QemuEvent is the better home. Can you name more use cases
in QEMU?
Happy vacations,
Jan (off for Easter now)
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux