

From: Paolo Bonzini
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH 14/17] block: optimize access to reqs_lock
Date: Thu, 4 May 2017 18:06:39 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0


On 04/05/2017 16:59, Stefan Hajnoczi wrote:
> On Thu, Apr 20, 2017 at 02:00:55PM +0200, Paolo Bonzini wrote:
>> Hot path reqs_lock critical sections are very small; the only large critical
>> sections happen when a request waits for serialising requests, and these
>> should not happen in normal circumstances.
>>
>> We do not want these small critical sections to yield under any
>> circumstances, which calls for using a spinlock while writing the list.
> 
> Is this patch purely an optimization?

Yes, it is, and pretty much a no-op until we have true multiqueue.  But
I expect it to have a significant effect for multiqueue.

> I'm hesitant about using spinlocks in userspace.  There are cases where
> the thread is descheduled that are beyond our control.  Nested virt will
> probably make things worse.  People have been optimizing and trying
> paravirt approaches to kernel spinlocks for these reasons for years.

This is true, but here we're talking about a 5-10 instruction window for
preemption; it matches the usage of spinlocks in other parts of QEMU.
The long critical sections, which only happen in combination with
copy-on-read or RMW (large logical block sizes on the host), take the
CoMutex.
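
To make that split concrete, here is a rough self-contained sketch.  Plain
pthreads stand in for QEMU's QemuSpin and CoMutex, and the Req structure,
list and helper names are made up for illustration, not taken from the
actual block layer code:

/*
 * Sketch only: a spinlock guards the few-instruction list updates on the
 * hot path, while the rare long wait for serialising requests sleeps on a
 * separate lock (the role the CoMutex plays in the patch).
 */
#include <pthread.h>
#include <stdbool.h>
#include <sys/queue.h>

typedef struct Req {
    LIST_ENTRY(Req) next;
    bool serialising;
} Req;

static LIST_HEAD(ReqList, Req) tracked = LIST_HEAD_INITIALIZER(tracked);
static pthread_spinlock_t reqs_lock;   /* hot path: never yields */
static pthread_mutex_t wait_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wait_cond = PTHREAD_COND_INITIALIZER;

static void reqs_lock_init(void)
{
    pthread_spin_init(&reqs_lock, PTHREAD_PROCESS_PRIVATE);
}

static void track_request(Req *req)
{
    /* A handful of instructions: not worth yielding the CPU for. */
    pthread_spin_lock(&reqs_lock);
    LIST_INSERT_HEAD(&tracked, req, next);
    pthread_spin_unlock(&reqs_lock);
}

static bool have_serialising_requests(void)
{
    Req *r;
    bool found = false;

    pthread_spin_lock(&reqs_lock);
    LIST_FOREACH(r, &tracked, next) {
        if (r->serialising) {
            found = true;
            break;
        }
    }
    pthread_spin_unlock(&reqs_lock);
    return found;
}

static void untrack_request(Req *req)
{
    pthread_spin_lock(&reqs_lock);
    LIST_REMOVE(req, next);
    pthread_spin_unlock(&reqs_lock);

    /* Wake up anyone waiting for serialising requests to drain. */
    pthread_mutex_lock(&wait_lock);
    pthread_cond_broadcast(&wait_cond);
    pthread_mutex_unlock(&wait_lock);
}

static void wait_for_serialising_requests(void)
{
    /* The rare, potentially long wait blocks on a sleeping lock instead. */
    pthread_mutex_lock(&wait_lock);
    while (have_serialising_requests()) {
        pthread_cond_wait(&wait_cond, &wait_lock);
    }
    pthread_mutex_unlock(&wait_lock);
}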

On one hand it's true that the more you nest, the worse things get.
On the other hand there can only ever be contention with multiqueue,
and the multiqueue scenarios are going to use pinning.

> Isn't a futex-based lock efficient enough?  That way we don't hog the
> CPU when there is contention.

It is efficient when there is no contention, but when there is, the
latency goes up by several orders of magnitude.
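
As a rough illustration of the kind of measurement behind that claim (again
made up for this mail, compile with -pthread): a pthread mutex is futex-based
on Linux, so the uncontended path is a couple of atomic operations, but a
contended acquisition goes through the futex() syscall and the scheduler.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool stop;

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Second thread hammering the same lock to force the slow path. */
static void *contend(void *arg)
{
    (void)arg;
    while (!atomic_load(&stop)) {
        pthread_mutex_lock(&lock);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void measure(const char *label, int iterations)
{
    uint64_t start = now_ns();

    for (int i = 0; i < iterations; i++) {
        pthread_mutex_lock(&lock);
        pthread_mutex_unlock(&lock);
    }
    printf("%s: %.1f ns per lock/unlock pair\n",
           label, (double)(now_ns() - start) / iterations);
}

int main(void)
{
    pthread_t thr;
    const int iterations = 1000000;

    /* Fast path only: no other thread touches the lock. */
    measure("uncontended", iterations);

    /* With contention, many acquisitions sleep in the kernel. */
    pthread_create(&thr, NULL, contend, NULL);
    measure("contended", iterations);

    atomic_store(&stop, true);
    pthread_join(&thr, NULL);
    return 0;
}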

Paolo

> Also, there are no performance results included in this patch that
> justify the spinlock.
> 


