Re: [Qemu-devel] [PATCH] kvm: First step to push iothread lock out of inner run loop


From: Jan Kiszka
Subject: Re: [Qemu-devel] [PATCH] kvm: First step to push iothread lock out of inner run loop
Date: Sat, 23 Jun 2012 11:11:48 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2012-06-23 00:59, Anthony Liguori wrote:
> On 06/22/2012 05:45 PM, Jan Kiszka wrote:
>> This sketches a possible path to get rid of the iothread lock on vmexits
>> in KVM mode. On x86, the in-kernel irqchip has to be used because
>> we otherwise need to synchronize APIC and other per-cpu state accesses
>> that could be changed concurrently. Not yet fully analyzed is the NMI
>> injection path in the absence of an APIC.
>>
>> s390x should be fine without specific locking as its pre/post-run
>> callbacks are empty. Power requires locking for the pre-run callback.
>>
>> This patch is untested, but a similar version was successfully used in
>> an x86 setup with a network I/O path that needed no central iothread
>> locking anymore (required special MMIO exit handling).
>> ---
>>   kvm-all.c         |   18 ++++++++++++++++--
>>   target-i386/kvm.c |    7 +++++++
>>   target-ppc/kvm.c  |    4 ++++
>>   3 files changed, 27 insertions(+), 2 deletions(-)
>>
>> diff --git a/kvm-all.c b/kvm-all.c
>> index f8e4328..9c3e26f 100644
>> --- a/kvm-all.c
>> +++ b/kvm-all.c
>> @@ -1460,6 +1460,8 @@ int kvm_cpu_exec(CPUArchState *env)
>>           return EXCP_HLT;
>>       }
>>
>> +    qemu_mutex_unlock_iothread();
>> +
>>       do {
>>           if (env->kvm_vcpu_dirty) {
>>               kvm_arch_put_registers(env, KVM_PUT_RUNTIME_STATE);
>> @@ -1476,14 +1478,16 @@ int kvm_cpu_exec(CPUArchState *env)
>>                */
>>               qemu_cpu_kick_self();
>>           }
>> -        qemu_mutex_unlock_iothread();
>>
>>           run_ret = kvm_vcpu_ioctl(env, KVM_RUN, 0);
>>
>> -        qemu_mutex_lock_iothread();
>>           kvm_arch_post_run(env, run);
>>
>> +        /* TODO: push coalesced mmio flushing to the point where we access
>> +         * devices that are using it (currently VGA and E1000). */
>> +        qemu_mutex_lock_iothread();
>>           kvm_flush_coalesced_mmio_buffer();
>> +        qemu_mutex_unlock_iothread();
>>
>>           if (run_ret < 0) {
>>               if (run_ret == -EINTR || run_ret == -EAGAIN) {
>> @@ -1499,19 +1503,23 @@ int kvm_cpu_exec(CPUArchState *env)
>>           switch (run->exit_reason) {
>>           case KVM_EXIT_IO:
>>               DPRINTF("handle_io\n");
>> +            qemu_mutex_lock_iothread();
>>               kvm_handle_io(run->io.port,
>>                             (uint8_t *)run + run->io.data_offset,
>>                             run->io.direction,
>>                             run->io.size,
>>                             run->io.count);
>> +            qemu_mutex_unlock_iothread();
>>               ret = 0;
>>               break;
>>           case KVM_EXIT_MMIO:
>>               DPRINTF("handle_mmio\n");
>> +            qemu_mutex_lock_iothread();
>>               cpu_physical_memory_rw(run->mmio.phys_addr,
>>                                      run->mmio.data,
>>                                      run->mmio.len,
>>                                      run->mmio.is_write);
>> +            qemu_mutex_unlock_iothread();
>>               ret = 0;
>>               break;
>>           case KVM_EXIT_IRQ_WINDOW_OPEN:
>> @@ -1520,7 +1528,9 @@ int kvm_cpu_exec(CPUArchState *env)
>>               break;
>>           case KVM_EXIT_SHUTDOWN:
>>               DPRINTF("shutdown\n");
>> +            qemu_mutex_lock_iothread();
>>               qemu_system_reset_request();
>> +            qemu_mutex_unlock_iothread();
>>               ret = EXCP_INTERRUPT;
>>               break;
>>           case KVM_EXIT_UNKNOWN:
>> @@ -1533,11 +1543,15 @@ int kvm_cpu_exec(CPUArchState *env)
>>               break;
>>           default:
>>               DPRINTF("kvm_arch_handle_exit\n");
>> +            qemu_mutex_lock_iothread();
>>               ret = kvm_arch_handle_exit(env, run);
>> +            qemu_mutex_unlock_iothread();
>>               break;
>>           }
>>       } while (ret == 0);
>>
>> +    qemu_mutex_lock_iothread();
>> +
>>       if (ret < 0) {
>>           cpu_dump_state(env, stderr, fprintf, CPU_DUMP_CODE);
>>           vm_stop(RUN_STATE_INTERNAL_ERROR);
>> diff --git a/target-i386/kvm.c b/target-i386/kvm.c
>> index 0d0d8f6..0ad64d1 100644
>> --- a/target-i386/kvm.c
>> +++ b/target-i386/kvm.c
>> @@ -1631,7 +1631,10 @@ void kvm_arch_pre_run(CPUX86State *env, struct kvm_run *run)
>>
>>       /* Inject NMI */
>>       if (env->interrupt_request & CPU_INTERRUPT_NMI) {
> 
> Strictly speaking, wouldn't we need to use testbit() and setbit()?  I
> would expect that, at the very least, a barrier would be needed.

I need to think about this as well. We have ignored it so far and only
noticed it while hacking up this patch.
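
One (completely untested) option would be to turn the check into an atomic
read-modify-write so that a concurrent cpu_interrupt() from the iothread
cannot get lost. Rough sketch only, with a made-up helper name;
cpu_interrupt() would need the matching atomic OR on the setter side, and
the error handling around KVM_NMI is left out:

/* Sketch, not part of the patch: clear CPU_INTERRUPT_NMI with an atomic
 * read-modify-write.  The __sync builtin also acts as a full memory
 * barrier, which should cover the ordering concern above. */
static inline bool kvm_take_nmi_request(CPUX86State *env)
{
    uint32_t old = __sync_fetch_and_and(&env->interrupt_request,
                                        ~CPU_INTERRUPT_NMI);

    return old & CPU_INTERRUPT_NMI;
}

/* ... and in kvm_arch_pre_run(): */
if (kvm_take_nmi_request(env)) {
    DPRINTF("injected NMI\n");
    kvm_vcpu_ioctl(env, KVM_NMI);
}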

> 
> Looks pretty nice overall.  I'll need to apply and spend some time
> carefully walking through the code.

Without getting the coalesced MMIO flushing out of the way, this does not
buy us that much yet. But I have some ideas...
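
Just to sketch one of them (invented wrapper name, completely untested):
the few devices that register coalesced MMIO could drain the buffer
themselves at the top of their MMIO handlers (they run under the iothread
lock anyway once KVM_EXIT_MMIO takes it), so kvm_cpu_exec() would no longer
have to grab the lock just for the flush. For VGA, roughly:

/* Idea sketch only: vga_coalesced_mem_read() is a hypothetical wrapper
 * around the existing vga_mem_read() handler in hw/vga.c. */
static uint64_t vga_coalesced_mem_read(void *opaque, target_phys_addr_t addr,
                                       unsigned size)
{
    if (kvm_enabled()) {
        /* drain writes that KVM batched for this region */
        kvm_flush_coalesced_mmio_buffer();
    }
    return vga_mem_read(opaque, addr, size);
}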

Jan
