
From: Alex Bennée
Subject: Re: [Qemu-devel] Cross-posted : Odd QXL/KVM performance issue with a Windows 7 Guest
Date: Tue, 05 Nov 2019 11:32:51 +0000
User-agent: mu4e 1.3.5; emacs 27.0.50

Brad Campbell <address@hidden> writes:

> On 6/9/19 21:38, Brad Campbell wrote:
>> 7022@1567775824.002106:kvm_vm_ioctl type 0xc008ae67, arg 0x7ffe13b0c970
>> 7022@1567775824.002115:kvm_vm_ioctl type 0xc008ae67, arg 0x7ffe13b0c980
>> 7022@1567775824.003122:kvm_vm_ioctl type 0xc008ae67, arg 0x7ffe13b0c970
>
>> Does this look familiar to anyone?
>
> Ugh. System timer.
>
> So with the timer interrupt removed and an added trace on IRQ > 0:
>
> qxl/guest-0: 79096403248: qxldd: DrvCopyBits
> 14955@1567780063.149527:kvm_vcpu_ioctl cpu_index 2, type 0xae80, arg (nil)
> 14956@1567780063.150291:qxl_ring_res_put 0 #res=1
> 14955@1567780063.163672:kvm_run_exit cpu_index 2, reason 2
> 14955@1567780063.163688:qxl_io_write 0 native addr=4 (QXL_IO_NOTIFY_OOM) 
> val=0 size=1 async=0
> 14955@1567780063.163704:qxl_spice_oom 0
> 14955@1567780063.163720:kvm_vcpu_ioctl cpu_index 2, type 0xae80, arg (nil)
> 14956@1567780063.163755:qxl_ring_command_check 0 native
> 14956@1567780063.163779:qxl_ring_res_push 0 native s#=0 res#=1 
> last=0x7f3c0d44b6e0 notify=yes
> 14956@1567780063.163816:qxl_ring_res_push_rest 0 ring 1/8 [326,325]
> 14956@1567780063.163841:qxl_send_events 0 1
> 14956@1567780063.163868:qxl_ring_cursor_check 0 native
> 14956@1567780063.163888:qxl_ring_command_check 0 native
> 14924@1567780063.163879:kvm_set_irq irq 11, level 1, status 1
> 14954@1567780063.163895:kvm_run_exit cpu_index 1, reason 2
> 14954@1567780063.163965:qxl_io_write 0 native addr=3 (QXL_IO_UPDATE_IRQ) 
> val=0 size=1 async=0
> 14954@1567780063.164006:kvm_set_irq irq 11, level 0, status 1
> 14954@1567780063.164029:kvm_vcpu_ioctl cpu_index 1, type 0xae80, arg (nil)
> 14954@1567780063.164065:kvm_run_exit cpu_index 1, reason 2
> 14954@1567780063.164080:qxl_io_write 0 native addr=3 (QXL_IO_UPDATE_IRQ) 
> val=0 size=1 async=0
> 14954@1567780063.164104:kvm_vcpu_ioctl cpu_index 1, type 0xae80, arg (nil)
> 14955@1567780063.266778:kvm_run_exit cpu_index 2, reason 2
> 14955@1567780063.266790:qxl_io_write 0 native addr=0 (QXL_IO_NOTIFY_CMD) 
> val=0 size=1 async=0
> 14955@1567780063.266809:kvm_vcpu_ioctl cpu_index 2, type 0xae80, arg (nil)
> 14956@1567780063.266822:qxl_ring_cursor_check 0 native
> 14956@1567780063.266842:qxl_ring_command_check 0 native
> 79213750625 qxl-0/cmd: cmd @ 0x10000000104b598 draw: surface_id 0 type copy 
> effect opaque src 100000001fecbf8 (id 9fe0870780 type 0 flags 0 width 1920 
> height 1080, fmt 8 flags 0 x 1920 y 1080 stride 7680 palette 0 data 
> 100000001fecc28) area 1920x1080+0+0 rop 8
> 14956@1567780063.266983:qxl_ring_command_get 0 native
> 14956@1567780063.267044:qxl_ring_command_check 0 native
> 14956@1567780063.267070:qxl_ring_cursor_check 0 native
> 14956@1567780063.267087:qxl_ring_command_check 0 native
> 14956@1567780063.267109:qxl_ring_command_req_notification 0
> 14955@1567780063.267967:kvm_run_exit cpu_index 2, reason 2
> 14955@1567780063.267987:qxl_io_write 0 native addr=7 (QXL_IO_LOG) val=0 
> size=1 async=0
> 14955@1567780063.268015:qxl_io_log 0 qxldd: DrvCopyBits
>
> So if I'm not mistaken (for the nth time), we have KVM_RUN on cpu index 2 
> here:
>
> 14955@1567780063.163720:kvm_vcpu_ioctl cpu_index 2, type 0xae80, arg (nil)
>
> And it returns here :
>
> 14955@1567780063.266778:kvm_run_exit cpu_index 2, reason 2
>
> Does that imply guest code is running for ~100ms on that vcpu?

Yes. In the KVM game, vmexits are what kills performance. If QEMU is
involved in doing the emulation you have to exit the guest, go through
the kernel, return from the ioctl, and then QEMU does its thing before
you re-enter the guest.
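
To make that round trip concrete, here is a minimal sketch of the
userspace side of a KVM vcpu loop. This is not QEMU's actual code,
just the bare KVM API with error handling omitted, but the ioctls
line up with your trace: KVM_RUN is the "type 0xae80, arg (nil)"
call, and exit reason 2 is KVM_EXIT_IO:

  /* Minimal sketch of the userspace side of a KVM vcpu loop,
   * assuming vcpu_fd is an open vcpu file descriptor and run points
   * at the mmap'ed struct kvm_run for that vcpu. Not QEMU's actual
   * code; error handling omitted. */
  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  static void run_vcpu(int vcpu_fd, struct kvm_run *run)
  {
      for (;;) {
          /* Enter the guest. This is the kvm_vcpu_ioctl in the
           * trace; it blocks until a vmexit the kernel cannot
           * handle on its own. */
          ioctl(vcpu_fd, KVM_RUN, 0);

          switch (run->exit_reason) {
          case KVM_EXIT_IO:   /* reason 2 in the kvm_run_exit trace */
              /* Emulate the port I/O access in userspace (e.g. the
               * QXL_IO_* writes above), then loop back into the
               * guest. */
              break;
          case KVM_EXIT_MMIO:
              /* Same round trip for an emulated memory-mapped
               * register. */
              break;
          default:
              break;
          }
      }
  }

Every iteration of that loop is a full guest -> kernel -> userspace ->
kernel -> guest transition, which is why the time between the
kvm_vcpu_ioctl and the matching kvm_run_exit matters so much.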

You can't avoid all exits - indeed VIRTIO is designed to limit them to
a single exit per transmission. However, if you are emulating a legacy
device in QEMU, every access to a memory-mapped register will involve
an exit.
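
For contrast, this is roughly how a paravirtual device sidesteps the
cost for its "doorbell" register: it registers an ioeventfd with KVM,
so a guest write is converted into an eventfd signal inside the kernel
rather than a full KVM_EXIT_MMIO round trip out to QEMU. A sketch
against the raw KVM API; the address and length are made up for
illustration:

  /* Sketch: register an ioeventfd so guest writes to a doorbell
   * register are absorbed in the kernel instead of exiting to
   * userspace. The MMIO address here is hypothetical. */
  #include <linux/kvm.h>
  #include <sys/eventfd.h>
  #include <sys/ioctl.h>

  static int register_doorbell(int vm_fd)
  {
      int efd = eventfd(0, EFD_NONBLOCK);

      struct kvm_ioeventfd kick = {
          .addr  = 0xfe003000,  /* hypothetical doorbell address */
          .len   = 4,
          .fd    = efd,
          .flags = 0,           /* no datamatch: any write signals efd */
      };

      /* From now on a guest write to .addr signals efd directly in
       * the kernel; no KVM_EXIT_MMIO, no trip through QEMU. */
      ioctl(vm_fd, KVM_IOEVENTFD, &kick);
      return efd;
  }

A legacy device like QXL can't do this for most of its registers
because the emulation actually has to run in QEMU, so each access pays
the full price.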

If you monitor QEMU with "perf record" and then look at the result
with "perf report", you can see where QEMU is spending all that time.
It's likely there is a legacy device somewhere which the guest kernel
is hammering.
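
For example, something like this (assuming the QEMU process id is in
$QEMU_PID):

  perf record -g -p $QEMU_PID -- sleep 10
  perf report

That profiles the running QEMU for 10 seconds with call graphs, which
should make a hot device emulation path stand out in the report.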

--
Alex.


