
Re: [Qemu-discuss] High host CPU load and slow Windows 10 vm after upgrade to 1803


From: utdilya
Subject: Re: [Qemu-discuss] High host CPU load and slow Windows 10 vm after upgrade to 1803
Date: Tue, 31 Jul 2018 13:35:59 +0300

Thank you.
This solution is not working for me, maybe because of CentOS 7.
I have these errors in the log:
3707 : host doesn't support hyperv 'relaxed' feature
3707 : host doesn't support hyperv 'vapic' feature
3707 : host doesn't support hyperv 'spinlocks' feature

/usr/libexec/qemu-kvm --version
QEMU emulator version 1.5.3 (qemu-kvm-1.5.3-156.el7_5.3)

/usr/libexec/qemu-kvm -M ?
Supported machines are:
none                 empty machine
pc                   RHEL 7.0.0 PC (i440FX + PIIX, 1996) (alias of pc-i440fx-rhel7.0.0)
pc-i440fx-rhel7.0.0  RHEL 7.0.0 PC (i440FX + PIIX, 1996) (default)
rhel6.6.0            RHEL 6.6.0 PC
rhel6.5.0            RHEL 6.5.0 PC
rhel6.4.0            RHEL 6.4.0 PC
rhel6.3.0            RHEL 6.3.0 PC
rhel6.2.0            RHEL 6.2.0 PC
rhel6.1.0            RHEL 6.1.0 PC
rhel6.0.0            RHEL 6.0.0 PC
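
For context, the fix in the answer linked below enables Hyper-V enlightenments in the libvirt domain XML; the "host doesn't support" messages above indicate the QEMU/kernel combination here is too old for them. On a newer stack, the relevant domain XML fragment looks roughly like this (a hedged sketch: the retries value and clock settings are illustrative, not taken from this thread):

```xml
<!-- Sketch of the Hyper-V enlightenments and clock settings for a
     Windows 10 guest; requires a QEMU/KVM stack that supports them. -->
<features>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
  </hyperv>
</features>
<clock offset='localtime'>
  <timer name='hypervclock' present='yes'/>
  <timer name='rtc' tickpolicy='catchup'/>
</clock>
```

With `hypervclock` available, the 1803 guest can use the Hyper-V reference time source instead of hammering the emulated RTC.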


On 31.07.2018 13:14, Giovanni Panozzo wrote:
> See the solution posted by MKHR on askubuntu.com:
> 
> https://askubuntu.com/questions/1033985/kvm-high-host-cpu-load-after-upgrading-vm-to-windows-10-1803
> 
> 
> 
> Giovanni
> 
> On 31/07/2018 12:10, utdilya wrote:
>> Hello.
>> I have this problem too, on CentOS 7 with qemu-kvm, on 4 different
>> machines.
>> Please tell me, have you resolved this problem?
>> After upgrading to 1803, my Windows 10 guests generate too many
>> interrupts, more than 2000. I think they are RTC (Real-Time Clock)
>> interrupts. Because of this, the virtual machine has 10-20% CPU load
>> in the idle state. See attachment:
>>
>> Thank you.
>>
>>
>> On 10.06.2018 22:38, Giovanni Panozzo wrote:
>>> Hi all, I'm new to this mailing list.
>>>
>>> After upgrading some VMs from Windows 10 1709 to Windows 10 1803, the
>>> VMs run slower, and when a VM is almost idle, host CPU load is quite
>>> high.
>>>
>>> It happens on 4 different hardware platforms (AMD FX 4300 and Intel
>>> Core i3/i5), with Arch and Ubuntu 16.04/18.04 with libvirt. I already
>>> asked for help on askubuntu.com and opened a bug against QEMU, with
>>> no answers.
>>>
>>> So I continued my investigation, but it is very difficult for me,
>>> having limited time and limited knowledge of KVM/QEMU.
>>>
>>> perf kvm --host stat live reports:
>>>
>>> Analyze events for all VMs, all VCPUs:
>>>
>>>               VM-EXIT    Samples  Samples%     Time%   Min Time     Max Time     Avg time
>>>
>>>        IO_INSTRUCTION      17379    54.45%    49.37%     4.73us    5274.05us     40.94us ( +-   1.89% )
>>>              MSR_READ       5382    16.86%     1.56%     2.24us    2126.01us      4.17us ( +-  12.26% )
>>>         EPT_VIOLATION       3183     9.97%     3.63%     2.83us    8829.17us     16.44us ( +-  24.23% )
>>>             MSR_WRITE       2425     7.60%     0.80%     3.12us     220.26us      4.77us ( +-   1.96% )
>>>    EXTERNAL_INTERRUPT       1464     4.59%     3.05%     1.99us    7080.61us     29.98us ( +-  26.92% )
>>>     PENDING_INTERRUPT        999     3.13%     0.29%     2.87us       7.12us      4.13us ( +-   0.31% )
>>>                   HLT        662     2.07%    41.16%     2.75us    7956.90us    895.99us ( +-   2.27% )
>>>   TPR_BELOW_THRESHOLD        220     0.69%     0.08%     3.61us      94.16us      5.55us ( +-   7.33% )
>>>                VMCALL        171     0.54%     0.05%     2.30us      58.09us      4.20us ( +-   8.11% )
>>>                 CPUID         24     0.08%     0.00%     2.05us       4.12us      2.84us ( +-   4.05% )
>>>         EPT_MISCONFIG          7     0.02%     0.01%    23.05us      32.87us     25.97us ( +-   4.98% )
>>>
>>> Total Samples:31916, Total events handled time:1441189.06us.
>>>
>>> And perf kvm --host stat live --event=ioport reports:
>>>
>>>
>>> Analyze events for all VMs, all VCPUs:
>>>
>>>        IO Port Access    Samples  Samples%     Time%   Min Time     Max Time     Avg time
>>>
>>>             0x70:POUT      11138    49.69%    85.13%     8.60us     392.56us     30.34us ( +-   0.99% )
>>>              0x71:PIN      11138    49.69%    14.63%     3.80us      58.95us      5.21us ( +-   0.16% )
>>>           0xc010:POUT        110     0.49%     0.12%     2.65us      11.67us      4.15us ( +-   4.13% )
>>>            0x1f0:POUT          6     0.03%     0.06%    13.81us      82.45us     37.23us ( +-  34.19% )
>>>             0x1f7:PIN          4     0.02%     0.01%     4.99us       6.12us      5.62us ( +-   4.36% )
>>> [...]
>>>
>>> On another virtualization host, I also noticed traffic on IO port
>>> 0x608:
>>>
>>> Analyze events for all VMs, all VCPUs:
>>>
>>>        IO Port Access    Samples  Samples%     Time%   Min Time     Max Time     Avg time
>>>
>>>             0x70:POUT       4220    40.00%    90.93%     3.42us    2023.80us     12.61us ( +-   3.97% )
>>>              0x71:PIN       4220    40.00%     5.70%     0.53us       9.23us      0.79us ( +-   0.65% )
>>>             0x608:PIN       2074    19.66%     3.15%     0.55us      10.62us      0.89us ( +-   0.95% )
>>>            0x1f0:POUT          6     0.06%     0.06%     4.64us       8.51us      5.54us ( +-  11.23% )
>>>           0xc070:POUT          5     0.05%     0.05%     4.06us       7.09us      5.49us ( +-  10.68% )
>>>             0x1f7:PIN          4     0.04%     0.00%     0.65us       0.70us      0.68us ( +-   2.10% )
>>>           0xc010:POUT          3     0.03%     0.04%     6.26us       9.15us      7.40us ( +-  11.99% )
>>>
>>> When running the older Windows 10 1709 VMs, the 0x70 and 0x71 IO
>>> port sample counts are around 120-130, with spikes to 500; on
>>> Windows 10 1803, as you can see, they reach 4000 to 11000 samples.
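
Ports 0x70 and 0x71 are the CMOS/RTC index and data ports: the guest first writes a register index to 0x70, then reads or writes the value through 0x71, so every RTC register access costs two trapped I/O instructions, which is why the 0x70 and 0x71 sample counts above match exactly. A minimal Python model of that two-step protocol (the register values are hypothetical, for illustration only) shows the pairing:

```python
# Model of the CMOS index/data port pair (0x70/0x71).
# The register contents below are a hypothetical subset for illustration.
class CmosModel:
    def __init__(self):
        self.regs = {0x00: 30, 0x02: 45, 0x04: 13}  # seconds, minutes, hours
        self.index = 0
        self.io_ops = 0  # each port access is one trapped I/O under KVM

    def outb(self, port, value):
        assert port == 0x70, "register index is selected via port 0x70"
        self.index = value & 0x7F  # bit 7 is the NMI-disable bit
        self.io_ops += 1

    def inb(self, port):
        assert port == 0x71, "register data is read via port 0x71"
        self.io_ops += 1
        return self.regs.get(self.index, 0)

def read_rtc_register(cmos, reg):
    # Every register read is an outb+inb pair: two I/O VM exits.
    cmos.outb(0x70, reg)
    return cmos.inb(0x71)

cmos = CmosModel()
seconds = read_rtc_register(cmos, 0x00)
```

One register read always produces one 0x70:POUT and one 0x71:PIN, so a guest polling the RTC thousands of times shows up as matched sample counts on both ports.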
>>>
>>> Thank you in advance for any help.
>>>
>>> Giovanni
>>>
> 
> 


