From: Gleb Natapov
Subject: Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
Date: Wed, 7 Aug 2013 08:52:23 +0300

On Wed, Aug 07, 2013 at 01:34:41AM +0000, Zhanghaoyu (A) wrote:
> >>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log):
> >>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none
> >>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32
> >>> -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1
> >>> -uuid 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults
> >>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,nowait
> >>> -mon chardev=charmonitor,id=monitor,mode=control
> >>> -rtc base=localtime -no-shutdown
> >>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> >>> -drive file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none
> >>> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> >>> -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=21
> >>> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.0,addr=0x3,bootindex=2
> >>> -netdev tap,fd=22,id=hostnet1,vhost=on,vhostfd=23
> >>> -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.0,addr=0x4
> >>> -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25
> >>> -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.0,addr=0x5
> >>> -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27
> >>> -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.0,addr=0x6
> >>> -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29
> >>> -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.0,addr=0x7
> >>> -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31
> >>> -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.0,addr=0x9
> >>> -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
> >>> -vnc *:0 -k en-us -vga cirrus
> >>> -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff
> >>> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
> >>>
> >>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
> >>
> >This QEMU version is 1.0.0, but I also tested QEMU 1.5.2 and the same
> >problems exist: both the performance degradation and the read-only GFN
> >flooding.
> >I also tried e1000 NICs instead of virtio (with QEMU 1.5.2), and both the
> >performance degradation and the read-only GFN flooding still occur.
> >With either e1000 or virtio NICs, the GFN flooding begins at the
> >post-restore stage (i.e. the running stage): as soon as the restore
> >completes, the flooding starts.
> >
> >Thanks,
> >Zhang Haoyu
> >
> >>--
> >>                    Gleb.
> 
> Should we focus on the first bad commit
> (612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFN
> flooding?
> 
Not really. There is no point in debugging a very old version compiled
with kvm-kmod; there are too many variables in the environment. I cannot
reproduce the GFN flooding on upstream, so the problem may be gone, may
be the result of a kvm-kmod problem, or may come from something different
in how I invoke qemu. The best way to proceed is for you to reproduce it
with the upstream version; then at least I will be sure that we are using
the same code.

> I applied the following patch to __direct_map(),
> @@ -2223,6 +2223,8 @@ static int __direct_map(struct kvm_vcpu
>         int pt_write = 0;
>         gfn_t pseudo_gfn;
> 
> +        map_writable = true;
> +
>         for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>                 if (iterator.level == level) {
>                         unsigned pte_access = ACC_ALL;
> then rebuilt kvm-kmod and re-insmod'ed it.
> After I started a VM, the host became abnormal: many programs could no
> longer start and reported segmentation faults.
> In my opinion, with the above patch applied, commit
> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 should have no effect, but the
> test result proved me wrong.
> Does the way hva_to_pfn() determines the map_writable value affect the
> result?
> 
If hva_to_pfn() returns map_writable == false, it means the page is
mapped read-only in the primary MMU, so it must not be mapped writable
in the secondary MMU either: forcing the writable bit lets the guest
write through to pages (for example unbroken copy-on-write pages) that
the host still considers read-only or shared, which would explain the
host-side segmentation faults you saw. Usually this case should not
happen.
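
To make that contract concrete, here is a minimal user-space sketch of
the intended flow. hva_to_pfn() and make_spte() below are simplified
stand-ins for the real kernel helpers (hva_to_pfn() and mmu_set_spte()
as called from __direct_map()), not the actual implementations:

/*
 * Minimal sketch (not the real kernel code) of the path discussed
 * above: the primary-MMU lookup reports whether the host mapping is
 * writable, and the fault handler must honor that when it installs
 * the shadow/EPT entry.  Names and helpers are simplified.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_PRESENT  (1ULL << 0)
#define SPTE_WRITABLE (1ULL << 1)

/* Stand-in for the primary-MMU lookup (gup in the real code). */
static uint64_t hva_to_pfn(uint64_t hva, bool *map_writable)
{
    /*
     * Pretend the host PTE is read-only, e.g. an unbroken
     * copy-on-write page shared with another process.
     */
    *map_writable = false;
    return hva >> 12;            /* placeholder pfn */
}

/* Stand-in for mmu_set_spte(): build the leaf entry for __direct_map(). */
static uint64_t make_spte(uint64_t pfn, bool map_writable)
{
    uint64_t spte = (pfn << 12) | SPTE_PRESENT;

    /*
     * The writable bit may only be set if the primary MMU agreed.
     * Unconditionally forcing map_writable = true (as the
     * experimental patch above effectively does) lets the guest
     * scribble on pages the host still considers read-only or
     * shared -- corrupting host memory, consistent with the random
     * segfaults reported above.
     */
    if (map_writable)
        spte |= SPTE_WRITABLE;
    return spte;
}

int main(void)
{
    bool map_writable;
    uint64_t pfn = hva_to_pfn(0x7f1234567000ULL, &map_writable);
    uint64_t spte = make_spte(pfn, map_writable);

    printf("spte = 0x%llx, guest-writable = %d\n",
           (unsigned long long)spte, !!(spte & SPTE_WRITABLE));
    return 0;
}

When map_writable is false, the real code instead installs the entry
write-protected and lets subsequent guest writes fault again; a stream
of such faults on read-only gfns is presumably what shows up here as
the read-only GFN flooding after restore.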

--
                        Gleb.


