Re: [Qemu-discuss] Supported hypervisors running VMs in nested VM


From: Bandan Das
Subject: Re: [Qemu-discuss] Supported hypervisors running VMs in nested VM
Date: Mon, 05 Oct 2015 15:18:26 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)

Rain Maker <address@hidden> writes:

> Thanks Bandan.
>
> That helped a bit. It got me to the next hurdle, as you suspected.
>
> I modified the virsh XML so that -cpu host,+vmx,-hypervisor is passed,
> and the installation now reports "Hyper-V cannot be installed because
> virtualization support is not enabled in the BIOS.".

Thanks for trying this out.
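For reference, the same flag combination can also be requested in the
domain XML itself instead of patching the generated command line; a
minimal sketch, assuming a libvirt recent enough to accept <feature>
overrides together with host passthrough:

  <cpu mode='host-passthrough'>
    <feature policy='require' name='vmx'/>
    <feature policy='disable' name='hypervisor'/>
  </cpu>

Older libvirt may reject <feature> under host-passthrough; in that
case the flags have to go in through a <qemu:commandline> passthrough
instead.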

> I am sure that vmx is passed, but "systeminfo" still reports "Hyper-V
> cannot be installed because virtualization support is not enabled in
> the BIOS."
>
> Apparently, Microsoft queries the BIOS to verify that the

When kvm initializes, it checks that both TXT and VMX are enabled, and
it does so only if the feature control MSR is locked. I don't think
there are actually any specific "BIOS calls" to find this out. I would
assume Hyper-V does the same thing, but your testing says otherwise.
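For reference, that state is visible from Linux on the same hardware
with msr-tools; a minimal sketch (in IA32_FEATURE_CONTROL, bit 0 is
the lock bit, bit 1 enables VMX inside SMX/TXT, bit 2 enables VMX
outside SMX):

  # modprobe msr
  # rdmsr -f 2:0 0x3a
  5     <- example output: locked (bit 0) + VMX outside SMX (bit 2)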

Can you please run linux as L1 with "-hypervisor" and see if it works?
If it doesn't, please check dmesg for relevant messages.
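Something along these lines should do; the disk image name is just a
placeholder:

  qemu-system-x86_64 -enable-kvm -m 2048 \
      -cpu host,+vmx,-hypervisor \
      -drive file=linux-l1.qcow2,format=qcow2

  # then, inside the L1 guest:
  grep -c vmx /proc/cpuinfo     # should be non-zero
  modprobe kvm_intel
  dmesg | tail                  # look for kvm/VMX complaints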

Thanks,
Bandan


> virtualization bit is actually enabled, instead of simply relying on
> the VMX flag.
> Unfortunately, VMs are still not starting either. The seabios in Qemu
> seems to be pretty difficult to modify. I'll check whether I can
> reinstall on UEFI; maybe that will make a difference.
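Side note: switching to UEFI shouldn't require touching seabios at
all; Qemu can boot an OVMF image directly. A sketch, assuming the
usual distro location of the firmware and a local writable copy of the
variable store:

  qemu-system-x86_64 -enable-kvm \
      -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=./OVMF_VARS.fd \
      ...

With libvirt, the same is expressed with the <loader> and <nvram>
elements under <os>.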
>
> The way VMWare does this is actually semi-documented (it hasn't always
> been in the product, and a workaround involving manually editing the
> configuration has been used for a long time). I'll see if I can
> map these onto Qemu options, to see whether the same tricks get this
> working on Qemu.
>
> 1. Set vhv.enable = "TRUE" on the VM
>   It "enables virtual hardware virtualization", i.e. it exposes VT-x
> to the guest. This seems equivalent to the +vmx flag (together with
> nested=1 on the kvm_intel side) rather than to -hypervisor.
>
> 2. Set monitor.virtual_exec = "hardware" on the VM.
>   This option seems to force hardware virtualization for both CPU and
> MMU. Unsure whether there's an equivalent Qemu configuration option.
> Unsure whether it's needed on Qemu. Details at
> http://www.vmware.com/files/pdf/perf-vsphere-monitor_modes.pdf
>
> 3. Set hypervisor.cpuid.v0 = "FALSE" in the VM configuration
>   This hides the hypervisor CPUID bit from the guest, which seems
> equivalent to the -hypervisor flag.
>
> 4. Enable the option to "Virtualize VT-x/EPT or AMD/RVI"
>   I have not found any option to explicitly do this in Qemu. Looking
> at my Ubuntu VM, the "ept" flag IS passed to the VM, so this should be
> OK.
>
> 5. Add the following CPU mask on Level 1, register ECX:
>    ---- ---- ---- ---- ---- ---- --H- ----
>   Not sure how to do that in Qemu. Looking at
> https://en.wikipedia.org/wiki/CPUID, the H falls on bit 5 of ECX,
> which for leaf 1 is the VMX flag, so the mask appears to pass the
> host's VMX bit through to the guest. For fun, I also tried -cpu
> ...,-xsave (XSAVE is ECX bit 26), but it did not seem to make any
> difference whatsoever.
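FWIW, pulling these together on the Qemu side, the closest equivalent
I can come up with (a sketch, not something verified against Hyper-V
here) would be:

  # on the L0 host: make sure nested VMX is enabled
  cat /sys/module/kvm_intel/parameters/nested   # should print Y (or 1)

  # items 1/5 ~ +vmx, item 3 ~ -hypervisor:
  qemu-system-x86_64 -enable-kvm -cpu host,+vmx,-hypervisor ...

Item 2 has no direct Qemu equivalent that I know of; with kvm, guest
code always runs under hardware virtualization. Item 4 corresponds to
the kvm_intel "nested" parameter checked above.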
>
> Sincerely,
> Roel Brook
>
>
>
> 2015-10-04 5:07 GMT+02:00 Bandan Das <address@hidden>:
>> ...
>>> Windows 2012 / 2016 technical preview 3
>>> --------------------------------------------------------
>>> The installation via the "default" method of Add/Remove Features does
>>> not work. Hyper-V displays the error message "A hypervisor is already
>>> running".
>>>
>>> This check can be skipped by using a different method of installation
>>> (from PowerShell):
>>> Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V
>>> -All -NoRestart
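(The resulting feature state can be double-checked afterwards from an
elevated PowerShell with the matching query cmdlet:

  Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V

which reports State : Enabled / Disabled.)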
>>>
>>> This results in (again) the server booting up, but being unable to run
>>> any guest VMs. The error message is less clear than that in 2008; just
>>> "The Virtual Machine Management Service failed to start the virtual
>>> machine 'New Virtual Machine' because one of the Hyper-V components is
>>> not running (Virtual machine ID
>>> 0C063B29-249A-41C8-8A5B-6D4D2E37EF7C)."
>>> is what I could find.
>>>
>>> Other
>>> --------
>>> Just to verify that "nesting" is actually working, I've also installed
>>> an Ubuntu 15.10 VM and installed Qemu on it.
>>> This combination CAN successfully run a VM.
>>>
>>> I've also installed VirtualBox on one of the Windows VMs. This
>>> VirtualBox instance is also capable of running virtual machines.
>>> According to the icon in the bottom right, VirtualBox IS using the
>>> hardware virtualization.
>>>
>>> Is this a problem specific to Hyper-V? Is there a method to get
>>
>> Nesting a Hyper-V L1 hypervisor is largely untested. But one of the problems
>> I recollect is that Hyper-V doesn't like running in a virtualized
>> environment. It checks the "hypervisor" feature flag that Qemu exports. You
>> could try running qemu with "-cpu host,-hypervisor" or something similar
>> and see if it makes any difference. I suspect there would be other
>> roadblocks though; this is just one of them.
>>
>>
>>
>>> Hyper-V working, including running guests? I know for a fact that
>>> VMWare Workstation / ESX is able to run Hyper-V fully, so it should
>>
>> Yes, IIRC one of the things ESX does is hide the hypervisor flag 
>> specifically for Hyper-V.
>>
>> Bandan
>>
>>> not be completely impossible (but I dislike VMWare for different
>>> reasons).
>>>
>>> My Qemu command line (generated by virt-manager) is below. Except for
>>> disks and domain names, the command lines of all the VMs are identical:
>>>
>>> qemu-system-x86_64 -enable-kvm -name Windows_2008_R2 -S -machine
>>> pc-i440fx-vivid,accel=kvm,usb=off -cpu
>>> SandyBridge,+invtsc,+osxsave,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme
>>> -m 2048 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid
>>> 54a8f3a3-66c2-45a5-a280-ecf7019a67fa -no-user-config -nodefaults
>>> -chardev 
>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/Windows_2008_R2.monitor,server,nowait
>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=localtime,driftfix=slew -global kvm-pit.lost_tick_policy=discard
>>> -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global
>>> PIIX4_PM.disable_s4=1 -boot strict=on -device
>>> ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x6.0x7 -device
>>> ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6
>>> -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x6.0x1
>>> -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x6.0x2
>>> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
>>> file=/sub/kvm/Windows_2008_R2.qcow2,if=none,id=drive-ide0-0-0,format=qcow2,cache=unsafe,aio=threads
>>> -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1
>>> -drive 
>>> file=/sub/ISO/en_windows_server_2008_r2_with_sp1_x64_dvd_617601.iso,if=none,id=drive-ide0-0-1,readonly=on,format=raw
>>> -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1
>>> -netdev tap,fd=24,id=hostnet0 -device
>>> rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:7b:d7:d2,bus=pci.0,addr=0x3
>>> -chardev pty,id=charserial0 -device
>>> isa-serial,chardev=charserial0,id=serial0 -chardev
>>> spicevmc,id=charchannel0,name=vdagent -device
>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0
>>> -device usb-tablet,id=input0 -spice
>>> port=5903,addr=127.0.0.1,disable-ticketing,seamless-migration=on
>>> -device 
>>> qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16,bus=pci.0,addr=0x2
>>> -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device
>>> hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev
>>> spicevmc,id=charredir0,name=usbredir -device
>>> usb-redir,chardev=charredir0,id=redir0 -chardev
>>> spicevmc,id=charredir1,name=usbredir -device
>>> usb-redir,chardev=charredir1,id=redir1 -chardev
>>> spicevmc,id=charredir2,name=usbredir -device
>>> usb-redir,chardev=charredir2,id=redir2 -chardev
>>> spicevmc,id=charredir3,name=usbredir -device
>>> usb-redir,chardev=charredir3,id=redir3 -device
>>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
>>>
>>> Thank you in advance for your response.
>>>
>>> Sincerely,
>>> Roel Brook
>>>
>>


