qemu-devel

[Bug 1889945] Re: virtiofsd exits when iommu_platform is enabled after virtiofs driver is loaded


From: Launchpad Bug Tracker
Subject: [Bug 1889945] Re: virtiofsd exits when iommu_platform is enabled after virtiofs driver is loaded
Date: Wed, 07 Jul 2021 04:17:21 -0000

[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
       Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of qemu-devel-ml,
which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1889945

Title:
  virtiofsd exits when iommu_platform is enabled after virtiofs driver
  is loaded

Status in QEMU:
  Expired

Bug description:
  Bug in QEMU 5.0.0:

  virtiofsd exits when iommu_platform is enabled after the virtiofs driver is loaded.
  If iommu_platform is disabled, the guest immediately locks up as a result of the configured PCIe passthrough.
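
  The relevant pieces, extracted from the full command line below, are the
  externally started virtiofsd and the vhost-user-fs-pci device with
  iommu_platform=on. A minimal sketch of that configuration for reproducing by
  hand (paths, tag and options taken from the log; libvirt normally hands
  virtiofsd a pre-opened fd via --fd, so a --socket-path is used here instead,
  and the bus addresses are left to QEMU):

    # virtiofsd side
    /usr/lib/qemu/virtiofsd --socket-path=/tmp/fs0-fs.sock -o source=/viofstest

    # QEMU side: vhost-user needs a shared memory backend, then the fs device
    qemu-system-x86_64 \
      -object memory-backend-file,id=ram-node0,mem-path=/dev/hugepages,share=yes,size=2G \
      -numa node,memdev=ram-node0 \
      -chardev socket,id=chr-vu-fs0,path=/tmp/fs0-fs.sock \
      -device vhost-user-fs-pci,chardev=chr-vu-fs0,tag=viofstest,iommu_platform=on,ats=on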

  Host system:
  - Arch Linux amd64
  - AMD Ryzen Platform
  - QEMU 5.0.0

  Guest system:
  - Windows Server 2019 (also happens in Linux installations)
  - PCIe GPU hostdev
  - virtiofs passthrough

  Many thanks for any advice.

  QEMU LOG:
  2020-07-28 19:20:07.197+0000: Starting external device: virtiofsd /usr/lib/qemu/virtiofsd --fd=29 -o source=/viofstest
  2020-07-28 19:20:07.207+0000: starting up libvirt version: 6.5.0, qemu version: 5.0.0, kernel: 5.7.10-arch1-1, hostname: mspc
  LC_ALL=C \
  PATH=/usr/local/sbin:/usr/local/bin:/usr/bin \
  HOME=/var/lib/libvirt/qemu/domain-7-win \
  XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-7-win/.local/share \
  XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-7-win/.cache \
  XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-7-win/.config \
  QEMU_AUDIO_DRV=none \
  /usr/bin/qemu-system-x86_64 \
  -name guest=win,debug-threads=on \
  -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-7-win/master-key.aes \
  -blockdev '{"driver":"file","filename":"/usr/share/ovmf/x64/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
  -blockdev '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/win_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
  -machine pc-q35-5.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off,kernel_irqchip=on,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
  -cpu host,migratable=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=whatever,kvm=off \
  -m 2048 \
  -overcommit mem-lock=off \
  -smp 8,sockets=8,cores=1,threads=1 \
  -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/7-win,share=yes,size=2147483648 \
  -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \
  -uuid c8efa194-52f8-4526-a0f8-29a254839b55 \
  -display none \
  -no-user-config \
  -nodefaults \
  -chardev socket,id=charmonitor,fd=29,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=localtime,driftfix=slew \
  -global kvm-pit.lost_tick_policy=delay \
  -no-hpet \
  -no-shutdown \
  -global ICH9-LPC.disable_s3=1 \
  -global ICH9-LPC.disable_s4=1 \
  -boot menu=off,strict=on \
  -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
  -device pcie-pci-bridge,id=pci.2,bus=pci.1,addr=0x0 \
  -device pcie-root-port,port=0x11,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x1 \
  -device pcie-root-port,port=0x12,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x2 \
  -device pcie-root-port,port=0x13,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x3 \
  -device pcie-root-port,port=0x14,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x4 \
  -device pcie-root-port,port=0x15,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x5 \
  -device pcie-root-port,port=0x16,chassis=8,id=pci.8,bus=pcie.0,addr=0x2.0x6 \
  -device pcie-root-port,port=0x17,chassis=9,id=pci.9,bus=pcie.0,addr=0x2.0x7 \
  -device pcie-root-port,port=0x18,chassis=10,id=pci.10,bus=pcie.0,multifunction=on,addr=0x3 \
  -device pcie-root-port,port=0x19,chassis=11,id=pci.11,bus=pcie.0,addr=0x3.0x1 \
  -device pcie-root-port,port=0x1a,chassis=12,id=pci.12,bus=pcie.0,addr=0x3.0x2 \
  -device pcie-root-port,port=0x8,chassis=13,id=pci.13,bus=pcie.0,multifunction=on,addr=0x1 \
  -device pcie-root-port,port=0x9,chassis=14,id=pci.14,bus=pcie.0,addr=0x1.0x1 \
  -device pcie-root-port,port=0xa,chassis=15,id=pci.15,bus=pcie.0,addr=0x1.0x2 \
  -device pcie-root-port,port=0xb,chassis=16,id=pci.16,bus=pcie.0,addr=0x1.0x3 \
  -device nec-usb-xhci,id=usb,bus=pci.7,addr=0x0 \
  -device virtio-serial-pci,id=virtio-serial0,bus=pci.14,addr=0x0 \
  -blockdev '{"driver":"host_device","filename":"/dev/zvol/ssd/windows","aio":"threads","node-name":"libvirt-3-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
  -device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-3-format,id=virtio-disk0,bootindex=1,write-cache=on \
  -blockdev '{"driver":"host_device","filename":"/dev/zvol/ssd/windows-ssdgames1","aio":"threads","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
  -device virtio-blk-pci,bus=pci.9,addr=0x0,drive=libvirt-2-format,id=virtio-disk1,write-cache=on \
  -blockdev '{"driver":"host_device","filename":"/dev/zvol/hdd/win-games1","aio":"threads","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
  -device virtio-blk-pci,bus=pci.13,addr=0x0,drive=libvirt-1-format,id=virtio-disk2,write-cache=on \
  -chardev socket,id=chr-vu-fs0,path=/var/lib/libvirt/qemu/domain-7-win/fs0-fs.sock \
  -device vhost-user-fs-pci,chardev=chr-vu-fs0,tag=viofstest,iommu_platform=on,ats=on,bus=pci.15,addr=0x0 \
  -netdev tap,fd=32,id=hostnet0,vhost=on,vhostfd=34 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:fb:0c:28,bus=pci.10,addr=0x0 \
  -chardev spicevmc,id=charchannel0,name=vdagent \
  -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 \
  -device virtio-keyboard-pci,id=input0,bus=pci.12,addr=0x0 \
  -device virtio-tablet-pci,id=input1,bus=pci.8,addr=0x0 \
  -device virtio-mouse-pci,id=input2,bus=pci.11,addr=0x0 \
  -device ich9-intel-hda,id=sound0,bus=pci.2,addr=0x1 \
  -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 \
  -device vfio-pci,host=0000:08:00.0,id=hostdev0,bus=pci.5,addr=0x0,rombar=1 \
  -device vfio-pci,host=0000:08:00.1,id=hostdev1,bus=pci.6,addr=0x0,rombar=1 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \
  -object input-linux,id=kbd1,evdev=/dev/input/by-path/pci-0000:0a:00.3-usb-0:3:1.0-event-kbd,grab_all=on,repeat=on \
  -object input-linux,id=mouse1,evdev=/dev/input/by-path/pci-0000:0a:00.3-usb-0:4:1.0-event-mouse \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -msg timestamp=on
  2020-07-28 19:20:07.207+0000: Domain id=7 is tainted: high-privileges
  2020-07-28 19:20:07.207+0000: Domain id=7 is tainted: custom-argv
  2020-07-28 19:20:07.207+0000: Domain id=7 is tainted: host-cpu
  <--- VIOFS DRIVER GETS LOADED HERE --->
  2020-07-28T19:20:57.568089Z qemu-system-x86_64: Failed to read msg header. Read -1 instead of 12. Original request 1566376224.
  2020-07-28T19:20:57.568120Z qemu-system-x86_64: Fail to update device iotlb
  2020-07-28T19:20:57.568147Z qemu-system-x86_64: Failed to read msg header. Read 0 instead of 12. Original request 1566376528.
  2020-07-28T19:20:57.568151Z qemu-system-x86_64: Fail to update device iotlb
  2020-07-28T19:20:57.568153Z qemu-system-x86_64: Failed to set msg fds.
  2020-07-28T19:20:57.568156Z qemu-system-x86_64: vhost_set_vring_call failed: Invalid argument (22)
  2020-07-28T19:20:57.568160Z qemu-system-x86_64: Failed to set msg fds.
  2020-07-28T19:20:57.568162Z qemu-system-x86_64: vhost_set_vring_call failed: Invalid argument (22)
  2020-07-28T19:20:57.568296Z qemu-system-x86_64: Failed to read from slave.
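
  A note on the messages above: vhost-user messages start with a fixed 12-byte
  header, so "Failed to read msg header. Read -1 instead of 12" (and "Read 0")
  means QEMU could no longer read anything from the virtiofsd socket, i.e. the
  daemon side had already gone away; the follow-up iotlb and vring_call errors
  are fallout from the dead connection. A quick way to confirm that virtiofsd
  itself exited (the log path follows libvirt's usual naming and is an
  assumption, adjust as needed):

    # is the virtiofsd process for this guest still running?
    pgrep -a virtiofsd

    # per-device virtiofsd log kept by libvirt (hypothetical path)
    tail /var/log/libvirt/qemu/win-fs0-virtiofsd.log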

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1889945/+subscriptions


