[Bug 1867519] Re: qemu 4.2 segfaults on VF detach


From: Christian Ehrhardt 
Subject: [Bug 1867519] Re: qemu 4.2 segfaults on VF detach
Date: Thu, 19 Mar 2020 09:01:47 -0000

Before a release I regularly pull in fixes that were posted for qemu-stable.
This is one of them; I'll do such a build again and retest this issue with it.

I identified and backported 33 patches (only one needed modification).
But as usual there might be some context needed on top - I built that
overnight in [1].
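
For reference, retesting with that build boils down to something like the
following; the ppa: shorthand is derived from the archive URL in [1], and
which qemu binary packages need upgrading depends on the setup:

  # enable the overnight test PPA and pull the rebuilt qemu from it
  sudo add-apt-repository ppa:ci-train-ppa-service/3981
  sudo apt update
  sudo apt install qemu-system-x86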

Testing that on my reproducer:

Attach-host:
[84652.671123] vfio-pci 0000:08:00.2: enabling device (0000 -> 0002)

Attach-guest:
[   45.199920] pci 0000:00:08.0: [15b3:1016] type 00 class 0x020000
[   45.200374] pci 0000:00:08.0: reg 0x10: [mem 0x00000000-0x000fffff 64bit pref]
[   45.201358] pci 0000:00:08.0: enabling Extended Tags
[   45.202726] pci 0000:00:08.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0000:00:08.0 (capable of 63.008 Gb/s with 8 GT/s x8 link)
[   45.208316] pci 0000:00:08.0: BAR 0: assigned [mem 0x100000000-0x1000fffff 64bit pref]
[   45.256566] mlx5_core 0000:00:08.0: enabling device (0000 -> 0002)
[   45.262103] mlx5_core 0000:00:08.0: firmware version: 14.27.1016
[   45.544010] mlx5_core 0000:00:08.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[   45.710123] mlx5_core 0000:00:08.0 ens8: renamed from eth0
[   60.992547] random: crng init done
[   60.992552] random: 3 urandom warning(s) missed due to ratelimiting

Detach-host:
[84926.767411] mlx5_core 0000:08:00.2: enabling device (0000 -> 0002)
[84926.767514] mlx5_core 0000:08:00.2: firmware version: 14.27.1016
[84927.036146] mlx5_core 0000:08:00.2: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[84927.208523] mlx5_core 0000:08:00.2 ens1v1: renamed from eth0

Detach-guest:
<nothing>
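
For context, the four captures above come from a cycle along these lines
(a sketch - the domain name is a placeholder and the XML file name is
taken from the bug description below; dmesg is checked on the host and in
the guest respectively):

  # attach the VF, check host/guest dmesg, then detach it again
  virsh attach-device <VM-domain-name> /tmp/vf_interface_attached.xml --live
  dmesg | tail    # "Attach-host" on the host, "Attach-guest" in the guest
  virsh detach-device <VM-domain-name> /tmp/vf_interface_attached.xml --live
  dmesg | tail    # "Detach-host": the VF rebinds to mlx5_core on the host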


So yes, these changes fix the issue here (and a bunch of others).
I'll open up an MP (merge proposal) to get these changes into Ubuntu 20.04.

[1]: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/3981

https://bugs.launchpad.net/bugs/1867519

Title:
  qemu 4.2 segfaults on VF detach

Status in QEMU:
  Fix Committed
Status in qemu package in Ubuntu:
  Confirmed

Bug description:
  After updating Ubuntu 20.04 to the Beta version, we get the following
  error and the virtual machine gets stuck when detaching PCI devices using
  the virsh command:

  Error:
  error: Failed to detach device from /tmp/vf_interface_attached.xml
  error: internal error: End of file from qemu monitor

  Steps to reproduce:
   1. create a VM on Ubuntu 20.04 (5.4.0-14-generic)
   2. attach a PCI device to this VM (a Mellanox VF, for example; see the
      sketch after these steps)
   3. try to detach the PCI device using the virsh command:
     a. create a PCI interface XML file:

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address type='pci' domain='0x0000' bus='0x11' slot='0x00' function='0x2'/>
          </source>
        </hostdev>

     b. #virsh detach-device <VM-domain-name> <pci interface xml file>
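
  Step 2 above can be done with the same XML file; a sketch, reusing the
  file name from the error message:

     #virsh attach-device <VM-domain-name> /tmp/vf_interface_attached.xml --live

  (managed='yes' lets libvirt rebind the VF between the host driver and
  vfio-pci automatically on attach and detach)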


  - Ubuntu release:
    Description:    Ubuntu Focal Fossa (development branch)
    Release:        20.04

  - Package version:
    libvirt0:
    Installed: 6.0.0-0ubuntu3
    Candidate: 6.0.0-0ubuntu5
    Version table:
       6.0.0-0ubuntu5 500
          500 http://il.archive.ubuntu.com/ubuntu focal/main amd64 Packages
   *** 6.0.0-0ubuntu3 100
          100 /var/lib/dpkg/status
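
    For reference, the table above matches the output format of:

       apt-cache policy libvirt0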

  - What you expected to happen: 
    PCI device detached without any errors.

  - What happened instead:
    Getting the errors above, and the VM gets stuck.

  additional info:
  After downgrading the libvirt0 package and all the dependent packages to
  5.4 (the previous version), the issue seems to disappear.
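
  A sketch of that downgrade, with the exact old version string left as a
  placeholder since it is not recorded here:

     sudo apt install --allow-downgrades libvirt0=<previous-version>

  (plus the matching versions of the dependent libvirt packages)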



