


From: Zir Blazer
Subject: [Qemu-devel] Possible bug: virtio-scsi + iothread (Former x-data-plane) = "Guest moved index from 0 to 61440" warning
Date: Tue, 5 Apr 2016 19:49:47 -0300

Recently I heard that the experimental x-data-plane feature from virtio-blk is now production-ready and that virtio-scsi also got support for it, so, after finding out what the new syntax is:


...I decided to test it. After all, it was supposed to be a free huge performance boost.


My test setup:

1- A Xen Host running Xen 4.5 with Arch Linux as Dom0 on Kernel 4.0, with three VMs (one main everyday VM, another for some work that I want isolated, and the Nested KVM Host). I intend to replace this host with QEMU-KVM-VFIO after I nail down all the details.
2- A Nested KVM Host running Arch Linux with Kernel 4.4 and QEMU 2.5 (I'm not using libvirt, just standalone QEMU). Since I'm using Nested Virtualization, this host can create KVM-capable VMs.
3- A test VM spawned by the Nested KVM Host. I just use it to boot ArchISO (the Arch Linux LiveCD) and check whether things are working.


The problem (and the reason why I'm sending this mail) is that when I launch QEMU from the command line to create a VM using virtio-scsi-pci with the iothread parameter, it throws a strange "Guest moved index from 0 to 61440" warning. Some googling reveals that this error has appeared for some users in totally different circumstances (like removing non-hotpluggable devices), and that it's actually a fatal error that makes the VM hang, crash, or worse. In my case, however, it appears only at the moment the VM is created, and everything seems to work after the warning. Still, since the device involved is a storage controller, I'm worried about possible data corruption or other major issues.

The bare minimum script that reproduces the warning should be this:

#!/bin/bash

qemu-system-x86_64 \
-m 1024M \
-enable-kvm \
-object iothread,id=iothread0 \
-device virtio-scsi-pci,iothread=iothread0

This produces the "Guest moved index from 0 to 61440" warning. It happens whenever virtio-scsi-pci has the iothread parameter set.
Of note, omitting -enable-kvm from the previous script produces this fatal error instead:

virtio-scsi: VRing setup failed
qemu-system-x86_64: /build/qemu/src/qemu-2.5.0/memory.c:1735: memory_region_del_eventfd: Assertion `i != mr->ioeventfd_nb' failed.

...but since I didn't google whether KVM is required to use virtio-scsi-pci or other VirtIO devices, I don't know if this is expected behavior or a bug, so it may not be important at all.
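For what it's worth, when testing on hosts where nested KVM may not be available, one can pick the accelerator based on whether /dev/kvm exists instead of hardcoding -enable-kvm (just a sketch; the fallback to TCG is my assumption, and note that per the error above, virtio-scsi with iothreads may still fail under TCG):

```shell
# Prefer KVM when /dev/kvm is present; otherwise fall back to software
# emulation (TCG). Whether iothreads work at all under TCG is exactly
# what I'm unsure about.
if [ -e /dev/kvm ]; then
    accel=kvm
else
    accel=tcg
fi
echo "selected accelerator: $accel"
# qemu-system-x86_64 -machine accel=$accel ...
```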

You can even get multiple warnings, one for each virtio-scsi-pci device that uses an iothread. For example...

#!/bin/bash

qemu-system-x86_64 \
-monitor stdio \
-m 1024M \
-enable-kvm \
-object iothread,id=iothread0 \
-object iothread,id=iothread1 \
-object iothread,id=iothread2 \
-device virtio-scsi-pci,iothread=iothread0 \
-device virtio-scsi-pci,iothread=iothread1 \
-device virtio-scsi-pci,iothread=iothread2 \
-device virtio-scsi-pci

This one produces three identical "Guest moved index from 0 to 61440" warnings on the Terminal.
Using info iothreads in the QEMU Monitor does show the three IO Threads. Additional -object iothread,id=iothreadX objects that aren't used, or -device virtio-scsi-pci devices that don't use an iothread, do not produce extra warnings. The warning does NOT happen if I use virtio-blk-pci with an iothread instead.
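As a quick sanity check that the warning count tracks the number of iothread-backed controllers, one can just grep QEMU's captured stderr (the log text below is a stand-in for a captured run, not generated output):

```shell
# Stand-in for stderr captured from the three-controller run above
log='Guest moved index from 0 to 61440
Guest moved index from 0 to 61440
Guest moved index from 0 to 61440'

# One warning per virtio-scsi-pci device that has an iothread attached
count=$(printf '%s\n' "$log" | grep -c 'Guest moved index')
echo "$count"   # prints 3 here
```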

This is a more complete test script booting the Arch Linux ArchISO LiveCD, which is what I was using when I first found the warning:

#!/bin/bash

qemu-system-x86_64 \
-name "Test VM" \
-monitor stdio \
-m 1024M \
-M pc-q35-2.5,accel=kvm \
-nodefaults \
-drive if=none,file=archlinux-2016.03.01-dual.iso,readonly=on,format=raw,id=drive0 \
-drive if=none,file=/dev/vg0/lv0,format=raw,id=drive1 \
-drive if=none,file=filetest1.img,format=raw,id=drive2 \
-drive if=none,file=filetest2.img,format=raw,id=drive3 \
-device ide-cd,drive=drive0 \
-device qxl-vga \
-object iothread,id=iothread0 \
-object iothread,id=iothread1 \
-object iothread,id=iothread2 \
-device virtio-scsi-pci,iothread=iothread0,id=scsi0 \
-device virtio-scsi-pci,iothread=iothread1,id=scsi1 \
-device scsi-hd,bus=scsi0.0,drive=drive1 \
-device scsi-hd,bus=scsi1.0,drive=drive2 \
-device virtio-blk-pci,iothread=iothread2,drive=drive3

Starting the VM produces two warnings on the Terminal. Arch Linux does, however, see the drives sda and sdb with lsblk (plus vda for the VirtIO Block Device), and lspci does show two Virtio SCSI Controllers (plus the VirtIO Block Device). I didn't try to read from or write to them to check whether they work.

I also asked a guy on the QEMU IRC channel, who had an Arch Linux host with a cloned test VM using a virtio-scsi-pci device, to add the -object iothread and the iothread parameter to virtio-scsi-pci. He said that it worked and that he didn't receive any Terminal warning, but that Windows CPU usage was always 100% when he launched with it. I don't know if the VirtIO Windows Drivers may be the reason in his case (sadly, I forgot to ask for the Windows and VirtIO Driver versions). In my VM with ArchISO, using top, CPU usage seems to be under 1%, so at least on Linux it doesn't happen.

Basically, I would like someone to confirm whether the warning happens to anyone else, why it happens, and whether it's something to worry about that should be fixed (possible data corruption), or whether I can simply ignore it. Maybe it's related to Nested Virtualization and doesn't happen on bare metal...


As an additional question, I found some somewhat related info here...


...that seems to say that if I want to use iothreads with virtio-scsi, since you can't point a scsi-hd at an iothread, you should place each scsi-hd on an exclusive virtio-scsi-pci controller, each with its own iothread. So if I have 10 scsi-hd devices, I should have 10 virtio-scsi-pci controllers with 10 iothread objects. Is this required, or just recommended? I at least tried a single virtio-scsi-pci with an iothread and two scsi-hd devices attached to it, and aside from the warning, they showed up in lsblk as expected. Wasn't one of the goals of virtio-scsi to allow consolidating multiple drives on a single PCI Device Slot, instead of requiring one per drive like virtio-blk? It seems like using iothreads hurts that goal a bit.
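If the one-controller-per-disk layout really is the recommended one, the command line gets verbose quickly; a loop can generate the repeated -object/-device triples (a sketch with made-up drive ids; whether this layout is required rather than just recommended is exactly my question):

```shell
#!/bin/bash

# Hypothetical drive ids, one per scsi-hd; each disk gets its own
# virtio-scsi-pci controller bound to a dedicated iothread.
drives=(drive1 drive2 drive3)

args=()
for i in "${!drives[@]}"; do
    args+=( -object "iothread,id=iothread$i" )
    args+=( -device "virtio-scsi-pci,iothread=iothread$i,id=scsi$i" )
    args+=( -device "scsi-hd,bus=scsi$i.0,drive=${drives[$i]}" )
done

# Print the generated arguments; in a real script you would append
# them to the qemu-system-x86_64 invocation as "${args[@]}"
printf '%s\n' "${args[@]}"
```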

Is there any other feature or performance reason to use virtio-scsi, or should I be like the other home users and just stick to virtio-blk?


Finally, does using iothreads add anything at all to the QEMU Monitor qtree? I also tested having two virtio-scsi-pci devices, one with an iothread and the other without, and I can't find any parameter in info qtree that tells you whether a device is using an iothread.
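From what I can tell, info qtree doesn't print the binding, but if the iothread is exposed as a QOM link property on the controller, QMP's qom-get should be able to read it (just a sketch; the device id scsi0 is from my scripts above, the socket path is an assumption, and I haven't verified the property name on 2.5 specifically):

```shell
# QMP command asking for the "iothread" link property of the controller
# with id=scsi0; on a bound device this should return the iothread's QOM path.
qmp='{"execute":"qom-get","arguments":{"path":"/machine/peripheral/scsi0","property":"iothread"}}'
echo "$qmp"
# To actually send it, start the VM with e.g.
#   -qmp unix:/tmp/qmp.sock,server,nowait
# then (after the {"execute":"qmp_capabilities"} handshake) pipe it with socat:
#   echo "$qmp" | socat - UNIX-CONNECT:/tmp/qmp.sock
```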
