From: Andrea Bolognani
Subject: Re: [Qemu-devel] [PATCH v5 0/5] Connect a PCIe host and graphics support to RISC-V
Date: Wed, 10 Oct 2018 15:43:19 +0200

On Wed, 2018-10-10 at 13:11 +0000, Stephen Bates wrote:
> I also tried these out but I was interested in seeing if I could create NVMe 
> models inside the new PCIe subsystem (for both the virt and sifive_u 
> machines). The sifive_u machine did not work at all (so I'll leave that one 
> for now). The virt machine successfully mapped in the NVMe devices and the OS 
> driver was able to probe the nvme driver against them. However something 
> seems to be broken with interrupts as I see messages like these in the OS 
> dmesg:
> 
> [   62.852000] nvme nvme0: I/O 856 QID 1 timeout, completion polled
> [   64.832000] nvme nvme1: I/O 819 QID 1 timeout, completion polled
> [   64.836000] nvme nvme1: I/O 820 QID 1 timeout, completion polled
> [   64.840000] nvme nvme1: I/O 821 QID 1 timeout, completion polled
> [   64.844000] nvme nvme1: I/O 822 QID 1 timeout, completion polled
> [   64.848000] nvme nvme0: I/O 856 QID 1 timeout, completion polled
> [   64.852000] nvme nvme0: I/O 857 QID 1 timeout, completion polled
> 
> These imply the driver hit an I/O queue timeout (QID 1), but when it polled 
> the NVMe completion queue it found the commands had already completed; no 
> interrupt was ever seen by the OS.
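
For reference, the kind of invocation described above would look something
like this; the image path, the serial string, and passing bbl via -kernel
are placeholder assumptions rather than the poster's actual command line:

    # hypothetical example: backing image and serial are placeholders
    qemu-system-riscv64 -machine virt -nographic \
        -kernel bbl \
        -drive file=nvme.img,format=raw,if=none,id=nvm0 \
        -device nvme,serial=nvme0,drive=nvm0

If interrupt delivery is the problem, the counters for the nvme lines in
the guest's /proc/interrupts should stay at or near zero while I/O still
completes, matching the polled-completion messages above.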

So it looks like you at least got to the point where the guest OS
could enumerate PCIe devices... Can you share the output of 'lspci'
as well as the configuration you used when building your bbl?
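
For anyone reproducing this, a typical bbl build with the kernel as the
payload looks roughly like the following; the toolchain triplet and the
paths are assumptions, not the poster's actual configuration:

    # build bbl from riscv-pk, embedding the Linux image as the payload
    git clone https://github.com/riscv/riscv-pk
    mkdir riscv-pk/build && cd riscv-pk/build
    ../configure --host=riscv64-unknown-elf --with-payload=/path/to/vmlinux
    make

    # inside the booted guest, the requested device listing:
    lspci -vv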

> I plan to try an e1000 network interface model tomorrow as well and see 
> how that behaves...

Please do :)
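
Should that test happen, wiring up an e1000 on the same machine would
presumably be along these lines (a sketch; the user-mode netdev backend is
an assumption, chosen only because it needs no host setup):

    # hypothetical example: slirp backend picked for simplicity
    qemu-system-riscv64 -machine virt -nographic \
        -kernel bbl \
        -netdev user,id=net0 \
        -device e1000,netdev=net0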

-- 
Andrea Bolognani / Red Hat / Virtualization