
Re: completion timeouts with pin-based interrupts in QEMU hw/nvme


From: Alistair Francis
Subject: Re: completion timeouts with pin-based interrupts in QEMU hw/nvme
Date: Thu, 19 Jan 2023 10:41:42 +1000

On Thu, Jan 19, 2023 at 9:07 AM Keith Busch <kbusch@kernel.org> wrote:
>
> On Wed, Jan 18, 2023 at 09:33:05AM -0700, Keith Busch wrote:
> > On Wed, Jan 18, 2023 at 03:04:06PM +0000, Peter Maydell wrote:
> > > On Tue, 17 Jan 2023 at 19:21, Guenter Roeck <linux@roeck-us.net> wrote:
> > > > Anyway - any idea what to do to help figuring out what is happening ?
> > > > Add tracing support to pci interrupt handling, maybe ?
> > >
> > > For intermittent bugs, I like recording the QEMU session under
> > > rr (using its chaos mode to provoke the failure if necessary) to
> > > get a recording that I can debug and re-debug at leisure. Usually
> > > you want to turn on/add tracing to help with this, and if the
> > > failure doesn't hit early in bootup then you might need to
> > > do a QEMU snapshot just before point-of-failure so you can
> > > run rr only on the short snapshot-to-failure segment.
> > >
> > > https://translatedcode.wordpress.com/2015/05/30/tricks-for-debugging-qemu-rr/
> > > https://translatedcode.wordpress.com/2015/07/06/tricks-for-debugging-qemu-savevm-snapshots/
> > >
> > > This gives you a debugging session from the QEMU side's perspective,
> > > of course -- assuming you know what the hardware is supposed to do
> > > you hopefully wind up with either "the guest software did X,Y,Z
> > > and we incorrectly did A" or else "the guest software did X,Y,Z,
> > > the spec says A is the right/a permitted thing but the guest got 
> > > confused".
> > > If it's the latter then you have to look at the guest as a separate
> > > code analysis/debug problem.
> >
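[For reference, an rr session of the kind Peter describes above might be
started with something like the following. The exact QEMU command line is
only an illustrative guess for an nvme-on-riscv setup, not commands taken
from this thread:

  # Record the QEMU run, using rr's chaos mode to perturb scheduling and
  # make an intermittent failure more likely to reproduce.
  rr record --chaos qemu-system-riscv64 -M virt -nographic \
      -drive file=nvme.img,if=none,id=d0 \
      -device nvme,drive=d0,serial=deadbeef \
      -trace "pci_nvme*"

  # Replay (and re-replay) the exact same execution under gdb.
  rr replay
]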
> > Here's what I got, though I'm way out of my depth here.
> >
> > It looks like Linux kernel's fasteoi for RISC-V's PLIC claims the
> > interrupt after its first handling, which I think is expected. After
> > claiming, QEMU masks the pending interrupt, lowering the level, though
> > the device that raised it never deasserted.
>
> I'm not sure if this is correct, but this is what I'm coming up with and
> appears to fix the problem on my setup. The hardware that sets the
> pending interrupt is going to clear it, so I don't see why the interrupt
> controller is automatically clearing it when the host claims it.
>
> ---
> diff --git a/hw/intc/sifive_plic.c b/hw/intc/sifive_plic.c
> index c2dfacf028..f8f7af08dc 100644
> --- a/hw/intc/sifive_plic.c
> +++ b/hw/intc/sifive_plic.c
> @@ -157,7 +157,6 @@ static uint64_t sifive_plic_read(void *opaque, hwaddr addr, unsigned size)
>              uint32_t max_irq = sifive_plic_claimed(plic, addrid);
>
>              if (max_irq) {
> -                sifive_plic_set_pending(plic, max_irq, false);
>                  sifive_plic_set_claimed(plic, max_irq, true);
>              }
>

This change isn't correct. The PLIC spec
(https://github.com/riscv/riscv-plic-spec/releases/download/1.0.0_rc5/riscv-plic-1.0.0_rc5.pdf)
states:

"""
On receiving a claim message, the PLIC core will atomically determine
the ID of the highest-priority pending interrupt for the target and
then clear down the corresponding source’s IP bit
"""

which is what we are doing here: we are clearing the pending interrupt
inside the PLIC.
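
[To make that concrete, here is a rough, self-contained model of the claim
behaviour described in the quoted spec text. All names here (toy_plic,
toy_plic_claim, ...) are invented for illustration; this is not the actual
QEMU sifive_plic code:

/* Toy model of the PLIC claim step: pick the highest-priority pending and
 * enabled source, clear its IP bit, and return its ID to the hart. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_NUM_SOURCES 8

typedef struct {
    bool     pending[TOY_NUM_SOURCES];   /* per-source IP bits */
    bool     enabled[TOY_NUM_SOURCES];   /* per-target enable bits */
    uint32_t priority[TOY_NUM_SOURCES];  /* 0 means "never interrupts" */
} toy_plic;

static uint32_t toy_plic_claim(toy_plic *p)
{
    uint32_t best = 0, best_prio = 0;

    /* IRQ 0 is reserved, so start scanning at 1. */
    for (uint32_t irq = 1; irq < TOY_NUM_SOURCES; irq++) {
        if (p->pending[irq] && p->enabled[irq] &&
            p->priority[irq] > best_prio) {
            best = irq;
            best_prio = p->priority[irq];
        }
    }
    if (best) {
        p->pending[best] = false;   /* clear down the source's IP bit */
    }
    return best;                    /* 0 means "nothing to claim" */
}

int main(void)
{
    toy_plic p = { 0 };

    p.pending[3] = true;
    p.enabled[3] = true;
    p.priority[3] = 1;

    printf("claimed IRQ %u\n", toy_plic_claim(&p));  /* prints 3 */
    printf("claimed IRQ %u\n", toy_plic_claim(&p));  /* prints 0: IP cleared */
    return 0;
}
]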

Alistair


