From: Peter Maydell
Subject: Re: [PATCH] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
Date: Tue, 23 Mar 2021 10:54:37 +0000

On Mon, 22 Mar 2021 at 22:35, Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Mon, Mar 22, 2021 at 08:13:36PM +0000, Peter Maydell wrote:
> > Currently the gpex PCI controller implements no special behaviour for
> > guest accesses to areas of the PIO and MMIO where it has not mapped
> > any PCI devices, which means that for Arm you end up with a CPU
> > exception due to a data abort.
> >
> > Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> > like this return -1 for reads and ignore writes.  In the interests of
> > not being surprising, make host CPU accesses to these windows behave
> > as -1/discard where there's no mapped PCI device.
> >
> > Reported-by: Dmitry Vyukov <dvyukov@google.com>
> > Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> > Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> BTW it looks like launchpad butchered the lore.kernel.org
> link so one can't find out what was the guest issue this is
> fixing. Want to include a bit more data in the commit log
> instead?

The link in the LP report works for me; I can just click
straight through:
https://lore.kernel.org/lkml/CAK8P3a0HVu+x0T6+K3d0v1bvU-Pes0F0CSjqm5x=bxFgv5Y3mA@mail.gmail.com/

It's a syzkaller report that the guest kernel falls over if userspace
tries to access a non-existent 8250 UART, because the kernel doesn't
expect the resulting external abort.
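
For reference, the behaviour the patch adds boils down to backing the
PIO and MMIO windows with read-as-ones / write-ignore ops. A minimal
sketch of the idea using QEMU's MemoryRegionOps API follows; the names
(unmapped_ops, mmio_catchall, window_size) are illustrative only, not
necessarily what the patch itself uses:

#include "qemu/osdep.h"
#include "exec/memory.h"

/* Reads of unpopulated window space return all-ones, like a PC. */
static uint64_t unmapped_read(void *opaque, hwaddr addr, unsigned size)
{
    return ~0ULL;
}

/* Writes to unpopulated window space are silently discarded. */
static void unmapped_write(void *opaque, hwaddr addr, uint64_t val,
                           unsigned size)
{
}

static const MemoryRegionOps unmapped_ops = {
    .read = unmapped_read,
    .write = unmapped_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
    .impl.min_access_size = 1,
    .impl.max_access_size = 8,
};

/* In the controller's realize function, the whole window could be
 * backed by this region at negative priority, so that any real PCI
 * BARs mapped into the window still take precedence:
 *
 *   memory_region_init_io(&s->mmio_catchall, OBJECT(s), &unmapped_ops,
 *                         s, "pcie-mmio-catchall", window_size);
 *   memory_region_add_subregion_overlap(&s->mmio_window, 0,
 *                                       &s->mmio_catchall, -1);
 */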

> > Do we need to have the property machinery so that old
> > virt-5.2 etc retain the previous behaviour ?

Musing on this after sending the patch, I'm leaning towards
adding the property stuff, just to be on the safe side.
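
Roughly, the property machinery would look something like the sketch
below; the property name, struct field and default are illustrative
only, not a claim about what the eventual patch will use:

#include "qemu/osdep.h"
#include "hw/qdev-properties.h"
#include "hw/pci-host/gpex.h"

/* Bool property on the gpex host bridge: new machine types get the
 * -1/discard behaviour, old machine types can switch it off. */
static Property gpex_host_properties[] = {
    DEFINE_PROP_BOOL("allow-unmapped-accesses", GPEXHost,
                     allow_unmapped_accesses, true),
    DEFINE_PROP_END_OF_LIST(),
};

/* ...plus an hw_compat_5_2 entry in hw/core/machine.c so that
 * virt-5.2 and older keep the previous (faulting) behaviour:
 *
 *   { "gpex-pcihost", "allow-unmapped-accesses", "false" },
 */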

thanks
-- PMM


