Re: [Qemu-ppc] [PULL 1/2] spapr: Reset CAS & IRQ subsystem after devices


From: David Gibson
Subject: Re: [Qemu-ppc] [PULL 1/2] spapr: Reset CAS & IRQ subsystem after devices
Date: Mon, 26 Aug 2019 17:47:05 +1000
User-agent: Mutt/1.12.1 (2019-06-15)

On Fri, Aug 23, 2019 at 03:47:52PM +0200, Laurent Vivier wrote:
> On 23/08/2019 07:39, David Gibson wrote:
> > On Thu, Aug 22, 2019 at 10:07:09PM +0200, Laurent Vivier wrote:
> >> On 13/08/2019 08:59, David Gibson wrote:
> >>> This fixes a nasty regression in qemu-4.1 for the 'pseries' machine,
> >>> caused by the new "dual" interrupt controller model.  Specifically,
> >>> qemu can crash when used with KVM if a 'system_reset' is requested
> >>> while there's active I/O in the guest.
> >>>
> >>> The problem is that in spapr_machine_reset() we:
> >>>
> >>> 1. Reset the CAS vector state
> >>>   spapr_ovec_cleanup(spapr->ov5_cas);
> >>>
> >>> 2. Reset all devices
> >>>   qemu_devices_reset()
> >>>
> >>> 3. Reset the irq subsystem
> >>>   spapr_irq_reset();
> >>>
> >>> However (1) implicitly changes the interrupt delivery mode, because
> >>> whether we're using XICS or XIVE depends on the CAS state.  We don't
> >>> properly initialize the new irq mode until (3) though - in particular
> >>> setting up the KVM devices.
> >>>
> >>> During (2), we can temporarily drop the BQL, allowing some irqs to be
> >>> delivered, and they will go to an irq system that's not properly set up.
> >>>
> >>> Specifically, if the previous guest was in (KVM) XIVE mode, the CAS
> >>> reset will put us back in XICS mode.  kvm_kernel_irqchip() still
> >>> returns true, because XIVE was using KVM; however, XICS doesn't have
> >>> its KVM components initialized and kernel_xics_fd == -1.  When the irq
> >>> is delivered it goes via ics_kvm_set_irq(), which assert()s that
> >>> kernel_xics_fd != -1.
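(For illustration only: a minimal, compilable sketch of the failure mode described above. The names kernel_xics_fd, ics_kvm_set_irq and kvm_kernel_irqchip come from the text; the bodies here are simplified assumptions, not the actual QEMU code.)

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static int kernel_xics_fd = -1;          /* only set up again by the irq reset in (3) */

static bool kvm_kernel_irqchip(void)     /* still true: the previous XIVE mode used KVM */
{
    return true;
}

static void ics_kvm_set_irq(int srcno, int val)
{
    assert(kernel_xics_fd != -1);        /* the assert that aborts qemu */
    printf("irq %d -> %d via fd %d\n", srcno, val, kernel_xics_fd);
}

int main(void)
{
    /* (1) CAS reset has already switched us back to XICS mode.          */
    /* (2) qemu_devices_reset(): the BQL may be dropped and an in-flight */
    /*     irq is delivered before the irq subsystem is re-initialized.  */
    if (kvm_kernel_irqchip()) {
        ics_kvm_set_irq(0, 1);           /* aborts: the XICS KVM fd is still -1 */
    }
    /* (3) spapr_irq_reset() would only now open kernel_xics_fd.         */
    return 0;
}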
> >>>
> >>> This change addresses the problem by delaying the CAS reset until
> >>> after the devices reset.  The device reset should quiesce all the
> >>> devices so we won't get irqs delivered while we mess around with the
> >>> IRQ subsystem.  The CAS reset and irq re-initialization should also
> >>> now be under the same BQL critical section, so nothing else should be
> >>> able to interrupt it either.
> >>>
> >>> We also move the spapr_irq_msi_reset() used in one of the legacy irq
> >>> modes, since it logically belongs at the same point as
> >>> spapr_irq_reset() (it's essentially an equivalent operation for older
> >>> machine types).  Since we don't need to switch between different
> >>> interrupt controllers for those old machine types, nothing should
> >>> actually have been broken in those cases, though.
> >>>
> >>> Cc: Cédric Le Goater <address@hidden>
> >>>
> >>> Fixes: b2e22477 "spapr: add a 'reset' method to the sPAPR IRQ backend"
> >>> Fixes: 13db0cd9 "spapr: introduce a new sPAPR IRQ backend supporting
> >>>                  XIVE and XICS"
> >>> Signed-off-by: David Gibson <address@hidden>
> >>> ---
> >>>  hw/ppc/spapr.c | 24 ++++++++++++------------
> >>>  1 file changed, 12 insertions(+), 12 deletions(-)
> >>>
> >>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> >>> index 821f0d4a49..12ed4b065c 100644
> >>> --- a/hw/ppc/spapr.c
> >>> +++ b/hw/ppc/spapr.c
> >>> @@ -1726,6 +1726,18 @@ static void spapr_machine_reset(MachineState *machine)
> >>>          spapr_setup_hpt_and_vrma(spapr);
> >>>      }
> >>>  
> >>> +    /*
> >>> +     * NVLink2-connected GPU RAM needs to be placed on a separate NUMA node.
> >>> +     * We assign a new numa ID per GPU in spapr_pci_collect_nvgpu() which is
> >>> +     * called from vPHB reset handler so we initialize the counter here.
> >>> +     * If no NUMA is configured from the QEMU side, we start from 1 as GPU RAM
> >>> +     * must be equally distant from any other node.
> >>> +     * The final value of spapr->gpu_numa_id is going to be written to
> >>> +     * max-associativity-domains in spapr_build_fdt().
> >>> +     */
> >>> +    spapr->gpu_numa_id = MAX(1, nb_numa_nodes);
> >>> +    qemu_devices_reset();
> >>> +
> >>>      /*
> >>>       * If this reset wasn't generated by CAS, we should reset our
> >>>       * negotiated options and start from scratch
> >>> @@ -1741,18 +1753,6 @@ static void spapr_machine_reset(MachineState *machine)
> >>>          spapr_irq_msi_reset(spapr);
> >>>      }
> >>>  
> >>> -    /*
> >>> -     * NVLink2-connected GPU RAM needs to be placed on a separate NUMA node.
> >>> -     * We assign a new numa ID per GPU in spapr_pci_collect_nvgpu() which is
> >>> -     * called from vPHB reset handler so we initialize the counter here.
> >>> -     * If no NUMA is configured from the QEMU side, we start from 1 as GPU RAM
> >>> -     * must be equally distant from any other node.
> >>> -     * The final value of spapr->gpu_numa_id is going to be written to
> >>> -     * max-associativity-domains in spapr_build_fdt().
> >>> -     */
> >>> -    spapr->gpu_numa_id = MAX(1, nb_numa_nodes);
> >>> -    qemu_devices_reset();
> >>> -
> >>>      /*
> >>>       * This is fixing some of the default configuration of the XIVE
> >>>       * devices. To be called after the reset of the machine devices.
> >>>
> >>
> >> This commit breaks migration between POWER8 <-> POWER9 hosts:
> >>
> >> qemu-system-ppc64: error while loading state for instance 0x1 of device 'cpu'
> >> qemu-system-ppc64: load of migration failed: Operation not permitted
> >>
> >> Using a guest with a running 4.18 kernel (RHEL 8) and "-M pseries,max-cpu-compat=power8" on both sides.
> >>
> >> There is no problem if both hosts are of the same kind (P8 <-> P8 or P9 <-> P9).
> > 
> > Crud, I was afraid there might be some subtle dependency on the
> > reverse order.
> 
> It seems the side effect of the patch in comment 5 is to add a supplementary
> field, compat_pvr, to the CPUs in the migration stream:
> 
>         {
>             "name": "cpu",
>             "instance_id": 0,
>             "vmsd_name": "cpu",
>             "version": 5,
> ...
>             "subsections": [
> ...
>                 {
>                     "vmsd_name": "cpu/compat",
>                     "version": 1,
>                     "fields": [
>                         {
>                             "name": "compat_pvr",
>                             "type": "uint32",
>                             "size": 4
>                         }
>                     ]
>                 }
>             ]
>         },
> ...
> 
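> (For reference, a subsection like the one above is typically declared with a
> VMStateDescription roughly like the following. This is an approximation
> reconstructed from the JSON dump, not a verbatim copy of QEMU's target/ppc
> code; compat_needed in particular is an assumed name and condition.)
>
> /* Assumes QEMU's internal headers; approximate reconstruction only. */
> #include "qemu/osdep.h"
> #include "migration/vmstate.h"
> #include "cpu.h"                        /* PowerPCCPU, compat_pvr */
>
> static bool compat_needed(void *opaque) /* assumed: only send once compat_pvr is set */
> {
>     PowerPCCPU *cpu = opaque;
>
>     return cpu->compat_pvr != 0;
> }
>
> static const VMStateDescription vmstate_compat = {
>     .name = "cpu/compat",               /* matches "vmsd_name": "cpu/compat" */
>     .version_id = 1,                    /* matches "version": 1 */
>     .minimum_version_id = 1,
>     .needed = compat_needed,
>     .fields = (VMStateField[]) {
>         VMSTATE_UINT32(compat_pvr, PowerPCCPU),  /* the uint32 "compat_pvr" field */
>         VMSTATE_END_OF_LIST()
>     }
> };
>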
> What seems to happen is that compat_pvr is not propagated correctly to all CPUs.
> 
> Originally, spapr_machine_reset() called ppc_set_compat() to set the value to
> max_compat_pvr for the first CPU, and spapr_cpu_reset() then propagated it to
> all the other CPUs.
> Now, as spapr_cpu_reset() is called before that, the value is no longer
> propagated to all CPUs.
> 
> A simple fix seems to be:
> 
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1752,7 +1752,7 @@ static void spapr_machine_reset(MachineState *machine)
>          spapr_ovec_cleanup(spapr->ov5_cas);
>          spapr->ov5_cas = spapr_ovec_new();
>  
> -        ppc_set_compat(first_ppc_cpu, spapr->max_compat_pvr, &error_fatal);
> +        ppc_set_compat_all(spapr->max_compat_pvr, &error_fatal);
>      }
>  
>      /*
> 

As discussed on our call, I think this is a good fix, at least in the
medium term.  Please add some explanatory comments and post it
formally ASAP.

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
