Re: [Qemu-ppc] [PATCH] spapr: fix memory hotplug error path


From: Bharata B Rao
Subject: Re: [Qemu-ppc] [PATCH] spapr: fix memory hotplug error path
Date: Tue, 4 Jul 2017 09:01:43 +0530
User-agent: Mutt/1.7.1 (2016-10-04)

On Mon, Jul 03, 2017 at 02:21:31PM +0200, Greg Kurz wrote:
> QEMU shouldn't abort if spapr_add_lmbs()->spapr_drc_attach() fails.
> Let's propagate the error instead, like it is done everywhere else
> where spapr_drc_attach() is called.
> 
> Signed-off-by: Greg Kurz <address@hidden>
> ---
>  hw/ppc/spapr.c |   10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 70b3fd374e2b..e103be500189 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -2601,6 +2601,7 @@ static void spapr_add_lmbs(DeviceState *dev, uint64_t addr_start, uint64_t size,
>      int i, fdt_offset, fdt_size;
>      void *fdt;
>      uint64_t addr = addr_start;
> +    Error *local_err = NULL;
> 
>      for (i = 0; i < nr_lmbs; i++) {
>          drc = spapr_drc_by_id(TYPE_SPAPR_DRC_LMB,
> @@ -2611,7 +2612,12 @@ static void spapr_add_lmbs(DeviceState *dev, uint64_t addr_start, uint64_t size,
>          fdt_offset = spapr_populate_memory_node(fdt, node, addr,
>                                                  SPAPR_MEMORY_BLOCK_SIZE);
> 
> -        spapr_drc_attach(drc, dev, fdt, fdt_offset, errp);
> +        spapr_drc_attach(drc, dev, fdt, fdt_offset, &local_err);
> +        if (local_err) {
> +            g_free(fdt);
> +            error_propagate(errp, local_err);
> +            return;
> +        }

There is some history to this. I was doing similar error recovery and
propagation here during the memory hotplug development phase, until Igor
suggested that we shouldn't try to recover after we have made guest-visible
changes.

Refer to "changes in v6" section in this post:
https://lists.gnu.org/archive/html/qemu-ppc/2015-06/msg00296.html

However, at that time we were adding memory by the DRC index method, and
hence would attach and online one LMB at a time. In that method, if an
intermediate attach failed we would end up with a few LMBs already onlined
by the guest. Since then we have switched (optionally, based on
dedicated_hp_event_source) to the count-indexed method of hotplug, where we
attach all the LMBs one by one and then request the guest to hotplug all of
them at once using the count-indexed method.
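
In other words, the count-indexed path is roughly shaped like the sketch
below (hand-wavy pseudo-QEMU for illustration only; the notification helper
name and the first_drc_index variable are written from memory and may not
match the actual code):

    /* attach every LMB first; nothing is guest visible at this stage */
    for (i = 0; i < nr_lmbs; i++) {
        drc = spapr_drc_by_id(TYPE_SPAPR_DRC_LMB,
                              addr / SPAPR_MEMORY_BLOCK_SIZE);
        spapr_drc_attach(drc, dev, fdt, fdt_offset, &local_err);
        addr += SPAPR_MEMORY_BLOCK_SIZE;
    }
    /* then a single hotplug request for the whole range
     * (dedicated_hp_event_source case) */
    spapr_hotplug_req_add_by_count_indexed(SPAPR_DR_CONNECTOR_TYPE_LMB,
                                           nr_lmbs, first_drc_index);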

So it will be a bit tricky to abort for the index-based case and recover
correctly for the count-indexed case.
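
For the count-indexed case, one conceivable recovery (a sketch only,
assuming something like spapr_drc_detach() can undo an attach that the
guest has not yet been notified about; the exact helper name and signature
may differ) would be to walk back the LMBs attached so far before
propagating the error:

    spapr_drc_attach(drc, dev, fdt, fdt_offset, &local_err);
    if (local_err) {
        /* detach the LMBs already attached in earlier iterations; the
         * guest has not been asked to hotplug anything yet, so this
         * should not be guest visible */
        while (addr > addr_start) {
            addr -= SPAPR_MEMORY_BLOCK_SIZE;
            drc = spapr_drc_by_id(TYPE_SPAPR_DRC_LMB,
                                  addr / SPAPR_MEMORY_BLOCK_SIZE);
            spapr_drc_detach(drc);   /* assumed single-arg helper */
        }
        g_free(fdt);
        error_propagate(errp, local_err);
        return;
    }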

Regards,
Bharata.



