qemu-ppc

Re: [RFC PATCH 3/4] hw/ppc: Take nested guest into account when saving timebase


From: David Gibson
Subject: Re: [RFC PATCH 3/4] hw/ppc: Take nested guest into account when saving timebase
Date: Mon, 28 Feb 2022 13:06:18 +1100

On Fri, Feb 25, 2022 at 01:08:46PM -0300, Fabiano Rosas wrote:
> David Gibson <david@gibson.dropbear.id.au> writes:
> 
> > On Thu, Feb 24, 2022 at 03:58:16PM -0300, Fabiano Rosas wrote:
> >> When saving the guest "timebase" we look to the first_cpu for its
> >> tb_offset. If that CPU happens to be running a nested guest at this
> >> time, the tb_offset will have the nested guest value.
> >> 
> >> This was caught by code inspection.
> >
> > This doesn't seem right.  Isn't the real problem that nested_tb_offset
> > isn't being migrated?  If you migrate that, shouldn't everything be
> > fixed up when the L1 cpu leaves the nested guest on the destination
> > host?
> 
> This uses first_cpu, so after we introduced the nested guest code, this
> value has become dependent on what the first_cpu is doing. If it happens
> to be running the nested guest when we migrate, then guest_timebase here
> will have a different value from the one it would have if we had used
> another cpu's tb_offset.

Oh, good point.  Yes, you probably do need this.
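
Spelling out the arithmetic for anyone following along: while first_cpu is
inside the L2, tb_env->tb_offset carries both the L1 offset and the nested
offset, so the saved guest_timebase would include the L2 component unless
nested_tb_offset is subtracted first. A tiny standalone sketch of that
adjustment (plain C with made-up values, not the QEMU code; only tb_offset
and nested_tb_offset are names taken from the patch):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* Made-up values; in QEMU these come from cpu_get_host_ticks()
     * and the per-CPU timebase state. */
    uint64_t host_ticks       = 1000000; /* host timebase "now" */
    int64_t  l1_tb_offset     = 500;     /* L1 guest's offset from the host TB */
    int64_t  nested_tb_offset = 12345;   /* extra offset while running the L2 */

    /* While first_cpu is in the nested guest, tb_env->tb_offset holds
     * the combined L1 + L2 offset. */
    int64_t tb_offset = l1_tb_offset + nested_tb_offset;

    /* Saving without adjustment would bake the L2 offset into the
     * migrated guest_timebase: */
    uint64_t unadjusted = host_ticks + tb_offset;

    /* The patch subtracts nested_tb_offset so the saved value is the
     * L1 timebase regardless of what first_cpu happens to be running: */
    uint64_t adjusted = host_ticks + (tb_offset - nested_tb_offset);

    printf("unadjusted guest_timebase: %" PRIu64 "\n", unadjusted);
    printf("adjusted guest_timebase:   %" PRIu64 "\n", adjusted);
    return 0;
}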

> Now, we might have a bug or at least an inefficiency here because
> timebase_load is never called for the TCG migration case. The
> cpu_ppc_clock_vm_state_change callback is only registered for KVM. So in
> TCG we call timebase_save during pre_save, migrate the guest_timebase,
> but never do anything with it on the remote side.

Oh.. yeah.. that sounds like it's probably buggy.  Unless the logic you
need is done at the time the TB or the DECR is read.
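
To make the missing destination-side step concrete: conceptually, consuming
the migrated guest_timebase just means recomputing an offset against the
destination host's timebase, which is what the KVM-only vm_state_change path
arranges via timebase_load(). A rough standalone illustration of that idea
(made-up values, not the actual timebase_load() implementation):

#include <inttypes.h>
#include <stdio.h>

/* Conceptual sketch only: on the destination, the guest timebase saved on
 * the source is turned back into an offset relative to the destination
 * host's timebase, so the guest TB keeps advancing across the downtime. */
static int64_t recompute_tb_offset(uint64_t migrated_guest_timebase,
                                   uint64_t dest_host_ticks)
{
    return (int64_t)(migrated_guest_timebase - dest_host_ticks);
}

int main(void)
{
    /* Made-up values for illustration. */
    uint64_t migrated_guest_timebase = 2000500; /* saved by timebase_save() */
    uint64_t dest_host_ticks         = 9000000; /* destination host TB "now" */

    printf("new tb_offset on the destination: %" PRId64 "\n",
           recompute_tb_offset(migrated_guest_timebase, dest_host_ticks));
    return 0;
}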

> 
> >> 
> >> Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
> >> ---
> >>  hw/ppc/ppc.c | 17 ++++++++++++++++-
> >>  1 file changed, 16 insertions(+), 1 deletion(-)
> >> 
> >> diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
> >> index 9e99625ea9..093cd87014 100644
> >> --- a/hw/ppc/ppc.c
> >> +++ b/hw/ppc/ppc.c
> >> @@ -36,6 +36,7 @@
> >>  #include "kvm_ppc.h"
> >>  #include "migration/vmstate.h"
> >>  #include "trace.h"
> >> +#include "hw/ppc/spapr_cpu_core.h"
> >>  
> >>  static void cpu_ppc_tb_stop (CPUPPCState *env);
> >>  static void cpu_ppc_tb_start (CPUPPCState *env);
> >> @@ -961,19 +962,33 @@ static void timebase_save(PPCTimebase *tb)
> >>  {
> >>      uint64_t ticks = cpu_get_host_ticks();
> >>      PowerPCCPU *first_ppc_cpu = POWERPC_CPU(first_cpu);
> >> +    int64_t tb_offset;
> >>  
> >>      if (!first_ppc_cpu->env.tb_env) {
> >>          error_report("No timebase object");
> >>          return;
> >>      }
> >>  
> >> +    tb_offset = first_ppc_cpu->env.tb_env->tb_offset;
> >> +
> >> +    if (first_ppc_cpu->vhyp && vhyp_cpu_in_nested(first_ppc_cpu)) {
> >> +        SpaprCpuState *spapr_cpu = spapr_cpu_state(first_ppc_cpu);
> >> +
> >> +        /*
> >> +         * If the first_cpu happens to be running a nested guest at
> >> +         * this time, tb_env->tb_offset will contain the nested guest
> >> +         * offset.
> >> +         */
> >> +        tb_offset -= spapr_cpu->nested_tb_offset;
> >> +    }
> >> +
> >>      /* not used anymore, we keep it for compatibility */
> >>      tb->time_of_the_day_ns = qemu_clock_get_ns(QEMU_CLOCK_HOST);
> >>      /*
> >>       * tb_offset is only expected to be changed by QEMU so
> >>       * there is no need to update it from KVM here
> >>       */
> >> -    tb->guest_timebase = ticks + first_ppc_cpu->env.tb_env->tb_offset;
> >> +    tb->guest_timebase = ticks + tb_offset;
> >>  
> >>      tb->runstate_paused =
> >>          runstate_check(RUN_STATE_PAUSED) || runstate_check(RUN_STATE_SAVE_VM);
> 

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
