From: Greg Kurz
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH v3] ppc: make idle_timer a per-cpu variable
Date: Thu, 18 Jul 2019 18:17:56 +0200

On Thu, 18 Jul 2019 10:21:28 -0500
Shivaprasad G Bhat <address@hidden> wrote:

> The current code is broken for more than one vcpu: each thread
> would overwrite the shared timer, and there were memory leaks.
> 
> Make it part of PowerPCCPU so that every thread has a
> separate one. Avoid using timer_new_ns(), which is not the
> preferred way to create timers.
> 
> Signed-off-by: Shivaprasad G Bhat <address@hidden>
> ---
>  v2: https://lists.gnu.org/archive/html/qemu-devel/2019-07/msg04023.html
>  Changes from v2:
>    v2 just looked at avoiding the memory leak.
>    This patch incorporates all of Greg's suggestions.
> 
>  target/ppc/cpu.h |    1 +
>  target/ppc/kvm.c |   31 ++++++++++++++++---------------
>  2 files changed, 17 insertions(+), 15 deletions(-)
> 
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index c9beba2a5c..521086d91a 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -1190,6 +1190,7 @@ struct PowerPCCPU {
>      void *machine_data;
>      int32_t node_id; /* NUMA node this CPU belongs to */
>      PPCHash64Options *hash64_opts;
> +    QEMUTimer idle_timer;
>  
>      /* Fields related to migration compatibility hacks */
>      bool pre_2_8_migration;
> diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
> index 8a06d3171e..6e1b96bb0a 100644
> --- a/target/ppc/kvm.c
> +++ b/target/ppc/kvm.c
> @@ -87,18 +87,6 @@ static int cap_large_decr;
>  
>  static uint32_t debug_inst_opcode;
>  
> -/*
> - * XXX We have a race condition where we actually have a level triggered
> - *     interrupt, but the infrastructure can't expose that yet, so the guest
> - *     takes but ignores it, goes to sleep and never gets notified that there's
> - *     still an interrupt pending.
> - *
> - *     As a quick workaround, let's just wake up again 20 ms after we injected
> - *     an interrupt. That way we can assure that we're always reinjecting
> - *     interrupts in case the guest swallowed them.
> - */
> -static QEMUTimer *idle_timer;
> -
>  static void kvm_kick_cpu(void *opaque)
>  {
>      PowerPCCPU *cpu = opaque;
> @@ -491,7 +479,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>          return ret;
>      }
>  
> -    idle_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, kvm_kick_cpu, cpu);
> +    timer_init_ns(&cpu->idle_timer, QEMU_CLOCK_VIRTUAL, kvm_kick_cpu, cpu);
>  
>      switch (cenv->mmu_model) {
>      case POWERPC_MMU_BOOKE206:
> @@ -523,6 +511,10 @@ int kvm_arch_init_vcpu(CPUState *cs)
>  
>  int kvm_arch_destroy_vcpu(CPUState *cs)
>  {
> +    PowerPCCPU *cpu = POWERPC_CPU(cs);
> +
> +    timer_deinit(&cpu->idle_timer);

As stated in the timer.h header file, timer_del() should always be called
before timer_deinit().
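
I.e. something along these lines (just a sketch of the intended call
order, reusing the idle_timer field added by this patch):

    int kvm_arch_destroy_vcpu(CPUState *cs)
    {
        PowerPCCPU *cpu = POWERPC_CPU(cs);

        /* Stop the timer before tearing it down, as timer.h requires. */
        timer_del(&cpu->idle_timer);
        timer_deinit(&cpu->idle_timer);

        return 0;
    }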

With that fixed:

Reviewed-by: Greg Kurz <address@hidden>

> +
>      return 0;
>  }
>  
> @@ -1379,8 +1371,17 @@ void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
>              printf("cpu %d fail inject %x\n", cs->cpu_index, irq);
>          }
>  
> -        /* Always wake up soon in case the interrupt was level based */
> -        timer_mod(idle_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
> +        /*
> +         * XXX We have a race condition where we actually have a level
> +         *     triggered interrupt, but the infrastructure can't expose that
> +         *     yet, so the guest takes but ignores it, goes to sleep and
> +         *     never gets notified that there's still an interrupt pending.
> +         *
> +         *     As a quick workaround, let's just wake up again 20 ms after
> +         *     we injected an interrupt. That way we can assure that we're
> +         *     always reinjecting interrupts in case the guest swallowed them.
> +         */
> +        timer_mod(&cpu->idle_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
>                         (NANOSECONDS_PER_SECOND / 50));
>      }
>  
> 
> 