Re: [Qemu-devel] [PATCH 4/4] target-ppc: Handle NMI guest exit


From: David Gibson
Subject: Re: [Qemu-devel] [PATCH 4/4] target-ppc: Handle NMI guest exit
Date: Fri, 13 Nov 2015 12:57:15 +1100
User-agent: Mutt/1.5.23 (2015-06-09)

On Thu, Nov 12, 2015 at 10:40:11AM +0100, Thomas Huth wrote:
> On 12/11/15 09:09, Thomas Huth wrote:
> > On 11/11/15 18:16, Aravinda Prasad wrote:
> >> Memory errors such as bit flips that cannot be corrected
> >> by hardware are passed on to the kernel for handling.
> >> If the memory address in error belongs to the guest, then
> >> the guest kernel is responsible for taking suitable action.
> >> Patch [1] enhances KVM to exit the guest with exit reason
> >> set to KVM_EXIT_NMI in such cases.
> >>
> >> This patch handles the KVM_EXIT_NMI exit. If the guest OS
> >> has registered a machine check handling routine by
> >> calling "ibm,nmi-register", then the handler builds
> >> the error log and invokes the registered handler;
> >> otherwise it invokes the handler at 0x200.
> >>
> >> [1] http://marc.info/?l=kvm-ppc&m=144726114408289
> >>
> >> Signed-off-by: Aravinda Prasad <address@hidden>
[snip]
> >> +        env->nip = 0x200;
> >> +        return 0;
> >> +    }
> >> +
> >> +    qemu_mutex_lock(&spapr->mc_in_progress);
> > 
> > Using a mutex here is definitely wrong. The kvm_arch_handle_exit() code
> > is run under the Big QEMU Lock™ (see qemu_mutex_lock_iothread() in
> > kvm_cpu_exec()),
> 
> In case you're looking for the calls, I just noticed that the
> qemu_mutex_lock_iothread() call has recently been pushed into
> kvm_arch_handle_exit() itself.
> 
> > so if you would ever get one thread waiting for this
> > mutex here, it could never be unlocked again in rtas_ibm_nmi_interlock()
> > because the other code would wait forever to get the BQL ==> Deadlock.
> > 
> > I think if you want to be able to handle multiple NMIs at once, you
> > likely need something like an error log per CPU instead. And if an NMI
> > happens on one CPU while there is already an NMI handler running on the
> > very same CPU, you could likely simply track this with a boolean variable
> > and put the CPU into checkstop if this happens?
> 
> Ok, I now had a look into the LoPAPR spec, and if I've got that right,
> you really have to serialize the NMIs in case they happen on multiple
> CPUs at the same time. So I guess the best thing you can do here is
> something like:
> 
>    while (spapr->mc_in_progress) {
>        /*
>         * There is already another NMI in progress, thus we need
>         * to yield here to wait until it has been finished
>         */
>        qemu_mutex_unlock_iothread();
>        usleep(10);
>        qemu_mutex_lock_iothread();
>    }
>    spapr->mc_in_progress = true;
> 
> Also LoPAPR talks about 'subsequent processors report "fatal error
> previously reported"', so maybe the other processors should report that
> condition in this case?
> And of course you've also got to check that the same CPU is not getting
> multiple NMIs before the interlock function has been called again.
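
On your last point about the interlock and repeated NMIs: the shape I'd
picture is roughly the toy sketch below (single-threaded, with the names
and the checkstop / "previously reported" handling invented purely for
illustration -- this is not QEMU code):

    /*
     * Toy model of the checks discussed above: a per-CPU "NMI pending"
     * flag that only ibm,nmi-interlock clears, plus the global
     * mc_in_progress flag serializing delivery across CPUs.
     */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct ToyCpu {
        int index;
        bool mc_pending;   /* NMI delivered, ibm,nmi-interlock not yet called */
    } ToyCpu;

    static bool mc_in_progress;  /* only one machine check delivered at a time */

    static void toy_handle_nmi(ToyCpu *cpu)
    {
        if (cpu->mc_pending) {
            /* Second NMI on this CPU before the interlock call: checkstop. */
            printf("cpu %d: checkstop (nested machine check)\n", cpu->index);
            return;
        }
        cpu->mc_pending = true;

        if (mc_in_progress) {
            /*
             * Another CPU's machine check is still being handled; per
             * LoPAPR the error log here could report "fatal error
             * previously reported" (or the CPU could wait, as in the
             * loop further up).
             */
            printf("cpu %d: fatal error previously reported\n", cpu->index);
            return;
        }

        mc_in_progress = true;
        printf("cpu %d: build error log, enter registered handler\n", cpu->index);
    }

    static void toy_handle_interlock(ToyCpu *cpu)
    {
        /* The guest calls ibm,nmi-interlock once it is done with the log. */
        cpu->mc_pending = false;
        mc_in_progress = false;
    }

    int main(void)
    {
        ToyCpu cpu0 = { .index = 0 }, cpu1 = { .index = 1 };

        toy_handle_nmi(&cpu0);        /* delivers the machine check */
        toy_handle_nmi(&cpu1);        /* concurrent NMI on another CPU */
        toy_handle_nmi(&cpu0);        /* nested NMI on cpu0 -> checkstop */
        toy_handle_interlock(&cpu0);
        toy_handle_interlock(&cpu1);
        return 0;
    }

The real thing obviously has to build a proper RTAS error log and sit
under the locking discussed below.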

You should be able to avoid the nasty usleep by using a pthread
condition variable.  So here you'd have

    while (spapr->mc_in_progress) {
        pthread_cond_wait(&mc_delivery_cond, &qemu_iothread_lock);
    }
    spapr->mc_in_progress = true;

Or... there may be QEMU wrappers around the pthread functions that you
should be using instead.  Once delivery of a single MC is complete, you'd
use pthread_cond_signal() to wake up any additional waiters.

pthread_cond_wait() atomically drops the specified mutex while it waits
and re-acquires it before returning, so access to mc_in_progress itself
is still protected by the iothread mutex.
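
To make that concrete, here's a tiny standalone sketch of the wait/signal
shape in plain pthreads (variable names just mirror the discussion; the
BQL and the real error-log handling are left out, and in QEMU proper you'd
use the qemu_cond_*() wrappers rather than raw pthreads):

    /*
     * Toy of the condition-variable pattern above.  deliver_mc() stands in
     * for building the error log and entering the registered handler; the
     * signalling at the end models the guest's ibm,nmi-interlock call.
     */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t mc_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t mc_delivery_cond = PTHREAD_COND_INITIALIZER;
    static bool mc_in_progress;

    static void *deliver_mc(void *arg)
    {
        int cpu = (int)(intptr_t)arg;

        pthread_mutex_lock(&mc_lock);
        while (mc_in_progress) {
            /* Another CPU's machine check is in flight; wait our turn. */
            pthread_cond_wait(&mc_delivery_cond, &mc_lock);
        }
        mc_in_progress = true;
        pthread_mutex_unlock(&mc_lock);

        /* ... build the error log, enter the registered handler ... */
        printf("cpu %d: delivering machine check\n", cpu);
        usleep(1000);

        /* Triggered by the guest's ibm,nmi-interlock call in reality. */
        pthread_mutex_lock(&mc_lock);
        mc_in_progress = false;
        pthread_cond_signal(&mc_delivery_cond);
        pthread_mutex_unlock(&mc_lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[4];

        for (intptr_t i = 0; i < 4; i++) {
            pthread_create(&threads[i], NULL, deliver_mc, (void *)i);
        }
        for (int i = 0; i < 4; i++) {
            pthread_join(threads[i], NULL);
        }
        return 0;
    }

Build with something like "cc -pthread".  In the QEMU code the mutex is
effectively the BQL, which is why the wait dropping it while sleeping is
exactly what avoids the interlock deadlock Thomas pointed out.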

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
