Re: [Qemu-devel] Holding the BQL for emulate_ppc_hypercall


From: Nikunj A Dadhania
Subject: Re: [Qemu-devel] Holding the BQL for emulate_ppc_hypercall
Date: Tue, 25 Oct 2016 09:13:38 +0530
User-agent: Notmuch/0.21 (https://notmuchmail.org) Emacs/25.0.94.1 (x86_64-redhat-linux-gnu)

Alex Bennée <address@hidden> writes:

> Hi,
>
> In the MTTCG patch set one of the big patches is to remove the
> requirement to hold the BQL while running code:
>
>   tcg: drop global lock during TCG code execution
>
> And this broke the PPC code because emulate_ppc_hypercall can cause
> changes to the global state. This function just calls spapr_hypercall()
> and puts the results into the TCG register file. Normally
> spapr_hypercall() is called under the BQL in KVM as
> kvm_arch_handle_exit() does things with the BQL held.
>
> I blithely wrapped the call in a lock/unlock pair only to find the
> ppc64 check builds failed as the hypercall was made during the
> cc->do_interrupt() code which also holds the BQL.
>
> I'm a little confused about the nature of PPC hypercalls in TCG. Are they
> not all detectable at code generation time? What is the case that causes
> an exception to occur rather than the helper function doing the
> hypercall?
>
> I guess it comes down to can I avoid doing:
>
>   /* If we come via cc->do_interrupt BQL may already be held */
>   if (!qemu_mutex_iothread_locked()) {
>       qemu_mutex_lock_iothread();
>       env->gpr[3] = spapr_hypercall(cpu, env->gpr[3], &env->gpr[4]);
>       qemu_mutex_unlock_iothread();
>   } else {
>       env->gpr[3] = spapr_hypercall(cpu, env->gpr[3], &env->gpr[4]);
>   }
>
> Any thoughts?

Similar discussions happened around this patch:
https://lists.gnu.org/archive/html/qemu-ppc/2016-09/msg00015.html

That was only working for the TCG case; I would still need to handle the
KVM case to avoid a deadlock.
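For the TCG path, one option is to fold the conditional locking into a
small helper so the cc->do_interrupt path (which already holds the BQL)
does not try to take it twice. A rough, untested sketch follows; the name
emulate_spapr_hypercall() is just a placeholder:

    /* Rough sketch only -- emulate_spapr_hypercall() is a placeholder
     * name.  The idea: take the BQL only if it is not already held, so
     * the cc->do_interrupt path, which enters with the BQL held, does
     * not deadlock. */
    static void emulate_spapr_hypercall(PowerPCCPU *cpu)
    {
        CPUPPCState *env = &cpu->env;
        bool need_bql = !qemu_mutex_iothread_locked();

        if (need_bql) {
            qemu_mutex_lock_iothread();
        }

        env->gpr[3] = spapr_hypercall(cpu, env->gpr[3], &env->gpr[4]);

        if (need_bql) {
            qemu_mutex_unlock_iothread();
        }
    }

For KVM, spapr_hypercall() is already reached with the BQL held via
kvm_arch_handle_exit(), so the locked check would keep the helper a
no-op on that path.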

Regards
Nikunj
