From: Jan Kiszka
Subject: Re: [Qemu-devel] [PATCH v6 7/8] memory: introduce tls context to trace nested mmio request issue
Date: Mon, 05 Nov 2012 07:57:50 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2012-11-05 06:38, Liu Ping Fan wrote:
> From: Liu Ping Fan <address@hidden>
> 
> After breaking down the big lock, a nested MMIO request that does not
> target RAM can cause a deadlock. Suppose devices dev_a and dev_b hold
> fine-grained locks lockA and lockB respectively; an ABBA deadlock can
> then be triggered. We fix this by tracing and rejecting such requests.
> 
> Signed-off-by: Liu Ping Fan <address@hidden>
> ---
>  exec.c        |   47 +++++++++++++++++++++++++++++++++++++++++++++++
>  qemu-thread.h |    7 +++++++
>  2 files changed, 54 insertions(+), 0 deletions(-)
> 
> diff --git a/exec.c b/exec.c
> index fa34ef9..1eb920d 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -3442,6 +3442,48 @@ static bool address_space_section_lookup_ref(AddressSpace *as,
>      return safe_ref;
>  }
>  
> +typedef struct ThreadContext {
> +  DispatchType dispatch_type;
> +  unsigned int mmio_req_pending;
> +} ThreadContext;
> +
> +static __thread ThreadContext thread_context = {
          ^^^^^^^^
Again, you will have to work on qemu-tls.h and then use DEFINE_TLS. The
above is not portable. (See the sketch after the quoted hunk below.)

> +    .dispatch_type = DISPATCH_INIT,
> +    .mmio_req_pending = 0
> +};
> +
> +void qemu_thread_set_dispatch_type(DispatchType type)
> +{
> +    thread_context.dispatch_type = type;
> +}
> +
> +void qemu_thread_reset_dispatch_type(void)
> +{
> +    thread_context.dispatch_type = DISPATCH_INIT;
> +}
> +
> +static void address_space_check_inc_req_pending(MemoryRegionSection *section)
> +{
> +    bool nested = false;
> +
> +    /* currently, only MMIO runs outside the big lock; need this to avoid deadlock */
> +    if (thread_context.dispatch_type == DISPATCH_MMIO) {
> +        nested = ++thread_context.mmio_req_pending > 1;
> +        /* TODO: filter out the IOMMU case */
> +        if (nested && !memory_region_is_ram(section->mr)) {
> +            fprintf(stderr, "mmio: nested target not RAM is not supported\n");
> +            abort();
> +        }
> +    }
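
To illustrate the qemu-tls.h remark above: a minimal sketch, assuming the
DECLARE_TLS/DEFINE_TLS/tls_var macro interface of qemu-tls.h (the
ThreadContext fields are taken from the patch, everything else is
illustrative):

    #include "qemu-tls.h"       /* DECLARE_TLS / DEFINE_TLS / tls_var */

    typedef struct ThreadContext {
        DispatchType dispatch_type;
        unsigned int mmio_req_pending;
    } ThreadContext;

    /* portable per-thread definition instead of a bare __thread */
    DEFINE_TLS(ThreadContext, thread_context) = {
        .dispatch_type = DISPATCH_INIT,
    };

    void qemu_thread_set_dispatch_type(DispatchType type)
    {
        /* all accesses go through tls_var() */
        tls_var(thread_context).dispatch_type = type;
    }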

This should already take PIO into account, thus covering all scenarios:
if we are dispatching MMIO or PIO, reject any further requests that are
not targeting RAM.

I don't think we need mmio_req_pending for this. We are not interested
in differentiating between MMIO and PIO; both are problematic. We just
store in the TLS variable whether a request is in progress, setting it
here rather than before entering cpu_physical_memory_xxx. Then we can
simply bail out if another non-RAM request arrives; the nesting level
will never exceed 1.

And by bailing out I mean warn once + ignore the request, not abort().
Aborting would be a needless guest-triggerable VM termination.
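
A minimal sketch of that scheme (names are illustrative, not from the
patch): set a single TLS flag around the MMIO/PIO dispatch, and check it
in the lookup path:

    /* single TLS flag: is an MMIO/PIO dispatch in progress on this thread? */
    DEFINE_TLS(bool, req_in_progress);

    /* returns false if a nested non-RAM access must be ignored */
    static bool address_space_check_nested(MemoryRegionSection *section)
    {
        static bool warned;

        if (tls_var(req_in_progress) && !memory_region_is_ram(section->mr)) {
            if (!warned) {
                fprintf(stderr,
                        "nested dispatch to non-RAM region, ignoring request\n");
                warned = true;
            }
            return false;   /* warn once + ignore, never abort() */
        }
        return true;
    }

The flag would be set right before invoking a region's read/write
callback and cleared right after it returns, so the nesting level indeed
never exceeds 1.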

Jan

