
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH 1/2] trace: include CPU index in trace_memory_region_ops_*()
Date: Wed, 24 Feb 2016 14:12:45 +0000
User-agent: Mutt/1.5.24 (2015-08-30)

On Wed, Feb 17, 2016 at 01:29:14PM -0800, Hollis Blanchard wrote:
> Knowing which CPU performed an action is essential for understanding SMP guest
> behavior.
> 
> However, cpu_physical_memory_rw() may be executed by a machine init function,
> before any VCPUs are running, when there is no CPU running ('current_cpu' is
> NULL). In this case, store -1 in the trace record as the CPU index. Trace
> analysis tools may need to be aware of this special case.
> 
> Signed-off-by: Hollis Blanchard <address@hidden>
> ---
>  memory.c     | 48 ++++++++++++++++++++++++++++++++++++------------
>  trace-events |  8 ++++----
>  2 files changed, 40 insertions(+), 16 deletions(-)
> 
> diff --git a/memory.c b/memory.c
> index 2d87c21..6ae7bae 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -395,13 +395,17 @@ static MemTxResult 
> memory_region_oldmmio_read_accessor(MemoryRegion *mr,
>                                                         MemTxAttrs attrs)
>  {
>      uint64_t tmp;
> +    int cpu_index = -1;
> +
> +    if (current_cpu)
> +        cpu_index = current_cpu->cpu_index;

QEMU coding style always uses curly braces, even when the if statement
body only has one line.

Cases like these should be caught by scripts/checkpatch.pl.  I use a git
hook to run it automatically on commit:
http://blog.vmsplice.net/2011/03/how-to-automatically-run-checkpatchpl.html


A helper function would avoid the code duplication throughout this patch:

static int get_cpu_index(void)
{
    if (current_cpu) {
        return current_cpu->cpu_index;
    }
    return -1;
}
