qemu-devel
Re: [PATCH v8 03/74] cpu: introduce cpu_mutex_lock/unlock


From: Robert Foley
Subject: Re: [PATCH v8 03/74] cpu: introduce cpu_mutex_lock/unlock
Date: Mon, 11 May 2020 12:09:15 -0400

On Mon, 11 May 2020 at 06:24, Alex Bennée <address@hidden> wrote:
> Robert Foley <address@hidden> writes:
> <snip>
> > +/* XXX: is this really the max number of CPUs? */
> > +#define CPU_LOCK_BITMAP_SIZE 2048
>
> I wonder if we should be asserting this somewhere? Given it's an init
> time constant we can probably do it somewhere in the machine realise
> stage. I think the value is set in  MachineState *ms->smp.max_cpus;

Sure, I suppose we can relocate the define to something like hw/core/cpu.h,
and then assert on it in smp_parse() from hw/core/machine.c?

> <snip>
> > diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> > index 45be5dc0ed..d2dd6c94cc 100644
> > --- a/stubs/Makefile.objs
> > +++ b/stubs/Makefile.objs
> > @@ -5,6 +5,7 @@ stub-obj-y += blockdev-close-all-bdrv-states.o
> >  stub-obj-y += clock-warp.o
> >  stub-obj-y += cpu-get-clock.o
> >  stub-obj-y += cpu-get-icount.o
> > +stub-obj-y += cpu-lock.o
> >  stub-obj-y += dump.o
> >  stub-obj-y += error-printf.o
> >  stub-obj-y += fdset.o
> > diff --git a/stubs/cpu-lock.c b/stubs/cpu-lock.c
> > new file mode 100644
> > index 0000000000..ca2ea8a9c2
> > --- /dev/null
> > +++ b/stubs/cpu-lock.c
> > @@ -0,0 +1,28 @@
> > +#include "qemu/osdep.h"
> > +#include "hw/core/cpu.h"
> > +
> > +void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
> > +{
> > +/* coverity gets confused by the indirect function call */
> > +#ifdef __COVERITY__
> > +    qemu_mutex_lock_impl(&cpu->lock, file, line);
> > +#else
> > +    QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
> > +    f(&cpu->lock, file, line);
> > +#endif
> > +}
> > +
> > +void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
> > +{
> > +    qemu_mutex_unlock_impl(&cpu->lock, file, line);
> > +}
>
> I find this a little confusing because we clearly aren't stubbing
> something out here - we are indeed doing a lock. What we seem to have is
> effectively the linux-user implementation of cpu locking which currently
> doesn't support qsp profiling.

I agree, it seems like cpu_mutex_lock/unlock can follow the model of
stubs/iothread-lock.c, which does not use a lock. Will change this.

>
> > +bool cpu_mutex_locked(const CPUState *cpu)
> > +{
> > +    return true;
> > +}
> > +
> > +bool no_cpu_mutex_locked(void)
> > +{
> > +    return true;
> > +}
>
> What functions care about these checks. I assume it's only system
> emulation checks that are in common code. Maybe that indicates we could
> achieve better separation of emulation and linux-user code. My worry is
> by adding an assert in linux-user code we wouldn't actually be asserting
> anything.

There is code that runs under linux-user which calls
cpu_mutex_locked().  I found at least a few cases:
cpu_interrupt_request_set(), cpu_halted(), and cpu_halted_set() from
include/hw/core/cpu.h are called under linux-user, as is
cpu_handle_halt_locked() from accel/tcg/cpu-exec.c.

no_cpu_mutex_locked() is also linked into linux-user, for
run_on_cpu() in cpus-common.c.

Thanks,
-Rob


