Re: [Qemu-devel] [PULL 11/24] tcg: enable thread-per-vCPU


From: Alex Bennée
Subject: Re: [Qemu-devel] [PULL 11/24] tcg: enable thread-per-vCPU
Date: Thu, 16 Mar 2017 17:31:10 +0000
User-agent: mu4e 0.9.19; emacs 25.2.9

Laurent Vivier <address@hidden> writes:

> On 27/02/2017 at 15:38, Alex Bennée wrote:
>>
>> Laurent Vivier <address@hidden> writes:
>>
>>> On 24/02/2017 at 12:20, Alex Bennée wrote:
>>>> There are a couple of changes that occur at the same time here:
>>>>
>>>>   - introduce a single vCPU qemu_tcg_cpu_thread_fn
>>>>
>>>>   One of these is spawned per vCPU with its own Thread and Condition
>>>>   variables. qemu_tcg_rr_cpu_thread_fn is the new name for the old
>>>>   single threaded function.
>>>>
>>>>   - the TLS current_cpu variable is now live for the lifetime of MTTCG
>>>>     vCPU threads. This is for future work where async jobs need to know
>>>>     the vCPU context they are operating in.
>>>>
>>>> The user can now switch on multi-threaded behaviour, which spawns a
>>>> thread per vCPU. For a simple kvm-unit-test like:
>>>>
>>>>   ./arm/run ./arm/locking-test.flat -smp 4 -accel tcg,thread=multi
>>>>
>>>> the test will now use 4 vCPU threads and produce an expected FAIL
>>>> (instead of the unexpected PASS), because its default mode has no
>>>> protection when incrementing a shared variable.
>>>>
>>>> We enable the parallel_cpus flag to ensure we generate correct barrier
>>>> and atomic code if supported by the frontend and backend. This doesn't
>>>> automatically enable MTTCG until default_mttcg_enabled() is updated to
>>>> check that the configuration is supported.
>>>
>>> This commit breaks linux-user mode:
>>>
>>> debian-8 with qemu-ppc on x86_64 with ltp-full-20170116
>>>
>>> cd /opt/ltp
>>> ./runltp -p -l "qemu-$(date +%FT%T).log" -f /opt/ltp/runtest/syscalls -s
>>> setgroups03
>>>
>>> setgroups03    1  TPASS  :  setgroups(65537) fails, Size is >
>>> sysconf(_SC_NGROUPS_MAX), errno=22
>>> qemu-ppc: /home/laurent/Projects/qemu/include/qemu/rcu.h:89:
>>> rcu_read_unlock: Assertion `p_rcu_reader->depth != 0' failed.
>>> qemu-ppc: /home/laurent/Projects/qemu/include/qemu/rcu.h:89:
>>> rcu_read_unlock: Assertion `p_rcu_reader->depth != 0' failed.
>>> qemu-ppc: /home/laurent/Projects/qemu/include/qemu/rcu.h:89:
>>> rcu_read_unlock: Assertion `p_rcu_reader->depth != 0' failed.
>>> ...
>>
>> Interesting. I can only think the current_cpu change has broken it
>> because most of the changes in this commit affect softmmu targets only
>> (linux-user has its own run loop).
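
For reference, the failing check is a per-thread nesting counter; roughly
(a simplified sketch, not the actual include/qemu/rcu.h code):

  #include <assert.h>

  static __thread unsigned rcu_depth;   /* per-thread read-side nesting */

  static inline void sketch_rcu_read_lock(void)
  {
      rcu_depth++;              /* the real code also publishes the
                                 * outermost critical section to writers */
  }

  static inline void sketch_rcu_read_unlock(void)
  {
      assert(rcu_depth != 0);   /* fires on an unbalanced unlock, or when
                                 * lock and unlock run on different threads */
      rcu_depth--;
  }

So an rcu_read_unlock() with no matching rcu_read_lock() on the same thread
is exactly what trips the assertion above.
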
>>
>> Thanks for the report - I'll look into it.
>
> After:
>
>      95b0eca Merge remote-tracking branch
> 'remotes/stsquad/tags/pull-mttcg-fixups-090317-1' into staging
>
> [Tested with my HEAD on:
> b1616fe Merge remote-tracking branch
> 'remotes/famz/tags/docker-pull-request' into staging]
>
> I have now:
>
> <<<test_start>>>
> tag=setgroups03 stime=1489413401
> cmdline="setgroups03"
> contacts=""
> analysis=exit
> <<<test_output>>>
> **
> ERROR:/home/laurent/Projects/qemu/cpu-exec.c:656:cpu_exec: assertion
> failed: (cpu == current_cpu)
> **

Sorry about the delay. After a lengthy fight to get LTP built for PowerPC,
I got this behaviour:

  17:26 address@hidden taken:41, git:mttcg/more-fixes-for-rc1, 
[/home/alex/lsrc/qemu/qemu.git]> sudo ./ppc-linux-user/qemu-ppc 
./ppc-linux-user/setgroups03
  setgroups03    1  TPASS  :  setgroups(65537) fails, Size is > 
sysconf(_SC_NGROUPS_MAX), errno=22
  setgroups03    2  TBROK  :  tst_sig.c:233: unexpected signal SIGSEGV(11) 
received (pid = 22137).
  setgroups03    3  TBROK  :  tst_sig.c:233: Remaining cases broken

I'm afraid I can't compare the result to real hardware so maybe my LTP
build is broken. But the main thing is I can't seem to reproduce it
here.

Could you ping me your LTP binary so I can have a look?

The other thing to note is that the assert you now see firing is a guard
against buggy compilers. What compiler version are you building with, and
can you try any other versions?
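
For context, that guard is the local-variable check after sigsetjmp() in
cpu_exec(): some compilers have been observed to clobber locals when
returning through siglongjmp, and the assert catches that. Roughly (a
sketch from memory, not a verbatim copy of cpu-exec.c):

  #include <assert.h>
  #include <setjmp.h>

  typedef struct CPUState CPUState;        /* opaque for this sketch */

  static __thread CPUState *current_cpu;   /* TLS, set per vCPU thread */
  static __thread sigjmp_buf jmp_env;

  void sketch_cpu_exec(CPUState *arg)
  {
      CPUState *cpu = arg;   /* local, not modified after sigsetjmp(),
                              * so it must keep its value */

      current_cpu = cpu;
      if (sigsetjmp(jmp_env, 0) != 0) {
          /* We get here via siglongjmp(jmp_env, 1) from an exception
           * path.  If the compiler clobbered 'cpu' across the jump,
           * this assertion fires. */
          assert(cpu == current_cpu);
      }
      /* ... translate and execute guest code ... */
  }

If the compiler mishandles either the TLS variable or the local across the
jump, the two sides of the comparison diverge, which is why the compiler
version matters here.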

--
Alex Bennée


