
Re: [Qemu-devel] [PATCH v2 00/45] tcg: support for multiple TCG contexts


From: jiang.biao2
Subject: Re: [Qemu-devel] [PATCH v2 00/45] tcg: support for multiple TCG contexts
Date: Wed, 19 Jul 2017 10:17:50 +0800 (CST)

> On 07/18/2017 02:22 PM, address@hidden wrote:
> > Seeing your work on multiple TCG contexts, it seems to have some
> > connection with the MTTCG feature, but I cannot figure out how they
> > are connected in detail.
> > 
> > Could you please help confirm the following questions:
> > 
> >  1. What is the relationship between your patches and the MTTCG
> >     feature mentioned in https://lwn.net/Articles/697265/?
> 
> The current MTTCG feature is in QEMU mainline.  It allows parallel
> execution of translated code in system mode.  It does *not* allow
> parallel translation -- all translation is done with tb_lock held.
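
For illustration, a minimal sketch of the tb_lock discipline described
above (names follow QEMU's tb_lock()/tb_gen_code(); this is a sketch,
not a verbatim excerpt):

    tb_lock();                      /* only one thread may translate */
    tb = tb_gen_code(cpu, pc, cs_base, flags, cflags);
    tb_unlock();
    /* The generated code then *executes* in parallel, outside the lock. */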
> 
> Note that we *always* have parallel execution in user mode.  However,
> this can and does lead to problems.  See below.
> 
> This patch set allows parallel translation in system mode, which is
> shown to improve overall throughput.  It does *not* allow parallel
> translation in user mode.  Firstly, user mode already shares more
> translations between threads (because it is running a single
> executable), so the translation routines are not high in the profile.
> Secondly, there are additional locking problems because there is no
> bound on the number of user threads.
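
A rough sketch of the idea behind the series, assuming the per-thread
context approach described above (not a verbatim excerpt from the
patches):

    /* Before: one global context, so code generation must be serialized. */
    TCGContext tcg_ctx;

    /* After (sketch): each vCPU thread generates code into its own
     * context, so system-mode translation can proceed in parallel. */
    __thread TCGContext *tcg_ctx;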

Does that mean both the MTTCG feature and this patch set are all about
system mode, and have nothing to do with linux-user mode?

> 
> >  2. What is the current status of the development of the MTTCG
> >     feature?
> 
> MTTCG has only been enabled on a few targets: alpha, arm, ppc64.
> Look for "mttcg=yes" in configure.
> 
> In order for MTTCG to be enabled, the target must be adjusted so that
> (1) all atomic instructions are implemented with atomic tcg operations,
> (2) TCG_GUEST_DEFAULT_MO is defined to indicate any barriers implied by
>     normal memory operations of the target architecture.
> 
> For target/mips, neither of these things is complete.
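
For illustration, a minimal sketch of the two requirements, modeled on
targets that already support MTTCG (e.g. target/arm); this is not
actual target/mips code:

    /* (2) In the target's cpu.h: declare the guest's memory model.
     *     0 means ordinary loads and stores imply no ordering. */
    #define TCG_GUEST_DEFAULT_MO  (0)

    /* (1) In the translator: emit guest atomics with tcg's atomic
     *     helpers instead of a plain load/modify/store sequence. */
    tcg_gen_atomic_cmpxchg_tl(ret, addr, cmpv, newv, mem_idx, MO_TEUL);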
> 
> MTTCG has only been enabled on one host: i386.
> Look for TCG_TARGET_DEFAULT_MO in tcg/*/tcg-target.h.
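
For example, the i386 host entry looks roughly like the following
(i386 is a strongly ordered TSO host, where only store-after-load
reordering is possible; treat the exact value as illustrative):

    #define TCG_TARGET_DEFAULT_MO  (TCG_MO_ALL & ~TCG_MO_ST_LD)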
> 
> In order for MTTCG to be enabled, the target memory order must not be
> stronger than the host memory order.  Since i386 has a very strong
> host memory order, it is easy for it to emulate any guest.  When the
> host has a weak memory order, we need to add the additional barriers
> that are implied by the target.  This is work that has not been done.
> 
> I am not sure why we have not already added this definition to all of
> the other tcg hosts.  I think this is just oversight, since almost
> everyone uses x86_64 linux as the host for testing tcg.  However,
> since all of the supported targets have weak memory orders, we ought
> to be able to support them with any host.
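
A hypothetical sketch of that missing work (not actual QEMU code): any
ordering the guest implies but the host does not guarantee would have
to be emitted as an explicit fence around the generated memory ops:

    /* 'wanted' is the ordering the guest requires at this point. */
    TCGBar missing = wanted & TCG_GUEST_DEFAULT_MO & ~TCG_TARGET_DEFAULT_MO;
    if (missing) {
        tcg_gen_mb(missing | TCG_BAR_SC);  /* emit an explicit barrier */
    }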

In my case, I use a Mips64 host and an i386 target; does that mean I
cannot enable MTTCG?

> >  3. Is there any problem with multithreaded programs running under
> >     linux-user qemu mode?  Would the situation be improved by the
> >     MTTCG feature?
> > 
> >     We need to use linux-user mode qemu to run a multithreaded app,
> >     but there seem to be many problems.
> 
> For user mode, we should still follow the rules for MTTCG, but we do
> not.  Instead we take it on faith that they have been followed, and
> execute the code in parallel anyway.  This faith is often misplaced,
> and it does mean that unsupported targets execute user mode code
> incorrectly.

What exactly do you mean by *unsupported targets*?  mips?  arm?  i386?

What is the main reason for the incorrect execution of multithreaded
apps in user mode?

Is MTTCG helpful for that?

Specifically for my case (i386 target on a Mips64 host in user mode),
how can the situation be improved?

Thanks a lot for your detailed explanation.
