qemu-devel

Re: [PATCH 1/2] accel/tcg/plugin: export host insn size


From: Wu, Fei
Subject: Re: [PATCH 1/2] accel/tcg/plugin: export host insn size
Date: Mon, 17 Apr 2023 21:01:34 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Thunderbird/102.10.0

On 4/17/2023 8:11 PM, Alex Bennée wrote:
> 
> "Wu, Fei" <fei2.wu@intel.com> writes:
> 
>> On 4/11/2023 3:27 PM, Alex Bennée wrote:
>>>
>>> "Wu, Fei" <fei2.wu@intel.com> writes:
>>>
>>>> On 4/10/2023 6:36 PM, Alex Bennée wrote:
>>>>>
>>>>> Richard Henderson <richard.henderson@linaro.org> writes:
>>>>>
>>>>>> On 4/6/23 00:46, Alex Bennée wrote:
>>>>>>> If your aim is to examine JIT efficiency, what is wrong with the current
>>>>>>> "info jit" that you can access via the HMP? Also I'm wondering if it's
>>>>>>> time to remove the #ifdefs from CONFIG_PROFILER, because I doubt the
>>>>>>> extra data it collects is that expensive.
>>>>>>> Richard, what do you think?
>>>>>>
>>>>>> What is it that you want from CONFIG_PROFILER that you can't get from 
>>>>>> perf?
>>>>>> I've been tempted to remove CONFIG_PROFILER entirely.
>>>>>
>>>>> I think perf is pretty good at finding the hot paths in the translator,
>>>>> and pretty much all of the timer-related stuff in CONFIG_PROFILER could
>>>>> be dropped. However, some of the additional information about TCG op
>>>>> usage and distribution is useful. That said, the last time I had a tilt
>>>>> at this was on the back of a GSoC project:
>>>>>
>>>>>   Subject: [PATCH  v9 00/13] TCG code quality tracking and perf integration
>>>>>   Date: Mon,  7 Oct 2019 16:28:26 +0100
>>>>>   Message-Id: <20191007152839.30804-1-alex.bennee@linaro.org>
>>>>>
>>>>> The series ended up moving all the useful bits of CONFIG_PROFILER into
>>>>> tb stats, which were dynamically controlled on a per-TB basis. Now that
>>>>> the perf integration stuff has been merged, maybe there is a simpler
>>>>> series to be picked out of the remains?
>>>>>
>>>>> Fei Wu,
>>>>>
>>>>> Have you looked at the above series? Is that gathering the sort of
>>>>> things you need? Is this all in service of examining the translation
>>>>> quality of hot code?
>>>>>
>>>> Yes, it does have what I want. I suppose this wiki page is for the series:
>>>>     https://wiki.qemu.org/Features/TCGCodeQuality
>>>
>>> Yes.
>>>
>>>>
>>>> btw, the archive seems broken and cannot show the whole series:
>>>>     https://www.mail-archive.com/qemu-devel@nongnu.org/msg650258.html
>>>
>>> I have a v10 branch here:
>>>
>>>   https://github.com/stsquad/qemu/tree/tcg/tbstats-and-perf-v10
>>>
>>> I think the top two patches can be dropped on a rebase, as the JIT/perf
>>> integration is already merged. It might be a tricky rebase though,
>>> depending on how much churn there has been in the tree since then.
>>>
>> I have rebased the patches to upstream here:
>>     https://github.com/atwufei/qemu/tree/tbstats
>>
>> I tried to keep the patches as close to the original as possible, but there
>> have been a lot of changes upstream since then, so some changes were
>> inevitable. For example, CF_NOCACHE has been removed upstream, so I simply
>> dropped its usage in the corresponding patch, which might not be the
>> preferred approach.
> 
> Yeah, that's fine. CF_NOCACHE was removed to avoid special cases in the
> generation code - we simply don't link such TBs or store them in the QHT
> anymore. As long as the guest isn't executing a lot of non-RAM code, we
> won't run out of translation buffer too quickly.
> 
>>
>> I did some basic tests and they worked (the output of the info commands
>> goes to the qemu console instead of the telnet terminal), including:
>>     * tb_stats start
>>     * info tb-list
>>     * info tb 10
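>>
>> For reference, a minimal sketch of this kind of monitor session (only the
>> HMP commands above are from the tbstats branch; the qemu command-line flags
>> and the elided guest options here are just placeholders):
>>
>>     $ qemu-system-x86_64 -accel tcg -monitor stdio ...   # guest options elided
>>     (qemu) tb_stats start
>>     (qemu) info tb-list
>>     (qemu) info tb 10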
>>
>> Alex, would you please take a look?
> 
> That looks pretty good, glad it wasn't too painful a re-base.
> 
> The next question is: do you want to pick up the series and put it through
> a review cycle or two to get it merged? It would probably be worth checking
> the last posting thread to see if there are any outstanding review
> comments.
> 
Yes, I can do that. I have something else on hand right now, so the review
request may take a few days to go out.

Thanks,
Fei.


