Re: [INFO] Some preliminary performance data


From: Alex Bennée
Subject: Re: [INFO] Some preliminary performance data
Date: Sat, 09 May 2020 17:49:55 +0100
User-agent: mu4e 1.4.4; emacs 28.0.50

Laurent Desnogues <address@hidden> writes:

> On Sat, May 9, 2020 at 2:38 PM Aleksandar Markovic
> <address@hidden> wrote:
>>
>> On Sat, 9 May 2020 at 13:37, Laurent Desnogues
>> <address@hidden> wrote:
>> >
>> > On Sat, May 9, 2020 at 12:17 PM Aleksandar Markovic
>> > <address@hidden> wrote:
>> > > On Wed, 6 May 2020 at 13:26, Alex Bennée <address@hidden> wrote:
>> > >
>> > > > This is very much driven by how much code generation vs running you
>> > > > see. In most of my personal benchmarks I never really notice code
>> > > > generation because I give my machines large amounts of RAM, so code
>> > > > tends to stay resident and does not need to be re-translated. When the
>> > > > optimiser shows up it's usually accompanied by high TB flush and
>> > > > invalidate counts in "info jit" because we are doing more translation
>> > > > than we usually do.
>> > > >
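
(For reference, those flush and invalidate counters come from the HMP
monitor. A minimal way to watch them - assuming a system-mode guest
started with the monitor on stdio; the kernel and machine options here
are only illustrative - is something like:

    $ qemu-system-mips -m 128 -kernel vmlinux -display none -monitor stdio
    (qemu) info jit

and then re-issuing "info jit" while the workload runs to see whether the
TB flush/invalidate counts keep climbing.)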
>> > >
>> > > Yes, I think the machine was set up with only 128MB of RAM.
>> > >
>> > > That would actually be an interesting experiment for Ahmed - to
>> > > measure the impact of the amount of RAM on performance.
>> > >
>> > > But it looks like, at least for machines with little RAM, the
>> > > translation phase will take a significant percentage of the time.
>> > >
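
A rough way to run that experiment - assuming a guest image that boots to
a known point and shuts down on its own, and GNU time on the host; the
kernel and initrd names are only placeholders - would be to sweep the -m
value and compare wall-clock times:

    for m in 128 256 512 1024 2048; do
        /usr/bin/time -f "-m $m: %e s" \
            qemu-system-mips -m "$m" -kernel vmlinux -initrd rootfs.cpio \
                -append "console=ttyS0" -display none -no-reboot
    done

Pairing each run with the "info jit" counters would show whether any
slowdown at small RAM sizes really correlates with extra re-translation.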
>> > > I am attaching the call graph of the translation phase for "Hello World"
>> > > built for mips and emulated by QEMU (tb_gen_code() and its callees).
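
In case anyone wants to reproduce that kind of profile, one way to get it
- assuming a host with perf and a QEMU linux-user build with debug info;
the binary and test program names are just examples - is:

    perf record -g ./qemu-mips ./hello
    perf report --stdio --no-children

and then looking at tb_gen_code() and its callees in the report.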
>> >
>>
>> Hi, Laurent,
>>
>> "Hello world" was taken as an example where code generation is
>> dominant. It was taken to illustrate how performance-wise code
>> generation overhead is distributed (illustrating dominance of a
>> single function).
>>
>> While "Hello world" by itself is not a significant example, it conveys
>> a useful information: it says how much is the overhead of QEMU
>> linux-user executable initialization, and code generation spent on
>> emulation of loading target executable and printing a simple
>> message. This can be roughly deducted from the result for
>> a meaningful benchmark.
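
That baseline can be estimated directly; a sketch, assuming a linux-user
build of QEMU and a statically linked mips "Hello world" binary:

    perf stat -r 10 ./qemu-mips ./hello

The averaged task-clock over those runs approximates the fixed
initialization-plus-translation cost that a longer benchmark could then
be corrected for.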
>>
>> Booting a virtual machine is a legitimate scenario for measuring
>> performance, and perhaps even for attempting to improve it.
>>
>> Everything should be measured - code generation, execution of JIT-ed
>> code, and execution of helpers - in all cases, and checked for
>> departures from expected behavior.
>>
>> Let's say that we emulate a benchmark that basically runs some code in
>> a loop, or an algorithm. One would expect that, as the number of
>> iterations of the loop or the size of the data in the algorithm grows,
>> code generation becomes less and less significant, converging to zero.
>> Well, this should be confirmed with an experiment, not taken for
>> granted.
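
One simple way to check that convergence - assuming a hypothetical target
binary ./bench that takes its iteration count as an argument and a
linux-user QEMU to run it under - is to sweep the count and watch the
per-iteration cost:

    for n in 1000 100000 10000000; do
        perf stat -e task-clock,instructions ./qemu-mips ./bench "$n"
    done

If code generation really becomes negligible, the host instruction count
should grow nearly linearly with n, with a shrinking constant offset.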
>>
>> I think limiting measurements to only, let's say, the execution of
>> JIT-ed code (if that is what you implied) is a logical mistake. The
>> right conclusions should be drawn from the complete picture, shouldn't
>> they?
>
> I explicitly wrote that you should consider a wide spectrum of
> programs, so I think we're in violent agreement ;-)

If you want a good example of a real-world use case where we could
improve things then I suggest looking at compilers.

They are frequently instantiated once per compilation unit, and once they
are done all the JIT translations are thrown away. While the code path
taken by a compiler may be different for every unit it compiles, I bet
there are savings we could make by caching compilation. The first step
would be identifying how similar the profiles of the generated code are.
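
One rough way to start on that - this only measures where time goes inside
QEMU itself rather than comparing the translated code directly, and it
assumes perf, a linux-user QEMU and an aarch64-built gcc; all names are
placeholders - would be to profile two compilation units and diff the
results:

    perf record -o unit1.data ./qemu-aarch64 ./aarch64-gcc -c unit1.c
    perf record -o unit2.data ./qemu-aarch64 ./aarch64-gcc -c unit2.c
    perf diff unit1.data unit2.data

If tb_gen_code() and the optimiser dominate both profiles to a similar
degree, that would support the idea that cached translations could be
reused across units.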

-- 
Alex Bennée


