
RE: [Qemu-devel] Idea for speed improvement


From: Andreas Bollhalder
Subject: RE: [Qemu-devel] Idea for speed improvement
Date: Wed, 6 Oct 2004 17:56:43 +0200

Looks fine ;-) Would GCC, for example, sort a case statement so that the
most-used opcodes come first? That could surely save some cycles. Or maybe
I'm totally wrong?
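
To make the question concrete, here is a minimal sketch (not QEmu code; the
opcode names are invented) of one way to tell GCC which case is the hot one,
using __builtin_expect:

    enum { OP_MOV, OP_ADD, OP_JMP };    /* invented opcode names */

    static void dispatch(int opcode)
    {
        /* tell gcc which case is hot, so it keeps this path cheap */
        if (__builtin_expect(opcode == OP_MOV, 1)) {
            /* handle the most frequent opcode first */
            return;
        }
        switch (opcode) {
        case OP_ADD:
            /* ... */
            break;
        case OP_JMP:
            /* ... */
            break;
        default:
            break;
        }
    }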

Andreas

-----Original Message-----
From: qemu-devel-bounces+bolle=geodb address@hidden [mailto:qemu-devel-bounces+bol address@hidden] On Behalf Of Johannes Schindelin
Sent: Wednesday, October 06, 2004 4:19 PM
To: address@hidden
Subject: [Qemu-devel] Idea for speed improvement


Hi,

how about the following scenario: We add a "-profile <filename>" option to
QEmu, which just writes out profiling data:

- all the (optimized) intermediate stages of all the translated blocks are
  written into that file.
- the first op will be op_incr_tb_usage_counter, which increments a counter
  in the TB structure (a rough sketch of this follows below).
- whenever a TB is flushed, the counter is written into that file also.
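
A rough sketch of what the counting part could look like (all names here are
made up for illustration; this is not the actual dyngen calling convention):

    #include <stdio.h>

    struct TranslationBlock;                /* QEmu's TB structure (simplified) */

    typedef struct TBProfile {
        struct TranslationBlock *tb;        /* the block being counted */
        unsigned long usage_count;          /* bumped by the first op */
    } TBProfile;

    /* emitted as the first op of every TB when running with -profile */
    static void op_incr_tb_usage_counter(TBProfile *prof)
    {
        prof->usage_count++;
    }

    /* called when a TB is flushed: append its counter to the profile file */
    static void profile_tb_flushed(FILE *f, const TBProfile *prof)
    {
        fprintf(f, "tb=%p count=%lu\n", (void *)prof->tb, prof->usage_count);
    }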

After one run with "-profile",
a tool can analyze that data,
and generate
a header file which inlines
the most frequent sequences to
produce new
op_* functions, and code for
the optimization phase of the
dynamic
translation, which collapses
those sequences to the newly
created ops.
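
For example, the generated header might look something like this (the op
names and bodies are purely invented, just to show the shape of a collapsed
"super-op"):

    /* two ops that the profile shows often appear back to back */
    static inline void op_load_T0(int *T0, const int *mem)  { *T0 = *mem; }
    static inline void op_add_T1_T0(int *T1, const int *T0) { *T1 += *T0; }

    /* generated super-op: the frequent sequence inlined into one body,
       so gcc can optimize across the old op boundary */
    void op_load_T0_add_T1(int *T0, int *T1, const int *mem)
    {
        op_load_T0(T0, mem);
        op_add_T1_T0(T1, T0);
    }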

Then, QEmu is compiled anew, using that code.

This all depends on gcc doing a good job at optimizing the hell out of those
sequences, of course.

Only op_exit_tb cannot be inlined like that, because of the stack problem I
mentioned earlier on this list.

Thoughts, comments, bashing?

Ciao,
Dscho

P.S.: Fabrice, is this what you meant by "gcc backend"?



_______________________________________________
Qemu-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/qemu-devel




