From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH v2 20/22] tcg: Save insn data and use it in cpu_restore_state_from_tb
Date: Fri, 18 Sep 2015 23:44:29 +0100

On 18 September 2015 at 17:18, Richard Henderson <address@hidden> wrote:
> On 09/18/2015 06:08 AM, Peter Maydell wrote:
>> You're still not allowing for your worst-case datatable size when we
>> calculate tcg_ctx.code_gen_buffer_max_size.
>
> Hum.  What factor do you suggest?
>
> The maximum table expansion is of course going to depend on the target, since
> the "extra words" could conceivably encode badly.  See mips, sh4 "flags".
>
> For a "normal" target with no, or "small" extra words (cc_op is never large,
> nor is the arm condexec), the max expansion per-opcode would appear to be a
> long sequence of nops, where the opcodes emitted would consist solely of
> insn_start.  The table would consume (2+extra) bytes per nop.  Since state is
> not changing, all but the PC column would be zeros.
>
> Would you be happy if I simply arbitrarily bumped the other magic number here,
> TCG_MAX_OP_SIZE?  Adjust it from 192 to 200, or something?

Well, if we're going to add a margin, we need to add the worst-case margin.
However, it occurred to me that the reason we use a margin for the codegen
buffer is that we don't want to do a check for overrun every time we write
code to the buffer. For the datatable it seems more feasible to do
buffer length checks as we write the data. If we run out of space then
we just throw away the TB we generated (along with everything else in
the buffer) and start again.
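(Editorial aside: a minimal sketch of that check-as-you-emit idea; the struct and helper names are invented for illustration, not QEMU's actual API:)

  #include <stdbool.h>
  #include <stdint.h>

  /* Invented names, illustration only. */
  typedef struct {
      uint8_t *ptr;   /* next free byte of the per-TB insn data table */
      uint8_t *end;   /* one past the last byte we may use */
  } InsnDataTable;

  /* Append one encoded byte, refusing to overrun the buffer. */
  static bool table_emit_byte(InsnDataTable *t, uint8_t byte)
  {
      if (t->ptr >= t->end) {
          return false;   /* out of space */
      }
      *t->ptr++ = byte;
      return true;
  }

  /*
   * On a false return the translator would discard the partially built TB,
   * flush the whole code_gen_buffer, and restart translation, i.e. the
   * "throw it all away and start again" path above.
   */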

> I didn't try to add a guard page yet, since our logic in allocating the code
> gen buffer is a bit confused.  I'm a bit surprised that we don't prefer mmap
> all of the time.

Would the idea with the guard page be to catch the segfault and use
that as our trigger to clear the codegen buffer and start again?
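(Editorial aside: one plausible shape for that guard-page scheme, sketched with plain POSIX calls rather than anything that exists in QEMU; whether jumping out of the signal handler like this is acceptable is an open question:)

  #include <setjmp.h>
  #include <signal.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/mman.h>

  static sigjmp_buf restart_point;
  static uint8_t *guard_page;      /* page just past the codegen buffer */
  static size_t   guard_size;

  static void segv_handler(int sig, siginfo_t *info, void *ctx)
  {
      uint8_t *addr = (uint8_t *)info->si_addr;
      (void)sig; (void)ctx;

      if (addr >= guard_page && addr < guard_page + guard_size) {
          /* We ran off the end of the buffer while emitting: unwind back
             to the translator, which flushes everything and retries. */
          siglongjmp(restart_point, 1);
      }
      /* Not our fault: restore default handling and re-raise. */
      signal(SIGSEGV, SIG_DFL);
      raise(SIGSEGV);
  }

  static void install_guard(uint8_t *buffer_end, size_t page_size)
  {
      struct sigaction sa;

      memset(&sa, 0, sizeof(sa));
      sa.sa_sigaction = segv_handler;
      sa.sa_flags = SA_SIGINFO;
      sigaction(SIGSEGV, &sa, NULL);

      guard_page = buffer_end;
      guard_size = page_size;
      mprotect(guard_page, guard_size, PROT_NONE);   /* any access faults */
  }

  /* In the translator, roughly:
   *   if (sigsetjmp(restart_point, 1)) {
   *       flush the codegen buffer and retranslate the TB;
   *   }
   */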

thanks
-- PMM


