From: Richard Henderson
Subject: Re: [Qemu-devel] [PATCH v2 20/22] tcg: Save insn data and use it in cpu_restore_state_from_tb
Date: Sat, 19 Sep 2015 14:02:39 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.2.0

On 09/18/2015 06:08 AM, Peter Maydell wrote:
> On 18 September 2015 at 05:55, Richard Henderson <address@hidden> wrote:
>> We can now restore state without retranslation.
>>
>> Signed-off-by: Richard Henderson <address@hidden>
>> ---
>> +/* Encode the data collected about the instructions while compiling TB.
>> +   Place the data at BLOCK, and return the number of bytes consumed.
>> +
>> +   The logical table consisits of TARGET_INSN_START_WORDS target_ulong's,
>
> "consists". No apostrophe in 'target_ulongs'.
>
>> +   which come from the target's insn_start data, followed by a uintptr_t
>> +   which comes from the host pc of the end of the code implementing the insn.
>> +
>> +   Each line of the table is encoded as sleb128 deltas from the previous
>> +   line.  The seed for the first line is { tb->pc, 0..., tb->tc_ptr }.
>> +   That is, the first column is seeded with the guest pc, the last column
>> +   with the host pc, and the middle columns with zeros.  */
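
A minimal sketch of the encoding that comment describes, assuming standard
DWARF-style sleb128; the helper names and signatures here are illustrative,
not necessarily the patch's actual code:

#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch of the format described above; names are hypothetical.
   Standard signed LEB128: 7 data bits per byte, high bit set on all but the
   last byte.  At most 10 bytes for a 64-bit value.  An arithmetic right
   shift of negative values is assumed, as gcc/clang provide. */
static uint8_t *encode_sleb128(uint8_t *p, int64_t val)
{
    int more;

    do {
        uint8_t byte = val & 0x7f;
        val >>= 7;
        more = !((val == 0 && (byte & 0x40) == 0)
                 || (val == -1 && (byte & 0x40) != 0));
        *p++ = byte | (more ? 0x80 : 0);
    } while (more);

    return p;
}

/* Encode one logical row of the table: the guest insn_start words plus the
   host pc, each stored as the sleb128 delta from the previous row.  "prev"
   is seeded with { tb->pc, 0..., tb->tc_ptr } before the first row, per the
   comment above. */
static uint8_t *encode_row(uint8_t *p, const uint64_t *row,
                           uint64_t *prev, size_t n_cols)
{
    size_t i;

    for (i = 0; i < n_cols; i++) {
        p = encode_sleb128(p, (int64_t)(row[i] - prev[i]));
        prev[i] = row[i];
    }
    return p;
}

Restoring state then just walks the rows from the seed, re-adding deltas
until the searched host pc is passed.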

> You're still not allowing for your worst-case datatable size when we
> calculate tcg_ctx.code_gen_buffer_max_size.

I'll note that the current worst-case estimate is way too big: 122kB.

Which honestly means we're wasting a ton of space at the end of the code_gen_buffer. While down-thread we talk about guard pages and sigsegv handlers etc., I now believe this shouldn't be a blocker for this patch set.
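
For a sense of scale, a simple hard bound falls out of the format itself: an
sleb128-encoded 64-bit delta is at most 10 bytes, so a per-TB cap on the
table looks roughly like the following.  The macro names and the numbers are
assumptions for illustration only, not the 122kB estimate quoted above:

/* Illustrative bound only; names and the 512-insn cap are assumed values,
   not taken from the patch. */
#define MAX_INSNS_PER_TB     512   /* assumed per-TB instruction cap     */
#define INSN_START_WORDS     2     /* assumed TARGET_INSN_START_WORDS    */
#define MAX_SLEB128_BYTES    10    /* worst case for a 64-bit value      */

/* 512 * (2 + 1) * 10 = 15360 bytes per TB in the worst case. */
#define MAX_SEARCH_DATA_SIZE \
    (MAX_INSNS_PER_TB * (INSN_START_WORDS + 1) * MAX_SLEB128_BYTES)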

(And in particular, setting up a SEH handler for Win32 to act as a sigsegv handler is just too annoyingly difficult. It'd be one thing if we only targeted VC++, but doing SEH in GCC at present is just Too Ugly. So we'd have two different schemes for win32 and posix, which doesn't seem to be the best of ideas.)


r~


