On Tue, Dec 05, 2023 at 11:09:59AM +0100, Michal Suchánek wrote:
On Mon, Dec 04, 2023 at 03:02:45PM -0800, Richard Henderson wrote:
On 12/4/23 12:09, Michal Suchánek wrote:
On Mon, Dec 04, 2023 at 02:50:17PM -0500, Stefan Hajnoczi wrote:
On Mon, 4 Dec 2023 at 14:40, Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
+void tcg_unregister_thread(void)
+{
+    unsigned int n;
+
+    n = qatomic_fetch_dec(&tcg_cur_ctxs);
+    g_free(tcg_ctxs[n]);
+    qatomic_set(&tcg_ctxs[n], NULL);
+}
tcg_ctxs[n] may not be our context, so this looks like it could free
another thread's context and lead to undefined behavior.
Correct.
There is cpu->thread_id, so perhaps cpu->thread_ctx could be added as
well. That would require a bitmap of used thread contexts rather than a
counter, though.
Or don't free the context at all, but re-use it when incrementing and
tcg_ctxs[n] != NULL (i.e. plugging in a replacement vcpu). After all, there
can only be tcg_max_ctxs contexts.
But you would not know which contexts are in use and which aren't without
tracking the association of contexts to CPUs.
Unless there is a cpu array somewhere and you can use the same index for
both to create the association.
I tried to use cpu_index to correlate the tcg_ctx with the cpu. I added
some asserts that only null contexts are allocated and only non-null
contexts are released, but QEMU crashes somewhere in TCG sometime after
the guest gets to switch root.