From: Sergey Fedorov
Subject: [Qemu-devel] [PATCH v4 03/12] tcg: Prepare safe tb_jmp_cache lookup out of tb_lock
Date: Fri, 15 Jul 2016 20:58:43 +0300

From: Sergey Fedorov <address@hidden>

Ensure atomicity of the CPU's 'tb_jmp_cache' accesses so that a future
translation block lookup can be done outside of 'tb_lock'.

Note that this patch does *not* make the CPU's TLB invalidation safe if
it is done from another thread while the CPU is in its execution loop.
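
For readers less familiar with the pattern being prepared here, below is a
minimal standalone sketch (illustration only, not QEMU code): a direct-mapped
pointer cache whose fast-path lookup takes no lock, so every access to an
entry has to be a single atomic load or store to avoid torn pointers. It uses
C11 <stdatomic.h> rather than QEMU's atomic_read()/atomic_set() wrappers, and
all of the names (Entry, publish(), invalidate(), CACHE_SIZE) are invented
for the example.

/*
 * Standalone sketch of the access pattern this patch prepares: a
 * direct-mapped pointer cache whose fast path reads entries without a
 * lock, so all accesses must be single atomic loads/stores.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_BITS 8
#define CACHE_SIZE (1u << CACHE_BITS)

typedef struct Entry {
    uintptr_t key;          /* stands in for TranslationBlock pc/flags */
} Entry;

static _Atomic(Entry *) cache[CACHE_SIZE];

static unsigned hash_func(uintptr_t key)
{
    return (unsigned)(key & (CACHE_SIZE - 1));
}

/* Fast path: lockless lookup; NULL or a non-matching entry is a miss. */
static Entry *lookup(uintptr_t key)
{
    Entry *e = atomic_load_explicit(&cache[hash_func(key)],
                                    memory_order_acquire);
    if (e && e->key == key) {
        return e;
    }
    return NULL;            /* caller falls back to a slow, locked path */
}

/* Slow path: publish a freshly found/created entry. */
static void publish(Entry *e)
{
    atomic_store_explicit(&cache[hash_func(e->key)], e,
                          memory_order_release);
}

/* Invalidation: clear the slot only if it still points at this entry. */
static void invalidate(Entry *e)
{
    unsigned h = hash_func(e->key);
    if (atomic_load_explicit(&cache[h], memory_order_acquire) == e) {
        atomic_store_explicit(&cache[h], NULL, memory_order_relaxed);
    }
}

int main(void)
{
    Entry e = { .key = 0x400123 };
    publish(&e);
    printf("hit: %p\n", (void *)lookup(e.key));
    invalidate(&e);
    printf("after invalidate: %p\n", (void *)lookup(e.key));
    return 0;
}

In this patch the lookup in tb_find_fast() still happens under tb_lock (see
the diff below), so QEMU's plain atomic_read()/atomic_set() accesses are
sufficient; the acquire/release ordering in the toy example above is only
what a fully lock-free variant would want.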

Signed-off-by: Sergey Fedorov <address@hidden>
Signed-off-by: Sergey Fedorov <address@hidden>
[AJB: fixed missing atomic set, tweak title]
Signed-off-by: Alex Bennée <address@hidden>
[Sergey Fedorov: removed explicit memory barriers;
removed unnecessary atomic_read();
tweaked commit title and message]
Signed-off-by: Sergey Fedorov <address@hidden>
Reviewed-by: Alex Bennée <address@hidden>

---
Changes in v3:
 - explicit memory barriers removed
 - memset() on 'tb_jmp_cache' replaced with a loop on atomic_set()
Changes in v2:
 - fix spelling s/con't/can't/
 - add atomic_read while clearing tb_jmp_cache
 - add r-b tags
Changes in v1 (AJB):
 - tweak title
 - fixed missing set of tb_jmp_cache
---
 cpu-exec.c      |  4 ++--
 translate-all.c | 10 +++++++---
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/cpu-exec.c b/cpu-exec.c
index 974de6aa27ee..2fd1875a7317 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -315,7 +315,7 @@ static TranslationBlock *tb_find_slow(CPUState *cpu,
 
 found:
     /* we add the TB in the virtual pc hash table */
-    cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)] = tb;
+    atomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb);
     return tb;
 }
 
@@ -333,7 +333,7 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
        is executed. */
     cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
     tb_lock();
-    tb = cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
+    tb = atomic_read(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)]);
     if (unlikely(!tb || tb->pc != pc || tb->cs_base != cs_base ||
                  tb->flags != flags)) {
         tb = tb_find_slow(cpu, pc, cs_base, flags);
diff --git a/translate-all.c b/translate-all.c
index 0d47c1c0cf82..fdf520a86d68 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -848,7 +848,11 @@ void tb_flush(CPUState *cpu)
     tcg_ctx.tb_ctx.nb_tbs = 0;
 
     CPU_FOREACH(cpu) {
-        memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache));
+        int i;
+
+        for (i = 0; i < TB_JMP_CACHE_SIZE; ++i) {
+            atomic_set(&cpu->tb_jmp_cache[i], NULL);
+        }
         cpu->tb_flushed = true;
     }
 
@@ -1007,8 +1011,8 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     /* remove the TB from the hash list */
     h = tb_jmp_cache_hash_func(tb->pc);
     CPU_FOREACH(cpu) {
-        if (cpu->tb_jmp_cache[h] == tb) {
-            cpu->tb_jmp_cache[h] = NULL;
+        if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
+            atomic_set(&cpu->tb_jmp_cache[h], NULL);
         }
     }
 
-- 
2.9.1



