Subject: [Qemu-devel] [RFC v7 09/16] softmmu: Include MMIO/invalid exclusive accesses
From: Alvise Rigo
Date: Fri, 29 Jan 2016 10:32:38 +0100
Enable exclusive accesses when the MMIO/invalid flag is set in the TLB
entry.

When a LoadLink (LL) access targets MMIO memory, we treat it differently
from a RAM access: we do not rely on the EXCL bitmap to flag the page as
exclusive. In fact, we do not even need the TLB_EXCL flag to force the
slow path, since the slow path is always taken for MMIO anyway.
This commit does not yet invalidate an MMIO exclusive range on
conflicting non-exclusive accesses (e.g. CPU1 performs a LoadLink to
MMIO address X while CPU2 writes to X); that is addressed in the
following commit.
Suggested-by: Jani Kokkonen <address@hidden>
Suggested-by: Claudio Fontana <address@hidden>
Signed-off-by: Alvise Rigo <address@hidden>
---
cputlb.c | 7 +++----
softmmu_template.h | 26 ++++++++++++++++++++------
2 files changed, 23 insertions(+), 10 deletions(-)
diff --git a/cputlb.c b/cputlb.c
index aa9cc17..87d09c8 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -424,7 +424,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
if ((memory_region_is_ram(section->mr) && section->readonly)
|| memory_region_is_romd(section->mr)) {
/* Write access calls the I/O callback. */
- te->addr_write = address | TLB_MMIO;
+ address |= TLB_MMIO;
} else if (memory_region_is_ram(section->mr)
&& cpu_physical_memory_is_clean(section->mr->ram_addr
+ xlat)) {
@@ -437,11 +437,10 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
if (cpu_physical_memory_is_excl(section->mr->ram_addr + xlat)) {
/* There is at least one vCPU that has flagged the address as
* exclusive. */
- te->addr_write = address | TLB_EXCL;
- } else {
- te->addr_write = address;
+ address |= TLB_EXCL;
}
}
+ te->addr_write = address;
} else {
te->addr_write = -1;
}
diff --git a/softmmu_template.h b/softmmu_template.h
index 267c52a..c54bdc9 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -476,7 +476,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
/* Handle an IO access or exclusive access. */
if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
- if ((tlb_addr & ~TARGET_PAGE_MASK) == TLB_EXCL) {
+ if (tlb_addr & TLB_EXCL) {
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
CPUState *cpu = ENV_GET_CPU(env);
CPUClass *cc = CPU_GET_CLASS(cpu);
@@ -500,8 +500,15 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
}
}
- glue(helper_le_st_name, _do_ram_access)(env, val, addr, oi,
- mmu_idx, index, retaddr);
+ if (tlb_addr & ~(TARGET_PAGE_MASK | TLB_EXCL)) { /* MMIO access */
+ glue(helper_le_st_name, _do_mmio_access)(env, val, addr, oi,
+ mmu_idx, index,
+ retaddr);
+ } else {
+ glue(helper_le_st_name, _do_ram_access)(env, val, addr, oi,
+ mmu_idx, index,
+ retaddr);
+ }
lookup_and_reset_cpus_ll_addr(hw_addr, DATA_SIZE);
@@ -620,7 +627,7 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
/* Handle an IO access or exclusive access. */
if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
- if ((tlb_addr & ~TARGET_PAGE_MASK) == TLB_EXCL) {
+ if (tlb_addr & TLB_EXCL) {
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
CPUState *cpu = ENV_GET_CPU(env);
CPUClass *cc = CPU_GET_CLASS(cpu);
@@ -644,8 +651,15 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
}
}
- glue(helper_be_st_name, _do_ram_access)(env, val, addr, oi,
- mmu_idx, index, retaddr);
+ if (tlb_addr & ~(TARGET_PAGE_MASK | TLB_EXCL)) { /* MMIO access */
+ glue(helper_be_st_name, _do_mmio_access)(env, val, addr, oi,
+ mmu_idx, index,
+ retaddr);
+ } else {
+ glue(helper_be_st_name, _do_ram_access)(env, val, addr, oi,
+ mmu_idx, index,
+ retaddr);
+ }
lookup_and_reset_cpus_ll_addr(hw_addr, DATA_SIZE);
--
2.7.0