From: Philippe Mathieu-Daudé
Subject: Re: [RFC PATCH] cputlb: implement load_helper_unaligned() for unaligned loads
Date: Wed, 9 Jun 2021 12:28:48 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.10.1
On 6/9/21 11:35 AM, Mark Cave-Ayland wrote:
> [RFC because this is currently only lightly tested and there have been some
> discussions about whether this should be handled elsewhere in the memory API]
>
> If an unaligned load is required then the load is split into 2 separate
> accesses and combined together within load_helper(). This does not work
> correctly with MMIO accesses because the original access size is used for
> both individual accesses, causing the little- and big-endian combines to
> return the wrong result.
>
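For readers of the archive: the unaligned slow path being replaced here
recurses through full_load() with the original MemOp, roughly like this
(paraphrased from load_helper(), not a verbatim quote of the tree):

    addr1 = addr & ~((target_ulong)size - 1);  /* round down to alignment */
    addr2 = addr1 + size;                      /* next size-aligned chunk */
    r1 = full_load(env, addr1, oi, retaddr);   /* full-size access #1 */
    r2 = full_load(env, addr2, oi, retaddr);   /* full-size access #2 */
    shift = (addr & (size - 1)) * 8;
    if (memop_big_endian(op)) {
        res = (r1 << shift) | (r2 >> ((size * 8) - shift));  /* BE combine */
    } else {
        res = (r1 >> shift) | (r2 << ((size * 8) - shift));  /* LE combine */
    }

For RAM the two over-wide reads are harmless, but an MMIO region sees two
accesses of the original size at addresses the guest never issued, so the
shifted combine reconstructs the wrong bytes.
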
> There is already a similar solution in place for store_helper() where an
> unaligned access is handled by a separate store_helper_unaligned() function
> which, instead of using the original access size, uses a single-byte access
> size to shift and combine the result correctly regardless of the original
> access size or endianness.
>
> Implement a similar load_helper_unaligned() function which uses the same
> approach for unaligned loads to return the correct result according to the
> original test case.
>
> Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/360
> ---
> accel/tcg/cputlb.c | 99 ++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 82 insertions(+), 17 deletions(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index f24348e979..1845929e99 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1851,6 +1851,85 @@ load_memop(const void *haddr, MemOp op)
>     }
> }
>
> +static uint64_t __attribute__((noinline))
> +load_helper_unaligned(CPUArchState *env, target_ulong addr, uintptr_t retaddr,
> +                      size_t size, uintptr_t mmu_idx, bool code_read,
> +                      bool big_endian)
> +{
...
> +}
> +
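The function body is snipped above. For readers of the archive, the
approach described in the commit message amounts to combining single-byte
loads by shifting. An illustrative sketch only (not the patch's actual
code; load_one_byte() is a hypothetical stand-in for the real per-byte
TLB/MMIO access):

    uint64_t val = 0;
    size_t i;

    for (i = 0; i < size; i++) {
        /* A 1-byte access is always aligned, and MMIO sees exactly the
         * bytes the guest asked for.  load_one_byte() is hypothetical,
         * standing in for the real per-byte helper. */
        uint8_t b = load_one_byte(env, addr + i, mmu_idx, retaddr);
        if (big_endian) {
            val = (val << 8) | b;            /* earlier byte is more significant */
        } else {
            val |= (uint64_t)b << (i * 8);   /* earlier byte is less significant */
        }
    }
    return val;

Because the shifts are done per byte, the result no longer depends on the
original access size, which is what fixes the combine for MMIO.
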
> static inline uint64_t QEMU_ALWAYS_INLINE
> load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>             uintptr_t retaddr, MemOp op, bool code_read,
> @@ -1893,7 +1972,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
> CPUIOTLBEntry *iotlbentry;
> bool need_swap;
>
> -    /* For anything that is unaligned, recurse through full_load. */
> +    /* For anything that is unaligned, recurse through byte loads. */
>     if ((addr & (size - 1)) != 0) {
>         goto do_unaligned_access;
>     }
> @@ -1932,23 +2011,9 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>     if (size > 1
>         && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
>                     >= TARGET_PAGE_SIZE)) {
It would be easier to review if load_helper_unaligned() were extracted
first, as a preparatory patch.
> +        res = load_helper_unaligned(env, addr, retaddr, size, mmu_idx,
> +                                    code_read, memop_big_endian(op));
>         return res & MAKE_64BIT_MASK(0, size * 8);
>     }
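(Sanity check on the mask: for a 4-byte load, MAKE_64BIT_MASK(0, 32) is
0xffffffff, so only the low 32 bits of the byte-combined value are
returned.)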
>
>