qemu-devel



From: Ilya Leoshkevich
Subject: Re: [PATCH 1/2] linux-user: Fix unaligned memory access in prlimit64 syscall
Date: Thu, 23 Feb 2023 23:45:24 +0100
User-agent: Evolution 3.46.3 (3.46.3-1.fc37)

On Thu, 2023-02-23 at 12:31 -1000, Richard Henderson wrote:
> On 2/23/23 11:58, Ilya Leoshkevich wrote:
> > 32-bit guests may enforce only 4-byte alignment for
> > target_rlimit64,
> > whereas 64-bit hosts normally require the 8-byte one. Therefore
> > accessing this struct directly is UB.
> > 
> > Fix by adding a local copy.
> > 
> > Fixes: 163a05a8398b ("linux-user: Implement prlimit64 syscall")
> > Reported-by: Richard Henderson <richard.henderson@linaro.org>
> > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > ---
> >   linux-user/syscall.c | 12 +++++++-----
> >   1 file changed, 7 insertions(+), 5 deletions(-)
> > 
> > diff --git a/linux-user/syscall.c b/linux-user/syscall.c
> > index a6c426d73cf..8ae7696d8f1 100644
> > --- a/linux-user/syscall.c
> > +++ b/linux-user/syscall.c
> > @@ -12876,7 +12876,7 @@ static abi_long do_syscall1(CPUArchState
> > *cpu_env, int num, abi_long arg1,
> >       case TARGET_NR_prlimit64:
> >       {
> >           /* args: pid, resource number, ptr to new rlimit, ptr to
> > old rlimit */
> > -        struct target_rlimit64 *target_rnew, *target_rold;
> > +        struct target_rlimit64 *target_rnew, *target_rold, tmp;
> 
> The bug is that target_rlimit64 uses uint64_t (64-bit host
> alignment), when it should be 
> using abi_ullong (64-bit target alignment).  There are quite a number
> of these sorts of 
> bugs in linux-user.
> 
> 
> r~

Thanks, this helps.

I thought that unaligned accesses were illegal no matter what, e.g., on
sparc64, but it turns out the compiler is actually smart enough to
handle them:

#include <stdint.h>
typedef uint64_t abi_ullong __attribute__((aligned(4)));
abi_ullong load(abi_ullong *x) { return *x; }

produces

load:
        save    %sp, -176, %sp
        lduw    [%i0], %g1
        lduw    [%i0+4], %i0
        sllx    %g1, 32, %g1
        return  %i7+8
         or     %o0, %g1, %o0

instead of just

load:
        save    %sp, -176, %sp
        return  %i7+8
         ldx    [%o0], %o0

I'll send a v2.

Best regards,
Ilya


