Re: [Qemu-devel] [PATCH] PPC: Fix linker scripts on ppc hosts

From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH] PPC: Fix linker scripts on ppc hosts
Date: Wed, 14 Dec 2011 08:53:42 +0000

On 14 December 2011 00:30, Paul Brook <address@hidden> wrote:
>> IIRC mmap'ing files would break with 32-on-64, but I'd have to check up on
>> the details. I ended up passing MAP_32BIT to all linux-user mmap calls for
>> 32-on-x86_64, but that doesn't work with -R.
> Hmm, I thought we'd fixed that.  It's the reason h2g_valid exists.

A lot of the problem is that linux-user/mmap.c isn't very clever. What
happens, IIRC, is something like this:
 * we pick a guest base, and happily start to hand out memory from there
 * at some point, we hit a host shared library or whatever, so the
   kernel can't use our hinted preferred address, and picks one itself.
   On 64 bit kernels it seems to usually like to skip way ahead into
   the >4GB bit of the virtual address space, even if there's still
   plenty of space below 4GB
 * mmap_find_vma() wrongly assumes this means there's no more memory
   to be had below 4GB, and starts again with a hint address at the
   bottom of memory
 * that address is typically already used (by host lib or by a previous
   guest mmap). The kernel hands us back the same useless >4GB address.
 * mmap_find_vma() says "ooh, same as last time" and decides this means
   we're out of memory.

The effect is that on a 32-on-64 config we fail mmap() unnecessarily,
in many cases that work fine on 32-on-32.

The cheesy solution is to use MAP_32BIT, which I agree is a nasty hack.
The proper solution would be to rewrite mmap.c to be smarter (perhaps
by looking at /proc/self/maps, reserving a lot of space with PROT_NONE
mappings at startup, and then managing that space itself), but so far
nobody's done that, and MAP_32BIT is a much smaller change that improves
matters in the 99% case (i.e. "host is x86-64").

-- PMM
