From: Greg Kurz
Subject: Re: [Qemu-devel] [PATCH] mmap-alloc: use same backend for all mappings
Date: Mon, 30 Nov 2015 14:46:31 +0100

On Mon, 30 Nov 2015 15:06:33 +0200
"Michael S. Tsirkin" <address@hidden> wrote:

> On Mon, Nov 30, 2015 at 11:51:57AM +0100, Greg Kurz wrote:
> > Since commit 8561c9244ddf1122d "exec: allocate PROT_NONE pages on top
> > of RAM", it is no longer possible to back guest RAM with hugepages on
> > ppc64 hosts:
> > 
> > mmap(NULL, 285212672, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x3fff57000000
> > mmap(0x3fff57000000, 268435456, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 19, 0) = -1 EBUSY (Device or resource busy)
> > 
> > This is due to a limitation on ppc64 that requires MAP_FIXED mappings
> > to have the same page size as other mappings already present in the
> > same "slice" of virtual address space (Cc'ing Ben for details).
> 
> I'd like some details please.
> What do you mean when you say "same page size" and "slice"?
> 

On ppc64, the address space is divided into 256MB-sized segments where all
pages have the same size. This is a hw limitation IIUC. I don't know if it
can be fixed and I'll let Ben comment on it.

Hugepage support is implemented using an abstraction of segments called
"slices". Here's a quote from the related commit changelog in the kernel
tree:

commit d0f13e3c20b6fb73ccb467bdca97fa7cf5a574cd
Author: Benjamin Herrenschmidt <address@hidden>
Date:   Tue May 8 16:27:27 2007 +1000

    [POWERPC] Introduce address space "slices"

...

    The main issues are:
    
     - To maintain/keep track of the page size per "segment" (as we can
    only have one page size per segment on powerpc, which are 256MB
    divisions of the address space).
    
     - To make sure special mappings stay within their allotted
    "segments" (including MAP_FIXED crap)
    
     - To make sure everybody else doesn't mmap/brk/grow_stack into a
    "segment" that is used for a special mapping
...
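
To make this concrete, here is a minimal reproducer sketch of the failing
sequence (an illustration, not QEMU code: it assumes a ppc64 host with 16M
huge pages and a hugetlbfs mount at /dev/hugepages, and the file name
"repro" is made up). It reserves address space with anonymous PROT_NONE
pages and then attempts the MAP_FIXED hugetlbfs mapping inside that
reservation, which should fail with EBUSY as in the strace output above:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <sys/mman.h>

#define HPAGE_SIZE (16UL << 20)       /* 16M huge pages on ppc64 */
#define RAM_SIZE   (16 * HPAGE_SIZE)  /* 256M of "guest RAM" */

int main(void)
{
    int fd = open("/dev/hugepages/repro", O_RDWR | O_CREAT, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ftruncate(fd, RAM_SIZE) < 0) {
        perror("ftruncate");
        return 1;
    }

    /* Step 1: reserve size + alignment with anonymous PROT_NONE pages,
     * as qemu_ram_mmap() did before the fix.  On ppc64 this populates
     * the affected slices with the 64k native page size. */
    void *guard = mmap(NULL, RAM_SIZE + HPAGE_SIZE, PROT_NONE,
                       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    if (guard == MAP_FAILED) {
        perror("mmap (reservation)");
        return 1;
    }

    /* Step 2: map the hugetlbfs file at a hugepage-aligned address
     * inside the reservation.  This requests a 16M page size in slices
     * already set up for 64k pages, so on ppc64 it is expected to fail
     * with EBUSY. */
    uintptr_t aligned = ((uintptr_t)guard + HPAGE_SIZE - 1)
                        & ~(HPAGE_SIZE - 1);
    void *ram = mmap((void *)aligned, RAM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_FIXED, fd, 0);
    if (ram == MAP_FAILED) {
        printf("MAP_FIXED mmap failed: %s\n", strerror(errno));
    } else {
        printf("MAP_FIXED mmap succeeded at %p\n", ram);
    }
    return 0;
}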

> > This is exactly what happens with the two mmap() calls above: the
> > first one uses the native host page size (64k) and the second one
> > uses the huge page size (16M).
> > 
> > To be sure we always have the same page size, let's use the same backend for
> > both calls to mmap(): this is enough to fix the ppc64 issue.
> > 
> > This has no effect on RAM-based mappings (the fd == -1 case keeps
> > using MAP_ANONYMOUS).
> > 
> > Signed-off-by: Greg Kurz <address@hidden>
> > ---
> > 
> > This is a bug fix for 2.5
> > 
> >  util/mmap-alloc.c |    3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
> > index c37acbe58ede..0ff221dd94f4 100644
> > --- a/util/mmap-alloc.c
> > +++ b/util/mmap-alloc.c
> > @@ -21,7 +21,8 @@ void *qemu_ram_mmap(int fd, size_t size, size_t align, bool shared)
> >       * space, even if size is already aligned.
> >       */
> >      size_t total = size + align;
> > -    void *ptr = mmap(0, total, PROT_NONE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> > +    void *ptr = mmap(0, total, PROT_NONE,
> > +                     (fd == -1 ? MAP_ANONYMOUS : 0) | MAP_PRIVATE, fd, 0);
> >      size_t offset = QEMU_ALIGN_UP((uintptr_t)ptr, align) - (uintptr_t)ptr;
> >      void *ptr1;
> >  
> 
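
For reference, here is a condensed, self-contained sketch of the
reserve-then-map sequence in qemu_ram_mmap() once the patch is applied
(a simplification, not the exact QEMU code: "ram_mmap_sketch" is a made-up
name, error handling is reduced, the trailing PROT_NONE guard page and the
unmapping of the reservation's unused head and tail are omitted, and align
is assumed to be non-zero):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

void *ram_mmap_sketch(int fd, size_t size, size_t align, bool shared)
{
    /* Over-allocate so the final mapping can be aligned inside the
     * reservation. */
    size_t total = size + align;

    /* Reserve address space with the same backend (fd) that will
     * provide the RAM.  This is the fix: the reservation used to be
     * unconditionally MAP_ANONYMOUS, so on ppc64 it pinned the slices
     * to the 64k native page size before the hugepage mapping below. */
    void *ptr = mmap(0, total, PROT_NONE,
                     (fd == -1 ? MAP_ANONYMOUS : 0) | MAP_PRIVATE, fd, 0);
    if (ptr == MAP_FAILED) {
        return NULL;
    }

    /* Round up to the requested alignment inside the reservation
     * (QEMU_ALIGN_UP in the real code). */
    size_t offset = (align - (uintptr_t)ptr % align) % align;

    /* Map the actual RAM over the reserved, aligned range. */
    void *ptr1 = mmap((char *)ptr + offset, size, PROT_READ | PROT_WRITE,
                      MAP_FIXED |
                      (fd == -1 ? MAP_ANONYMOUS : 0) |
                      (shared ? MAP_SHARED : MAP_PRIVATE),
                      fd, 0);
    if (ptr1 == MAP_FAILED) {
        munmap(ptr, total);
        return NULL;
    }
    return ptr1;
}

Since both calls now use the same fd, the ppc64 slices see a single page
size and the MAP_FIXED call no longer returns EBUSY; with fd == -1 both
calls stay anonymous, so plain RAM allocations behave exactly as before.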