qemu-devel

Re: [Qemu-devel] [PATCH QEMU] Transparent Hugepage Support #3


From: Andrea Arcangeli
Subject: Re: [Qemu-devel] [PATCH QEMU] Transparent Hugepage Support #3
Date: Wed, 17 Mar 2010 17:23:51 +0100

On Wed, Mar 17, 2010 at 04:07:09PM +0000, Paul Brook wrote:
> > On Wed, Mar 17, 2010 at 03:52:15PM +0000, Paul Brook wrote:
> > > > > > A size that isn't a multiple is legitimate, I think; the below-4G
> > > > > > chunk isn't required to end 2M aligned, all that matters is that
> > > > > > the above-4G chunk then starts aligned. In short, one thing to add
> > > > > > in the future as a parameter to qemu_ram_alloc is the physical
> > > > > > address that the host virtual address corresponds to.
> > > > >
> > > > > In general you don't know this at allocation time.
> > > >
> > > > The caller knows it; the caller isn't outside of qemu, it's not
> > > > some library. We know this is enough for the caller that exists now.
> > >
> > > No we don't.  As discussed previously, there are machines where the
> > > physical location of RAM is configurable at runtime.  In fact it's common
> > > for the ram to be completely absent at reset.
> > 
> > This is why PREFERRED_RAM_ALIGN is only defined for __x86_64__. I'm
> > not talking about other archs that may never support transparent
> > hugepages in the kernel because of other architectural constraints
> > that may prevent mapping hugepages mixed with regular pages in the
> > same vma.
> 
> __x86_64__ only tells you about the host. I'm talking about the guest machine.

When it's qemu and not kvm (so when the guest might not be an x86 arch),
the guest physical address becomes as irrelevant as the size; only the
host virtual address has to start 2M aligned on an x86_64 host.

I think this already takes care of all practical issues, and there's
no need for further work until pc.c starts allocating chunks of RAM
at guest physical addresses that are not 2M aligned, perhaps if we add
memory hotplug or something similar.
