From: David Gibson
Subject: Re: [Qemu-devel] [PATCH] ppc: Disable huge page support if it is not available for main RAM
Date: Thu, 23 Jun 2016 12:58:55 +1000
User-agent: Mutt/1.6.1 (2016-04-27)

On Wed, Jun 22, 2016 at 10:50:05AM +0200, Thomas Huth wrote:
> On powerpc, we must only signal huge page support to the guest if
> all memory areas are capable of supporting huge pages. The commit
> 2d103aae8765 ("fix hugepage support when using memory-backend-file")
> already fixed the case when the user specified the mem-path property
> for NUMA memory nodes instead of using the global "-mem-path" option.
> However, there is one more case where it currently can go wrong.
> When specifying additional memory DIMMs without using NUMA, e.g.
> 
>  qemu-system-ppc64 -enable-kvm ... -m 1G,slots=2,maxmem=2G \
>     -device pc-dimm,id=dimm-mem1,memdev=mem1 -object \
>     memory-backend-file,policy=default,mem-path=/...,size=1G,id=mem1
> 
> the code in getrampagesize() currently assumes that huge pages
> are possible since they are enabled for the mem1 object. But
> since the main RAM is not backed by a huge page filesystem,
> the guest Linux kernel then crashes very quickly after being
> started. So in case we've got "normal" memory without NUMA
> and without the global "-mem-path" option, we must not announce
> huge pages to the guest. Since this is likely a mis-configuration
> by the user, also print a message in this case.
> 
> Signed-off-by: Thomas Huth <address@hidden>
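
[The fix boils down to a page-size probe that only reports huge pages when
every RAM region can actually use them. Below is a minimal, self-contained
sketch of that decision logic in plain C; the globals and the
effective_ram_pagesize() helper are hypothetical stand-ins for QEMU's
mem_path / NUMA state and the per-backend scan in getrampagesize(), not the
actual patch.]

    /* Standalone sketch of the decision described above (not QEMU code). */
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    static const char *mem_path;          /* global -mem-path option           */
    static long mem_path_pagesize;        /* page size of that hugetlbfs mount */
    static int nb_numa_nodes;             /* number of -numa nodes configured  */
    static long backend_min_hugepage = LONG_MAX; /* smallest page size among
                                                    memory backends; LONG_MAX
                                                    if none are hugepage-backed */

    static long effective_ram_pagesize(void)
    {
        if (mem_path) {
            /* Main RAM itself is on hugetlbfs; use that mount's page size. */
            return mem_path_pagesize;
        }
        if (backend_min_hugepage == LONG_MAX) {
            /* No hugepage-backed memory backends at all. */
            return getpagesize();
        }
        if (nb_numa_nodes == 0) {
            /* The case from the mail: hugepage-backed DIMMs, but main RAM
             * is ordinary anonymous memory. Announcing huge pages would
             * crash the guest, so fall back to normal pages and warn. */
            fprintf(stderr,
                    "Huge page support disabled (n/a for main memory)\n");
            return getpagesize();
        }
        return backend_min_hugepage;
    }

    int main(void)
    {
        /* Mimic the configuration from the mail: no NUMA, no -mem-path,
         * one hugepage-backed pc-dimm (16M pages on ppc64). */
        backend_min_hugepage = 16 * 1024 * 1024;
        printf("page size reported to the guest: %ld\n",
               effective_ram_pagesize());
        return 0;
    }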

Applied to ppc-for-2.7.
-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson

Attachment: signature.asc
Description: PGP signature

