
Re: [Qemu-ppc] [RFC for-2.13 0/7] spapr: Clean up pagesize handling


From: David Gibson
Subject: Re: [Qemu-ppc] [RFC for-2.13 0/7] spapr: Clean up pagesize handling
Date: Fri, 20 Apr 2018 20:21:17 +1000
User-agent: Mutt/1.9.2 (2017-12-15)

On Fri, Apr 20, 2018 at 11:31:10AM +0200, Andrea Bolognani wrote:
> On Fri, 2018-04-20 at 12:35 +1000, David Gibson wrote:
> > On Thu, Apr 19, 2018 at 05:30:04PM +0200, Andrea Bolognani wrote:
> > > On Thu, 2018-04-19 at 16:29 +1000, David Gibson wrote:
> > > > This means that in order to use hugepages in a PAPR guest it's
> > > > necessary to add a "cap-hpt-mps=24" machine parameter as well as
> > > > setting the mem-path correctly.  This is a bit more work on the user
> > > > and/or management side, but results in consistent behaviour so I think
> > > > it's worth it.
> > > 
> > > libvirt guests already need to explicitly opt-in to hugepages, so
> > > adding this new option automagically based on that shouldn't be too
> > > difficult.
> > 
> > Right.  We have to be a bit careful with automagic though, because
> > treating hugepages as a boolean is one of the problems that this
> > parameter is there to address.
> > 
> > If libvirt were to set the parameter based on the pagesize of the
> > hugepage mount, then it might not be consistent across a migration
> > (e.g. p8 to p9).  Now the new code would at least catch that and
> > safely fail the migration, but that might be confusing to users.
> 
> Good point.
> 
> I'll have to look into it to be sure, but I think it should be
> possible for libvirt to convert a generic
> 
>   <memoryBacking>
>     <hugepages/>
>   </memoryBacking>
> 
> to a more specific
> 
>   <memoryBacking>
>     <hugepages>
>       <page size="16384" unit="KiB"/>
>     </hugepages>
>   </memoryBacking>
> 
> by figuring out the page size for the default hugepage mount,
> which actually sounds like a good idea regardless. Of course users
> would still be able to provide the page size themselves in the
> first place.

Sounds like a good approach.
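
For illustration only (just a sketch, not libvirt code): the default
hugepage size of the host can be read from the Hugepagesize field in
/proc/meminfo, which already gives the KiB value that would go into
<page size="..." unit="KiB"/>:

  # Rough sketch, not libvirt's implementation: report the host's
  # default hugepage size in KiB, as it would appear in the XML above.
  def default_hugepage_size_kib():
      with open("/proc/meminfo") as f:
          for line in f:
              if line.startswith("Hugepagesize:"):
                  # e.g. "Hugepagesize:      16384 kB" on a P8 host
                  return int(line.split()[1])
      return None  # no hugepage support configured

  print(default_hugepage_size_kib())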

> Is the 16 MiB page size available for both POWER8 and POWER9?

No.  That's a big part of what makes this such a mess.  HPT has 16MiB
and 16GiB hugepages, RPT has 2MiB and 1GiB hugepages.  (Well, I guess
technically Power9 does have 16MiB pages - but only in hash mode, which
the host won't be running in).

I've been looking into whether it's feasible to make a 16MiB hugepage
pool for POWER9 RPT.  The hardware can't actually use that as a
pagesize, but we could still allocate them physically contiguous, map
them using a bunch of 2MiB PTEs in RPT mode and allow them to be
mapped by guests in HPT mode.

I *think* it won't be too hard, but I haven't looked closely enough to
rule out horrible gotchas yet.

> > > A couple of questions:
> > > 
> > >   * I see the option accepts values 12, 16, 24 and 34, with 16
> > >     being the default.
> > 
> > In fact it should accept any value >= 12, though the ones that you
> > list are the interesting ones.
> 
> Well, I copied them from the QEMU help text, and I kinda assumed
> that you wouldn't just list completely random values there O:-)

Ah, right, of course.

> > This does mean, for example, that if
> > it was just set to the hugepage size on a p9, 21 (2MiB), things should
> > work correctly (in practice it would act identically to setting it to
> > 16).
> 
> Wouldn't that lead to different behavior depending on whether you
> start the guest on a POWER9 or POWER8 machine? The former would be
> able to use 2 MiB hugepages, while the latter would be stuck using
> regular 64 KiB pages.

Well, no, because 2MiB hugepages aren't a thing in HPT mode.  In RPT
mode it'd be able to use 2MiB hugepages either way, because the
limitations only apply to HPT mode.

> Migration of such a guest from POWER9 to
> POWER8 wouldn't work because the hugepage allocation couldn't be
> fulfilled,

Sort of, you couldn't even get as far as starting the incoming qemu
with hpt-mps=21 on the POWER8 (unless you gave it 16MiB hugepages for
backing).
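
(Hypothetical command line, using the option spelling from this series
rather than anything final: the incoming qemu on the POWER8 side would
need something along the lines of

  qemu-system-ppc64 -machine pseries,cap-hpt-mps=21 \
      -m 4G -mem-path /dev/hugepages \
      -incoming tcp:0:4444

with /dev/hugepages being a 16MiB hugetlbfs mount; with plain 64kiB
backing the same command line would fail the pagesize check and refuse
to start.)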

> but the other way around would probably work and lead to
> different page sizes being available inside the guest after a power
> cycle, no?

Well.. there are a few cases here.  If you migrated p8 -> p9 with
hpt-mps=21 on both ends, you couldn't actually start the guest on the
source without giving it hugepage backing.  In which case it'll be
fine on the p9 with hugepage mapping.

If you had hpt-mps=16 on the source and hpt-mps=21 on the other end,
well, you don't get to count on anything because you changed the VM
definition.  In fact it would work in this case, and you wouldn't even
get new page sizes after restart because HPT mode doesn't support any
pagesizes between 64kiB and 16MiB.

> > > I guess 34 corresponds to 1 GiB hugepages?
> > 
> > No, 16GiB hugepages, which is the "colossal page" size on HPT POWER
> > machines.  It's a simple shift, (1 << 34) == 16 GiB, 1GiB pages would
> > be 30 (though that wouldn't let the guest do any more than 24, i.e.
> > 16 MiB, does in practice).
> 
> Isn't 1 GiB hugepages support at least being worked on[1]?

That's for radix mode.  Hash mode has 16MiB and 16GiB, no 1GiB.
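
For reference, the shift encoding works out as follows (throwaway
Python, with the mode annotations taken from the discussion above):

  # cap value N means "guest page sizes up to (1 << N) bytes are allowed"
  for shift, desc in [
      (12, "4 KiB  - base pages on a 4k host kernel"),
      (16, "64 KiB - base pages on a 64k host kernel, the default cap"),
      (21, "2 MiB  - RPT hugepage, not a valid HPT pagesize"),
      (24, "16 MiB - HPT hugepage"),
      (30, "1 GiB  - RPT hugepage, not a valid HPT pagesize"),
      (34, "16 GiB - HPT 'colossal' page"),
  ]:
      print(f"{shift:2d} -> {1 << shift:>14,} bytes  ({desc})")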

> > >     Also, in what scenario would 12 be used?
> > 
> > So RHEL, at least, generally configures ppc64 kernels to use 64kiB
> > pages, but 4kiB pages are still supported upstream (not sure if there
> > are any distros that still use that mode).  If your host uses 4kiB
> > pages you wouldn't be able to start a (KVM HV) guest without setting
> > this to 12 (or using a 64kiB hugepage mount).
> 
> Mh, that's annoying, as needing to support 4 KiB pages would most
> likely mean we'd have to turn this into a stand-alone configuration
> knob rather than deriving it entirely from existing ones, which I'd
> prefer as it's clearly much more user-friendly.

Yeah, there's really no way around it though.  Well, other than always
restricting to 4kiB pages by default, which would suck for performance
with guests that want to use 64kiB pages.
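
(Again only a sketch, not something libvirt does today: the host's base
page size, and therefore the smallest cap value that can work without a
hugepage mount, is easy to query:)

  import resource

  # Base page size in bytes: 65536 on a 64k ppc64 kernel, 4096 on a 4k one.
  page_size = resource.getpagesize()

  # Smallest cap-hpt-mps usable with plain (non-hugepage) backing is the
  # shift for which (1 << shift) == page_size.
  min_cap = page_size.bit_length() - 1
  print(page_size, min_cap)  # e.g. "65536 16" or "4096 12"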

> I'll check out what other distros are doing: if all the major ones
> are defaulting to 64 KiB pages these days, it might be reasonable
> to do the same and pretend smaller page sizes don't exist at all in
> order to avoid the pain of having to tweak yet another knob, even
> if that means leaving people compiling their own custom kernels
> with 4 KiB page size in the dust.

That's my guess.

> > >   * The name of the property suggests this setting is only relevant
> > >     for HPT guests. libvirt doesn't really have the notion of HPT
> > >     and RPT, and I'm not really itching to introduce it. Can we
> > >     safely use this option for all guests, even RPT ones?
> > 
> > Yes.  The "hpt" in the main is meant to imply that its restriction
> > only applies when the guest is in HPT mode, but it can be safely set
> > in any mode.  In RPT mode guest and host pagesizes are independent of
> > each other, so we don't have to deal with this mess.
> 
> Good :)
> 
> 
> [1] https://patchwork.kernel.org/patch/9729991/

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson


