
From: Dave Martin
Subject: Re: [Qemu-arm] [PATCH v2 10/14] target/arm/kvm64: Add kvm_arch_get/put_sve
Date: Thu, 27 Jun 2019 16:02:24 +0100
User-agent: Mutt/1.5.23 (2014-03-12)

On Thu, Jun 27, 2019 at 12:26:06PM +0100, Richard Henderson wrote:
> On 6/27/19 12:59 PM, Dave Martin wrote:
> >> It's a shame that these slices exist at all.  It seems like the kernel
> >> could use the negotiated max sve size to grab the data all at once.
> > 
> > The aim here was to be forwards compatible while fitting within the
> > existing ABI.
> > 
> > The ABI doesn't allow variable-sized registers, and if the vq can
> > someday grow above 16 then the individual registers could become pretty
> > big.
> The ABI doesn't appear to have fixed sized data blocks.  Since that's
> the case, it didn't seem to me that variable sized blocks was so
> different, given that the actual size is constrained by the hardware
> on which we're running.

I'm not sure what you mean here.

For KVM_GET_ONE_REG, the size is determined by the reg size field in
the register ID, so size is deemed to be a fixed property of a
particular register.

Having the register IDs vary according to the vector length seemed a
step too far.

> And if VQ does grow large, then do we really want oodles of syscalls in order
> to transfer the data for each register?  With the 9 bits reserved for this
> field, we could require a maximum of 1024 syscalls to transfer the whole
> register set.

A save/restore requires oodles of syscalls in any case, and for SVE
there is a rapid dropoff of probabilities: VQ <= 16 is much likelier
than VQ == 32, which in turn is likelier than VQ == 64, and so on.
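Concretely, each Z-register slice carries 2048 bits (16 quadwords), so the number of GET_ONE_REG calls per Z-register grows only as ceil(VQ / 16) and stays at one for every VQ shipping today. A back-of-envelope sketch (the helper name is made up for illustration; the slice geometry follows the SVE KVM ABI):

```c
/* Slices needed to transfer one SVE Z-register of VQ quadwords,
 * given that each slice carries 2048 bits == 16 quadwords.
 * Illustrative helper, not a kernel or QEMU function. */
static inline unsigned zreg_slices(unsigned vq)
{
    return (vq + 15) / 16;   /* ceil(vq / 16) */
}
```

So all currently architected vector lengths (VQ <= 16) need exactly one slice per register, and even the architectural maximum would need only a handful.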

The reg access API has some shortcomings, and we might find at some
point that the whole thing needs redesigning.

I suppose we could have taken the view that the KVM ABI would not even
try to support VQ > 16 in a forwards compatible way.  In the end we
decided to at least have things workable.

Either way, it's entirely reasonable for userspace not to try to support
additional slices for now.  We'll have plenty of time to plan a way
across that bridge when we spot it on the horizon...

> > It's for QEMU to choose, but does it actually matter what byteorder you
> > store a Z-reg or P-reg in?  Maybe the byteswap isn't really needed.
> I think the only sensible order for the kernel is that in which LDR/STR itself
> saves the data.  Which is a byte stream.

We have a choice of STRs though.  Anyway, yes, it is the way it is, now.

> Within QEMU, it has so far made sense to keep the data in 64-bit hunks in
> host-endian order.  That's how the AdvSIMD code was written originally, and it
> turned out to be easy enough to continue that for SVE.

Fair enough.  It's entirely up to QEMU to decide -- I just wanted to
check that there was no misunderstanding about this issue in the ABI.

> Which does mean that if we want to pass data to the kernel as the
> aforementioned byte stream that a big-endian host does need to bswap the data
> in 64-bit hunks.
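For the big-endian case, the fixup amounts to byte-swapping each 64-bit hunk between QEMU's host-endian layout and the kernel's LDR/STR byte stream. A minimal sketch under that assumption (the helper name is illustrative, not QEMU's actual function):

```c
#include <stddef.h>
#include <stdint.h>

/* Swap each 64-bit hunk of a register image between host-endian order
 * and the byte-stream order the kernel expects.  On a little-endian
 * host the two layouts coincide and no swap is needed.
 * Illustrative only -- not QEMU's actual helper. */
static void bswap64_hunks(uint64_t *dst, const uint64_t *src, size_t nr)
{
    for (size_t i = 0; i < nr; i++) {
        uint64_t v = src[i];
        dst[i] = ((v & 0x00000000000000ffULL) << 56)
               | ((v & 0x000000000000ff00ULL) << 40)
               | ((v & 0x0000000000ff0000ULL) << 24)
               | ((v & 0x00000000ff000000ULL) <<  8)
               | ((v & 0x000000ff00000000ULL) >>  8)
               | ((v & 0x0000ff0000000000ULL) >> 24)
               | ((v & 0x00ff000000000000ULL) >> 40)
               | ((v & 0xff00000000000000ULL) >> 56);
    }
}
```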
> > I don't know how this works when migrating from a little-endian to a
> > big-endian host or vice-versa (or if that is even supported...)
> The data is stored canonically within the vmsave, so such migrations are
> supposed to work.

Right, I was wondering about that.  Could be fun to test :)

