Re: [Qemu-devel] [PATCH v3 5/5] aarch64-linux-user: Add support for SVE signal frame records


From: Richard Henderson
Subject: Re: [Qemu-devel] [PATCH v3 5/5] aarch64-linux-user: Add support for SVE signal frame records
Date: Thu, 22 Feb 2018 12:14:08 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0

On 02/22/2018 08:41 AM, Peter Maydell wrote:
> On 16 February 2018 at 21:56, Richard Henderson
> <address@hidden> wrote:
>> Depending on the currently selected size of the SVE vector registers,
>> we can either store the data within the "standard" allocation, or we
>> may need to allocate additional space with an EXTRA record.
>>
>> Signed-off-by: Richard Henderson <address@hidden>
>> ---
>>  linux-user/signal.c | 141 ++++++++++++++++++++++++++++++++++++++++++++++++++--
>>  1 file changed, 138 insertions(+), 3 deletions(-)
>>
>> diff --git a/linux-user/signal.c b/linux-user/signal.c
>> index ca0ba28c98..4c9fef4bb2 100644
>> --- a/linux-user/signal.c
>> +++ b/linux-user/signal.c
>> @@ -1452,6 +1452,30 @@ struct target_extra_context {
>>      uint32_t reserved[3];
>>  };
>>
>> +#define TARGET_SVE_MAGIC    0x53564501
>> +
>> +struct target_sve_context {
>> +    struct target_aarch64_ctx head;
>> +    uint16_t vl;
>> +    uint16_t reserved[3];
> 
> Worth commenting that actual SVE register data will directly follow the 
> struct.

Sure.
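
Something like this, say (a sketch; the authoritative layout is the
kernel's SVE_SIG_* macros in sigcontext.h):

    /* The actual SVE register data immediately follows the header:
     * the 32 Z registers at vq * 16 bytes each, then the 16 P
     * registers and the FFR at vq * 2 bytes each, as laid out by
     * the kernel's SVE_SIG_* macros.
     */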

>> +static void target_restore_sve_record(CPUARMState *env,
>> +                                      struct target_sve_context *sve, int vq)
>> +{
>> +    int i, j;
>> +
>> +    /* Note that SVE regs are stored as a byte stream, with each byte element
>> +     * at a subsequent address.  This corresponds to a little-endian store
>> +     * of our 64-bit hunks.
> 
> We're doing loads in this function, not stores.

Oops.
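
For the record, the fixed comment plus the Z-register loop would look
something like this (a sketch, assuming the TARGET_SVE_SIG_ZREG_OFFSET
helper mirrors the kernel's SVE_SIG_ZREG_OFFSET):

    /* Note that SVE regs are stored as a byte stream, with each byte
     * element at a subsequent address.  This corresponds to a
     * little-endian load of our 64-bit hunks.
     */
    for (i = 0; i < 32; ++i) {
        uint64_t *z = (void *)sve + TARGET_SVE_SIG_ZREG_OFFSET(vq, i);
        for (j = 0; j < vq * 2; ++j) {
            __get_user_e(env->vfp.zregs[i].d[j], z + j, le);
        }
    }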

>> +    /* SVE state needs saving only if it exists.  */
>> +    if (arm_feature(env, ARM_FEATURE_SVE)) {
>> +        vq = (env->vfp.zcr_el[1] & 0xf) + 1;
>> +        sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
>> +
>> +        /* For VQ <= 6, there is room in the standard space.  */
> 
> The kernel header arch/arm64/include/uapi/asm/sigcontext.h
> claims the record is in the standard space if "vl <= 64", which
> doesn't seem to match up with "VQ <= 6" ?

*shrug* I suppose the kernel guys only considered sizes that are powers
of two, even though other sizes are clearly allowed by the
implementation?  (vl is in bytes, so "vl <= 64" means VQ <= 4; the next
power of two, vl = 128, is VQ = 8, which does not fit.)

The 4096 reserved bytes, minus sizeof(fpsimd) and sizeof(end), leave
3560 bytes available.  Values for TARGET_SVE_SIG_CONTEXT_SIZE(vq) are:

  vq   size
   1    562
   2   1108
   3   1654
   4   2200
   5   2746
   6   3292
   7   3838
   8   4384

So there's definitely room for VQ=6, with 268 bytes left over.
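
For reference, those numbers decompose per the kernel's SVE_SIG_*
layout (a sketch; sve_sig_context_size is a hypothetical helper, not
code from the patch):

#include <stddef.h>

static size_t sve_sig_context_size(unsigned vq)
{
    return 16               /* struct sve_context header */
         + 32 * vq * 16     /* Z0-Z31, vq * 16 bytes each */
         + 16 * vq * 2      /* P0-P15, vq * 2 bytes each */
         + vq * 2;          /* FFR, vq * 2 bytes */
}

That's 16 + 546 * vq: vq = 6 gives 3292 <= 3560, while vq = 7 already
gives 3838.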

> 
>> +        if (sve_size <= std_size) {
>> +            sve_ofs = size;
>> +            size += sve_size;
>> +            end1_ofs = size;
>> +        } else {
>> +            /* Otherwise we need to allocate extra space.  */
>> +            extra_ofs = size;
>> +            size += sizeof(struct target_extra_context);
>> +            end1_ofs = size;
>> +            size += QEMU_ALIGN_UP(sizeof(struct target_aarch64_ctx), 16);
> 
> Why do we add the size of target_aarch64_ctx to size here?
> We already account for the size of the end record later, so
> what is this one?

This is for the end record within the extra space, as opposed to the end
record within the standard space, which is what we accounted for before.
A comment would help, I suppose.
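
Roughly (a sketch of the intended frame layout, not text from the
patch):

    /*
     *  standard space:  fpsimd record
     *                   extra record   (datap -> extra space)
     *                   end record     terminates the standard space
     *
     *  extra space:     sve record
     *                   end record     terminates the extra space
     */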

> 
>> +            extra_base = size;
>> +            extra_size = sve_size + sizeof(struct target_aarch64_ctx);
> 
> If we ever get two different kinds of optional record that need to
> live in the extra space, this is going to need refactoring,
> because at the moment it assumes that the SVE record is the first
> and only thing that might live there. I guess that's OK for now, though.

I'm not quite sure how to generalize this; let's just let the next thing make
that change, once we know what's needed?


r~


