From: Song Gao
Subject: Re: [RFC PATCH v3 01/44] target/loongarch: Add LSX data type VReg
Date: Mon, 24 Apr 2023 19:14:10 +0800
User-agent: Mozilla/5.0 (X11; Linux loongarch64; rv:68.0) Gecko/20100101 Thunderbird/68.7.0
On 2023/4/24 3:41 AM, Richard Henderson wrote:
On 4/20/23 09:06, Song Gao wrote:

diff --git a/target/loongarch/machine.c b/target/loongarch/machine.c
index b1e523ea72..a67b735a32 100644
--- a/target/loongarch/machine.c
+++ b/target/loongarch/machine.c
@@ -10,6 +10,112 @@
 #include "migration/cpu.h"
 #include "internals.h"

+/* FPU state */
+static int get_fpr(QEMUFile *f, void *pv, size_t size,
+                   const VMStateField *field)
+{
+    fpr_t *v = pv;
+
+    qemu_get_sbe64s(f, &v->vreg.D(0));
+    return 0;
+}
+
+static int put_fpr(QEMUFile *f, void *pv, size_t size,
+                   const VMStateField *field, JSONWriter *vmdesc)
+{
+    fpr_t *v = pv;
+
+    qemu_put_sbe64s(f, &v->vreg.D(0));
+    return 0;
+}
+
+static const VMStateInfo vmstate_info_fpr = {
+    .name = "fpr",
+    .get = get_fpr,
+    .put = put_fpr,
+};

These functions are old style. Compare target/i386/machine.c, vmstate_xmm_reg.

I notice you're migrating the same data twice, between fpu and lsx. Compare target/i386/machine.c, vmstate_ymmh_reg, for migrating only the upper half with lsx.
Got it.
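A minimal sketch of what the new-style form could look like here, following vmstate_xmm_reg and vmstate_ymmh_reg from target/i386/machine.c; the names vmstate_fpu_reg, vmstate_lsxh_reg, the VMSTATE_*_REGS macros, and the UD() element accessor are assumptions patterned on the i386 code, not necessarily what the next revision will use:

/* Low 64 bits of each vector register; migrated by the fpu subsection. */
static const VMStateDescription vmstate_fpu_reg = {
    .name = "fpu_reg",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        VMSTATE_UINT64(UD(0), VReg),
        VMSTATE_END_OF_LIST()
    }
};

#define VMSTATE_FPU_REGS(_field, _state, _start)            \
    VMSTATE_STRUCT_SUB_ARRAY(_field, _state, _start, 32, 0, \
                             vmstate_fpu_reg, fpr_t)

/* Upper 64 bits only, so the lsx subsection never repeats the fpu data. */
static const VMStateDescription vmstate_lsxh_reg = {
    .name = "lsxh_reg",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        VMSTATE_UINT64(UD(1), VReg),
        VMSTATE_END_OF_LIST()
    }
};

#define VMSTATE_LSXH_REGS(_field, _state, _start)           \
    VMSTATE_STRUCT_SUB_ARRAY(_field, _state, _start, 32, 0, \
                             vmstate_lsxh_reg, fpr_t)

This would retire the hand-written get_fpr()/put_fpr() pair and vmstate_info_fpr, and between the two subsections each 128-bit register is still migrated exactly once.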
I assume lsx without fpu is not a valid cpu configuration?
Yes.
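Given that answer, each subsection's .needed hook can test a single feature bit; a minimal sketch, assuming the CPUCFG2 FP and LSX fields defined in target/loongarch/cpu.h are the right gates (the fpu_needed/lsx_needed names are illustrative):

static bool fpu_needed(void *opaque)
{
    LoongArchCPU *cpu = opaque;

    /* Gate the fpu subsection on the FP feature bit. */
    return FIELD_EX64(cpu->env.cpucfg[2], CPUCFG2, FP);
}

static bool lsx_needed(void *opaque)
{
    LoongArchCPU *cpu = opaque;

    /* lsx implies fpu, so testing LSX alone gates the upper-half data. */
    return FIELD_EX64(cpu->env.cpucfg[2], CPUCFG2, LSX);
}

These would be wired up as the .needed members of vmstate_fpu and vmstate_lsx.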
 const VMStateDescription vmstate_loongarch_cpu = {
     .name = "cpu",
     .version_id = 0,
     .minimum_version_id = 0,
     .fields = (VMStateField[]) {
         VMSTATE_UINTTL_ARRAY(env.gpr, LoongArchCPU, 32),
         VMSTATE_UINTTL(env.pc, LoongArchCPU),
-        VMSTATE_UINT64_ARRAY(env.fpr, LoongArchCPU, 32),
-        VMSTATE_UINT32(env.fcsr0, LoongArchCPU),
-        VMSTATE_BOOL_ARRAY(env.cf, LoongArchCPU, 8),

         /* Remaining CSRs */
         VMSTATE_UINT64(env.CSR_CRMD, LoongArchCPU),
@@ -99,4 +200,8 @@ const VMStateDescription vmstate_loongarch_cpu = {

         VMSTATE_END_OF_LIST()
     },
+    .subsections = (const VMStateDescription*[]) {
+        &vmstate_fpu,
+        &vmstate_lsx,
+    }

Need to increment version_id and minimum_version_id.
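A minimal sketch of the bumped top-level description; version 1 is an assumption. Note that QEMU walks the subsections array until it hits NULL, so the list needs a NULL terminator (the quoted hunk may simply have been trimmed):

const VMStateDescription vmstate_loongarch_cpu = {
    .name = "cpu",
    .version_id = 1,          /* was 0; bumped for the new register layout */
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        /* ... unchanged fields ... */
        VMSTATE_END_OF_LIST()
    },
    .subsections = (const VMStateDescription *[]) {
        &vmstate_fpu,
        &vmstate_lsx,
        NULL    /* terminator required */
    }
};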
OK.

Thanks.
Song Gao