[Qemu-devel] Re: [PATCH] PPC: Get MMU state on register sync


From: Alexander Graf
Subject: [Qemu-devel] Re: [PATCH] PPC: Get MMU state on register sync
Date: Tue, 24 Nov 2009 19:14:52 +0100

On 24.11.2009, at 19:12, Jan Kiszka wrote:

> Alexander Graf wrote:
>> On 24.11.2009, at 19:01, Jan Kiszka wrote:
>> 
>>> Alexander Graf wrote:
>>>> While x86 only needs to sync cr0-4 to know all about its MMU state and
>>>> enable qemu to resolve virtual to physical addresses, we need to sync
>>>> all of the segment registers on PPC to know which mapping we're in.
>>>> 
>>>> So let's grab the segment register contents to be able to use the "x"
>>>> monitor command and also enable the gdbstub to resolve virtual addresses.
>>>> 
>>>> I sent the corresponding KVM patch to the KVM ML some minutes ago.
>>>> 
>>>> Signed-off-by: Alexander Graf <address@hidden>
>>>> ---
>>>> target-ppc/kvm.c |   30 ++++++++++++++++++++++++++++++
>>>> 1 files changed, 30 insertions(+), 0 deletions(-)
>>>> 
>>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>>> index 4e1c65f..566513f 100644
>>>> --- a/target-ppc/kvm.c
>>>> +++ b/target-ppc/kvm.c
>>>> @@ -98,12 +98,17 @@ int kvm_arch_put_registers(CPUState *env)
>>>> int kvm_arch_get_registers(CPUState *env)
>>>> {
>>>>    struct kvm_regs regs;
>>>> +    struct kvm_sregs sregs;
>>>>    uint32_t i, ret;
>>>> 
>>>>    ret = kvm_vcpu_ioctl(env, KVM_GET_REGS, &regs);
>>>>    if (ret < 0)
>>>>        return ret;
>>>> 
>>>> +    ret = kvm_vcpu_ioctl(env, KVM_GET_SREGS, &sregs);
>>>> +    if (ret < 0)
>>>> +        return ret;
>>>> +
>>>>    env->ctr = regs.ctr;
>>>>    env->lr = regs.lr;
>>>>    env->xer = regs.xer;
>>>> @@ -125,6 +130,31 @@ int kvm_arch_get_registers(CPUState *env)
>>>>    for (i = 0;i < 32; i++)
>>>>        env->gpr[i] = regs.gpr[i];
>>>> 
>>>> +#ifdef KVM_CAP_PPC_SEGSTATE
>>>> +    if (kvm_check_extension(env->kvm_state, KVM_CAP_PPC_SEGSTATE)) {
>>>> +        env->sdr1 = sregs.sdr1;
>>>> +    
>>>> +        /* Sync SLB */
>>>> +        for (i = 0; i < 64; i++) {
>>>> +            ppc_store_slb(env, sregs.ppc64.slb[i].slbe,
>>>> +                               sregs.ppc64.slb[i].slbv);
>>>> +        }
>>>> +    
>>>> +        /* Sync SRs */
>>>> +        for (i = 0; i < 16; i++) {
>>>> +            env->sr[i] = sregs.ppc32.sr[i];
>>>> +        }
>>>> +    
>>>> +        /* Sync BATs */
>>>> +        for (i = 0; i < 8; i++) {
>>>> +            env->DBAT[0][i] = sregs.ppc32.dbat[i] & 0xffffffff;
>>>> +            env->DBAT[1][i] = sregs.ppc32.dbat[i] >> 32;
>>>> +            env->IBAT[0][i] = sregs.ppc32.ibat[i] & 0xffffffff;
>>>> +            env->IBAT[1][i] = sregs.ppc32.ibat[i] >> 32;
>>>> +        }
>>>> +    }
>>>> +#endif
>>>> +
>>>>    return 0;
>>>> }
>>>> 
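
(Purely as an illustration of what this enables, and not part of the patch:
with the segment state synced, a monitor command along the lines of

    (qemu) x/4xw 0xc0000000

can translate the guest-virtual address again; the address above is made up.
The gdbstub benefits the same way, since its memory reads go through the same
debug translation path.)
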
>>> What about KVM_SET_SREGS in kvm_arch_put_registers? E.g. to play back
>>> potential changes to those special registers that someone made via gdb?
>> 
>> I don't think you can actually change the segment values. At least I can't
>> imagine why you would want to.
> 
> Dunno about PPC in this regard and how much value it has, but we have
> segment register access via gdb for x86.

The segments here are more like the PML4 on x86.

>> I will definitely implement SET_SREGS as soon as your sync split is in, as
>> that's IMHO only really required for migration.
>> 
> 
> Migration is, of course, the major use case.
> 
> Still, I wonder why not make this API symmetric while we're already touching it.

I was afraid of introducing performance regressions: setting the segments
means flushing the complete shadow MMU.
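
For the record, a rough sketch of what the deferred write-back could look
like, mirroring the get path from the patch above (same KVM_CAP_PPC_SEGSTATE
layout, SLB write-back omitted, and assuming kvm_arch_put_registers() already
has i and ret declared like the get path does). Illustrative only, not a
tested patch:

#ifdef KVM_CAP_PPC_SEGSTATE
    if (kvm_check_extension(env->kvm_state, KVM_CAP_PPC_SEGSTATE)) {
        struct kvm_sregs sregs;

        memset(&sregs, 0, sizeof(sregs));
        sregs.sdr1 = env->sdr1;

        /* Segment registers */
        for (i = 0; i < 16; i++) {
            sregs.ppc32.sr[i] = env->sr[i];
        }

        /* BATs: recombine the two 32-bit halves split up on the get side */
        for (i = 0; i < 8; i++) {
            sregs.ppc32.dbat[i] = ((uint64_t)env->DBAT[1][i] << 32) |
                                  (env->DBAT[0][i] & 0xffffffff);
            sregs.ppc32.ibat[i] = ((uint64_t)env->IBAT[1][i] << 32) |
                                  (env->IBAT[0][i] & 0xffffffff);
        }

        ret = kvm_vcpu_ioctl(env, KVM_SET_SREGS, &sregs);
        if (ret < 0)
            return ret;
    }
#endif

Whether doing that on every register sync is acceptable is exactly the shadow
MMU flush concern above, which is why I'd rather wait for the sync split.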


Alex


