Re: [Qemu-ppc] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled
From: Alexander Graf
Subject: Re: [Qemu-ppc] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled
Date: Wed, 2 Oct 2013 15:59:36 +0200
On 01.10.2013, at 03:27, Aneesh Kumar K.V wrote:
> Alexander Graf <address@hidden> writes:
>
>> On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote:
>>> From: "Aneesh Kumar K.V" <address@hidden>
>>>
>>> With kvm enabled, we store the hash page table information in the
>>> hypervisor.
>>> Use ioctl to read the htab contents. Without this we get the below error
>>> when trying to read the guest address
>>>
>>> (gdb) x/10 do_fork
>>> 0xc000000000098660 <do_fork>:   Cannot access memory at address 0xc000000000098660
>>> (gdb)
>>>
>>> Signed-off-by: Aneesh Kumar K.V <address@hidden>
>>> ---
>>>  target-ppc/kvm.c        | 59 +++++++++++++++++++++++++++++++++++++++++++++++++
>>>  target-ppc/kvm_ppc.h    | 12 +++++++++-
>>>  target-ppc/mmu-hash64.c | 57 ++++++++++++++++++++++++++++-------------------
>>>  3 files changed, 104 insertions(+), 24 deletions(-)
>>>
>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>> index 1838465..05b066c 100644
>>> --- a/target-ppc/kvm.c
>>> +++ b/target-ppc/kvm.c
>>> @@ -1888,3 +1888,62 @@ int kvm_arch_on_sigbus(int code, void *addr)
>>> void kvm_arch_init_irq_routing(KVMState *s)
>>> {
>>> }
>>> +
>>> +hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
>>> +                                 bool secondary, target_ulong ptem,
>>> +                                 target_ulong *hpte0, target_ulong *hpte1)
>>> +{
>>> +    int htab_fd;
>>> +    uint64_t index;
>>> +    hwaddr pte_offset;
>>> +    target_ulong pte0, pte1;
>>> +    struct kvm_get_htab_fd ghf;
>>> +    struct kvm_get_htab_buf {
>>> +        struct kvm_get_htab_header header;
>>> +        /*
>>> +         * Older kernel required one extra byte.
>>
>> Older than what?
>>
>>> +         */
>
> Since we decided to drop that kernel patch, that should be updated as
> "kernel requires one extra byte".
>
>>> +        unsigned long hpte[(HPTES_PER_GROUP * 2) + 1];
>>> +    } hpte_buf;
>>> +
>>> +    index = (hash * HPTES_PER_GROUP) & cpu->env.htab_mask;
>>> +    *hpte0 = 0;
>>> +    *hpte1 = 0;
>>> +    if (!cap_htab_fd) {
>>> +        return 0;
>>> +    }
>>> +
>
> .....
>
>>>
>>> -static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr pteg_off,
>>> +static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash,
>>>                                       bool secondary, target_ulong ptem,
>>>                                       ppc_hash_pte64_t *pte)
>>>  {
>>> -    hwaddr pte_offset = pteg_off;
>>> +    hwaddr pte_offset;
>>>      target_ulong pte0, pte1;
>>> -    int i;
>>> -
>>> -    for (i = 0; i < HPTES_PER_GROUP; i++) {
>>> -        pte0 = ppc_hash64_load_hpte0(env, pte_offset);
>>> -        pte1 = ppc_hash64_load_hpte1(env, pte_offset);
>>> -
>>> -        if ((pte0 & HPTE64_V_VALID)
>>> -            && (secondary == !!(pte0 & HPTE64_V_SECONDARY))
>>> -            && HPTE64_V_COMPARE(pte0, ptem)) {
>>> -            pte->pte0 = pte0;
>>> -            pte->pte1 = pte1;
>>> -            return pte_offset;
>>> +    int i, ret = 0;
>>> +
>>> +    if (kvm_enabled()) {
>>> +        ret = kvmppc_hash64_pteg_search(ppc_env_get_cpu(env), hash,
>>> +                                        secondary, ptem,
>>> +                                        &pte->pte0, &pte->pte1);
>>
>> Instead of duplicating the search, couldn't you just hook yourself into
>> ppc_hash64_load_hpte0/1 and return the respective ptes from there? Just
>> cache the current pteg to ensure things don't become dog slow.
>>
>
> Can you explain this better ?
You're basically doing
hwaddr ppc_hash64_pteg_search(...)
{
    if (kvm) {
        pteg = read_from_kvm();
        foreach pte in pteg {
            if (match) return offset;
        }
        return -1;
    } else {
        foreach pte in pteg {
            pte = read_pte_from_memory();
            if (match) return offset;
        }
        return -1;
    }
}
This is massive code duplication. The only difference between kvm and tcg is
the source of the pteg read. David already abstracted the actual pte0/pte1
reads away in the ppc_hash64_load_hpte0 and ppc_hash64_load_hpte1 wrapper functions.
Now imagine we could add a temporary pteg store in env. Then have something
like this in ppc_hash64_load_hpte0:
if (need_kvm_htab_access) {
    if (env->current_cached_pteg != this_pteg) {
        read_pteg(env->cached_pteg);
    }
    return env->cached_pteg[x].pte0;
} else {
    <do what was done before>
}
That way the actual resolver doesn't care where the PTEG comes from: it only
ever checks pte0/pte1 and leaves all the magic of where those come from to the
load function.
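To make the idea concrete, here is a minimal self-contained C sketch of that
caching scheme. The names (Env, read_pteg, load_hpte0/1) and the flat
backing[] array standing in for the KVM htab fd are illustrative assumptions
for this sketch, not the actual QEMU/KVM API:

```c
#include <string.h>

#define HPTES_PER_GROUP 8

typedef unsigned long target_ulong;

/* Toy backing store standing in for the KVM htab file descriptor:
 * 4 PTEGs, each holding HPTES_PER_GROUP (pte0, pte1) pairs. */
static target_ulong backing[4 * HPTES_PER_GROUP * 2];

/* Counts bulk PTEG reads, to show that the cache avoids repeated fetches. */
static int pteg_reads;

/* Hypothetical per-CPU cache of the last PTEG fetched from KVM. */
typedef struct {
    long cached_pteg_idx;   /* -1 means nothing cached yet */
    target_ulong cached_pteg[HPTES_PER_GROUP * 2];
} Env;

/* Stand-in for one bulk read of a whole PTEG (what the htab fd ioctl
 * would deliver in the real code). */
static void read_pteg(Env *env, long pteg_idx)
{
    memcpy(env->cached_pteg, &backing[pteg_idx * HPTES_PER_GROUP * 2],
           sizeof(env->cached_pteg));
    env->cached_pteg_idx = pteg_idx;
    pteg_reads++;
}

/* Cached loads: only hit the backing store when the PTEG changes, so the
 * resolver loop can call these per slot without being dog slow. */
static target_ulong load_hpte0(Env *env, long pteg_idx, int slot)
{
    if (env->cached_pteg_idx != pteg_idx) {
        read_pteg(env, pteg_idx);
    }
    return env->cached_pteg[slot * 2];
}

static target_ulong load_hpte1(Env *env, long pteg_idx, int slot)
{
    if (env->cached_pteg_idx != pteg_idx) {
        read_pteg(env, pteg_idx);
    }
    return env->cached_pteg[slot * 2 + 1];
}
```

With this shape, a single search loop works for both backends; only
read_pteg() would differ between KVM and TCG.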
Alex