
From: Aurelien Jarno
Subject: Re: [Qemu-devel] [RFC PATCH 3/4] ppc: Use split I/D mmu modes to avoid flushes on interrupts
Date: Mon, 20 Jul 2015 09:11:06 +0200
User-agent: Mutt/1.5.23 (2014-03-12)

On 2015-07-20 09:33, Benjamin Herrenschmidt wrote:
> On Mon, 2015-07-20 at 01:01 +0200, Aurelien Jarno wrote:
> > One way to improve this would be to reduce the size of a TLB entry.
> > Currently we store the page address separately for read, write and
> > code. The information is therefore quite redundant.
> > 
> > We might want to have only one page address entry and encode if it is
> > allowed for read, write or code in the low bits just like we do for
> > invalid, mmio or dirty. This means the TLB entry can be checked with
> > 
> >   env->tlb_table[mmu_idx][page_index].ADDR ==
> >   ((addr & (TARGET_PAGE_MASK | (DATA_SIZE - 1))) | READ/WRITE/CODE)
> > 
> > with READ/WRITE/CODE each being a different bit (they can probably even
> > replace invalid). In practice it means one more instruction in the fast
> > path (an "or" with an 8-bit immediate), but it halves the size of a TLB
> > entry on a 64-bit machine. It might be worth a try.
> It might, but that means "fixing" all TCG backends, which I'm not
> necessarily looking forward to :-) The cost of that one "or" might be
> minimal on some processors, but I wouldn't bet on it, since the
> instructions are basically all dependent.

Understood. I did some tests showing that the number of instructions in
the fast path does not have a big performance impact. In that case there
is a dependency between instructions, but the CPU is likely to be
stalled on the TLB entry load before the memory access anyway, so we can
add one instruction ahead of it with very little impact.
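For concreteness, the combined check proposed above could be sketched as
follows. This is a standalone illustration, not QEMU code: the field and
bit names (addr, TLB_READ, ...) are made up for the example, and it only
covers the simple case of a page that permits a single access kind.

```c
#include <stdint.h>

/* Sketch of the proposed compact TLB entry: a single address word per
 * entry, with the permitted access kind encoded in the low bits.  All
 * identifiers here are illustrative, not QEMU's actual ones. */
#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~(uint64_t)((1u << TARGET_PAGE_BITS) - 1))

#define TLB_READ  ((uint64_t)1 << 0)
#define TLB_WRITE ((uint64_t)1 << 1)
#define TLB_CODE  ((uint64_t)1 << 2)

typedef struct {
    uint64_t addr;   /* page address | permitted-access bit */
} tlb_entry;

/* Fast-path comparison from the proposal: one extra "or" folds the
 * access kind into the compared value, so either a page mismatch or a
 * permission mismatch forces a TLB miss (slow path).  A misaligned
 * access also fails the compare via the (data_size - 1) bits. */
static int tlb_hit(const tlb_entry *e, uint64_t addr,
                   uint64_t data_size, uint64_t access_bit)
{
    return e->addr ==
        ((addr & (TARGET_PAGE_MASK | (data_size - 1))) | access_bit);
}
```

Note that for accesses wider than one byte the permission bits overlap
the (DATA_SIZE - 1) alignment bits, and a page permitted for several
access kinds would not match a single-bit compare; a real implementation
would have to sort out both details.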

I'll keep this idea on my todo list for another day.


Aurelien Jarno                          GPG: 4096R/1DDD8C9B
address@hidden                 http://www.aurel32.net
