Re: [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap
From: David Gibson
Subject: Re: [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap
Date: Thu, 19 Mar 2020 16:20:51 +1100
On Wed, Mar 18, 2020 at 05:52:32PM +0100, Greg Kurz wrote:
> On Wed, 18 Mar 2020 14:41:34 +1000
> Nicholas Piggin <address@hidden> wrote:
>
> > slbia must invalidate TLBs even if it does not remove a valid SLB
> > entry, because slbmte can overwrite valid entries without removing
> > their TLBs.
> >
> > As the architecture says, slbia invalidates all lookaside information,
> > not conditionally based on if it removed valid entries.
> >
> > It does not seem possible for POWER8 or earlier Linux kernels to hit
> > this bug because it never changes its kernel SLB translations, and it
> > should always have valid entries if any accesses are made to usespace
>
> s/usespace/userspace
Corrected in my tree, thanks.
>
> > regions. However other operating systems which may modify SLB entry 0
> > or do more fancy things with segments might be affected.
> >
> > When POWER9 slbia support is added in the next patch, this becomes a
> > real problem because some new slbia variants don't invalidate all
> > non-zero entries.
> >
> > Signed-off-by: Nicholas Piggin <address@hidden>
> > ---
>
> LGTM
>
> Reviewed-by: Greg Kurz <address@hidden>
>
> > target/ppc/mmu-hash64.c | 21 +++++++++++++++------
> > 1 file changed, 15 insertions(+), 6 deletions(-)
> >
> > diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> > index 34f6009b1e..373d44de74 100644
> > --- a/target/ppc/mmu-hash64.c
> > +++ b/target/ppc/mmu-hash64.c
> > @@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
> > PowerPCCPU *cpu = env_archcpu(env);
> > int n;
> >
> > + /*
> > + * slbia must always flush all TLB (which is equivalent to ERAT in ppc
> > + * architecture). Matching on SLB_ESID_V is not good enough, because
> > + * slbmte can overwrite a valid SLB without flushing its lookaside
> > + * information.
> > + *
> > + * It would be possible to keep the TLB in synch with the SLB by
> > + * flushing when a valid entry is overwritten by slbmte, and therefore
> > + * slbia would not have to flush unless it evicts a valid SLB entry.
> > + * However it is expected that slbmte is more common than slbia, and
> > + * slbia is usually going to evict valid SLB entries, so that tradeoff
> > + * is unlikely to be a good one.
> > + */
> > +
> > /* XXX: Warning: slbia never invalidates the first segment */
> > for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
> > ppc_slb_t *slb = &env->slb[n];
> >
> > if (slb->esid & SLB_ESID_V) {
> > slb->esid &= ~SLB_ESID_V;
> > - /*
> > - * XXX: given the fact that segment size is 256 MB or 1TB,
> > - * and we still don't have a tlb_flush_mask(env, n, mask)
> > - * in QEMU, we just invalidate all TLBs
> > - */
> > - env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> > }
> > }
> > +
> > + env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> > }
> >
> > static void __helper_slbie(CPUPPCState *env, target_ulong addr,
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson