Re: [PATCH v2] target/i386: Fix physical address truncation
From: Paolo Bonzini
Subject: Re: [PATCH v2] target/i386: Fix physical address truncation
Date: Fri, 22 Dec 2023 17:52:00 +0100
On Fri, Dec 22, 2023 at 5:16 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On Fri, Dec 22, 2023 at 10:04 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
> > > If the extension is not needed, then the a20 mask isn't either.
> >
> > I think it is. The extension is not needed because the masking is
> > applied by either TCG (e.g. in gen_lea_v_seg_dest or gen_add_A0_im) or
> > mmu_translate(); but the a20 mask is never applied elsewhere for
> > either non-paging mode or page table walks.
>
> Hmm, except helpers do not apply the masking. :/
>
> So Michael's patch would, for example, break something as simple as a
> BOUND, FSAVE or XSAVE operation invoked around the 4GB boundary.
>
> The easiest way to proceed is to introduce a new MMU index
> MMU_PTW_IDX, which is the same as MMU_PHYS_IDX except it does not mask
> 32-bit addresses. Any objections?
Never mind, I wasn't thinking straight.
Helpers will not use MMU_PHYS_IDX, so those are fine; we just need to
keep the masking before the "break".
The only user of MMU_PHYS_IDX is VMRUN/VMLOAD/VMSAVE. We need to add
checks there that the VMCB is aligned, and the same when writing to
MSR_HSAVE_PA.
Paolo