From: Scott Wood
Subject: Re: [Qemu-devel] [PATCH] roms: Flush icache when writing roms to guest memory
Date: Fri, 13 Dec 2013 13:18:50 -0600

On Wed, 2013-12-11 at 13:56 +0000, Peter Maydell wrote:
> On 11 December 2013 13:23, Alexander Graf <address@hidden> wrote:
> > The guest expects that its data and instruction cache view of the world
> > is 100% consistent when it initially boots. This works just fine on
> > initial rom population for the first boot.
> >
> > However, when we reboot and then repopulate the rom region there could
> > be old code still stuck in the instruction cache, giving the guest an
> > inconsistent view of the world when we're using kvm.
> >
> > So we need to invalidate the icache every time we write a rom into guest
> > address space. We do not need to do this for every DMA since the guest
> > expects it has to flush the icache manually in that case.
> 
> > @@ -2033,6 +2034,13 @@ void cpu_physical_memory_write_rom(hwaddr addr,
> >              ptr = qemu_get_ram_ptr(addr1);
> >              memcpy(ptr, buf, l);
> >              invalidate_and_set_dirty(addr1, l);
> > +            if (kvm_enabled()) {
> > +                /*
> > +                 * The guest may want to directly execute from the rom region,
> > +                 * so we better invalidate its icache
> > +                 */
> > +                flush_icache_range((uintptr_t)ptr, (uintptr_t)ptr + l);
> > +            }
> 
> I bet these aren't the only places where code gets written
> to guest memory. Also are you sure flush_icache_range()
> works correctly when multiple threads (multiple vCPUs,
> potentially executing on different host CPUs) are involved?

On PPC these cache operations are broadcast to all CPUs, and they are the
architecturally defined way of handling self-modifying code.
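
For reference, the architected sequence is roughly the following (a sketch,
not QEMU code; it assumes a fixed 32-byte cache line size for illustration,
where real code would query the actual line sizes, e.g. from the aux vector.
QEMU's flush_icache_range() on a PPC host does essentially this):

#include <stdint.h>

#define CACHE_LINE 32  /* assumed line size, for illustration only */

static void ppc_flush_icache_range(uintptr_t start, uintptr_t stop)
{
    uintptr_t p;

    /* Push the modified data cache blocks out to memory... */
    for (p = start & ~(uintptr_t)(CACHE_LINE - 1); p < stop; p += CACHE_LINE) {
        asm volatile ("dcbst 0,%0" : : "r"(p) : "memory");
    }
    asm volatile ("sync" : : : "memory");    /* order dcbst before icbi */

    /* ...then invalidate the corresponding instruction cache blocks.
     * Both operations are broadcast, so other CPUs' caches are covered
     * as well. */
    for (p = start & ~(uintptr_t)(CACHE_LINE - 1); p < stop; p += CACHE_LINE) {
        asm volatile ("icbi 0,%0" : : "r"(p) : "memory");
    }
    asm volatile ("sync" : : : "memory");
    asm volatile ("isync" : : : "memory");   /* discard prefetched instructions */
}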

>  The TCG case only needs to care about "this thread writes code
> to memory that it will itself later execute", not any kind of
> cross-host-CPU flushing.

Can't the TCG thread get migrated between CPUs?

> There was a huge thread on kvmarm earlier this year
> https://lists.cs.columbia.edu/pipermail/kvmarm/2013-August/006716.html
> about a similar sort of issue, and I think the conclusion was that
> the kernel basically had to deal with the problem itself [though
> the thread is rather confusing...]. I've cc'd Marc Z in the hope
> he remembers the ARM specific detail...

Hmm, a good point is raised in that thread regarding what happens if a
page is swapped out and then back in:
https://lists.cs.columbia.edu/pipermail/kvmarm/2013-August/006738.html

I think the usual mechanism PPC booke uses to handle this is currently
not effective with KVM because we get the pages via __get_user_pages_fast()
rather than by enabling execute permission in an ISI (see the
second instance of set_pte_filter() in arch/powerpc/mm/pgtable.c).  Even
if we fix that to invoke the cache cleaning code when KVM acquires a
page, though, QEMU would still need to flush if it modifies/loads code
on a page that may already be marked in the kernel as having been
cleaned.
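
So on the QEMU side the only safe thing is to flush unconditionally after
the copy, which is what the patch above amounts to.  Roughly (a sketch;
flush_icache_range() is QEMU's own helper, __builtin___clear_cache() is
shown here only as a portable stand-in, and rom_write_flushed() is a
made-up name):

#include <stddef.h>
#include <string.h>

/* Copy a (re)loaded ROM image into guest memory and invalidate the host
 * icache for that range.  The flush is unconditional because QEMU cannot
 * tell whether the kernel still considers the page clean from an earlier
 * boot. */
static void rom_write_flushed(void *host_ptr, const void *buf, size_t len)
{
    memcpy(host_ptr, buf, len);
    __builtin___clear_cache((char *)host_ptr, (char *)host_ptr + len);
}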

-Scott




