From: Benjamin Herrenschmidt
Subject: Re: [Qemu-devel] [RFC/PATCH] Add a memory barrier to guest memory access functions
Date: Thu, 17 May 2012 12:44:45 +1000

On Wed, 2012-05-16 at 21:28 -0500, Anthony Liguori wrote:

> > @@ -2794,6 +2795,9 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
> >   {
> >       RAMBlock *block;
> >
> > +    /* We ensure ordering for all DMA transactions */
> > +    dma_mb();
> > +
> 
> I get being conservative, but I don't think this makes a lot of sense.  There 
> are cases where the return of this function is cached (like the VGA ram 
> area). 
> I think it would make more sense if you explicitly put a barrier after write 
> operations.

Well, it depends ... sure, something that caches the result is akin to
map/unmap and is responsible for doing its own barriers between accesses.
As a whole, however, this means that an entire map/unmap section is
ordered vs. the surrounding accesses, which is actually not a bad idea.
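
For reference, dma_mb() only needs to be a full memory fence. A minimal
sketch of what such a helper could look like, assuming GCC-style builtins
(the actual definition in the patch may differ):

    /* Illustrative only: a full memory barrier usable as dma_mb().
     * __sync_synchronize() is a GCC/Clang builtin that emits a full
     * hardware fence (e.g. mfence on x86, sync on POWER).
     */
    #define dma_mb()    __sync_synchronize()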

Anyway, I'll post a different patch that adds the barrier more
selectively to:

 - cpu_physical_memory_rw  (that's the obvious main one, see the
   sketch of the placement further down)
 - cpu_physical_memory_write_rom (probably overkill but
   not a fast path so no big deal)
 - ld*_* and st*_* (or do you think these should require
   explicit barriers in the callers?)
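
To make the placement concrete, here is a rough sketch of where the
barrier would sit in a cpu_physical_memory_rw style accessor. This is
not the actual exec.c code (which walks the memory regions and handles
MMIO); guest_ram_ptr() is a made-up helper standing in for the RAM
block lookup, and the type and signature are only approximate:

    #include <stdint.h>
    #include <string.h>

    typedef uint64_t target_phys_addr_t;            /* placeholder type */
    extern void *guest_ram_ptr(target_phys_addr_t); /* hypothetical lookup */
    #define dma_mb()  __sync_synchronize()          /* full fence, as above */

    static void cpu_physical_memory_rw_sketch(target_phys_addr_t addr,
                                              uint8_t *buf, int len,
                                              int is_write)
    {
        /* Order this access against all prior guest-visible accesses
         * made by the device model (descriptor reads, register state,
         * completion flags, ...). */
        dma_mb();

        if (is_write) {
            memcpy(guest_ram_ptr(addr), buf, len);
        } else {
            memcpy(buf, guest_ram_ptr(addr), len);
        }
    }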

Note that with the above, cpu_physical_memory_map and unmap will
imply a barrier when using bounce buffers; it would make sense to also
provide the same barrier when they are not used.

That does actually make sense for the same reason explained above:
when those are used for a DMA transfer via async IO, it guarantees
ordering of the "block" vs. surrounding accesses, even if accesses
within the actual map/unmap region are not ordered vs. each other.
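
From a device model's point of view, the intended semantics would be
roughly as sketched below. The map/unmap prototypes are only written
out to make the sketch self-contained, and fill_buffer() /
write_completion_flag() are made-up helpers; the point is just where
the implied barriers would sit:

    #include <stdint.h>

    typedef uint64_t target_phys_addr_t;    /* placeholder type */

    /* Declared here only so the sketch is self-contained. */
    void *cpu_physical_memory_map(target_phys_addr_t addr,
                                  target_phys_addr_t *plen, int is_write);
    void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
                                   int is_write,
                                   target_phys_addr_t access_len);

    /* Hypothetical device-model helpers, for illustration only. */
    void fill_buffer(void *p, target_phys_addr_t len);
    void write_completion_flag(void);

    static void dma_block_example(void)
    {
        target_phys_addr_t addr = 0x1000;   /* made-up guest buffer */
        target_phys_addr_t len  = 4096;

        void *p = cpu_physical_memory_map(addr, &len, 1);
        /* implied barrier: everything before the block is ordered first */

        fill_buffer(p, len);                /* stores inside the block stay
                                             * unordered vs. each other */

        cpu_physical_memory_unmap(p, len, 1, len);
        /* implied barrier: the whole block is ordered before what
         * follows, e.g. telling the guest the transfer is done */
        write_completion_flag();
    }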

Any objection?

Cheers,
Ben.