From: Michael S. Tsirkin
Subject: [Qemu-devel] Re: [PATCH-RFC 0/3] qemu: memory barriers in virtio
Date: Wed, 23 Dec 2009 10:55:45 +0200
User-agent: Mutt/1.5.19 (2009-01-05)

On Tue, Dec 22, 2009 at 10:58:16PM +0000, Paul Brook wrote:
> > > Given this is supposed to be portable code, I wonder if we should have
> > > atomic ordered memory accessors instead.
> > >
> > > Paul
> > 
> > Could you clarify please?
> > 
> > The infiniband bits I used as base are very portable,
> > I know they build on a ton of platforms. I just stripped
> > a couple of infiniband specific assumptions from there.
> > 
> > Do you suggest we use __sync_synchronize?
> > Unfortunately this is broken or slow on many platforms.
> > I do use it when it seems safe or when we see a platform
> > we don't know about.
> 
> I mean have a single function that does both the atomic load/store and the 
> memory barrier. Instead of:
> 
>   stw_phys(addr, val)
>   barrier();
> 
> We do:
> 
>   stw_phys_barrier(addr, val).

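For concreteness, I read that as roughly the following sketch (the
barrier placement here is my own guess, which is rather the point):

    /* Sketch only: stw_phys() is qemu's existing 16-bit guest-physical
     * store; whether the barrier belongs before or after it is exactly
     * what the combined name leaves open. */
    static inline void stw_phys_barrier(target_phys_addr_t addr, uint16_t val)
    {
        stw_phys(addr, val);      /* the store itself                    */
        __sync_synchronize();     /* full barrier -- after it? before?   */
    }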

Well, I think it's a good idea to use the Linux APIs instead of
inventing our own. A lot of people are familiar with them, and decent
documentation has already been written for them.

In the example above, the name does not make it clear whether the
barrier comes before or after the store. I think this demonstrates
why it's a good idea to stick to the Linux conventions.
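With the Linux names the ordering is spelled out at the call site.
Roughly the producer-side pattern virtio relies on looks like this
(generic sketch, not the actual qemu code; wmb() here just stands in
for whatever the per-arch macro expands to):

    #include <stdint.h>

    /* stand-in for the Linux-style write barrier; the patches map this
     * per architecture, falling back to __sync_synchronize() */
    #define wmb() __sync_synchronize()

    struct ring {
        uint16_t data[256];
        volatile uint16_t idx;         /* the consumer polls this index   */
    };

    static void ring_publish(struct ring *r, uint16_t val)
    {
        r->data[r->idx % 256] = val;   /* 1. write the payload            */
        wmb();                         /* 2. order it before the index    */
        r->idx++;                      /* 3. publish to the consumer side */
    }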

> 
> This avoids issues in the future (multithreaded TCG) where atomic memory 
> accesses may be nontrivial.
> 
> Paul

Unfortunately I have no real idea how this will work or what the
issues are. I speculate that stw_phys, on host platforms that cannot
write 2 bytes atomically, will need to take some lock. If so, we could
possibly optimize the barrier away, but I don't think this amounts to
a serious issue; portability and readability seem more important.
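For example, if the fallback on such hosts ended up looking anything
like the sketch below (pure speculation, the names are made up), the
lock operations themselves already order the store:

    #include <pthread.h>
    #include <stdint.h>

    /* Speculative sketch only: a host without atomic 16-bit stores could
     * serialize them with a mutex, and the lock/unlock pair already acts
     * as a barrier around the store, so an extra wmb() buys little here. */
    static pthread_mutex_t phys_store_lock = PTHREAD_MUTEX_INITIALIZER;

    static void stw_phys_locked(uint16_t *host_ptr, uint16_t val)
    {
        pthread_mutex_lock(&phys_store_lock);    /* implies acquire ordering */
        *host_ptr = val;                         /* the 2-byte store         */
        pthread_mutex_unlock(&phys_store_lock);  /* implies release ordering */
    }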

-- 
MST



