Re: [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer


From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer
Date: Fri, 13 Mar 2015 16:16:03 +0800
User-agent: Mutt/1.5.23 (2014-03-12)

On Fri, 03/13 09:09, Paolo Bonzini wrote:
> 
> 
> On 13/03/2015 02:38, Fam Zheng wrote:
> > There is a race condition when two callers enter
> > address_space_map() concurrently and both want to use the bounce
> > buffer.
> > 
> > Add an in_use flag to BounceBuffer to synchronize ownership of it.
> > 
> > Signed-off-by: Fam Zheng <address@hidden>
> > ---
> >  exec.c | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/exec.c b/exec.c
> > index 60b9752..8d4e134 100644
> > --- a/exec.c
> > +++ b/exec.c
> > @@ -2481,6 +2481,7 @@ typedef struct {
> >      void *buffer;
> >      hwaddr addr;
> >      hwaddr len;
> > +    bool in_use;
> >  } BounceBuffer;
> >  
> >  static BounceBuffer bounce;
> > @@ -2569,9 +2570,10 @@ void *address_space_map(AddressSpace *as,
> >      l = len;
> >      mr = address_space_translate(as, addr, &xlat, &l, is_write);
> >      if (!memory_access_is_direct(mr, is_write)) {
> > -        if (bounce.buffer) {
> > +        if (atomic_cmpxchg(&bounce.in_use, false, true)) {
> 
> atomic_or is enough...

atomic_cmpxchg is here to take ownership of bounce iff it is not already
in use: it tests and sets the flag in a single atomic step and returns
the old value, so exactly one caller wins. I think it is necessary.
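
A minimal sketch of that pattern in C11 atomics (editor's illustration,
not QEMU code; QEMU's atomic_cmpxchg() returns the old value rather than
a success flag, and the names below are hypothetical):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool in_use;    /* stands in for bounce.in_use */

    /* Exactly one concurrent caller sees in_use go from false to
     * true and owns the buffer; every other caller backs off. */
    static bool try_claim_bounce(void)
    {
        bool expected = false;
        return atomic_compare_exchange_strong(&in_use, &expected, true);
    }

    static void release_bounce(void)
    {
        /* The seq_cst store publishes this owner's writes before the
         * next successful claim, like atomic_mb_set() in the patch. */
        atomic_store(&in_use, false);
    }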

Fam

> 
> >              return NULL;
> >          }
> > +        smp_mb();
> 
> ... and it already includes a memory barrier.
> 
> Paolo
> 
> >          /* Avoid unbounded allocations */
> >          l = MIN(l, TARGET_PAGE_SIZE);
> >          bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
> > @@ -2639,6 +2641,7 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
> >      qemu_vfree(bounce.buffer);
> >      bounce.buffer = NULL;
> >      memory_region_unref(bounce.mr);
> > +    atomic_mb_set(&bounce.in_use, false);
> >      cpu_notify_map_clients();
> >  }
> >  
> > 
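
For reference, a sketch of the atomic_or alternative Paolo suggests
(editor's illustration; names are hypothetical): a fetch-and-or also
returns the old value, so it can take ownership the same way, and a
seq_cst (or __sync-style) fetch-and-or is itself a full barrier, which
is why the explicit smp_mb() would be redundant.

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_uint in_use;    /* 0 = free, 1 = taken */

    /* OR-ing 1 into an already-set flag is harmless, and the old
     * value tells the caller whether it won; the loser backs off. */
    static bool try_claim_with_or(void)
    {
        return atomic_fetch_or(&in_use, 1u) == 0;
    }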


