
Re: [lwip-users] Re: lightweight protection


From: Kieran Mansley
Subject: Re: [lwip-users] Re: lightweight protection
Date: Thu, 20 Feb 2003 18:50:51 +0000 (GMT)

On Thu, 20 Feb 2003, Marc Boucher wrote:

> On Thu, Feb 20, 2003 at 05:24:51PM +0000, Kieran Mansley wrote:
> > On Thu, 20 Feb 2003, Marc Boucher wrote:
> > > Locks should only be held, or interrupts blocked, for the shortest
> > > time required to manipulate shared data and avoid races.
> >
> > In general, yes, but there is a trade-off with the overhead it takes to
> > acquire or release a lock.  I.e. if you are continually taking and
> > releasing locks in a tight loop, to avoid holding a lock over a piece
> > of code that doesn't require it, you will find you're spending all of
> > your time on the locking.
> >
> > Kieran
>
> Yes, of course. Locks should be used with good judgement.
>
> My point was that it is not proper, as is done in the current CVS code,
> to hold the lock around whole functions that are also entered without
> it, or around calls like memset() that zero newly allocated private
> memory and so belong outside the lock.
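
To make the discipline Marc is describing concrete, here is a minimal
sketch (conn_list, conn_lock and conn_new() are made-up names, not
anything in the tree, and the old sys_arch semaphore API is assumed):
zero the private memory outside the lock, and hold the lock only for
the shared-list update.

#include <string.h>
#include "lwip/sys.h"
#include "lwip/mem.h"

struct conn {
  struct conn *next;
  int state;
};

static struct conn *conn_list;   /* shared data */
static sys_sem_t conn_lock;      /* binary semaphore guarding conn_list,
                                    created elsewhere with sys_sem_new(1) */

struct conn *conn_new(void)
{
  struct conn *c = mem_malloc(sizeof(struct conn));
  if (c == NULL) {
    return NULL;
  }
  /* Private memory: no other thread can see it yet, so zeroing it
     does not need the lock and should happen outside it. */
  memset(c, 0, sizeof(struct conn));

  sys_sem_wait(conn_lock);       /* hold the lock only for the list update */
  c->next = conn_list;
  conn_list = c;
  sys_sem_signal(conn_lock);
  return c;
}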

The approach I have taken (my code has unfortunately diverged from CVS so
much that it's no longer feasible for me to merge things back in) is to
have a lock on each PCB.  Whenever code deals with the state associated
with a connection, it takes that connection's lock, so different
connections do not interfere with each other.  The pbuf code also has a
mutex to ensure that the pools are protected.  I notice that after 0.5.3
there was a move towards having more global variables, which are a pain
as far as locks (and this approach in particular) are concerned.
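
To sketch what a lock on each PCB can look like in practice (the
locked_pcb wrapper and the LOCK_PCB/UNLOCK_PCB macros are illustrative
names invented here, not anything in the tree, and the old
four-argument tcp_write() is assumed):

#include "lwip/sys.h"
#include "lwip/tcp.h"

struct locked_pcb {
  struct tcp_pcb *pcb;
  sys_sem_t lock;      /* binary semaphore created with sys_sem_new(1) */
};

#define LOCK_PCB(lp)   sys_sem_wait((lp)->lock)
#define UNLOCK_PCB(lp) sys_sem_signal((lp)->lock)

/* Any thread touching per-connection state takes that PCB's lock,
   so two different connections never block each other. */
static err_t locked_send(struct locked_pcb *lp, const void *data, u16_t len)
{
  err_t err;
  LOCK_PCB(lp);
  err = tcp_write(lp->pcb, data, len, 1);  /* 1 = copy the data */
  if (err == ERR_OK) {
    tcp_output(lp->pcb);
  }
  UNLOCK_PCB(lp);
  return err;
}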

One caveat is that I also changed the threading model, so what is working
reasonably well for me might not suit the main code.  Rather than having
one thread execute all the protocol code and interface to other threads
through the various api layers, I have the application threads driving
the protocol code directly.  This means many more threads can be
executing parts of the stack concurrently, and you can get significantly
higher performance.
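
Roughly, the difference is this (reusing the illustrative locked_pcb
sketch from above; none of this is my real code): instead of posting a
message to a single stack thread, the application thread takes the
PCB's lock and calls the protocol code itself.

static void app_thread(void *arg)
{
  struct locked_pcb *lp = (struct locked_pcb *)arg;
  const char msg[] = "hello";

  /* No message passing and no context switch to a dedicated stack
     thread: the application thread runs the TCP code directly,
     serialised per connection by the PCB's lock. */
  LOCK_PCB(lp);
  tcp_write(lp->pcb, msg, sizeof(msg) - 1, 1);
  tcp_output(lp->pcb);
  UNLOCK_PCB(lp);
}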

I also implemented a set of locks that are much more lightweight than the
standard sys_arch stuff (which signal() other threads when the lock is
released, and that signal can take a long time); they allow many
concurrent readers or one exclusive writer.  This seemed to help a bit
performance-wise, but I don't recall how much.
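
For flavour, a minimal many-readers/one-writer lock might look like the
following.  This sketch uses C11 atomics for brevity, which is not what
my implementation does, and rwlock_t is an invented name; a real port
would use its CPU's own atomic primitives.

#include <stdatomic.h>

typedef struct {
  atomic_int readers;   /* >0: that many readers, -1: one writer, 0: free */
} rwlock_t;

static void rw_read_lock(rwlock_t *l)
{
  for (;;) {
    int r = atomic_load(&l->readers);
    /* Join the readers only if no writer (-1) holds the lock. */
    if (r >= 0 && atomic_compare_exchange_weak(&l->readers, &r, r + 1)) {
      return;
    }
  }
}

static void rw_read_unlock(rwlock_t *l)
{
  atomic_fetch_sub(&l->readers, 1);
}

static void rw_write_lock(rwlock_t *l)
{
  int expected = 0;
  /* Spin until there are no readers and no writer, then claim it. */
  while (!atomic_compare_exchange_weak(&l->readers, &expected, -1)) {
    expected = 0;
  }
}

static void rw_write_unlock(rwlock_t *l)
{
  atomic_store(&l->readers, 0);
}

Because releasing it is just an atomic store, the release path never
has to signal a sleeping thread, which is where the standard sys_arch
locks spend their time.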

If you want examples of what I've done or how it works, I'm happy to
release a snapshot of my code so others can take a look.  Unfortunately
it won't compile or run for anyone else (it requires lots of other stuff
for my netif, plus proprietary hardware), but it might be illustrative if
you're interested in any of the above.

Kieran