
Re: [lwip-users] Allocating and deallocating an mbox in different threads


From: Andrew Lentvorski
Subject: Re: [lwip-users] Allocating and deallocating an mbox in different threads
Date: Wed, 18 Apr 2007 04:04:52 -0700
User-agent: Thunderbird 1.5.0.10 (Macintosh/20070221)

Kieran Mansley wrote:
> On Wed, 2007-04-18 at 03:19 -0700, Andrew Lentvorski wrote:
>> In this instance, though, the freeing of the mbox is being used as a
>> reverse message between the API layers. That's uncool and makes all
>> kinds of assumptions about the implementation of mbox.
>>
>> Is there any way to change this to an actual explicit reverse message
>> and let the original thread do the deallocation?
>
> Can you give a bit more detail about what problem this is causing?

The problem this is causing is that I have two mbox types of which only one is "memory-based".

I made a bunch of modifications to separate the two types in the code. Amazingly, this was a pretty straightforward task, if somewhat tedious.

mbox type A is for intrathread communication. No problems--this is the mbox everybody is used to.

mbox type B is for interthread communication. This is an actual hardware FIFO between processors. It is *not* memory-based.

The problem is that deallocation is signaled by the mbox becoming NULL. That's an implicit message, and I don't get the ability to send implicit messages through a hardware FIFO.
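To make that concrete, here's a minimal sketch of what I mean (hypothetical names, not the real lwIP sys_mbox API): the memory-based type can use the pointer going NULL as its "freed" signal, but the FIFO type has nothing equivalent for the other CPU to observe.

#include <stdlib.h>

/* Hypothetical tagged mbox -- illustration only, not lwIP's API. */
typedef enum { MBOX_MEMORY, MBOX_HW_FIFO } mbox_kind;

typedef struct my_mbox {
    mbox_kind kind;
    void **slots;   /* message storage, memory-based type only */
    /* ... queue indices, semaphores, etc. ... */
} my_mbox;

void my_mbox_free(my_mbox **mbox)
{
    if ((*mbox)->kind == MBOX_MEMORY) {
        free((*mbox)->slots);
        free(*mbox);
        *mbox = NULL;   /* the implicit "this mbox is gone" message:
                           anything checking the pointer sees NULL */
    } else {
        /* MBOX_HW_FIFO: the peer CPU only ever sees words pushed
           through the FIFO; NULLing a pointer on this side tells
           the other side nothing. */
    }
}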

I only get two threads: Thread 1 is an ARM7, Thread 2 is an ARM9.

I can't "just send a deallocation message" as I am already in a polling loop on the ARM7 inside tcpip_thread(). There is nothing to receive a "deallocated mbox" message, and I'm at the wrong abstraction level anyhow for such a low-level message. I can't abstract the FIFO into another thread as I don't have another thread to use on both sides. And, even if I *could* deallocate, the runtimes on both sides are completely independent so a free() would just throw a segfault or hang anyway (probably just hang, no memory protection and lots of caching collisions).

So, my code trucks along and, at best, starts leaking allocated mboxes on one side. At worst, it hangs (presumably when it runs out of mboxes).

Basically, Thread 2 needs to send a "connection closed" message at a slightly higher level of abstraction, which Thread 1 would receive and then handle its own low-level cleanup.
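Something along these lines is what I have in mind. This is just a sketch under my assumptions; hw_fifo_send()/hw_fifo_recv() are made-up stand-ins for the real hardware FIFO access, not anything in lwIP.

#include <stdint.h>

/* Made-up platform hooks for the hardware FIFO; prototypes only. */
void hw_fifo_send(const void *msg, unsigned len);
int  hw_fifo_recv(void *msg, unsigned len);  /* non-blocking; 1 = got one */

enum { MSG_DATA = 1, MSG_CONN_CLOSED = 2 };

typedef struct {
    uint8_t  type;      /* MSG_DATA or MSG_CONN_CLOSED */
    uint32_t conn_id;   /* which connection this refers to */
} fifo_msg;

/* ARM9 side (Thread 2): instead of freeing the shared mbox, announce
   the close at the protocol level. */
void thread2_close_connection(uint32_t conn_id)
{
    fifo_msg m = { MSG_CONN_CLOSED, conn_id };
    hw_fifo_send(&m, sizeof m);
}

/* ARM7 side (Thread 1): inside the existing tcpip_thread() polling
   loop, react to the close and free only memory this CPU owns. */
void thread1_poll_fifo(void)
{
    fifo_msg m;
    while (hw_fifo_recv(&m, sizeof m)) {
        if (m.type == MSG_CONN_CLOSED) {
            /* local cleanup for m.conn_id, using this side's own
               allocator -- no cross-CPU free() */
        }
    }
}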

I don't quite know what the implications of doing this via messages would be, though. The mbox code seems to have some strong conditions on when and how an mbox gets closed in order to avoid race conditions.

-a



