From: Stephane Lesage
Subject: Re: [lwip-users] Ethernet Driver development guidelines (Blackfin BF536 BF537 integrated EMAC)
Date: Tue, 18 Mar 2008 19:21:04 +0100
User-agent: Thunderbird 2.0.0.12 (Windows/20080213)
Bill Auerbach wrote:
This was the point I missed, at first, in that discussion. You don’t
need to worry about when the pbuf is freed. Simply replace the filled
one with an empty one (allocate a new one) and when lwIP is done with
the original one, it will free it, placing it in the pool for a future
allocation in your packet receiver.
Yes that's what I wanted to do.
There's no problem with PBUF_POOL buffers.
The problem appears only when we want to use PBUF_REF types.
I don't know Blackfin, but if it has buffer descriptors like the PowerPC,
then this thread answered the question for me for no-copy buffers that the
Ethernet controller can DMA into. Works great for me on a PPC.
http://lists.gnu.org/archive/html/lwip-users/2007-12/msg00071.html
Thanks for the link.
My descriptor list does not work like this.
I have to interleave two types of descriptors:
- actual Ethernet Frame data
- detailed frame status on completion/error + frame size +
optional checksum for the IP header and payload (a very cool feature, but
I'm not sure I can use it with lwIP)
My documentation does not give a lot of details about the way the EMAC
works. It says you must reserve memory for max size and set COUNT=0
(max) in the DMA descriptors...
Actually, after 'deep doc inspection' and experimentation, I discovered
it works normally: the EMAC just sends a FINISH command to the DMA
controller to jump to the next descriptor at the end of a frame. Using
small buffers is no problem; the DMA can use as many descriptors as
necessary for a single frame. The last one does not need COUNT=0 either;
the EMAC tells the DMA to jump to the next descriptor anyway, for the
status and the next frame.
So I can use PBUF_POOL exactly the way you do it.
The only problem is that it consumes a whole pbuf for the status
information (8 bytes). Even so, this should be better than 1516-byte buffers.
I'm now OK with packet buffer memory.
I know you don't use an OS.
But how are you handling packet output? Are you waiting for the actual
transmission to complete in netif->link_output()?
Can anyone help me with the blocking/threads/IRQ issues?
Thanks in advance.
--
Stephane Lesage
ATEIS International