From: Alun Evans
Subject: [lwip-users] Custom PBUF_RAM for TX?
Date: Tue, 4 Feb 2014 12:28:24 -0800

Hi,

I’ve been going through the archives and the code looking for an answer. Like a 
few previous users, we’d like the payload of a pbuf to point to a specific 
address in memory:

http://lists.nongnu.org/archive/html/lwip-devel/2007-08/msg00112.html
> [lwip-devel] Best method to allocate space for external packets
> I have a scenario where I want to allocate a pbuf for an incoming packet, but 
> I want that pbuf payload to reside in a region of shared memory that is 
> specified at runtime, and is external to the network stack.  This newly 
> allocated pbuf is then passed to the driver, which then DMAs an incoming 
> payload directly into the space. 

http://lists.nongnu.org/archive/html/lwip-users/2011-08/msg00109.html
> [lwip-users] Memory management for packet buffers
> You don't need custom pbufs for that unless you need the memory to be located 
> at specific addresses.

http://lists.nongnu.org/archive/html/lwip-users/2011-10/msg00121.html
> Re: [lwip-users] Automatic Rx DMA ring replenish
> However, I guess providing a way to change memory allocation/deallocation to 
> use custom functions would be a good thing to support many different types of 
> zero copy MACs without having to change the lwIP code for every hardware, so I 
> guess it's well worth a try for your target!


It looks like the solution is given in:

http://lists.nongnu.org/archive/html/lwip-users/2011-10/msg00009.html
> Re: [lwip-users] Custom memory management
> For the RX side, using a *custom* PBUF_REF would be the best solution. That's 
> a pbuf that has a 'freed' callback and references external memory. However, 
> that doesn't work, yet (though I planned to add support for it as I can see 
> it's one possible solution to implement DMA MAC drivers). The problem here is 
> that pbuf_header() can't grow such pbufs (as it doesn't know the original 
> length). This would have to be fixed by changing the struct pbuf (if only for 
> PBUF_REF pbufs).
> 
> As to the TX side: normally, TX pbufs are allocated as PBUF_RAM, the memory 
> for that is taken from the heap by calling mem_malloc(). Now the simple 
> solution would be to just replace mem.c by your own code allocating and 
> freeing from your TX pool: with the correct settings, mem_malloc() isn't used 
> for anything else but PBUF_RAM pbufs. The only problem might be that you 
> don't know in advance how big the memory blocks are (and thus how big your TX 
> buffer entries should be), but by using TCP_OVERSIZE, you can just use the 
> maximum ethernet frame size (if you don't mind wasting some RAM for smaller 
> packets).
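
That mem.c replacement might look something like the following minimal sketch, 
assuming MEM_LIBC_MALLOC and MEM_USE_POOLS are both 0, and a fixed-size block 
pool carved out of the shared region. TX_REGION_BASE, TX_REGION_SIZE and the 
block size are hypothetical placeholders; locking and mem_calloc() are omitted 
for brevity:

    /* Drop-in replacement for lwIP's mem.c: every allocation hands out a
     * fixed-size block from the external TX region, sized for a worst-case
     * Ethernet frame per the TCP_OVERSIZE suggestion above. */
    #include "lwip/opt.h"
    #include "lwip/mem.h"

    #define TX_BLOCK_SIZE  1600u                 /* >= max frame + pbuf overhead */
    #define TX_BLOCK_COUNT (TX_REGION_SIZE / TX_BLOCK_SIZE)

    static u8_t *tx_free_list[TX_BLOCK_COUNT];
    static u16_t tx_free_top;

    void
    mem_init(void)
    {
      u16_t i;
      /* Carve the shared region into fixed-size blocks. */
      for (i = 0; i < TX_BLOCK_COUNT; i++) {
        tx_free_list[i] = (u8_t *)TX_REGION_BASE + (mem_size_t)i * TX_BLOCK_SIZE;
      }
      tx_free_top = TX_BLOCK_COUNT;
    }

    void *
    mem_malloc(mem_size_t size)
    {
      if (size > TX_BLOCK_SIZE || tx_free_top == 0) {
        return NULL;                             /* too big, or pool exhausted */
      }
      return tx_free_list[--tx_free_top];
    }

    void
    mem_free(void *mem)
    {
      tx_free_list[tx_free_top++] = (u8_t *)mem;
    }

    void *
    mem_trim(void *mem, mem_size_t size)
    {
      (void)size;                                /* blocks are fixed-size */
      return mem;
    }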

Certainly the custom PBUF_REF approach works well for the RX side; for the TX 
side, I wanted to check on the two points below.
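
(For reference, our working RX side uses lwIP's custom-pbuf support, roughly 
like this. LWIP_SUPPORT_CUSTOM_PBUF must be enabled, and rx_buffer_return() is 
a hypothetical driver hook that recycles the DMA buffer:)

    #include "lwip/pbuf.h"

    extern void rx_buffer_return(void *buf);  /* hypothetical RX-ring hook */

    typedef struct my_rx_pbuf {
      struct pbuf_custom p;    /* must be first so lwIP's free finds it */
      void *dma_buf;
    } my_rx_pbuf_t;

    static void
    my_rx_pbuf_freed(struct pbuf *p)
    {
      my_rx_pbuf_t *rx = (my_rx_pbuf_t *)p;
      rx_buffer_return(rx->dma_buf);  /* hand the buffer back to the RX ring */
    }

    /* Wrap a DMA buffer that already holds a received frame, zero-copy. */
    struct pbuf *
    my_rx_wrap(my_rx_pbuf_t *rx, void *buf, u16_t len)
    {
      rx->p.custom_free_function = my_rx_pbuf_freed;
      rx->dma_buf = buf;
      return pbuf_alloced_custom(PBUF_RAW, len, PBUF_REF, &rx->p, buf, len);
    }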

Firstly, there are a few callers of mem_malloc() other than the PBUF_RAM path:

http://git.savannah.gnu.org/cgit/lwip.git/tree/src/core/dhcp.c#n654
http://git.savannah.gnu.org/cgit/lwip.git/tree/src/core/ipv4/autoip.c#n308

Though both modules offer APIs that avoid this, as their documentation notes:

http://git.savannah.gnu.org/cgit/lwip.git/tree/src/core/dhcp.c#n578
> Using this prevents dhcp_start to allocate it using mem_malloc.

http://git.savannah.gnu.org/cgit/lwip.git/tree/src/core/ipv4/autoip.c#n127
>  * Using this prevents autoip_start to allocate it using mem_malloc.

So I guess this is ok?
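
For what it's worth, those two callers can be sidestepped entirely by handing 
the modules statically allocated state before starting them; a minimal sketch, 
assuming LWIP_DHCP and LWIP_AUTOIP are enabled:

    #include "lwip/netif.h"
    #include "lwip/dhcp.h"
    #include "lwip/autoip.h"

    /* Statically allocated client state, so dhcp_start()/autoip_start()
     * never fall back to mem_malloc(). */
    static struct dhcp   dhcp_state;
    static struct autoip autoip_state;

    void
    my_netif_setup(struct netif *netif)
    {
      dhcp_set_struct(netif, &dhcp_state);
      autoip_set_struct(netif, &autoip_state);
      /* ... later: dhcp_start(netif) / autoip_start(netif) ... */
    }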

The other question is that really it’s only the *payload* we’d like allocated at 
this custom address, i.e. we’d be happy for the pbuf’s metadata to live in the 
heap or a pool, as long as the payload points to the custom address. I take it 
that this is not possible?

- It seems we can live with the entire contiguous pbuf living at this custom 
address though, so payload-only placement would just be an optimisation.
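
(With mem.c swapped out as sketched above, that fallback is just a plain 
PBUF_RAM allocation; a trivial illustration:)

    #include "lwip/pbuf.h"

    struct pbuf *
    my_tx_alloc(u16_t len)
    {
      /* mem_malloc() now draws from the custom TX region, so the struct
       * pbuf, its metadata and the payload all land there contiguously. */
      return pbuf_alloc(PBUF_TRANSPORT, len, PBUF_RAM);
    }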


thanks for any assistance!


A.

-- 
Alun Evans
