lwip-users

RE: [lwip-users] LWIP configuration to maximize TCP throughput given RAM constraints


From: Bruce Sutherland
Subject: RE: [lwip-users] LWIP configuration to maximize TCP throughput given RAM constraints
Date: Tue, 21 Oct 2008 16:40:53 +0800

Thank you Mike.

I found the problem. I was correctly splitting the incoming data into a
chain of multiple pbufs where necessary in the Ethernet driver. However,
in my receive callback, which just echoes the incoming data back, I was
deallocating the pbufs before I had finished writing all of them.

Bruce.

> -----Original Message-----
> From: address@hidden [mailto:lwip-users-bounces+bruce.sutherland=rfinnovations.com. address@hidden] On Behalf Of Mike Kleshov
> Sent: Tuesday, 21 October 2008 1:41 PM
> To: Mailing list for lwIP users
> Subject: Re: [lwip-users] LWIP configuration to maximize TCP throughput
> given RAM constraints
> 
> > I would like to change PBUF_POOL_BUFSIZE from the default of
> > TCP_MSS + 40 + 14, to Piero's value of 128, then increase
> > PBUF_POOL_SIZE as appropriate.
> 
> In my application I chose to go with a small 
> PBUF_POOL_BUFSIZE and increased PBUF_POOL_SIZE too. In 
> theory, this should decrease memory use when you have many 
> small incoming packets. When you have large incoming packets, 
> extra processing power will be required for chained pbufs, 
> and memory use will increase due to the overhead of pbuf headers.
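>
> As a rough sketch, that boils down to something like this in lwipopts.h
> (the numbers below are purely illustrative, not the actual values from
> my application):
>
>     #define PBUF_POOL_BUFSIZE  128   /* small pool pbufs */
>     #define PBUF_POOL_SIZE     32    /* more of them to compensate */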
> 
> > However, when I do this, I find that incoming TCP packets are being 
> > truncated to 74 bytes of data (128 - (40 + 14)).
> 
> They are not truncated. The packets contain Ethernet headers, 
> IP headers, TCP headers. So there will be less data in the 
> first pbuf of a packet.
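>
> For example, assuming no IP or TCP options: 14 bytes of Ethernet header
> + 20 bytes of IP header + 20 bytes of TCP header = 54 bytes, so a
> 128-byte pool pbuf has 128 - 54 = 74 bytes left for TCP payload in the
> first pbuf of the packet.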
> 
> > In my stream receive callback function:
> >
> > err_t StreamRecvCallback(void *arg, struct tcp_pcb *tpcb,
> >                          struct pbuf* p, err_t err);
> >
> > I always receive only a single buffer in variable p. The p->next field
> > is always null, although it should point to the next portion of the
> > data. Do I have an issue in my configuration, or is it likely in my
> > code? Configuration pasted below.
> 
> You should look at your Ethernet driver. The pbufs are filled there.
> Apparently, your driver expects that pbufs from the pbuf pool 
> are large enough to hold a complete packet. With smaller 
> pbufs, the driver should chain them when storing incoming packets.
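>
> As a very rough sketch of what I mean, along the lines of the
> low_level_input() function in the ethernetif.c skeleton that ships with
> lwIP (get_rx_frame_length() and read_rx_bytes() are made-up placeholders
> for your MAC access routines):
>
>     static struct pbuf *low_level_input(struct netif *netif)
>     {
>         struct pbuf *p, *q;
>         u16_t len = get_rx_frame_length();        /* hypothetical MAC helper */
>
>         p = pbuf_alloc(PBUF_RAW, len, PBUF_POOL);
>         if (p != NULL) {
>             /* pbuf_alloc() may return a chain of small pool pbufs;
>                copy part of the frame into each link of the chain */
>             for (q = p; q != NULL; q = q->next) {
>                 read_rx_bytes(q->payload, q->len); /* hypothetical copy */
>             }
>         }
>         return p;   /* NULL means out of pbufs: the frame is dropped */
>     }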
> 
> 
> _______________________________________________
> lwip-users mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/lwip-users
> 




