RE: [lwip-users] Out Of Order Sequence (Segments?)


From: Bill Auerbach
Subject: RE: [lwip-users] Out Of Order Sequence (Segments?)
Date: Wed, 9 Apr 2008 17:14:56 -0400

> Depends on your lwipopts.h settings, and the expected traffic pattern (or
> at least the level of traffic you want to keep efficient). Clearly in
> general the more memory the better, so it's then a question of where the
> compromises are going to be.

I do have a lot of memory.  I need to balance how much data the stack
buffers (or has to buffer) against how much data I can receive and buffer
outside of lwIP.  I'm receiving 10 to 20 1-2 MB blocks per second, so it's
pretty continuous, but the data is pulled out of lwIP's pbufs quickly.  I'm
trying to find a way to determine, worst case, how much data can be buffered
in the TSEC's memory and in lwIP.
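
For my own notes, a rough way to bound it looks like the following sketch.
N_RX_BD and RX_BUF_SIZE are only placeholders for whatever the TSEC driver's
RX descriptor ring actually uses, and TCP_WND is the lwipopts.h setting:

/* Rough upper bound on data held in buffers at any instant: the TCP
 * receive window limits what lwIP will accept and hold, and the TSEC
 * BD ring limits what the MAC can hold before the driver hands frames
 * to the stack. */
#define N_RX_BD              64      /* assumed RX descriptor ring depth   */
#define RX_BUF_SIZE          1536    /* assumed per-descriptor buffer size */

#define MAX_IN_DRIVER        (N_RX_BD * RX_BUF_SIZE)
#define MAX_IN_LWIP          (TCP_WND)   /* received but not yet read */
#define MAX_BUFFERED_BYTES   (MAX_IN_DRIVER + MAX_IN_LWIP)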

> Not much I can help with there I'm afraid. Although I've played with a
> powerpc FEC in the past, it's been a while and I imagine the problem is
> specific to your driver design.

The DMA problem is outside the driver: it's in my receive callback, where I
use dma_memcpy to copy from lwIP's pbuf into a static memory buffer.  If I
use dma_memcpy to store into an outgoing transmit pbuf, or for MEMCPY, I get
the lockup problems.  I do wait for the DMA-complete flag, so I know that's
not the issue.  I can sometimes transfer a few million packets before it
freezes.  Yet using only the one dma_memcpy in the receive callback, I've
run for 500,000,000 packets without a hang.  When the hardware is more
stable and I'm past the prototype proof of concept, the plan is to build a
linked list of pbuf payload data and not copy the data at all (I can get
over 900 Mbit/s without the copy and only 500-600 Mbit/s with it).  Then all
of my memory will be allocated to pbufs, because I can't free them until I'm
done with the data.  With this method I can also free pbufs as the data is
consumed, but it's more complicated.
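
For context, the receive path looks roughly like this.  It's a sketch, not
my exact code: dma_memcpy, dma_wait_complete, dest_buf and rx_offset stand
in for my own DMA routines and static buffer.

#include "lwip/tcp.h"
#include "lwip/pbuf.h"

/* Placeholders for my DMA routines. */
extern void dma_memcpy(void *dst, const void *src, u32_t len);
extern void dma_wait_complete(void);

static u8_t  dest_buf[2 * 1024 * 1024];   /* static destination buffer (size assumed) */
static u32_t rx_offset;                   /* reset elsewhere when a block completes   */

/* Copying receive callback (raw API). */
static err_t recv_cb(void *arg, struct tcp_pcb *pcb, struct pbuf *p, err_t err)
{
    struct pbuf *q;

    LWIP_UNUSED_ARG(arg);
    LWIP_UNUSED_ARG(err);

    if (p == NULL) {                  /* remote host closed the connection */
        tcp_close(pcb);
        return ERR_OK;
    }

    for (q = p; q != NULL; q = q->next) {
        dma_memcpy(dest_buf + rx_offset, q->payload, q->len);  /* the one DMA copy */
        dma_wait_complete();          /* poll the DMA-done flag before continuing */
        rx_offset += q->len;
    }

    tcp_recved(pcb, p->tot_len);      /* re-open the receive window */
    pbuf_free(p);                     /* data copied out, release the pbuf chain */
    return ERR_OK;
}

The zero-copy version would skip the loop, keep the pbuf chain on a list, and
defer tcp_recved()/pbuf_free() until the data has actually been consumed,
which is exactly why all of the memory ends up committed to pbufs.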

> If you're expecting one connection at a time, then TCP_WND divided by
> PBUF_POOL_BUFSIZE should give you something around the desired
> PBUF_POOL_SIZE. Of course there will be inefficiencies unless you also
> tune your MSS accordingly, so received packets closely fit into your pbufs.

It's a requirement to have only one connection, and any more than one are
blocked.  MSS is 1460, PBUF_POOL_BUFSIZE is 1642 (MTU plus some headroom,
since I have to align the payload to 64 bytes), and PBUF_POOL_SIZE is 2900.
Sounds like even 50 would be enough to make sure I don't run out.
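
In lwipopts.h terms that looks roughly like this (a sketch restating the
relationship above, not necessarily my exact file):

/* lwipopts.h excerpt (sketch) */
#define TCP_MSS            1460
#define PBUF_POOL_BUFSIZE  1642   /* MTU plus headroom so the payload can be 64-byte aligned */
#define PBUF_POOL_SIZE     2900   /* far larger than TCP_WND / PBUF_POOL_BUFSIZE requires    */

/* With one connection, roughly TCP_WND / PBUF_POOL_BUFSIZE pool pbufs can be
 * tied up by received-but-unread data: e.g. a 65535-byte window needs about
 * 65535 / 1642 = 40 pbufs, which is why even 50 already looks like plenty. */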

> If you are expecting to receive more data than one connection, then you
> have to start making guesses as to peak pbuf use. That's when it's useful
> to start playing with the lwIP statistics (LWIP_STATS) to see what the max
> used resources have been on a particular run.

There is a connection for HTTP, but it's for settings and other things and
isn't part of the main use of the application (high-speed commercial
printing).
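
On the LWIP_STATS suggestion, something like the sketch below should be
enough to read the high-water marks, assuming LWIP_STATS and
LWIP_STATS_DISPLAY are turned on in lwipopts.h (stats_display() is lwIP's
own dump routine):

#include "lwip/opt.h"
#include "lwip/stats.h"

/* Dump every counter lwIP keeps, including the "max" (high-water) fields
 * for the memp pools and the heap. */
void show_lwip_high_water(void)
{
#if LWIP_STATS && LWIP_STATS_DISPLAY
    stats_display();
#endif
}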

Thanks for your reply!
Bill




