Subject: [lwip-devel] Significant overhead periodically during TCP receive transfers
Date: Tue, 7 Oct 2008 10:15:42 -0400
Our lwIP-based raw-API application requires consistent bandwidth. The setup is simple: we receive a large block of data (1-2 MB) from a WinXP PC and send a short (836-byte) response when we are done with the block; this triggers the next block. We record the arrival times of the first and last packets of each block (the packets passed to our tcp_recv callback) and send those timestamps back in the 836-byte response, so we have a real-time bandwidth display on the PC (the elapsed time starts at the first packet, of course). We see very consistent numbers, both on a LAN and directly connected, for a while. Then, at approximately 9 minutes 20-30 seconds, the bandwidth falls substantially for one data block (we display minimum, maximum, and average throughput for diagnostic purposes). Unfortunately, this bandwidth drop is a show-stopper for us.
Does anyone know the internals well enough to explain whether lwIP has some timer that fires periodically and does work that takes a few milliseconds? I'm positive it's not our surrounding firmware (I've been at this almost two weeks assuming it was not lwIP related). I've turned off DHCP, and I monitor inbound ARP and UDP packets; these are not the problem (I even resorted to filtering UDP out in ethernetif.c to eliminate any overhead they might cause higher in the stack). I suspect it's ARP related, but I cannot prove it, and I was not able to build with LWIP_ARP disabled. ARP_QUEUEING and other lwipopts settings have had no effect on this, nor has enabling debug output and assertions shown anything unusual. Stopping in the debugger has shown nothing out of the ordinary in lwip_stats: no errors or drops.
Thank you for any feedback on this.