Re: [lwip-users] Slow release time of closed TCP PCBs

From: address@hidden
Subject: Re: [lwip-users] Slow release time of closed TCP PCBs
Date: Wed, 17 Jun 2009 20:51:32 +0200
User-agent: Thunderbird (Macintosh/20090302)

What you are seeing is that the PCBs remain in a wait-state for some time after closing. This prevents stray packets from the old (closed) connection from possibly being accepted on a new connection: the port stays known to the stack, which can then answer late packets for that port with RST. However, while these PCBs are not yet freed, they are *not* in an active state any more, so the "tcp_slowtmr: no active pcbs" message is correct.
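For context, this wait-state is TCP's TIME-WAIT (plus the shorter FIN-WAIT-2/LAST-ACK timeouts). In lwIP the hold time derives from TCP_MSL; the value below is the 1.3.x default as I remember it, so verify against your own tree:

```c
/* Default from lwIP 1.3.x (tcp.h) -- quoted from memory, check your tree. */
#define TCP_MSL 60000UL   /* maximum segment lifetime, in milliseconds */

/* tcp_slowtmr() only purges a TIME-WAIT pcb after it has sat in that
 * state for about 2 * TCP_MSL, so a pcb can stay allocated long after
 * the connection looks closed on the wire. */
```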

I guess you'll have to set MEMP_NUM_TCP_PCB higher, so that the PCBs held in the wait-state don't exhaust the pool before new connections arrive...
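A sketch of the relevant lwipopts.h entries; the numbers are purely illustrative, size them for your expected connection rate multiplied by the wait-state hold time:

```c
/* lwipopts.h -- illustrative values, not a recommendation. */
#define MEMP_NUM_TCP_PCB        16  /* active + wait-state connections */
#define MEMP_NUM_TCP_PCB_LISTEN  2  /* listening pcbs */
#define PBUF_POOL_SIZE          24  /* often needs raising under the same load */
```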


Lou Cypher wrote:
I'm using lwIP 1.3.0, with an HTTP-server-like application (using the raw-mode
httpd in contrib as a base).

When I make many connections to the server, e.g. by repeatedly reloading a page, I
quickly run into memory errors; enabling debug and inspecting MEM TCP_PCB in the
stats, I find that all the TCP PCBs are in use, even though all the connections have
been closed properly -- I verified this with Wireshark.
It takes some seconds (up to around ten) before the PCBs start being freed and
I again have enough PCBs for new connections -- I'm #defin-ing

If I enable TCP_DEBUG I see the message "tcp_slowtmr: no active pcbs", even when
stats_display() shows 100% of the TCP PCBs in use.
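For reference, the debug/statistics switches involved here (option names as in lwIP 1.3.x opt.h; treat this as a sketch and check your tree):

```c
/* lwipopts.h -- observation switches used above (lwIP 1.3.x names). */
#define LWIP_STATS          1
#define LWIP_STATS_DISPLAY  1            /* makes stats_display() available */
#define MEMP_STATS          1            /* per-pool counters, incl. TCP_PCB */
#define LWIP_DEBUG          1
#define TCP_DEBUG           LWIP_DBG_ON  /* prints the tcp_slowtmr messages */
```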

What controls when, and how quickly, the PCBs are fully released for reuse?

