Re: [lwip-users] Slow release time of closed TCP PCBs
From: Lou Cypher
Subject: Re: [lwip-users] Slow release time of closed TCP PCBs
Date: Fri, 19 Jun 2009 09:23:26 +0200
User-agent: Thunderbird 2.0.0.21 (Windows/20090302)
> One thing I don't understand about this discussion: the tcp_alloc()
> function tries to allocate a new PCB, but if that fails, it tries
> killing a PCB in the TIME_WAIT state (picking the oldest one), then it
> retries the allocation.
That's interesting and good to know.
Would the memory-pool error counter still be incremented in those cases? That
counter is what first warned me: after enabling MEMP_DEBUG I started seeing the
memp_malloc out-of-memory errors.
> This should mean that even if all PCBs are used, it will be possible to
> start a new connection as long as at least one recent connection has
> closed (and has got far enough through the FIN/ACK handshakes, but
> hasn't reached the 2 * TCP_MSL timeout).
>
> This code is similar in LWIP 1.2.0, 1.3.0 and CVS-head, and it works
> fine for us using the raw API, but I haven't looked at implications for
> higher level APIs.
>
> What piece of information am I missing, and why isn't it working for Lou?
That is a good question.
I have an httpd application on lwIP 1.3.0, and to test it I simply clicked the
browser's 'reload' button rapidly to see whether it kept responding. At that
point I noticed that some requests failed, and Wireshark reported broken
connections (failing on the SYN, if I remember correctly).
The debug output then showed memp_malloc errors, and a later printout of the
stats displayed the accumulated errors, with the used/max PCB counts equal to
the total available.
I will run some new tests with all these hints in mind.
Thanks again,
Lou