
RE: [lwip-users] Dropping existing TCP connections to service new ones


From: Art R.
Subject: RE: [lwip-users] Dropping existing TCP connections to service new ones
Date: Wed, 20 Feb 2008 12:42:41 -0800 (PST)

Setting it to TCP_PRIO_MAX+1 would probably work. I was thinking that the
maximum value of 127 was the limit of the prio field (assuming a signed int),
but it's an unsigned 8-bit value, so 128 is fine.
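
Roughly what I have in mind, as a minimal sketch (the callback name
on_accept and the listen_pcb handle are illustrative only, and the details
may vary between lwIP versions):

#include "lwip/tcp.h"

/* Illustrative accept callback: give every accepted connection a priority
 * above TCP_PRIO_MAX (prio is a u8_t, so TCP_PRIO_MAX + 1 == 128 still
 * fits).  tcp_kill_prio() only considers pcbs whose prio is <= the new
 * connection's priority, and new connections normally come in at no more
 * than TCP_PRIO_MAX, so these pcbs should never be selected for killing. */
static err_t on_accept(void *arg, struct tcp_pcb *newpcb, err_t err)
{
  (void)arg;
  if (err != ERR_OK || newpcb == NULL) {
    return ERR_VAL;
  }
  tcp_setprio(newpcb, TCP_PRIO_MAX + 1);
  /* ... register tcp_recv()/tcp_err() callbacks as usual ... */
  return ERR_OK;
}

/* Registered once at startup:
 *   tcp_accept(listen_pcb, on_accept);
 */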



Bill Auerbach wrote:
> 
> 
>> -----Original Message-----
>> From: address@hidden
>> [mailto:address@hidden On
>> Behalf
>> Of Art R.
>> Sent: Wednesday, February 20, 2008 2:04 PM
>> To: address@hidden
>> Subject: [lwip-users] Dropping existing TCP connections to service new
>> ones
>> 
>> 
>> The tcp.c module has a function, tcp_kill_prio, which is used to kill
>> existing TCP connections when a new connection is being attempted and
>> there is an 'out of PCBs' situation. The comment in that function says
>> "kill the oldest active connection that has lower priority than prio",
>> but the test it performs is "...pcb->prio <= prio". This appears to
>> allow a new connection to kill an existing one of the same priority.
>> 
>> The default case is that all pcbs will have the same priority ("normal"),
>> so the oldest is killed.
>> 
>> Is this a bug? Should the code read "if (pcb->prio < prio && ..." (less
>> than instead of less than or equal)? Or is it intentional?
>> 
>> What would be the best way to disable the killing of active connections?
>> (Preferably without modifying the lwIP source code.)
> 
> Can't you call tcp_setprio with a value of TCP_PRIO_MAX+1 for each pcb?
> 
> Bill
> 

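P.S. For reference, the selection logic in question looks roughly like this
(a simplified paraphrase of tcp_kill_prio() in tcp.c, not the exact source;
check your own lwIP version):

static void tcp_kill_prio(u8_t prio)
{
  struct tcp_pcb *pcb, *inactive = NULL;
  u32_t inactivity = 0;

  /* Find the oldest active pcb whose priority is <= the new connection's
   * priority.  The "<=" is what lets a new connection evict an existing
   * one of the same priority; changing it to "<" would only evict
   * strictly lower-priority connections. */
  for (pcb = tcp_active_pcbs; pcb != NULL; pcb = pcb->next) {
    if (pcb->prio <= prio &&
        (u32_t)(tcp_ticks - pcb->tmr) >= inactivity) {
      inactivity = tcp_ticks - pcb->tmr;
      inactive = pcb;
    }
  }
  if (inactive != NULL) {
    tcp_abort(inactive);
  }
}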