Yes, but if it gets lost, TCP should retransmit it. If you're using UDP, and expecting all the data to get through, then that's a problem. None of the lwIP APIs have a "send" function that waits until …
I agree with the latter. I think it's pretty straightforward to start a timer for some period (1-5 s) on each character to be sent and, on a timeout of that timer or on a full buffer, call send(). …
Hey, that is the lwIP implementation of the Nagle algorithm. There have already been some discussions about it on this list. You can use the TCP_NODELAY socket option in the sequential APIs (setsockopt). Also TCP_…
I'm not sure if there's a supported way of doing this with the API, but the above will work for now. I think it defaults to 0, so the Nagle algorithm will be used unless you set that flag. Kieran
Are you using the raw API? I have data only for that mode. Chances are you need to speed up your Ethernet driver and choose a faster checksum algorithm, or write or find one in assembly language. Both make…
Hi, I am trying to figure out if I can increase my TCP bandwidth. Using a PC timer, I see that as I send TCP packets the delay is in the range of 500 µs on average. There may be a couple at the s…
I opened a bug for this ( https://savannah.nongnu.org/bugs/index.php?24212 ) as I think this is a severe bug that should not be forgotten. Simon
Thanks for taking the time to produce such a detailed and helpful analysis. Yes, that's a problem. We'll need to fix that somehow, and it looks to be the fundamental cause of this bug. Your solution …
Hi there, Kieran asked me to do further investigations on the topic "Deadlocked tcp_retransmit due to exceeded pcb->cwnd" (see http://lists.gnu.org/archive/html/lwip-users/2008-07/msg00098.ht…)
Muhamad Ikhwan Ismail wrote: I found the problem already. My driver was set up to transfer out only one buffer per frame (1520 bytes), since we want to save as much processing power as we can, and…
Jifl, I found the problem. First, just for understanding: my lwIP application receives packets FROM the serial port and forwards them to a socket using the send function. Yes, the problem is in my driver... I ha…
In any case, his problem will result from the socket interface calling it. He's seeing what I was dealing with two weeks ago, and I (respectfully) agree that this is a problem that can (and should) be av…
I think I found a workaround: instead of increasing TCP_SND_QUEUELEN (useless for this problem), I tried disabling the Nagle algorithm, and now the problem doesn't seem to happen. I'm doing some addition…
It is implemented. How are you calling setsockopt? Jifl
Hi. I want to try to disable the Nagle algorithm. I saw in the lwip_setsockopt_internal function that it's possible to use the TCP_NODELAY option, but this option seems unimplemented in lwip_setsockopt. Does someone ex…
With the other problems fixed, which were causing every segment to be flushed, I now see no segments except the first one being sent. Here's a cut of my debug serial output: Initializing Et…
I think that increasing TCP_SND_QUEUELEN is not the solution in this case, since the problem is that the application has to keep running when one of the peers is unplugged. So a bigger TCP_SND_QUEUELEN would just…
You can get problems this way, of course, but nevertheless, it should work... until a client stops responding. Oh, THAT queue! That's something different, of course! :-) There are two defines in opt…
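The send-queue knobs being discussed are the TCP send-buffer options in lwIP's opt.h, which projects normally override from their own lwipopts.h. A hedged example — the values below are illustrative only, must fit your RAM budget, and have to satisfy lwIP's own sanity checks (the queue must be able to hold enough segments to cover TCP_SND_BUF):

```c
/* lwipopts.h -- example values only; tune for your target's memory */
#define TCP_MSS           1460
#define TCP_SND_BUF       (4 * TCP_MSS)     /* send-buffer space per connection */
#define TCP_SND_QUEUELEN  ((4 * TCP_SND_BUF) / TCP_MSS)  /* queued segments per pcb */
```

Note that, as pointed out earlier in the thread, enlarging these only buys time when the peer stops acknowledging: a bigger queue still fills up eventually, so the application must also handle tcp_write()/send() reporting a full buffer.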