
Re: [lwip-users] pbuf processing in another thread


From: Kieran Mansley
Subject: Re: [lwip-users] pbuf processing in another thread
Date: Thu, 08 Apr 2010 09:33:39 +0100

On Thu, 2010-04-08 at 08:56 +0200, ncoage wrote:
> 
> I was thinking about this and it seems that I have chosen the wrong
> path. I have another question. Is it possible, for a specific TCP
> connection, to change parameters such as the MSS and the receive
> window in order to have flow control? Let's assume that our
> application sends data from TCP to the serial port. We have a queue
> of 200 bytes. Each byte received from the TCP stream is sent to the
> queue. Another thread retrieves the bytes from the queue and sends
> them to the serial port. To prevent overflow of the queue, the
> window should be 200 bytes (the MSS should also be reduced). After
> each byte received from the queue, the thread should call
> tcp_recved(1) (if we can call this function from another thread, or
> we can call it using tcpip_callback). Does this make sense?
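
(On the threading part of the question: lwIP's raw-API functions such
as tcp_recved() are only safe to call from the tcpip thread, so from
another thread the tcpip_callback() route asked about is the right
one.  A minimal, untested sketch of that variant follows;
notify_byte_consumed() is an illustrative name, not part of lwIP, and
real code must also make sure the pcb has not been freed by the time
the callback runs.)

#include "lwip/tcpip.h"
#include "lwip/tcp.h"

/* Runs in the tcpip thread, where calling raw-API functions is safe. */
static void recved_in_tcpip_thread(void *ctx)
{
  struct tcp_pcb *pcb = (struct tcp_pcb *)ctx;
  tcp_recved(pcb, 1);   /* re-open the receive window by one byte */
}

/* Called from the serial-port thread after it consumes one byte. */
void notify_byte_consumed(struct tcp_pcb *pcb)
{
  tcpip_callback(recved_in_tcpip_thread, pcb);
}

(Note that each call posts a message to the tcpip mailbox, so doing
this per byte is expensive; the reply avoids it entirely.)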

That scheme makes sense, but it's not the way I would do it.  I would
make the thread that writes to the queue responsible for not
overflowing it.  Have some state about the queue (e.g. read and write
pointers) shared between the two threads so it knows how much space
there is.  If it has more data from TCP than it can fit in the queue,
it just holds on to the TCP data until there is space.  It would do
this by only calling tcp_recved() for data that it has actually put on
the queue; this keeps the TCP receive window closed until it is ready
for more data, but does not require a 200-byte TCP window, which would
be very bad for network performance.  You can use tcp_poll() to get a
periodic callback, letting you test the queue again, see whether any
of the data you delayed putting in the queue can now be written out,
and of course call tcp_recved() for it if so.
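
For concreteness, a minimal sketch of that scheme against lwIP's raw
API (untested; the ring-buffer state, drain_held() and
flow_control_setup() names are illustrative, and it assumes a single
connection with a single-producer/single-consumer queue):

#include "lwip/tcp.h"
#include "lwip/pbuf.h"

#define QUEUE_SIZE 200

/* Ring buffer shared with the serial-port thread; q_rd is advanced by
 * that thread, q_wr by the tcpip thread.  Real code must make sure
 * the index loads/stores are atomic on the target CPU. */
static volatile u16_t q_rd, q_wr;
static u8_t q_buf[QUEUE_SIZE];

static struct pbuf *held;    /* received data we could not queue yet */
static u16_t held_offset;    /* how much of 'held' is already queued */

/* Copy as much held TCP data into the queue as fits, and ack only
 * that much, so the receive window stays closed for the rest. */
static void drain_held(struct tcp_pcb *pcb)
{
  while (held != NULL) {
    u16_t fill   = (u16_t)((q_wr + QUEUE_SIZE - q_rd) % QUEUE_SIZE);
    u16_t space  = (u16_t)(QUEUE_SIZE - 1 - fill);
    u16_t contig = (u16_t)(QUEUE_SIZE - q_wr);  /* bytes before wrap */
    u16_t n      = (u16_t)(held->tot_len - held_offset);

    if (space == 0) {
      return;                /* queue full: keep holding the rest */
    }
    if (n > space)  { n = space; }
    if (n > contig) { n = contig; }

    pbuf_copy_partial(held, &q_buf[q_wr], n, held_offset);
    q_wr = (u16_t)((q_wr + n) % QUEUE_SIZE);
    held_offset = (u16_t)(held_offset + n);
    tcp_recved(pcb, n);      /* ack only what made it onto the queue */

    if (held_offset == held->tot_len) {
      pbuf_free(held);       /* everything queued: release the pbufs */
      held = NULL;
      held_offset = 0;
    }
  }
}

/* tcp_recv() callback: append new data to whatever is still held and
 * try to queue it. */
static err_t recv_cb(void *arg, struct tcp_pcb *pcb, struct pbuf *p,
                     err_t err)
{
  (void)arg; (void)err;
  if (p == NULL) {           /* remote side closed the connection */
    tcp_close(pcb);
    return ERR_OK;
  }
  if (held != NULL) {
    pbuf_cat(held, p);       /* chain it after the held data */
  } else {
    held = p;
  }
  drain_held(pcb);
  return ERR_OK;
}

/* tcp_poll() callback: retry the data we had to hold back. */
static err_t poll_cb(void *arg, struct tcp_pcb *pcb)
{
  (void)arg;
  drain_held(pcb);
  return ERR_OK;
}

void flow_control_setup(struct tcp_pcb *pcb)
{
  tcp_recv(pcb, recv_cb);
  tcp_poll(pcb, poll_cb, 2);  /* roughly every 2 * 500 ms */
}

The key point is that tcp_recved() is only ever called from the recv
and poll callbacks, i.e. from the tcpip thread, so no tcpip_callback()
is needed for it.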

Kieran