
Re: [lwip-users] TCP server app, using select


From: mick s
Subject: Re: [lwip-users] TCP server app, using select
Date: Thu, 29 May 2008 16:02:57 +0800

Thanks for your reply.  I think I already understood this, though it probably wasn't clear from the code fragment I posted.  In my code, iClientSock was the result of an accept().

I should have posted something more like:

      FD_ZERO(&readset);
      FD_SET(iServerSock, &readset);
      iNumSocks = iServerSock + 1;
      if ( iClientSock >= 0 )
      {
         FD_SET(iClientSock, &readset);
         if ( iClientSock + 1 > iNumSocks )
            iNumSocks = iClientSock + 1;
      }

      selectTimeout.tv_sec  = 0;
      selectTimeout.tv_usec = POLL_TIMEOUT * 1000;

      if ( lwip_select(iNumSocks, &readset, NULL, NULL, &selectTimeout) == 0 )
         return 0;

      if ( iClientSock >= 0 && ( FD_ISSET(iClientSock, &readset) ) )
      {
         /* Do a read .... */

      }
      else if ( FD_ISSET(iServerSock, &readset) )
      {
         /* Server socket readable, we have a new connection request */
         tAddrlen = sizeof(tAddr);
         iClientSock = lwip_accept(iServerSock, (struct sockaddr *) &tAddr, &tAddrlen);
      }

The fragment is called periodically, with iServerSock already in the listen state and iClientSock initially -1.
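
The "Do a read" placeholder above would be along these lines (just a sketch; the buffer size and the close-on-zero handling here are illustrative, not code from my original post):

      if ( iClientSock >= 0 && FD_ISSET(iClientSock, &readset) )
      {
         char buf[128];
         int  len = lwip_recv(iClientSock, buf, sizeof(buf), 0);

         if ( len > 0 )
         {
            /* process buf[0..len-1] */
         }
         else
         {
            /* len == 0 normally means the peer closed; < 0 is an error.
               Either way, drop the connection and wait for a new one. */
            lwip_close(iClientSock);
            iClientSock = -1;
         }
      }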


On Thu, May 29, 2008 at 3:48 PM, Muhamad Ikhwan Ismail <address@hidden> wrote:

Hi,


> Once a client socket has been accepted, lwip_select always returns 1.  I then test the client socket with
>
>   if ( iClientSock >= 0 && FD_ISSET(iClientSock, &readset) )
>
> Which is always true (after the client socket has been accepted).
>
> I can read data once it has been sent, and send on the socket. The SO_RCVTIMEO option doesn't seem to change this behaviour.  I've tried lwip 1.3.0, and the version from cvs today.  I'm using FreeRTOS on a small ARM chip, the atmel AT91SAM7X256.

When a connection is accepted, you get a new socket with which you execute the I/O operations. You do not do I/O with the same socket used to establish the connection, which is what I see you are doing. That is why you always get 1 when you use select with iClientSock. This is standard for all socket APIs.

During the accept (after select returns 1 for the listening socket) you should have:

int iQClientSock = accept(....);

and for future I/O operations use iQClientSock:
 
 if ( iQClientSock >= 0 && FD_ISSET(iQClientSock, &readset) )
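
Putting it together, a rough sketch of the pattern I mean (names, the port and the buffer size are only examples, not your code):

#include "lwip/sockets.h"
#include <string.h>

/* Rough illustration only: accept one connection and echo one buffer. */
void example_server(void)
{
   struct sockaddr_in tAddr;
   socklen_t tAddrlen;
   fd_set readset;
   char buf[128];
   int len;

   /* Listening socket: used only for bind/listen/select/accept. */
   int iServerSock = lwip_socket(AF_INET, SOCK_STREAM, 0);

   memset(&tAddr, 0, sizeof(tAddr));
   tAddr.sin_family      = AF_INET;
   tAddr.sin_port        = htons(7);            /* example port */
   tAddr.sin_addr.s_addr = htonl(INADDR_ANY);
   lwip_bind(iServerSock, (struct sockaddr *) &tAddr, sizeof(tAddr));
   lwip_listen(iServerSock, 1);

   FD_ZERO(&readset);
   FD_SET(iServerSock, &readset);

   if ( lwip_select(iServerSock + 1, &readset, NULL, NULL, NULL) > 0 &&
        FD_ISSET(iServerSock, &readset) )
   {
      /* accept() returns a NEW socket; all I/O goes through that one. */
      int iQClientSock;
      tAddrlen = sizeof(tAddr);
      iQClientSock = lwip_accept(iServerSock, (struct sockaddr *) &tAddr, &tAddrlen);

      if ( iQClientSock >= 0 )
      {
         /* Read and write on the accepted socket, never on iServerSock. */
         len = lwip_recv(iQClientSock, buf, sizeof(buf), 0);
         if ( len > 0 )
            lwip_send(iQClientSock, buf, len, 0);
         lwip_close(iQClientSock);
      }
   }
}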

I hope I understood your question and that my answer helps you.

Greetings
Ikhwan




Date: Thu, 29 May 2008 15:21:08 +0800
From: address@hidden
To: address@hidden
Subject: [lwip-users] TCP server app, using select


Hi

I'm having a problem using select() in my TCP server application, and I hope someone can point out where I'm mistaken.  It seems to always mark my accepted client sockets as readable, even when there is no data to be read.

I'd like my task to:
- periodically service some of my own functions,
- accept incoming tcp connections and
- service any already accepted connection.
It should drop current connections in favour of new connections if the old connection doesn't have any data.

I've designed my code to first do a select on the server socket after the bind() and a listen().  The select has a timeout.

Once the server socket is readable, I do an lwip_accept() to get the client socket number.  I then use the SO_RCVTIMEO socket option to set the read timeout for the socket.
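
Roughly like this (a simplified sketch of that step, not my exact code; closing any previous client is only there to match the "drop the old connection" requirement above, and the timeout value is just an example -- depending on the lwIP version SO_RCVTIMEO may expect an int in milliseconds rather than a struct timeval):

      /* Listening socket readable: drop any existing client in favour of
         the new connection, then accept it. */
      if ( iClientSock >= 0 )
         lwip_close(iClientSock);

      tAddrlen = sizeof(tAddr);
      iClientSock = lwip_accept(iServerSock, (struct sockaddr *) &tAddr, &tAddrlen);

      if ( iClientSock >= 0 )
      {
         /* Read timeout (example value). */
         int iRecvTimeout = 100;
         lwip_setsockopt(iClientSock, SOL_SOCKET, SO_RCVTIMEO,
                         &iRecvTimeout, sizeof(iRecvTimeout));
      }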

I try an lwip_recvfrom() on the client socket, which returns 0.

If I then call lwip_select() with both the server and client sockets in the readset, it will always return the readset with the client socket marked.  A subsequent recv of the client socket returns 0, unless there actually is data to read.

I can't follow the lwip_select() code entirely, but it seems that the problem may be in the accept.  There is a line:

nsock->rcvevent += -1 - newconn->socket;

This affects the subsequent select, which tests the socket with

 if (p_sock && (p_sock->lastdata || p_sock->rcvevent))


My call to select is as follows:

      FD_ZERO(&readset);
      FD_SET(iServerSock, &readset);
      iNumSocks = iServerSock + 1;
      if ( iClientSock >= 0 )
      {
         FD_SET(iClientSock, &readset);
         if ( iClientSock + 1 > iNumSocks )
            iNumSocks = iClientSock + 1;
      }

      selectTimeout.tv_sec  = 0;
      selectTimeout.tv_usec = POLL_TIMEOUT * 1000;

      if ( lwip_select(iNumSocks, &readset, NULL, NULL, &selectTimeout) == 0 )
         return 0;

Once a client socket has been accepted, lwip_select always returns 1.  I then test the client socket with

  if ( iClientSock >= 0 && FD_ISSET(iClientSock, &readset) )

Which is always true (after the client socket has been accepted).

I can read data once it has been sent, and send on the socket. The SO_RCVTIMEO option doesn't seem to change this behaviour.  I've tried lwip 1.3.0, and the version from cvs today.  I'm using FreeRTOS on a small ARM chip, the atmel AT91SAM7X256.


Thanks in advance






