
[lwip-users] Re: Re: Re: Re: socket slow down


From: Rastislav Uhrin
Subject: [lwip-users] Re: Re: Re: Re: socket slow down
Date: Fri, 20 May 2016 16:23:12 +0200

Hi Jens,

Of course I tried your code, but the result was that the delay started to show 
immediately.

If low_level_input() reads all packets and puts them into the pbuf pool, why do 
you prefer to use

do {
} while (p != NULL);

in ethernetif_input() and have low_level_input() read just one packet?

I even tried using a counting semaphore to make sure nothing is missed.
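
For reference, this is roughly what I meant by a counting semaphore (only a 
sketch, using the FreeRTOS calls directly instead of the lwIP sys_sem 
wrappers; the names eth_rx_count, eth_rx_sem_init, eth_rx_signal_from_isr and 
eth_rx_task_loop are just placeholders I made up):

#include "FreeRTOS.h"
#include "semphr.h"
#include "lwip/netif.h"
#include "lwip/pbuf.h"
#include "lwip/err.h"

/* Counting semaphore: every RX interrupt adds one token, so a signal is not
 * lost when several frames arrive before the receive task gets to run. */
static SemaphoreHandle_t eth_rx_count;

/* the driver's low_level_input() as shown further down in this thread */
struct pbuf *low_level_input(void);

void eth_rx_sem_init(void)
{
  /* allow up to 32 outstanding RX signals, initially none */
  eth_rx_count = xSemaphoreCreateCounting(32, 0);
}

/* called from ETH0_0_IRQHandler() instead of sys_sem_signal_isr() */
void eth_rx_signal_from_isr(void)
{
  BaseType_t woken = pdFALSE;

  xSemaphoreGiveFromISR(eth_rx_count, &woken);
  portYIELD_FROM_ISR(woken);
}

/* body of the receive task: one token is consumed per frame */
void eth_rx_task_loop(struct netif *netif)
{
  struct pbuf *p;

  while (1)
  {
    xSemaphoreTake(eth_rx_count, portMAX_DELAY);

    p = low_level_input();
    if (p != NULL)
    {
      /* hand the frame to lwIP (EtherType filtering as in
       * ethernetif_input() omitted for brevity) */
      if (netif->input(p, netif) != ERR_OK)
      {
        pbuf_free(p);
      }
    }
  }
}

The idea is that xSemaphoreGiveFromISR() increments the count on every 
interrupt, so even if two frames each raise an interrupt before the task runs, 
the task still calls low_level_input() once per signal.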


Thank you a lot for your support; I have been spending a month or more on this issue.

Rastislav

-----Original Message-----
From: lwip-users [mailto:address@hidden] On behalf of Jens Nielsen
Sent: Friday, 20 May 2016 16:13
To: address@hidden
Subject: Re: [lwip-users] Re: Re: Re: socket slow down

Hi

I'm sorry, but I don't know how to explain this any more clearly; I even sent you 
a piece of code you could try... Did you test that?

BR /Jens


On 2016-05-20 15:48, Rastislav Uhrin wrote:
> Hi Jens,
>
> Sorry, I don't understand what you are trying to say. Can you please explain 
> in more detail what I should do?
>
> Thanks
>
> -----Original Message-----
> From: lwip-users [mailto:address@hidden] On behalf of Jens Nielsen
> Sent: Friday, 20 May 2016 15:16
> To: address@hidden
> Subject: Re: [lwip-users] Re: Re: socket slow down
>
> Hi
>
> I'd say that low_level_input() reads one packet and puts it in a (chain of) 
> pbufs, then your ethernetif_input() hands that packet over to lwIP and 
> waits for the semaphore again.
>
> BR /Jens
>
> On 2016-05-20 15:09, Rastislav Uhrin wrote:
>> Hi Jens,
>>
>> I really appreciate your help.
>>
>> Sorry, I think that low_level_input() already takes care of reading all 
>> packets, see here... What else could be wrong?
>>
>> static struct pbuf *
>> low_level_input(void)
>> {
>>     struct pbuf *p = NULL;
>>     struct pbuf *q;
>>     uint32_t len;
>>
>>     len = XMC_ETH_MAC_GetRxFrameSize(&eth_mac);
>>
>> #if ETH_PAD_SIZE
>>     len += ETH_PAD_SIZE;    /* allow room for Ethernet padding */
>> #endif
>>
>>     if (len < XMC_ETH_MAC_BUF_SIZE)
>>     {
>>
>>       /* We allocate a pbuf chain of pbufs from the pool. */
>>       p = pbuf_alloc(PBUF_RAW, len, PBUF_POOL);
>>     
>>       if (p != NULL)
>>       {
>> #if ETH_PAD_SIZE
>>         pbuf_header(p, -ETH_PAD_SIZE);  /* drop the padding word */ 
>> #endif
>>
>>         XMC_ETH_MAC_ReadFrame(&eth_mac, buffer, len);
>>
>>         len = 0;
>>         /* We iterate over the pbuf chain until we have read the entire
>>          * packet into the pbuf. */
>>         for (q = p; q != NULL; q = q->next)
>>         {
>>           /* Read enough bytes to fill this pbuf in the chain. The
>>            * available data in the pbuf is given by the q->len variable.
>>            * This does not necessarily have to be a memcpy: you can also
>>            * preallocate pbufs for a DMA-enabled MAC and truncate them to
>>            * the actually received size after receiving. In this case,
>>            * ensure the tot_len member of the pbuf is the sum of the
>>            * chained pbuf len members.
>>            */
>>            memcpy(q->payload, &buffer[len], q->len);
>>            len += q->len;
>>         }
>>
>> #if ETH_PAD_SIZE
>>         pbuf_header(p, ETH_PAD_SIZE);    /* Reclaim the padding word */
>> #endif
>>
>>       }
>>       else
>>       {
>>         XMC_ETH_MAC_ReadFrame(&eth_mac, NULL, 0);
>>       }
>>     }
>>     else
>>     {
>>       XMC_ETH_MAC_ReadFrame(&eth_mac, NULL, 0);
>>     }
>>
>>     return p;
>> }
>>
>>
>>
>> -----Original Message-----
>> From: lwip-users [mailto:address@hidden] On behalf of Jens Nielsen
>> Sent: Friday, 20 May 2016 14:55
>> To: address@hidden
>> Subject: Re: [lwip-users] Re: socket slow down
>>
>> Hi
>>
>> In your "Task waiting on semaphore" you have the exact same problem as many 
>> others: you wait for the semaphore, then handle only one packet, then wait 
>> for the semaphore again. When the semaphore is signalled, you have to loop 
>> until all pending packets have been served.
>>
>> I don't know your controller, but if you're lucky something like this 
>> might be enough:
>>
>>      while(1)
>>      {
>>        sys_arch_sem_wait(&eth_rx_semaphore, 0);
>>
>>        do {
>>          p = low_level_input();
>>          if (p != NULL)
>>          {
>>               ... all the code you had here ...
>>          }
>>        } while ( p != NULL );
>>      }
>>
>>
>>
>> Best regards
>> Jens
>>
>> On 2016-05-20 14:29, Rastislav Uhrin wrote:
>>> Hello Jens,
>>>
>>> I am still stuck on this problem.
>>>
>>> Yes, I have an interrupt and I have a semaphore. See below.
>>>
>>> But what could be wrong? How do I detect that packets are falling behind?
>>>
>>> Thanks a lot
>>>
>>> Rastislav
>>>
>>>
>>> Interrupt
>>> void ETH0_0_IRQHandler(void)
>>> {
>>>      uint32_t status;
>>>
>>>      status = XMC_ETH_MAC_GetEventStatus(&eth_mac);
>>>
>>>      if (status & XMC_ETH_MAC_EVENT_RECEIVE)
>>>      {
>>>        sys_sem_signal_isr(&eth_rx_semaphore);
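>>>        /* Note: if eth_rx_semaphore is a binary semaphore, a second RX
>>>         * interrupt arriving before ethernetif_input() has run is merged
>>>         * into the same signal, which is the "one packet behind" effect
>>>         * Jens describes. */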
>>>      }
>>>
>>>      XMC_ETH_MAC_ClearEventStatus(&eth_mac, status);
>>>
>>> }
>>>
>>> Task waiting on semaphore
>>> static void
>>> ethernetif_input(void *arg)
>>> {
>>>      struct pbuf *p = NULL;
>>>      struct eth_hdr *ethhdr;
>>>      struct netif *netif = (struct netif *)arg;
>>>
>>>      while(1)
>>>      {
>>>        sys_arch_sem_wait(&eth_rx_semaphore, 0);
>>>
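>>>        /* low_level_input() returns at most one frame per semaphore wait;
>>>         * frames that arrived in the meantime stay queued until the next
>>>         * signal. */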
>>>        p = low_level_input();
>>>
>>>        if (p != NULL)
>>>        {
>>>          ethhdr = p->payload;
>>>          switch (htons(ethhdr->type))
>>>          {
>>>            case ETHTYPE_IP:
>>>            case ETHTYPE_ARP:
>>>              /* full packet is sent to tcpip_thread to be processed */
>>>              if (netif->input(p, netif) != ERR_OK)
>>>              {
>>>                pbuf_free(p);
>>>              }
>>>              break;
>>>
>>>            default:
>>>              pbuf_free(p);
>>>              break;
>>>          }
>>>        }
>>>      }
>>> }
>>>
>>> Input processing
>>> static struct pbuf *
>>> low_level_input(void)
>>> {
>>>      struct pbuf *p = NULL;
>>>      struct pbuf *q;
>>>      uint32_t len;
>>>
>>>      len = XMC_ETH_MAC_GetRxFrameSize(&eth_mac);
>>>
>>> #if ETH_PAD_SIZE
>>>      len += ETH_PAD_SIZE;    /* allow room for Ethernet padding */
>>> #endif
>>>
>>>      if (len < XMC_ETH_MAC_BUF_SIZE)
>>>      {
>>>
>>>        /* We allocate a pbuf chain of pbufs from the pool. */
>>>        p = pbuf_alloc(PBUF_RAW, len, PBUF_POOL);
>>>      
>>>        if (p != NULL)
>>>        {
>>> #if ETH_PAD_SIZE
>>>          pbuf_header(p, -ETH_PAD_SIZE);  /* drop the padding word */ 
>>> #endif
>>>
>>>          XMC_ETH_MAC_ReadFrame(&eth_mac, buffer, len);
>>>
>>>          len = 0;
>>>          /* We iterate over the pbuf chain until we have read the entire
>>>           * packet into the pbuf. */
>>>          for (q = p; q != NULL; q = q->next)
>>>          {
>>>            /* Read enough bytes to fill this pbuf in the chain. The
>>>             * available data in the pbuf is given by the q->len variable.
>>>             * This does not necessarily have to be a memcpy: you can also
>>>             * preallocate pbufs for a DMA-enabled MAC and truncate them to
>>>             * the actually received size after receiving. In this case,
>>>             * ensure the tot_len member of the pbuf is the sum of the
>>>             * chained pbuf len members.
>>>             */
>>>             memcpy(q->payload, &buffer[len], q->len);
>>>             len += q->len;
>>>          }
>>>
>>> #if ETH_PAD_SIZE
>>>          pbuf_header(p, ETH_PAD_SIZE);    /* Reclaim the padding word */
>>> #endif
>>>
>>>        }
>>>        else
>>>        {
>>>          XMC_ETH_MAC_ReadFrame(&eth_mac, NULL, 0);
>>>        }
>>>      }
>>>      else
>>>      {
>>>        XMC_ETH_MAC_ReadFrame(&eth_mac, NULL, 0);
>>>      }
>>>
>>>      return p;
>>> }
>>>
>>> -----Original Message-----
>>> From: lwip-users [mailto:address@hidden] On behalf of Jens Nielsen
>>> Sent: Wednesday, 11 May 2016 16:52
>>> To: address@hidden
>>> Subject: Re: [lwip-users] socket slow down
>>>
>>> Hi
>>>
>>> If you search the list you will find a lot of people with the same 
>>> question. It's impossible to tell where your packets are delayed without 
>>> you doing some analysis (traces? breakpoints?), but one thing I can say for 
>>> sure is that your problem is almost certainly not within lwIP. A common 
>>> error is to assume that one packet equals one interrupt, which equals one 
>>> signalled semaphore, which equals one processed packet; whenever you 
>>> receive a second packet before the previous one is processed you'll be 
>>> "one packet behind" and experience delays like the ones you describe.
>>> Where did you get your driver?
>>>
>>> Best regards
>>> Jens
>>>
>>>
>>> On 2016-05-11 12:42, Rastislav Uhrin wrote:
>>>> Hello,
>>>>
>>>> I need advice and help on one issue with the lwIP stack, version 1.4.1. 
>>>> I am new to this stack and to networking in general. Nevertheless, I 
>>>> have integrated it into an application on an Infineon XMC processor 
>>>> together with FreeRTOS.
>>>>
>>>> After looking at many different examples on the internet and a lot of 
>>>> trial and error, I am using netconn sockets. The application works!
>>>>
>>>> The only problem is that after some time, or rather after exchanging 
>>>> several tens to hundreds of packets of different sizes, the response gets 
>>>> slow: from 2 ms up to 2-3 seconds. It still works, but slowly. The same 
>>>> happens if I use ping.
>>>>
>>>> I tried all possible settings of the lwIP options, but of course, since I 
>>>> don't have deep insight into what they influence, I was not able to 
>>>> improve this behavior.
>>>>
>>>> I would appreciate it if you could give me a hint about what could be 
>>>> wrong, what I could check, and how to proceed to debug this strange behavior.
>>>>
>>>> I also tried the new version 2.0 of the stack, but the behavior is the same.
>>>>
>>>> rum
>>>>
>>>>
>>>>


_______________________________________________
lwip-users mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/lwip-users


