
Re: [lwip-users] UDP and TCP concurrent operation causing fault


From: Patrick Klos
Subject: Re: [lwip-users] UDP and TCP concurrent operation causing fault
Date: Thu, 15 Nov 2018 15:00:56 -0500
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1

On 11/15/2018 12:59 PM, Applebee, Robert wrote:

My application is built with LWIP 1.4.1 running on a TI MCU and uses raw TCP and raw UDP calls, no RTOS.

The TCP is used to send commands to the hardware and every command is acknowledged with an “ACK” message.

The UDP is used to send sensor data to the client every 10 ms.


FWIW, I have been using LwIP 1.4.1 for over 5 years on a TI Tiva platform with both TCP and UDP without any trouble.

I can send messages and receive “ACK” without error, and I can enable the UDP output without error, but when I combine TCP and UDP my application will eventually generate a hardware fault.  Looking at the stack, it seems to be in the “plug_holes” function.

I am a first-time user of LWIP.  Any assistance would be appreciated.

This is my main loop that outputs the UDP:

    // allocate pbuf for UDP
    p = pbuf_alloc(PBUF_TRANSPORT, 32, PBUF_RAM);

    //
    // Loop forever, processing the LED blinking.  All the work is done in
    // interrupt handlers.
    //
    while(1)
    {
        // get latest input
        change = updateUdpPacket(udpOut);

        /* can't send UDP until TCP connected */
        if (clientIpAddr.addr != 0) {
            // send UDP if data delay expires or input changed state
            if (change || g_100usTick == 0) {
                // only output UDP if data delay is enabled (-1 is disabled)
                if (data_stream_delay_ms >= 0) {
                    if (p) {
                        // Toggle the red LED.
                        MAP_GPIOPinWrite(GPIO_PORTP_BASE, GPIO_PIN_5,
                                         (MAP_GPIOPinRead(GPIO_PORTP_BASE, GPIO_PIN_5) ^
                                          GPIO_PIN_5));

                        sprintf(p->payload, "%s %04X %s\n", VM.PodId, g_usSeq, udpOut);

                        // increment the UDP sequence number
                        g_usSeq++;

                        // send the UDP message
                        udp_sendto(UDPpcb, p, &clientIpAddr, UDP_PORT);
                    }
                }

                // reset the data delay counter
                VM.DataStreamDelay = data_stream_delay_ms;  // convert to float
                g_100usTick = VM.DataStreamDelay / 0.1;     // set data delay counter to 100us ticks
            }
        }
    }


How "stripped down" is this sample from your actual code? 

From the looks of it, you're only ever allocating a single pbuf, then constantly reusing it?  If the stack hasn't completed the previous UDP send by the time you try to use it again, that could break things?
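
The usual pattern with the raw API is to allocate a fresh pbuf for each datagram and free your reference right after udp_sendto() returns: if the stack has to queue the packet (e.g. while ARP resolves), it takes its own reference or copy, so you never end up writing into a buffer the stack might still be holding.  A rough sketch of what I mean (send_sensor_packet() and its parameters are made up for this example; you'd call it with your UDPpcb, clientIpAddr, UDP_PORT and the formatted message):

    #include <string.h>
    #include "lwip/pbuf.h"
    #include "lwip/udp.h"

    /* Allocate a new pbuf per datagram, fill it, send it, then drop our
     * reference.  udp_sendto() does not take ownership of the pbuf, so
     * pbuf_free() here is correct and the buffer is never reused while the
     * stack may still need it. */
    static void send_sensor_packet(struct udp_pcb *pcb, ip_addr_t *dst,
                                   u16_t port, const char *msg, u16_t msg_len)
    {
        struct pbuf *q = pbuf_alloc(PBUF_TRANSPORT, msg_len, PBUF_RAM);

        if (q == NULL) {
            return;                        /* out of pbufs - drop this sample */
        }

        memcpy(q->payload, msg, msg_len);  /* PBUF_RAM payload is contiguous  */
        udp_sendto(pcb, q, dst, port);     /* return value ignored in sketch  */
        pbuf_free(q);                      /* release the application's ref   */
    }
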

Patrick Klos
Klos Technologies, Inc.

