[lwip-users] UDP and Raw API, lwip running with RTOS


From: zulu4711
Subject: [lwip-users] UDP and Raw API, lwip running with RTOS
Date: Mon, 1 Oct 2018 00:55:08 -0700 (MST)

I have lwip up and running in an embedded system using an RTOS (Keil RTX kernel), and this seems to work fine. I'm testing with the socket and netconn UDP APIs, all OK (over PPP). I'm also beginning to look at the raw API (this will fit the project best in the end), and that works as well.

Now, I know that the raw API may only be called from a single thread. However, I need to be able to call send etc. from several threads. I was wondering if the following code is OK for that: before using the raw API I call LOCK_TCPIP_CORE()/UNLOCK_TCPIP_CORE(). I followed the netconn API and it seems this is the way the locking is done there. Am I right in this, and is this an "allowed" way of doing what I want? Or is there a better approach?

//---------------------------------------------------------------------------------------
// Callback function for received data
//---------------------------------------------------------------------------------------
static void rxUDP(void *arg, struct udp_pcb *upcb, struct pbuf *p,
                  struct ip_addr *addr, u16_t port)
{
  char str[128];

  // if packet is valid
  if (p != NULL)
  {
    // WARNING: p can be a chain of buffers (in this example we pretend it is just a single buffer :)!
    if (p->len < sizeof(str))
    {
      memcpy(str, p->payload, p->len);
      str[p->len] = 0;
      messageDebug(DBG_WAR, __MODULE__, __LINE__,
                   "UDP Packet Received! Payload: [%s], port=%i", str, port);
    }
    pbuf_free(p);
  }
}

//---------------------------------------------------------------------------------------
// Thread that sends a UDP message every 1000 to 2000 ms
//---------------------------------------------------------------------------------------
void thRawUDP(void)
{
  extern struct netif ppp_netif;
  err_t error;
  ip_addr_t ip_remote;
  struct udp_pcb *pUDPPCB;
  struct pbuf *pBuf;
  char data[] = "Hello world";

  setNameRTXMON(__FUNCTION__);

  // wait for netif to come up (a little dirty)
  messageDebug(DBG_WAR, __MODULE__, __LINE__, "Waiting for PPP Netif to come up..");
  while (netif_is_link_up(&ppp_netif) == 0)
    OS_WAIT(1000);
  messageDebug(DBG_WAR, __MODULE__, __LINE__, "PPP Netif is up");

  // Convert from ASCII "xxx.xxx.xxx.xxx" to IP
  ipaddr_aton(SERVER_IP_ADDR, &ip_remote);

  // Lock the stack..
  LOCK_TCPIP_CORE();
  pUDPPCB = udp_new();
  // Bind to any local port
  error = udp_bind(pUDPPCB, IP_ADDR_ANY, 0);
  messageDebug(DBG_WAR, __MODULE__, __LINE__, "udp_bind=%i", error);
  error = udp_connect(pUDPPCB, &ip_remote, SERVER_PORT_NUM);
  messageDebug(DBG_WAR, __MODULE__, __LINE__, "udp_connect=%i", error);
  udp_recv(pUDPPCB, rxUDP, NULL);
  UNLOCK_TCPIP_CORE();

  while (1)
  {
    // Allocate pbuf (might end up being a chain of buffers!)
    pBuf = pbuf_alloc(PBUF_TRANSPORT, sizeof(data), PBUF_POOL);
    if (!pBuf)
    {
      messageDebug(DBG_ERR, __MODULE__, __LINE__, "error allocating buffer");
      OS_WAIT(1000);
      continue;
    }

    // The pBuf we get can be a chain of buffers
    int bytesLeft = sizeof(data);   // Number of bytes we still need to move to buffer(s)
    struct pbuf *packetTempBuffer;  // used to traverse the (possible) list of buffers
    int chunk;                      // Number of bytes we copy to the current buffer
    int index = 0;                  // Index into the source buffer

    packetTempBuffer = pBuf;
    while ((bytesLeft) && (packetTempBuffer != NULL))
    {
      chunk = bytesLeft;
      if (chunk > packetTempBuffer->len)
      {
        chunk = packetTempBuffer->len;
      }
      // copy one part
      memcpy(packetTempBuffer->payload, &data[index], chunk);
      // next buffer in chain (if any)
      packetTempBuffer = packetTempBuffer->next;
      bytesLeft -= chunk;
      index += chunk;
    }
    //memcpy(pBuf->payload, data, sizeof(data)); // WARNING: No guarantee that the pbuf is not multiple buffers holding the data!

    messageDebug(DBG_WAR, __MODULE__, __LINE__, "Sending");

    // Lock stack
    LOCK_TCPIP_CORE();
    udp_send(pUDPPCB, pBuf);
    UNLOCK_TCPIP_CORE();

    pbuf_free(pBuf);

    OS_WAIT(1000 + (rand() % 1000));
  }
}
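For reference, my understanding from opt.h is that LOCK_TCPIP_CORE()/UNLOCK_TCPIP_CORE() only expand to a real mutex lock when core locking is compiled in; with LWIP_TCPIP_CORE_LOCKING set to 0 they expand to nothing and the code above would silently not protect anything. This is a sketch of the lwipopts.h settings I am assuming (option names as in lwip's opt.h, values just illustrative):

// lwipopts.h (sketch)
#define NO_SYS                   0   // running with an OS and the tcpip thread
#define SYS_LIGHTWEIGHT_PROT     1   // protect short critical regions in an OS build
#define LWIP_TCPIP_CORE_LOCKING  1   // provides the mutex behind LOCK_TCPIP_CORE()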
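On the "better approach" part of my question: the alternative I can see is to keep all raw-API calls inside the tcpip thread and let the other threads hand work over with tcpip_callback(), which runs a function in the tcpip thread context. A rough, untested sketch (the struct and function names here are made up for illustration):

#include "lwip/tcpip.h"
#include "lwip/udp.h"
#include "lwip/mem.h"

// Context for one deferred send
struct udpSendReq {
  struct udp_pcb *pcb;
  struct pbuf    *p;
};

// Runs inside the tcpip thread, so no core locking is needed here
static void doSendInTcpipThread(void *ctx)
{
  struct udpSendReq *req = (struct udpSendReq *)ctx;
  udp_send(req->pcb, req->p);
  pbuf_free(req->p);   // the sending thread handed ownership of the pbuf to us
  mem_free(req);
}

// Called from any application thread instead of LOCK_TCPIP_CORE() + udp_send()
static err_t sendFromAnyThread(struct udp_pcb *pcb, struct pbuf *p)
{
  struct udpSendReq *req = (struct udpSendReq *)mem_malloc(sizeof(*req));
  if (req == NULL)
    return ERR_MEM;
  req->pcb = pcb;
  req->p   = p;
  return tcpip_callback(doSendInTcpipThread, req);   // queue the call to the tcpip thread
}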
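One more thing I noticed while writing the example: pbuf_take() seems to do the chain-aware copy in one call, so the manual copy loop above could probably be replaced with:

// pbuf_take() copies sizeof(data) bytes into pBuf, following the pbuf chain as needed
if (pbuf_take(pBuf, data, sizeof(data)) != ERR_OK)
{
  messageDebug(DBG_ERR, __MODULE__, __LINE__, "pbuf_take failed");
}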

Sent from the lwip-users mailing list archive at Nabble.com.
