Hello,
I've developed an application that sends pings, configured with:
#define MEMP_OVERFLOW_CHECK 2
in opt.h. The application works correctly: right now, after a few hours of running, Wireshark has captured over 1,100,000 ping packets, and the count keeps growing.
However, I've now discovered that if I change the MEMP_OVERFLOW_CHECK define to 0 or 1, roughly 13,000 packets are sent correctly, and after that the only traffic is a frequent (once per second) ARP query for the gateway's MAC address, as captured by Wireshark and confirmed by the MCU's Ethernet activity LED.
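For context, this is how lwIP's opt.h documents the checking levels (paraphrased from my copy; wording may differ between versions). Note that a non-zero setting doesn't just add checks: it reserves extra "sanity region" bytes before and after each pool element, so it also changes the memory layout:

```c
/* lwipopts.h -- paraphrased from lwIP's opt.h:
 * MEMP_OVERFLOW_CHECK == 0  no pool overflow checking
 * MEMP_OVERFLOW_CHECK == 1  each element is checked when it is freed
 * MEMP_OVERFLOW_CHECK >= 2  every element of every pool is checked on each
 *                           memp_malloc()/memp_free() (thorough but slow)
 * Non-zero values reserve sanity-region bytes around each element. */
#define MEMP_OVERFLOW_CHECK 2
```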
Why would a debug option correlate with this failure? What could be wrong? Here are some snippets of how I send the pings:
======= 8< ========== 8< =======
pkt_echo = ( struct icmp_echo_hdr * ) mem_malloc( ( mem_size_t ) ping_size );
if ( !pkt_echo ) {
    return ERR_MEM;
}
form_echo_packet( pkt_echo, (u16_t) ping_size );
#if LWIP_IPV4
if ( IP_IS_V4( addr ) ) {
    if ( override_last > 0 ) {
        set_ip4_addr_4( addr, override_last ); /* override last (fourth) IP address byte */
    }
    struct sockaddr_in *to4 = ( struct sockaddr_in* ) &to;
    to4->sin_len = sizeof( *to4 ); /* size of the struct, not of the pointer */
    to4->sin_family = AF_INET;
    inet_addr_from_ip4addr( &to4->sin_addr, ip_2_ip4( addr ) );
}
#endif /* LWIP_IPV4 */
err = lwip_sendto( s, pkt_echo, ping_size, 0, ( struct sockaddr * ) &to, sizeof( to ) );
======= 8< ========== 8< =======
struct timeval timeout;
timeout.tv_sec = PING_TIMEOUT / 1000;
timeout.tv_usec = ( PING_TIMEOUT % 1000 ) * 1000;
s = lwip_socket( AF_INET, SOCK_RAW, IP_PROTO_ICMP );
ret = lwip_setsockopt( s, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof( timeout ) );
sys_msleep(PING_DELAY);
======= 8< ========== 8< =======
--