
[lwip-users] Possible bug in tcp_input


From: Akos Vandra-Meyer
Subject: [lwip-users] Possible bug in tcp_input
Date: Wed, 16 Nov 2022 14:48:39 +0100

Hello,

I am somewhat experienced in general embedded programming, but I am
fairly new to lwip.

I am using Rust for development on an ESP32, and including lwIP as
part of the esp-idf framework.

I am currently working on a VPN-ish piece of code for allowing a
device installed behind a NAT to be directly accessible. The protocol
in its current form is pretty simple: open a connection, send the MAC
address, receive an IP address and netmask, and then basically use the
TCP stream as a PHY layer by sending the raw IP packets back and
forth.

Something like IP over TCP/IP.
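
For clarity, the exchange looks roughly like this from the client's
side (a desktop-style sketch using std just to illustrate the
exchange; the framing sizes shown here are made up):

use std::io::{Read, Write};
use std::net::TcpStream;

// Sketch of the handshake described above. The wire format shown here
// (6-byte MAC, then 4-byte IPv4 address + 4-byte netmask) is only
// illustrative.
fn connect_tunnel(server: &str, mac: [u8; 6]) -> std::io::Result<(TcpStream, [u8; 4], [u8; 4])> {
    let mut stream = TcpStream::connect(server)?;
    stream.write_all(&mac)?;           // send our MAC address
    let mut reply = [0u8; 8];
    stream.read_exact(&mut reply)?;    // receive assigned IP address + netmask
    let ip = [reply[0], reply[1], reply[2], reply[3]];
    let mask = [reply[4], reply[5], reply[6], reply[7]];
    // From here on, raw IP packets are simply written to and read from
    // the stream.
    Ok((stream, ip, mask))
}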


I am using the RAW API for this.

The current PoC works great up until somebody opens a TCP connection
over the VPN, sends some data, and then closes it, which results in
the outer connection dropping as well (?!).

The issue is that when the IP packet containing the TCP FIN for the
inner connection is received, the recv callback is called on the
carrier (tunnel) connection.

My implementation of tcp_recv (and by that I mean the tcp_pcb->recv
callback) reads the next IP header, checks the data size, and reads
the whole IP packet into memory; then, still within that callback, it
calls ip_input() with the appropriate pbuf, which is usually the pbuf
that was passed to the callback.
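
The "checks the data size" part is essentially just reading the Total
Length field of the tunnelled packet's IP header; roughly this (a
simplified sketch, assuming IPv4 and that the header is contiguous in
the buffer, not my actual code):

// How much data the next tunnelled packet occupies, taken from the
// Total Length field of its IPv4 header.
fn next_packet_len(buf: &[u8]) -> Option<usize> {
    if buf.len() < 20 || buf[0] >> 4 != 4 {
        return None; // not a complete IPv4 header yet (or not IPv4)
    }
    // Total Length is bytes 2..4 of the IPv4 header, big endian, and
    // covers header + payload.
    Some(u16::from_be_bytes([buf[2], buf[3]]) as usize)
}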

ip_input() processes that packet, discovers that it contains a TCP
FIN, and the inner connection is closed.

tcp_recv returns, and at that point something weird happens: the outer
connection reports that it got a FIN as well, and its recv callback is
called again with a NULL pbuf.

The bug seems to boil down to the way recv_flags is stored. It lives
in a static global variable, and when the inner TCP packet is handled,
the FIN flag (TF_GOT_FIN) is set. That flag is not reset when control
returns from my recv callback into the outer tcp_input(), so
processing continues as if the outer connection had received a FIN.

I am not sure whether calling ip_input() from within the tcp recv
callback is legal in the first place, and if it isn't, what a good way
would be to tell lwIP to process an IP packet that was received via
the recv callback.
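
One option I can think of (just a sketch, assuming the raw lwIP
bindings for tcpip_input and pbuf_free are available alongside the
ones I already use) would be to post the pbuf to the tcpip thread
instead of calling ip_input() inline, so the inner packet is only
processed after the outer tcp_input() has finished:

// Sketch only: queue a tunnelled packet for the tcpip thread instead
// of processing it inline in the recv callback. tcpip_input() just
// posts the pbuf to the tcpip mailbox and returns, so the packet is
// handled after the outer tcp_input() is done with recv_flags.
unsafe fn defer_to_tcpip_thread(p: *mut pbuf, iface: *mut netif) {
    if tcpip_input(p, iface) != 0 {
        // On failure tcpip_input() does not take ownership of the
        // pbuf, so free it here to avoid leaking it.
        pbuf_free(p);
    }
}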

I was almost entirely sure this was a stack overflow issue, up until I
discovered that recv_flags was getting overwritten. I added the
following instrumentation around TCP_EVENT_RECV in tcp_input():

/* Notify application that data has been received. */
volatile int x = recv_flags;
LWIP_DEBUGF(TCP_INPUT_DEBUG, ("tcp_input: %x, %d flags = %x\n",
                              pcb, __LINE__, recv_flags));
TCP_EVENT_RECV(pcb, recv_data, ERR_OK, err);
LWIP_DEBUGF(TCP_INPUT_DEBUG, ("tcp_input: %x, %d err = %d flags=%x oldflags = %x\n",
                              pcb, __LINE__, err, recv_flags, x));

This basically output:

tcp_input: 3fcb95c0, 535 flags = 0    (outer connection, the carrier connection)
tcp_input: got a FIN for pcb 3fcbe57c flags = 20    (inner connection that was closed)
tcp_input: 3fcb95c0, 537 err = 0 flags=20    (?!)
tcp_input: got a FIN for pcb 3fcb95c0 flags = 20    (?!)



Code fragments, happy to share more:

// The recv handler for the carrier connection. Rust code, pretty ugly,
// but it is just a PoC, and I also had a really hard time debugging it.


pub extern "C" fn tunnel_recv(tunnel: *mut c_void, pcb: *mut tcp_pcb, pbuf: *mut pbuf, err: err_t) -> err_t {
    warn!("Tunnel::recv called with pcb = {:p}, pbuf = {:p}, err = {}", pcb, pbuf, err);

    let tunnel = unsafe { &mut *(tunnel as *mut Tunnel) };

    // A NULL pbuf means the remote side closed the carrier connection.
    if pbuf == ptr::null_mut() {
        error!("Tunnel::recv called with null pbuf: {}", err);
        return tunnel.closed();
    }

    let pbuf = unsafe { &mut *pbuf };
    let pload = unsafe { slice::from_raw_parts(pbuf.payload as *mut u8, pbuf.len as _) };

    info!("TCP DATA = {}, {}, {}, {}, {}, {}, {:02X?}", pbuf.len, pbuf.flags, pbuf.tot_len, pbuf.if_idx, pbuf.ref_, pbuf.type_internal, pload);

    let tot_len = pbuf.tot_len;

    // Hand the tunnelled IP packet to the stack via the tunnel netif's
    // input function, still inside this recv callback.
    unsafe { debug!((*tunnel.iface).input.unwrap()(pbuf, tunnel.iface)) }.unwrap();

    unsafe { tcp_recved(tunnel.socket, tot_len as _) };

    return 0;
}

Thanks for your help,
  Akos


