Re: [lwip-devel] [ppp-new]ppp_reopen bug!


From: Sylvain Rochet
Subject: Re: [lwip-devel] [ppp-new]ppp_reopen bug!
Date: Fri, 24 Aug 2012 20:54:13 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

Hi,

On Fri, Aug 24, 2012 at 11:27:20PM +0800, ?? wrote:
> Hi
> 
> ppp_reopen() clears the ppp_pcb_rx structure, and this can cause 
> memory leaks and other errors!

I know, PPP PCB recycling with PPPoS is not supported yet, as indicated 
by the currently empty ppp_over_serial_reopen().

I don't like the way ppp.c is currently designed, with the PPPoS low 
layer code mixed with PPPoE and PPPoL2TP calls, because the original 
port was only designed to handle PPPoS.

In my opinion this should be designed with a PPP core that only does 
the PPP stuff, without knowing anything about the PPPoS (HDLC), PPPoE 
or PPPoL2TP lower layers, with each of them having a different API 
because they have different needs (this is actually already the case: 
we have more or less three different APIs around ppp_over_*_open, plus 
functions which use the current PPP context to call the appropriate 
low-layer function).
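
Just to illustrate what I mean, here is a rough sketch, not actual lwIP 
code (struct ppp_ctx, ppp_link_ops and the field names are made up), of 
how the core could dispatch to the lower layer through a per-link 
operations table instead of hard-coding PPPoS/PPPoE/PPPoL2TP branches 
in ppp.c:

#include "lwip/err.h"
#include "lwip/pbuf.h"

struct ppp_ctx;  /* hypothetical PPP core context, details omitted */

/* Lower-layer operations the PPP core calls through; ppp.c never needs
 * to know whether the link underneath is PPPoS, PPPoE or PPPoL2TP. */
struct ppp_link_ops {
  err_t (*write)(struct ppp_ctx *ppp, struct pbuf *p); /* send a PPP frame */
  err_t (*connect)(struct ppp_ctx *ppp);               /* bring the link up */
  void  (*disconnect)(struct ppp_ctx *ppp);            /* tear the link down */
};

struct ppp_ctx {
  const struct ppp_link_ops *link_ops;  /* installed by ppp_over_*_open() */
  void *link_state;                     /* lower-layer private state */
  /* ... PPP core state (LCP, IPCP, phase, ...) ... */
};

/* The core side stays lower-layer agnostic: */
err_t ppp_link_write(struct ppp_ctx *ppp, struct pbuf *p)
{
  return ppp->link_ops->write(ppp, p);
}

Each ppp_over_*_open() would keep its own creation API, as it does 
today, but would install its own ops table, so the core never has to 
switch on the link type again.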

About PPPoS, PPP_INPROC_OWNTHREAD and PPP_INPROC_MULTITHREADED are kind 
of a problem for me: ppp_input_thread() looping on the PPP phase is 
probably not that thread-safe; it also needs a way to be destroyed, 
which not all schedulers support, and the sys_msleep(1) polling sounds 
ugly. All the thread stuff is only required for PPPoS input, so I think 
it would be better pushed into the PPP(oS) sequential thread-safe API, 
the same way tcpip_input() feeds the TCPIP API (see the sketch below).
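
Again only a sketch, under the assumption that we keep the tcpip 
thread: pppos_post_input() and the commented-out pppos_parse() below do 
not exist, the point is just that the serial driver could post its 
bytes with tcpip_callback(), exactly like tcpip_input() posts packets, 
so PPP state is only ever touched from the tcpip thread and 
ppp_input_thread() goes away:

#include "lwip/tcpip.h"
#include "lwip/pbuf.h"
#include "lwip/mem.h"

struct pppos_input_msg {
  void *ppp;        /* the PPP(oS) context the bytes belong to */
  struct pbuf *p;   /* raw serial bytes, copied by the driver/RX ISR */
};

/* Runs in the tcpip thread: PPP state can be used without extra locking. */
static void pppos_input_sys(void *arg)
{
  struct pppos_input_msg *msg = (struct pppos_input_msg *)arg;
  /* pppos_parse(msg->ppp, msg->p); -- HDLC unframing + ppp_input(), hypothetical */
  pbuf_free(msg->p);
  mem_free(msg);
}

/* Called from the serial driver, from any thread: just post the data
 * and return, the same way tcpip_input() does for regular netifs. */
err_t pppos_post_input(void *ppp, struct pbuf *p)
{
  struct pppos_input_msg *msg = (struct pppos_input_msg *)mem_malloc(sizeof(*msg));
  err_t err;

  if (msg == NULL) {
    pbuf_free(p);
    return ERR_MEM;
  }
  msg->ppp = ppp;
  msg->p = p;
  err = tcpip_callback(pppos_input_sys, msg);
  if (err != ERR_OK) {
    pbuf_free(p);
    mem_free(msg);
  }
  return err;
}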

This is not so hard to do and it would improve the readability of 
ppp.c a lot, but it requires changing the whole API, so I am torn 
between "that would be great, let's do that!" and "it works this way, 
but it is not in its best shape; should I bother with it?".

Sylvain
