
Re: [lwip-users] lwip full duplex?


From: Joel Cunningham
Subject: Re: [lwip-users] lwip full duplex?
Date: Tue, 11 Aug 2015 18:35:55 +0000 (GMT)

If you look further down the call path, past sys_arch_mbox_fetch() in netconn_recv_data(), you can see that in the TCP case it calls back into the core to execute do_recv() via TCPIP_APIMSG().  That call uses the op_completed semaphore I mentioned.

UDP just happens not to need that additional call into the core, which is why a UDP recv() and a UDP send() on the same netconn/socket may work concurrently, but the calls were never designed to support this use case.

Joel

On Aug 11, 2015, at 11:14 AM, Michael Steinberg <address@hidden> wrote:

Hi Joel,

Though I did see that semaphore in the structure definition, I could not find a reference to it on the path from recv to sys_arch_mbox_fetch... I'll dig deeper. Presumably it is eventually used in the tcpip-thread context? ChibiOS allows closing a mailbox while somebody is waiting on it, notifying the waiter. Is that generally possible on other platforms?

What I'm also currently wondering: is putting all the guards into the connection layer really the most elegant solution, rather than (conditionally) making the lwIP core itself thread-safe? For now I can't really compare the two approaches... I'll dig deeper once again.
Would it help to put all of the touched "shared" resources into a diagram? I could create one while digging; perhaps it would make the reasoning easier.

Kind Regards,
Michael


Am 11.08.2015 um 17:09 schrieb Joel Cunningham:
Michael,

Historically there hasn't been support for multiple threads to issue operations within the LwIP core context at the same time for the same netconn.  Each netconn only has a single op_completed semaphore.  This prevents a simultaneous read and write from occurring.  This limitation also prevents one thread blocking on send/recv and another thread issuing a close.

The new option LWIP_NETCONN_SEM_PER_THREAD gives each thread operating on a netconn its own semaphore.
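As far as I can tell from sys.h, the port supplies the per-thread semaphore through a set of macros; the macro names below come from the lwIP sources, while the my_port_* implementations are hypothetical placeholders a port would provide:

```c
/* lwipopts.h / sys_arch glue — sketch only */
#define LWIP_NETCONN_SEM_PER_THREAD       1

/* the port maps each thread to its own completion semaphore: */
#define LWIP_NETCONN_THREAD_SEM_GET()     my_port_thread_sem_get()
#define LWIP_NETCONN_THREAD_SEM_ALLOC()   my_port_thread_sem_alloc()
#define LWIP_NETCONN_THREAD_SEM_FREE()    my_port_thread_sem_free()
```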

Joel

On Aug 11, 2015, at 09:54 AM, Michael Steinberg <address@hidden> wrote:

Hi Joel,
Well, I had not anticipated that it's common to have three threads running on one socket, each doing its own task. The problems stem from the third, deleting thread, right? Reading plus writing seems unproblematic as it stands?

Kind Regards,
Michael


Am 11.08.2015 um 16:38 schrieb Joel Cunningham:
LwIP has not had support for full-duplex sockets in any released version.  On the master branch there is some initial support behind the flag LWIP_NETCONN_FULLDUPLEX, but the feature is still at an early stage of development.  Here is the comment from opt.h noting its alpha state:

/** LWIP_NETCONN_FULLDUPLEX==1: Enable code that allows reading from one thread,
 * writing from a 2nd thread and closing from a 3rd thread at the same time.
 * ATTENTION: This is currently really alpha! Some requirements:
 * - LWIP_NETCONN_SEM_PER_THREAD==1 is required to use one socket/netconn from
 *   multiple threads at once
 * - sys_mbox_free() has to unblock receive tasks waiting on recvmbox/acceptmbox
 *   and prevent a task pending on this during/after deletion
 */
#ifndef LWIP_NETCONN_FULLDUPLEX
#define LWIP_NETCONN_FULLDUPLEX         0
#endif
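If I read that comment correctly, turning the alpha feature on would look roughly like this in a project's lwipopts.h (untested sketch):

```c
/* lwipopts.h — sketch */
#define LWIP_NETCONN_FULLDUPLEX         1
/* required prerequisite per the opt.h comment: */
#define LWIP_NETCONN_SEM_PER_THREAD     1
```

In addition, per the same comment, the port's sys_mbox_free() must unblock any tasks still waiting on recvmbox/acceptmbox and keep tasks from pending on the mailbox during or after deletion.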

Joel

On Aug 10, 2015, at 05:08 PM, Michael Steinberg <address@hidden> wrote:

Hello,

You make it sound like this is a limitation of lwIP, but if anything it is only a limitation of the Berkeley socket API emulation layer. That said, the socket API uses the netconn API, which in turn uses mailboxes for receiving packets from the lwIP core/driver. I cannot see any additional locking operation on the path from recv to sys_arch_mbox_fetch, so I don't think a send would block during a receive. Socket state is touched, though; one would have to check whether the send and receive paths overlap on conflicting state.

In constrained environments, I would argue that the Berkeley socket API is not the weapon of choice anyway (actually, I would argue that for any environment, hehe).

Kind Regards,
Michael


_______________________________________________
lwip-users mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/lwip-users

