From: Anton Ivanov
Subject: Re: [Qemu-devel] [PATCH 1/3] Unified Datagram Socket Transport
Date: Fri, 21 Jul 2017 18:50:08 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0
[snip]
>> +    NetUnifiedState *s = (NetUnifiedState *) us;
>> +    L2TPV3TunnelParams *p = (L2TPV3TunnelParams *) s->params;
>
> How about embedding NetUnifiedState into this structure and keep using
> NetL2TPV3State? Then:
> - 's' could be kept and lots of lines of changes could be saved here
>   and l2tpv3_verify_header()
> - each transport could have their own type instead of using
>   NET_CLIENT_DRIVER_L2TPV3
That would mean each transport having its own read/write functions, its own destroy function, etc.
I am trying to achieve exactly the opposite, which should save more code across all transports. A transport that leverages the common datagram processing backend should contain nothing except:
1. Init and parse arguments
2. Form header
3. Verify header

All the rest can be common for a large family of datagram-based transports - L2TPv3, GRE, RAW (both a full interface and pulling just a specific vlan out of it), etc. (see the sketch below).
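A minimal sketch of what that split could look like - hypothetical names, not the actual patchset API: the common backend owns the socket and the I/O loop, and a transport only supplies its header size plus the three hooks above.

    #include <stddef.h>

    typedef struct NetUnifiedTransport {
        size_t header_size;                      /* fixed-size header */
        /* 1. init and parse the transport's arguments */
        int  (*init)(void *s, const void *opts);
        /* 2. form the header in front of an outgoing payload */
        void (*form_header)(void *s, void *header);
        /* 3. verify the header of an incoming datagram, 0 on success */
        int  (*verify_header)(void *s, const void *header);
    } NetUnifiedTransport;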
It is trivial to do that for fixed-size headers (as in the current patchset family). It is a bit more difficult to do for variable-size headers, but those are still datagram protocols (GUE, Geneve, etc.).
These may also add a fourth hook - 4. I/O to the control plane - but it remains to be seen whether that is needed. One possible shape for the variable-size case is sketched below.
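Purely illustrative, and again with made-up names: a variable-size-header transport could report how many header bytes it actually wrote or consumed, instead of relying on a fixed header_size.

    #include <stddef.h>
    #include <sys/types.h>

    typedef struct NetUnifiedVarTransport {
        size_t max_header_size;  /* backend reserves this much per packet */
        /* returns the number of header bytes written */
        ssize_t (*form_header)(void *s, void *header);
        /* returns the header length consumed, or -1 to drop the packet */
        ssize_t (*verify_header)(void *s, const void *buf, size_t len);
    } NetUnifiedVarTransport;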
This also makes any improvement to the backend - e.g. switching from send() to sendmmsg() - automatically available to all transports; a rough sketch of such a change follows.
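A hedged sketch of the kind of backend-only change meant here (flush_queue() and the queueing around it are invented for illustration; sendmmsg() itself is the real Linux syscall):

    #define _GNU_SOURCE
    #include <sys/socket.h>

    /* Transmit up to 'count' already-formed datagrams in one syscall.
     * Transports are unaffected - they only filled in the headers.
     * Returns the number of datagrams sent, or -1 on error.
     */
    static int flush_queue(int fd, struct mmsghdr *msgs, unsigned int count)
    {
        return sendmmsg(fd, msgs, count, 0);
    }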
What cannot be done is to shoehorn stream-based transports into this. I believe we have only one of those - the original socket.c in TCP mode - and we can leave it as it is, switching only the datagram mode to the better backend.
In the meantime I am going through the other comments to see whether I missed anything else, and am fixing the omissions.
A.

[snip]

--
Anton R. Ivanov
Cambridge Greys Limited, England and Wales company No 10273661
http://www.cambridgegreys.com/