Re: [RFC PATCH 0/5] mptcp support

From: Daniel P. Berrangé
Subject: Re: [RFC PATCH 0/5] mptcp support
Date: Fri, 9 Apr 2021 10:34:30 +0100
User-agent: Mutt/2.0.5 (2021-01-21)

On Thu, Apr 08, 2021 at 08:11:54PM +0100, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> Hi,
>   This RFC set adds support for multipath TCP (mptcp),
> in particular on the migration path - but should be extensible
> to other users.
>   Multipath TCP is a bit like bonding, but at the transport
> layer; you can use it to handle failure, but can also use it to
> split traffic across multiple interfaces.
>   Using a pair of 10Gb interfaces, I've managed to get 19Gbps
> (with the only tuning being using huge pages and turning the MTU up).
>   It needs a bleeding-edge Linux kernel (in some older ones you get
> false accept messages for the subflows), and a C lib that has the
> constants defined (as current glibc does).
>   To use it you just need to append ,mptcp to an address;
>   -incoming tcp:0:4444,mptcp
>   migrate -d tcp:,mptcp

What happens if you only enable the mptcp flag on one side of the
stream (whether client or server)? Does it degrade to boring
old single-path TCP, or does it result in an error?

>   I had a quick go at trying NBD as well, but I think it needs
> some work with the parsing of NBD addresses.

In theory this is applicable to anywhere that we use sockets.
Anywhere that is configured with the QAPI SocketAddress /
SocketAddressLegacy type will get it for free AFAICT.

Anywhere that is configured via QemuOpts will need an enhancement.

IOW, I would think NBD already works if you configure NBD via
QMP with nbd-server-start, or block-export-add.  qemu-nbd will
need cli options added.
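So in the QMP case I'd expect something along these lines to just work
(the exact field name, "mptcp" on the inet address, is my reading of
this series and may differ in what finally gets merged):

```json
{ "execute": "nbd-server-start",
  "arguments": {
    "addr": { "type": "inet",
              "data": { "host": "0.0.0.0",
                        "port": "10809",
                        "mptcp": true } } } }
```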

The block layer clients for NBD, Gluster, Sheepdog and SSH also
all get it for free when configured via QMP, or -blockdev, AFAICT.

Legacy block layer filename syntax would need extra parsing, or
we can just not bother and say if you want new features, use
-blockdev / QMP.

Overall this is impressively simple.

It feels like it obsoletes the multifd migration code, at least
if you assume Linux platform and new enough kernel ?

Except TLS... We already bottleneck on TLS encryption with
a single FD, since userspace encryption is limited to a
single thread.

There is the KTLS feature which offloads TLS encryption/decryption
to the kernel. This benefits even regular single FD performance,
because the encryption work can be done by the kernel in a separate
thread from the userspace IO syscalls.

Any idea if KTLS is fully compatible with MPTCP ?  If so, then that
would look like it makes it a full replacement for multifd on Linux.

|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
