Re: [PATCH 0/4] Multiple interface support on top of Multi-FD


From: Daniel P. Berrangé
Subject: Re: [PATCH 0/4] Multiple interface support on top of Multi-FD
Date: Thu, 16 Jun 2022 09:16:40 +0100
User-agent: Mutt/2.2.1 (2022-02-19)

On Wed, Jun 15, 2022 at 08:14:26PM +0100, Dr. David Alan Gilbert wrote:
> * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > On Fri, Jun 10, 2022 at 05:58:31PM +0530, manish.mishra wrote:
> > > 
> > > On 09/06/22 9:17 pm, Daniel P. Berrangé wrote:
> > > > On Thu, Jun 09, 2022 at 07:33:01AM +0000, Het Gala wrote:
> > > > > As of now, the multi-FD feature supports connection over the
> > > > > default network only. This patchset series is a QEMU-side
> > > > > implementation providing multiple-interface support for multi-FD.
> > > > > This enables us to fully utilize dedicated or multiple NICs in
> > > > > case bonding of NICs is not possible.
> > > > > 
> > > > > 
> > > > > Introduction
> > > > > -------------
> > > > > The multi-FD QEMU implementation currently supports connection
> > > > > only on the default network. This forbids us from advantages like:
> > > > > - Separating VM live migration traffic from the default network.
> > > 
> > > Hi Daniel,
> > >
> > > I totally understand your concern around this approach increasing
> > > complexity inside qemu, when similar things can be done with NIC
> > > teaming. But we thought this approach provides much more flexibility
> > > to the user in a few cases, like:
> > >
> > > 1. We checked our customer data; almost all of the hosts had multiple
> > >    NICs, but LACP support in their setups was very rare. So for those
> > >    cases this approach can help utilize multiple NICs, as teaming is
> > >    not possible there.
> > 
> > AFAIK, LACP is not required in order to do link aggregation with Linux.
> > Traditional Linux bonding has no special NIC hardware or switch
> > requirements, so LACP is merely a "nice to have" in order to simplify
> > some aspects.
> > 
> > IOW, migration with traffic spread across multiple NICs is already
> > possible AFAICT.
> 
> Are we sure that works with multifd?  I've seen a lot of bonding NIC
> setups which spread based on a hash of source/destination IP and port
> numbers; given that we use the same dest port and IP at the moment,
> what happens in reality?  That hashing can be quite delicate for
> high-bandwidth single streams.

The simplest Linux bonding mode does per-packet round-robin across 
NICs, so traffic from the collection of multifd connections should
fill up all the NICs in the bond. There are of course other modes
which may be sub-optimal for the reasons you describe. Which mode
to pick depends on the type of service traffic patterns you're
aiming to balance.
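
To make that concrete (a minimal sketch, not part of this series; eth0,
eth1 and the address are placeholders), a balance-rr bond of that kind
can be assembled with iproute2 alone:

  # illustrative only: round-robin bond striping packets across two NICs
  ip link add bond0 type bond mode balance-rr
  ip link set eth0 down && ip link set eth0 master bond0
  ip link set eth1 down && ip link set eth1 master bond0
  ip addr add 192.0.2.10/24 dev bond0
  ip link set bond0 up

For the hash-based modes Dave mentions, it is the bonding
xmit_hash_policy option (layer2, layer3+4, etc.) that decides how flows
are spread across the slaves, which is why a single destination IP/port
tuple can end up pinned to one NIC.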

> > > > > Multi-interface with Multi-FD
> > > > > -----------------------------
> > > > > Multiple-interface support over basic multi-FD has been implemented
> > > > > in the patches. Advantages of this implementation are:
> > > > > - Able to separate live migration traffic from the default network
> > > > >   interface by creating multi-FD channels on IP addresses of
> > > > >   multiple non-default interfaces.
> > > > > - Can optimize the number of multi-FD channels on a particular
> > > > >   interface depending upon the network bandwidth limit on that
> > > > >   interface.
> > > > Manually assigning individual channels to different NICs is a pretty
> > > > inefficient way to optimize traffic. Feels like you could easily get
> > > > into a situation where one NIC ends up idle while the other is busy,
> > > > especially if the traffic patterns are different. For example with
> > > > post-copy there's an extra channel for OOB async page requests, and
> > > > it's far from clear that manually picking NICs per channel upfront is
> > > > going to work for that.  The kernel can continually and dynamically
> > > > balance load on the fly and so do much better than any static mapping
> > > > QEMU tries to apply, especially if there are multiple distinct QEMUs
> > > > competing for bandwidth.
> > > > 
> > > Yes, Daniel, the current solution is only for pre-copy. Multi-FD is
> > > not yet supported with postcopy, but in future we can extend it for
> > > postcopy.
> 
> I had been thinking about explicit selection of network device for NUMA
> use though; ideally I'd like to be able to associate a set of multifd
> threads to each NUMA node, and then associate a NIC with that set of
> threads; so that the migration happens down the NIC that's on the node
> the RAM is on.  On a really good day you'd have one NIC per top level
> NUMA node.

Now that's an interesting idea, and not one that can be dealt with
by bonding, since the network layer won't be aware of the NUMA
affinity constraints.
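
As a rough sketch of the inputs such a NUMA-aware scheme would need
(eth0 and node0 are placeholders), the NIC-to-node mapping and the
node-local CPUs are already visible in sysfs:

  # NUMA node the NIC's PCI device hangs off (-1 if none is reported)
  cat /sys/class/net/eth0/device/numa_node
  # CPUs local to that node, i.e. candidates for its multifd threads
  cat /sys/devices/system/node/node0/cpulist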


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



