Re: [PATCH 0/4] Multiple interface support on top of Multi-FD


From: Dr. David Alan Gilbert
Subject: Re: [PATCH 0/4] Multiple interface support on top of Multi-FD
Date: Wed, 15 Jun 2022 20:14:26 +0100
User-agent: Mutt/2.2.5 (2022-05-16)

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Fri, Jun 10, 2022 at 05:58:31PM +0530, manish.mishra wrote:
> > 
> > On 09/06/22 9:17 pm, Daniel P. Berrangé wrote:
> > > On Thu, Jun 09, 2022 at 07:33:01AM +0000, Het Gala wrote:
> > > > As of now, the multi-FD feature supports connection over the default
> > > > network only. This patchset series is a QEMU-side implementation of
> > > > multiple-interface support for multi-FD. This enables us to fully
> > > > utilize dedicated or multiple NICs in case bonding of NICs is not
> > > > possible.
> > > > 
> > > > 
> > > > Introduction
> > > > -------------
> > > > The multi-FD QEMU implementation currently supports connection only on
> > > > the default network. This denies us advantages such as:
> > > > - Separating VM live migration traffic from the default network.
> > 
> > Hi Daniel,
> > 
> > I totally understand your concern that this approach increases complexity
> > inside qemu, when similar things can be done with NIC teaming. But we
> > thought this approach provides much more flexibility to the user in a few
> > cases:
> > 
> > 1. We checked our customer data: almost all of the hosts had multiple NICs,
> >    but LACP support in their setups was very rare. So for those cases this
> >    approach can help utilise multiple NICs, as teaming is not possible
> >    there.
> 
> AFAIK,  LACP is not required in order to do link aggregation with Linux.
> Traditional Linux bonding has no special NIC hardware or switch requirements,
> so LACP is merely a "nice to have" in order to simplify some aspects.
> 
> IOW, migration with traffic spread across multiple NICs is already
> possible AFAICT.

Are we sure that works with multifd?  I've seen a lot of bonding NIC
setups which spread based on a hash of source/destination IP and port
numbers; given that we use the same dest port and IP at the moment what
happens in reality?  That hashing can be quite delicate for high
bandwidth single streams.
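
To make that concrete, here is a small Python sketch (a deliberately
simplified model with made-up addresses and slave counts; it is not the
kernel's actual hashing code) of how a bond's xmit_hash_policy choice decides
which slave NIC each multifd TCP stream lands on:

# Simplified model of a bond's xmit_hash_policy (not the kernel's real hash):
# pick which slave NIC carries a TCP stream from a hash of its addressing.

def pick_slave(policy, src_ip, dst_ip, src_port, dst_port, n_slaves):
    if policy == "layer3":           # IP addresses only
        key = hash((src_ip, dst_ip))
    elif policy == "layer3+4":       # IP addresses plus TCP ports
        key = hash((src_ip, dst_ip, src_port, dst_port))
    else:                            # "layer2": MAC pair, constant per peer
        key = 0
    return key % n_slaves

# All multifd channels share source/destination IP and destination port;
# only the ephemeral source port differs per connection (values made up).
channels = [("10.0.0.1", "10.0.0.2", 49152 + i, 4446) for i in range(8)]

for policy in ("layer2", "layer3", "layer3+4"):
    used = {pick_slave(policy, *chan, n_slaves=2) for chan in channels}
    print(policy, "->", len(used), "slave NIC(s) carrying the 8 streams")

With layer2 or layer3 hashing every channel maps to the same slave, so the
bond's aggregate bandwidth goes unused for a single migration; layer3+4 can
spread the streams because each connection has its own source port.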

> I can understand that some people may not have actually configured
> bonding on their hosts, but it is not unreasonable to request that
> they do so, if they want to take advantage of aggregated bandwidth.
> 
> It has the further benefit that it will be fault tolerant. With
> this proposal, if any single NIC has a problem, the whole migration
> will get stuck. With kernel-level bonding, if any single NIC has
> a problem, it'll get offlined by the kernel and migration will
> continue to work across the remaining active NICs.
> 
> > 2. We have seen requests recently to separate out traffic of storage, VM
> >    network and migration over different vswitches, which can be backed by
> >    one or more NICs, as this gives better predictability and assurance. So
> >    hosts with multiple IPs/vswitches can be a very common environment. In
> >    this kind of environment this approach gives per-VM or per-migration
> >    flexibility: for a critical VM we can still use bandwidth from all
> >    available vswitches/interfaces, but for normal VMs we can keep live
> >    migration only on dedicated NICs, without changing the complete host
> >    network topology.
> > 
> >    Ultimately we want it to be something like [<ip-pair>,
> >    <multiFD-channels>, <bandwidth_control>] to provide bandwidth_control
> >    per interface.
> 
> Again, it is already possible to separate migration traffic from storage
> traffic and from other network traffic. The target IP given will influence
> which NIC is used based on the routing table, and I know this is already
> done widely with OpenStack deployments.
> 
> > 3. We mentioned a dedicated NIC as a use case; agreed with you that it can
> >    be done without this approach too.
> 
> 
> > > > Multi-interface with Multi-FD
> > > > -----------------------------
> > > > Multiple-interface support over basic multi-FD has been implemented in
> > > > the patches. Advantages of this implementation are:
> > > > - Able to separate live migration traffic from the default network
> > > >   interface by creating multiFD channels on the IP addresses of
> > > >   multiple non-default interfaces.
> > > > - Can optimize the number of multi-FD channels on a particular
> > > >   interface depending upon the network bandwidth limit on that
> > > >   interface.
> > > Manually assigning individual channels to different NICs is a pretty
> > > inefficient way to optimize traffic. Feels like you could easily get
> > > into a situation where one NIC ends up idle while the other is busy,
> > > especially if the traffic patterns are different. For example with
> > > post-copy there's an extra channel for OOB async page requests, and
> > > it's far from clear that manually picking NICs per channel upfront is
> > > going to work for that.  The kernel can continually dynamically balance
> > > load on the fly and so do much better than any static mapping QEMU
> > > tries to apply, especially if there are multiple distinct QEMUs
> > > competing for bandwidth.
> > > 
> > Yes, Daniel, the current solution is only for pre-copy. Multi-FD is not
> > yet supported with postcopy, but in future we can extend it for postcopy

I had been thinking about explicit selection of network device for NUMA
use though; ideally I'd like to be able to associate a set of multifd
threads to each NUMA node, and then associate a NIC with that set of
threads; so that the migration happens down the NIC that's on the node
the RAM is on.  On a really good day you'd have one NIC per top level
NUMA node.
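
Purely as a hypothetical sketch of that idea (nothing like this is in the
patchset; the device names and channel counts are invented), the NUMA node a
PCI NIC sits on is already visible in sysfs, so a per-node plan could be
derived along these lines in Python:

from pathlib import Path

def nic_numa_node(dev):
    # For PCI NICs the kernel exposes the owning NUMA node in sysfs;
    # the file is absent for virtual devices and may contain -1.
    node_file = Path("/sys/class/net") / dev / "device" / "numa_node"
    return int(node_file.read_text()) if node_file.exists() else -1

# Hypothetical candidate NICs; one per top-level NUMA node on a good day.
nics = ["eth0", "eth1"]
node_to_nic = {nic_numa_node(dev): dev for dev in nics}

# Group multifd channels per node so each node's RAM leaves via a local NIC.
# The channel count per node is made up for illustration.
plan = {node: {"nic": nic, "multifd-channels": 4}
        for node, nic in node_to_nic.items()}
print(plan)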

> > channels too.
> > 
> > > > Implementation
> > > > --------------
> > > > 
> > > > Earlier the 'migrate' qmp command:
> > > > { "execute": "migrate", "arguments": { "uri": "tcp:0:4446" } }
> > > > 
> > > > Modified 'migrate' qmp command:
> > > > { "execute": "migrate",
> > > >   "arguments": { "uri": "tcp:0:4446",
> > > >                  "multi-fd-uri-list": [
> > > >                      { "source-uri": "tcp::6900",
> > > >                        "destination-uri": "tcp:0:4480",
> > > >                        "multifd-channels": 4 },
> > > >                      { "source-uri": "tcp:10.0.0.0: ",
> > > >                        "destination-uri": "tcp:11.0.0.0:7789",
> > > >                        "multifd-channels": 5 } ] } }
> > > > ------------------------------------------------------------------------------
> > > > 
> > > > Earlier the 'migrate-incoming' qmp command:
> > > > { "execute": "migrate-incoming", "arguments": { "uri": "tcp::4446" } }
> > > > 
> > > > Modified 'migrate-incoming' qmp command:
> > > > { "execute": "migrate-incoming",
> > > >              "arguments": {"uri": "tcp::6789",
> > > >              "multi-fd-uri-list" : [ {"destination-uri" : "tcp::6900",
> > > >              "multifd-channels": 4}, {"destination-uri" : 
> > > > "tcp:11.0.0.0:7789",
> > > >              "multifd-channels": 5} ] } }
> > > > ------------------------------------------------------------------------------
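
For reference, a minimal Python sketch of how a management client might drive
the proposed source-side command over a QMP Unix socket (the
'multi-fd-uri-list' argument exists only in this unmerged patchset, the
socket path is made up, and event/error handling is omitted):

import json
import socket

def qmp_command(sock_path, command):
    # Run one QMP command over a Unix socket and return the first reply line.
    # A real client should filter out asynchronous events before treating a
    # line as the command's return value; that is skipped here for brevity.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(sock_path)
    chan = sock.makefile("rw")
    json.loads(chan.readline())                         # server greeting
    chan.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
    chan.flush()
    json.loads(chan.readline())                         # capabilities ack
    chan.write(json.dumps(command) + "\n")
    chan.flush()
    return json.loads(chan.readline())

# The 'multi-fd-uri-list' argument below comes from the proposed patches only.
migrate_cmd = {
    "execute": "migrate",
    "arguments": {
        "uri": "tcp:0:4446",
        "multi-fd-uri-list": [
            {"source-uri": "tcp::6900",
             "destination-uri": "tcp:0:4480",
             "multifd-channels": 4},
            {"source-uri": "tcp:10.0.0.0: ",
             "destination-uri": "tcp:11.0.0.0:7789",
             "multifd-channels": 5},
        ],
    },
}

print(qmp_command("/tmp/qmp-src.sock", migrate_cmd))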
> > > These examples pretty nicely illustrate my concern with this
> > > proposal. It is making QEMU configuration of migration
> > > massively more complicated, while duplicating functionality
> > > the kernel can provide via NIC teaming, but without having the
> > > ability to balance it on the fly as the kernel would.
> > 
> > Yes, agreed, Daniel, this raises complexity, but we will make sure that it
> > does not change/impact anything existing and that the new options are
> > optional.
> 
> The added code is certainly going to impact ongoing maint of QEMU I/O
> layer and migration in particular. I'm not convinced this complexity
> is compelling enough compared to leveraging kernel native bonding
> to justify the maint burden it will impose.

Dave

> With regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



