On Fri, Jun 10, 2022 at 05:58:31PM +0530, manish.mishra wrote:
On 09/06/22 9:17 pm, Daniel P. Berrangé wrote:
On Thu, Jun 09, 2022 at 07:33:01AM +0000, Het Gala wrote:
As of now, the multi-FD feature supports connection over the default network
only. This patchset is a QEMU-side implementation providing multiple-interface
support for multi-FD. This enables us to fully utilize dedicated or multiple
NICs in case bonding of NICs is not possible.
The multi-FD QEMU implementation currently supports connection only on the
default network. This denies us advantages like:
- Separating VM live migration traffic from the default network.
I totally understand your concern around this approach increasing complexity
when similar things can be done with NIC teaming. But we thought this approach
gives much more flexibility to the user in a few cases, like:
1. We checked our customer data: almost all of the hosts had multiple NICs,
but LACP support in their setups was very rare. So for those cases this
approach can help utilise multiple NICs, as teaming is not possible there.
AFAIK, LACP is not required in order to do link aggregation with Linux.
Traditional Linux bonding has no special NIC hardware or switch requirements,
so LACP is merely a "nice to have" in order to simplify some aspects.
IOW, migration with traffic spread across multiple NICs is already
achievable today with kernel level bonding.
I can understand that some people may not have actually configured
bonding on their hosts, but it is not unreasonable to request that
they do so, if they want to take advantage of aggregated bandwidth.
It has the further benefit that it will be fault tolerant. With
this proposal, if any single NIC has a problem, the whole migration
will get stuck. With kernel level bonding, if any single NIC has
a problem, it'll get offlined by the kernel and migration will
continue to work across the remaining active NICs.
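
As a side note, and just to make the kernel-level alternative concrete, a
bond of two NICs can be assembled with plain iproute2 and no LACP-capable
switch. The snippet below is only a rough sketch driven from Python; the
interface names (eth1, eth2), the address, and the bonding mode are made up
for illustration:

  # Rough sketch: build a bond via iproute2; no LACP involved.
  # eth1/eth2 and 10.0.1.1/24 are hypothetical, and the bonding mode is
  # only an example; pick whichever mode suits the environment.
  import subprocess

  def run(cmd):
      subprocess.run(cmd.split(), check=True)

  run("ip link add bond0 type bond mode balance-rr miimon 100")
  for slave in ("eth1", "eth2"):
      run(f"ip link set {slave} down")
      run(f"ip link set {slave} master bond0")
      run(f"ip link set {slave} up")
  run("ip addr add 10.0.1.1/24 dev bond0")
  run("ip link set bond0 up")

Once bond0 carries the migration subnet, migration traffic uses the bond
like any other traffic, with no QEMU changes.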
2. We have seen requests recently to separate out the traffic of storage and
VM networking over different vswitches, each backed by 1 or more NICs, as
this gives predictability and assurance. So a host with multiple
ips/vswitches can be a common environment. In this kind of environment this
approach gives per-VM or per-migration flexibility: for a critical VM we can
still use bandwidth from all NICs, but for a normal VM we can keep live
migration only on dedicated NICs, without changing the complete host network
topology.
Eventually we want it to be something like [<ip-pair>, <multiFD-channels>,
<bandwidth_control>], to provide bandwidth control per interface.
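
To make the shape of that proposal a bit more concrete, a per-interface
descriptor could look roughly like the sketch below. This is purely
illustrative: none of these field names exist in QEMU today, they only
mirror the [<ip-pair>, <multiFD-channels>, <bandwidth_control>] tuple above:

  # Hypothetical illustration only; not an existing QEMU structure or QMP API.
  from dataclasses import dataclass

  @dataclass
  class MigrationInterface:
      src_ip: str            # source side of the <ip-pair>
      dst_ip: str            # destination side of the <ip-pair>
      multifd_channels: int  # multiFD channels to open on this interface
      max_bandwidth: int     # per-interface bandwidth cap, in bytes/sec

  # e.g. a critical VM spreading its channels over two dedicated NICs
  interfaces = [
      MigrationInterface("10.0.1.1", "10.0.1.2", 4, 500 * 1024 * 1024),
      MigrationInterface("10.0.2.1", "10.0.2.2", 4, 500 * 1024 * 1024),
  ]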
Again, it is already possible to separate migration traffic from storage
traffic and from other network traffic. The target IP given will influence
which NIC is used based on the routing table, and I know this is already
done widely with OpenStack deployments.
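
To illustrate that point, today's multifd migration can already be steered
onto a particular NIC purely by the choice of destination address, using
stock QMP commands. The sketch below talks to a QMP UNIX socket directly;
the socket path and 10.0.1.2 (an address on a dedicated migration subnet)
are made up for the example:

  # Sketch: enable multifd and migrate towards an IP on the dedicated
  # migration subnet; the routing table picks the NIC, no new QEMU code.
  import json, socket

  s = socket.socket(socket.AF_UNIX)
  s.connect("/var/run/qemu-src.qmp")   # hypothetical QMP socket path
  f = s.makefile("rw")

  def qmp(cmd, args=None):
      msg = {"execute": cmd}
      if args:
          msg["arguments"] = args
      f.write(json.dumps(msg) + "\n")
      f.flush()
      return json.loads(f.readline())

  f.readline()                          # consume the QMP greeting
  qmp("qmp_capabilities")
  qmp("migrate-set-capabilities",
      {"capabilities": [{"capability": "multifd", "state": True}]})
  qmp("migrate-set-parameters", {"multifd-channels": 8})
  qmp("migrate", {"uri": "tcp:10.0.1.2:4444"})

The destination QEMU needs multifd enabled and -incoming listening on the
same address, and a real client would also have to cope with QMP events
interleaved with the replies, but the basic flow is as above.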