
Re: [Qemu-devel] [POC] colo-proxy in qemu

From: Yang Hongyang
Subject: Re: [Qemu-devel] [POC] colo-proxy in qemu
Date: Mon, 27 Jul 2015 15:49:55 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.5.0

On 07/27/2015 03:37 PM, Jason Wang wrote:

On 07/27/2015 01:51 PM, Yang Hongyang wrote:
On 07/27/2015 12:49 PM, Jason Wang wrote:

On 07/27/2015 11:54 AM, Yang Hongyang wrote:

On 07/27/2015 11:24 AM, Jason Wang wrote:

On 07/24/2015 04:04 PM, Yang Hongyang wrote:
Hi Jason,

On 07/24/2015 10:12 AM, Jason Wang wrote:

On 07/24/2015 10:04 AM, Dong, Eddie wrote:
Hi Stefan:
       Thanks for your comments!

On Mon, Jul 20, 2015 at 02:42:33PM +0800, Li Zhijian wrote:
We are planning to implement colo-proxy in qemu to cache and

I thought there is a kernel module to do that?
       Yes, that is the solution the COLO sub-community previously chose
to go with, but we realized it might not be the best choice, and thus we
want to bring the discussion back here :)  More comments are welcome.


Could you please describe this decision in more detail? What is the reason
that you realized it was not the best choice?

Below is my opinion:

We realized that there are disadvantages to doing it in kernel space:
1. We need to recompile the kernel: the colo-proxy kernel module is
      implemented as an nf_conntrack extension. Adding an extension
      requires modifying the extension struct in-kernel, so recompiling
      the kernel is needed.

There's no need to do it all in the kernel; you can use a separate process
to do the comparing and trigger the state sync through the monitor.

I don't get it. The colo-proxy kernel module uses a kthread to do the
comparing and trigger the state sync. We implemented it as an nf_conntrack
extension, so we need to extend the extension struct in-kernel; although
that is just a few lines of change to the kernel, a kernel recompile is
needed. Are you talking about not implementing it as an nf_conntrack
extension?
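
For context, a rough sketch of the kernel-side registration being discussed
(field names follow the nf_conntrack extension API of that era; the
NF_CT_EXT_COLO id and the per-connection struct are hypothetical). The
catch is that the extension id must be added to the in-kernel
enum nf_ct_ext_id, which is what forces the kernel recompile:

```c
/* Sketch only, not the actual colo-proxy module. */
#include <net/netfilter/nf_conntrack_extend.h>

/* Hypothetical per-connection COLO state. */
struct nf_conn_colo {
        u32 compare_state;
};

static struct nf_ct_ext_type colo_extend = {
        .len   = sizeof(struct nf_conn_colo),
        .align = __alignof__(struct nf_conn_colo),
        .id    = NF_CT_EXT_COLO, /* must be added to enum nf_ct_ext_id
                                  * in-kernel, hence the recompile */
};

static int __init colo_proxy_init(void)
{
        return nf_ct_extend_register(&colo_extend);
}
```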

Yes, I mean implementing the comparing in userspace, but not in qemu.

Yes, that is an alternative, but it requires other components such as the
netfilter userspace tools, which I think adds complexity; we wanted to
implement a simple solution in QEMU. Another reason is that using other
userspace tools may affect performance: the context switches between
kernel and userspace may be an overhead.

2. We need to recompile iptables/nftables to use them together with the
      kernel module.
3. We need to configure the primary host to forward input packets to the
      secondary, as well as configure the secondary to forward output
      packets to the primary host; the network topology and configuration
      are too complex for a regular user.

You can use current kernel primitives to mirror the traffic of both PVM
and SVM to another process without any modification of the kernel. And you
can offload all network configuration to the management layer in this
case. What's more important, this works for vhost; filtering in qemu won't
work for vhost.

We are using tc to mirror/forward packets now. Implementing it in QEMU
does have some limits, but there are also limits in the kernel when
packets do not pass through the host kernel TCP/IP stack, such as with
vhost-user.

But the limits are much fewer than in userspace, no? For vhost-user, maybe
we could extend the backend to mirror the traffic as well.

IMO the limits are more or less the same. Besides, for mirroring and
forwarding packets, using tc requires a separate physical NIC or a VLAN,
and that NIC cannot be used for any other purpose. If we implement it in
QEMU, using a socket connection to forward packets, we no longer need a
separate NIC, which reduces the complexity of the network topology.
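
For reference, the tc-based mirroring mentioned here is typically set up
with an ingress qdisc plus a mirred action; a minimal sketch, assuming
tap0 is the VM's tap device and eth1 is the dedicated NIC carrying the
mirrored traffic to the other host (both device names are placeholders):

```shell
# Mirror all ingress traffic on the VM's tap device out through eth1.
# tap0 and eth1 are placeholder device names; requires root.
tc qdisc add dev tap0 handle ffff: ingress
tc filter add dev tap0 parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress mirror dev eth1
```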

It depends on how you design your userspace. If you want to use userspace
to forward the packets, you can 1) use a packet socket to capture all
traffic on the tap that is used by the VM, and 2) mirror the traffic to a
new tap device; the userspace process can then read all traffic from this
new tap.

Yes, but we can also do it in QEMU space, right? This will make life
easier because we do it all in one solution within QEMU.
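
To make the comparing step concrete, here is a minimal, self-contained
sketch (not QEMU code; all names are illustrative) of the per-packet check
a COLO proxy performs: a packet from the primary VM (PVM) and one from the
secondary VM (SVM) are treated as equivalent when their TCP payloads
match, ignoring header fields that legitimately differ, such as the IP id,
TTL, and checksums:

```c
/* Illustrative sketch of COLO-style packet comparison on raw
 * IPv4-over-Ethernet frames. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ETH_HLEN 14

/* Return a pointer to the TCP payload of a raw frame, and its length. */
static const uint8_t *tcp_payload(const uint8_t *frame, size_t *len)
{
    size_t ihl = (frame[ETH_HLEN] & 0x0f) * 4;         /* IPv4 header len */
    size_t total = (frame[ETH_HLEN + 2] << 8) | frame[ETH_HLEN + 3];
    size_t tcp_off = ETH_HLEN + ihl;
    size_t doff = (frame[tcp_off + 12] >> 4) * 4;      /* TCP data offset */

    *len = total - ihl - doff;
    return frame + tcp_off + doff;
}

/* PVM/SVM packets match when their TCP payloads are identical. */
static bool packets_equivalent(const uint8_t *pvm, const uint8_t *svm)
{
    size_t plen, slen;
    const uint8_t *p = tcp_payload(pvm, &plen);
    const uint8_t *s = tcp_payload(svm, &slen);

    return plen == slen && memcmp(p, s, plen) == 0;
}

/* Build a minimal test frame: Ethernet + IPv4 + TCP + payload.
 * ip_id and ttl vary between PVM and SVM but must not affect the check. */
static size_t make_frame(uint8_t *buf, const uint8_t *payload, size_t n,
                         uint16_t ip_id, uint8_t ttl)
{
    size_t total = 20 + 20 + n;                        /* IP + TCP + data */

    memset(buf, 0, ETH_HLEN + total);
    buf[12] = 0x08; buf[13] = 0x00;                    /* ethertype IPv4 */
    buf[ETH_HLEN] = 0x45;                              /* version 4, ihl 5 */
    buf[ETH_HLEN + 2] = total >> 8;
    buf[ETH_HLEN + 3] = total & 0xff;
    buf[ETH_HLEN + 4] = ip_id >> 8;
    buf[ETH_HLEN + 5] = ip_id & 0xff;
    buf[ETH_HLEN + 8] = ttl;
    buf[ETH_HLEN + 9] = 6;                             /* protocol: TCP */
    buf[ETH_HLEN + 20 + 12] = 0x50;                    /* TCP data off 5 */
    memcpy(buf + ETH_HLEN + 40, payload, n);
    return ETH_HLEN + total;
}
```

In the real proxy, the PVM packet would be released to the client only
when a matching SVM packet arrives; a mismatch triggers the state sync
(checkpoint) discussed earlier in the thread.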


