
Re: [PATCH v1 3/3] migration: multifd: Enable zerocopy


From: Jason Wang
Subject: Re: [PATCH v1 3/3] migration: multifd: Enable zerocopy
Date: Thu, 2 Sep 2021 15:23:14 +0800
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Thunderbird/78.13.0


On 2021/9/1 11:35 PM, Peter Xu wrote:
On Wed, Sep 01, 2021 at 09:53:07AM +0100, Daniel P. Berrangé wrote:
On Tue, Aug 31, 2021 at 04:29:09PM -0400, Peter Xu wrote:
On Tue, Aug 31, 2021 at 02:16:42PM +0100, Daniel P. Berrangé wrote:
On Tue, Aug 31, 2021 at 08:02:39AM -0300, Leonardo Bras wrote:
Call qio_channel_set_zerocopy(true) at the start of every multifd thread.

Change the send_write() interface of multifd, allowing it to pass down
flags for qio_channel_write*().

Pass down the MSG_ZEROCOPY flag when sending memory pages, while keeping the
other data on the default copying path.

Signed-off-by: Leonardo Bras <leobras@redhat.com>
---
  migration/multifd-zlib.c | 7 ++++---
  migration/multifd-zstd.c | 7 ++++---
  migration/multifd.c      | 9 ++++++---
  migration/multifd.h      | 3 ++-
  4 files changed, 16 insertions(+), 10 deletions(-)
@@ -675,7 +676,8 @@ static void *multifd_send_thread(void *opaque)
              }
if (used) {
-                ret = multifd_send_state->ops->send_write(p, used, &local_err);
+                ret = multifd_send_state->ops->send_write(p, used, MSG_ZEROCOPY,
+                                                          &local_err);
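
For readers less familiar with the kernel API: at the plain-socket level, the
plumbing above boils down to roughly the following. This is a minimal sketch for
a Linux (>= 4.14) TCP socket; the helper names are made up for illustration and
this is not the QEMU implementation.

/* Minimal sketch of SO_ZEROCOPY/MSG_ZEROCOPY on a plain Linux TCP socket;
 * not QEMU code, helper names are hypothetical. */
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60          /* <asm-generic/socket.h>, Linux >= 4.14 */
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000  /* <linux/socket.h> */
#endif

/* Opt the socket into zerocopy; roughly what qio_channel_set_zerocopy(true)
 * would have to arrange for underneath. */
static int sock_enable_zerocopy(int fd)
{
    int one = 1;

    return setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one));
}

/* Queue a buffer with MSG_ZEROCOPY: the kernel pins the pages and transmits
 * from them directly, so the caller must not touch or free the buffer until
 * a completion arrives on the socket's error queue (MSG_ERRQUEUE). */
static ssize_t sock_send_zerocopy(int fd, const void *buf, size_t len)
{
    struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

    return sendmsg(fd, &msg, MSG_ZEROCOPY);
}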
I don't think it is valid to unconditionally enable this feature, due to the
resource usage implications:

https://www.kernel.org/doc/html/v5.4/networking/msg_zerocopy.html

   "A zerocopy failure will return -1 with errno ENOBUFS. This happens
    if the socket option was not set, the socket exceeds its optmem
    limit or the user exceeds its ulimit on locked pages."

The limit on locked pages is something that looks very likely to be
exceeded unless you happen to be running a QEMU config that already
implies locked memory (e.g. PCI assignment).
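
For what it's worth, that failure mode is recoverable in principle: when
sendmsg() with MSG_ZEROCOPY fails with ENOBUFS, a sender can retry the same
write as an ordinary copying send, at the cost of silently losing the zerocopy
benefit. A minimal sketch, reusing the hypothetical sock_send_zerocopy() helper
from the earlier sketch (not QEMU code):

#include <errno.h>

/* Copying fallback for the ENOBUFS case quoted above (SO_ZEROCOPY not set,
 * optmem exhausted, or the locked-pages ulimit exceeded). */
static ssize_t send_with_fallback(int fd, const void *buf, size_t len)
{
    ssize_t ret = sock_send_zerocopy(fd, buf, len);

    if (ret < 0 && errno == ENOBUFS) {
        /* Zerocopy resources exhausted: retry as an ordinary copying send. */
        ret = send(fd, buf, len, 0);
    }
    return ret;
}

Note that even on the successful zerocopy path the pages stay pinned until the
kernel posts a completion on the error queue, which is exactly why the
locked-pages ulimit matters for a workload that keeps many sends in flight.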
Yes, it would be great to have this as a migration capability in parallel to multifd.  In
the initial phase, if it's easier to implement it on multifd only, we can add a
dependency between the caps.  In the future we can remove that dependency when
the code is ready to go without multifd.  Thanks,
Also, I'm wondering how zerocopy support interacts with kernel support
for kTLS and multipath-TCP, both of which we want to be able to use
with migration.
Copying Jason Wang for the networking implications between these features on the kernel side.


Note that MSG_ZEROCOPY was contributed by Google :)


and whether they can be enabled together (MSG_ZEROCOPY, mptcp, kTLS).


I think they can. Anyway, the kernel can choose to fall back to data copy when necessary.

Note that the "zerocopy" is probably not correct here. What's better is "Enable MSG_ZEROCOPY" since:

1) kernel supports various kinds of zerocopy, for TX, it has supported sendfile() for many years.
2) MSG_ZEROCOPY is only used for TX but not RX
3) TCP rx zerocopy is only supported via mmap() which requires some extra configurations e.g 4K MTU, driver support for header split etc.

[1] https://www.youtube.com/watch?v=_ZfiQGWFvg0
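
To make point 1 concrete, sendfile(2) has been a zerocopy TX path on Linux for
a long time and is unrelated to MSG_ZEROCOPY or to this patch. A minimal
illustrative sketch (not from the series):

#include <sys/sendfile.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Push a whole file to a socket without bouncing the data through
 * userspace buffers. */
static int send_file_zerocopy(int sock_fd, int file_fd)
{
    struct stat st;
    off_t off = 0;

    if (fstat(file_fd, &st) < 0) {
        return -1;
    }
    while (off < st.st_size) {
        ssize_t n = sendfile(sock_fd, file_fd, &off, st.st_size - off);
        if (n <= 0) {
            return -1;
        }
    }
    return 0;
}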

Thanks



From the safe side we may want to only enable one of them until we prove
they'll work together, I guess.

Not an immediate concern, as I don't think any of them is really
explicitly supported in QEMU yet.

kTLS may be implicitly included by a newer gnutls, but we need to mark TLS and
ZEROCOPY as mutually exclusive anyway, because at least the userspace TLS code of
gnutls won't have a way to maintain the TLS buffers used by zerocopy.  So at
least we need some knob to detect whether kTLS is enabled in gnutls.




