Re: [PATCH v1 3/3] migration: multifd: Enable zerocopy


From: Dr. David Alan Gilbert
Subject: Re: [PATCH v1 3/3] migration: multifd: Enable zerocopy
Date: Tue, 7 Sep 2021 12:17:17 +0100
User-agent: Mutt/2.0.7 (2021-05-04)

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Thu, Sep 02, 2021 at 05:52:15AM -0300, Leonardo Bras Soares Passos wrote:
> > On Thu, Sep 2, 2021 at 5:21 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
> > >
> > > On Thu, Sep 02, 2021 at 04:22:55AM -0300, Leonardo Bras Soares Passos wrote:
> > > > Hello Daniel, thanks for the feedback!
> > > >
> > > > On Tue, Aug 31, 2021 at 10:17 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
> > > > >
> > > > > On Tue, Aug 31, 2021 at 08:02:39AM -0300, Leonardo Bras wrote:
> > > > > > Call qio_channel_set_zerocopy(true) at the start of every multifd thread.
> > > > > >
> > > > > > Change the send_write() interface of multifd, allowing it to pass down
> > > > > > flags for qio_channel_write*().
> > > > > >
> > > > > > Pass down the MSG_ZEROCOPY flag for sending memory pages, while keeping the
> > > > > > other data being sent with the default copying approach.
> > > > > >
> > > > > > Signed-off-by: Leonardo Bras <leobras@redhat.com>
> > > > > > ---
> > > > > >  migration/multifd-zlib.c | 7 ++++---
> > > > > >  migration/multifd-zstd.c | 7 ++++---
> > > > > >  migration/multifd.c      | 9 ++++++---
> > > > > >  migration/multifd.h      | 3 ++-
> > > > > >  4 files changed, 16 insertions(+), 10 deletions(-)
> > > > >
> > > > > > @@ -675,7 +676,8 @@ static void *multifd_send_thread(void *opaque)
> > > > > >              }
> > > > > >
> > > > > >              if (used) {
> > > > > > -                ret = multifd_send_state->ops->send_write(p, used, &local_err);
> > > > > > +                ret = multifd_send_state->ops->send_write(p, used, MSG_ZEROCOPY,
> > > > > > +                                                          &local_err);
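
For reference, the mechanism this relies on at the plain-socket level is
the SO_ZEROCOPY option plus the MSG_ZEROCOPY send flag (presumably what
qio_channel_set_zerocopy() arranges underneath); a rough, untested sketch
with a hypothetical helper, not the patch's code:

    #include <sys/socket.h>

    static int enable_zerocopy(int sockfd)
    {
        int one = 1;

        /* Must be set before any sendmsg(..., MSG_ZEROCOPY); needs a
         * Linux >= 4.14 kernel and matching headers for SO_ZEROCOPY. */
        return setsockopt(sockfd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one));
    }

Note that after a zerocopy send the pages must stay untouched until the
kernel posts a completion on the socket's error queue, which is where the
pinned-memory cost discussed below comes in.
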
> > > > >
> > > > > I don't think it is valid to unconditionally enable this feature due to
> > > > > the resource usage implications
> > > > >
> > > > > https://www.kernel.org/doc/html/v5.4/networking/msg_zerocopy.html
> > > > >
> > > > >   "A zerocopy failure will return -1 with errno ENOBUFS. This happens
> > > > >    if the socket option was not set, the socket exceeds its optmem
> > > > >    limit or the user exceeds its ulimit on locked pages."
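
The fallback that documentation implies could look roughly like this at
the socket level (illustrative only, not QEMU code; the helper name is
made up):

    #include <errno.h>
    #include <sys/socket.h>

    static ssize_t send_maybe_zerocopy(int sockfd, struct msghdr *msg)
    {
        /* MSG_ZEROCOPY needs Linux >= 4.14 headers */
        ssize_t ret = sendmsg(sockfd, msg, MSG_ZEROCOPY);

        if (ret < 0 && errno == ENOBUFS) {
            /* optmem or locked-page limit hit, or SO_ZEROCOPY not set:
             * retry the same data as an ordinary copying send. */
            ret = sendmsg(sockfd, msg, 0);
        }
        return ret;
    }
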
> > > >
> > > > You are correct, I unfortunately missed this part in the docs :(
> > > >
> > > > > The limit on locked pages is something that looks very likely to be
> > > > > exceeded unless you happen to be running a QEMU config that already
> > > > > implies locked memory (e.g. PCI assignment)
> > > >
> > > > Do you mean the limit a user has on locking memory?
> > >
> > > Yes, by default the limit QEMU sees will be something very small.
> > >
> > > > If so, that makes sense. I remember I needed to set the upper limit of
> > > > locked memory for the user before using it, or to add a capability to
> > > > qemu first.
> > > >
> > > > Maybe an option would be trying to mlock all guest memory before setting
> > > > zerocopy=on in qemu code. If it fails, we can print an error message and
> > > > fall back to not using zerocopy (following the idea of a new
> > > > io_async_writev() I mentioned in the previous mail).
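
A rough illustration of that "lock first, else fall back" idea, with a
hypothetical helper name and no claim about where it would sit in QEMU:

    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/mman.h>

    static bool try_lock_guest_ram(void *guest_ram, size_t ram_size)
    {
        if (mlock(guest_ram, ram_size) != 0) {
            /* RLIMIT_MEMLOCK too low or CAP_IPC_LOCK missing: report it
             * and stay on the default copying path. */
            perror("mlock of guest RAM failed, disabling zerocopy");
            return false;
        }
        return true;
    }
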
> > >
> > > Currently the ability to lock memory is something that has to be configured
> > > when QEMU starts, and it requires libvirt to grant suitable permissions
> > > to QEMU. Memory locking is generally undesirable because it prevents
> > > memory overcommit. Or rather if you are allowing memory overcommit, then
> > > allowing memory locking is a way to kill your entire host.
> > 
> > You mean it's gonna consume too much memory, or something else?
> 
> Essentially yes. 
> 
> > > I don't think we can unconditionally grant ability to lock arbitrary
> > > guest RAM at startup, just to satisfy a possible desire to use zerocopy
> > > migration later. Granting it at runtime feels questionable as you now
> > > need to track and predict how much locked memory you've allowed, and
> > > also have possible problems with revocation.
> > 
> > (I am really new to this, so please forgive me if I am asking dumb or
> > overly basic questions)
> > 
> > What does revocation mean in this context?
> > You give the process the ability to lock n MB of memory, and then you take it away?
> > Why would that happen? Is locked memory a limited resource?
> 
> Consider a VM host with 64 GB of RAM and 64 GB of swap, and an
> overcommit ratio of 1.5, i.e. we'll run VMs with 64*1.5 = 96 GB of RAM
> total.
> 
> So we can run 3 VMs each with 32 GB of RAM, giving 96 GB of usage
> which exceeds physical RAM. Most of the time this may well be fine
> as the VMs don't concurrently need their full RAM allocation, and
> worst case they'll get pushed to swap as the kernel re-shares
> memory in response to load. So perhaps each VM only needs 20 GB
> resident at any time, but over time one VM can burst up to 32 GB
> and then 12 GB of it gets swapped out later when inactive.
> 
> But now consider if we allowed 2 of the VMs to lock memory for
> purposes of migration. Those 2 VMs can now pin 64 GB of memory
> in the worst case, leaving no free memory for the 3rd VM or
> for the OS. This will likely take down the entire host, regardless
> of swap availability.
> 
> IOW, if you are overcommitting RAM you have to be extremely
> careful about allowing any VM to lock memory. If you do decide
> to allow memory locking, you need to make sure that the worst
> case locked memory amount still leaves enough unlocked memory
> for the OS to be able to effectively manage the overcommit
> load via swap.  We definitely can't grant memory locking to
> VMs at startup in this scenario, and if we grant it at runtime,
> we need to be able to revoke it again later.
> 
> These overcommit numbers are a bit more extreme than you'd
> usually use in the real world, but they illustrate the general
> problem. Also bear in mind that QEMU has memory overhead
> beyond the guest RAM block, which varies over time, making
> accounting quite hard. We also have to assume that QEMU
> could have been compromised by a guest breakout, so we
> can't assume that migration will play nice - we have to
> assume the worst case possible, given the process ulimits.

We already have the same problem for RDMA (although it has some
options for doing rolling locking in chunks, and recently there's
code for use with new cards that don't need locking).

I think the thing here is to offer zerocopy as an option; then people
can decide on the costs etc.

Dave

> 
> > > Overall the memory locking needs look like a significant constraint that
> > > will affect ability to use this feature.
> > >
> > 
> > I agree, there is a lot to take into account.
> > In any case, those constraints could be checked in the same function as
> > the setsockopt(), right?
> 
> QEMU could possibly check its ulimits to see if it is possible, but it
> would be safer to require a migration capability to be explicitly set
> by the mgmt app to opt in to zerocopy.
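
Such a check could be as simple as comparing RLIMIT_MEMLOCK against the
amount of guest RAM that would need pinning; an illustrative sketch only:

    #include <stdbool.h>
    #include <sys/resource.h>

    static bool memlock_limit_allows(size_t bytes_needed)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            return false;
        }
        return rl.rlim_cur == RLIM_INFINITY || rl.rlim_cur >= bytes_needed;
    }
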
> 
> > (Setting up configs to improve the chance of zerocopy would probably only
> > happen at/before qemu startup, right?)
> 
> Usually, but you can change ulimits on the fly for a process. I'm just
> not sure of the semantics if you reduce limits and existing usage exceeds
> the reduced value.
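
On Linux a management app can do that from outside the process with
prlimit(2); roughly:

    #define _GNU_SOURCE
    #include <sys/resource.h>
    #include <sys/types.h>

    static int raise_memlock_limit(pid_t qemu_pid, rlim_t bytes)
    {
        struct rlimit rl = { .rlim_cur = bytes, .rlim_max = bytes };

        /* Raising another process's hard limit needs CAP_SYS_RESOURCE;
         * pass a non-NULL last argument to read back the old limit. */
        return prlimit(qemu_pid, RLIMIT_MEMLOCK, &rl, NULL);
    }

though, as noted, what happens to pages already locked above a reduced
limit is a separate question.
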
> 
> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



