Re: [PATCH RFC] memory: Don't allow to resize RAM while migrating


From: Peter Xu
Subject: Re: [PATCH RFC] memory: Don't allow to resize RAM while migrating
Date: Fri, 14 Feb 2020 10:56:23 -0500

On Fri, Feb 14, 2020 at 10:17:07AM +0100, David Hildenbrand wrote:
> On 13.02.20 21:56, Peter Xu wrote:
> > On Thu, Feb 13, 2020 at 08:42:23PM +0100, David Hildenbrand wrote:
> >> On 13.02.20 19:32, Peter Xu wrote:
> >>> On Thu, Feb 13, 2020 at 06:20:16PM +0100, David Hildenbrand wrote:
> >>>> Resizing while migrating is dangerous and does not work as expected.
> >>>> The whole migration code works on the used_length of ram blocks and
> >>>> does not expect this to change at random points in time.
> >>>>
> >>>> Precopy: The ram block size must not change on the source after
> >>>> ram_save_setup(), i.e., as long as the guest is still running on the source.
> >>>>
> >>>> Postcopy: The ram block size must not change on the target, after
> >>>> synchronizing the RAM block list (ram_load_precopy()).
> >>>>
> >>>> AFAIKS, resizing can be triggered *after* (but not during) a reset in
> >>>> ACPI code by the guest
> >>>> - hw/arm/virt-acpi-build.c:acpi_ram_update()
> >>>> - hw/i386/acpi-build.c:acpi_ram_update()
> >>>
> >>> What can be the pre-condition of triggering this after reset?  I'm
> >>> thinking whether QEMU can be aware of this "resizing can happen"
> >>> condition, then we could simply stop the migration from happening even
> >>> before the resizing happens.  Thanks,
> >>
> >> I think the condition is not known before the guest actually tries to
> >> read the relevant memory areas (which triggers the rebuild+resize, and
> >> AFAIK, the new size depends on fw config done by the guest after the
> >> reset). So it's hard to "predict".
> > 
> > I chimed in without much context, sorry if I'm going to ask naive
> > questions. :)
> 
> I think the problem is quite involved and not obvious, so there are no
> naive questions :)
> 
> > 
> > What I was asking about is why the resizing can happen.  A quick read
> > told me that it was mainly for easier extension of ROMs (firmware
> > updates?).  With that, I'm imagining a common case for memory
> > resizing...
> > 
> >   (1) Source QEMU runs VM on old host, with old firmware
> > 
> >   (2) Migrate source QEMU to destination new host, with new and bigger
> >       firmware
> > 
> >   (3) During the migration, the ROM size on the destination will still
> >       be the old one (referring to ram_load_precopy()), as long as there
> >       is no system reset
> > 
> >   (4) After migration finished, when the system reboots, memory
> >       resizing happens with the new and bigger firmware loaded
> 
> AFAIK it could trigger
> 
> a) In precopy during the second migration.
> b) In postcopy during the first migration.

After reading your reply - even the 1st precopy migration?  Say, when
the source QEMU resets and finds a changed FW during the precopy?

I'll comment postcopy below.

> 
> > 
> > And is this patch trying to fix/warn when there's a reboot during (3)
> > so the new size is discovered at a wrong time?  Is my understanding
> > correct?
> 
> It's trying to bail out early instead of failing at other random points
> (with an unclear outcome).

Yeah, I am just uncertain whether in some cases it could be a silent
success (used_length changed, but migration still completed without any
error reported) and now we're changing that into a VM crash... Could
that happen?

  - before the patch, when precopy triggers this,

    - when it didn't encounter an issue with the changed used_length,
      the change could get silently ignored.  Lucky enough & a good case.

    - when it triggered an error, precopy failed.  _However_, we can
      simply restart... so still not so bad.

  - after the patch, when precopy detects this, we abort
    immediately...  Which is really not good...

As you can see, that's the major thing I was worrying about...

And since used_length grows in most cases as you said (at least until
virtio-mem comes? :), I suspect that could be the major case where
there's a silent success.
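
To make the comparison concrete, I imagine the early bail-out being
discussed would look something like the sketch below (the helper names
are my assumptions; the actual RFC may place the check differently):

/* Sketch only, not the actual RFC: bail out of a resize early while a
 * migration is active, so callers that pass &error_abort fail loudly
 * instead of corrupting the migration stream later. */
int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, Error **errp)
{
    if (!migration_is_idle()) {
        error_setg(errp, "Cannot resize RAM block '%s' while migration "
                   "is in progress", block->idstr);
        return -EBUSY;
    }

    /* ... existing resize logic: adjust used_length, notify, etc. ... */
    return 0;
}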

> 
> >>
> >> In the precopy case it would be easier to abort (although not simple,
> >> AFAIKS), in the postcopy not so easy - because you're already partially
> >> running on the migration target.
> > 
> > Prior to this patch, would a precopy still survive with such an
> > accident (asked because I _feel_ like migrating a ramblock with
> > smaller used_length to the same ramblock with bigger used_length seems
> > to be fine?)?  Or we can stop the precopy and restart.  After this
> 
> I assume growing the region is the usual case (not shrinking). FW blobs
> tend to get bigger.
> 
> Migrating while growing a ram block on the source won't work. The source
> would try to send a dirty page that's outside of the used_length on the
> target, making e.g., ram_load_postcopy()/ram_load_precopy() fail with
> "Illegal RAM offset...".

Right.
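
For context, the load side refuses any offset beyond the block's
used_length, roughly like this (simplified from migration/ram.c; the
exact shape may differ between versions):

/* Simplified: pages past used_length are rejected, so a source that
 * grew the block sends offsets the target considers illegal. */
static inline void *host_from_ram_block_offset(RAMBlock *block,
                                               ram_addr_t offset)
{
    if (!block || offset >= block->used_length) {
        return NULL;    /* caller then reports "Illegal RAM offset ..." */
    }
    return block->host + offset;
}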

> 
> In the postcopy case, e.g., ram_dirty_bitmap_reload() will fail in case
> there is a mismatch between ram block size on source/target.

IMHO that's an extremely rare case; one example I can think of:

  - we start a postcopy after a precopy
  - system reset, noticed a firmware update
  - we got a network failure, postcopy interrupted
  - we try to recover a postcopy

So are you using postcopy recovery?  I will be surprised if so, because
then you'll be the first user I know of that really uses it, besides QE. :)
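
For completeness, the reload path compares a bitmap size derived from
the local used_length against what the source sent, so a resize in
between shows up as a mismatch; very roughly (simplified, from memory):

/* Simplified: the locally expected bitmap size is derived from the
 * block's used_length, so a block resized between the interrupted
 * migration and the recovery no longer matches the source. */
uint64_t nbits = block->used_length >> TARGET_PAGE_BITS;
uint64_t local_size = DIV_ROUND_UP(nbits, 8);

if (size != local_size) {
    error_report("ramblock '%s' bitmap size mismatch (0x%" PRIx64
                 " != 0x%" PRIx64 ")", block->idstr, size, local_size);
    return -EINVAL;
}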

> 
> Another issue is if the used_length changes while in ram_save_setup(),
> just between storing ram_bytes_total_common(true) and storing
> block->used_length. A mismatch will screw up the migration stream.

Yes, this seems to be another issue then.  IIUC the ramblocks are
protected by RCU, and the migration code has always held the read lock
there, so logically it should see a consistent view of the system
ramblocks in ram_save_setup().  IMHO the real inconsistency is that RCU
alone is not enough to protect against used_length changing for a ramblock.
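
To make that race concrete, the setup path writes the total and then
each block's length in separate steps under the same RCU read section,
roughly like this (simplified; exact macro names may differ):

/* Simplified sketch of ram_save_setup(): the total and each block's
 * used_length are read at different points in time, so a concurrent
 * resize can make them disagree even under the RCU read lock. */
WITH_RCU_READ_LOCK_GUARD() {
    qemu_put_be64(f, ram_bytes_total_common(true) | RAM_SAVE_FLAG_MEM_SIZE);

    RAMBLOCK_FOREACH_MIGRATABLE(block) {
        qemu_put_byte(f, strlen(block->idstr));
        qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
        qemu_put_be64(f, block->used_length);
    }
}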

> 
> But these are just the immediately visible issues. I am more concerned
> about used_length changing at random points in time, resulting in more
> harm. (e.g., non-obvious load-store tearing when accessing the used length)
> 
> Migration code is inherently racy when it comes to ram block resizes.
> And that might become more dangerous once we want to size the migration
> bitmaps smaller (used_length instead of max_length) or disallow access
> to ram blocks beyond the used_length. Both are things I am working on :)

Right.  Now I'm starting to wonder whether migration is the only
special case here.  I noticed at least we've got:

struct RAMBlockNotifier {
    void (*ram_block_added)(RAMBlockNotifier *n, void *host, size_t size);
    void (*ram_block_removed)(RAMBlockNotifier *n, void *host, size_t size);
    QLIST_ENTRY(RAMBlockNotifier) next;
};

I suspect all these users could also break in some way if a resize
happens.
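
One could imagine these users wanting a resize hook as well; a purely
hypothetical shape (no such callback exists today, the name is made up):

/* Hypothetical extension: let interested subsystems (migration
 * included) get notified before used_length actually changes. */
struct RAMBlockNotifier {
    void (*ram_block_added)(RAMBlockNotifier *n, void *host, size_t size);
    void (*ram_block_removed)(RAMBlockNotifier *n, void *host, size_t size);
    void (*ram_block_resized)(RAMBlockNotifier *n, void *host,
                              size_t old_size, size_t new_size);
    QLIST_ENTRY(RAMBlockNotifier) next;
};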

> 
> > patch, it'll crash the source VM (&error_abort specified in
> > memory_region_ram_resize()), which seems a bit more harsh?
> 
> There seems to be no easy way to abort migration from outside the
> migration thread. As Juan said, you actually don't want to fail
> migration but instead soft-abort migration and continue running the
> guest on the target on a reset. But that's not easy either.
> 
> One could think about extending ram block notifiers to notify migration
> code before resizes, so that migration code can work around the
> resize (how is TBD). Not easy either :)

True.  But as you can see, my worry still stands: whether such a patch
would make things worse by crashing when there could still be a chance
to survive.  Shall we loosen the penalty even if we want to warn the
user earlier?

> 
> But then, I am not sure
> a) If we run into this issue in real life a lot.

/me curious about this too.  I'd bet it's not happening a lot, and is
hardly ever noticed/triggered, since resizing has been there since
2014.  :)

> b) If we actually need an elaborate solution within QEMU to handle this
> case. For now, it's sufficient to restart the VM on the migration
> target. No data was lost. Not nice, but very simple.

Thanks,

-- 
Peter Xu



