From: Juan Quintela
Subject: Re: [PATCH v6 1/3] multifd: Create property multifd-flush-after-each-section
Date: Thu, 16 Feb 2023 18:13:38 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)

Markus Armbruster <armbru@redhat.com> wrote:
> Juan Quintela <quintela@redhat.com> writes:
>
>>>> @@ -478,6 +478,24 @@
>>>>  #                    should not affect the correctness of postcopy migration.
>>>>  #                    (since 7.1)
>>>>  #
>>>> +# @multifd-flush-after-each-section: flush every channel after each
>>>> +#                                    section sent.  This assures that
>>>> +#                                    we can't mix pages from one
>>>> +#                                    iteration through ram pages with
>
> RAM

OK.

>>>> +#                                    pages for the following
>>>> +#                                    iteration.  We really only need
>>>> +#                                    to do this flush after we have go
>
> to flush after we have gone

OK

>>>> +#                                    through all the dirty pages.
>>>> +#                                    For historical reasons, we do
>>>> +#                                    that after each section.  This is

> we flush after each section

OK

>>>> +#                                    suboptimal (we flush too many
>>>> +#                                    times).

> inefficient: we flush too often.

OK

>>>> +#                                    Default value is false.
>>>> +#                                    Setting this capability has no
>>>> +#                                    effect until the patch that
>>>> +#                                    removes this comment.
>>>> +#                                    (since 8.0)
>>>
>>> IMHO the core of this new "cap" is the new RAM_SAVE_FLAG_MULTIFD_FLUSH bit
>>> in the stream protocol, but it's not referenced here.  I would suggest
>>> simplifying the content but highlighting the core change:
>>
>> Actually it is the other way around.  What this capability will do is
>> _NOT_ use RAM_SAVE_FLAG_MULTIFD_FLUSH protocol.
>>
>>>  @multifd-lazy-flush:  When enabled, multifd will only do sync flush after
>
> Spell out "synchronous".

ok.

>>>                        each whole round of bitmap scan.  Otherwise it'll be
>
> Suggest to scratch "whole".

ok.

>>>                        done per RAM save iteration (which happens with a much
>>>                        higher frequency).
>
> Less detail than Juan's version.  I'm not sure how much detail is
> appropriate for QMP reference documentation.
>
>>>                        Please consider enabling this as long as possible, and
>>>                        keep this off only if either the src or dst QEMU binary
>>>                        doesn't support it.
>
> Clear guidance on how to use it, good!
>
> Perhaps state it more forcefully: "Enable this when both source and
> destination support it."
>
>>>
>>>                        This capability is bound to the new RAM save flag
>>>                        RAM_SAVE_FLAG_MULTIFD_FLUSH; the new flag will only
>>>                        be used and recognized when this feature bit is set.
>
> Is RAM_SAVE_FLAG_MULTIFD_FLUSH visible in the QMP interface?  Or in the
> migration stream?

No.  Only in the migration stream.
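
To make "only in the migration stream" concrete, here is a rough sketch
(the flag value and the exact call sites are illustrative only, not the
actual patch):

/* The flag is just another bit in the RAM section's per-record header
 * word, next to the existing RAM_SAVE_FLAG_* values in ram.c. */
#define RAM_SAVE_FLAG_MULTIFD_FLUSH  0x200   /* value is illustrative */

/* Source side: record "all multifd channels must be drained here". */
qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_FLUSH);

/* Destination side, while parsing flags in ram_load(): wait for every
 * multifd channel before applying any later pages. */
if (flags & RAM_SAVE_FLAG_MULTIFD_FLUSH) {
    multifd_recv_sync_main();
}

QMP never sees this value; it only selects (via the capability) whether
the source emits it at all.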

> I'm asking because doc comments are QMP reference documentation, but
> when writing them, it's easy to mistake them for internal documentation,
> because, well, they're comments.

>> Name is wrong.  It would be multifd-non-lazy-flush.  And I don't like
>> negatives.  Real name is:
>>
>> multifd-I-messed-and-flush-too-many-times.
>
> If you don't like "non-lazy", say "eager".

More than eager, it is unnecessary.

>>> I know you dislike multifd-lazy-flush, but that's still the best I can come
>>> up with when writing this (yeah I still like it :-p), please bear with me
>>> and take whatever you think is best.
>>
>> Libvirt assumes that all capabilities are false unless enabled.
>> We want RAM_SAVE_FLAG_MULTIFD_FLUSH by default (in new machine types).
>>
>> So, if we can do
>>
>> capability_use_new_way = true
>>
>> We change that to
>>
>> capability_use_old_way = true
>>
>> And then the default value of false is what we want.
>
> Eventually, all supported migration peers will support lazy flush.  What
> then?  Will we flip the default?  Or will we ignore the capability and
> always flush lazily?

I have to take a step back.  Bear with me.

How do we fix problems in migration that make the stream incompatible?
We create a property.

static Property migration_properties[] = {
    ...
    DEFINE_PROP_BOOL("decompress-error-check", MigrationState,
                      decompress_error_check, true),
    ....
}

In this case it is true by default.

GlobalProperty hw_compat_2_12[] = {
    { "migration", "decompress-error-check", "off" },
    ...
};

We introduced it for whatever machine type is newer than 2_12.
Then we make it "off" for older machine types; that way we make sure
that migration from an old QEMU to a new QEMU works.

And we can even let libvirt, if it knows that both QEMUs are new, set
the property to true even for old machine types.
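
Applied to this multifd flush case, it would look roughly like this (the
property name, the field name and the use of the 7.2 compat array are
only placeholders for whatever we settle on):

static Property migration_properties[] = {
    ...
    /* true (flush only once per dirty-bitmap pass) for new machine types */
    DEFINE_PROP_BOOL("multifd-lazy-flush", MigrationState,
                     multifd_lazy_flush, true),
    ...
};

GlobalProperty hw_compat_7_2[] = {
    /* older machine types keep the historical per-section flush */
    { "migration", "multifd-lazy-flush", "off" },
    ...
};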

So, what we have:

Machine 2_13 and newer use the new code.
Machine 2_12 and older use the old code (by default).
We _can_ migrate machine 2_12 with new code, but we need to set it up
correctly on both sides.
We can run the old code with machine type 2_13, but I admit that this
is only useful for testing, debugging, measuring performance, etc.

So, the idea here is that we flush a lot of times for old machine types,
and we only flush when needed for new machine types.  Libvirt (or
whoever) can use the new method if it sees fit, just by setting the
capability.
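
In ram.c the check then boils down to something like this (only a
sketch; the accessor name and the "end of a dirty-bitmap pass"
condition are made up for illustration):

if (!migrate_multifd_lazy_flush()) {
    /* Old machine types: keep the historical flush after every
     * section, even though it is more often than needed. */
    multifd_send_sync_main(f);
} else if (end_of_bitmap_pass) {
    /* New machine types: flush only once per complete pass through
     * the dirty pages, and tell the destination about it. */
    multifd_send_sync_main(f);
    qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_FLUSH);
}

migrate_multifd_lazy_flush() would just be the usual accessor that reads
the property/capability from the MigrationState.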

Now that I have written this out, I can switch back to a property
instead of a capability:
- I can have any default value that I want.
- So I can name it multifd_lazy_flush or whatever.

Later, Juan.



