From: Markus Armbruster
Subject: Re: [PATCH v6 1/3] multifd: Create property multifd-flush-after-each-section
Date: Thu, 16 Feb 2023 16:15:35 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)

Juan Quintela <quintela@redhat.com> writes:

> Peter Xu <peterx@redhat.com> wrote:
>> On Wed, Feb 15, 2023 at 07:02:29PM +0100, Juan Quintela wrote:
>>> We used to flush all channels at the end of each RAM section
>>> sent.  That is not needed, so preparing to only flush after a full
>>> iteration through all the RAM.
>>> 
>>> Default value of the property is false.  But we return "true" in
>>> migrate_multifd_flush_after_each_section() until we implement the code
>>> in following patches.
>>> 
>>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>>
>> This line can be dropped, after (I assume) git commit helped to add the
>> other one below. :)
>
> Grr, git and trailers are always so much fun.  Will try to fix them (again).
>
>>
>> Normally that's also why I put R-bs before my SoB: I should have two
>> SoBs, but I merge them into the last one; git is happy with that too.
>>
>>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>>
>> Acked-by: Peter Xu <peterx@redhat.com>
>
> Thanks.
>
>> But some nitpicks below (I'll leave those to you to decide whether to
>> rework or keep them as is..).
>>
>>>
>>> ---
>>> 
>>> Rename each-iteration to after-each-section
>>> Rename multifd-sync-after-each-section to
>>>        multifd-flush-after-each-section
>>> ---
>>>  qapi/migration.json   | 21 ++++++++++++++++++++-
>>>  migration/migration.h |  1 +
>>>  hw/core/machine.c     |  1 +
>>>  migration/migration.c | 17 +++++++++++++++--
>>>  4 files changed, 37 insertions(+), 3 deletions(-)
>>> 
>>> diff --git a/qapi/migration.json b/qapi/migration.json
>>> index c84fa10e86..3afd81174d 100644
>>> --- a/qapi/migration.json
>>> +++ b/qapi/migration.json
>>> @@ -478,6 +478,24 @@
>>>  #                    should not affect the correctness of postcopy migration.
>>>  #                    (since 7.1)
>>>  #
>>> +# @multifd-flush-after-each-section: flush every channel after each
>>> +#                                    section sent.  This assures that
>>> +#                                    we can't mix pages from one
>>> +#                                    iteration through ram pages with

RAM

>>> +#                                    pages for the following
>>> +#                                    iteration.  We really only need
>>> +#                                    to do this flush after we have go

to flush after we have gone

>>> +#                                    through all the dirty pages.
>>> +#                                    For historical reasons, we do
>>> +#                                    that after each section.  This is

we flush after each section

>>> +#                                    suboptimal (we flush too many
>>> +#                                    times).

inefficient: we flush too often.

>>> +#                                    Default value is false.
>>> +#                                    Setting this capability has no
>>> +#                                    effect until the patch that
>>> +#                                    removes this comment.
>>> +#                                    (since 8.0)
>>
>> IMHO the core of this new "cap" is the new RAM_SAVE_FLAG_MULTIFD_FLUSH bit
>> in the stream protocol, but it's not referenced here.  I would suggest
>> simplifying the content but highlighting the core change:
>
> Actually it is the other way around.  What this capability will do is
> _NOT_ use RAM_SAVE_FLAG_MULTIFD_FLUSH protocol.
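For readers following along, a sketch of that relationship: the accessor
name comes from this patch, its temporary body follows the commit message,
and the gating in the RAM save path is an assumption about the follow-up
patches, not code from this series.

  /* migration/migration.c: hard-coded for now, per the commit message */
  bool migrate_multifd_flush_after_each_section(void)
  {
      /* Wired up to the real property in a follow-up patch. */
      return true;
  }

  /* migration/ram.c, roughly: with the capability off (the intended
   * default for new machine types), sync the channels once per full
   * pass over the dirty bitmap and emit the new stream flag. */
  if (!migrate_multifd_flush_after_each_section()) {
      multifd_send_sync_main(f);
      qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_FLUSH);
  }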
>
>>  @multifd-lazy-flush:  When enabled, multifd will only do sync flush after

Spell out "synchronous".

>>                        each whole round of bitmap scan.  Otherwise it'll be

Suggest to scratch "whole".

>>                        done per RAM save iteration (which happens with a much
>>                        higher frequency).

Less detail than Juan's version.  I'm not sure how much detail is
appropriate for QMP reference documentation.

>>                        Please consider enabling this whenever possible, and
>>                        keep it off only if either the src or dst QEMU binary
>>                        doesn't support it.

Clear guidance on how to use it, good!

Perhaps state it more forcefully: "Enable this when both source and
destination support it."

>>
>>                        This capability is bound to the new RAM save flag
>>                        RAM_SAVE_FLAG_MULTIFD_FLUSH; the new flag will only
>>                        be used and recognized when this feature bit is set.

Is RAM_SAVE_FLAG_MULTIFD_FLUSH visible in the QMP interface?  Or in the
migration stream?

I'm asking because doc comments are QMP reference documentation, but
when writing them, it's easy to mistake them for internal documentation,
because, well, they're comments.
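Peter's description above places it in the stream protocol: RAM_SAVE_FLAG_*
values are internal constants in migration/ram.c, ORed into the page-header
addresses on the wire, and never appear in QMP.  A sketch of how such a flag
is defined, with the numeric values assumed for illustration:

  /* migration/ram.c (sketch; values are assumptions) */
  #define RAM_SAVE_FLAG_ZERO           0x02
  #define RAM_SAVE_FLAG_PAGE           0x08
  #define RAM_SAVE_FLAG_EOS            0x10
  #define RAM_SAVE_FLAG_MULTIFD_FLUSH  0x200  /* new in this series */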

> Name is wrong.  It would be multifd-non-lazy-flush.  And I don't like
> negatives.  Real name is:
>
> multifd-I-messed-and-flush-too-many-times.

If you don't like "non-lazy", say "eager".

>> I know you dislike multifd-lazy-flush, but that's still the best I can come
>> up with when writing this (yeah I still like it :-p), please bear with me
>> and take whatever you think is best.
>
> Libvirt assumes that all capabilities are false except if enabled.
> We want RAM_SAVE_FLAG_MULTIFD_FLUSH by default (in new machine types).
>
> So, if we can do
>
> capability_use_new_way = true
>
> We change that to
>
> capability_use_old_way = true
>
> And then the default value of false is what we want.

Eventually, all supported migration peers will support lazy flush.  What
then?  Will we flip the default?  Or will we ignore the capability and
always flush lazily?
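For illustration, the "old way on for old machine types" that Juan describes
maps to a compat-property entry in hw/core/machine.c (a sketch; the array
name and placement are assumptions based on the diffstat):

  /* hw/core/machine.c (sketch): pin the old flush-per-section behaviour
   * on older machine types, so the capability's false default only takes
   * effect for new machine types. */
  static GlobalProperty compat[] = {
      { "migration", "multifd-flush-after-each-section", "on" },
  };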

[...]



