[PATCH v6 1/3] multifd: Create property multifd-flush-after-each-section
From: Juan Quintela
Subject: [PATCH v6 1/3] multifd: Create property multifd-flush-after-each-section
Date: Wed, 15 Feb 2023 19:02:29 +0100
We used to flush all channels at the end of each RAM section sent.
That is not needed, so this prepares for flushing only after a full
iteration through all the RAM.

The default value of the property is false, but we return "true" in
migrate_multifd_flush_after_each_section() until the code is
implemented in the following patches.
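For reference, once the transitional behavior is removed by the later
patches, this capability would be toggled like any other migration
capability through QMP's migrate-set-capabilities command. A sketch of
such a session (illustrative, not part of this patch):

```json
{ "execute": "migrate-set-capabilities",
  "arguments": {
    "capabilities": [
      { "capability": "multifd-flush-after-each-section",
        "state": false } ] } }
```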
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
Rename each-iteration to after-each-section
Rename multifd-sync-after-each-section to
multifd-flush-after-each-section
---
qapi/migration.json | 21 ++++++++++++++++++++-
migration/migration.h | 1 +
hw/core/machine.c | 1 +
migration/migration.c | 17 +++++++++++++++--
4 files changed, 37 insertions(+), 3 deletions(-)
diff --git a/qapi/migration.json b/qapi/migration.json
index c84fa10e86..3afd81174d 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -478,6 +478,24 @@
# should not affect the correctness of postcopy migration.
# (since 7.1)
#
+# @multifd-flush-after-each-section: flush every channel after each
+#                                    section sent.  This assures that
+#                                    we can't mix pages from one
+#                                    iteration through ram pages with
+#                                    pages for the following
+#                                    iteration.  We really only need
+#                                    to do this flush after we have
+#                                    gone through all the dirty pages.
+#                                    For historical reasons, we do
+#                                    that after each section.  This is
+#                                    suboptimal (we flush too many
+#                                    times).
+#                                    Default value is false.
+#                                    Setting this capability has no
+#                                    effect until the patch that
+#                                    removes this comment.
+#                                    (since 8.0)
+#
# Features:
# @unstable: Members @x-colo and @x-ignore-shared are experimental.
#
@@ -492,7 +510,8 @@
            'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
            { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
            'validate-uuid', 'background-snapshot',
-           'zero-copy-send', 'postcopy-preempt'] }
+           'zero-copy-send', 'postcopy-preempt',
+           'multifd-flush-after-each-section'] }
##
# @MigrationCapabilityStatus:
diff --git a/migration/migration.h b/migration/migration.h
index 2da2f8a164..7f0f4260ba 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -424,6 +424,7 @@ int migrate_multifd_channels(void);
MultiFDCompression migrate_multifd_compression(void);
int migrate_multifd_zlib_level(void);
int migrate_multifd_zstd_level(void);
+bool migrate_multifd_flush_after_each_section(void);
#ifdef CONFIG_LINUX
bool migrate_use_zero_copy_send(void);
diff --git a/hw/core/machine.c b/hw/core/machine.c
index f73fc4c45c..602e775f34 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -54,6 +54,7 @@ const size_t hw_compat_7_1_len = G_N_ELEMENTS(hw_compat_7_1);
 GlobalProperty hw_compat_7_0[] = {
     { "arm-gicv3-common", "force-8-bit-prio", "on" },
     { "nvme-ns", "eui64-default", "on"},
+    { "migration", "multifd-flush-after-each-section", "on"},
 };
 const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
diff --git a/migration/migration.c b/migration/migration.c
index 90fca70cb7..cfba0da005 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -167,7 +167,8 @@ INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
     MIGRATION_CAPABILITY_XBZRLE,
     MIGRATION_CAPABILITY_X_COLO,
     MIGRATION_CAPABILITY_VALIDATE_UUID,
-    MIGRATION_CAPABILITY_ZERO_COPY_SEND);
+    MIGRATION_CAPABILITY_ZERO_COPY_SEND,
+    MIGRATION_CAPABILITY_MULTIFD_FLUSH_AFTER_EACH_SECTION);
/* When we add fault tolerance, we could have several
migrations at once. For now we don't need to add
@@ -2701,6 +2702,17 @@ bool migrate_use_multifd(void)
return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
}
+bool migrate_multifd_flush_after_each_section(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    /*
+     * Until the patch that removes this comment, we always return that
+     * the capability is enabled.
+     */
+    return true || s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_FLUSH_AFTER_EACH_SECTION];
+}
+
bool migrate_pause_before_switchover(void)
{
MigrationState *s;
@@ -4535,7 +4547,8 @@ static Property migration_properties[] = {
     DEFINE_PROP_MIG_CAP("x-zero-copy-send",
                         MIGRATION_CAPABILITY_ZERO_COPY_SEND),
 #endif
-
+    DEFINE_PROP_MIG_CAP("multifd-flush-after-each-section",
+                        MIGRATION_CAPABILITY_MULTIFD_FLUSH_AFTER_EACH_SECTION),
     DEFINE_PROP_END_OF_LIST(),
 };
--
2.39.1