From: Vladimir Sementsov-Ogievskiy
Subject: [PATCH v4 18/23] qapi: backup: disable copy_range by default
Date: Sun, 17 Jan 2021 00:47:00 +0300

A further commit will add a benchmark
(scripts/simplebench/bench-backup.py), which will show that backup
works better with async parallel requests (added in the previous
commit) and with copy_range disabled. So, let's disable copy_range by
default.

Note: the option was added several commits ago with a default of true
to preserve the old behavior (the feature was enabled unconditionally);
only now do we change the default.
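
With the default flipped to false, a management application that still
wants copy offloading has to request it per job. A minimal QMP sketch,
assuming the performance tunables are exposed through the x-perf member
added earlier in this series (the job-id, device and target names below
are placeholders):

  { "execute": "blockdev-backup",
    "arguments": {
        "job-id": "backup0",
        "device": "drive0",
        "target": "tgt0",
        "sync": "full",
        "x-perf": { "use-copy-range": true } } }

Jobs that omit x-perf simply get the new defaults, i.e. use-copy-range
off and max-workers 64.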

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 qapi/block-core.json | 2 +-
 blockdev.c           | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index c0e9d119d2..933c2327c8 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1377,7 +1377,7 @@
 # Optional parameters for backup. These parameters don't affect
 # functionality, but may significantly affect performance.
 #
-# @use-copy-range: Use copy offloading. Default true.
+# @use-copy-range: Use copy offloading. Default false.
 #
 # @max-workers: Maximum number of parallel requests for the sustained background
 #               copying process. Doesn't influence copy-before-write operations.
diff --git a/blockdev.c b/blockdev.c
index 6db433cef8..41d1431210 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2794,7 +2794,7 @@ static BlockJob *do_backup_common(BackupCommon *backup,
 {
     BlockJob *job = NULL;
     BdrvDirtyBitmap *bmap = NULL;
-    BackupPerf perf = { .use_copy_range = true, .max_workers = 64 };
+    BackupPerf perf = { .max_workers = 64 };
     int job_flags = JOB_DEFAULT;
 
     if (!backup->has_speed) {
-- 
2.29.2
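
For reference, the blockdev.c hunk gets the new default from C
designated-initializer semantics rather than an explicit assignment:
members omitted from the initializer list are zero-initialized, so
use_copy_range ends up false. A standalone sketch of that behavior (the
struct below is an illustrative stand-in, not the QAPI-generated
definition):

  #include <assert.h>
  #include <stdbool.h>
  #include <stdint.h>

  /* Illustrative stand-in for the QAPI-generated BackupPerf struct. */
  typedef struct BackupPerf {
      bool use_copy_range;
      int64_t max_workers;
      int64_t max_chunk;
  } BackupPerf;

  int main(void)
  {
      /* Members not named in the initializer are zero-initialized,
       * so use_copy_range is false and max_chunk is 0. */
      BackupPerf perf = { .max_workers = 64 };

      assert(!perf.use_copy_range);
      assert(perf.max_workers == 64);
      assert(perf.max_chunk == 0);
      return 0;
  }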