Re: [PATCH v6 3/3] cpus-common: implement dirty page limit on vCPU


From: Hyman
Subject: Re: [PATCH v6 3/3] cpus-common: implement dirty page limit on vCPU
Date: Sat, 27 Nov 2021 00:24:16 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Thunderbird/91.3.2



On 2021/11/26 15:03, Markus Armbruster wrote:
huangy81@chinatelecom.cn writes:

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Implement periodic dirty page rate calculation based on the
dirty ring, and throttle the vCPU until it reaches the quota
dirty page rate given by the user.

Introduce qmp commands set-dirty-limit/cancel-dirty-limit to
set/cancel dirty page limit on vCPU.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
  cpus-common.c         | 41 +++++++++++++++++++++++++++++++++++++++++
  include/hw/core/cpu.h |  9 +++++++++
  qapi/migration.json   | 47 +++++++++++++++++++++++++++++++++++++++++++++++
  softmmu/vl.c          |  1 +
  4 files changed, 98 insertions(+)

diff --git a/cpus-common.c b/cpus-common.c
index 6e73d3e..3c156b3 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -23,6 +23,11 @@
  #include "hw/core/cpu.h"
  #include "sysemu/cpus.h"
  #include "qemu/lockable.h"
+#include "sysemu/dirtylimit.h"
+#include "sysemu/cpu-throttle.h"
+#include "sysemu/kvm.h"
+#include "qapi/error.h"
+#include "qapi/qapi-commands-migration.h"
static QemuMutex qemu_cpu_list_lock;
  static QemuCond exclusive_cond;
@@ -352,3 +357,39 @@ void process_queued_cpu_work(CPUState *cpu)
      qemu_mutex_unlock(&cpu->work_mutex);
      qemu_cond_broadcast(&qemu_work_cond);
  }
+
+void qmp_set_dirty_limit(int64_t idx,
+                         uint64_t dirtyrate,
+                         Error **errp)
+{
+    if (!kvm_enabled() || !kvm_dirty_ring_enabled()) {
+        error_setg(errp, "setting a dirty page limit requires support from dirty 
ring");

Can we phrase the message in a way that gives the user a chance to guess
what he needs to do to avoid it?
Perhaps: "setting a dirty page limit requires KVM with accelerator
property 'dirty-ring-size' set".

Sounds good, this makes things clearer.
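
For concreteness, the updated check could then read roughly like this
(just a sketch of the wording suggested above, not the final v7 code):

    if (!kvm_enabled() || !kvm_dirty_ring_enabled()) {
        error_setg(errp, "setting a dirty page limit requires KVM with"
                   " accelerator property 'dirty-ring-size' set");
        return;
    }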

+        return;
+    }
+
+    dirtylimit_calc();
+    dirtylimit_vcpu(idx, dirtyrate);
+}
+
+void qmp_cancel_dirty_limit(int64_t idx,
+                            Error **errp)
+{

Three cases:

Case 1: enable is impossible, so nothing to do.

Case 2: enable is possible and we actually enabled.

Case 3: enable is possible, but we didn't.  Nothing to do.

+    if (!kvm_enabled() || !kvm_dirty_ring_enabled()) {
+        error_setg(errp, "no need to cancel a dirty page limit as dirty ring not 
enabled");
+        return;

This is case 1.  We error out.

+    }
+
+    if (unlikely(!dirtylimit_cancel_vcpu(idx))) {

I don't think unlikely() matters here.

+        dirtylimit_calc_quit();
+    }

In case 2, dirtylimit_calc_quit() returns zero if this was the last
limit, else non-zero. If the former, we request the thread to stop.

I am wildly guessing you misunderstood the function
dirtylimit_cancel_vcpu, see below.

In case 3, dirtylimit_calc_quit() returns zero, and we do nothing.

In this case, we actually cancel the "dirtylimit thread" in the
function dirtylimit_cancel_vcpu. If it was the last limit thread of
the whole VM, dirtylimit_cancel_vcpu returns zero and we request the
dirty rate calculation thread to stop, so we call dirtylimit_calc_quit,
which stops the "dirtyrate calculation thread" internally.

Why is case 1 an error, but case 3 isn't?

Both could silently do nothing, like case 3 does now.

Both could error out, like case 1 does now.  A possible common error
message: "there is no dirty page limit to cancel".

I'd be okay with consistently doing nothing, and with consistently
erroring out.

+}
+
+void dirtylimit_setup(int max_cpus)
+{
+    if (!kvm_enabled() || !kvm_dirty_ring_enabled()) {
+        return;
+    }
+
+    dirtylimit_calc_state_init(max_cpus);
+    dirtylimit_state_init(max_cpus);
+}
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index e948e81..11df012 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -881,6 +881,15 @@ void end_exclusive(void);
   */
  void qemu_init_vcpu(CPUState *cpu);
+/**
+ * dirtylimit_setup:
+ *
+ * Initializes the global state of dirtylimit calculation and of
+ * dirtylimit itself. This prepares for the vCPU dirty page limit,
+ * which can be triggered at any time during the VM's lifecycle.
+ */
+void dirtylimit_setup(int max_cpus);
+
  #define SSTEP_ENABLE  0x1  /* Enable simulated HW single stepping */
  #define SSTEP_NOIRQ   0x2  /* Do not use IRQ while single stepping */
  #define SSTEP_NOTIMER 0x4  /* Do not Timers while single stepping */
diff --git a/qapi/migration.json b/qapi/migration.json
index bbfd48c..2b0fe19 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1850,6 +1850,53 @@
  { 'command': 'query-dirty-rate', 'returns': 'DirtyRateInfo' }
##
+# @set-dirty-limit:
+#
+# Set the upper limit of dirty page rate for a vCPU.
+#
+# This command could be used to cap the vCPU memory load, which is also

"Could be used" suggests there are other uses.  I don't think there are.
+# refered as "dirty page rate". Users can use set-dirty-limit unconditionally,
+# but if one want to know which vCPU is in high memory load and which vCPU

"one wants"

+# should be limited, using calc-dirty-rate command with dirty-ring mode maybe

"may be"

+# an availiable method. Note that this command requires support from dirty

"available".  Consider using a spell checker.

+# ring, which means property "dirty-ring-size" of accelerator object "kvm"
+# must be set.

Please limit doc comment line length to approximately 70 characters for
legibility.

Here's my try:

    ##
    # @set-dirty-limit:
    #
    # Set the upper limit of dirty page rate for a virtual CPU.
    #
    # Requires KVM with accelerator property "dirty-ring-size" set.
    # A virtual CPU's dirty page rate is a measure of its memory load.
    # To observe dirty page rates, use @calc-dirty-rate.

+#
+# @cpu-index: vCPU index to set dirty page limit.

I'd go with

    # @cpu-index: index of the virtual CPU.

Indeed, this makes the document more legible. Thank you very much, Markus, for the careful code review; I'll pick up all of this in the next version.
+#
+# @dirty-rate: upper limit for the specified vCPU's dirty page rate (MB/s)
+#
+# Since: 7.0
+#
+# Example:
+#   {"execute": "set-dirty-limit"}
+#    "arguments": { "cpu-index": 0,
+#                   "dirty-rate": 200 } }
+#
+##
+{ 'command': 'set-dirty-limit',
+  'data': { 'cpu-index': 'int', 'dirty-rate': 'uint64' } }
+
+##
+# @cancel-dirty-limit:
+#
+# Cancel the dirty page limit for the vCPU previously set with the
+# set-dirty-limit command. Note that this command requires support
+# from dirty ring, just like the "set-dirty-limit" command.
+#
+# @cpu-index: vCPU index to cancel the dirty page limit
+#
+# Since: 7.0
+#
+# Example:
+#   {"execute": "cancel-dirty-limit"}
+#    "arguments": { "cpu-index": 0 } }
+#
+##
+{ 'command': 'cancel-dirty-limit',
+  'data': { 'cpu-index': 'int' } }
+
+##
  # @snapshot-save:
  #
  # Save a VM snapshot
diff --git a/softmmu/vl.c b/softmmu/vl.c
index 620a1f1..0f83ce3 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -3777,5 +3777,6 @@ void qemu_init(int argc, char **argv, char **envp)
      qemu_init_displays();
      accel_setup_post(current_machine);
      os_setup_post();
+    dirtylimit_setup(current_machine->smp.max_cpus);
      resume_mux_open();
  }
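
For anyone who wants to try the series, a quick end-to-end example,
assuming the patches are applied as posted (the dirty-ring-size value
and the empty returns are only illustrative):

    # start QEMU with KVM and the dirty ring enabled
    qemu-system-x86_64 -accel kvm,dirty-ring-size=4096 ...

    # then, over QMP:
    -> { "execute": "set-dirty-limit",
         "arguments": { "cpu-index": 0, "dirty-rate": 200 } }
    <- { "return": {} }

    -> { "execute": "cancel-dirty-limit",
         "arguments": { "cpu-index": 0 } }
    <- { "return": {} }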



