qemu-devel

Re: [PATCH RFC 8/9] KVM: Add dirty-ring-size property


From: Dr. David Alan Gilbert
Subject: Re: [PATCH RFC 8/9] KVM: Add dirty-ring-size property
Date: Wed, 25 Mar 2020 20:00:31 +0000
User-agent: Mutt/1.13.3 (2020-01-12)

* Peter Xu (address@hidden) wrote:
> Add a parameter for size of dirty ring.  If zero, dirty ring is
> disabled.  Otherwise dirty ring will be enabled with the per-vcpu size
> as specified.  If dirty ring cannot be enabled due to unsupported
> kernel, it'll fallback to dirty logging.  By default, dirty ring is
> not enabled (dirty-ring-size==0).
> 
> Signed-off-by: Peter Xu <address@hidden>
> ---
>  accel/kvm/kvm-all.c | 64 +++++++++++++++++++++++++++++++++++++++++++++
>  qemu-options.hx     |  3 +++
>  2 files changed, 67 insertions(+)
> 
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index ea7b8f7ca5..6d145a8b98 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -127,6 +127,8 @@ struct KVMState
>          KVMMemoryListener *ml;
>          AddressSpace *as;
>      } *as;
> +    int kvm_dirty_ring_size;
> +    int kvm_dirty_gfn_count;    /* If nonzero, then kvm dirty ring enabled */
>  };
>  
>  KVMState *kvm_state;
> @@ -2077,6 +2079,33 @@ static int kvm_init(MachineState *ms)
>      s->memory_listener.listener.coalesced_io_add = kvm_coalesce_mmio_region;
>      s->memory_listener.listener.coalesced_io_del = kvm_uncoalesce_mmio_region;
>  
> +    /*
> +     * Enable KVM dirty ring if supported, otherwise fall back to
> +     * dirty logging mode
> +     */
> +    if (s->kvm_dirty_ring_size > 0) {
> +        /* Read the max supported pages */
> +        ret = kvm_vm_check_extension(kvm_state, KVM_CAP_DIRTY_LOG_RING);
> +        if (ret > 0) {
> +            if (s->kvm_dirty_ring_size > ret) {
> +                error_report("KVM dirty ring size %d too big (maximum is %d). "
> +                             "Please use a smaller value.",
> +                             s->kvm_dirty_ring_size, ret);
> +                goto err;
> +            }
> +
> +            ret = kvm_vm_enable_cap(s, KVM_CAP_DIRTY_LOG_RING, 0,
> +                                    s->kvm_dirty_ring_size);
> +            if (ret) {
> +                error_report("Enabling of KVM dirty ring failed: %d", ret);
> +                goto err;
> +            }
> +
> +            s->kvm_dirty_gfn_count =
> +                s->kvm_dirty_ring_size / sizeof(struct kvm_dirty_gfn);

What happens if I were to pass dirty-ring-size=1?
Then the count would be 0, and things would get upset somewhere.
Do you need to check for a minimum positive value?
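
For illustration, a minimal sketch of such a check in kvm_init(), reusing the
names from the patch above; the lower bound of one kvm_dirty_gfn entry is only
an assumption here (the kernel side may well require stricter alignment or a
power-of-two size):

/* Illustrative only: make sure the ring can hold at least one entry,
 * so the computed kvm_dirty_gfn_count cannot end up as zero. */
if (s->kvm_dirty_ring_size < sizeof(struct kvm_dirty_gfn)) {
    error_report("KVM dirty ring size %d too small (minimum is %zu bytes)",
                 s->kvm_dirty_ring_size, sizeof(struct kvm_dirty_gfn));
    goto err;
}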

> +        }
> +    }
> +
>      kvm_memory_listener_register(s, &s->memory_listener,
>                                   &address_space_memory, 0);
>      memory_listener_register(&kvm_io_listener,
> @@ -3037,6 +3066,33 @@ bool kvm_kernel_irqchip_split(void)
>      return kvm_state->kernel_irqchip_split == ON_OFF_AUTO_ON;
>  }
>  
> +static void kvm_get_dirty_ring_size(Object *obj, Visitor *v,
> +                                    const char *name, void *opaque,
> +                                    Error **errp)
> +{
> +    KVMState *s = KVM_STATE(obj);
> +    int64_t value = s->kvm_dirty_ring_size;
> +
> +    visit_type_int(v, name, &value, errp);
> +}
> +
> +static void kvm_set_dirty_ring_size(Object *obj, Visitor *v,
> +                                    const char *name, void *opaque,
> +                                    Error **errp)
> +{
> +    KVMState *s = KVM_STATE(obj);
> +    Error *error = NULL;
> +    int64_t value;
> +
> +    visit_type_int(v, name, &value, &error);
> +    if (error) {
> +        error_propagate(errp, error);
> +        return;
> +    }
> +
> +    s->kvm_dirty_ring_size = value;
> +}
> +
>  static void kvm_accel_instance_init(Object *obj)
>  {
>      KVMState *s = KVM_STATE(obj);
> @@ -3044,6 +3100,8 @@ static void kvm_accel_instance_init(Object *obj)
>      s->kvm_shadow_mem = -1;
>      s->kernel_irqchip_allowed = true;
>      s->kernel_irqchip_split = ON_OFF_AUTO_AUTO;
> +    /* By default off */
> +    s->kvm_dirty_ring_size = 0;
>  }
>  
>  static void kvm_accel_class_init(ObjectClass *oc, void *data)
> @@ -3065,6 +3123,12 @@ static void kvm_accel_class_init(ObjectClass *oc, void *data)
>          NULL, NULL, &error_abort);
>      object_class_property_set_description(oc, "kvm-shadow-mem",
>          "KVM shadow MMU size", &error_abort);
> +
> +    object_class_property_add(oc, "dirty-ring-size", "int",
> +        kvm_get_dirty_ring_size, kvm_set_dirty_ring_size,
> +        NULL, NULL, &error_abort);

I don't think someone passing in a non-number should cause an abort;
it should exit with an error rather than abort/core-dump.

> +    object_class_property_set_description(oc, "dirty-ring-size",
> +        "KVM dirty ring size (<=0 to disable)", &error_abort);
>  }
>  
>  static const TypeInfo kvm_accel_type = {
> diff --git a/qemu-options.hx b/qemu-options.hx
> index 224a8e8712..140bd38726 100644
> --- a/qemu-options.hx
> +++ b/qemu-options.hx
> @@ -119,6 +119,7 @@ DEF("accel", HAS_ARG, QEMU_OPTION_accel,
>      "                kernel-irqchip=on|off|split controls accelerated irqchip support (default=on)\n"
>      "                kvm-shadow-mem=size of KVM shadow MMU in bytes\n"
>      "                tb-size=n (TCG translation block cache size)\n"
> +    "                dirty-ring-size=n (KVM dirty ring size in Bytes)\n"
>      "                thread=single|multi (enable multi-threaded TCG)\n", QEMU_ARCH_ALL)
>  STEXI
>  @item -accel @var{name}[,prop=@var{value}[,...]]
> @@ -140,6 +141,8 @@ irqchip completely is not recommended except for debugging purposes.
>  Defines the size of the KVM shadow MMU.
>  @item tb-size=@var{n}
>  Controls the size (in MiB) of the TCG translation block cache.
> +@item dirty-ring-size=@val{n}
> +Controls the size (in Bytes) of KVM dirty ring (<=0 to disable).

I don't see the point in allowing < 0; I'd ban it.
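
As a hedged sketch (names taken from the patch; the exact policy is an
assumption): rejecting the value in the setter via errp would ban negative
sizes, and should also mean a bad value surfaces as a normal startup error
rather than an abort, assuming the front end propagates the setter's errp as
usual:

static void kvm_set_dirty_ring_size(Object *obj, Visitor *v,
                                    const char *name, void *opaque,
                                    Error **errp)
{
    KVMState *s = KVM_STATE(obj);
    Error *error = NULL;
    int64_t value;

    visit_type_int(v, name, &value, &error);
    if (error) {
        error_propagate(errp, error);
        return;
    }

    /* Illustrative check: treat 0 as "disabled" and refuse anything
     * negative, so the documentation can read "0 to disable". */
    if (value < 0) {
        error_setg(errp, "dirty-ring-size must be 0 (disabled) or a "
                   "positive number of bytes");
        return;
    }

    s->kvm_dirty_ring_size = value;
}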

Dave


>  @item thread=single|multi
>  Controls number of TCG threads. When the TCG is multi-threaded there will be one
>  thread per vCPU therefor taking advantage of additional host cores. The default
> -- 
> 2.24.1
> 
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK



