RE: [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code

From: Salil Mehta
Subject: RE: [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code
Date: Thu, 16 May 2024 12:12:29 +0000
Hi Harsh,
> From: Harsh Prateek Bora <harshpb@linux.ibm.com>
> Sent: Thursday, May 16, 2024 11:15 AM
>
> Hi Salil,
>
> Thanks for your email.
> Your patch 1/8 is included here based on review comments on my previous
> patch from one of the maintainers in the community. I had therefore kept
> you in CC so you are aware of the desire to have this independent patch
> merged earlier, even if the other patches in your series go through
> further reviews.
I really don't know which discussion you are pointing at. Please understand
that you are fixing a bug, while we are pushing a feature that spans a large
series. This will break the patch-set which is about to be merged.
It will also put a significant testing overhead on us for work we have been
carrying forward for a long time. This will be disruptive. Please don't!
>
> I am hoping to see your v9 soon and thereafter maintainer(s) may choose to
> pick the latest independent patch if it needs to be merged earlier.
I don't think you understand what problem this is causing. For the sake of a
small bug fix you are causing significant delays at our end.
Thanks
Salil.
>
> Thanks for your work and let's be hopeful it gets merged soon.
>
> regards,
> Harsh
>
> On 5/16/24 14:00, Salil Mehta wrote:
> > Hi Harsh,
> >
> > Thanks for your interest in the patch-set, but taking away patches like
> > this from another series without any discussion can disrupt others' work
> > and its timely acceptance. This is because we would have to put a lot of
> > effort into rebasing the bigger series, and the testing overhead comes
> > along with it.
> >
> > The patch-set (from which this patch has been taken) is part of an even
> > bigger series, and many people and companies have been toiling for years
> > to collectively fix the bugs in that series.
> >
> > I'm about to float the V9 version of the arch-agnostic series which this
> > patch is part of, and you can rebase your patch-set on top of it. I'm
> > hopeful that it will get accepted in this cycle.
> >
> >
> > Many thanks
> > Salil.
> >
> >> From: Harsh Prateek Bora <harshpb@linux.ibm.com>
> >> Sent: Thursday, May 16, 2024 6:32 AM
> >>
> >> From: Salil Mehta <salil.mehta@huawei.com>
> >>
> >> KVM vCPU creation is done once during the vCPU realization when Qemu
> >> vCPU thread is spawned. This is common to all the architectures as of
> >> now.
> >>
> >> Hot-unplug of vCPU results in destruction of the vCPU object in QOM but
> >> the corresponding KVM vCPU object in the Host KVM is not destroyed as
> >> KVM doesn't support vCPU removal. Therefore, its representative KVM
> >> vCPU object/context in Qemu is parked.
> >>
> >> Refactor architecture common logic so that some APIs could be reused by
> >> vCPU Hotplug code of some architectures like ARM, Loongson etc. Update
> >> new/old APIs with trace events instead of DPRINTF. No functional change
> >> is intended here.
> >>
> >> Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> >> Reviewed-by: Gavin Shan <gshan@redhat.com>
> >> Tested-by: Vishnu Pajjuri <vishnu@os.amperecomputing.com>
> >> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> >> Tested-by: Xianglai Li <lixianglai@loongson.cn>
> >> Tested-by: Miguel Luis <miguel.luis@oracle.com>
> >> Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
> >> [harshpb: fixed rebase failures in include/sysemu/kvm.h]
> >> Signed-off-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
> >> ---
> >>  include/sysemu/kvm.h   | 15 ++++++++++
> >>  accel/kvm/kvm-all.c    | 64 ++++++++++++++++++++++++++++++++----------
> >>  accel/kvm/trace-events |  5 +++-
> >>  3 files changed, 68 insertions(+), 16 deletions(-)
> >>
> >> diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
> >> index eaf801bc93..fa3ec74442 100644
> >> --- a/include/sysemu/kvm.h
> >> +++ b/include/sysemu/kvm.h
> >> @@ -434,6 +434,21 @@ void kvm_set_sigmask_len(KVMState *s, unsigned int sigmask_len);
> >>
> >> int kvm_physical_memory_addr_from_host(KVMState *s, void *ram_addr,
> >>                                        hwaddr *phys_addr);
> >> +/**
> >> + * kvm_create_vcpu - Gets a parked KVM vCPU or creates a KVM vCPU
> >> + * @cpu: QOM CPUState object for which KVM vCPU has to be fetched/created.
> >> + *
> >> + * @returns: 0 when success, errno (<0) when failed.
> >> + */
> >> +int kvm_create_vcpu(CPUState *cpu);
> >> +
> >> +/**
> >> + * kvm_park_vcpu - Park QEMU KVM vCPU context
> >> + * @cpu: QOM CPUState object for which QEMU KVM vCPU context has to be parked.
> >> + *
> >> + * @returns: none
> >> + */
> >> +void kvm_park_vcpu(CPUState *cpu);
> >>
> >> #endif /* COMPILING_PER_TARGET */
> >>
> >> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> >> index d7281b93f3..30d42847de 100644
> >> --- a/accel/kvm/kvm-all.c
> >> +++ b/accel/kvm/kvm-all.c
> >> @@ -128,6 +128,7 @@ static QemuMutex kml_slots_lock;
> >> #define kvm_slots_unlock() qemu_mutex_unlock(&kml_slots_lock)
> >>
> >> static void kvm_slot_init_dirty_bitmap(KVMSlot *mem);
> >> +static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id);
> >>
> >> static inline void kvm_resample_fd_remove(int gsi)
> >> {
> >> @@ -340,14 +341,53 @@ err:
> >>     return ret;
> >> }
> >>
> >> +void kvm_park_vcpu(CPUState *cpu)
> >> +{
> >> + struct KVMParkedVcpu *vcpu;
> >> +
> >> + trace_kvm_park_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> >> +
> >> + vcpu = g_malloc0(sizeof(*vcpu));
> >> + vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> >> + vcpu->kvm_fd = cpu->kvm_fd;
> >> + QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> >> +}
> >> +
> >> +int kvm_create_vcpu(CPUState *cpu)
> >> +{
> >> + unsigned long vcpu_id = kvm_arch_vcpu_id(cpu);
> >> + KVMState *s = kvm_state;
> >> + int kvm_fd;
> >> +
> >> + trace_kvm_create_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> >> +
> >> + /* check if the KVM vCPU already exists but is parked */
> >> + kvm_fd = kvm_get_vcpu(s, vcpu_id);
> >> + if (kvm_fd < 0) {
> >> + /* vCPU not parked: create a new KVM vCPU */
> >> + kvm_fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, vcpu_id);
> >> + if (kvm_fd < 0) {
> >> + error_report("KVM_CREATE_VCPU IOCTL failed for vCPU %lu", vcpu_id);
> >> + return kvm_fd;
> >> + }
> >> + }
> >> +
> >> + cpu->kvm_fd = kvm_fd;
> >> + cpu->kvm_state = s;
> >> + cpu->vcpu_dirty = true;
> >> + cpu->dirty_pages = 0;
> >> + cpu->throttle_us_per_full = 0;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> static int do_kvm_destroy_vcpu(CPUState *cpu)
> >> {
> >> KVMState *s = kvm_state;
> >> long mmap_size;
> >> - struct KVMParkedVcpu *vcpu = NULL;
> >> int ret = 0;
> >>
> >> - trace_kvm_destroy_vcpu();
> >> + trace_kvm_destroy_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> >>
> >> ret = kvm_arch_destroy_vcpu(cpu);
> >> if (ret < 0) {
> >> @@ -373,10 +413,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
> >> }
> >> }
> >>
> >> - vcpu = g_malloc0(sizeof(*vcpu));
> >> - vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> >> - vcpu->kvm_fd = cpu->kvm_fd;
> >> - QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> >> + kvm_park_vcpu(cpu);
> >> err:
> >> return ret;
> >> }
> >> @@ -397,6 +434,8 @@ static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
> >> if (cpu->vcpu_id == vcpu_id) {
> >> int kvm_fd;
> >>
> >> + trace_kvm_get_vcpu(vcpu_id);
> >> +
> >> QLIST_REMOVE(cpu, node);
> >> kvm_fd = cpu->kvm_fd;
> >> g_free(cpu);
> >> @@ -404,7 +443,7 @@ static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
> >> }
> >> }
> >>
> >> - return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
> >> + return -ENOENT;
> >> }
> >>
> >> int kvm_init_vcpu(CPUState *cpu, Error **errp)
> >> @@ -415,19 +454,14 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
> >>
> >> trace_kvm_init_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> >>
> >> - ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
> >> + ret = kvm_create_vcpu(cpu);
> >> if (ret < 0) {
> >> - error_setg_errno(errp, -ret, "kvm_init_vcpu: kvm_get_vcpu failed (%lu)",
> >> + error_setg_errno(errp, -ret,
> >> + "kvm_init_vcpu: kvm_create_vcpu failed (%lu)",
> >> kvm_arch_vcpu_id(cpu));
> >> goto err;
> >> }
> >>
> >> - cpu->kvm_fd = ret;
> >> - cpu->kvm_state = s;
> >> - cpu->vcpu_dirty = true;
> >> - cpu->dirty_pages = 0;
> >> - cpu->throttle_us_per_full = 0;
> >> -
> >> mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
> >> if (mmap_size < 0) {
> >> ret = mmap_size;
> >> diff --git a/accel/kvm/trace-events b/accel/kvm/trace-events
> >> index 681ccb667d..75c1724e78 100644
> >> --- a/accel/kvm/trace-events
> >> +++ b/accel/kvm/trace-events
> >> @@ -9,6 +9,10 @@ kvm_device_ioctl(int fd, int type, void *arg) "dev fd %d, type 0x%x, arg %p"
> >> kvm_failed_reg_get(uint64_t id, const char *msg) "Warning: Unable to retrieve ONEREG %" PRIu64 " from KVM: %s"
> >> kvm_failed_reg_set(uint64_t id, const char *msg) "Warning: Unable to set ONEREG %" PRIu64 " to KVM: %s"
> >> kvm_init_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
> >> +kvm_create_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
> >> +kvm_get_vcpu(unsigned long arch_cpu_id) "id: %lu"
> >> +kvm_destroy_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
> >> +kvm_park_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
> >> kvm_irqchip_commit_routes(void) ""
> >> kvm_irqchip_add_msi_route(char *name, int vector, int virq) "dev %s vector %d virq %d"
> >> kvm_irqchip_update_msi_route(int virq) "Updating MSI route virq=%d"
> >> @@ -25,7 +29,6 @@ kvm_dirty_ring_reaper(const char *s) "%s"
> >> kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped %"PRIu64" pages (took %"PRIi64" us)"
> >> kvm_dirty_ring_reaper_kick(const char *reason) "%s"
> >> kvm_dirty_ring_flush(int finished) "%d"
> >> -kvm_destroy_vcpu(void) ""
> >> kvm_failed_get_vcpu_mmap_size(void) ""
> >> kvm_cpu_exec(void) ""
> >> kvm_interrupt_exit_request(void) ""
> >> --
> >> 2.39.3
> >
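To illustrate the refactoring described in the commit message above (park the
in-kernel vCPU context on hot-unplug, reuse it on re-plug), here is a minimal
usage sketch in C. The example_arch_cpu_unplug()/example_arch_cpu_plug() hooks
are hypothetical placeholders for an architecture's hotplug glue and are not
part of the patch; only kvm_park_vcpu(), kvm_create_vcpu() and the existing
kvm_arch_vcpu_id() helper come from the code quoted above.

    /* Hypothetical usage sketch -- not from the patch itself. */
    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "sysemu/kvm.h"

    /*
     * Hot-unplug: the QOM CPU object goes away, but KVM cannot destroy the
     * in-kernel vCPU, so its fd/context is parked for later reuse.
     */
    static void example_arch_cpu_unplug(CPUState *cs)
    {
        /* Stashes cs->kvm_fd keyed by kvm_arch_vcpu_id(cs). */
        kvm_park_vcpu(cs);
    }

    /*
     * (Re-)plug: kvm_create_vcpu() first looks for a parked vCPU with the
     * same arch vCPU id and reuses its fd; only if none is found does it
     * issue a fresh KVM_CREATE_VCPU ioctl.
     */
    static int example_arch_cpu_plug(CPUState *cs, Error **errp)
    {
        int ret = kvm_create_vcpu(cs);

        if (ret < 0) {
            error_setg_errno(errp, -ret, "creating KVM vCPU %lu failed",
                             kvm_arch_vcpu_id(cs));
        }
        return ret;
    }

This mirrors how the patch itself wires the two helpers into kvm_init_vcpu()
and do_kvm_destroy_vcpu() for the common (non-hotplug) path.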