Re: [PATCH v4 03/10] kvm: dirty-ring: Fix race with vcpu creation
From: Peter Xu
Subject: Re: [PATCH v4 03/10] kvm: dirty-ring: Fix race with vcpu creation
Date: Tue, 4 Apr 2023 12:36:47 -0400
On Tue, Apr 04, 2023 at 06:08:41PM +0200, Paolo Bonzini wrote:
> On Tue, Apr 4, 2023, 16:11 Peter Xu <peterx@redhat.com> wrote:
>
> > Hi, Paolo!
> >
> > On Tue, Apr 04, 2023 at 03:32:38PM +0200, Paolo Bonzini wrote:
> > > On 2/16/23 17:18, huangy81@chinatelecom.cn wrote:
> > > > diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> > > > index 9b26582655..47483cdfa0 100644
> > > > --- a/accel/kvm/kvm-all.c
> > > > +++ b/accel/kvm/kvm-all.c
> > > > @@ -685,6 +685,15 @@ static uint32_t kvm_dirty_ring_reap_one(KVMState *s, CPUState *cpu)
> > > >      uint32_t ring_size = s->kvm_dirty_ring_size;
> > > >      uint32_t count = 0, fetch = cpu->kvm_fetch_index;
> > > > +    /*
> > > > +     * It's possible that we race with vcpu creation code where the vcpu is
> > > > +     * put onto the vcpus list but not yet initialized the dirty ring
> > > > +     * structures.  If so, skip it.
> > > > +     */
> > > > +    if (!cpu->created) {
> > > > +        return 0;
> > > > +    }
> > > > +
> > >
> > > Is there a lock that protects cpu->created?
> > >
> > > If you don't want to use a lock, you need to use qatomic_load_acquire()
> > > together with
> > >
> > > diff --git a/softmmu/cpus.c b/softmmu/cpus.c
> > > index fed20ffb5dd2..15b64e7f4592 100644
> > > --- a/softmmu/cpus.c
> > > +++ b/softmmu/cpus.c
> > > @@ -525,7 +525,7 @@ void qemu_cond_timedwait_iothread(QemuCond *cond, int ms)
> > >  /* signal CPU creation */
> > >  void cpu_thread_signal_created(CPUState *cpu)
> > >  {
> > > -    cpu->created = true;
> > > +    qatomic_store_release(&cpu->created, true);
> > >      qemu_cond_signal(&qemu_cpu_cond);
> > >  }
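
For completeness, my understanding is that the reader side in
kvm_dirty_ring_reap_one() would then pair with this store-release roughly
as in the sketch below (not a posted patch, just how I read the
suggestion):

    /*
     * Sketch only: pair with the store-release in cpu_thread_signal_created(),
     * so that seeing created == true also guarantees that the dirty ring
     * fields initialized before the signal are visible to the reaper.
     */
    if (!qatomic_load_acquire(&cpu->created)) {
        return 0;
    }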
> >
> > Makes sense.
> >
> > While looking at this possible race, I also found another relevant issue
> > on the destruction side: we only flip "vcpu->created" after the vcpu has
> > already been destroyed.  IIUC that means the same race can occur when a
> > vcpu is unplugged?
> >
> > Meanwhile I don't think the memory ordering trick alone works there,
> > because first we'd need to flip created to false before the destruction:
> >
> > -    kvm_destroy_vcpu(cpu);
> >      cpu_thread_signal_destroyed(cpu);
> > +    kvm_destroy_vcpu(cpu);
> >
> > And even if we order the operations, we still cannot assume the data is
> > safe to access just because we read created==true.
> >
>
> Yes, this would need some kind of synchronize_rcu() before clearing
> created, and rcu_read_lock() when reading the dirty ring.
>
> (Note that synchronize_rcu() can only be used outside the BQL.  The
> alternative would be to defer what's after created=false using call_rcu().)
>
> > Maybe we'd (unfortunately) need a per-vcpu mutex to protect both
> > cases?
>
>
> If RCU can work it's obviously better, but if not then yes. It's per-CPU so
> it's only about the complexity, not the overhead.
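
Just to make sure I read the RCU idea right, here's a rough sketch of how
it could look, simplified, and assuming synchronize_rcu() is only called on
the teardown path outside the BQL:

    /*
     * Reaper side: walk the vcpus inside an RCU read section, skipping any
     * vcpu whose dirty ring is not (or no longer) live.
     */
    rcu_read_lock();
    CPU_FOREACH(cpu) {
        if (qatomic_load_acquire(&cpu->created)) {
            total += kvm_dirty_ring_reap_one(s, cpu);
        }
    }
    rcu_read_unlock();

    /*
     * Teardown side: hide the vcpu first, wait for existing readers to
     * drain, then destroy the ring.
     */
    qatomic_store_release(&cpu->created, false);
    synchronize_rcu();
    kvm_destroy_vcpu(cpu);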
Oh.. I just noticed that both vcpu creation and destruction require the BQL,
and right now dirty ring reaping also runs with the BQL held (taken at all
callers of kvm_dirty_ring_reap()).. so I assume even the current patch is
already race-free?

I'm not sure whether that's ideal, though.  I think holding the BQL at least
guarantees there are no concurrent memory updates, so the slot IDs stay
valid during dirty ring reaping, but I can't remember the details.  That
seems to be a separate topic to discuss, anyway..
Thanks,
--
Peter Xu