qemu-devel
[Top][All Lists]
Advanced

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: Question on dirty sync before kvm memslot removal


From: Peter Xu
Subject: Re: Question on dirty sync before kvm memslot removal
Date: Thu, 2 Apr 2020 18:32:40 -0400

On Thu, Apr 02, 2020 at 04:47:58PM -0400, Peter Xu wrote:
> On Wed, Apr 01, 2020 at 07:09:28PM -0400, Peter Xu wrote:
> > On Wed, Apr 01, 2020 at 01:12:04AM +0200, Paolo Bonzini wrote:
> > > On 31/03/20 18:51, Peter Xu wrote:
> > > > On Tue, Mar 31, 2020 at 05:34:43PM +0200, Paolo Bonzini wrote:
> > > >> On 31/03/20 17:23, Peter Xu wrote:
> > > >>>> Or KVM_MEM_READONLY.
> > > >>> Yeah, I used a new flag because I thought READONLY was a bit tricky to
> > > >>> be used directly here.  The thing is IIUC if guest writes to a
> > > >>> READONLY slot then KVM should either ignore the write or trigger an
> > > >>> error which I didn't check, however here what we want to do is to let
> > > >>> the write to fallback to the userspace so it's neither dropped (we
> > > >>> still want the written data to land gracefully on RAM), nor triggering
> > > >>> an error (because the slot is actually writable).
> > > >>
> > > >> No, writes fall back to userspace with KVM_MEM_READONLY.
> > > > 
> > > > I read that __kvm_write_guest_page() will return -EFAULT when writting
> > > > to the read-only memslot, and e.g. kvm_write_guest_virt_helper() will
> > > > return with X86EMUL_IO_NEEDED, which will be translated into a
> > > > EMULATION_OK in x86_emulate_insn().  Then in x86_emulate_instruction()
> > > > it seems to get a "1" returned (note that I think it does not set
> > > > either vcpu->arch.pio.count or vcpu->mmio_needed).  Does that mean
> > > > it'll retry the write forever instead of quit into the userspace?  I
> > > > may possibly have misread somewhere, though..
> > > 
> > > We are definitely relying on KVM_MEM_READONLY to exit to userspace, in
> > > order to emulate flash memory.
> > > 
> > > > However... I think I might find another race with this:
> > > > 
> > > >           main thread                       vcpu thread
> > > >           -----------                       -----------
> > > >                                             dirty GFN1, cached in PML
> > > >                                             ...
> > > >           remove memslot1 of GFN1
> > > >             set slot READONLY (whatever, or INVALID)
> > > >             sync log (NOTE: no GFN1 yet)
> > > >                                             vmexit, flush PML with RCU
> > > >                                             (will flush to old bitmap) 
> > > > <------- [1]
> > > >             delete memslot1 (old bitmap freed)                         
> > > > <------- [2]
> > > >           add memslot2 of GFN1 (memslot2 could be smaller)
> > > >             add memslot2
> > > > 
> > > > I'm not 100% sure, but I think GFN1's dirty bit will be lost though
> > > > it's correctly applied at [1] but quickly freed at [2].
> > > 
> > > Yes, we probably need to do a mass vCPU kick when a slot is made
> > > READONLY, before KVM_SET_USER_MEMORY_REGION returns (and after releasing
> > > slots_lock).  It makes sense to guarantee that you can't get any more
> > > dirtying after KVM_SET_USER_MEMORY_REGION returns.
> > 
> > Sounds doable.  Though we still need a synchronous way to kick vcpus
> > in KVM to make sure the PML is flushed before KVM_SET_MEMORY_REGION
> > returns, am I right?  Is there an existing good way to do this?
> 
> Paolo,
> 
> I'm not sure whether it's anything close to acceptable, but this is
> something I was thinking about below (pesudo code).  Do you think it
> makes any sense?  Thanks,
> 
> 8<-------------------------------------------
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 1b6d9ac9533c..437d669dde42 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8161,6 +8161,22 @@ void __kvm_request_immediate_exit(struct kvm_vcpu 
> *vcpu)
>  }
>  EXPORT_SYMBOL_GPL(__kvm_request_immediate_exit);
> 
> +void kvm_vcpu_sync(struct kvm_vcpu *vcpu)
> +{
> +       DECLARE_WAITQUEUE(wait, current);
> +
> +       add_wait_queue(&vcpu->sync_wq, &wait);
> +       set_current_state(TASK_UNINTERRUPTIBLE);
> +       kvm_make_request(KVM_REQ_SYNC_VCPU, vcpu);
> +       schedule();
> +       remove_wait_queue(&vcpu->sync_wq, &wait);
> +}
> +
> +void kvm_vcpu_sync_ack(struct kvm_vcpu *vcpu)
> +{
> +       wake_up(&vcpu->sync_wq);
> +}
> +
>  /*
>   * Returns 1 to let vcpu_run() continue the guest execution loop without
>   * exiting to the userspace.  Otherwise, the value will be returned to the
> @@ -8274,6 +8290,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>                         kvm_hv_process_stimers(vcpu);
>                 if (kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu))
>                         kvm_vcpu_update_apicv(vcpu);
> +               if (kvm_check_request(KVM_REQ_SYNC_VCPU, vcpu))
> +                       kvm_vcpu_sync_ack(vcpu);
>         }
> 
>         if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index f6a1905da9bf..e825d2e0a221 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -146,6 +146,7 @@ static inline bool is_error_page(struct page *page)
>  #define KVM_REQ_MMU_RELOAD        (1 | KVM_REQUEST_WAIT | 
> KVM_REQUEST_NO_WAKEUP)
>  #define KVM_REQ_PENDING_TIMER     2
>  #define KVM_REQ_UNHALT            3
> +#define KVM_REQ_SYNC_VCPU         4
>  #define KVM_REQUEST_ARCH_BASE     8
> 
>  #define KVM_ARCH_REQ_FLAGS(nr, flags) ({ \
> @@ -278,6 +279,7 @@ struct kvm_vcpu {
>         struct kvm_run *run;
> 
>         struct swait_queue_head wq;
> +       struct wait_queue_head sync_wq;
>         struct pid __rcu *pid;
>         int sigset_active;
>         sigset_t sigset;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f744bc603c53..35216aeb0365 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -342,6 +342,7 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct 
> kvm *kvm, unsigned id)
>         vcpu->vcpu_id = id;
>         vcpu->pid = NULL;
>         init_swait_queue_head(&vcpu->wq);
> +       init_wait_queue_head(&vcpu->sync_wq);
>         kvm_async_pf_vcpu_init(vcpu);
> 
>         vcpu->pre_pcpu = -1;
> @@ -1316,9 +1317,20 @@ int kvm_set_memory_region(struct kvm *kvm,
>                           const struct kvm_userspace_memory_region *mem)
>  {
>         int r;
> +       unsigned int i;
> +       struct kvm_vcpu *vcpu;
> 
>         mutex_lock(&kvm->slots_lock);
> +
>         r = __kvm_set_memory_region(kvm, mem);
> +
> +       /* TBD: use arch-specific hooks; this won't compile on non-x86 */
> +       if ((mem->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
> +           (mem->flags & KVM_MEM_READONLY)) {
> +               kvm_for_each_vcpu(i, vcpu, kvm)
> +                   kvm_vcpu_sync(vcpu);
> +       }
> +

Oops, this block should definitely be after the unlock as you
suggested...

>         mutex_unlock(&kvm->slots_lock);
>         return r;
>  }
> @@ -2658,6 +2670,8 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
>                 goto out;
>         if (signal_pending(current))
>                 goto out;
> +       if (kvm_check_request(KVM_REQ_SYNC_VCPU, vcpu))
> +               goto out;
> 
>         ret = 0;
>  out:
> 8<-------------------------------------------
> 
> -- 
> Peter Xu

-- 
Peter Xu




reply via email to

[Prev in Thread] Current Thread [Next in Thread]