
Re: [PATCH] hw/rdma: Lock before destroy


From: Yuval Shaia
Subject: Re: [PATCH] hw/rdma: Lock before destroy
Date: Tue, 24 Mar 2020 13:25:36 +0200



On Tue, 24 Mar 2020 at 13:18, Marcel Apfelbaum <address@hidden> wrote:
Hi Peter, Yuval,

On 3/24/20 1:05 PM, Peter Maydell wrote:
> On Tue, 24 Mar 2020 at 10:54, Yuval Shaia <address@hidden> wrote:
>> To protect against the case where users of the protected_qlist are
>> still using the qlist, let's lock before destroying it.
>>
>> Reported-by: Coverity (CID 1421951)
>> Signed-off-by: Yuval Shaia <address@hidden>
>> ---
>>   hw/rdma/rdma_utils.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/hw/rdma/rdma_utils.c b/hw/rdma/rdma_utils.c
>> index 73f279104c..55779cbef6 100644
>> --- a/hw/rdma/rdma_utils.c
>> +++ b/hw/rdma/rdma_utils.c
>> @@ -63,6 +63,7 @@ void rdma_protected_qlist_init(RdmaProtectedQList *list)
>>   void rdma_protected_qlist_destroy(RdmaProtectedQList *list)
>>   {
>>       if (list->list) {
>> +        qemu_mutex_lock(&list->lock);
>>           qlist_destroy_obj(QOBJECT(list->list));
>>           qemu_mutex_destroy(&list->lock);
>>           list->list = NULL;
> Looking at the code a bit more closely, I don't think that taking
> the lock here helps. If there really could be another thread
> that might be calling eg rdma_protected_qlist_append_int64()
> at the same time that we're calling rdma_protected_qlist_destroy()
> then it's just as likely that the code calling destroy could
> finish just before the append starts: in that case the append
> will crash because the list has been freed and the mutex destroyed.
>
> So I think we require that the user of a protected-qlist
> ensures that there are no more users of it before it is
> destroyed (which is fairly normal semantics), and the code
> as it stands is correct (ie coverity false-positive).
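
To make the interleaving Peter describes concrete, here is a standalone
sketch of the failure mode. It uses plain pthreads rather than QEMU's
wrappers, and the struct and function shapes are assumptions modelled
loosely on hw/rdma/rdma_utils.c, not the exact code:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        pthread_mutex_t lock;
        int64_t slot;   /* stands in for the QList payload   */
        int alive;      /* stands in for list->list != NULL  */
    } ProtectedList;

    static ProtectedList pl = { PTHREAD_MUTEX_INITIALIZER, 0, 1 };

    static void *appender(void *arg)
    {
        (void)arg;
        /* If destroy() has already completed, this locks a destroyed
         * mutex and touches freed state: undefined behaviour, with or
         * without the extra lock inside destroy(). */
        pthread_mutex_lock(&pl.lock);
        pl.slot = 42;
        pthread_mutex_unlock(&pl.lock);
        return NULL;
    }

    static void destroy(void)
    {
        if (pl.alive) {
            pthread_mutex_lock(&pl.lock);    /* the patch adds this lock */
            pl.alive = 0;                    /* payload torn down here   */
            pthread_mutex_unlock(&pl.lock);
            pthread_mutex_destroy(&pl.lock); /* but once this runs, a
                                              * late appender crashes
                                              * regardless               */
        }
    }

    int main(void)
    {
        pthread_t t;

        destroy();                                /* destroy wins the race  */
        pthread_create(&t, NULL, appender, NULL); /* append starts too late */
        pthread_join(t, NULL);
        puts("reaching here is luck, not correctness");
        return 0;
    }

The extra lock only serializes an append that starts before destroy
takes the lock; it cannot help an append that begins after destroy has
already finished, which is exactly Peter's point.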

I agree it is a false positive for our case.
"rdma_protected_qlist_destroy" is called by "mad_fini" which in turn is
called by "rdma_backend_fini"
*after* the VM shutdown, at this point there is no active lock user.

As I already said, the current code makes sure this will not happen; however, it would be better if the API ensured this itself rather than trusting its callers.
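
For illustration, one hypothetical shape such an API could take (a
sketch in the style of the existing helpers; the function names mirror
the real ones for readability, but the bodies are assumptions and have
not been compiled against the tree): keep the mutex alive for the whole
lifetime of the containing object, and make every accessor check the
list pointer under the lock.

    void rdma_protected_qlist_append_int64(RdmaProtectedQList *list,
                                           int64_t value)
    {
        qemu_mutex_lock(&list->lock);
        if (list->list) {                     /* tolerate use-after-destroy */
            qlist_append_int(list->list, value);
        }
        qemu_mutex_unlock(&list->lock);
    }

    void rdma_protected_qlist_destroy(RdmaProtectedQList *list)
    {
        qemu_mutex_lock(&list->lock);
        if (list->list) {
            qlist_destroy_obj(QOBJECT(list->list));
            list->list = NULL;
        }
        qemu_mutex_unlock(&list->lock);
        /* Deliberately NOT calling qemu_mutex_destroy() here: the mutex
         * must outlive every potential user, e.g. until the device is
         * finalized. */
    }

Even this only narrows the window: if the mutex itself is ever torn
down while a user can still reach it, the race Peter describes
reappears, so callers quiescing first remains the real guarantee.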
 

Thanks,
Marcel

> thanks
> -- PMM

