qemu-block

From: Daniel Brodsky
Subject: Re: [PATCH v2 2/2] lockable: replaced locks with lock guard macros where appropriate
Date: Fri, 20 Mar 2020 05:19:57 -0700

On Fri, Mar 20, 2020 at 5:06 AM Paolo Bonzini <address@hidden> wrote:
>
> On 20/03/20 00:34, address@hidden wrote:
> > index 682abd8e09..89f8a656a4 100644
> > --- a/block/iscsi.c
> > +++ b/block/iscsi.c
> > @@ -1086,7 +1086,7 @@ static BlockAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
> >      acb->task->expxferlen = acb->ioh->dxfer_len;
> >
> >      data.size = 0;
> > -    qemu_mutex_lock(&iscsilun->mutex);
> > +    QEMU_LOCK_GUARD(&iscsilun->mutex);
> >      if (acb->task->xfer_dir == SCSI_XFER_WRITE) {
> >          if (acb->ioh->iovec_count == 0) {
> >              data.data = acb->ioh->dxferp;
> > @@ -1102,7 +1102,6 @@ static BlockAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
> >                                   iscsi_aio_ioctl_cb,
> >                                   (data.size > 0) ? &data : NULL,
> >                                   acb) != 0) {
> > -        qemu_mutex_unlock(&iscsilun->mutex);
> >          scsi_free_scsi_task(acb->task);
> >          qemu_aio_unref(acb);
> >          return NULL;
>
> Not exactly the same; it should be okay, but it should also be noted in the
> changelog.

I'm going to drop this change in the next version; I don't want this patch to
include cases with possible side effects, since I already skipped other ones
like this.
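To make the difference concrete, here is a minimal, self-contained sketch (not
the QEMU code; LOCK_GUARD below is an illustrative stand-in for
QEMU_LOCK_GUARD, built on the compiler's cleanup attribute in a similar
spirit). With a scoped guard, whatever the early-return path does before
returning now runs while the mutex is still held, and the unlock happens only
when the scope exits:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void guard_cleanup(pthread_mutex_t **m)
{
    pthread_mutex_unlock(*m);
    printf("guard: mutex released at end of scope\n");
}

/* Illustrative stand-in for QEMU_LOCK_GUARD(): unlocks when the scope exits. */
#define LOCK_GUARD(m) \
    pthread_mutex_t *lock_guard_ \
        __attribute__((cleanup(guard_cleanup), unused)) = \
        (pthread_mutex_lock(m), (m))

static int error_path_with_guard(void)
{
    LOCK_GUARD(&lock);

    /*
     * In the original iscsi_aio_ioctl() error path the explicit
     * qemu_mutex_unlock() came before scsi_free_scsi_task()/qemu_aio_unref();
     * with the guard, that cleanup runs while the mutex is still held and the
     * unlock happens only on return.
     */
    printf("freeing resources (mutex still held)\n");
    return -1;                  /* the guard releases the mutex here */
}

int main(void)
{
    error_path_with_guard();
    return 0;
}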

> >  void cpu_list_remove(CPUState *cpu)
> >  {
> > -    qemu_mutex_lock(&qemu_cpu_list_lock);
> > +    QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
> >      if (!QTAILQ_IN_USE(cpu, node)) {
> >          /* there is nothing to undo since cpu_exec_init() hasn't been called */
> >          qemu_mutex_unlock(&qemu_cpu_list_lock);
>
>
> Missed unlock.
>
> Otherwise looks good.
>
> Paolo
>
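For the cpu_list_remove() hunk, the fix is presumably just dropping the
leftover explicit unlock, since the guard releases the mutex whenever the
function returns. Roughly (only the lines visible in the hunk above are shown;
the rest of the function is elided):

void cpu_list_remove(CPUState *cpu)
{
    QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
    if (!QTAILQ_IN_USE(cpu, node)) {
        /* there is nothing to undo since cpu_exec_init() hasn't been called */
        return;   /* the guard releases qemu_cpu_list_lock here */
    }
    /* ... rest of the function unchanged; the guard unlocks on every return ... */
}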
Thanks for the review; I'll fix the issues you pointed out in the next version.

Daniel


