
Re: [Qemu-devel] [PATCH v4 20/47] ivshmem: simplify a bit the code


From: Marc-André Lureau
Subject: Re: [Qemu-devel] [PATCH v4 20/47] ivshmem: simplify a bit the code
Date: Tue, 29 Sep 2015 09:06:17 -0400 (EDT)

Hi

----- Original Message -----
> On 24.09.2015 13:37, address@hidden wrote:
> > From: Marc-André Lureau <address@hidden>
> > 
> > Use some more explicit variables to simplify the code.
> > 
> > nth_eventfd variable is the current eventfd to be manipulated.
> 
> "the new_eventfd variable is the new eventfd to be manipulated".
> Although after the name change it is so obvious that maybe it could be
> removed from the commit message?

Sure. Would you give the Reviewed-by with (or without) that change?
 
> > 
> > Signed-off-by: Marc-André Lureau <address@hidden>
> > ---
> >  hw/misc/ivshmem.c | 28 ++++++++++++++--------------
> >  1 file changed, 14 insertions(+), 14 deletions(-)
> > 
> > diff --git a/hw/misc/ivshmem.c b/hw/misc/ivshmem.c
> > index 63bcf6c..c59d9ed 100644
> > --- a/hw/misc/ivshmem.c
> > +++ b/hw/misc/ivshmem.c
> > @@ -488,9 +488,10 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
> >  {
> >      IVShmemState *s = opaque;
> >      int incoming_fd;
> > -    int guest_max_eventfd;
> > +    int new_eventfd;
> >      long incoming_posn;
> >      Error *err = NULL;
> > +    Peer *peer;
> >  
> >      if (!fifo_update_and_get(s, buf, size,
> >                               &incoming_posn, sizeof(incoming_posn))) {
> > @@ -517,6 +518,8 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
> >          }
> >      }
> >  
> > +    peer = &s->peers[incoming_posn];
> > +
> >      if (incoming_fd == -1) {
> >          /* if posn is positive and unseen before then this is our posn*/
> >          if (incoming_posn >= 0 && s->vm_id == -1) {
> > @@ -564,27 +567,24 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
> >          return;
> >      }
> >  
> > -    /* each guest has an array of eventfds, and we keep track of how many
> > -     * guests for each VM */
> > -    guest_max_eventfd = s->peers[incoming_posn].nb_eventfds;
> > +    /* each peer has an associated array of eventfds, and we keep
> > +     * track of how many eventfds received so far */
> > +    /* get a new eventfd: */
> > +    new_eventfd = peer->nb_eventfds++;
> >  
> >      /* this is an eventfd for a particular guest VM */
> >      IVSHMEM_DPRINTF("eventfds[%ld][%d] = %d\n", incoming_posn,
> > -                    guest_max_eventfd, incoming_fd);
> > -    event_notifier_init_fd(&s->peers[incoming_posn].eventfds[guest_max_eventfd],
> > -                           incoming_fd);
> > -
> > -    /* increment count for particular guest */
> > -    s->peers[incoming_posn].nb_eventfds++;
> > +                    new_eventfd, incoming_fd);
> > +    event_notifier_init_fd(&peer->eventfds[new_eventfd], incoming_fd);
> >  
> >      if (incoming_posn == s->vm_id) {
> > -        s->eventfd_chr[guest_max_eventfd] = create_eventfd_chr_device(s,
> > -                   &s->peers[s->vm_id].eventfds[guest_max_eventfd],
> > -                   guest_max_eventfd);
> > +        s->eventfd_chr[new_eventfd] = create_eventfd_chr_device(s,
> > +                   &s->peers[s->vm_id].eventfds[new_eventfd],
> > +                   new_eventfd);
> >      }
> >  
> >      if (ivshmem_has_feature(s, IVSHMEM_IOEVENTFD)) {
> > -        ivshmem_add_eventfd(s, incoming_posn, guest_max_eventfd);
> > +        ivshmem_add_eventfd(s, incoming_posn, new_eventfd);
> >      }
> >  }
> >  
> > 
> 
> 


