Re: [Qemu-devel] [PATCH v3 20/46] ivshmem: simplify a bit the code


From: Claudio Fontana
Subject: Re: [Qemu-devel] [PATCH v3 20/46] ivshmem: simplify a bit the code
Date: Wed, 23 Sep 2015 14:18:12 +0200
User-agent: Mozilla/5.0 (Windows NT 6.1; rv:38.0) Gecko/20100101 Thunderbird/38.2.0

On 22.09.2015 16:56, Marc-André Lureau wrote:
> 
> 
> ----- Original Message -----
>> On 15.09.2015 18:07, address@hidden wrote:
>>> From: Marc-André Lureau <address@hidden>
>>>
>>> Use some more explicit variables to simplify the code.
>>>
>>> nth_eventfd variable is the current eventfd to be manipulated.
>>
>> Well, maybe a silly question, but then why not call it current_eventfd?
> 
> Either way, ok.
> 
> current_eventfd is the nth eventfd to be added :)
> 
>>
>>> Signed-off-by: Marc-André Lureau <address@hidden>
>>> ---
>>>  hw/misc/ivshmem.c | 26 ++++++++++++--------------
>>>  1 file changed, 12 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/hw/misc/ivshmem.c b/hw/misc/ivshmem.c
>>> index 1c98ec3..a60454f 100644
>>> --- a/hw/misc/ivshmem.c
>>> +++ b/hw/misc/ivshmem.c
>>> @@ -488,9 +488,10 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
>>>  {
>>>      IVShmemState *s = opaque;
>>>      int incoming_fd;
>>> -    int guest_max_eventfd;
>>> +    int nth_eventfd;
>>>      long incoming_posn;
>>>      Error *err = NULL;
>>> +    Peer *peer;
>>>  
>>>      if (!fifo_update_and_get(s, buf, size,
>>>                               &incoming_posn, sizeof(incoming_posn))) {
>>> @@ -517,6 +518,8 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
>>>          }
>>>      }
>>>  
>>> +    peer = &s->peers[incoming_posn];
>>> +
>>>      if (incoming_fd == -1) {
>>>          /* if posn is positive and unseen before then this is our posn*/
>>>          if (incoming_posn >= 0 && s->vm_id == -1) {
>>> @@ -564,27 +567,22 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
>>>          return;
>>>      }
>>>  
>>> -    /* each guest has an array of eventfds, and we keep track of how many
>>> -     * guests for each VM */
>>
>> You removed a few comments; do they no longer apply?
>> If so, should they be replaced with better ones describing how the new code
>> works in contrast with the previous version?
> 
> That comment didn't make much sense to me, especially the second part.
> What about:
> 
> "each peer has an associated array of eventfds, and we keep track of how many 
> eventfd received so far"

ok, "... of how many eventfds have been received so far".

> 
>>
>>> -    guest_max_eventfd = s->peers[incoming_posn].nb_eventfds;
>>> +    /* get a new eventfd */
>>> +    nth_eventfd = peer->nb_eventfds++;
>>>  
>>>      /* this is an eventfd for a particular guest VM */
>>>      IVSHMEM_DPRINTF("eventfds[%ld][%d] = %d\n", incoming_posn,
>>> -                    guest_max_eventfd, incoming_fd);
>>> -    event_notifier_init_fd(&s->peers[incoming_posn].eventfds[guest_max_eventfd],
>>> -                           incoming_fd);
>>> -
>>> -    /* increment count for particular guest */
>>> -    s->peers[incoming_posn].nb_eventfds++;
>>> +                    nth_eventfd, incoming_fd);
>>> +    event_notifier_init_fd(&peer->eventfds[nth_eventfd], incoming_fd);
>>>  
>>>      if (incoming_posn == s->vm_id) {
>>> -        s->eventfd_chr[guest_max_eventfd] = create_eventfd_chr_device(s,
>>> -                   &s->peers[s->vm_id].eventfds[guest_max_eventfd],
>>> -                   guest_max_eventfd);
>>> +        s->eventfd_chr[nth_eventfd] = create_eventfd_chr_device(s,
>>> +                   &s->peers[s->vm_id].eventfds[nth_eventfd],
>>> +                   nth_eventfd);
>>>      }
>>>  
>>>      if (ivshmem_has_feature(s, IVSHMEM_IOEVENTFD)) {
>>> -        ivshmem_add_eventfd(s, incoming_posn, guest_max_eventfd);
>>> +        ivshmem_add_eventfd(s, incoming_posn, nth_eventfd);
>>>      }
>>>  }
>>>  
>>>
>>
>> Ciao
>> C.
>>
>>
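
(For reference, a minimal standalone sketch of the bookkeeping discussed above,
using the comment wording agreed on in this thread. This is not QEMU code:
toy_peer, MAX_EVENTFDS and toy_store_eventfd are made-up names used only for
illustration. The point is that each peer keeps an array of eventfds plus a
count of how many eventfds have been received so far, and the post-incremented
count is the slot index for the eventfd that just arrived, i.e. what the patch
calls nth_eventfd.)

    /*
     * Standalone sketch, not QEMU code: toy_peer, MAX_EVENTFDS and
     * toy_store_eventfd are made-up names.  Each peer has an associated
     * array of eventfds, and we keep track of how many eventfds have been
     * received so far; the post-incremented count is the slot index for
     * the eventfd that just arrived.
     */
    #include <stdio.h>

    #define MAX_EVENTFDS 8

    typedef struct toy_peer {
        int eventfds[MAX_EVENTFDS];
        int nb_eventfds;                /* eventfds received so far */
    } toy_peer;

    static void toy_store_eventfd(toy_peer *peer, int incoming_fd)
    {
        if (peer->nb_eventfds >= MAX_EVENTFDS) {
            return;                     /* sketch only: ignore overflow */
        }

        /* nth_eventfd is the index of the eventfd being added: it equals
         * the number of eventfds received before this one. */
        int nth_eventfd = peer->nb_eventfds++;

        peer->eventfds[nth_eventfd] = incoming_fd;
        printf("eventfds[%d] = %d\n", nth_eventfd, incoming_fd);
    }

    int main(void)
    {
        toy_peer peer = { .nb_eventfds = 0 };

        toy_store_eventfd(&peer, 42);   /* stored in slot 0 */
        toy_store_eventfd(&peer, 43);   /* stored in slot 1 */
        return 0;
    }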
