From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH v5 13/17] migration: Create thread infrastructure for multifd recv side
Date: Tue, 08 Aug 2017 13:51:11 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)

"Dr. David Alan Gilbert" <address@hidden> wrote:
> * Juan Quintela (address@hidden) wrote:
>> We make the locking and the transfer of information specific, even if we
>> are still receiving things through the main thread.
>> 
>> Signed-off-by: Juan Quintela <address@hidden>
>> ---
>>  migration/ram.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++-------
>>  1 file changed, 60 insertions(+), 8 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index ac0742f..49c4880 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -49,6 +49,7 @@
>>  #include "migration/colo.h"
>>  #include "sysemu/sysemu.h"
>>  #include "qemu/uuid.h"
>> +#include "qemu/iov.h"
>>  
>>  /***********************************************************/
>>  /* ram save/restore */
>> @@ -527,7 +528,7 @@ int multifd_save_setup(void)
>>      return 0;
>>  }
>>  
>> -static int multifd_send_page(uint8_t *address)
>> +static uint16_t multifd_send_page(uint8_t *address, bool last_page)
>>  {
>>      int i, j;
>>      MultiFDSendParams *p = NULL; /* make happy gcc */
>> @@ -543,8 +544,10 @@ static int multifd_send_page(uint8_t *address)
>>      pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
>>      pages.num++;
>>  
>> -    if (pages.num < (pages.size - 1)) {
>> -        return UINT16_MAX;
>> +    if (!last_page) {
>> +        if (pages.num < (pages.size - 1)) {
>> +            return UINT16_MAX;
>> +        }
>>      }
>
> This doesn't feel like it should be in a recv patch.


I will change it; we don't need it until this point :p

>
>>      qemu_sem_wait(&multifd_send_state->sem);
>> @@ -572,12 +575,17 @@ static int multifd_send_page(uint8_t *address)
>>  }
>>  
>>  struct MultiFDRecvParams {
>> +    /* not changed */
>>      uint8_t id;
>>      QemuThread thread;
>>      QIOChannel *c;
>> +    QemuSemaphore ready;
>>      QemuSemaphore sem;
>>      QemuMutex mutex;
>> +    /* protected by param mutex */
>>      bool quit;
>> +    multifd_pages_t pages;
>> +    bool done;
>>  };
>>  typedef struct MultiFDRecvParams MultiFDRecvParams;
>
> The params between Send and Recv keep looking very similar; I wonder
> if we can share them.

They use different parameters.  We could share them, but I am not sure
it is worth the trouble.

>>   * save_page_header: write page header to wire
>>   *
>> @@ -1155,7 +1210,7 @@ static int ram_multifd_page(RAMState *rs, 
>> PageSearchStatus *pss,
>>          ram_counters.transferred +=
>>              save_page_header(rs, rs->f, block,
>>                               offset | RAM_SAVE_FLAG_MULTIFD_PAGE);
>> -        fd_num = multifd_send_page(p);
>> +        fd_num = multifd_send_page(p, rs->migration_dirty_pages == 1);
>
> I think that belongs in the previous patch and probably answers one of
> my questions.

OK, I will change that.


