From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH v2 5/8] multifd: Be flexible about packet size
Date: Wed, 27 Feb 2019 12:06:55 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/26.1 (gnu/linux)

"Dr. David Alan Gilbert" <address@hidden> wrote:
> * Juan Quintela (address@hidden) wrote:
>> This way we can change the packet size in the future and everything
>> will work.  We choose an arbitrary big number (100 times configured
>> size) as a limit about how big we will reallocate.
>> 
>> Signed-off-by: Juan Quintela <address@hidden>
>> ---
>>  migration/ram.c | 24 ++++++++++++++++++------
>>  1 file changed, 18 insertions(+), 6 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index e22d02760b..75a8fc21f8 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -723,13 +723,13 @@ static void multifd_pages_clear(MultiFDPages_t *pages)
>>  static void multifd_send_fill_packet(MultiFDSendParams *p)
>>  {
>>      MultiFDPacket_t *packet = p->packet;
>> -    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>> +    uint32_t page_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>>      int i;
>>  
>>      packet->magic = cpu_to_be32(MULTIFD_MAGIC);
>>      packet->version = cpu_to_be32(MULTIFD_VERSION);
>>      packet->flags = cpu_to_be32(p->flags);
>> -    packet->pages_alloc = cpu_to_be32(page_count);
>> +    packet->pages_alloc = cpu_to_be32(page_max);
>>      packet->pages_used = cpu_to_be32(p->pages->used);
>>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>>      packet->packet_num = cpu_to_be64(p->packet_num);
>> @@ -746,7 +746,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>>  static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>>  {
>>      MultiFDPacket_t *packet = p->packet;
>> -    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>> +    uint32_t pages_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>>      RAMBlock *block;
>>      int i;
>>  
>> @@ -769,12 +769,24 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>>      p->flags = be32_to_cpu(packet->flags);
>>  
>>      packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
>> -    if (packet->pages_alloc > page_count) {
>> +    /*
>> +     * If we received a packet that is 100 times bigger than expected,
>> +     * just stop migration.  100 is an arbitrary magic number.
>> +     */
>> +    if (packet->pages_alloc > pages_max * 100) {
>>          error_setg(errp, "multifd: received packet "
>> -                   "with size %d and expected maximum size %d",
>> -                   packet->pages_alloc, page_count) ;
>> +                   "with size %d and expected size %d",
>> +                   packet->pages_alloc, pages_max) ;
>
> Should that end with pages_max * 100 ?

Not sure.

The default *allocated* size is pages_max.  If we receive bigger
packets, we grow the allocation, but only up to a limit (an arbitrary
one; I am open to other limits).

So, what multifd expects here is pages_max.  But it will cope with
anything smaller than pages_max * 100.  So, what should I put in the
error message?  pages_max * 100 or pages_max?

It seems that pages_max * 100 is easier for you to understand, and as
I have no preference either way, I am just changing it.
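
To make the policy concrete, here is a minimal standalone sketch (not
the patch itself; classify_pages_alloc and packet_size_action are
made-up names, and pages_max stands for
MULTIFD_PACKET_SIZE / qemu_target_page_size()):

    #include <stdint.h>

    /* Hypothetical helper: decide what to do with an incoming
     * pages_alloc value, given the default allocation pages_max. */
    enum packet_size_action { PACKET_OK, PACKET_REALLOC, PACKET_REJECT };

    static enum packet_size_action
    classify_pages_alloc(uint32_t pages_alloc, uint32_t pages_max)
    {
        if (pages_alloc > pages_max * 100) {
            /* More than 100 times the expected size: stop migration. */
            return PACKET_REJECT;
        }
        if (pages_alloc > pages_max) {
            /* Bigger than the default allocation, but within the
             * arbitrary limit: reallocate the receive buffer. */
            return PACKET_REALLOC;
        }
        /* Fits in the default allocation: nothing to do. */
        return PACKET_OK;
    }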

>>          return -1;
>>      }
>> +    /*
>> +     * We received a packet that is bigger than expected but inside
>> +     * reasonable limits (see previous comment).  Just reallocate.
>> +     */
>> +    if (packet->pages_alloc > p->pages->allocated) {
>> +        multifd_pages_clear(p->pages);
>> +        p->pages = multifd_pages_init(packet->pages_alloc);
>> +    }
>>  
>>      p->pages->used = be32_to_cpu(packet->pages_used);
>>      if (p->pages->used > packet->pages_alloc) {
>
> Other than that error message, I think it's OK, although the names get
> very confusing (max, alloc, allocated)

I am open to suggestions.  I just ran out of names :-(

>
>
> Reviewed-by: Dr. David Alan Gilbert <address@hidden>

Thanks.


