
From: ChenLiang
Subject: Re: [Qemu-devel] [v2 2/2] migration: Implement multiple compression threads
Date: Fri, 21 Nov 2014 16:38:44 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20120327 Thunderbird/11.0.1

On 2014/11/21 16:17, ChenLiang wrote:

> On 2014/11/21 15:38, Li, Liang Z wrote:
> 
>>>> +int migrate_compress_threads(void)
>>>> +{
>>>> +    MigrationState *s;
>>>> +
>>>> +    s = migrate_get_current();
>>>> +
>>>> +    return s->compress_thread_count;
>>>> +}
>>>> +
>>>>  int migrate_use_xbzrle(void)
>>>>  {
>>>>      MigrationState *s;
>>>> @@ -697,4 +795,5 @@ void migrate_fd_connect(MigrationState *s)
>>>>  
>>>>      qemu_thread_create(&s->thread, "migration", migration_thread, s,
>>>>                         QEMU_THREAD_JOINABLE);
>>>> +    migrate_compress_threads_create(s);
>>
>>
>>> Don't create the compression threads unconditionally.
>>> It may be better to do:
>>
>>> if (!migrate_use_xbzrle()) {
>>>     migrate_compress_threads_create(s);
>>> }
>>
>> Thanks for your comments. In fact, multiple thread compression can
>> co-work with XBZRLE, which can help to accelerate live migration.
> 
> 
> Hmm, multiple thread compression can't co-work with XBZRLE. XBZRLE needs to
> guarantee that the cache at the source is the same as at the destination,
> but I don't see that in the code below:
> 
> +    /* XBZRLE overflow or normal page */
> +    if (bytes_sent == -1) {
> +        bytes_sent = migrate_save_block_hdr(&param->migbuf, block,
> +            offset, cont, RAM_SAVE_FLAG_COMPRESS_PAGE);
> +        blen = migrate_qemu_add_compress(&param->migbuf, p,
> +            TARGET_PAGE_SIZE, migrate_compress_level());
> +        bytes_sent += blen;
> +        atomic_inc(&acct_info.norm_pages);
> 
> The code doesn't update the XBZRLE cache on the source side.
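> 
> To illustrate (this is only a sketch, and the helper name
> xbzrle_cache_update() is made up here, it is not from the patch): whenever a
> page is sent in full through the compression path while XBZRLE is enabled,
> the source-side cache would also have to be refreshed with the same
> contents, otherwise a later XBZRLE delta is encoded against a stale cached
> copy and the destination applies it to the newer page it already received.
> Something like:
> 
>     if (migrate_use_xbzrle()) {
>         /* keep the source cache in sync with what the destination holds */
>         xbzrle_cache_update(block->offset + offset, p);
>     }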
> 
>>
>>> BTW, this patch is too big to review. Splitting it into several smaller
>>> patches would be welcome.
>>
>> I am doing it.
>>