From: 858585 jemmy
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH] migration/block: Avoid involve into blk_drain too frequently
Date: Wed, 15 Mar 2017 10:28:40 +0800

On Tue, Mar 14, 2017 at 11:12 PM, Eric Blake <address@hidden> wrote:
> On 03/14/2017 02:57 AM, address@hidden wrote:
>> From: Lidong Chen <address@hidden>
>>
>> Increase bmds->cur_dirty after submitting I/O, to reduce how often we fall
>> into blk_drain; this noticeably improves performance during block migration.
>
> Long line; please wrap your commit messages, preferably around 70 bytes
> since 'git log' displays them indented, and it is still nice to read
> them in an 80-column window.
>
> Do you have benchmark numbers to prove the impact of this patch, or even
> a formula for reproducing the benchmark testing?
>

The test results below are based on the current git master version.

The disk XML of the guest OS:
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/instanceimage/ab3ba978-c7a3-463d-a1d0-48649fb7df00/ab3ba978-c7a3-463d-a1d0-48649fb7df00_vda.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/domu/ab3ba978-c7a3-463d-a1d0-48649fb7df00_vdb'/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>

I ran fio inside the guest OS; the fio job file is below:
[randwrite]
ioengine=libaio
iodepth=128
bs=512
filename=/dev/vdb
rw=randwrite
direct=1
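
For reference, the job file above can be saved in the guest as, say,
randwrite.fio (the name is arbitrary) and run with:

    fio randwrite.fio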

When the VM is not being migrated, the IOPS is about 10.7K.

Then I used these commands to start migrating the virtual machine:

virsh migrate-setspeed ab3ba978-c7a3-463d-a1d0-48649fb7df00 1000
virsh migrate --live ab3ba978-c7a3-463d-a1d0-48649fb7df00 \
    --copy-storage-inc qemu+ssh://10.59.163.38/system
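
While the copy is in progress, one way (not part of the original test) to
watch the migration job from another shell is:

    virsh domjobinfo ab3ba978-c7a3-463d-a1d0-48649fb7df00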

Before applying this patch, during the block-dirty save phase the IOPS in
the guest OS is only 4.0K and the migration speed is about 505856 rsec/s.
After applying this patch, the IOPS in the guest OS during that phase is
9.5K and the migration speed is about 855756 rsec/s.

With an old QEMU version (1.2.0), which calls bdrv_drain_all() to wait for
AIO completion, the result before this patch is even worse: the main thread
is blocked in bdrv_drain_all() for a long time, and VNC becomes very slow
to respond. This problem is only obvious when the migrate speed is set to
a large value.
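
To make the mechanism concrete, below is a small standalone simulation
(my own sketch, not QEMU code) of the cur_dirty bookkeeping in
mig_save_device_dirty(). Without the two lines added by the patch quoted
below, every call restarts at the chunk whose read was just submitted,
finds its AIO still in flight, and has to drain; with them, the next
call skips straight past it.

/* Toy model of the cur_dirty bookkeeping in mig_save_device_dirty().
 * Illustrative sketch only: one "chunk" stands in for
 * BDRV_SECTORS_PER_DIRTY_CHUNK sectors, inflight[] stands in for
 * bmds_aio_inflight(), and "drain" stands in for blk_drain(). */
#include <stdbool.h>
#include <stdio.h>

#define NUM_CHUNKS 64

static bool dirty[NUM_CHUNKS];
static bool inflight[NUM_CHUNKS];

/* One call of the dirty-save function: scan from *cur_dirty, submit
 * async I/O for at most one dirty chunk, and count how often we must
 * "drain" because the chunk under the cursor still has I/O in flight. */
static int save_dirty_pass(int *cur_dirty, bool patch_applied)
{
    int drains = 0;

    for (int chunk = *cur_dirty; chunk < NUM_CHUNKS;) {
        if (inflight[chunk]) {
            drains++;                 /* blk_drain() in the real code */
            inflight[chunk] = false;  /* draining completes the I/O */
        }
        if (dirty[chunk]) {
            dirty[chunk] = false;     /* bdrv_reset_dirty_bitmap() */
            inflight[chunk] = true;   /* async read submitted */
            if (patch_applied) {
                chunk++;              /* sector += nr_sectors;     */
                *cur_dirty = chunk;   /* bmds->cur_dirty = sector; */
            }
            return drains;            /* the "break" in the real loop */
        }
        chunk++;
        *cur_dirty = chunk;
    }
    return drains;
}

static int run(bool patch_applied)
{
    int cur_dirty = 0, total_drains = 0;

    for (int i = 0; i < NUM_CHUNKS; i++) {
        dirty[i] = true;
        inflight[i] = false;
    }
    while (cur_dirty < NUM_CHUNKS) {
        total_drains += save_dirty_pass(&cur_dirty, patch_applied);
    }
    return total_drains;
}

int main(void)
{
    printf("drains without the patch: %d\n", run(false));
    printf("drains with the patch:    %d\n", run(true));
    return 0;
}

Compiled and run, this reports one drain per dirty chunk without the
patch and none with it, which matches the direction of the IOPS numbers
above.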

>>
>> Signed-off-by: Lidong Chen <address@hidden>
>> ---
>>  migration/block.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/migration/block.c b/migration/block.c
>> index 6741228..f059cca 100644
>> --- a/migration/block.c
>> +++ b/migration/block.c
>> @@ -576,6 +576,8 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
>>              }
>>
>>              bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, sector, nr_sectors);
>> +            sector += nr_sectors;
>> +            bmds->cur_dirty = sector;
>>              break;
>>          }
>>          sector += BDRV_SECTORS_PER_DIRTY_CHUNK;
>>
>
> --
> Eric Blake   eblake redhat com    +1-919-301-3266
> Libvirt virtualization library http://libvirt.org
>


