Subject: Re: [Qemu-discuss] qemu vm big network latency when met heavy io
Date: Tue, 7 Jan 2014 05:37:48 +0000
Yes, I've seen this before. Which version of Ceph are you using? The problem was fixed in qemu 1.5 (async I/O for rbd); I see you run 1.7, so that should not be the issue. It may also help to enable the writeback cache.
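Trying the writeback cache only requires changing the cache attribute on the driver element of the disk definition quoted below; a minimal sketch (the rest of the disk XML stays as it is):

```xml
<!-- With qemu's rbd driver, cache='writeback' turns on the librbd
     client-side cache in writeback mode instead of cache='none'. -->
<driver name='qemu' type='raw' cache='writeback'/>
```

Whether this is acceptable depends on your durability requirements, since writes are then acknowledged from the client-side cache before reaching the OSDs.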
From: qemu-discuss-bounces+address@hidden [qemu-discuss-bounces+address@hidden] on behalf of Alan Ye address@hidden
Sent: Tuesday, 7 January 2014 03:38
Subject: [Qemu-discuss] qemu vm big network latency when met heavy io
There is a problem when I use Ceph RBD images as qemu storage. I launch 4 virtual machines and start a 5G random-write test in all of them at the same time. Under such heavy I/O the network to the virtual machines is almost unusable; the latency is extremely high.
I also tested another setup: when I use the 'virsh attach-device' command to attach an rbd image that is mapped on my host machine (the one running the virtual machines), the problem does not show up.
So I think this must be a problem in qemu's rbd driver.
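For comparison, the attach-device path uses a plain block-device disk rather than qemu's rbd driver; a sketch of what that definition looks like (the device path /dev/rbd0 is illustrative, and the image must first be mapped on the host):

```xml
<!-- Sketch of the kernel-rbd attachment. The image is first mapped on
     the host with `rbd map qemu/rbd-vm4`, which exposes it as a kernel
     block device, e.g. /dev/rbd0; that device is then attached to the
     guest with `virsh attach-device <domain> <this-file>`. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/rbd0'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

In this setup the Ceph I/O is handled by the host kernel's rbd client rather than by librbd inside the qemu process.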
Here is my testing environment:
# virsh version
Compiled against library: libvirt 1.2.0
Using library: libvirt 1.2.0
Using API: QEMU 1.2.0
Running hypervisor: QEMU 1.7.0
In the VM's XML, I define the rbd disk like this:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='qemu/rbd-vm4'>
    <host name='10.120.111.111' port='6789'/>
  </source>
  <secret type='ceph' uuid='38b66185-4117-47a6-90bd-64111c3fc5d2'/>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
Testing tool: fio
I/O depth: 32
I/O engine: libaio
Direct I/O: enabled
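A fio job file matching those parameters would look roughly like this (the job name is made up, and the filename is illustrative; inside the guest it would point at the attached virtio disk):

```ini
; Sketch of the 5G random-write workload described above
[randwrite-test]
rw=randwrite
size=5g
ioengine=libaio
iodepth=32
direct=1
filename=/dev/vdb
```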
Has anyone else run into this problem?
Alan Ye (address@hidden)