Subject: Performance issue with qcow2/raid
From: Jose R. Ziviani
Date: Thu, 27 May 2021 19:53:24 -0300
Hello team,
I'm currently investigating a performance regression detected by the iozone
filesystem benchmark (https://www.iozone.org/).
Basically, if I format a QCOW2 image with an XFS filesystem in my guest and
run iozone, I get the following result:
$ mkfs.xfs -f /dev/xvdb1 && \
  mount -t xfs /dev/xvdb1 /mnt && \
  /opt/iozone/bin/iozone -a -e -s 16777216 -y 4 -q 8 -i 0 -i 1 -f /mnt/iozone.dat

        kB  block len    read  reread
  16777216         4K  354790  348796
  16777216         8K  362356  364818
However, if I revert commit 46cd1e8a47 ("qcow2: Skip copy-on-write when
allocating a zero cluster") and run the same test, I see a huge improvement:
$ mkfs.xfs -f /dev/xvdb1 && \
  mount -t xfs /dev/xvdb1 /mnt && \
  /opt/iozone/bin/iozone -a -e -s 16777216 -y 4 -q 8 -i 0 -i 1 -f /mnt/iozone.dat

        kB  block len    read  reread
  16777216         4K  524067  560057
  16777216         8K  538661  537004
Note that if I run iozone without re-formatting the disk, I get results
similar to those of the last formatting. In other words, if my current QEMU
executable doesn't have commit 46cd1e8a47 and I format the disk, iozone will
continue showing good results even after I reboot into a QEMU build that
includes that commit.
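Since the good results persist across reboots, it might help to compare the
image's on-disk cluster allocation state right after each formatting run;
a minimal sketch with qemu-img (the image path below is a placeholder, use
whatever backs /dev/xvdb1):

```shell
# Placeholder path; substitute the qcow2 file backing /dev/xvdb1.
IMG=/var/lib/xen/images/guest-disk.qcow2

# Show which guest ranges map to allocated, zero, or unallocated clusters.
qemu-img map --output=json "$IMG" | head -n 20

# Report metadata consistency and cluster allocation statistics.
qemu-img check "$IMG"
```

If the two QEMU builds leave the image with different mixes of zero vs.
data clusters after mkfs, that could explain why the behavior sticks to
the formatting rather than to the running binary.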
My system has a RAID controller[1] and runs QEMU/Xen. I'm not able to
reproduce this behavior on other systems.
Do you have any suggestions to help debug this? What additional information
would help to understand it better?
My next step is to use perf, but I'd appreciate any hints on how to measure
qcow2 performance efficiently.
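For the perf run, one possible starting point is sampling the QEMU process
while iozone runs in the guest; a rough sketch (the process name and the
30-second window are assumptions, adjust for your Xen setup):

```shell
# Sample call graphs of the running QEMU process for 30 seconds.
# The binary name varies by setup (e.g. qemu-system-x86_64, qemu-dm).
perf record -g -p "$(pidof qemu-system-x86_64)" -- sleep 30

# Summarize where time was spent; qcow2_* and copy-on-write-related
# symbols would be the ones to compare between the two builds.
perf report --sort symbol
```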
[1]
# lspci -vv | grep -i raid
1a:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS-3 3108 [Invader] (rev 02)
Kernel driver in use: megaraid_sas
Kernel modules: megaraid_sas
Thank you very much!
Jose R. Ziviani