From: Max Reitz
Subject: Re: [Qemu-devel] [PATCH v5 0/8] Add metadata overlap checks
Date: Thu, 19 Sep 2013 17:07:56 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130805 Thunderbird/17.0.8

Hi,

I've done some benchmarks regarding this series now. In particular, I've created a 7G image, installed Arch Linux on a partition in the first 2G, and created an empty ext4 partition for benchmarking in the remaining 5G.

My first test consisted of running bonnie++ ("bonnie++ -d [scratch partition] -s 4g -n 0 -x 16 -Z /dev/urandom", i.e., 4G files for the I/O performance tests, no file creation tests, repeated 16 times) using different metadata overlap check modes: none, constant (only those checks which can be performed in constant time) and cached (the current default). The reason I didn't test "all" (perform all overlap checks) is that it will only make a difference compared to "cached" if there are snapshots (right now, at least). I put the underlying image file in /tmp (tmpfs) for minimal true I/O latency, which maximizes the relative overhead of the checks.
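For reference, the setup looked roughly like this; the paths, the guest invocation and the exact overlap-check option syntax are illustrative sketches, not the literal commands:

    # Create the 7G test image on tmpfs:
    qemu-img create -f qcow2 /tmp/test.qcow2 7G

    # Boot the guest with one of the overlap check modes under test
    # (none / constant / cached); the overlap-check drive option is the
    # one introduced by this series:
    qemu-system-x86_64 -enable-kvm \
        -drive file=/tmp/test.qcow2,format=qcow2,overlap-check=cached

    # Inside the guest, on the scratch ext4 partition:
    bonnie++ -d [scratch partition] -s 4g -n 0 -x 16 -Z /dev/urandom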

The second test was basically the same, except that I took 100 (internal) snapshots beforehand and used 2G files instead of 4G. In this case, I also tested the "all" scenario.
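The snapshots were created along these lines (the image path is again illustrative):

    # Take 100 internal snapshots before the run; qemu-img snapshot -c
    # creates an internal snapshot inside the qcow2 image.
    for i in $(seq 1 100); do
        qemu-img snapshot -c "snap$i" /tmp/test.qcow2
    done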

I performed the third test on an HDD instead of tmpfs to maximize the overhead of the non-cached overlap checks (that is, currently, the checks against inactive L2 tables), which require disk I/O. I used -drive cache=none for this test (in contrast to the other tests, which ran on a tmpfs anyway). Also, I used 256M files, since 2G just took too much time. :)
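That run looked roughly like this (path illustrative; cache=none bypasses the host page cache, and overlap-check=all additionally checks the inactive L2 tables):

    qemu-system-x86_64 -enable-kvm \
        -drive file=/mnt/hdd/test.qcow2,format=qcow2,cache=none,overlap-check=all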

As far as I understand it, the raw I/O speed (the duration of an I/O operation) should be pretty much the same in all scenarios; the latency is the value in question, since the overlap checks should affect only the latency.


Basically, I didn't get any results indicating a performance hit. The raw HDD test data sometimes resulted in a standard deviation greater than the average itself (!), so I removed some outliers there. The averages rarely differ by more than each other's standard deviation, and when they do, there is often no trend at all. The only real trend exceeding the standard deviation is for block writes in my first test – however, that trend is negative, indicating the overlap checks actually sped things up (which is obviously counterintuitive). The difference, however, is below 1 % anyway.

The only major differences visible (exceeding the combined standard deviation of the two values in question) occurred during the HDD test: the duration of putc, block writes and rewrites was much greater (about 10 to 20 %; bear in mind the standard deviation is of that magnitude as well) for "constant" and "cached" than for "none" and "all". On the other hand, the putc and rewrite latency was much better for "constant" and "cached" than for "none" and "all". That the durations differ so greatly is a sign to me that the data from this test is not really usable (since I think they should be the same across all scenarios). If we were to ignore that, and the fact that "none" actually showed higher latency than "all" for both affected operations, we could conclude that "all" is really much slower than "constant" or "cached". But then again, the block write latency was even smaller for "all" than for "cached" and "constant", so I'd just ignore these benchmarks (for the HDD).


All in all, I don't see any significant performance difference when benchmarking on a tmpfs (which should maximize the overhead of "constant" and "cached"), and the data from my HDD benchmarks is probably statistically unusable. The only comparison for which the HDD data would have been useful is "all" against "cached", but since "all" will not be the default (and anyone explicitly enabling it is in fact responsible for the slower I/O himself), it isn't actually that important anyway.


I've attached a CSV file containing the edited results, that is, the averages and standard deviations for the tests performed by bonnie++, excluding some outliers from the HDD benchmark; I think the values are given in microseconds.
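For reference, the aggregation behind that CSV is nothing more than per-column means and standard deviations; a minimal sketch follows (the column index and file layout are placeholders, not the actual format of res.csv):

    # Mean and (population) standard deviation of one comma-separated
    # column; col=3 and the file name are placeholders.
    awk -F, -v col=3 '
        { n++; sum += $col; sumsq += $col * $col }
        END {
            mean = sum / n
            sd = sqrt(sumsq / n - mean * mean)
            printf "n=%d mean=%g sd=%g\n", n, mean, sd
        }' res.csv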


Max

Attachment: res.csv
Description: Text Data

