From: Xiao Guangrong
Subject: Re: [Qemu-devel] [PATCH v2 3/8] migration: show the statistics of compression
Date: Tue, 24 Jul 2018 15:37:24 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.8.0



On 07/23/2018 05:15 PM, Peter Xu wrote:
On Mon, Jul 23, 2018 at 04:40:29PM +0800, Xiao Guangrong wrote:


On 07/23/2018 04:05 PM, Peter Xu wrote:
On Mon, Jul 23, 2018 at 03:39:18PM +0800, Xiao Guangrong wrote:


On 07/23/2018 12:36 PM, Peter Xu wrote:
On Thu, Jul 19, 2018 at 08:15:15PM +0800, address@hidden wrote:
@@ -1597,6 +1608,24 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
                rs->xbzrle_cache_miss_prev) / iter_count;
            rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
        }
+
+    if (migrate_use_compression()) {
+        uint64_t comp_pages;
+
+        compression_counters.busy_rate = (double)(compression_counters.busy -
+            rs->compress_thread_busy_prev) / iter_count;

Here I'm not sure it's correct...

"iter_count" stands for ramstate.iterations.  It's increased per
ram_find_and_save_block(), so IMHO it might contain multiple guest

ram_find_and_save_block() returns once a page is successfully posted, and
it only posts one page at a time.

ram_find_and_save_block() calls ram_save_host_page(), and we should be
sending multiple guest pages in ram_save_host_page() if the host page
is a huge page?
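
To see the mismatch concretely, here is a tiny self-contained sketch; the
2 MiB / 4 KiB sizes and the loop shape are illustrative stand-ins, not
QEMU's actual code:

    #include <stdio.h>

    /* Hypothetical sizes: a 2 MiB host huge page backing 4 KiB guest pages. */
    #define HOST_PAGE_SIZE   (2 * 1024 * 1024)
    #define TARGET_PAGE_SIZE (4 * 1024)

    int main(void)
    {
        unsigned long iterations = 0;  /* bumped once per ram_find_and_save_block() */
        unsigned long guest_pages = 0; /* what the busy/miss rates actually divide by */

        /* One call walks a whole host page, as ram_save_host_page() does. */
        iterations++;
        for (long off = 0; off < HOST_PAGE_SIZE; off += TARGET_PAGE_SIZE) {
            guest_pages++;             /* one guest page posted per step */
        }

        printf("iterations=%lu, guest pages=%lu (%lux undercount)\n",
               iterations, guest_pages, guest_pages / iterations);
        return 0;
    }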


You're right, thank you for pointing it out.

So, how about introducing a field, posted_pages, into RAMState that is used
to track the total number of pages posted out?

Then we will use this field to replace 'iter_count' for compression, and use
'RAMState.posted_pages - ram_counters.duplicate' to calculate
xbzrle_cache_miss, as zero pages are not handled by xbzrle.

Or introduce a new function, total_posted_pages, which returns the
sum of all page counts:

    static uint64_t total_posted_pages(void)
    {
        return ram_counters.normal + ram_counters.duplicate +
               compression_counters.pages + xbzrle_counters.pages;
    }

but that would add a bit more complexity...

If below [1] is wrong too, then I'm thinking whether we could just
move the rs->iterations++ from ram_save_iterate() loop to
ram_save_host_page() loop, then we possibly fix both places...


Besides that, even if we fix iterations, xbzrle is still painful, as
zero pages should not be counted in the cache miss; i.e., xbzrle does
not handle zero pages at all.

Anyway, fixing iterations is a good start. :)

After all I don't see any other usages of rs->iterations so it seems
fine.  Dave/Juan?



pages.  However compression_counters.busy should be per guest page.


Actually, it's derived from xbzrle_counters.cache_miss_rate:
          xbzrle_counters.cache_miss_rate = (double)(xbzrle_counters.cache_miss -
              rs->xbzrle_cache_miss_prev) / iter_count;

Then this is suspicious to me too...

[1]



+        rs->compress_thread_busy_prev = compression_counters.busy;
+
+        comp_pages = compression_counters.pages - rs->compress_pages_prev;
+        if (comp_pages) {
+            compression_counters.compression_rate =
+                (double)(compression_counters.reduced_size -
+                rs->compress_reduced_size_prev) /
+                (comp_pages * TARGET_PAGE_SIZE);
+            rs->compress_pages_prev = compression_counters.pages;
+            rs->compress_reduced_size_prev = compression_counters.reduced_size;
+        }
+    }
    }
    static void migration_bitmap_sync(RAMState *rs)
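
As a concrete illustration of the two rates computed in the hunk above,
a small self-contained example; the variable names mirror the patch, but
the per-interval deltas are invented:

    #include <stdio.h>

    #define TARGET_PAGE_SIZE 4096

    int main(void)
    {
        /* Made-up per-interval deltas. */
        double busy_delta = 50;      /* compression_counters.busy - busy_prev      */
        double iter_count = 200;     /* the denominator under discussion           */
        double comp_pages = 100;     /* compression_counters.pages - pages_prev    */
        double reduced    = 300000;  /* reduced_size - reduced_size_prev, in bytes */

        double busy_rate = busy_delta / iter_count;
        double compression_rate = reduced / (comp_pages * TARGET_PAGE_SIZE);

        printf("busy_rate=%.2f compression_rate=%.2f\n",
               busy_rate, compression_rate); /* 0.25 and ~0.73 */
        return 0;
    }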
@@ -1872,6 +1901,9 @@ static void flush_compressed_data(RAMState *rs)
            qemu_mutex_lock(&comp_param[idx].mutex);
            if (!comp_param[idx].quit) {
                len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
+            /* 8 means a header with RAM_SAVE_FLAG_CONTINUE. */
+            compression_counters.reduced_size += TARGET_PAGE_SIZE - len + 8;

I would agree with Dave here - why do we store the "reduced size" instead
of the size of the compressed data (which I think should be len - 8)?


len - 8 is the size of the data after compression rather than the amount
saved by compression, so it is not straightforward for the user to see
how much improvement applying compression brings.

Hmm... but it is not a big deal to me... :)
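
For illustration, a small self-contained sketch of the two candidate
accumulators, assuming the 8-byte page header the hunk's comment describes
and a made-up per-page len of 1032 bytes:

    #include <inttypes.h>
    #include <stdio.h>

    #define TARGET_PAGE_SIZE 4096
    #define PAGE_HEADER_LEN  8 /* "8 means a header with RAM_SAVE_FLAG_CONTINUE" */

    int main(void)
    {
        uint64_t reduced_size = 0;    /* what the v2 patch accumulates       */
        uint64_t compressed_size = 0; /* what the review suggests storing    */
        uint64_t len = 1032;          /* made-up bytes written for one page  */

        reduced_size    += TARGET_PAGE_SIZE - len + PAGE_HEADER_LEN;
        compressed_size += len - PAGE_HEADER_LEN;

        /* The two always sum to TARGET_PAGE_SIZE, so either can be derived
         * from the other; the question is only which one to store. */
        printf("reduced=%" PRIu64 " compressed=%" PRIu64 "\n",
               reduced_size, compressed_size);
        return 0;
    }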

Yeah it might be a personal preference indeed. :)

It's just natural to do that this way for me since AFAIU the
compression ratio is defined as:

                             compressed data size
    compression ratio =    ------------------------
                              original data size


Er, we do it as follows:
             compression_counters.compression_rate =
                 (double)(compression_counters.reduced_size -
                 rs->compress_reduced_size_prev) /
                 (comp_pages * TARGET_PAGE_SIZE);

We use reduced_size, i.e.:

                              original data size - compressed data size
     compression ratio =    -------------------------------------------
                                        original data size

for example, for 100 bytes of raw data, if we posted 99 bytes out, then
the compression ratio should be 1%.

I think it should be 99%...


So if I understand correctly, the reduced_size is really what you want? :)


Here's the "offical" definition. :)

https://en.wikipedia.org/wiki/Data_compression_ratio

But obviously I swapped the numerator and denominator... So maybe we
can follow what the wiki page says (then I think you'll just store the
summation of len - 8)?

Thank you for correcting me: what I did is "space saving" rather
than "compression ratio". As "compression ratio" is popularly used
in compression benchmarks, your suggestion is fine to me.
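
To pin down the two definitions with the thread's own 100-byte example,
a minimal self-contained illustration:

    #include <stdio.h>

    int main(void)
    {
        /* The thread's example: 100 raw bytes, 99 bytes posted out. */
        double original = 100.0, compressed = 99.0;

        /* Wikipedia definition: uncompressed size / compressed size. */
        double compression_ratio = original / compressed;          /* ~1.01 */

        /* What the v2 patch effectively computed: space saving. */
        double space_saving = (original - compressed) / original;  /* 0.01  */

        printf("compression ratio=%.4f, space saving=%.2f%%\n",
               compression_ratio, space_saving * 100);
        return 0;
    }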



