qemu-devel

Re: [PATCH 0/3] migration: add zstd compression


From: Juan Quintela
Subject: Re: [PATCH 0/3] migration: add zstd compression
Date: Fri, 24 Jan 2020 13:43:23 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)

Denis Plotnikov <address@hidden> wrote:
> The zstd data compression algorithm shows better performance at data
> compression. It might be useful to employ the algorithm in VM migration
> to reduce CPU usage.
> A user will be able to choose between those algorithms, therefore a
> compress-type migration parameter is added.
>
> Here are some results of a performance comparison of zstd vs gzip:

Please, could you comment on the series:

[PATCH v3 00/21] Multifd Migration Compression

That series integrates zstd/zlib compression on top of multifd; its
advantages over the "old" compression code are:
- We don't have to copy data back and forth
- The unit of compression is 512KB instead of 4KB
- We "conserve" the compression state between packets (this is especially
  interesting for zstd, which uses dictionaries); see the sketch below
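
To make that last point concrete, here is a minimal, self-contained sketch
(not the actual multifd code; packet size, level and data are only
illustrative) of keeping one long-lived zstd streaming context per channel,
assuming libzstd's ZSTD_compressStream2() API:

/*
 * Sketch only: one ZSTD_CCtx created once and reused, so the compression
 * history built by earlier packets carries over to the following ones.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zstd.h>

enum { PACKET_SIZE = 512 * 1024 };      /* the 512KB unit mentioned above */

int main(void)
{
    ZSTD_CCtx *cctx = ZSTD_createCCtx();              /* created once */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 1);

    char *src = malloc(PACKET_SIZE);
    size_t dst_cap = ZSTD_compressBound(PACKET_SIZE);
    char *dst = malloc(dst_cap);
    memset(src, 'x', PACKET_SIZE);                    /* stand-in for guest RAM */

    for (int packet = 0; packet < 4; packet++) {
        ZSTD_inBuffer in = { src, PACKET_SIZE, 0 };
        ZSTD_outBuffer out = { dst, dst_cap, 0 };
        size_t rc;

        /* ZSTD_e_flush makes this packet decodable on the other side while
         * keeping the frame (and its history/dictionary) open. */
        do {
            rc = ZSTD_compressStream2(cctx, &out, &in, ZSTD_e_flush);
            if (ZSTD_isError(rc)) {
                fprintf(stderr, "zstd: %s\n", ZSTD_getErrorName(rc));
                return 1;
            }
        } while (rc != 0 || in.pos < in.size);

        printf("packet %d: %zu -> %zu bytes\n", packet, in.size, out.pos);
    }

    ZSTD_freeCCtx(cctx);
    free(src);
    free(dst);
    return 0;
}

The receiving side would need a matching long-lived ZSTD_DCtx per channel
so that decompression sees the same history.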

> host: i7-4790 8xCPU @ 3.60GHz, 16G RAM
> migration to the same host
> VM: 2xVCPU, 8G RAM total
> 5G RAM used, memory populated with PostgreSQL data
> produced by the pgbench performance benchmark
>
>
> Threads: 1 compress – 1 decompress
>
> zstd provides a slightly lower compression ratio with almost the same
> CPU usage, but compresses RAM roughly 2 times faster
>
> compression type              zlib       |      zstd
> ---------------------------------------------------------
> compression level          1       5     |   1       5
> compression ratio          6.92    7.05  |   6.69    6.89
> cpu idle, %                82      83    |   86      80
> time, sec                  49      71    |   26      31
> time diff to zlib, sec                      -25     -41
>
>
> Threads: 8 compress – 2 decompress
>
> zstd provides the same migration time with lower CPU consumption
>
> compression type         none  |        gzip(zlib)    |          zstd
> ------------------------------------------------------------------------------
> compression level        -     |  1      5       9    |   1       5       15
> compression ratio        -     |  6.94   6.99    7.14 |   6.64    6.89    6.93
> time, sec                154   |  22     23      27   |   23      23      25
> cpu idle, %              99    |  45     30      12   |   70      52      23
> cpu idle diff to zlib          |                      |  -25%    -22%    -11%

I don't have the results handy, but it looked to me like this:
- zstd has a way better latency than zlib (i.e. the packet arrives sooner)
- And it compresses much better

On the migration test (the best possible case for a compressor, as we are
writing just one byte of each page, and we write the same value in all
pages; see the sketch after the numbers below):

- zlib: compress 512KB -> 2500 bytes
- zstd: compress 512KB -> 52 bytes (yes, I tested it several times, it
  looked too small)
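
For reference, this kind of input can be approximated outside QEMU with a
few lines of C. This is only a sketch of the scenario described above (one
byte touched per 4KB page of a 512KB buffer), not the migration-test code,
and the resulting sizes will not match the numbers above exactly:

/*
 * Sketch only: build the "one byte per page, same value everywhere"
 * pattern and compress it with zlib and zstd at level 1.  Exact byte
 * counts depend on library versions and settings.
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>
#include <zstd.h>

enum { BUF_SIZE = 512 * 1024, PAGE_SIZE = 4096 };

int main(void)
{
    static unsigned char src[BUF_SIZE];
    static unsigned char dst[BUF_SIZE + 4096];   /* headroom; real code
                                                    would use compressBound() */

    memset(src, 0, sizeof(src));
    for (size_t off = 0; off < BUF_SIZE; off += PAGE_SIZE) {
        src[off] = 0x55;                 /* same value written in every page */
    }

    uLongf zlen = sizeof(dst);
    if (compress2(dst, &zlen, src, BUF_SIZE, 1) != Z_OK) {
        return 1;
    }
    printf("zlib level 1: %d -> %lu bytes\n", BUF_SIZE, (unsigned long)zlen);

    size_t slen = ZSTD_compress(dst, sizeof(dst), src, BUF_SIZE, 1);
    if (ZSTD_isError(slen)) {
        return 1;
    }
    printf("zstd level 1: %d -> %zu bytes\n", BUF_SIZE, slen);
    return 0;
}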

Note that I posted another patch to "delete" the old compression code.
Why?
- I have been unable to modify migration-test to exercise it and make it
  work reliably (the only way was to allow a really huge downtime)
- Even with slow networking (1 Gigabit) I got really mixed results,
  because as it is so slow, the guest keeps dirtying memory, and in
  my tests it was never a winner

Thanks, Juan.



