From: Juan Quintela
Subject: Re: [PATCH v2 2/2] migration: add support for qatzip compression when doing live migration
Date: Thu, 20 Apr 2023 13:29:30 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)
Markus Armbruster <armbru@redhat.com> wrote:
> "you.chen" <you.chen@intel.com> writes:
>
>> Add config and logic to use qatzip for page compression. In order
>> to support qatzip compression better, we collect multiple pages
>> together and compress them with qatzip in one go for best performance.
>> The compile option CONFIG_QATZIP determines whether the qatzip
>> related code is compiled or not.
>>
>> Co-developed-by: dennis.wu <dennis.wu@intel.com>
>> Signed-off-by: you.chen <you.chen@intel.com>
>
> [...]
>
>> diff --git a/qapi/migration.json b/qapi/migration.json
>> index c84fa10e86..6459927c7a 100644
>> --- a/qapi/migration.json
>> +++ b/qapi/migration.json
>> @@ -644,6 +644,8 @@
>> #     no compression, 1 means the best compression speed, and 9 means best
>> #     compression ratio which will consume more CPU.
>> #
>> +# @compress-with-qat: compress with qat on and off. (Since 8.1)
>> +#
>> # @compress-threads: Set compression thread count to be used in live migration,
>> #     the compression thread count is an integer between 1 and 255.
>> #
>> @@ -784,7 +786,7 @@
>>  { 'enum': 'MigrationParameter',
>>    'data': ['announce-initial', 'announce-max',
>>             'announce-rounds', 'announce-step',
>> -           'compress-level', 'compress-threads', 'decompress-threads',
>> +           'compress-level', 'compress-with-qat', 'compress-threads', 'decompress-threads',
>>             'compress-wait-thread', 'throttle-trigger-threshold',
>>             'cpu-throttle-initial', 'cpu-throttle-increment',
>>             'cpu-throttle-tailslow',
>> @@ -815,6 +817,8 @@
>> #
>> # @compress-level: compression level
>> #
>> +# @compress-with-qat: compression with qat (Since 8.1)
>> +#
>> # @compress-threads: compression thread count
>> #
>> # @compress-wait-thread: Controls behavior when all compression threads are
>> @@ -954,6 +958,7 @@
>> '*announce-rounds': 'size',
>> '*announce-step': 'size',
>> '*compress-level': 'uint8',
>> + '*compress-with-qat': 'bool',
>> '*compress-threads': 'uint8',
>> '*compress-wait-thread': 'bool',
>> '*decompress-threads': 'uint8',
>> @@ -1152,6 +1157,7 @@
>> '*announce-rounds': 'size',
>> '*announce-step': 'size',
>> '*compress-level': 'uint8',
>> + '*compress-with-qat': 'bool',
>> '*compress-threads': 'uint8',
>> '*compress-wait-thread': 'bool',
>> '*decompress-threads': 'uint8',
>
> We already have MigrationCapability compress
>
> # @compress: Use multiple compression threads to accelerate live migration.
> #     This feature can help to reduce the migration traffic, by sending
> #     compressed pages. Please note that if compress and xbzrle are both
> #     on, compress only takes effect in the ram bulk stage, after that,
> #     it will be disabled and only xbzrle takes effect, this can help to
> #     minimize migration traffic. The feature is disabled by default.
> #     (since 2.4)
I had a patch to deprecate it in 8.1.
And now COLO is using it. Sniff.
> and xbzrle
>
> # @xbzrle: Migration supports xbzrle (Xor Based Zero Run Length Encoding).
> #     This feature allows us to minimize migration traffic for certain work
> #     loads, by sending compressed difference of the pages
> #
Different can of worms, but I agree with you.
> and MigrationParameters / MigrateSetParameters multifd-compression
>
> # @multifd-compression: Which compression method to use.
> #     Defaults to none. (Since 5.0)
> #
> # @multifd-zlib-level: Set the compression level to be used in live
> #     migration, the compression level is an integer between 0
> #     and 9, where 0 means no compression, 1 means the best
> #     compression speed, and 9 means best compression ratio which
> #     will consume more CPU.
> #     Defaults to 1. (Since 5.0)
> #
> # @multifd-zstd-level: Set the compression level to be used in live
> #     migration, the compression level is an integer between 0
> #     and 20, where 0 means no compression, 1 means the best
> #     compression speed, and 20 means best compression ratio which
> #     will consume more CPU.
> #     Defaults to 1. (Since 5.0)
>
> where multifd-compression is
>
> ##
> # @MultiFDCompression:
> #
> # An enumeration of multifd compression methods.
> #
> # @none: no compression.
> # @zlib: use zlib compression method.
> # @zstd: use zstd compression method.
> #
> # Since: 5.0
> ##
I think it belongs here as another compression method.
Later, Juan.
> How does this all fit together? It feels like a bunch of features piled
> onto each other, then shaken well. Or am I confused?
>
> I could use an abstract description of compression in migration.
compression -> old code; it uses threads and compresses one page at a time
               (i.e. it takes more time to copy the page over to the thread
               than what we get back in return). Data is copied several times.
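The trade-off above can be sketched in a few lines of Python (illustrative only, not QEMU code; the page size and page contents are made up): compressing each page on its own loses all cross-page redundancy and pays a per-call stream overhead, while batching the pages into one buffer recovers the redundancy.

```python
import zlib

PAGE = 4096  # illustrative guest page size

# 64 pages of fairly repetitive toy "guest memory", cycling 8 patterns.
pages = [(b"pattern-%04d" % (i % 8)) * (PAGE // 12) for i in range(64)]

# Old scheme: every page compressed independently, so zlib pays its
# stream setup/checksum cost 64 times and never sees cross-page repeats.
per_page = sum(len(zlib.compress(p)) for p in pages)

# Batched scheme: one buffer, one compress call; repeats between pages
# become back-references.
batched = len(zlib.compress(b"".join(pages)))

print(per_page, batched)  # batched comes out substantially smaller
```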
xbzrle: uses a cache of already sent pages and sends only the difference.
        Not really very useful except if you are migrating across data
        centers. Here you trade memory and CPU consumption on the host
        for less network bandwidth used.
        The current cache size default is a joke, but that is a completely
        different can of worms.
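A toy sketch of the xbzrle idea (the names and the encoding are invented for illustration; this is not QEMU's actual wire format): XOR the cached copy of a page against the new version, then run-length encode the runs of zero bytes, so only the changed bytes travel.

```python
def xor_rle_encode(old: bytes, new: bytes) -> list:
    """Encode `new` as zero-runs and changed-byte blobs against `old`."""
    delta = bytes(a ^ b for a, b in zip(old, new))
    out, i = [], 0
    while i < len(delta):
        j = i
        if delta[i] == 0:                       # unchanged: count the zero run
            while j < len(delta) and delta[j] == 0:
                j += 1
            out.append(("zrun", j - i))
        else:                                    # changed: copy the XOR bytes
            while j < len(delta) and delta[j] != 0:
                j += 1
            out.append(("data", delta[i:j]))
        i = j
    return out

def xor_rle_decode(old: bytes, enc: list) -> bytes:
    """Rebuild the new page from the cached copy plus the encoded delta."""
    parts = [b"\x00" * val if kind == "zrun" else val for kind, val in enc]
    delta = b"".join(parts)
    return bytes(a ^ b for a, b in zip(old, delta))

old = b"A" * 64                      # cached copy of the page
new = b"A" * 30 + b"XY" + b"A" * 32  # page after the guest touched 2 bytes
enc = xor_rle_encode(old, new)
```

On a mostly unchanged page the encoding collapses to a couple of run counters plus the handful of modified bytes, which is the whole bandwidth win.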
multifd compression: it is much better (I know that I am the author, but
                     the whole reason to create it was to address the
                     shortcomings of the old compression code).
                     Highlights:
                     - it compresses 64 pages at a time, so it gets much
                       better compression ratios.
                     - for each channel, it never resets the compression
                       stream during migration, which means it compresses
                       much better. For the few things that I know about
                       compression, new methods rely heavily on
                       dictionaries, so you need long sessions to get the
                       best effectiveness.
                     - it minimizes the number of copies. With zstd, no
                       copies at all. With zlib it makes one copy, because
                       hardware implementations (s390x, I am looking at
                       you) make two passes through the data in some
                       cases.
I plan to add xbzrle to multifd compression in the near future so I can
also deprecate it.
But someone will appear who really needs it O:-)
Later, Juan.