qemu-devel

Re: [Qemu-devel] [PATCH 06/12] migration: do not detect zero page for compression


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH 06/12] migration: do not detect zero page for compression
Date: Sun, 22 Jul 2018 19:05:16 +0300

On Wed, Jul 18, 2018 at 04:46:21PM +0800, Xiao Guangrong wrote:
> 
> 
> On 07/17/2018 02:58 AM, Dr. David Alan Gilbert wrote:
> > * Xiao Guangrong (address@hidden) wrote:
> > > 
> > > 
> > > On 06/29/2018 05:42 PM, Dr. David Alan Gilbert wrote:
> > > > * Xiao Guangrong (address@hidden) wrote:
> > > > > 
> > > > > Hi Peter,
> > > > > 
> > > > > Sorry for the delay as i was busy on other things.
> > > > > 
> > > > > On 06/19/2018 03:30 PM, Peter Xu wrote:
> > > > > > On Mon, Jun 04, 2018 at 05:55:14PM +0800, address@hidden wrote:
> > > > > > > From: Xiao Guangrong <address@hidden>
> > > > > > > 
> > > > > > > Detecting a zero page is not a light operation; we can disable it
> > > > > > > for compression, which handles all-zero data very well
> > > > > > 
> > > > > > Are there any numbers showing how the compression algo performs better
> > > > > > than the zero-detect algo?  Asking since AFAIU buffer_is_zero() might
> > > > > > be fast, depending on how init_accel() is done in
> > > > > > util/bufferiszero.c.
> > > > > 
> > > > > This is the comparison between zero-detection and compression (the target
> > > > > buffer is all zero bits):
> > > > > 
> > > > > Zero 810 ns Compression: 26905 ns.
> > > > > Zero 417 ns Compression: 8022 ns.
> > > > > Zero 408 ns Compression: 7189 ns.
> > > > > Zero 400 ns Compression: 7255 ns.
> > > > > Zero 412 ns Compression: 7016 ns.
> > > > > Zero 411 ns Compression: 7035 ns.
> > > > > Zero 413 ns Compression: 6994 ns.
> > > > > Zero 399 ns Compression: 7024 ns.
> > > > > Zero 416 ns Compression: 7053 ns.
> > > > > Zero 405 ns Compression: 7041 ns.
> > > > > 
> > > > > Indeed, zero-detection is faster than compression.
> > > > > 
> > > > > However, during our profiling of the live_migration thread (after
> > > > > reverting this patch), we noticed that zero-detection costs a lot of CPU:
> > > > > 
> > > > >    12.01%  kqemu  qemu-system-x86_64            [.] buffer_zero_sse2
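
For context, below is a minimal standalone sketch of this kind of measurement, assuming a 4 KiB all-zero page and zlib's one-shot compress2() (the migration compression threads use a zlib stream, so this is only an approximation). The plain byte scan stands in for buffer_is_zero(); the SSE2/AVX2 variants in util/bufferiszero.c are considerably faster. The file name and compression level are illustrative only; build with something like gcc -O2 zero_vs_zlib.c -lz (older glibc may also need -lrt for clock_gettime).

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <zlib.h>

    #define PAGE_SIZE 4096

    /* Plain byte-wise scan; stands in for buffer_is_zero(), which QEMU
     * accelerates with SSE2/AVX2 when available. */
    static bool page_is_zero(const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            if (buf[i]) {
                return false;
            }
        }
        return true;
    }

    static int64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    int main(void)
    {
        static uint8_t page[PAGE_SIZE];    /* static, so zero-filled */
        uint8_t out[PAGE_SIZE + 1024];     /* ample room for zlib output */
        uLongf out_len = sizeof(out);
        int64_t t0, t1, t2;
        int ret;

        t0 = now_ns();
        bool zero = page_is_zero(page, PAGE_SIZE);
        t1 = now_ns();
        ret = compress2(out, &out_len, page, PAGE_SIZE, 1);   /* level 1 */
        t2 = now_ns();

        printf("Zero %lld ns (%s)  Compression: %lld ns (ret=%d, %lu bytes)\n",
               (long long)(t1 - t0), zero ? "zero" : "non-zero",
               (long long)(t2 - t1), ret, (unsigned long)out_len);
        return 0;
    }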
> > > > 
> > > > Interesting; what host are you running on?
> > > > Some hosts have support for the faster buffer_zero_sse4/avx2.
> > > 
> > > The host is:
> > > 
> > > model name        : Intel(R) Xeon(R) Gold 6142 CPU @ 2.60GHz
> > > ...
> > > flags             : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
> > > mca cmov pat pse36 clflush dts acpi
> > >   mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm 
> > > constant_tsc art arch_perfmon pebs bts
> > >   rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni 
> > > pclmulqdq dtes64 monitor
> > >   ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 
> > > sse4_2 x2apic movbe popcnt
> > >   tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch 
> > > cpuid_fault epb cat_l3
> > >   cdp_l3 intel_ppin intel_pt mba tpr_shadow vnmi flexpriority ept vpid 
> > > fsgsbase tsc_adjust bmi1
> > >   hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq 
> > > rdseed adx smap clflushopt
> > >   clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc 
> > > cqm_occup_llc cqm_mbm_total
> > >   cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp 
> > > hwp_pkg_req pku ospke
> > > 
> > > I checked and noticed "CONFIG_AVX2_OPT" has not been enabled; maybe it is
> > > due to a too-old glibc/gcc version:
> > >     gcc version 4.4.6 20110731 (Red Hat 4.4.6-4) (GCC)
> > >     glibc.x86_64                     2.12
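
For reference, a rough sketch (not the exact configure test) of the compile-time probe behind CONFIG_AVX2_OPT: util/bufferiszero.c builds its AVX2 variant inside "#pragma GCC target" blocks, so the option can only be enabled when the compiler accepts the avx2 target and the 256-bit intrinsics. gcc 4.4 predates AVX2 support in GCC (added around 4.7), so a probe like the one below fails to compile and only the SSE2 variant is built. It is a compile-only check; actually running it would need an AVX2-capable CPU.

    /* Hypothetical probe: does the compiler support per-function AVX2? */
    #pragma GCC push_options
    #pragma GCC target("avx2")
    #include <immintrin.h>

    static int avx2_probe(const void *buf)
    {
        /* The same compare-against-zero + movemask pattern the AVX2
         * buffer_zero variant relies on. */
        __m256i v = _mm256_loadu_si256((const __m256i *)buf);
        __m256i z = _mm256_setzero_si256();
        return _mm256_movemask_epi8(_mm256_cmpeq_epi8(v, z));
    }
    #pragma GCC pop_options

    int main(void)
    {
        static const char buf[32];             /* all zero */
        return avx2_probe(buf) == -1 ? 0 : 1;  /* all-zero -> mask 0xffffffff */
    }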
> > 
> > Yes, that's pretty old (RHEL6 ?) - I think you should get AVX2 in RHEL7.
> 
> Er, it is not easy to update glibc in the production env.... :(

But neither is QEMU updated in production all that easily. While we do
want to support older hosts functionally, it does not make
much sense to develop complex optimizations that only benefit
older hosts.

-- 
MST


