Re: [Lzip-bug] On Windows unpacking does not use all cores


From: wrotycz
Subject: Re: [Lzip-bug] On Windows unpacking does not use all cores
Date: Mon, 16 Apr 2018 03:56:40 +0200
User-agent: GWP-Draft


Romano wrote:
Yesterday I compiled the latest lzlib 1.9 and plzip 1.7 under Cygwin (also
the latest, installed just yesterday) on Windows 7, both as 64-bit. It
compiled without any errors or warnings and the tool also works fine. During
compression it is able to utilize all CPU cores to 100%, so multi-threading
works. Same with testing using the -t flag. However, when I actually try to
decompress with -d it never even peaks above 50%, despite the -n option,
even if I double the number to -n 8. For parallel compression, until now I
have always used FreeArc's internal 4x4:lzma, which always fully utilized my
CPU, and it shows: during decompression without an I/O limitation it could
reach ~200 MiB/s.

I don't use Windows, so I can't test your executable. (BTW, please don't
spam the list with huge unsolicited files.) The fact that plzip can use all
cores while testing makes me suspect some I/O problem/idiosyncrasy. See for
example this thread on the MinGW list:


I am aware of the blocks concept as well; the tool also did not utilize all
CPU cores with a smaller -B block size and a big enough file. And I know for
sure it's not my HDD limiting it, first because it is quicker than the
output and FreeArc can still utilize its maximum, but also because plzip
does not utilize the full CPU even when using stdin/stdout.

Even if decompressing from a regular file to /dev/null?

Yes - it seems /dev/null is treated, as you said, as non-seekable. I noticed
the same on Linux when decompressing to the null device; you can check it
yourself. I was actually about to report the very same thing and made some
tests with a large amount of data, only to find out it is intended
behaviour. But not all in vain - it turned out that decompressing to a
regular file removes the limitation.
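
In case it is useful, here is a minimal sketch of the kind of check I imagine
could explain this (just my guess at the logic, not plzip's actual code; the
function name is made up for illustration). A decompressor that wants to
write blocks out of order would first test whether the output is a seekable
regular file; /dev/null is a character device, so such a test fails and the
writer falls back to a slower, in-order path.

  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Hypothetical check: out-of-order (parallel) writing is only attempted
     when the output descriptor is a seekable regular file.  /dev/null is a
     character device, so S_ISREG() is false and a caller would fall back
     to ordered, single-writer output.  Illustration only, not plzip code. */
  static int output_allows_random_access( int fd )
  {
    struct stat st;
    if( fstat( fd, &st ) != 0 ) return 0;
    if( !S_ISREG( st.st_mode ) ) return 0;         /* /dev/null is S_ISCHR */
    return lseek( fd, 0, SEEK_CUR ) != (off_t)-1;  /* and must be seekable */
  }

  int main( int argc, char * argv[] )
  {
    FILE * f = fopen( argc > 1 ? argv[1] : "/dev/null", "w" );
    if( !f ) return 1;
    printf( "random access %s\n",
            output_allows_random_access( fileno( f ) ) ? "possible" : "not possible" );
    fclose( f );
    return 0;
  }

If plzip does something like that, it would match what I observed:
decompressing the same .lz file to /dev/null stays at about half the CPU,
while decompressing it to a regular file on disk keeps all cores busy.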

