
Re: Processing a big file using more CPUs


From: Nio Wiklund
Subject: Re: Processing a big file using more CPUs
Date: Tue, 12 Feb 2019 06:39:54 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0

On 2019-02-11 at 23:54, Ole Tange wrote:
On Mon, Feb 4, 2019 at 10:19 PM Nio Wiklund <nio.wiklund@gmail.com> wrote:
:
    cat bigfile | parallel --pipe --recend '' -k gzip -9 > bigfile.gz
:
The reason I want this is that I often create compressed images of the
contents of a drive, /dev/sdx, and when using parallel I lose roughly
half of the compression improvement that xz gives over gzip. The speed
improvement is good, about 2.5 times, but I think larger blocks would
give xz a chance to compress much closer to what it achieves without
parallel.

Is it possible with the current code? If so, how?
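
(A partial workaround with plain --pipe is to raise --block from its
default of about 1 MB, so that each xz job sees more context; the file
names and the 500M value below are only illustrative:)

    # bigger blocks improve the ratio, at the cost of more RAM per job
    cat bigfile | parallel --pipe --recend '' -k --block 500M xz -9 > bigfile.xz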

Since version 2016-07-22:

parallel --pipepart -a bigfile --recend '' -k --block -1 xz > bigfile.xz
parallel --pipepart -a /dev/sdx --recend '' -k --block -1 xz > sdx.img.xz

Unfortunately, the size computation for block devices only works under GNU/Linux.
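
(For context: with --pipepart a negative --block is interpreted as
blocks per jobslot, so --block -1 should split the input into roughly
one block per jobslot, i.e. the largest blocks possible. On GNU/Linux
the device size can be checked by hand with util-linux's blockdev:)

    # print the size of the block device in bytes (GNU/Linux, util-linux)
    blockdev --getsize64 /dev/sdx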

(That said: pxz exists, and it may be more relevant to use here).


/Ole


Thanks for this reply, Ole.

I will test how your suggested command lines work for me, and also look into parallel processing within xz.
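
(For reference: xz 5.2 and later also have built-in threading via
-T/--threads; it splits the input into independent blocks internally,
so a similar ratio trade-off applies. A minimal sketch, with an
illustrative output name:)

    # -T0 uses one thread per CPU core; requires xz >= 5.2
    xz -9 -T0 < /dev/sdx > sdx.img.xz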

Best regards
Nio


