
Re: [Help-tar] Extraction performance problem


From: Paul Eggert
Subject: Re: [Help-tar] Extraction performance problem
Date: Thu, 05 Feb 2015 12:23:04 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.4.0

On 02/05/2015 10:57 AM, Jakob Bohm wrote:
>> The default is 20 (i.e., 20 x 512 = 10 KiB).
> Which happens not to be a multiple of 4 KiB.

True. The 10 KiB value has been the default since the 1970s, though; it dates back to when computers often had only 32 KiB of RAM. Changing it might break things, so we shouldn't change it without good reason. A sufficient performance improvement for typical uses would be a good enough reason, but we'd need to see the numbers.
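For what it's worth, a user who cares about alignment doesn't need a new default: GNU tar already accepts --blocking-factor (-b), and -b 8 yields 4 KiB records. A trivial C sketch of the arithmetic (not tar source, just the alignment check):

    #include <stdio.h>

    int main(void)
    {
        /* tar records are blocking-factor x 512-byte blocks */
        const int factors[] = { 20, 8 };   /* default vs. a 4 KiB-aligned choice */
        for (int i = 0; i < 2; i++) {
            int record = factors[i] * 512;
            printf("-b %2d -> %5d-byte records (%s multiple of 4096)\n",
                   factors[i], record,
                   record % 4096 == 0 ? "a" : "not a");
        }
        return 0;
    }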


> has anyone tried to make a multi-threaded version?

Not as far as I know. It's not clear that going multithreaded would be worth the hassle.


> I would agree, but given the typical behavior of correctly
> implemented file system flush logic, it might pay to somehow
> overlap the closing of extracted regular files with the
> extraction of subsequent files (because close(fd) must imply
> fdflush(fd), which must wait for disk I/O).

POSIX doesn't require 'close' to flush buffers to disk, and 'close' typically does not do that. If you're on a system where 'close' does flush, perhaps you can speed things up by configuring the system not to. It sounds like your virus scanner is slowing you down, so I'd look into how it's configured.
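To make the POSIX point concrete: durability requires an explicit fsync (or similar) before close; close alone promises nothing about stable storage. A minimal C sketch, assuming POSIX; write_durably is a hypothetical helper, not anything in tar:

    #include <fcntl.h>
    #include <unistd.h>

    /* Write a buffer and make it durable.  POSIX close() does not promise
       the data has reached stable storage; an explicit fsync() does. */
    int write_durably(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, buf, len) != (ssize_t) len || fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        return close(fd);   /* close can itself fail, so check it too */
    }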

Some of the optimizations you mention look like they may be worth doing, though we'd need to see benchmarks.
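If the close-overlap idea above were tried, one plausible shape is a helper thread draining a queue of descriptors, so a slow close() (say, one intercepted by a virus scanner) runs off the extraction path. A sketch under POSIX-threads assumptions; the queue, names, and sizes are hypothetical, and a real version would have to route close() errors back to the extractor:

    #include <pthread.h>
    #include <unistd.h>

    #define QLEN 64                    /* illustrative queue depth */

    static int queue[QLEN];            /* fds waiting to be closed */
    static int head, tail, done;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t nonfull = PTHREAD_COND_INITIALIZER;

    /* Called by the extractor instead of close(fd). */
    void defer_close(int fd)
    {
        pthread_mutex_lock(&lock);
        while ((head + 1) % QLEN == tail)   /* queue full: wait */
            pthread_cond_wait(&nonfull, &lock);
        queue[head] = fd;
        head = (head + 1) % QLEN;
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }

    /* Tell the helper that no more fds are coming. */
    void finish_closes(void)
    {
        pthread_mutex_lock(&lock);
        done = 1;
        pthread_cond_broadcast(&nonempty);
        pthread_mutex_unlock(&lock);
    }

    /* Helper thread: drain the queue, absorbing slow close() calls. */
    void *closer_thread(void *arg)
    {
        (void) arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (head == tail && !done)
                pthread_cond_wait(&nonempty, &lock);
            if (head == tail) {        /* empty and done: exit */
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            int fd = queue[tail];
            tail = (tail + 1) % QLEN;
            pthread_cond_signal(&nonfull);
            pthread_mutex_unlock(&lock);
            close(fd);                 /* the potentially slow call */
        }
    }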


