
From: Paul Eggert
Subject: bug#59382: cp(1) tries to allocate too much memory if filesystem blocksizes are unusual
Date: Sun, 20 Nov 2022 09:29:33 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.4.2

On 2022-11-19 22:43, Korn Andras wrote:
> the same file can contain records of different
> sizes. Reductio ad absurdum: the "optimal" blocksize for reading may in fact
> depend on the position within the file (and only apply to the next read).

This sort of problem exists on traditional devices as well. A tape drive, for example, can have records of different sizes. For such devices, the best approach is to allocate a buffer of the maximum blocksize the drive supports.

For the file you describe, the situation is different, since ZFS will straddle small blocks during I/O. Although there's no single "best", I would guess it would typically be better to report the blocksize currently in use for creating new blocks (which would be a power of two for ZFS), as that maps better to how programs like cp deal with blocksizes. This may not be perfect, but it would be better than what ZFS does now, at least for the instances of 'cp' already out there.
