Re: df incorrectly handles volumes > 4TB
From: Paul Eggert
Subject: Re: df incorrectly handles volumes > 4TB
Date: Sun, 26 Dec 2004 22:04:02 -0800
User-agent: Gnus/5.1006 (Gnus v5.10.6) Emacs/21.3 (gnu/linux)
address@hidden writes:
> This is how a 4.5TB filesystem with ~400GB of data on it is displayed with df
> from coreutils-5.2.1.
>
> nkfb0 root # df -H
> Filesystem Size Used Avail Use% Mounted on
> ...
> /dev/archive/scr0 601G 412G 183G 70% /scr0
> ...
> strace -v df -k /dev/archive/scr0
> ...
> statfs("/scr0", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=146499832,
> f_bfree=80165081, f_bavail=78697075, f_files=9318400, f_ffree=9282355,
> f_fsid={0, 0}, f_namelen=255, f_frsize=4096}) = 0
statfs is claiming that there are 146,499,832 blocks, each of size
4096 bytes, which works out to 600,063,311,872 bytes total, so df's
"601G" faithfully reflects what the system call is telling "df".
Most likely, then, the bug is not in "df" itself.  I'd look for bugs
in the underlying C library, operating system, or file system, etc.
4.5 TB is just over 2**30 * 4096 bytes, so perhaps your file system
can't handle more than 2**30 blocks for some reason.