bug#9734: [solaris] `dd if=/dev/urandom of=file bs=1024k count=1' gets a file of 133120 bytes
Wed, 12 Oct 2011 16:42:49 +0100
On 10/12/2011 03:14 PM, Eric Blake wrote:
> On 10/12/2011 02:22 AM, Clark J. Wang wrote:
>> I'm not sure if it's a bug but it's not reasonable to me. On Solaris 11
>> (SunOS 5.11 snv_174, i86pc):
>> $ uname -a
>> SunOS sollab-242.cn.oracle.com 5.11 snv_174 i86pc i386 i86pc
>> $ pkg list gnu-coreutils
>> NAME (PUBLISHER) VERSION
>> file/gnu-coreutils 8.5-0.174.0.0.0.0.504
>> $ /usr/gnu/bin/dd if=/dev/urandom of=file bs=1024k count=1
>> 0+1 records in
> Notice that this means you read a partial record - read() tried to read 1024k
> bytes, but the read ended short at only 133120 bytes.
>> 0+1 records out
> And because you didn't request dd to group multiple short reads before doing
> a full write, you got a single (short) record written.
>> I'm new to Solaris but I've never seen this problem when I use Linux so it
>> really surprises me.
> Solaris and Linux kernels differ on when you will get short reads, and magic
> files like /dev/urandom are more likely to display the issue than regular
> files. That said, Linux also has the "problem" of short reads; it's
> especially noticeable when passing the output of dd to a pipe.
> You probably wanted to use this GNU extension:
> dd if=/dev/urandom of=file bs=1024k count=1 iconv=fullblock
> where the iconv flag requests that dd pile together multiple read()s until it
> has a full block, so that you no longer have a partial block output.
Right, but for the record it's iflag=fullblock (available since coreutils 7.0)
This common issue is warned about by coreutils >= 8.11,
which will suggest using iflag=fullblock.
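To illustrate (a sketch, assuming GNU dd; the two-stage pipe below just simulates data arriving in separate chunks, and the `out1`/`out2` filenames are illustrative):

```shell
# Two writes 0.5s apart: without iflag=fullblock, dd's first read() may
# return only the first 2 bytes, which it counts as one (short) record.
{ printf ab; sleep 0.5; printf cd; } | dd bs=4 count=1 of=out1 2>/dev/null

# With iflag=fullblock, dd keeps calling read() until it has a full
# 4-byte block (or hits EOF), so the output is the full block.
{ printf ab; sleep 0.5; printf cd; } \
  | dd bs=4 count=1 iflag=fullblock of=out2 2>/dev/null

wc -c < out1   # likely 2 (a short record)
wc -c < out2   # 4 (a full block)
```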
Note that the particular case of count=1 is not warned about,
since with a single read one can't tell whether we're simply at EOF.
Also, it's probably quite a common idiom to consume the available data,
up to $bs bytes.
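For instance (a sketch; the 3-byte input and the `small` filename are just illustrative), count=1 without fullblock takes whatever a single read() returns, up to bs bytes:

```shell
# The pipe holds only 3 bytes, so dd's single read() returns 3 bytes;
# dd reports "0+1 records" and writes a 3-byte file.
printf abc | dd bs=512 count=1 of=small 2>/dev/null
wc -c < small   # 3
```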