[Emacs-bug-tracker] bug#7362: closed (dd strangeness)


From: GNU bug Tracking System
Subject: [Emacs-bug-tracker] bug#7362: closed (dd strangeness)
Date: Wed, 10 Nov 2010 15:26:01 +0000

Your message dated Wed, 10 Nov 2010 15:29:41 +0000
with message-id <address@hidden>
and subject line Re: bug#7362: dd strangeness
has caused the GNU bug report #7362,
regarding dd strangeness
to be marked as done.

(If you believe you have received this mail in error, please contact
address@hidden)


-- 
7362: http://debbugs.gnu.org/cgi/bugreport.cgi?bug=7362
GNU Bug Tracking System
Contact address@hidden with problems
--- Begin Message ---
Subject: dd strangeness
Date: Wed, 10 Nov 2010 11:22:25 +0100
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.2.4) Gecko/20100608 Thunderbird/3.1

I see this behavior in Solaris, Linux and BSD dd: if I send a big enough file, they all read it short at the end of the stream.

This works as expected:

# cat /dev/zero | dd bs=512 count=293601280 | wc

I get the expected results: dd reads exactly 293601280 blocks and wc sees 150323855360 characters (140 GB).

Whereas substituting zfs send for cat doesn't:

# zfs send <backup> | dd bs=512 count=293601280 | wc

The output of one of the runs is

293590463+10817 records in
293590463+10817 records out

and the bytes counted by wc are < 140 GB. The zfs command sends 600 GB, so obviously dd should not run short.
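
The "+10817" in that summary is dd's count of partial records: reads that returned fewer than bs bytes. A minimal sketch of the same effect on a small scale, assuming GNU dd and a writer that pauses mid-block (the 512/1024 sizes are arbitrary, chosen only for the illustration):

# { printf '%512s' x; sleep 1; printf '%512s' x; } | dd bs=1024 count=1 | wc -c   # assumes GNU dd; sizes are arbitrary

The first read from the pipe typically returns only 512 bytes; dd counts that as a partial record, reports "0+1 records in", and wc sees 512 rather than 1024.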

BSD and Linux dd were used on BSD and Linux machines, respectively, piping the stream with nc.

Since this happens with three different implementations of dd, I'm thinking of a design flaw, but I've never encountered it before. I'm testing sdd (a dd replacement) and will see what happens, though it'll still take about 5 hours. There seems to be something going on in dd with different input and output block sizes, since both sdd and this bug report, https://bugs.launchpad.net/ubuntu/+source/rsyslog/+bug/517773, hint at it: "The dd process requires a ridiculous amount of CPU during startup, though, since it is running with bs=1 to not miss stuff". But I don't know if that's what's happening here. According to man dd, bs sets both ibs and obs.

bs=512 is the latest attempt I made, but I've tried other combinations of the bs and count parameters (always adding up to 140 GB) to no avail; nothing seems to work with a big stream. I still haven't tried bs=1, as I think it would take weeks to run, but maybe I'm wrong. With smaller files, up to hundreds of MB, dd works fine, but I can't tell at what size it breaks, under which circumstances, or why.
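
The bs/count combinations that add up to the same 140 GB are just block arithmetic; a sketch of a few equivalents, assuming dd accepts the usual K/M suffixes (K = 1024, M = 1048576, as in GNU dd):

bs=512  count=293601280    (512     * 293601280 = 150323855360 bytes)
bs=128K count=1146880      (131072  * 1146880   = 150323855360 bytes)
bs=1M   count=143360       (1048576 * 143360    = 150323855360 bytes)

All three request the same total, so if the shortfall comes from how dd reads the pipe rather than from the requested size, changing bs alone would not be expected to help.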



--- End Message ---
--- Begin Message ---
Subject: Re: bug#7362: dd strangeness
Date: Wed, 10 Nov 2010 15:29:41 +0000
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.8) Gecko/20100227 Thunderbird/3.0.3
On 10/11/10 10:22, Lucia Rotger wrote:
> I see this behavior in Solaris, Linux and BSD dd: if I send a big enough
> file they all read it short at the end of the stream.
> 
> This works as expected:
> 
> # cat /dev/zero | dd bs=512 count=293601280 | wc
> 
> I get the expected results, dd reads exactly 293601280 blocks and wc
> sees 150323855360 characters, 140 GB
> 
> Whereas substituting zfs send for cat doesn't:
> 
> # zfs send <backup> | dd bs=512 count=293601280 | wc

Different write sizes to the pipe mean that
in the latter case, dd will get short reads.
IMHO dd is doing the wrong/most surprising thing here,
but it can't be changed for compatibility reasons.
You can get coreutils dd to do what you want with:

dd iflag=fullblock

cheers,
Pádraig.


--- End Message ---
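
A minimal sketch of what iflag=fullblock changes, on the same small scale as above (assumes GNU coreutils dd, since iflag=fullblock is a GNU extension; the 512/1024 sizes are arbitrary):

# { printf '%512s' x; sleep 1; printf '%512s' x; } | dd bs=1024 count=1 | wc -c                   # short read uses up count=
# { printf '%512s' x; sleep 1; printf '%512s' x; } | dd iflag=fullblock bs=1024 count=1 | wc -c   # refills the block first

The first typically copies 512 bytes and reports "0+1 records in", because the short pipe read consumes the block count; the second keeps reading until the 1024-byte block is full, so wc sees 1024. With iflag=fullblock, count= refers to complete input blocks, which is what the 140 GB pipelines above assume.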
