From: abc
Subject: dd strangeness
Date: Wed, 08 Dec 2010 15:04:14 -0000
User-agent: G2/1.0
I need to split a large stream of approximately 600 GB generated by
Solaris' zfs send command. I could test the whole chain with a big
enough file, but I don't see how the zfs command itself could be the
problem, as it only generates a stream.
These two commands work as expected:

# zfs send <backup> | wc

wc reports the full size of the backup, well beyond 140 GB; since I'm
only interested in the first 140 GB, this looks good.

# cat /dev/zero | dd bs=512 count=293601280 | wc

also gives the expected result: dd reads exactly 293601280 blocks and
wc sees 150323855360 characters, that is, 140 GB.
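(For reference, the arithmetic behind that figure, checked with bc:

# echo '293601280 * 512' | bc
150323855360
# echo '150323855360 / 1024 / 1024 / 1024' | bc
140

so count=293601280 blocks of 512 bytes is exactly 140 GB.)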
The problem comes when I combine the zfs send from the first command
with the dd | wc from the second: dd always reads short of the count
value:
# zfs send <backup> | dd bs=512 count=293601280 | wc
293590463+10817 records in
293590463+10817 records out
Admittedly, this is Solaris' dd, but when I pipe the stream through nc
to an up-to-date Linux box and run dd there, I also get short reads.
I haven't tried BSD dd yet.
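If I read the a+b notation right, the +10817 means dd got 10817
partial records, i.e. reads that returned fewer than 512 bytes. Here
is a tiny reproduction of that behaviour with a deliberately slow
writer (the printf/sleep pair is just a stand-in for a slow producer,
not my real pipeline); on my Linux box it prints something like:

# (printf aaaa; sleep 1; printf bbbb) | dd bs=8 count=1 | wc -c
0+1 records in
0+1 records out
4

dd's first read() from the pipe returns only the 4 bytes available at
that moment, dd counts that as one partial record, and with count=1
satisfied it stops, so wc sees 4 bytes instead of 8.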
bs=512 is just my latest attempt; I've tried various combinations of
the bs and count parameters to no avail, and nothing seems to work
with the 600 GB stream. I haven't tried bs=1, as I suspect it would
take weeks to get through, but maybe I'm wrong. With smaller inputs,
up to hundreds of MB, dd works fine, but I can't tell at what size it
breaks, under which circumstances, or why. I welcome any pointers at
this time.
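For what it's worth, two things I haven't tried yet (so corrections
welcome): if I read the GNU dd man page correctly, iflag=fullblock
should make dd keep re-reading until each input block is full, and
head -c could take the byte count directly:

# zfs send <backup> | dd bs=512 count=293601280 iflag=fullblock | wc
# zfs send <backup> | head -c 150323855360 | wc

I don't know whether Solaris' dd has an equivalent of fullblock.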
Thanks,
Lucia