bug-ddrescue

From: Shahrukh Merchant
Subject: Re: Odd behaviour by ddrescue--seemed to work but did nothing visible (after 8 hours!)
Date: Sat, 28 Dec 2019 09:02:11 -0300
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1

Thanks Robert, good deducing! That was exactly it--I had done a previous clone for a friend's 500G drive, where the mapfile was not really going to be used since the drive was error-free, but it was still "lying around."

The funny thing is that a live Linux session normally keeps the mapfile only in RAM unless some other location is specified, so it does not carry over from one run to the next if you've rebooted in between. But I had recently installed the persistent version of System Rescue CD on a USB stick and got bitten by the fact that the mapfile DID stay around--I wasn't used to that!

I will just rerun the job from scratch to play safe.
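In concrete terms that just means discarding the leftover mapfile and repeating my original command with a fresh one--roughly the following, assuming the stale mapfile sits in the current working directory as it did in my case:

rm mapfile
ddrescue -f -v /dev/sda /dev/sdb mapfile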

But it brings up an interesting point about this feature. Yes, the fact that a previous mapfile is "continued" from one run to the next is, I understand, very much a feature of ddrescue and an essential part of how it works. However, it seems too easy to land in my situation and unknowingly reuse an old mapfile, which could be disastrous for the end result if one didn't realize it! You could think you had recovered a damaged disk when in fact, because of the old mapfile, many sectors were never even attempted even though they were recoverable.

It seems like there should be a warning (with a flag to override it) and some way for ddrescue to recognize that an old mapfile is being reused. E.g., include the source and/or destination disk ID in the mapfile (perhaps in a special comment field for backward compatibility) and warn if either or both are different. I'm sure there are many ways in which one could detect the most common cases of this error, perhaps a combination of them, and others may have ideas about how to implement this as well.
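In the meantime, a crude guard is possible outside ddrescue itself, since the mapfile already records the original invocation in its "# Command line:" comment (visible in my mapfile quoted below). Something along these lines--just an untested sketch, with the script name and checks purely illustrative, not anything ddrescue provides:

#!/bin/sh
# rescue-guard.sh SRC DST MAPFILE -- hypothetical wrapper, not part of ddrescue
SRC=$1; DST=$2; MAP=$3
if [ -e "$MAP" ]; then
    # ddrescue records the original command line as a comment in the mapfile
    OLDCMD=$(grep '^# Command line:' "$MAP")
    case "$OLDCMD" in
        *"$SRC"*"$DST"*) ;;  # same source and destination: resuming is probably intended
        *) echo "WARNING: existing $MAP was created by: $OLDCMD" >&2
           echo "It does not mention both $SRC and $DST; refusing to continue." >&2
           exit 1 ;;
    esac
fi
exec ddrescue -f -v "$SRC" "$DST" "$MAP"

One could also record the drives' serial numbers (e.g. from lsblk or smartctl) as extra comments, along the lines of the disk-ID idea above.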

Shahrukh

On 2019-12-26 9:13 AM, Robert Backhaus wrote:
You must have had an existing mapfile in the working directory, perhaps
from a previous recovery, that showed the first ~500G as recovered, so
ddrescue continued on from that point.

How to proceed from here depends on what was in that old mapfile. If it was
from a completed rescue, then all this run effectively did was start at
~500GB, so doing a new copy of just the skipped region, using the --size
parameter and a new mapfile, should complete the copy:

ddrescue -f -s 505G /dev/sda /dev/sdb first500.mapfile

If the mapfile wasn't from a completed run, then you would be missing
chunks from all across the disk. I don't know the best way to proceed from
there; perhaps starting afresh would be more reliable. As the destination
disk started off blank, you could use ddrescue's generate mode (-G) on the
new disk to create a mapfile from the areas that are still zeros, but that
would require reading the entire disk, which wouldn't be much faster than
redoing the copy.
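That would look roughly like the following--untested, and the mapfile name "regenerated.mapfile" is just a placeholder:

ddrescue -G /dev/sda /dev/sdb regenerated.mapfile    # build an approximate mapfile from what is already on sdb
ddrescue -f -v /dev/sda /dev/sdb regenerated.mapfile # then resume the copy using that mapfile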


On Thu, 26 Dec 2019 21:55 Shahrukh Merchant, <address@hidden>
wrote:

I cloned a 2T disk to another new disk of the same model and size. The
destination disk was brand new (not initialized or formatted or anything).
I used, as I have done any number of times before, the following:

ddrescue -f -v /dev/sda /dev/sdb mapfile

Two strange things happened:

1. When I returned to check after just a couple of minutes, the ddrescue
status showed "25% rescued"--the first 500G had already been copied, but
that is not possible! The rest of the copy took 7+ hours (which is the
normal time)--there is no way 25% could have been done in a couple of
minutes. ddrescue seemingly skipped 25% while claiming to have done it.
(Incidentally, 32% of the disk is unallocated--no partitions--but ddrescue
shouldn't know or care about that ...)

2. When the clone was completed (7+ hours later), the new disk showed NO
partitions at all. Something was certainly copied during those 7 hours,
but apparently not the MFT, and not another 25% of the disk content
either.

Here are the contents of the mapfile (they look normal to me ...) and the
lsblk output after the alleged clone operation (NOT what I was expecting).
No errors were reported.

# Mapfile. Created by GNU ddrescue version 1.24
# Command line: ddrescue -f -v /dev/sda /dev/sdb mapfile
# Start time:   2019-12-25 23:29:13
# Current time: 2019-12-26 06:55:48
# Finished
# current_pos  current_status  current_pass
0x1D1C1110000     +               1
#      pos       size  status
0x00000000 0x1D1C1116000  +

(Note the difference between 0x1D1C1110000 and 0x1D1C1116000, if that's
important ... 6000 vs 0000 at the end.)
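(For scale: 0x1D1C1116000 - 0x1D1C1110000 = 0x6000 = 24,576 bytes, i.e.
24 KiB, and 0x1D1C1116000 itself works out to 2,000,398,934,016 bytes, the
full 2T capacity, so presumably current_pos just marks the start of the
last block processed rather than the very end of the device.)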

# lsblk -oNAME,SIZE,FSTYPE,LABEL,MODEL

NAME     SIZE FSTYPE   LABEL       MODEL
loop0  788.8M squashfs
sda      1.8T                      WDC_WD20SPZX-00CRAT0
|-sda1  39.2M vfat
|-sda2  11.8G ntfs     RECOVERY
|-sda3   250G ntfs     OS
|-sda4  1010G ntfs     Data
sdb      1.8T                      WDC_WD20SPZX-22UA7T0

(I'm not showing sdc, which is the USB drive from which I was running
ddrescue via System Rescue CD.)

The lsblk output for /dev/sda and /dev/sdb is exactly the same before and
after the imaging--I was expecting sdb to show the same 4 partitions as
sda after the ddrescue operation, as it in fact has any number of other
times. I even opened the disk manager on another Windows machine to see if
it saw anything on the newly cloned disk, and Windows wanted to initialize
the disk (no MFT found).

For me, the 25% "instant" jump at the start is the key clue (I happened to
notice it only by chance because I checked shortly after starting the run;
there was absolutely no other indication of anything wrong), but I have no
idea why that happened or what went wrong.

So ... what's going on???

Thanks!

Shahrukh





