

From: edgar . soldin
Subject: Re: [Duplicity-talk] S3 ECONNRESET during restore results in SHA1 hash mismatch
Date: Wed, 4 May 2016 10:54:42 +0200
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:45.0) Gecko/20100101 Thunderbird/45.0

On 04.05.2016 05:15, Aphyr wrote:
> Hello, all!

hey Aphyr, please state your duply, duplicity, boto and python versions

> I've got a couple terabytes backed up to S3 via duply. I'm doing a disaster 
> recovery drill and trying to restore that data, but I can't make it more than 
> a few minutes/hours without hitting a (recoverable!) S3 network hiccup which 
> breaks the restore process and forces me to restart the restore from scratch. 
> Each time it breaks on a different file, so I know the issue is a network 
> fault, not that the files themselves are corrupt.
> 
> For example:
> 
> --- Start running command RESTORE at 21:41:53.892 ---
> Local and Remote metadata are synchronized, no sync needed.
> Last full backup date: Thu Oct 15 11:45:39 2015
> Download 
> s3://s3-us-west-2.amazonaws.com/my-bucket/b/duplicity-full.20151015T164539Z.vol26.difftar.gpg
>  failed (attempt #1, reason: SSLError: ('The read operation timed out',))
> Download 
> s3://s3-us-west-2.amazonaws.com/my-bucket/b/duplicity-full.20151015T164539Z.vol26.difftar.gpg
>  failed (attempt #2, reason: SSLError: ('The read operation timed out',))
> Download 
> s3://s3-us-west-2.amazonaws.com/my-bucket/b/duplicity-full.20151015T164539Z.vol26.difftar.gpg
>  failed (attempt #3, reason: SSLError: ('The read operation timed out',))
> Download 
> s3://s3-us-west-2.amazonaws.com/my-bucket/b/duplicity-full.20151015T164539Z.vol26.difftar.gpg
>  failed (attempt #4, reason: error: [Errno 104] Connection reset by peer)
> Invalid data - SHA1 hash mismatch for file:
>  duplicity-full.20151015T164539Z.vol26.difftar.gpg
>  Calculated hash: da39a3ee5e6b4b0d3255bfef95601890afd80709
>  Manifest hash: 71d69b04b6ed6aa75b604e4eecff51ab08a24cfe
> 
> Sometimes it breaks as early as vol3, other times it gets as far as vol745. 
> I've added
> 
> DUPL_PARAMS="$DUPL_PARAMS --num-retries=100 "
> 
> to my duply config, but it doesn't seem to make a difference.

--num-retries should work across all backends.

run duply w/ '--preview' and check that --num-retries is propagated to the 
duplicity command line.
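For the record, a quick way to check that looks like this ('myprofile' and the restore target are placeholders for your actual profile and path):

```shell
# Print the duplicity command duply would run, without executing it.
# 'myprofile' stands in for your duply profile name.
duply myprofile restore /tmp/restore-target --preview

# In the printed command line, confirm '--num-retries=100' is present;
# if it is missing, the DUPL_PARAMS line in ~/.duply/myprofile/conf
# is not being picked up.
```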

> I've also seen a suggestion that I download the full S3 archives to a local 
> directory, and restore from there. Sadly, I don't have enough free disk to 
> cache an additional 2+ TB.
> 
> I think connection-reset errors are always recoverable here; if Duply could 
> either resume restores, or just sleep and retry the download instead of 
> trying to verify incomplete files, I think this would work fine. Any 
> suggestions?
> 

probably should, but nobody has contributed that so far ;).. ede/duply.net
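For anyone tempted to contribute: the sleep-and-retry idea could be sketched roughly as below. This is not duplicity's actual download code, just an illustration; the callable, exception set, and backoff parameters are all assumptions.

```python
import time

def download_with_retry(download, retries=100, base_delay=1.0, max_delay=60.0):
    """Call download() until it succeeds, sleeping with capped
    exponential backoff between attempts, instead of failing the
    whole restore on a transient error such as ECONNRESET."""
    delay = base_delay
    for attempt in range(1, retries + 1):
        try:
            return download()
        except (ConnectionResetError, TimeoutError) as err:
            if attempt == retries:
                raise  # out of retries; surface the last error
            print(f"attempt #{attempt} failed ({err}), retrying in {delay:.0f}s")
            time.sleep(delay)
            delay = min(delay * 2, max_delay)
```

The key design point is that a partially downloaded volume is never handed to the hash check; only a completed download is verified, so transient resets cost time rather than aborting the restore.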


