
From: edgar . soldin
Subject: Re: [Duplicity-talk] Weird error message after incremental backup of large drive with a bunch of changed files
Date: Wed, 15 Mar 2023 13:16:31 +0100
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Thunderbird/102.6.1

hey Jacob,

On 15.03.2023 04:40, Jakob Bohm via Duplicity-talk wrote:
On 2023-03-14 23:48, edgar.soldin--- via Duplicity-talk wrote:

On 14.03.2023 21:54, Jakob Bohm via Duplicity-talk wrote:
Dear group,

hey Jakob,

I have set up some scripts to do various parts of a full system backup
via duplicity to a geographically close S3 bucket (AWS Stockholm),
however for the largest drive, I occasionally experience hangs/errors
near the end of each backup, with the progress display wobbling between
"stalled" and less than 30 minutes left (this time I observed as low as
16 minutes at one point).

Then, a day into the stall/short-ETA phase, I received the following
error message (bucket name, mountpoints etc. redacted):

Attempt of put Nr. 1 failed. S3UploadFailedError: Failed to upload
 An error occurred (RequestTimeTooSkewed) when calling the 
CreateMultipartUpload operation: The difference between the request time and 
the current time is too large.

sounds like S3 does not like for the upload to take that long?
Yeah, but the problem is what makes duplicity take so long to
upload that file, hence why I tried to identify the file
size with ls after the failure.
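As an aside, RequestTimeTooSkewed usually means the request's signing time differs from AWS's clock by more than 15 minutes, either from genuine clock drift or from a request that was prepared long before it was finally sent. A quick sketch to measure the drift (the eu-north-1 endpoint is taken from the netstat observation further down; it falls back to the local clock if the endpoint is unreachable):

```shell
# Fetch the Date header from the S3 regional endpoint and compare it
# with the local clock. S3 rejects requests skewed by more than 900 s.
server_date=$(curl -sI https://s3.eu-north-1.amazonaws.com/ \
              | tr -d '\r' | sed -n 's/^[Dd]ate: *//p')
# fall back to the local clock if the endpoint is unreachable
[ -n "$server_date" ] || server_date=$(date -Ru)
skew=$(( $(date -u +%s) - $(date -d "$server_date" +%s) ))
echo "clock skew: ${skew}s (S3 limit: 900s)"
```

If the skew stays near zero during a run, clock drift is out, and a request that sat around too long before being sent becomes the more likely suspect.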

anyway, to find out what is going on we actually need you to write the full
console log to a file and post it somewhere. scan it for sensitive
information you don't want to share before uploading, and obfuscate if needed.
The log is already in a file; the mail contents were extracted as the
likely most relevant parts.  Is verbosity notice not high enough?  Can
the file produced by the --log-file option help?

As I wrote, the failure occurs after days of processing, and not every
time, hence any procedure requiring a retry will take weeks.

we will need at least verbosity info.
you will probably need to disable `--progress` as it does not play well with logging.
I know, but I need it to know when it stops doing useful work. Anyway,
I can be pretty good at filtering such logs if need be.
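For reference, a run that captures the requested detail might look like this sketch (log path illustrative; the remaining options as in the command line quoted below):

```shell
# verbosity info for a useful log, --log-file to capture it,
# and --progress dropped so it cannot interfere with the output
duplicity incremental \
   --verbosity info \
   --log-file /duplicitypart/tmp/commonprefix-partname/duplicity-info.log \
   ...
```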

After the message, there were some small increases in the GB counter of the
progress bar.

ls report after killing duplicity:

-rw-r--r-- 1 root root 21336776 Mar 13 18:33 

sorry, that does not tell us anything.

It tells me that the file was way below the 2G supposedly causing trouble
in that bug report.

Currently invoking duplicity 1.2.1 (patched) with command line

duplicity incremental --name commonprefix-partname \
   --archive-dir /duplicitypart/archive/commonprefix-partname \
   --asynchronous-upload \
   --file-prefix commonprefix-partname_ \
   --tempdir /duplicitypart/tmp/commonprefix-partname \
   --verbosity notice \
   --progress \
   --log-file /duplicitypart/tmp/commonprefix-partname/log_20230311T20_38_52.log \
   --gpg-options '--homedir /configdir/.gnupg --compress-algo=bzip2' \
   --encrypt-secret-keyring /configdir/.gnupg/secret.gpg \
   --encrypt-key 1234567890ABCDEF1234567890ABCDEF12345678 \
   --sign-key 1234567890ABCDEF1234567890ABCDEF12345678 \
   --hidden-encrypt-key 1234567890ABCDEF1234567890ABCDEF12345678 \
   --exclude-other-filesystems \
   --full-if-older-than 3M \
   --s3-use-multiprocessing \
   --numeric-owner \
   /partname \

probably not error relevant, but some notes as per the man page:

1. you mention AWS Stockholm, so you probably need `--s3-endpoint-url` with 
boto3. see http://duplicity.us/stable/duplicity.1.html#a-note-on-amazon-s3
2. `--s3-use-multiprocessing` does nothing on boto3; multipart chunking is activated by default.
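A sketch of point 1 (bucket name and prefix are illustrative, not from the thread) — pinning the endpoint and region explicitly with the boto3 backend:

```shell
# explicit regional endpoint for an eu-north-1 (Stockholm) bucket,
# as suggested by the man page's S3 note
duplicity incremental \
   --s3-endpoint-url https://s3.eu-north-1.amazonaws.com \
   --s3-region-name eu-north-1 \
   ... \
   /partname boto3+s3://bucketname/commonprefix-partname
```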

Unfortunately, whoever wrote the manpage and bug report comment trail
was really bad at telling the difference between boto2 and boto3.

Since all the previous elements were already uploaded, I strongly
suspect that boto3 identifies the appropriate URL using appropriate AWS
APIs, as this already goes beyond the outdated assumption that AWS has
only 2 locations worldwide.

netstat during another running backup shows that there is indeed a
connection made to s3-r-w.eu-north-1.amazonaws.com in the proper
AWS region.

OS: Debian GNU/Linux 11.7 (bullseye) with Python 3.9.2, python-boto3 version 

I suspect a relationship with issue #254, and as you see, I have incorporated 
some of the workarounds into the command line.

as it dies with the par2 file, i doubt that the problem is the size of your volumes.
The behavior is indistinguishable from that bug, thanks to the complete
lack of useful error and progress messages.  Hence why I was checking
the file size manually.

What are the appropriate troubleshooting steps?

as said, a proper log file for a start. you can send it personally, if you 
don't wanna share it with the list.


unfortunately we can't help you, if you don't provide the information requested.

sending baseless presumptions and claiming a "complete lack of error messages"
while having verbosity set to a minimum will neither motivate anyone here to
help you nor shed any light on your issue.

good luck ..ede/duply.net
