duplicity-talk

Re: [Duplicity-talk] webdavs and large sync volume failing


From: address@hidden
Subject: Re: [Duplicity-talk] webdavs and large sync volume failing
Date: Tue, 23 Feb 2016 20:28:04 +0100

Hi,
I tested the original lftp+webdavs backend from the latest available duplicity version.

The list command is still not working; it does not find any existing backup.

Edgar, I have sent a log file to your email address.

Gruss,
Wolle
 
Am 23.02.2016 um 15:43 schrieb address@hidden:

yo Wolle,

forget about 'AttributeError: 'module' object has no attribute 'ssl_cacert_path'' - that's my bad, as i forgot that it depends on other changes in my devel branch.

how do your self-installed versions work out so far?

..ede

On 22.02.2016 21:02, address@hidden wrote:
Hi,

short update on the 'module' object has no attribute 'ssl_cacert_path' error

I got duplicity back to work by downloading a fresh duplicity tar from http://duplicity.nongnu.org/
and manually removing all files in /usr/local/bin and /usr/local/lib/python2.7.

After a fresh 'python setup.py install', duplicity worked again with the lftp backend on smaller test repositories.

For completeness, I retested the procedure above, but this time replaced the lftpbackend.py before the install
=> this resulted again in the ssl_cacert_path error

Gruss,
wolle


Am 21.02.2016 um 18:30 schrieb address@hidden:

Hi Edgar,

no reason to apologize, thanks for taking this up again.

to your questions:

A. where did you get your duplicity? did you build it yourself? which repository?

first I used the original raspbian duplicity and duply packages (I guess these come from the Debian jessie arm branch).
This created the memory problems with the webdavs backend.
I then upgraded to the latest duplicity/duply by downloading directly from the duply and duplicity project pages and then manually compiling/installing.


B. how much ram are we talking about in your raspberrypi? (never played around w/ it)

The raspberrypi has 512MB ram (of which 434MB are really available) and a 100MB swap file.

The tmp folder is on a harddisk

1. webdav & oom-killing

I will now re-run a full backup with the latest self-compiled/installed duplicity version and test whether the memory problem re-appears (this will take a while, as the job runs approx. 24 hours).
I will come back on this

2. lftp not listing


I tried to create a new log using your modified lftpbackend.py, but it fails with the ssl_cacert_path error (output below).
The file /etc/duplicity/cacert.pem exists.
Surprisingly, the error remains even when reverting back to the original lftpbackend.py.
The webdavs backend is still working ok for list command.

What I did:
- replaced the lftpbackend.py in the extracted duplicity archive
- installed it via 'python setup.py install'
- ran the command: duply photo_2010 list

Did I do something wrong when replacing the file?
I am pretty sure this error was not there when I ran the same duply command with the lftp backend some weeks back.


——— output when running the duply command ——— 
duply photo_2010 list > /var/log/duplydebug
Traceback (most recent call last):
File "/usr/local/bin/duplicity", line 1532, in <module>
  with_tempdir(main)
File "/usr/local/bin/duplicity", line 1526, in with_tempdir
  fn()
File "/usr/local/bin/duplicity", line 1364, in main
  action = "" class="">File "/usr/local/lib/python2.7/dist-packages/duplicity/commandline.py", line 1108, in ProcessCommandLine
  globals.backend = backend.get_backend(args[0])
File "/usr/local/lib/python2.7/dist-packages/duplicity/backend.py", line 223, in get_backend
  obj = get_backend_object(url_string)
File "/usr/local/lib/python2.7/dist-packages/duplicity/backend.py", line 209, in get_backend_object
  return factory(pu)
File "/usr/local/lib/python2.7/dist-packages/duplicity/backends/lftpbackend.py", line 111, in __init__
  if globals.ssl_cacert_path:
AttributeError: 'module' object has no attribute 'ssl_cacert_path'

18:04:55.610 Task 'LIST' failed with exit code '30'.

——— output end —— 
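For what it's worth, the traceback is consistent with ede's explanation: the replacement lftpbackend.py reads an option that the installed globals module never registers. A minimal stand-alone sketch of that failure mode (using a dummy module, not duplicity's real code):

```python
import types

# Illustrative sketch of the crash above, not duplicity's actual code:
# "opts" stands in for the installed duplicity.globals module.
opts = types.ModuleType("globals")

# The modified lftpbackend.py reads globals.ssl_cacert_path, but the
# installed globals module never defines that option, so the plain
# attribute lookup raises AttributeError, as in the traceback:
try:
    opts.ssl_cacert_path
except AttributeError as exc:
    print(exc)  # attribute lookup fails, matching the traceback above

# A defensive lookup with getattr() and a default would avoid the crash:
cacert = getattr(opts, "ssl_cacert_path", None)
assert cacert is None  # option absent -> fall back to None
```

That is only an illustration of why replacing one file is not enough when the option is registered elsewhere in ede's devel branch, as he notes above.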


Gruss,
Wolle



Am 19.02.2016 um 12:35 schrieb address@hidden:

Wolle,

sorry, lost you in my email tsunami!
there are obviously several issues in the mix here. let's try to tackle them in order below

first some more general questions:

A. where did you get your duplicity? did you build it yourself? which repository?
B. how much ram are we talking about in your raspberrypi? (never played around w/ it)

usually memory issues in the past were caused by maintainers fiddling w/ our gpginterface, that's why i am asking where you've got your duplicity from to check that.

What you can try.

1. webdav & oom-killing

run duplicity w/ a command you know caused the issue in the past and, in a second terminal, observe memory usage and processes. try to find out which (sub)processes stuff your memory.
also make sure that /tmp, or the folder you gave for temp files, is not mounted on an in-memory file system.
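one way to do that watching, sketched here as a rough python helper (assumption: a Linux /proc filesystem, as on raspbian; the fragments 'duplicity' and 'gpg' are only examples), is to sample the resident set size of matching processes in a loop:

```python
import os
import re

def rss_by_name(name_fragment):
    """Return {pid: resident memory in kB} for processes whose
    command line contains name_fragment (e.g. 'duplicity', 'gpg')."""
    result = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/cmdline" % pid, "rb") as f:
                cmdline = f.read().replace(b"\0", b" ").decode(errors="replace")
            if name_fragment not in cmdline:
                continue
            with open("/proc/%s/status" % pid) as f:
                match = re.search(r"VmRSS:\s+(\d+)\s+kB", f.read())
            if match:  # kernel threads have no VmRSS line and are skipped
                result[int(pid)] = int(match.group(1))
        except OSError:
            continue  # process exited between listing and reading
    return result

# e.g. sample every few seconds while the backup runs:
# print(rss_by_name("duplicity"), rss_by_name("gpg"))
```

whichever entry keeps growing until the oom-killer fires is the process to look at.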

2. lftp not listing

the lftp backend probably has an issue in the listing code. please run the listing again in max. verbosity, but back up/replace duplicity/backends/lftpbackend.py w/ the attached copy beforehand.

the resulting -v9 log output might tell us why the returned list is empty.

..ede/duply.net

On 18.02.2016 21:51, address@hidden wrote:
Hi,
no one has an idea what this problem could be, or where I could continue troubleshooting?

Thanks for the help.

BR,
Wolle


Am 06.02.2016 um 09:09 schrieb address@hidden:

Hi,

I have reproduced the error with verbosity 9 and sent the logfiles to ede/duply.net.
As stated in that mail, it looks like this is a memory issue (/var/log/messages shows that the oom-killer kicks in).

In parallel I have now upgraded my system to 
duplicity 0.7.06
duply 1.11.1

and re-ran the backup with the backend lftp+webdavs as proposed below. This works fine without errors.

but

If I check the status of this backup via the same lftp+webdavs backend, it shows no backup present.
If I change back to the webdavs backend for the status command, I can see the one successful full backup I created with the lftp+webdavs backend.
I tested the same with a small backup repository; there the lftp+webdavs backend works fine both for the backup and the status command.

A quick investigation shows that the status command for the big backup (roughly 14GB of photos) has an issue when retrieving the file info from the webdav server: no data is returned. In the log below there should be a long file list after STDOUT:, but it is empty.

——— snip from log —— 
CMD: lftp -c 'source '/home/backup/tmp/duplicity-uzDqze-tempdir/mkstemp-yXoZe9-1'; cd 'backup/photo/1971-2005/' || exit 0; ls'
Reading results of 'lftp -c 'source '/home/backup/tmp/duplicity-uzDqze-tempdir/mkstemp-yXoZe9-1'; cd 'backup/photo/1971-2005/' || exit 0; ls''
STDERR:
---- Resolving host address...
---- 1 address found: xxxxxxx
---- Connecting to sd2dav. xxxxxxx port 443
---- Sending request...
---> PROPFIND / HTTP/1.1
---> Host: sd2dav. xxxxxxx
---> User-Agent: lftp/4.6.0
---> Accept: */*
---> Depth: 0
---> Authorization: Basic xxxxxxx
---> Connection: keep-alive
---> 
Certificate: xxxxxxx
Trusted
<--- HTTP/1.1 207 Multi-Status
<--- Date: Sat, 06 Feb 2016 06:55:09 GMT
<--- Server: Apache
<--- ETag: "xxxxxxx ="
<--- Content-Length: 1857
<--- Content-Type: text/xml; charset="utf-8"
<--- Vary: Accept-Encoding
<--- Keep-Alive: timeout=3, max=100
<--- Connection: Keep-Alive
<--- 
---- Receiving body...
---- Hit EOF
---- Closing HTTP connection
---- Connecting to sd2dav. xxxxxxx port 443
---- Sending request...
---> PROPFIND /backup/photo/1971-2005/ HTTP/1.1
---> Host: sd2dav. xxxxxxx
---> User-Agent: lftp/4.6.0
---> Accept: */*
---> Depth: 0
---> Authorization: Basic xxxxxxx
---> Connection: keep-alive
---> 
Certificate: xxxxxxx
Trusted
<--- HTTP/1.1 207 Multi-Status
<--- Date: Sat, 06 Feb 2016 06:55:19 GMT
<--- Server: Apache
<--- ETag: "xxxxxxx ="
<--- Content-Length: 1368
<--- Content-Type: text/xml; charset="utf-8"
<--- Vary: Accept-Encoding
<--- Keep-Alive: timeout=3, max=100
<--- Connection: Keep-Alive
<--- 
---- Receiving body...
---- Hit EOF
---- Closing HTTP connection
---- Connecting to sd2dav. xxxxxxx port 443
---- Sending request...
---> GET /backup/photo/1971-2005/ HTTP/1.1
---> Host: sd2dav. xxxxxxx
---> User-Agent: lftp/4.6.0
---> Accept: */*
---> Authorization: Basic xxxxxxx
---> Connection: keep-alive
---> 
Certificate: xxxxxxx
Trusted
<--- HTTP/1.1 200 OK
<--- Date: Sat, 06 Feb 2016 06:55:20 GMT
<--- Server: Apache
<--- Last-Modified: Mon, 01 Feb 2016 19:44:00 GMT
<--- ETag: "xxxxxxx ="
<--- Accept-Ranges: bytes
<--- Content-Length: 0
<--- Content-Type: text/html; charset="UTF-8"
<--- Vary: Accept-Encoding
<--- Keep-Alive: timeout=3, max=100
<--- Connection: Keep-Alive
<--- 
---- Receiving body...
---- Received all
---- Closing HTTP connection

STDOUT:

Local and Remote metadata are synchronized, no sync needed.

——— snip end from log ——  
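The empty STDOUT is the crucial part: the backend builds its file list from the lines that lftp's 'ls' prints, so an empty body yields an empty list and duplicity concludes no backup is present. An illustrative parser showing that chain (an assumption about the general shape of such listing code, not duplicity's actual implementation):

```python
def parse_lftp_ls(stdout):
    """Take the last whitespace-separated field of each non-empty line
    as the filename, as a long-listing parser typically would."""
    files = []
    for line in stdout.splitlines():
        parts = line.split()
        if parts:
            files.append(parts[-1])
    return files

# A normal long listing yields filenames:
listing = "-rw-r--r-- 1 user user 1857 Feb 06 06:55 duplicity-full.20160201.manifest.gpg"
print(parse_lftp_ls(listing))

# The empty STDOUT captured above yields an empty list,
# which duplicity reports as "no backup present":
print(parse_lftp_ls(""))  # []
```

So the question is why the server's final GET returns Content-Length: 0 for the big directory while the small one lists fine.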

Am 02.02.2016 um 21:27 schrieb address@hidden:

On 02.02.2016 21:18, address@hidden wrote:
Hi,

I am trying to use duplicity and duply to back up approx. 60GB of data to webdavs storage.

The script is running on a raspberrypi with raspbian jessie.

Small backups are working.
I tried to run this backup in one go and it failed; I then split it into 15GB chunks and it failed again with exit code 137.

Any ideas what to do / where to look for?

can you run duplicity in max. verbosity '-v9' and send me the _complete_ output privately?

Is the webdavs access not stable enough for such backup?

should make no difference. the backup is split into volumes of the same size anyway.

Is it maybe a performance issue with the raspberry

btw I started the backup script via a cron job, it was then running several hours before it failed.


what's your duplicity version? with the latest duplicity you can install lftp and use that as an alternative webdav backend via lftp+webdav://

..ede/duply.net





_______________________________________________
Duplicity-talk mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/duplicity-talk

<lftpbackend.py>









