From: edgar . soldin
Subject: Re: [Duplicity-talk] (Option to) Cache retrieved volumes between (fetching/restoring) runs locally?!
Date: Fri, 19 Nov 2010 11:25:24 +0100
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.12) Gecko/20101027 Thunderbird/3.1.6

On 18.11.2010 21:38, Daniel Hahler wrote:
Hello,

It would be much easier to simply mirror your repository to a
local path and restore from there.
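
(For reference, a rough sketch of what such a mirror could look like -
host and paths are only placeholders here, assuming the remote
repository is reachable via rsync:)

  # pull the whole remote repository down once
  rsync -av user@host:/backups/duplicity/ /local/mirror/

  # restores can then run against the local copy via a file:// URL
  duplicity file:///local/mirror /tmp/restored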

It would be ~30GB, which would take a while to mirror, apart from
the fact that you might not have the space available.

if you are lacking the space, you probably have no space for the
cache either.

You make a good point here, of course, but not the whole backup gets
used when restoring only certain files from it (according to duplicity's
design and the log).
Otherwise it would have been much slower anyway.

Point is: it accumulates over time. And the current design intentionally works
on only one backup volume at a time.


Can't you just fetch/restore all files/folders in one go?

Does "duply fetch" / duplicity support fetching multiple files? It
does not look so from the man page.

Indeed it doesn't. I always expected duplicity to work with
exclude/include on restoring as well. It doesn't. Currently you can
obviously only restore everything or one file/folder. For the latter, it is
the folder only, without its contents ;).
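
(For completeness, the single-path restore we are talking about looks
roughly like this - profile name and paths are placeholders, and the
plain duplicity variant is from memory, so check the man page:)

  # duply: fetch one file/folder as it was one day ago
  duply profile fetch path/to/folder /tmp/restored/folder 1D

  # roughly the same with plain duplicity
  duplicity restore -t 1D --file-to-restore path/to/folder \
      sftp://user@host/backups /tmp/restored/folder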

While I have not verified it (and don't know if you're joking), it
seemed like the contents of a folder get restored completely - at least
the files contained therein.

_I tried it and the result was an empty folder_. No joke here.


You could of course restore all to a temp folder and move what you
need to wherever needed. That's what I did whenever I had to
restore.

Yes, that might have worked better after all - given that you cannot provide
"includes"/patterns yet.

You can't. But you are welcome to hack the support into duplicity ;) - no joke.
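
(In practice that workaround boils down to something like this - only a
sketch, with made-up paths, assuming I remember duply's 'restore'
command correctly:)

  # restore the complete backup, as of one day ago, into a scratch folder
  duply profile restore /tmp/full-restore 1D

  # then copy back only what is actually needed
  cp -a /tmp/full-restore/vz/private/101 /vz/private/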


If you're referring to getting only the meat of it, that would have
been the root directory of all virtual containers (which is >90% of
the whole backup).

When it is 90% of the backup, downloading the last 10% really does not
outweigh the speed improvement once you restore from your local
'cache'.

It was not the whole 90% being downloaded now, but only certain files out
of that 90% (below /vz/private, which is where my containers are).

Eventually I think it might be more important to make restore respect
in/excludes, as a user would expect it to, instead of working around
it by looping over / caching the backup.

Would you create a feature request on launchpad for that?

Here you go: https://bugs.launchpad.net/duplicity/+bug/677177

Thank you .. ede/duply.net



Thanks,
Daniel


regards ..ede/duply.net



Thanks, Daniel

On 09.11.2010 21:53, Daniel Hahler wrote:
Hello,

I would like to be able to cache retrieved files from the
backend locally between multiple runs of duplicity, e.g. via
some config or command line option.

Use case: having accidentally overwritten a lot of my
(virtual) container files, I've used the following to restore
the previous state:

  for i in $=VCIDS; do
    b=path/to/files
    /bin/rm -rf /$b*
    duply profile fetch $b /$b 1D
    duply profile fetch ${b}.d /${b}.d 1D
  done

This adds up to 60+ runs of duplicity (2 runs of duplicity per
container for 30+ containers, one for a single file, the other for a
directory), and when looking at it with "--verbosity 9" it
looks like a lot of the same volumes (with a size of 50M in my
case) are downloaded every time.

I think it would speed up this (particular) use case dramatically
if these files were cached locally.

I could imagine configuring something like "keep files for X
hours": when duplicity gets run and files are older than
this, they get cleaned up on shutdown. When a file is accessed, it
would get touched to reset the timer.

However, there should also be a maximum number of files to
cache, since this might easily fill your local volume
otherwise.
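
(Just to illustrate the "keep for X hours" idea - nothing like this
exists in duplicity today, and the cache directory is made up:)

  # hypothetical cache location, lifetime X=12 hours (720 minutes);
  # accessed volumes would get touched, so mtime acts as the timer
  find ~/.cache/duplicity/volumes -type f -mmin +720 -delete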

I am thinking about caching the files encrypted (just as on the
remote side), but maybe caching decrypted files would make
sense, too?

Obviously, this should take into account whether this is a remote
backup (maybe by looking at the transfer rate of the files?!),
and not pollute the cache if the backend is as fast as local
transfers would be.

What do you think?


Cheers, Daniel


_______________________________________________
Duplicity-talk mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/duplicity-talk


