From: edgar . soldin
Subject: Re: [Duplicity-talk] (Option to) Cache retrieved volumes between (fetching/restoring) runs locally?!
Date: Tue, 09 Nov 2010 23:44:33 +0100
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20101027 Thunderbird/3.1.6

It would be much easier to simply mirror your repository to a local path
and restore from there.
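
For instance (a rough sketch; the host, paths, and an rsync-reachable
backend are assumptions, not your actual setup):

  # mirror the remote repository once ...
  rsync -av user@backup.example.com:/backups/profile/ /var/tmp/mirror/
  # ... then let duplicity restore from the local copy
  duplicity restore --time 1D file:///var/tmp/mirror /restore/target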

Can't you just fetch/restore all files/folders in one go?
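
E.g. a single full restore instead of many per-path fetches (sketch; the
target path is a placeholder):

  # one duplicity run restores the whole backup as of one day ago
  duply profile restore /tmp/restore-all 1D
  # then copy the needed files/folders back into place from there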


On 09.11.2010 21:53, Daniel Hahler wrote:
> Hello,
> I would like to be able to cache retrieved files from the backend
> locally between multiple runs of duplicity, e.g. via some config or
> command line option.
> Use case: having accidentally overwritten a lot of my (virtual)
> container files, I used the following to restore the previous state:
>   for i in $=VCIDS; do            # zsh: $= word-splits $VCIDS
>     b=path/to/files               # per-container path (placeholder)
>     /bin/rm -rf /$b*              # wipe the damaged files first
>     duply profile fetch $b /$b 1D         # restore the single file as of 1 day ago
>     duply profile fetch ${b}.d /${b}.d 1D # restore the matching directory
>   done
> This adds up to 60+ runs of duplicity (2 runs per 30+ containers, one
> for a single file, the other for a directory), and when watching it
> with "--verbosity 9" it looks like many of the same volumes (50M each
> in my case) are downloaded every time.
> I think it would speed up this (particular) use case dramatically if
> these volumes were cached locally.
> I could imagine configuring something like "keep files for X hours":
> when duplicity runs and cached files are older than that, they get
> cleaned up on shutdown.
> When a cached file is accessed, it would be touched to reset its timer.
> However, there should also be a maximum number of files to cache, since
> otherwise this might easily fill the local volume. [See the sketch
> below this quoted mail.]
> I am thinking about caching the files encrypted (just as on the remote
> side), but maybe caching decrypted files would make sense, too?
> Obviously, this should take into account whether this is a remote
> backup (maybe by looking at the transfer rate of the files?), and not
> pollute the cache if the backend is as fast as a local transfer would be.
> What do you think?
> Cheers,
> Daniel
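
The expiry scheme sketched above could look roughly like this in plain
shell (the cache directory, keep time, and volume name are placeholders
only; no such duplicity option exists today):

  CACHE=~/.cache/duplicity-volumes
  KEEP_HOURS=24                     # "keep files for X hours"
  # on access: touch a cached volume to reset its timer
  touch "$CACHE/duplicity-full.20101109T120000Z.vol1.difftar.gpg"
  # on shutdown: drop volumes untouched for longer than KEEP_HOURS
  find "$CACHE" -type f -mmin +$(( KEEP_HOURS * 60 )) -delete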
