duplicity-talk


From: Zach Adams
Subject: Re: [Duplicity-talk] Duplicity using 1.5 TB storage and losing incremental backups?
Date: Sun, 31 May 2015 13:08:13 -0500

On Sun, May 31, 2015 at 12:35 PM,  <address@hidden> wrote:
> On 31.05.2015 18:54, Remy van Elst wrote:
>>
>>
>> On 05/31/2015 06:31 PM, address@hidden wrote:
>>> On 31.05.2015 18:23, Remy van Elst wrote:
>>>>
>>>>
>>>> On 05/31/2015 06:18 PM, address@hidden wrote:
>>>>> On 31.05.2015 17:51, Remy van Elst wrote:
>>>>>>
>>>> [...]
>>>>>>
>>>>
>>>>> BTW, 74k files is quite a lot; you might want to consider
>>>>> raising your volume size.
>>>>
>>>>
>>>> What is the benefit of that? And what would be a good size? (It
>>>> is now 25 MB.)
>>>>
>>
>>> Primarily, less handling overhead for duplicity, hence a slightly
>>> faster backup. And of course, fewer files on the backend; some
>>> backends limit the number of files allowed.
>>
>>> Which size is best for you depends on the backend's constraints
>>> and your local temp space.
>>
>> Can I just change the volsize with an already existing chain? What
>> effects does that have?
>>
>
> I don't see why not. The volsize should only affect creating volumes and have 
> no effect when reading them.
>
> If you feel adventurous, try it. If not, be safe and use the new volsize for 
> a new chain, starting with a new full backup.
>
> ..ede/duply.net
>
> _______________________________________________
> Duplicity-talk mailing list
> address@hidden
> https://lists.nongnu.org/mailman/listinfo/duplicity-talk

I've tried it before; it works as expected, leaving existing files
untouched while using the larger volume size for new incremental and full
backups.
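For reference, changing the volume size is just a matter of passing a
different --volsize (a size in MB; duplicity's default is 25) on subsequent
runs. A minimal sketch, with hypothetical source path and backend URL:

```shell
# Sketch only: the source directory and sftp target below are placeholders,
# not taken from this thread.

# Subsequent runs with a larger --volsize leave the existing chain's
# volumes untouched; only newly written volumes use the new size.
duplicity --volsize 200 /home/user sftp://backup@example.com/backups

# The conservative option from the thread: start a fresh chain at the
# new size by forcing a new full backup.
duplicity full --volsize 200 /home/user sftp://backup@example.com/backups
```

With 25 MB volumes, a backup on the order of 1.5 TB ends up as tens of
thousands of files on the backend, which matches the ~74k files mentioned
above; raising the volume size mainly trades backend file count against
local temp space per volume.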


