
Re: [Duplicity-talk] Fwd: AssertionError on every attempt

From: Rupert Levene
Subject: Re: [Duplicity-talk] Fwd: AssertionError on every attempt
Date: Wed, 10 Jun 2015 13:43:44 +0100

Here's a revised pydrivebackend.py. The changes are hopefully:

(1) querying on individual filenames where possible, which is much
faster than the old method

(2) warning if a filename used by duplicity is shared by several files
in the backup folder

(3) overwrite on upload if there is (at least one) existing file with
the same filename

(4) deletion of a non-existing file will do nothing; I don't know what
the old code would have done.
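
A minimal sketch of changes (2) and (4) above, assuming hypothetical helpers `ids_by_name(name)` (returning every Drive file id carrying that title) and `delete_id(fid)` (these names are placeholders, not the actual backend methods):

```python
import warnings

def delete_by_name(ids_by_name, delete_id, name):
    """Delete every file with this title; warn on duplicates, no-op if absent."""
    ids = ids_by_name(name)
    if len(ids) > 1:
        # change (2): duplicity filenames should be unique in the backup folder
        warnings.warn("filename %s is shared by %d files" % (name, len(ids)))
    for fid in ids:
        # change (4): an empty list means nothing to delete, so no error
        delete_id(fid)
```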


On 9 June 2015 at 19:23,  <address@hidden> wrote:
> On 09.06.2015 18:54, Rupert Levene wrote:
>> On 9 June 2015 at 16:56,  <address@hidden> wrote:
>>> still smells hackish to me. an exception during deletion can have many
>>> causes and doesn't guarantee that no other instances of that file remain
>>> on the backend.
>> Agreed, using an exception isn't great. Maybe the following would be
>> better at the top of _put?
>>         while self.id_by_name(remote_filename) != '':
>>             self._delete(remote_filename)
> the problem there would be that it is an endless loop, if for some reason the 
> delete does not go through. read further below
>>> an approach like
>>> 1. list before upload
>>> 2. if one or more instances already, delete all
>>> 3. list again and raise error if there are still instances
>>> would be more costly but also more secure. no?
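
The three-step approach above could be sketched like this (hypothetical `list_ids`/`delete_id` helpers standing in for the backend's calls, not the shipped code):

```python
def clear_existing(list_ids, delete_id, name):
    """List, delete all matches, list again, and raise if any survive."""
    for fid in list_ids(name):      # steps 1 and 2: list, then delete all
        delete_id(fid)
    if list_ids(name):              # step 3: verify nothing is left
        raise RuntimeError("existing copies of %s could not be deleted" % name)
```

Unlike the `while` loop above, this cannot spin forever: a delete that silently fails surfaces as an error on the second listing.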
>> Sounds good. Your other email about querying on a specific filename
>> should greatly reduce the extra cost. FilesList calls can be very very
>> slow, with each call taking several minutes if there are lots of files
>> in the drive backup folder. Drive seems to throttle such requests to
>> something like 60 KB/s for me, whereas straight file transfers run at
>> more than 5MB/s. So changing id_by_name and _query to avoid a full
>> FilesList call would be very useful.
>> BTW, FilesList can be made somewhat quicker (and use considerably less
>> memory) by restricting the fields requested:
>>             ret = self.drive.ListFile({'q': "'" + self.folder + "' in parents",
>>                 'fields': 'items(title,id,fileSize),nextPageToken'}).GetList()
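
The per-title query suggested here might look roughly like this (a sketch assuming the Drive v2 query syntax that PyDrive's `ListFile` passes through; `drive` and `folder` stand in for the backend's attributes):

```python
def id_by_name(drive, folder, filename):
    # Ask Drive for just this title inside the backup folder, instead of
    # listing the whole folder and filtering client-side.
    query = ("title = '%s' and '%s' in parents and trashed = false"
             % (filename, folder))
    matches = drive.ListFile({'q': query,
                              'fields': 'items(title,id,fileSize)'}).GetList()
    return matches[0]['id'] if matches else ''
```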
>>> reading
>>> http://bazaar.launchpad.net/~duplicity-team/duplicity/0.7-series/view/head:/duplicity/backends/pydrivebackend.py
>>> it looks like creating new file instances with the same name is possible by 
>>> design. reading here
>>> http://pythonhosted.org/PyDrive/filemanagement.html#upload-and-update-file-content
>>> suggests that "overwriting" a file would be retrieving the existing file 
>>> and SetContentFile() on the object again eg. something like
>>> """ overwrite a possibly existing failed upload or create a new file """
>>> id = self.id_by_name(remote_filename)
>>> if id:
>>>     drive_file = self.drive.CreateFile({'id': id})
>>> else:
>>>     drive_file = self.drive.CreateFile({'title': remote_filename,
>>>         'parents': [{"kind": "drive#fileLink", "id": self.folder}]})
>>> drive_file.SetContentFile(source_path.name)
>>> drive_file.Upload()
>> This looks like a good idea, in conjunction with the approach above:
>> if there are files with the same filename, first delete all but one
>> and then update the unique file remaining; otherwise upload a new
>> file. As an added bonus, I imagine drive would keep revision history
>> for any updated files.
> i'd suggest to use only this. when the put routine always checks for an
> existing file and reuses its id, there will be no multiple instances on the
> backend anymore, hence the whole deletion step becomes obsolete.
> ..ede/duply.net
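
Putting that suggestion together, the put routine might look roughly like this (a sketch against PyDrive's `CreateFile`/`SetContentFile`/`Upload`; the helper names and parameters are placeholders, not the shipped backend):

```python
def put(drive, folder, id_by_name, remote_filename, source_path):
    existing = id_by_name(remote_filename)
    if existing:
        # reuse the existing id: Drive updates the file in place,
        # so duplicate instances can never accumulate
        drive_file = drive.CreateFile({'id': existing})
    else:
        drive_file = drive.CreateFile({
            'title': remote_filename,
            'parents': [{'kind': 'drive#fileLink', 'id': folder}]})
    drive_file.SetContentFile(source_path)
    drive_file.Upload()
```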

Attachment: pydrivebackend.py
Description: Text Data

Attachment: pydrivebackend.patch
Description: Text Data
