Re: [Duplicity-talk] amazon s3 speed

From: Peter Schuller
Subject: Re: [Duplicity-talk] amazon s3 speed
Date: Mon, 5 Jul 2010 08:45:25 +0200

> I know this depends on so many things, but I was experimenting with a
> full dump today of a 500gb partition (no encryption) and observing the
> progress. It seemed to be creating 27mb chunks on s3, and after 20
> minutes had created about 50 of them.
> By my back of the envelope guess that would mean a full dump was going
> to be complete in about 5 or 6 days, depending on whether or not my
> xterm maintained its connection to the command line for that long !

Individual transfers in/out of S3 (at least from outside, i.e., not
from within EC2) tend to be limited in throughput per connection.
That is probably due in part to the usual TCP rate-limiting
mechanisms, though I suspect something else may be at play as well.

In my testing, the best way to gain speed is to increase concurrency.
With concurrent transfers you can reach very high aggregate speeds
(I once tested speeds up to ~ 1 gbit, after which local networking
became the bottleneck).
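To illustrate the idea, here is a minimal sketch of running several
uploads at once with a thread pool. The upload_volume helper is
hypothetical, standing in for the real S3 PUT call a backend would
make; the point is only that per-connection limits then apply to each
transfer independently, so aggregate throughput scales:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_volume(name):
    # Hypothetical stand-in for the real S3 PUT; in practice this is a
    # network call whose throughput is limited per connection.
    return "uploaded " + name

volumes = ["duplicity-full.vol%d.difftar.gpg" % i for i in range(1, 6)]

# Run up to 4 uploads concurrently instead of one after another.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(upload_volume, volumes))
```

With real network transfers, four concurrent connections like this can
approach four times the per-connection throughput, until something
else (disk, CPU, local network) becomes the bottleneck.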

Unfortunately, duplicity only supports uploading one volume while
constructing the next, and does not support arbitrary concurrency in
the backend I/O. I started work towards this (--asynchronous-upload
being the first step), but I have not had the time to finish it (or
to work on duplicity at all in quite a while). But I suspect it would
be the best method of allowing duplicity to really use a good chunk
of your available bandwidth.
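The --asynchronous-upload behavior described above amounts to a
single-slot producer/consumer pipeline: build the next volume while
the previous one uploads. A minimal sketch, with hypothetical
build_volume/upload_volume helpers in place of the real duplicity
internals:

```python
import queue
import threading

def build_volume(i):
    # Hypothetical: construct the next backup volume on disk.
    return "vol%d" % i

def upload_volume(vol, log):
    # Hypothetical: push a finished volume to the backend.
    log.append(vol)

def run_backup(n_volumes):
    pending = queue.Queue(maxsize=1)  # one volume "in flight" at a time
    uploaded = []

    def uploader():
        while True:
            vol = pending.get()
            if vol is None:  # sentinel: no more volumes
                break
            upload_volume(vol, uploaded)

    t = threading.Thread(target=uploader)
    t.start()
    for i in range(1, n_volumes + 1):
        vol = build_volume(i)  # construct the next volume...
        pending.put(vol)       # ...while the previous one uploads
    pending.put(None)
    t.join()
    return uploaded
```

Generalizing from this (one upload slot) to arbitrary concurrency in
the backend I/O would mean a larger queue and several uploader
threads, which is essentially the unfinished work referred to above.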

/ Peter Schuller
