
Re: [Duplicity-talk] S3 connection timeout


From: Greg Heartsfield
Subject: Re: [Duplicity-talk] S3 connection timeout
Date: Mon, 29 Oct 2007 22:37:25 -0500
User-agent: Mutt/1.4.2.1i

It looks like it may be difficult to separate Amazon/network-provider
issues from software issues.  I'll throw in my own circumstantial
evidence: during the same week that I was looking into
duplicity/boto/S3 issues, I was doing development on a Haskell S3
library on the same machine.  I consistently had a complete inability
to use duplicity over S3, but never had an issue with my Haskell
library (which I was constantly running unit tests against, some of
which upload multi-megabyte objects).  That said, I have to believe
that the network failures others are reporting are very real, and they
certainly make this a mess to try to debug.  FWIW, I'm on AT&T DSL.

I looked into boto pretty carefully, and I found nothing to improve
on.  Duplicity's backend boto code is similarly clean and
straightforward.  I'm afraid I may have to fire up Ethereal to really
understand what is going on, but I won't have time to do that myself
for a couple weeks.
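As an aside for anyone following along: the retry behavior Mitch describes below (boto catching transient S3 500 errors and retrying until things work) follows a standard retry-with-backoff pattern. This is a minimal illustrative sketch of that pattern, not boto's actual code; the function and parameter names are made up for the example.

```python
import time

# Illustrative sketch (NOT boto's actual implementation): catch a
# transient error, sleep with exponential backoff, and retry the
# operation a bounded number of times before giving up.
def retry_with_backoff(operation, max_retries=5, base_delay=1.0,
                       retryable=(IOError,)):
    """Call operation(); on a retryable error, back off and try again."""
    for attempt in range(max_retries):
        try:
            return operation()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))
```

A caller would wrap the flaky step, e.g. `retry_with_backoff(lambda: upload_chunk(data))` for some hypothetical `upload_chunk`. The corner cases Mitch mentions are exactly the hard part: deciding which errors are retryable and whether a partially-sent request is safe to repeat.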

-Greg Heartsfield

On Mon, Oct 29, 2007 at 04:36:40PM -0400, Mitchell Garnaat wrote:
> Because S3 is a networked service, intermittent service failures (the
> dreaded 500 errors) are a way of life.  The boto library tries pretty hard
> to detect those errors and retry until things work but I'm sure there are
> corner cases I'm missing.
> 
> I mainly access S3 from home over a RoadRunner cable modem.  I honestly
> can't remember the last time I experienced an actual failure when putting
> or getting files to/from S3.  I'm sure errors and retries are happening,
> but not to the point that they ever become noticeable to me.
> 
> Having said all of that, the S3 forums seem to have a fairly high number
> of messages from people on Comcast who are having trouble accessing the
> service.  Maybe there are some bigger issues there.
> 
> Mitch
> 
> On 10/29/07, Kenneth Loafman <address@hidden> wrote:
> >
> > Eric Evans wrote:
> > > [ Greg Heartsfield ]
> > >> I used Duplicity for quite some time with the old bitbucket backend,
> > >> mostly successfully.  Since that was switched out with boto, I've never
> > >> had success doing a backup to S3.  On multiple OS X (10.4/10.5)
> > >> machines, I get "caught a socket error, trying to recover" messages
> > >> from s3/key.py (as you note).
> > >>
> > >> The error is coming from boto, but I can only recreate it when boto is
> > >> running in the context of duplicity.  I can take backup sets created
> > >> with duplicity, and run a simple program that uses boto to upload them
> > >> to S3 without problems (most of the code is line-for-line from the
> > >> boto/duplicity backend).
> > >>
> > >> The error only occurs when I send data exceeding a precise number of
> > >> bits, which I don't have at hand at the moment, but it was something
> > >> like 8k (note, this is within duplicity, it works fine running only
> > >> boto).
> > >>
> > >> I'd also love to hear from anyone who is successfully using
> > >> boto/duplicity, and what their environment is like.
> > >
> > > I backup 5 machines (weekly full, daily incrementals) to s3 using
> > > duplicity. All of them are using 0.4.3 and boto 0.9b on various versions
> > > of Debian (stable, testing, and unstable). I can't remember the last
> > > time I've had a failure.
> >
> > Are any of you guys who are seeing failures on Comcast?
> >
> > ...Ken
> >
> >
> >
> > _______________________________________________
> > Duplicity-talk mailing list
> > address@hidden
> > http://lists.nongnu.org/mailman/listinfo/duplicity-talk
> >
> >
> >


Attachment: pgpnHsSBoagM9.pgp
Description: PGP signature

