Re: [Duplicity-talk] S3 connection timeout


From: Greg Heartsfield
Subject: Re: [Duplicity-talk] S3 connection timeout
Date: Sun, 28 Oct 2007 12:19:31 -0500
User-agent: Mutt/1.4.2.1i

I used duplicity for quite some time with the old bitbucket backend,
mostly successfully.  Since that was replaced with boto, I've never
had success doing a backup to S3.  On multiple OS X (10.4/10.5)
machines, I get "caught a socket error, trying to recover" messages
from s3/key.py (as you note).
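For context, that message comes from a retry loop in boto's s3/key.py that catches socket errors during an upload and re-attempts the transfer. The following is a simplified sketch of that pattern, not boto's actual code; the retry count, delay, and the stand-in uploader are illustrative assumptions:

```python
import socket
import time

def send_with_retry(upload, retries=5, delay=1.0):
    """Retry an upload callable on socket errors, mimicking the
    'caught a socket error, trying to recover' loop in boto's
    s3/key.py.  Simplified sketch; retries/delay are stand-ins."""
    for attempt in range(retries):
        try:
            return upload()
        except socket.error:
            # This is the point where boto prints its recovery message.
            print("caught a socket error, trying to recover")
            time.sleep(delay)
    raise IOError("upload failed after %d retries" % retries)

# Stand-in uploader that fails twice with a reset, then succeeds,
# simulating the bursty connection drops described in this thread.
state = {"calls": 0}
def flaky_upload():
    state["calls"] += 1
    if state["calls"] < 3:
        raise socket.error("connection reset by peer")
    return "ok"

result = send_with_retry(flaky_upload, delay=0)
```

In the failing case described below, the remote end keeps closing the connection, so every attempt hits the `except` branch and the loop eventually gives up.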

The error is coming from boto, but I can only recreate it when boto is
running in the context of duplicity.  I can take backup sets created
with duplicity, and run a simple program that uses boto to upload them
to S3 without problems (most of the code is line-for-line from the
boto/duplicity backend).

The error only occurs when I send data exceeding a specific size,
which I don't have at hand at the moment, but it was something
like 8k (note: this is within duplicity; it works fine running
boto alone).

I'd also love to hear from anyone who is successfully using
boto/duplicity, and what their environment is like.

Thanks,
Greg Heartsfield

On Sun, Oct 28, 2007 at 12:01:30PM +0100, Peter Schuller wrote:
> >     raise S3ResponseError(response.status, response.reason, body)
> > boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
> > <?xml version="1.0" encoding="UTF-8"?>
> > <Error><Code>RequestTimeout</Code><Message>Your socket connection to the
> > server was not read from or written to within the timeout period. Idle
> > connections will be
> > closed.</Message><RequestId>xxx</RequestId><HostId>xxx</HostId></Error>
> 
> I am seeing this a lot recently, along with "caught a socket error,
> trying to recover" from s3/key.py in boto.
> 
> In fact, right now I am completely and utterly unable to complete a
> backup properly, and had the same problem yesterday. Even a single MB
> chunk won't upload.
> 
> Tracing the process indicates that the remote end is closing the
> connection, but I don't know if any information is sent back.
> 
> tcpdumping the traffic shows a great deal of duplicate ACKs (several
> at a time), and traffic from the S3 server being very bursty (a
> bunch of packets comes through for a few hundred milliseconds, then a
> pause of up to perhaps a second, then some more, etc.). The end result
> is slow uploads (disregarding that the upload fails anyway).
> 
> This definitely seems to be an off-and-on thing. I have been backing
> up to S3 for a while without difficulties in the past (well, sometimes
> errors have happened and required a retry, but as a rule it has been
> stable).
> 
> Can anyone report similar problems? Or the reverse, is anyone
> successfully using the s3 backend over time without difficulties?
> 
> -- 
> / Peter Schuller
> 
> PGP userID: 0xE9758B7D or 'Peter Schuller <address@hidden>'
> Key retrieval: Send an E-Mail to address@hidden
> E-Mail: address@hidden Web: http://www.scode.org
> 



> _______________________________________________
> Duplicity-talk mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/duplicity-talk

Attachment: pgpePPf43g5kW.pgp
Description: PGP signature

