info-cvs



From: Eric Siegerman
Subject: Re: proposed workaround (RE: Bidirectional repository synchronization with CVSup - how?)
Date: Sun, 23 Sep 2001 20:02:06 -0400
User-agent: Mutt/1.2.5i

On Sun, Sep 23, 2001 at 08:40:57AM -0500, Art Eschenlauer wrote:
> Assumptions,
> 6.  DS1L is a "super-sandbox" repository at DS1, which is to say that it
>       is where untested changes may be committed.  Its hierarchy is
>       maintained identically with that of GMSL.

Isn't this exactly the problem you asked about in the first
place?  If so, it would seem kind of premature to assume it
solved.

> [...]

8.  People can repeatedly follow a complicated manual procedure
    that provides no safeguards, without screwing up.
9.  People will be willing to go through that, N times a day,
    without being tempted to cheat, commit less frequently than
    they should, or otherwise use the source-control regime
    suboptimally.

Neither of those assumptions is one I'd want to trust.  The
advantage of my proposal is that, once set up, it works pretty
much transparently; it looks just like "normal" CVS.

OK, so let's take a step back, and question the assumption that's
really central to all this:
> 7.  Having a single site for the CVS repository and using compression
>       do not give performance that is acceptable by all stakeholders.
>       That is why I am taking this approach.

What kind of performance are you getting?  E.g. how long does a
"cvs update" take when the sandbox is already up to date?  When
it isn't?  Why is that unacceptable?  What would be considered
acceptable?
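For what it's worth, here's a crude way to collect those numbers.  This
is only a sketch: the sandbox path is a stand-in for yours, and the
script substitutes a no-op if cvs isn't on the PATH, so it at least
runs anywhere.

```shell
#!/bin/sh
# Time "cvs -q update" in a sandbox.  SANDBOX is a hypothetical
# path, defaulting to the current directory.
SANDBOX=${SANDBOX:-.}
cd "$SANDBOX" || exit 1
if command -v cvs >/dev/null 2>&1; then
    CMD="cvs -q update -d"
else
    CMD=true            # no cvs on this machine; use a no-op
fi
START=$(date +%s)
$CMD >/dev/null 2>&1
END=$(date +%s)
ELAPSED=$((END - START))
echo "update took ${ELAPSED}s"
```

Run it once when the sandbox is already up to date, and again right
after someone else commits, and you have both data points.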

What kind of files are you working with? (Text or binary?  How
many files/directories?  Typical file size?  Total size of a
sandbox?)  What kind of network topology do you have?  What kind
of development process?  How often do you want people to update?
To commit?  Where are the bottlenecks?  Is it network bandwidth?
Disk I/O?  Concurrent-update locking?
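The sandbox-shape questions, at least, can be answered with standard
tools.  Again just a sketch; SANDBOX is a hypothetical path defaulting
to the current directory.

```shell
#!/bin/sh
# Profile a sandbox: file/directory counts and total size,
# skipping the CVS/ bookkeeping directories.
SANDBOX=${SANDBOX:-.}
NFILES=$(find "$SANDBOX" -name CVS -prune -o -type f -print | wc -l)
NDIRS=$(find "$SANDBOX" -name CVS -prune -o -type d -print | wc -l)
echo "files: $NFILES  directories: $NDIRS"
du -sh "$SANDBOX"      # total size of the working files
```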

What client and server platform(s)?  Which CVS program(s)?  Which
version(s)?

What all these questions are driving at is this: maybe there's
something wrong that, if fixed, would give CVS acceptable
performance with a single repository.  Or maybe the stakeholder in
question just has unrealistic expectations...

What's wrong with my proposal?  (I ask seriously, not petulantly;
whatever it is, maybe it can be addressed.)

Of course, it might make perfect sense to go ahead with the
complicated, error-prone scheme.  If it fails spectacularly
enough, that could well free up the money to invalidate the final
assumption (taken from your first message): "bitkeeper is not
(yet) an option for us" :-)

--

|  | /\
|-_|/  >   Eric Siegerman, Toronto, Ont.        address@hidden
|  |  /
The world has been attacked.  The world must respond ... [but] we must
be guided by a commitment to do what works in the long run, not by what
makes us feel better in the short run.
        - Jean Chrétien, Prime Minister of Canada


