From: Tom Lord
Subject: Re: [Gnu-arch-users] Working out a branching scheme [was: tag --seal --fix]
Date: Fri, 2 Apr 2004 08:17:42 -0800 (PST)

    > From: Aaron Bentley <address@hidden>

    > Tom Lord wrote:
    > >     > From: Miles Bader <address@hidden>

    > >     > I don't see any problem with going into
    > >     > the thousands at least.

    > > Thousands and beyond.  To be fair, there's a few obscure optimizations
    > > that will have to come on line as these edge cases get exercised but,
    > > once we're into a few thousands of changesets in a version --- if your
    > > project has more than that in a given year then you are probably out
    > > of control.

    > Well, determining the patchlevel is an O(n) operation.  No matter how 
    > fast it is, it'll be slow if you throw enough revisions at it.  See my 
    > other post for an optimization that makes the O(n) part a local operation.

You wrote:

    > Hmm.  Assume each listing at 80 bytes, and 10,000 patches, and that's 
    > 800k of directory listings.  Sheesh.  But there's an obvious 
    > optimization: I have patch-10000; does patch-10001 exist? no? version-0? 
    > no? Okay, patch-10001 it is.
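
Aaron's probe-forward idea is easy to picture in code.  A minimal
sketch (hypothetical names; arch really works over archive transports,
not a bare local directory):

```python
import os

def next_patchlevel(version_dir, last_known):
    """Find the next free patch level by probing forward from the
    newest level we already know about, instead of reading the whole
    directory listing for the version."""
    level = last_known + 1
    # Each probe is a single existence check, not a full listing.
    while os.path.exists(os.path.join(version_dir, "patch-%d" % level)):
        level += 1
    return "patch-%d" % level
```

A committer whose tree is already up to date pays one or two existence
checks instead of an O(n) listing in the number of revisions.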

One could usually do even better than that.  Absent a --force option,
`commit' doesn't have to look at which revisions already exist
_at_all_.  The revision-to-be-committed can be inferred entirely from
tree-state and command line arguments and options.  The step for
acquiring the lock will fail if the user has tried to commit an
out-of-date tree or has omitted a necessary --fix option.  (--force is
a different matter.)
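
The inference might look roughly like this (a hypothetical sketch of
the idea, not arch's internals; the real decision also consults the
sealed/fixed state the tree records):

```python
def revision_to_commit(tree_latest, fix=False):
    """Infer the next revision name from the tree's own patch log,
    with no archive listing at all.  If the tree is out of date, the
    later lock-acquisition step fails, so correctness is preserved."""
    kind, _, num = tree_latest.rpartition("-")
    if fix:
        # Past a sealed version-0, commits continue as versionfix-N.
        nxt = int(num) + 1 if kind == "versionfix" else 1
        return "versionfix-%d" % nxt
    return "patch-%d" % (int(num) + 1)
```

So `commit` on a tree whose newest patch-log entry is patch-41 simply
proposes patch-42 and tries to take the lock.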

But I doubt it's worth the trouble.

First, 80 bytes is much too high a guess.  "versionfix-XXXX" is 15
characters.  Assuming a .listing file, add a carriage return and
newline for 17 bytes per entry.

An upper bound is roughly 166k for the listing, not 800k.  Now, I
grant you, that's still 30 or 40 seconds over a 56k modem but
otherwise it's just a blip.
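
Spelling out the arithmetic (Python as a calculator; note the name
plus CR+LF comes to 17 bytes per entry):

```python
entries = 10_000
entry_bytes = len("versionfix-XXXX") + 2   # 15-char name plus CR and LF
total = entries * entry_bytes              # 170,000 bytes for the .listing
kib = total / 1024                         # about 166 KiB -- far below 800k
throughput = 56_000 / 8                    # a 56k modem at its theoretical best
seconds = total / throughput               # mid-20s; longer on a real line
print(total, round(kib), round(seconds))
```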

Second, I'd be more worried about `tag' for which, at that scale,
we'll need to add "log summaries" or something similar.

Third, remember, though, that at 10K patches _in_a_single_version_
we're really in the realm of a comfortable overestimate of anything
people should reasonably want.  If created over a long period of time,
such a long line of development should be split into successive
versions.  If created over a short period of time -- what the heck is
going on?  You really expect people to do something reasonable with
_that_ high a rate of change on a single line?  So, over a short
period of time, I think you'd want it split into parallel versions.

Someone mentioned the interesting case of a CVS archive that cscvs
calculates at 11,000 revisions.  Aside from needing to tweak cscvs for
this purpose or build some new scripts around it -- is there any
reason not to want to split that into successive versions?  It's
unlikely there'd be any utility in keeping that many patch log entries
in the tree and, if you're going to prune, you'll want some version
boundaries in there to serve as the unit of pruning.  Sheesh --- can
you imagine tagging the latest revision in an 11,000 revision version?
The need for "log summaries" aside -- the log message of the tag
revision would be a better fit for your 80-bytes-per-revision size
estimate.


-t
