RE: CVS corrupts binary files ...

From: Paul Sander
Subject: RE: CVS corrupts binary files ...
Date: Mon, 28 Jun 2004 18:31:58 -0700

>--- Forwarded mail from address@hidden

>[ On Thursday, June 17, 2004 at 15:49:39 (-0700), Paul Sander wrote: ]
>> Subject: RE: CVS corrupts binary files ...
>> Nope, I got it.  The thing is, you can control pointers (e.g. makefiles
>> containing references to files stored in a library somewhere) all you
>> want, but that buys you nothing unless the targets of the pointers are
>> also tightly controlled.

>No, you didn't "get it".

>You seem to be among the many people who always forget to keep in mind
>what CVS is _not_.  For example:  CVS is not a complete software
>configuration management system.

You are correct:  CVS is a version control system.  And it does version
control only.  That means that it archives collections of files for
later reproducibility as a set (by label or branch/timestamp) or
individually.  And it does so without regard to the content of the
files, though it works best with files that contain only ASCII text.

What CVS does NOT do includes but is not limited to the following:
Compiling or building software from sources (i.e. build procedure);
packaging, installing, configuring, or deploying software produced by
a build procedure; passing collections of versions around and tracking
their progress through the development process (i.e. change control).

Furthermore, all of these must be repeatable procedures that produce
identical results when given identical inputs (or, if not bit-for-bit
identical results, then at least bug-for-bug compatible ones), and they
must also record all of their inputs and outputs so that specific
results can be reproduced at will.  That means that the procedures must
remember environment variable settings, command line parameters, baseline
references, profiling and debugging flags, the operating system version
and patch level (and sometimes even the hardware configuration), and
a bazillion other things in such a way that they're readily available
so that someone can either a) run a procedure that's identical to another
one but with one or more specific and controlled variations, or b) reproduce
the results of a past build exactly.  And if a procedure fails for some
reason (e.g. dangling baseline pointer or OS incompatibility) then it
must diagnose the reason so that humans can perform the necessary recovery.
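As a minimal sketch of what "remembering the inputs" might look like, here is a hypothetical build wrapper that snapshots its environment before doing anything else.  The manifest name and its fields are my own invention, not any standard tool's format:

```shell
#!/bin/sh
# Sketch: record the inputs of a build so the run can be reproduced
# later.  The manifest file name and field names are hypothetical.
MANIFEST=build-manifest.txt
{
  echo "date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "uname: $(uname -srm)"           # OS version and hardware type
  echo "cflags: ${CFLAGS:-<unset>}"     # profiling/debugging flags in effect
  echo "args: $*"                       # command line parameters
  env | sort                            # full environment-variable snapshot
} > "$MANIFEST"
echo "recorded build inputs in $MANIFEST"
```

A real system would also record baseline references and checksums of every input file, but even this much makes "run the same build again" a mechanical operation instead of an act of memory.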

>Anyone using CVS as a change tracking tool _must_ have some encompassing
>software configuration management system as well.  If that encompassing
>SCM system does not have "tight control" over _all_ of the components
>and procedures and processes used in the production of the software
>products then it is certainly not the fault of CVS.

This much is true, also.  Version control is not sufficient on its
own to build a good SCM system.  All of the pieces I listed above
that CVS does NOT do are also necessary components.  Some of these pieces
are readily available, and others must be built.

However, getting back to the issue that originated this thread, there
remains the question of what to do with code drops from external sources
(particularly drops that are delivered in binary form).  I maintain that
such code drops must be handled like any other sources:  Put them under
source control, and apply the necessary change, build, and deployment
procedures to treat them like any other part of the product.
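Concretely, CVS already has a mechanism intended for exactly this: vendor branches via `cvs import`.  The module path and tag names below are hypothetical, but the commands and the `-k` keyword-substitution flag are standard CVS:

```shell
# Import the vendor drop onto a vendor branch.  The -kb flag disables
# keyword expansion, which is essential for binary files such as jars.
cvs import -kb -m "foolib 0.2 vendor drop" vendor/foolib FOOLIB FOOLIB_0_2

# Later drops go onto the same vendor branch, so each release is
# tagged and any local changes can be merged forward.
cvs import -kb -m "foolib 0.3 vendor drop" vendor/foolib FOOLIB FOOLIB_0_3
```

Once imported, the drop is tagged, branched, and exported along with everything else, which is precisely what the original poster asked for.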

Contrast this with Greg's recommendations, which I reproduce here:

++> From:       Greg A. Woods
++> Subject:    Re: CVS corrupts binary files ...
++> Date:       Tue, 8 Jun 2004 17:15:29 -0400 (EDT)

++> [ On Saturday, June 5, 2004 at 13:01:48 (-0700), Adrian Constantin wrote: ]
++> > Subject: Re: CVS corrupts binary files ... 
++> >
++> > I don't wanna merge binary files, and I'm not likely
++> > to modify them in my module (project). I just want cvs
++> > to carry them along with the sources

++> Then your better tool is called a "directory" (i.e. outside of CVS) and
++> you use it with a simple reference to it from within your build system.

++> -- 
++>                                                 Greg A. Woods

and here:

++> From:       Greg A. Woods
++> Subject:    Re: CVS corrupts binary files ...
++> Date:       Thu, 17 Jun 2004 16:12:15 -0400 (EDT)

++> [ On Wednesday, June 9, 2004 at 09:35:37 (-0400), Tom Copeland wrote: ]
++> > Subject: Re: CVS corrupts binary files ...


++> >  Most of my Java projects use 3rd party jar files,
++> > which are compressed tar balls, more or less.  And I certainly don't
++> > want to try to merge foolib-0.1.jar with foolib-0.2.jar when a new
++> > version comes out; I just want to put it in CVS so that it gets tagged
++> > and exported and so forth.

++> No, you REALLY DO NOT want (or need) to do that.  What a waste.

++> What you should do is treat the foolib product files for what they are
++> and to install them as products on your build machines in directories
++> named after their complete version-specific identifiers.

++> Then you need only program your build system to refer to the appropriate
++> directory for the appropriate components and if your build system is
++> anywhere half decent you'll simply check in the build system
++> configuration source file(s) and tag them.  Once you've done that then
++> you can check out any release of your source and type "make" and the
++> right components will be combined with your sources.
++> CVS is _not_ a complete configuration management system.

++> Please learn to use the right tool for the job!!!!

++> -- 
++>                                                 Greg A. Woods
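For contrast, Greg's scheme amounts to a version pin in a checked-in build-configuration file pointing at a directory outside CVS.  A sketch of it, with all names and paths hypothetical (and the vendor directory faked under /tmp so the sketch is self-contained):

```shell
#!/bin/sh
# Sketch of the scheme Greg describes: only the version pin is under
# version control; the component itself lives in an uncontrolled
# directory.  All names and paths here are hypothetical; the vendor
# directory is faked so the sketch runs on its own.
FOOLIB_VERSION=0.2
VENDOR_ROOT=${VENDOR_ROOT:-/tmp/vendor-demo}
mkdir -p "$VENDOR_ROOT/foolib-$FOOLIB_VERSION" build/lib
echo "fake jar" > "$VENDOR_ROOT/foolib-$FOOLIB_VERSION/foolib.jar"

# The only line CVS tracks is the FOOLIB_VERSION pin above; the copy
# below silently depends on the directory never moving or changing.
cp "$VENDOR_ROOT/foolib-$FOOLIB_VERSION/foolib.jar" build/lib/
echo "pulled foolib-$FOOLIB_VERSION into build/lib"
```

Note that nothing here verifies that the directory's contents today are what they were when the pin was committed, which is the crux of my objection below.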

and here:

++> From:       Greg A. Woods
++> Subject:    Re: CVS corrupts binary files ...
++> Date:       Thu, 17 Jun 2004 18:16:32 -0400 (EDT)

++> [ On Thursday, June 17, 2004 at 16:25:02 (-0400), Tom Copeland wrote: ]
++> > Subject: Re: CVS corrupts binary files ...
++> >
++> > Hm.  Why not simply check these jar files into the repository where they
++> > can be tagged/branched/exported and so forth?  Why should every
++> > programmer on my team need to get all the versions of each jar file laid
++> > out on his machine when he could just do a "cvs up" to get the current
++> > stuff for his branch?

++> Don't you have a build system?  (apparently you do going by your later
++> comments)

++> Can't it do all those things for you?

++> Let me repeat:  CVS is _not_ a build system.

++> Just because you can use CVS to update version-controlled files from
++> some central repository doesn't mean you should try to use CVS to copy
++> all types of files from all kinds of repositories.

++> If you have many and diverse build machines then put your static
++> (i.e. non-changing) components on a central machine in a public
++> directory and have your build system invoke the appropriate tool to copy
++> them into the build environment as necessary.  If you do that, and if
++> the way you reference those components includes their version numbers
++> (e.g. in the name of the directory they're "installed" in), and if
++> your build system is configured using normal source files
++> (e.g. text makefiles) that you commit to your CVS repository, then CVS
++> will track which version of which component is needed for every release.

++> -- 
++>                                                 Greg A. Woods

On three separate occasions, Greg actually *recommends* installing and
treating such code drops as uncontrolled sources!  Dropping stuff in
a directory and pointing makefiles at it is just plain bad CM.  The
reasons it fails to be good CM are, among others:  It does not capture
the installation and configuration options for repeatability or
reproducibility; it does not provide change control; it does not even
guarantee reproducibility of sources.

>Also what you and many other folks seem to forget as well is that manual
>procedures and processes can be far easier and more effective than
>canned software tools for implementing some parts of a complete SCM
>system.  Furthermore those who expect one tool to do everything for them
>are living in a world of pure fantasy.  Software development is first
>and foremost a process driven by people, not just other software.

This is just plain wrong.  Manual procedures, though easy, are unrepeatable
by their very nature.  And although they have their place for one-off
actions, anything that's worth doing twice is worth writing a tool for.
(Well, that last statement has practical limits; there's a break-even
point beyond which the benefit of automation exceeds the cost of
automation, but that point is usually relatively low, especially in the
CM domain.)

Yes, software development is a process driven by people.  And to be
more productive, people use tools.  Tools are written to the specification
of their users.  People are not to be enslaved by their tools; if the tools
don't work then they must be fixed or replaced.  Shops that practice
software reuse tend to believe that fixing is cheaper than replacing.

>--- End of forwarded message from address@hidden
