
Triggers (was Re: CVS diff and unknown files.)


From: Paul Sander
Subject: Triggers (was Re: CVS diff and unknown files.)
Date: Fri, 28 Jan 2005 03:26:05 -0800


On Jan 27, 2005, at 12:36 PM, address@hidden wrote:

[ On Wednesday, January 26, 2005 at 21:05:46 (-0800), Paul Sander wrote: ]
Subject: Re: CVS diff and unknown files.


See above.  If there are no add-time triggers, then I can live with
what you say. On the other hand, some shops REQUIRE add-time triggers,
and if add-time triggers are used then contacting the server is
REQUIRED to make them run. I had hoped that this was clear in the last
go-round, but apparently not.

Developers who think they require add-time triggers need their heads
examined, but if they really think they want to implement them then they
have all the power of any programming language they desire to do so in
a program that they build over and above and outside the core CVS program.

Sigh. Just because you haven't found a use for add-time triggers within the scope of your blinders doesn't mean that no one has.

Also, wrappers are not always the answer. Sure, it's possible to write a wrapper that duplicates the CVS command line and invokes whatever tools enforce policy for the given shop. The thing is, it's a huge duplication of effort when many shops do it, plus the wrappers must contact the server to ensure uniform enforcement. We already have code that does that, and it's called "cvs". It's faster, cheaper, easier, and more robust to add it to CVS.
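To make the duplication concrete, here is a minimal sketch of the kind of wrapper I mean. The wrapper's dispatch and the policy-check path are invented for illustration; a real one also has to cope with CVS's global options, and it still has to reach the server to enforce anything uniformly.

    #!/bin/sh
    # Hypothetical site wrapper around "cvs add" -- illustrative only.
    # (Ignores CVS global options like -d/-q for brevity.)
    POLICY_CHECK=/usr/local/libexec/check-add-policy   # invented path

    case "$1" in
      add)
        shift
        "$POLICY_CHECK" "$@" || exit 1   # refuse the add if the check fails
        exec cvs add "$@"
        ;;
      *)
        exec cvs "$@"                    # everything else passes straight through
        ;;
    esac

Every shop that goes this route ends up writing some variant of the above, which is exactly the duplicated effort I'm talking about.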

Agreed.  But the tool must be sufficiently flexible to allow robust
implementations of policies. Sometimes triggers are the right way (and
wrapper scripts are not), and we've identified one area here where CVS
is not sufficiently flexible.

That's pure and utter bullshit.

These so poorly named "triggers" in CVS now were a poor idea from the
start which have been misperceived by folks such as yourself (and no
doubt your blathering on about your misperceptions has only spread the
problem further) and they've been rather poorly implemented to boot.

I agree that triggers as implemented by CVS are poor. They're better than they once were, but they're still poor. That doesn't mean they're not needed or shouldn't be used (or fixed).

If you think you want policy control over working directories then
either you're using the wrong tool for the job, or you need to think
more like a true toolsmith and build your policy enforcement tools over
and above and outside the core CVS program where they belong.

Are you arguing here that I should not be using triggers, or that I should not be using CVS? If the former, then Greg, the ignorance shown by that statement utterly astonishes.

My current employment has me implementing process automation and policy enforcement every day. One of my current projects is to design an engineering change order (ECO) process and the necessary tools to implement it. This is the third such process that I have done (for different projects) in the past two years. We don't use CVS for these projects, but rather a home-grown system. Until recently it had no triggers, and even now their implementation is incomplete. In other words, we face the same limitations with this system as we would have with CVS.

People (other than myself, working on other projects) demanded that triggers be added to this version control system for the following reasons, which are valid:

- The command given by the user doesn't always have enough information for a wrapper to determine all of the artifacts affected by the trigger. Some processing by the version control system is required before refinements take over.
- Wrappers can be subverted by the users, sometimes accidentally, sometimes deliberately.
- Some users must be kept away from certain features.
- Access to certain features must be restricted conditionally upon the user's specific task.
- The runtime cost of certain actions is much cheaper when invoked at a level below the command line.
- In client/server environments, certain actions must be performed in the context of the server, not in the context of the user. The security model is such that access to the server is given only to certain trusted applications.
- Wrappers can only add functionality. They cannot remove or limit existing functionality.
- The actions of wrappers cannot be incorporated into the back-end transactions implemented by the version control primitives.
- Wrappers cannot utilize or control primitive operations at a lower level than the command line.
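For reference, the trigger hooks that stock CVS does provide are wired up through the CVSROOT administrative files. A rough sketch (the policy scripts named here are hypothetical) looks like this:

    # CVSROOT/commitinfo -- commit-time trigger.  The first field is a
    # regexp matched against the repository directory, the rest is a
    # program run before each commit; a non-zero exit aborts the commit.
    ^projectA/src    /usr/local/libexec/commit-policy-check

    # CVSROOT/taginfo -- tag-time trigger, same idea for "cvs tag"/"cvs rtag".
    ALL              /usr/local/libexec/tag-policy-check

Note that there is no corresponding add-time hook, which is precisely the gap under discussion.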

Now I will give an example that will absolutely horrify you: We have actually identified cases where our counterparts of "cvs checkout" and "cvs update" must be restricted in ways that don't involve classical access control. This directly counters your assertion that policies don't reign over individuals' workspaces.

The requirements are:
- There is a notion of "development" phase and "frozen" phase.
- Different branches of the project can be in different phases.
- During the "development" phase, all users have access to all versions of all files.
- During the "frozen" phase:
  - Users who work on an ECO can fetch the latest checked-in versions of files related to that ECO only.
  - All users can fetch specific groups of files that have undergone release integration.
- These rules are complete, and there are no exceptions.

The project is large with many interconnected pieces. (Here are some metrics to give a rough idea of how large: A single baseline occupies more than 120GB of disk. The repository exceeds 500GB of disk, after compression. Workspaces dedicated to development, excluding test benches and data collected from the verification effort, exceed 6TB of disk.) We run a huge battery of tests to verify that the project meets its specifications. The tests can run at arbitrary times by arbitrary users. (Sometimes the tests are run in the filesystem containing the baseline produced by the integration process. Sometimes they're run in a user's workspace or dedicated test bench and the results are deliverables in their own right, which undergo release integration.) Run times of test suites range from a few seconds to several days (non-stop, fully automated).

We subject ourselves to these rules because, unless someone is actively fixing a problem, we want people to have only working configurations of the project. The project is so large and the testing procedures so costly that we simply cannot afford the price of recovering from version mismatch errors should users have arbitrary control over their workspaces. These kinds of errors cause significant schedule slips, which in turn shorten the relevance of the product in a very fast-moving industry, compounding the cost by way of lost revenue.

It turns out that all of the users have learned and retained just enough version control to get their day-to-day work done, assuming nothing "special" happens. In other words, they are versed well enough in the version control system to get themselves into trouble. Plus sometimes people swoop in from outside to use our project as test data for some new procedure and assume that their knowledge of the basic tools is sufficient to work effectively.

Because the ECO process builds metadata in addition to that kept by the version control system, and because the notions of "working" and "not working" are tracked by the ECO process, we really want users' environments to be in the context of the ECO process while the project is frozen. The easiest way to guarantee that is to make sure that the capability to fetch arbitrary versions of files is removed from the environment in the context of the ECO process, preferring instead to force the user to fetch only those versions that have been proven to work together. But because the same users may be working on frozen and unfrozen parts of the project concurrently, we can't simply remove the version control system from the environment.

We found that the fastest, easiest, and most robust way to enforce this is to:
- Write ECO tools in such a way as to inform called tools that an ECO context is active.
- Record metadata in the version control system to specify what parts of the project are in a "frozen" phase.
- Write triggers on the fetch operations that sense both conditions and grant access according to the rules listed above.
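Stripped of the surrounding tooling, the fetch trigger amounts to something like the sketch below. CVS has no such hook today; the paths, the argument order, and the ECO_ID variable are illustrative, not our real interface.

    #!/bin/sh
    # Sketch of a fetch-time trigger: exit 0 to allow the fetch, non-zero
    # to deny it.  Invoked (hypothetically) with the branch and file name.
    BRANCH=$1; FILE=$2

    FROZEN_LIST=/vc/meta/frozen-branches     # metadata written by the ECO tools (invented path)
    ECO_FILES=/vc/meta/eco-files.$ECO_ID     # files covered by the active ECO, if any

    # Development phase: the branch is not frozen, so no restriction.
    grep -qx "$BRANCH" "$FROZEN_LIST" 2>/dev/null || exit 0

    # Frozen phase: allow the fetch only inside an active ECO context and
    # only for files belonging to that ECO.  (Files that have passed
    # release integration are checked against another list, not shown.)
    if [ -n "$ECO_ID" ] && grep -qx "$FILE" "$ECO_FILES" 2>/dev/null; then
        exit 0
    fi
    echo "fetch of $FILE on frozen branch $BRANCH denied" >&2
    exit 1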

Now, it's true that our implementation of this has nothing to do with CVS, but a process like this is equally relevant using CVS as the version control tool. It's also true that it has nothing to do with add-time triggers. But I hope I'm getting across the point that the need for triggers crops up in the most surprising places. Putting one into "cvs add" just happens to be a higher priority because people (other than myself) have mentioned it here before.

It's frequently said to users that "tags apply to the versions in your
workspace" when "cvs tag" is used.

If that's the way it is being said then that is misleading.

Tags applied by the "cvs tag" command go, by default, against the
BASE REVISIONS of the files in the workspace.

(and besides, files in the working directories don't really have
revisions -- they were derived from their base revision and they may, or
may not, have local changes)

And then you have to explain all this, and if you're very lucky they might understand the difference. If you're extremely lucky they might even care. In no case will they change their minds about how it "should" work.
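For what it's worth, the explanation usually needs a concrete session before it sinks in. A minimal one (file name, tag, and revision numbers invented, status output abbreviated) looks like this:

    $ cvs update foo.c        # working file is now based on revision 1.4
    $ vi foo.c                # make a local, uncommitted change
    $ cvs tag BETA-1 foo.c    # the tag attaches to revision 1.4 in the
                              # repository, not to the edited text on disk
    $ cvs status -v foo.c
       ...
       Working revision:    1.4
       Repository revision: 1.4     /cvs/proj/foo.c,v
       ...
       Existing Tags:
            BETA-1                       (revision: 1.4)

And even with that in front of them, most users still expect the tag to mean "whatever is in my sandbox right now."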

 But on the other hand,
having an atomic commit/tag operation would be useful if it existed...

once again you're trying to use, or thinking of using, the wrong tool
for the job.

CVS does not, should not, and need not, try to guarantee atomicity for
higher-level SCM abstractions.

You're kidding, right? I'm too tired to start in on this other than to say that this statement is so far off base as to be laughable. So for now I'll assume that it's a very dry joke.

The problem was that "cvs tag" was complaining that it could not tag
the foo file.  This is because CVS didn't remember what version it had
after the rm.

Well, obviously cvs does remember the base revision of a locally removed
file (1.3 in this case):

5:31 [1872] $ cvs status which.csh
===================================================================
File: no file which.csh         Status: Locally Removed

   Working revision:    -1.3    Tue Jun 17 21:13:33 2003
   Repository revision: 1.3     /cvs/master/m-NetBSD/main/src/usr.bin/which/Attic/which.csh,v
   Sticky Tag:          netbsd-1-6 (branch: 1.3.12)
   Sticky Date:         (none)
   Sticky Options:      (none)


The bug in "cvs tag -F" is that it just doesn't make use of that memory.

Here at least we agree.  :-)
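Until that's fixed, one workaround (tag name invented here) should be to go straight at the repository with rtag, which takes an explicit revision and never consults the working copy:

    $ cvs -d /cvs/master rtag -F -r 1.3 some-tag m-NetBSD/main/src/usr.bin/which/which.csh

That simply re-points the tag at the base revision that "cvs tag -F" forgot to use.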

--
Paul Sander | "Lets stick to the new mistakes and get rid of the old
address@hidden | ones" -- William Brown




