Your closing statement does sound relatively accurate -- your tool's
focus seems to be on setting up a "staging area" locally, getting
things ready there, copying the files in one go, and then executing
scripts on the remote end. Fabric, on the other hand, has been more
aimed at writing a Python script "locally" which uses the SSH tunnel
to actually execute the shell-level commands on the remote server.
File copying is obviously much the same anywhere.
Glad to see I understood it all correctly :-)
The deployment logic can be changed, so I'm not (so far) reluctant to use Fabric if it can improve things...
There's really nothing preventing you from executing your current
workflow using Fabric, however, since it's capable of working locally
as well as remotely.
That's also what I was thinking, and basically what I've been trying to implement so far.
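To make the local/remote split concrete, here is a minimal plain-Python sketch of the two halves of that workflow. It deliberately does not use Fabric's API (Fabric exposes this split as its own local-vs-remote command helpers); the host name, staging command, and `run_remote` helper are all hypothetical placeholders.

```python
import subprocess

def local(cmd):
    """Run a shell command on this machine and return its stdout."""
    result = subprocess.run(cmd, shell=True, check=True,
                            capture_output=True, text=True)
    return result.stdout

def run_remote(host, cmd):
    """Run a shell command on a remote host over ssh (sketch only)."""
    # Quoting here is naive; a real tool would escape cmd properly.
    return local(f"ssh {host} {cmd!r}")

# Local staging step -- safe to execute anywhere:
print(local("echo staging ready").strip())

# The remote step would then look like this (not executed here,
# since it needs a real host):
# run_remote("deploy@example.com", "tar xzf /tmp/release.tar.gz")
```

The point is just that both halves are driven from one local Python script, which is the workflow being discussed.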
The main thing Fabric is missing that you seem to rely on is your
"environments" and "tags", which sound similar to an item on our todo
list, namely implementing a way to group servers together (something
Capistrano refers to as 'roles'). We also do not have a baked-in
concept of environments, but it's easy enough right now to implement
them (the current docs explore this somewhat, IIRC).
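For what it's worth, the Capistrano-style grouping mentioned above can be sketched in a few lines of plain Python. This is not Fabric's API (at the time of this thread the feature was still on the todo list); the role and host names are made up for illustration.

```python
# Map role names to the hosts that play that role.
roledefs = {
    "web": ["web1.example.com", "web2.example.com"],
    "db":  ["db1.example.com"],
}

def hosts_for(*roles):
    """Flatten the host lists for the requested roles, de-duplicated,
    preserving the order in which roles and hosts were given."""
    seen, hosts = set(), []
    for role in roles:
        for host in roledefs[role]:
            if host not in seen:
                seen.add(host)
                hosts.append(host)
    return hosts

print(hosts_for("web", "db"))
```

An "environment" (staging vs. production, say) then just becomes a choice of which roledefs mapping to load before tasks run, which is why it's easy to layer on top even without built-in support.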