


From: Jeff Forcier
Subject: Re: [Fab-user] [ANN] Fabric 1.0.4, 1.1.4, 1.2.2 released, & status update
Date: Mon, 5 Sep 2011 18:40:01 -0700

Hi Ramon,

On Fri, Sep 2, 2011 at 2:31 AM, Ramon van Alteren <address@hidden> wrote:

> I would prefer a scenario where fabric would always execute a command marked
> for parallel execution on all hosts and report status afterwards. The
> execution of any subsequent commands could be halted or not depending on the
> value of warn_only.
> I was wondering what your take on this was.

Without having digested Morgan's work yet, I would agree with your
scenario/statements here. Once parallelism enters the mix, I think the
most sensible execution method is to do "best effort" for all hosts in
the host list, and only fail once every host has attempted to execute
the task in question.

I think this also ties into the idea of "checkpointing", where if one
had tasks A,B,C running on many hosts, the common case would be to run
task A to completion on all hosts before continuing to task B, and so
on.
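The "best effort, then checkpoint" flow described above could be sketched roughly as follows. This is only an illustration of the idea, not Fabric's implementation: it uses threads for brevity (the work in ticket #19 is multiprocessing-based), and `execute_checkpointed` is a made-up name.

```python
from concurrent.futures import ThreadPoolExecutor

def execute_checkpointed(tasks, hosts):
    """Run each task to completion on every host before starting the next.

    Best effort: a failure on one host does not stop the other hosts from
    attempting the current task; only once all hosts have tried it do we
    stop at the checkpoint instead of moving on to the next task.
    Returns {task_name: {host: result_or_exception}}.
    """
    all_results = {}
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        for task in tasks:
            futures = {host: pool.submit(task, host) for host in hosts}
            results = {}
            failed = False
            for host, fut in futures.items():
                try:
                    results[host] = fut.result()
                except Exception as exc:
                    results[host] = exc
                    failed = True
            all_results[task.__name__] = results
            if failed:
                break  # checkpoint: every host attempted; abort before next task
    return all_results
```

With tasks A, B, C and a failure in B on one host, B still finishes on the remaining hosts, but C is never started anywhere.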


> I would prefer to suppress all interaction in parallel execution mode and
> make the task fail instead.

This is already in, as the `abort_on_prompts` setting (which is relatively new).
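For reference, enabling it in a fabfile looks like this (assuming Fabric 1.1+, where the setting lives on `env`):

```python
from fabric.api import env, run

# Turn any would-be interactive prompt (passwords, aborted connections,
# etc.) into an immediate task failure -- appropriate for unattended
# parallel runs where nobody is watching the terminal.
env.abort_on_prompts = True

def uptime():
    run("uptime")
```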


> Switching to logging will not solve this problem, interleaved output will
> still happen in a situation where all output is pushed into a logging
> stream.

The idea as I had it was to set up each host's stdout/err as a
separate logging object. By default, they would (probably) all still
be writing to sys.stdout, and thus quite possibly getting interleaved,
but it should/will be trivial to switch that to writing one file per
host, or (more daringly perhaps) to one screen/tmux session per host,
if the user wants.

The main point of using a logging library is that it makes this sort
of multiplexing a lot easier than naively printing as we do now.
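A minimal sketch of that multiplexing idea, using only the stdlib `logging` module and assuming nothing about Fabric's eventual API (`logger_for_host` is a made-up helper):

```python
import logging
import sys

def logger_for_host(host, to_file=False):
    """Return a dedicated logger for one host's output.

    By default all hosts still share sys.stdout (so output can interleave),
    but redirecting a single host is just a matter of swapping its handler,
    e.g. to one log file per host.
    """
    log = logging.getLogger("fabric.%s" % host)
    if not log.handlers:
        if to_file:
            handler = logging.FileHandler("%s.log" % host)
        else:
            handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(logging.Formatter("[%(name)s] %(message)s"))
        log.addHandler(handler)
        log.setLevel(logging.INFO)
        log.propagate = False
    return log
```

The point is that once each host's stdout/err goes through its own logger, rerouting it (per-host files, a screen/tmux session, anything with a file-like interface) is a handler swap rather than a rewrite of the printing code.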

> I noticed that return values from tasks are not captured in the current
> codebase (is this correct ?)

"Capturing" return values for tasks themselves doesn't make any
conceptual sense at the moment because when using 'fab' the return
values have nowhere interesting to go; and when not using 'fab',
you're in full control and can do whatever you want with a task
function's return value.

If I'm missing what your question is about, let me know.


> Although in a very embryonic state, I took the work that Morgan did on the
> parallel queue and converted it to a queue that is capable of capturing the
> return values of tasks executed in parallel.

I haven't had a chance to look at this yet, but if you can add a
comment with a link to it in the multiprocessing ticket (#19) that'd
be great!


Thanks for all the input, it's appreciated.

Best,
Jeff


-- 
Jeff Forcier
Unix sysadmin; Python/Ruby engineer
http://bitprophet.org


