Re: Lost process output in pipe between Emacs and CVS


From: kevin wang
Subject: Re: Lost process output in pipe between Emacs and CVS
Date: Wed, 24 Jul 2002 16:11:30 -0700

Derek Robert Price wrote:
> The problem is that in a standard configuration:
> 
>            --stderr->     -------------stderr------------>
>           /          \   /                     /          \
> CVS Server            ssh            CVS client            tty
>           \          /   \          /          \          /
>            --stdout->     --stdout->            --stdout->
> 
> Note that since CVS didn't access the stderr of its child process, ssh, 
> the child process gets a clone of the parent process' stderr descriptor 
> and ssh and the CVS client end up sharing the tty's standard error.
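
To see that concretely: the non-blocking flag lives on the open file
description, which fork() shares between parent and child.  A little C
sketch (illustration only, not CVS code):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        if (fork() == 0) {
            /* Child (playing ssh): set its stderr non-blocking. */
            fcntl(2, F_SETFL, fcntl(2, F_GETFL) | O_NONBLOCK);
            _exit(0);
        }
        wait(NULL);

        /* Parent (playing CVS): the flag shows up here too, because
         * both descriptors refer to the same open file description. */
        printf("parent stderr is %sblocking\n",
               (fcntl(2, F_GETFL) & O_NONBLOCK) ? "non-" : "");
        return 0;
    }
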
> 
> Now, when the user redirects stderr to stdout, say, to redirect the 
> output to a file (e.g. CVS_RSH=ssh cvs diff >tmp.diff 2>&1), you get the 
> following configuration:

This may sound silly, but as a temporary workaround, can't you do:

CVS_RSH=ssh cvs diff >tmp.diff 2>tmp.diff

That ought to open the file twice, as two separate file descriptors, so
ssh's O_NONBLOCK never reaches CVS's stdout.  The downside is that the
I/O is now unsynchronized, and lines from one stream may land in the
middle of (or on top of) lines from the other, but at least nothing
should get dropped the way it does now.  Switching to line buffering
should eliminate most of that problem.
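
To convince yourself of that: the two open(2) calls create two
independent open file descriptions, each with its own status flags and
its own file offset.  A quick C sketch (again just an illustration,
nothing from CVS):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Two separate opens, as the shell does for ">f 2>f". */
        int out = open("tmp.diff", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        int err = open("tmp.diff", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        /* O_NONBLOCK on one descriptor does not leak to the other... */
        fcntl(err, F_SETFL, fcntl(err, F_GETFL) | O_NONBLOCK);
        printf("out is %sblocking\n",
               (fcntl(out, F_GETFL) & O_NONBLOCK) ? "non-" : "");

        /* ...but each keeps its own offset, so one stream can write
         * over the other's bytes. */
        write(out, "stdout\n", 7);
        write(err, "STDERR\n", 7);  /* offset 0 again: clobbers the line above */
        close(out);
        close(err);
        return 0;
    }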

This assumes, of course, that this is what you're trying to do.  There
are other possibilities that you can't control, I know, and it doesn't
solve the real problem.

>            --stderr->     -------------stderr-------------
>           /          \   /                     /          \
> CVS Server            ssh            CVS client            >tty/file/whatever
>           \          /   \          /          \          /
>            --stdout->     --stdout->            --stdout--
> 
> Since CVS was using the same file descriptor for stderr and stdout, ssh 
> is writing to CVS's stdout descriptor as its stderr.  When ssh sets its 
> stderr to non-block, the same happens to CVS's stdout.  Since CVS isn't 
> prepared for this, data gets lost (written to a non-blocking descriptor 
> without watching for EAGAIN).
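
(Interjecting here: a write loop that is actually safe against a
descriptor somebody has flipped to non-blocking would have to look
roughly like this -- a sketch of what's missing, not CVS's actual code:

    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>

    /* Write all of buf, retrying instead of dropping data when a
     * non-blocking descriptor returns EAGAIN. */
    int write_all(int fd, const char *buf, size_t len)
    {
        size_t done = 0;
        while (done < len) {
            ssize_t n = write(fd, buf + done, len - done);
            if (n >= 0) {
                done += (size_t)n;
            } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
                struct pollfd p = { .fd = fd, .events = POLLOUT };
                poll(&p, 1, -1);   /* wait until writable, then retry */
            } else if (errno != EINTR) {
                return -1;         /* real error */
            }
        }
        return 0;
    }
)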
> 
> So, anyway, cat wouldn't need to do line buffering.  What has been 
> proposed is that a script stick cat in between ssh's stderr and cvs's 
> stderr.  I assume by redirecting ssh's stderr to cat's stdin and then 
> cat's stdout back to CVS's stderr, but I'm going to leave stdin out of 
> the following picture for convenience:
> 
>            --stderr->     --stderr----->cat-------stderr--
>           /          \   /                     /          \
> CVS Server            ssh            CVS client            >tty/file/whatever
>           \          /   \          /          \          /
>            --stdout->     --stdout->            --stdout--
> 
> Now, when ssh sets its stderr to O_NONBLOCK, only cat's stdin will be 
> affected.  cat's buffering ability will be irrelevant since ssh is the 
> only PROCESS that needs to be aware of the non-blocking i/o and resend 
> data and it is already doing that.

Yup.  Perhaps re-opening/re-assigning stderr before forking off ssh?
Hm, no, you still have the fd-cloning problem.

You could fork off a child and do the 'cat' thing yourself, but
exec'ing cat would have the same result.

You could set up a pipe and use the parent process to do the read/write
separation, but that's no different.
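
For the record, that pipe version would look something like this sketch
(hypothetical code, not a patch; "somehost" is made up, and the
stdin/stdout plumbing is omitted as in your diagrams).  ssh's
O_NONBLOCK lands on the pipe's write end, and the relay writes to the
real stderr through a descriptor ssh never sees:

    #include <unistd.h>

    int main(void)
    {
        int pfd[2];
        if (pipe(pfd) < 0)
            return 1;

        if (fork() == 0) {
            /* Child: give ssh the pipe as its stderr, so any
             * O_NONBLOCK it sets affects only the pipe's write end. */
            dup2(pfd[1], 2);
            close(pfd[0]);
            close(pfd[1]);
            execlp("ssh", "ssh", "somehost", "true", (char *)0);
            _exit(127);
        }

        /* Parent: relay the child's stderr onto the real stderr,
         * whose descriptor the child never touched. */
        close(pfd[1]);
        char buf[4096];
        ssize_t n;
        while ((n = read(pfd[0], buf, sizeof buf)) > 0)
            write(2, buf, (size_t)n);
        close(pfd[0]);
        return 0;
    }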

Hm, it's too bad that the shared open file description also carries the
blocking/non-blocking setting, but I suppose that's unavoidable.  You
cannot have a single open file description be both blocking and
non-blocking at the same time.


Thanks for taking the time to detail the issues.  I can't seem to find
the first part of the thread; did it come from another mailing list, or
am I just being blind?

   - Kevin


