coreutils
From: konsolebox
Subject: Re: Allow tail to display data after stream stops sending input after N seconds
Date: Tue, 26 Apr 2022 02:06:31 +0000

On Mon, Apr 25, 2022 at 11:04 PM Rob Landley <rob@landley.net> wrote:
> You asked for:
>
> > allows tail to wait for N seconds before considering input as "stopped"
> > and then displaying currently absorbed input.
>
> Which is what Padraig offered, but what you seem to have actually WANTED
> was an infinite wait for initial output, and then stop after 10 seconds
> of no additional output.

I made enough replies to clarify my intent, but most importantly I gave
a "working" example:

tail -f /var/log/messages -n +1 | grep -e something > temp & pid=$!
# Actually missed adding '--line-buffered'
inotifywait -mqq -t 1 temp
tail -f --pid="$pid" temp
rm temp

This is what the code does:

- Start a tail instance that reads data from the file indefinitely.
- Filter the read data using grep, writing matches to a temporary file.
- Wait for a moment for the filtered data to stop being sent.
- Start another tail -f command which shows the last 10 lines collected
so far, and then continues reading and outputting further incoming
filtered data as it arrives.

This imitates a tail command with a filtering option:

tail -f /var/log/messages -e something [-n 10]

None of the replies offered an alternative to this method, as far as I can see.

> My quick-and-dirty suggestion off the top off my head was:
>
>   input | while read $X i; do echo "$i"; X="-t 10"; done | output
>
> I.E. an interposer that waits infinitely for the first gulp of data, and then
> has a shorter timeout for additional data.
>
> This seems like a thing you can easily do in an existing bash pipeline
> rather than adding an option to a command that was in unix v7 and has
> somehow gotten by without this for over 40 years? (And yes you can do byte
> at a time read/echo instead of line at a time if that's what you want.
> Again, you didn't specify...)
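The interposer idea quoted above can be sketched as a standalone function
(a sketch only; the function name and structure are mine, not from the
thread):

```shell
#!/usr/bin/env bash
# Sketch of the interposer: block indefinitely for the first line,
# then apply a timeout to every subsequent read. Names are illustrative.
interpose() {
    local timeout=$1 line
    IFS= read -r line || return      # wait forever for the first line
    printf '%s\n' "$line"
    while IFS= read -rt "$timeout" line; do   # then time out on inactivity
        printf '%s\n' "$line"
    done
}

printf '%s\n' alpha beta gamma | interpose 10
```

On EOF the timed reads fail immediately, so the function does not hang for
the full timeout once the input closes.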

Anyway, this also doesn't do what I want, and I've been wary of using
read with a timeout since, at least in theory, a line can be read
partially and then emitted as though it were complete.
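That worry can be reproduced directly (a sketch, assuming bash 4 or
later, which saves any partial input into the variable when `read -t`
times out):

```shell
#!/usr/bin/env bash
# Demonstrate read -t timing out mid-line: the unfinished line is left
# in the variable, so naively echoing it would emit it as a complete line.
{ printf 'incomplete'; sleep 2; } | {
    if ! IFS= read -rt 1 line; then
        echo "timed out, partial line: '$line'"
    fi
}
```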

But this suggestion made me rethink `read -t`.  I figured that since
grep runs with --line-buffered, at the very least the data is guaranteed
to be sent and received line by line.

I then realized I could do something like this:

tail -f /var/log/messages -n +1 | grep -e something --line-buffered | (
    # timeout and final_limit are assumed to be set beforehand
    buffer=() IFS= l=0

    while read -rt "${timeout}" __; do
        buffer[l++ % final_limit]=$__
    done

    for (( i = l % final_limit, j = l > final_limit ? final_limit : l; j > 0; ++i, --j )); do
        printf '%s\n' "${buffer[i % final_limit]}"
    done

    exec cat
)

This however isn't perfect: if the last piece of data ends without a
newline and the pipeline terminates at that point, the partial data
would be turned into a complete line.  Worse, in the way the code is
written, that partial data actually gets ignored completely.  I found a
way around it:

tail -f /var/log/messages -n +1 | grep -e something --line-buffered | (
    # wait and final_limit are assumed to be set beforehand
    buffer=() IFS= l=0

    while read -rt "${wait}" __; do
        buffer[l++ % final_limit]=$__$'\n'
    done

    # A failed read (timeout or EOF) can still leave a partial line in $__.
    # Checking $? here wouldn't work since the loop's exit status is that of
    # its last body command, so test the variable itself.
    [[ $__ ]] && buffer[l++ % final_limit]=$__

    for (( i = l % final_limit, j = l > final_limit ? final_limit : l; j > 0; ++i, --j )); do
        printf %s "${buffer[i % final_limit]}"
    done

    exec cat
)
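For reuse, the buffering part can be wrapped in a function (a sketch;
the function name, parameters, and defaults here are mine, not from the
thread):

```shell
#!/usr/bin/env bash
# Sketch: keep the last $2 lines in a ring buffer until input pauses for
# $1 seconds (or hits EOF), print them, then stream anything that follows.
tail_buffer() {
    local wait=$1 final_limit=$2 __ i j l=0
    local -a buffer=()
    while IFS= read -rt "$wait" __; do
        buffer[l++ % final_limit]=$__$'\n'
    done
    [[ $__ ]] && buffer[l++ % final_limit]=$__   # keep a trailing partial line
    for (( i = l % final_limit, j = l > final_limit ? final_limit : l; j > 0; ++i, --j )); do
        printf %s "${buffer[i % final_limit]}"
    done
    exec cat   # pass through anything that arrives afterwards
}

printf '%s\n' one two three four five | tail_buffer 1 3
```

With a ring buffer of 3 and five input lines followed by EOF, only the
last three lines are printed before the pass-through takes over.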

Ironically, the fix also circumvents the partial line read issue that I
worried about.

So I guess the feature I'm asking for may no longer be needed if this
turns out to work perfectly.


-- 
konsolebox


