Re: [bug-gawk] Problem with printing 5000 lines to a coprocess


From: Hermann Peifer
Subject: Re: [bug-gawk] Problem with printing 5000 lines to a coprocess
Date: Sat, 20 Dec 2014 21:10:48 -0200
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:24.0) Gecko/20100101 Thunderbird/24.6.0

On 2014-12-20 20:37, Andrew J. Schorr wrote:

> The other approach is to write out a batch of data, and then read a batch
> of results.  In other words, instead of writing 1 line and then reading
> 1 line, or writing the whole file and then reading all the output, try
> something in between: write N lines of data at a time, and then read the
> output from that batch.  I'm not sure what the best value of N is; you
> will have to experiment.  This should give much better performance than
> writing 1 line at a time.  You just need to make sure that N is small
> enough to avoid filling the kernel buffer.


Thanks again for all the hints. I had in fact thought about the "write N lines" option; as far as I can tell from my experiments, a reasonable value for N would be around 1000.

However, while re-reading the manual, I saw this hint at the end of section 12.3:

> You may also use pseudo-ttys (ptys) for two-way communication
> instead of pipes (...) Using ptys avoids the buffer deadlock
> issues described earlier, at some loss in performance.

As far as I can tell, my initial approach (write 1 line, read 1 line) now works fine, after adding: PROCINFO[command, "pty"] = 1

As you mentioned earlier, perhaps some additional hints about buffer deadlocks could be added to the manual.

Hermann


