From: Dr. Jürgen Sauermann
Subject: Re: Time/size complexity of APL algorithms
Date: Mon, 17 Feb 2020 12:57:02 +0100
Thank you very much for your suggestion. See my comments below.
On 2/17/20 8:11 AM, Elias Mårtenson wrote:
No problem, that's what bug-apl is made for.
Not quite sure what a purely immutable implementation means. Is it lazy
evaluation, where the computation of the result is deferred until the
result is needed?
I very much appreciate your focus on practical relevance. Over time I
have seen some proposals on the web that promised theoretical
improvements but were easily debunked when looking at the details.
What we should aim at is the total execution time of a real-world APL
application.
+/⍳ is an example that was mentioned in the context of lazy evaluation,
and it is a very good example of what we should NOT aim at. The good
news is that the time complexity of +/⍳ can be reduced from
O(N) down to O(1). Perfect. Shall we optimize it? Certainly not. The
reason is that +/⍳ is simply a programming mistake. If a programmer
chooses to write +/⍳N instead of, for example, (-N-N×N)÷2, then
improving on that is only fixing obvious mistakes rather than improving
APL. Already 9-year-old Freddy Gauß knew how to compute +/⍳, so we
should require that an APL programmer knows it as well.
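To make the equivalence concrete, here is a minimal GNU APL session
sketch (my example, not from the original mail; it assumes ⎕IO←0, the
setting under which (-N-N×N)÷2 matches +/⍳N):

      ⎕IO←0
      N←1000
      +/⍳N            ⍝ O(N): materializes 0 1 2 ... 999, then sums
499500
      (-N-N×N)÷2      ⍝ O(1): Gauss' closed form, N(N-1)÷2
499500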
By coincidence, I was myself thinking about optimizing +/⍳ some years
ago (it is rather low-hanging fruit with an impressive speedup), but I
then dropped it due to its lack of practical relevance.
I suppose that modifying variables is a rather common programming
style in APL. Not allowing it for the sake of simpler parallelization can
easily backfire (increasing the total execution time or space).
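As a hypothetical illustration (mine, not from the original mail),
consider the common idiom of updating one element of a large vector in
place:

      A←⍳1000000
      A[1]←42    ⍝ in-place update of a single element; a purely
                 ⍝ immutable implementation would have to copy all
                 ⍝ 1000000 elements to produce the new value of A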
The true challenge of parallelization is not to find faster implementations
for specific constructs like +/⍳ but to understand the trade-off between
the benefits of parallel execution on the one hand and the (primarily
run-time) costs of it on the other.
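To make that trade-off concrete, here is a back-of-the-envelope model
(my own sketch, not from the original mail): with P cores, a
per-element time t, and a fixed fork/join overhead c, parallelizing a
length-N primitive only pays off when

      c + (N×t)÷P < N×t

that is, when N > c ÷ (t × (1 - 1÷P)). For small N the fixed overhead
dominates, which is exactly the run-time cost mentioned above.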
In the +/⍳ example this is easy (although useless) since +/⍳ can be detected
statically (at ⎕FX time). However, lazy evaluation in the general case has
far higher run-time costs than +/⍳, so it remains to be seen whether those
run-time costs can ever be amortized in a real-world application. Many of
the examples brought up by proponents of lazy evaluation have the
following design pattern:
1. create some large values,
2. perform a computation on the values (expensive due to their size), and
3. discard most of the results computed in step 2.
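A hypothetical one-liner exhibiting this pattern (my illustration, not
from the original mail):

      10↑(⍳1000000)⋆2    ⍝ 1.+2.: build and square a million numbers,
                         ⍝ 3.    then keep only the first ten squares

An eager interpreter materializes all one million squares before 10↑
discards most of them; a lazy one would compute only the ten that
survive, which is where the claimed speedup comes from, at the general
run-time cost described above.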
I strongly believe that optimizing such design patterns by the APL
interpreter (rather than by the APL programmer) is not worth the
effort and, more importantly, will not speed up APL.