
Re: Time/size complexity of APL algorithms


From: Dr. Jürgen Sauermann
Subject: Re: Time/size complexity of APL algorithms
Date: Mon, 17 Feb 2020 12:57:02 +0100
User-agent: Mozilla/5.0 (X11; Linux i686; rv:60.0) Gecko/20100101 Thunderbird/60.6.1

Hi Elias,

thank you very much for your suggestion. See my comments below.

Jürgen

On 2/17/20 8:11 AM, Elias Mårtenson wrote:
> First of all, I hope Jürgen is OK with me posting about an APL-related topic that doesn't have anything directly to do with GNU APL. I'm doing this because this is the best place to find like-minded people. Sorry about that.

No problem, that's what bug-apl is made for.
> The discussions around parallelisation led me to start thinking about ways to improve this in APL, and I was wondering if it would be possible to make a purely immutable APL implementation. If so, it would, at least in theory, be possible to defer a lot of computations until a result is actually requested. This would be a lot easier to parallelise.

Not quite sure what a purely immutable implementation means. Is it lazy
evaluation where the computation of the result is deferred until the
result is needed?
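
For concreteness, here is a rough sketch (my own illustration in C++, not
anything taken from GNU APL or from your prototype) of what I understand by
deferred evaluation: an array node that only computes its cells when they
are actually read, so ⍳ never materialises its result.

    // Minimal sketch of deferred ("lazy") evaluation of +/⍳N.
    // All names and the cell-at-a-time interface are invented for illustration.
    #include <cstdint>
    #include <functional>
    #include <iostream>

    struct LazyArray {
        int64_t length;                          // number of ravel cells
        std::function<int64_t(int64_t)> cell;    // compute cell i on demand
    };

    // monadic ⍳N with ⎕IO=1: cell i is simply i+1; no buffer is allocated
    LazyArray lazy_iota(int64_t N) {
        return { N, [](int64_t i) { return i + 1; } };
    }

    // +/ over a lazy array: the only place where cells are actually evaluated
    int64_t plus_reduce(const LazyArray &a) {
        int64_t sum = 0;
        for (int64_t i = 0; i < a.length; ++i) sum += a.cell(i);
        return sum;
    }

    int main() {
        // +/⍳1000000 runs in constant space: the index vector never exists
        std::cout << plus_reduce(lazy_iota(1000000)) << "\n";   // 500000500000
    }
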
> I hacked together a prototype which does this very thing. I've implemented some of the APL primitives, which allows me to test different ideas.

> Now, for my question: do you have any examples of APL algorithms that are very slow, or use a lot of memory? I want to know if my ideas actually work in practice.

I very much appreciate your focus on practical relevance. Over time I
have seen some proposals on the web that promised theoretical
improvements but were easily debunked when looking at the details.
What we should aim at is the total execution time of a real-world APL
application.
> If you are curious: certain things are good. For example, +/⍳N runs in constant space. Others are not good. For example, debugging is a nightmare since evaluation can happen at any time (or not at all, or multiple times).

+/⍳ is an example that was mentioned in the context of lazy evaluation,
and it is a very good example of what we should NOT aim at. The good
news is that the time complexity of +/⍳ can be reduced from
O(N) down to O(1). Perfect. Shall we optimize it? Certainly not. The
reason is that +/⍳ is simply a programming mistake. If a programmer
chooses to write +/⍳N instead of, for example, (-N-N×N)÷2, then
improving on that is only fixing an obvious mistake rather than improving
APL. Already the 9-year-old Freddy Gauß knew how to compute +/⍳, so we
should require that an APL programmer knows it as well.

By coincidence, I was myself thinking about optimizing +/⍳ some years
ago (it is a rather low-hanging fruit with an impressive speedup), but I
then dropped it due to its lack of practical relevance.
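
Just to put numbers on that low-hanging fruit, a small sketch of my own
(assuming ⎕IO←0, which is what the expression (-N-N×N)÷2 above corresponds
to); the closed form does in a handful of operations what the reduction
does in N additions:

    // O(N) versus O(1) for +/⍳N, assuming ⎕IO=0.
    #include <cstdint>
    #include <iostream>

    // what +/⍳N has to do without special-casing: N additions
    int64_t sum_linear(int64_t N) {
        int64_t sum = 0;
        for (int64_t i = 0; i < N; ++i) sum += i;   // 0+1+...+(N-1)
        return sum;
    }

    // the Gauß closed form, i.e. (-N-N×N)÷2: constant time
    int64_t sum_gauss(int64_t N) {
        return N * (N - 1) / 2;
    }

    int main() {
        std::cout << sum_linear(100000) << " " << sum_gauss(100000) << "\n";
        // both print 4999950000
    }
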
> Also, x[N]←V is invalid code since arrays cannot be modified.

I suppose that modifying variables is a rather common programming
style in APL. Not allowing it for the sake of simpler parallelization can
easily backfire (increasing the total execution time or space).
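
To make that concrete, a deliberately naive sketch of my own (a real
immutable implementation would of course try to be smarter, e.g. by
reusing the old array when it is provably dead): if x[i]←v has to preserve
the old value of x, every update copies the whole ravel.

    // Assumed cost model: in-place update is O(1), immutable update is O(#x).
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // mutable style: x[i]←v touches one cell
    void update_in_place(std::vector<double> &x, int64_t i, double v) {
        x[i] = v;
    }

    // immutable style: x[i]←v copies the whole ravel and keeps two arrays alive
    std::vector<double> update_copy(const std::vector<double> &x,
                                    int64_t i, double v) {
        std::vector<double> y(x);   // full copy of the old array
        y[i] = v;
        return y;
    }

    int main() {
        std::vector<double> x(1000000, 0.0);
        for (int64_t k = 0; k < 1000; ++k) update_in_place(x, k, 1.0);  // 1000 cheap writes
        for (int64_t k = 0; k < 1000; ++k) x = update_copy(x, k, 2.0);  // 1000 full copies
        std::cout << x[999] << "\n";                                    // prints 2
    }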

The true challenge of parallelization is not to find faster implementations
for specific constructs like +/⍳ but to understand the trade-offs between
the benefits of parallel execution on the one hand and the (primarily
run-time) costs on the other.
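
What I mean by that trade-off, as a rough sketch (the function name and the
threshold are invented; this is not how GNU APL actually decides): forking
and joining threads has a fixed cost, so parallel execution only pays off
once the array is large enough to amortize it.

    // Invented break-even threshold; the real one depends on the machine
    // and on the per-cell work of the function being applied.
    #include <cstdint>
    #include <future>
    #include <numeric>
    #include <vector>

    double plus_reduce(const std::vector<double> &v) {
        const int64_t N = int64_t(v.size());
        const int64_t PARALLEL_THRESHOLD = 100000;   // assumed break-even point

        if (N < PARALLEL_THRESHOLD)                  // thread start/join would dominate
            return std::accumulate(v.begin(), v.end(), 0.0);

        auto mid = v.begin() + N / 2;                // fork one extra thread for one half
        auto upper = std::async(std::launch::async, [&] {
            return std::accumulate(mid, v.end(), 0.0);
        });
        double lower = std::accumulate(v.begin(), mid, 0.0);
        return lower + upper.get();
    }

    int main() {
        std::vector<double> v(1000000, 1.0);
        return plus_reduce(v) == 1000000.0 ? 0 : 1;  // exit code 0 on success
    }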

In the +/⍳ example this is easy (although useless) since +/⍳ can be detected
statically (at ⎕FX time). However, lazy evaluation in the general case has
far higher run-time costs than +/⍳, so it remains to be seen whether those
costs can ever be amortized in a real-world application. Many of the
examples brought up by proponents of lazy evaluation have the following
design pattern:

1. create some large values,
2. perform a computation on the values (expensive because of their sizes),
3. discard most of the results computed in 2.

I strongly believe that optimizing such design patterns by the APL
interpreter (rather than by the APL programmer) is not worth the
effort and, more importantly, will not speed up APL.
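
Written out as code (an invented example, not taken from any particular
proposal), the pattern looks like the function below; a lazy interpreter
wins here only because step 3 throws most of the work away, and the simpler
fix is for the programmer not to build the large intermediate in the first
place.

    // The "create big, compute, discard most of it" pattern, invented example.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    int64_t eager_version(int64_t N, int64_t k) {
        std::vector<int64_t> big(N);                      // 1. create a large value
        for (int64_t i = 0; i < N; ++i) big[i] = i * i;   // 2. expensive computation
        return big[k];                                    // 3. discard all but one cell
    }

    // what the APL programmer should write instead: compute only what is kept
    int64_t programmer_version(int64_t /*N*/, int64_t k) {
        return k * k;
    }

    int main() {
        std::cout << eager_version(1000000, 17) << " "
                  << programmer_version(1000000, 17) << "\n";   // 289 289
    }
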
> Regards,
> Elias

