Re: Dynamic multidimensional arrays


From: Michael Riedl
Subject: Re: Dynamic multidimensional arrays
Date: Wed, 5 Apr 2023 14:30:14 +0200
User-agent: Mozilla/5.0 (X11; Linux i686; rv:102.0) Gecko/20100101 Thunderbird/102.9.0

Benjamin, Andreas,

I think we can stop this discussion, as it leads nowhere.

I will definitely not use constructs such as val:=value(A,i,k) * value(B,k,j) or val:=A^[i]^[k]*B^[k]^[j] instead of the clear and concise val:=A[i,k]*B[k,j], for the reasons given.
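
For illustration, here is a minimal sketch of the loop in question, assuming fixed-size REAL matrices A, B and C of dimension N (the dynamic case is exactly what this thread is about); the element access sits in the innermost statement and is executed N*N*N times:

    (* Sketch only; assumes A, B, C : ARRAY [0..N-1],[0..N-1] OF REAL *)
    FOR i := 0 TO N-1 DO
      FOR j := 0 TO N-1 DO
        val := 0.0;
        FOR k := 0 TO N-1 DO
          val := val + A[i,k] * B[k,j]
          (* versus: val := val + value(A,i,k) * value(B,k,j) *)
        END;
        C[i,j] := val
      END
    END;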

I gave my points, and I did not see anyone pick up any of the arguments I brought to the table in favour of my suggestion. OK, then we should abandon this conversation and save our time.

A nice Easter to all

Michael


On 05.04.23 at 10:40, Benjamin Kowarsch wrote:
Hi Michael,

On Wed, 5 Apr 2023 at 17:08, Michael Riedl wrote:

if you have to call a function to access a single array element, such a construct would be a complete disaster from a runtime perspective. And I am not even talking about the readability of the resulting code and the ability to debug it ...


Which construct? The method I showed allows the use of array subscripts.

Even then, what you are arguing for is premature optimisation.

Code needs to be written to be (1) correct and (2) comprehensible, for which we need abstraction. Optimisation is first and foremost the job of the compiler, and thus the job of language designers and compiler implementers.

The pragma specification of our revision, for example, has an <*INLINE*> pragma that signals to the compiler that a certain procedure or function is time critical. Now, most of the time the compiler implementer knows better when to optimise than the compiler user, and thus the pragma is not a strict mandate. Nevertheless, it signals time criticality to the compiler, and this information can then be used in the optimising backend to take appropriate measures to optimise the code. Under no circumstances should abstraction and readability be sacrificed because of a belief that the code would execute faster that way. A source code editor is simply the wrong tool for optimisation. An optimising compiler is.
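
For illustration only (the module and procedure names below are hypothetical, and the placement of the pragma is my assumption; the pragma specification itself is authoritative):

    DEFINITION MODULE VectorMath; (* hypothetical module, for illustration *)

    (* <*INLINE*> signals that the procedure is time critical; the compiler
       may honour it in its optimising backend, but it is not a strict mandate *)
    PROCEDURE DotProduct ( VAR a, b : ARRAY OF REAL ) : REAL; <*INLINE*>

    END VectorMath.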

Last but not least, as you can see in the dynamic string library I referenced, there are ways to transfer loops from the outside into a procedure or function whose implementation is hidden. This is what Modula-2 has procedure types for. You define a procedure type and implement various iterator procedures/functions that take an argument of the procedure type. Then you write a procedure that acts on the array in the way you wish, and pass that procedure into the iterator procedure/function call. Now your code runs directly inside the hidden procedure/function without the penalty of a function call per access.

https://github.com/m2sf/m2bsk/blob/master/src/lib/imp/String.iso.mod#L410
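
To make the idea concrete, here is a minimal sketch; the Matrix module and all names below are hypothetical stand-ins, not the actual API of the library linked above:

    DEFINITION MODULE Matrix; (* hypothetical ADT, for illustration only *)

    TYPE Matrix; (* opaque handle to a dynamically allocated 2D array *)

    (* visitor procedure type: called once per element with row, column and value *)
    TYPE ElemProc = PROCEDURE ( CARDINAL, CARDINAL, REAL );

    (* iterator: loops over the hidden representation and calls p for each element,
       so the array accesses themselves happen directly inside the implementation *)
    PROCEDURE ForEachElement ( m : Matrix; p : ElemProc );

    END Matrix.

    (* client side: sum all elements without touching the representation *)
    VAR sum : REAL;

    PROCEDURE AddToSum ( row, col : CARDINAL; value : REAL );
    BEGIN
      sum := sum + value
    END AddToSum;

    (* somewhere in the client code: *)
    sum := 0.0;
    ForEachElement(m, AddToSum);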
 

I have parts of the code with O(n^3) (e.g. a simple matrix multiplication) up to O(n^5) dependence on the size of the problem. It is heavily underestimated by most programmers how much e.g. cache efficiency and similar issues influence the run time in numerical analysis.

If you read my earlier response, you will find that I addressed cache efficiency there.
 

I therefore can only warn against a statement such as "CPU and RAM are no constraints any more".

I didn't say any such thing. I said CPU and RAM constraints are less of an issue, and I did that in the context of the work on the compiler, not in reference to the execution speed of generated code. The point was this: The reason I am advising against bolting on specific features for a very specific use case is not that the compiler would run into resource issues when burdened with that extra feature. Instead, the reason is that the compiler will eventually run into complexity and maintainability issues when burdened with more and more such features.

Our programs need hours or even days on current hardware for medium-sized problems.

And you are absolutely right - I do not want to code everything again and again - there are mature libraries developed over decades, and you cannot win the competition against them when it comes to runtime.


Then why not use a highly optimising Fortran compiler for all the math?

If I am not mistaken, Intel's Fortran compiler is still the gold standard when it comes to math performance, and library availability is unmatched, too.

regards
benjamin

