From: Paul Thomas
Subject: octave linear programming, symbol references and compilation.
Date: Thu, 05 Feb 2004 17:31:39 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2.1) Gecko/20030225

Dear All,

Somewhat intrigued by remarks in John Eaton's DSC 2001 paper and the recent discussion on benchmarking, I have started perusing the octave parser with a view to understanding why linear programming is slow (sometimes) in octave:

The "x=[1:1e5];tot=0;for i=1:1e5;tot=tot+x(i);end;" example (due to Paul Kienzle) is indeed slower in octave than Matlab. It takes 2s on octave 2.1.50 and 0.4s on Matlab R13 (Athlon 1700 with RH9). I had been inclined not to be over bothered with this but it clearly exercises some people and, in any case, reamins the only area where octave underperforms in the benchmark tests.

Expanding on the above example by replacing tot=tot+x(i) with <stmt> yields the following times (s):

 #      <stmt>                octave 2.1.50 (s)    Matlab R13 (s)

 1        -                        0.1154              0.039
 2      for j=0;end;               0.523               0.278
 3      for j=1:10;end;            1.306               0.843
 4      for j=1:100;end;           8.787               4.127
 5      tot;                       0.3717              0.131
 6      tot=z;                     0.6862              0.170
 7      tot=tot+1;                 0.7981              0.263
 8      tot=tot+z;                 0.943               0.298
 9      tot=tot+x(1);              1.853               0.349
10      tot=tot+x(i);              2.027               0.337
11      tot=sin(3.14159);          1.302               0.259

From 1-4, I deduce that, very crudely, for octave (Matlab) there is an overhead of about 4.1 (2.4) microseconds for setting up a loop, with a further 0.83 (0.38) microseconds per iteration. (Subtracting row 1 from row 2 and dividing by the 1e5 outer iterations gives the setup cost; the difference between rows 2 and 4 is spread over 100x1e5 inner iterations and gives the per-iteration cost.)

From 5-8, there is a 2.5-3 (0.65-0.9) microsecond cost for a variable reference, depending on the context, and about 1.1 (0.6) microseconds for a literal constant. (I suspect, although these tests are far too crude to show it, that binary arithmetic is quite a bit slower in Matlab.)

From 8-10, I think that there is a whopping 7.5 microseconds extra for an indexed reference. The equivalent in Matlab is unmeasurably small.

From 7 and 11, it seems that octave takes about 7 microseconds to reference the sine function (or its execution is a bit slow). In Matlab, the reference to sine costs about the same as a variable reference: 0.9 microseconds.

I know that these tests do not accurately pin down the time taken by the different operations, but they must be in the right ballpark.

I conclude that the FOR/END construction in octave is about a factor of 2 slower than Matlab's, whilst direct references are 3-4 times slower, and indexed and function references are about 10 times slower.

I had been gearing up to put some logging and timing into the C code fragments in parser.y, with a view to figuring out where the time is spent; i.e. is the time being lost in the parsing machinery itself or in the C fragments? However, from the above, I think this would be a wasted effort, and that resolving symbol references whilst executing the tree code is what must be taking the time.
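(For what it is worth, the instrumentation I had in mind was nothing fancier than a scoped wall-clock timer wrapped around the suspect fragments -- a rough sketch with names of my own invention, not code from the octave sources:)

    #include <cstdio>
    #include <sys/time.h>

    /* Illustrative only -- not part of the octave sources.  Accumulates
       the wall-clock time spent inside a scope into a running total.  */
    class scoped_timer
    {
    public:
      scoped_timer (double& total) : m_total (total)
      { gettimeofday (&m_start, 0); }

      ~scoped_timer (void)
      {
        timeval now;
        gettimeofday (&now, 0);
        m_total += (now.tv_sec - m_start.tv_sec)
                   + 1.0e-6 * (now.tv_usec - m_start.tv_usec);
      }

    private:
      double& m_total;
      timeval m_start;
    };

    int main (void)
    {
      static double lookup_time = 0.0;

      for (int i = 0; i < 1000000; i++)
        {
          /* In the real thing this block would wrap a symbol lookup or
             a parser action rather than an empty loop body.  */
          scoped_timer t (lookup_time);
        }

      printf ("total time: %g s\n", lookup_time);
      return 0;
    }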

Presumably, the parser could pass on pointers into the symbol tables, or positions within them, rather than the names. Has anybody investigated this possibility or, alternatively, more rapid algorithms for searching the symbol tables? If I am wasting my time treading a well-worn path, I would be grateful if somebody would let me know.
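To make the idea concrete, here is a toy sketch of what I mean (it uses a std::map and invented names like symbol_record purely for illustration; it is not octave's actual symbol table code): resolving the name at every reference versus resolving it once, at parse time, and keeping hold of a pointer to the record.

    #include <cstdio>
    #include <map>
    #include <string>

    // Toy illustration only -- not octave's real symbol table classes.
    struct symbol_record { double value; };

    typedef std::map<std::string, symbol_record> symbol_table;

    // What happens now (conceptually): every reference re-searches the
    // table by name.
    double lookup_by_name (symbol_table& tab, const std::string& name)
    {
      return tab[name].value;
    }

    // What the parser could hand to the evaluator instead: a pointer
    // resolved once, so each reference is just a dereference.
    double lookup_by_record (symbol_record* rec)
    {
      return rec->value;
    }

    int main (void)
    {
      symbol_table tab;
      tab["tot"].value = 1.0;

      // "parse time": resolve the name once.
      symbol_record* tot = &tab["tot"];

      // "run time": 1e7 references each way.
      double s1 = 0.0, s2 = 0.0;
      for (int i = 0; i < 10000000; i++)
        s1 += lookup_by_name (tab, "tot");
      for (int i = 0; i < 10000000; i++)
        s2 += lookup_by_record (tot);

      printf ("%g %g\n", s1, s2);
      return 0;
    }

Obviously the real thing would have to cope with scoping, clearing of variables and so on; the point is only that the per-reference cost drops to a pointer dereference.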

It occurred to me that appropriate C fragments could be spewed out by the parser into a stream for digestion by g++ or mkoctfile, thus forming an octave compiler of sorts. Has anybody tried this?
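To show the sort of thing I mean, here is a hand-written sketch of what the emitted code for the benchmark loop might look like as a .oct file (untested; the function name bench_loop is my own, and the liboctave calls are only what I believe the C++ interface to look like):

    #include <octave/oct.h>

    // Hand-written sketch of what an octave "compiler" might emit for
    // the benchmark loop; in principle it would be built with
    //   mkoctfile bench_loop.cc
    // and called as bench_loop from the octave prompt.
    DEFUN_DLD (bench_loop, args, nargout,
      "compiled equivalent of x=[1:1e5]; tot=0; for i=1:1e5; tot=tot+x(i); end")
    {
      const int n = 100000;

      RowVector x (n);
      for (int i = 0; i < n; i++)
        x(i) = i + 1;              // x = [1:1e5]

      double tot = 0.0;
      for (int i = 0; i < n; i++)
        tot += x(i);               // tot = tot + x(i)

      return octave_value (tot);
    }

If the symbol reference cost really does dominate, something along these lines ought to get the loop close to compiled-C speed.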

Best regards

Paul Thomas


