octave-maintainers

Re: unary mapper system redesigned + a few questions


From: Jaroslav Hajek
Subject: Re: unary mapper system redesigned + a few questions
Date: Wed, 18 Nov 2009 08:59:37 +0100



On Wed, Nov 18, 2009 at 8:42 AM, Judd Storrs <address@hidden> wrote:
On Wed, Nov 18, 2009 at 1:35 AM, Jaroslav Hajek <address@hidden> wrote:
> I seriously doubt you need this done for every computation. In fact I'm
> interested if you have any real-life script that starts producing wrong
> results after the discussed patch is applied.

I have a quick question while I'm pondering whether this
change would affect what I do. Does this demotion behaviour only
happen for scalars, or will the type mangling affect matrices as well? I assume
it doesn't affect matrices, because checking all elements of a matrix
would be a pretty heavy operation.


It does affect matrices. Every time a matrix is encapsulated in octave_value, it is scanned for all zero imaginary components (currently, there's an extra check whether the zero is positive). The check is short-circuit, so for genuinely complex matrices usually only a few elements are scanned.
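As an illustrative sketch of that behaviour (the exact results depend on the Octave version and on whether the discussed patch is applied), the narrowing can be observed from the interpreter:

```
A = [1+2i, 3+4i];        % genuinely complex matrix; scan stops at the first
class (A)                % nonzero imaginary part, so it stays complex
B = A - [0+2i, 0+4i];    % the result has all-zero imaginary parts
isreal (B)               % narrowed to a real matrix when encapsulated
```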
 
I guess my response to the isreal() problem was that it was a bug in
isreal() not checking the magnitude of the imaginary component. (1,-0)
and (1,0) should both be real because the imaginary component is zero.

In fact, isreal only checks the type of the result, so you can force a complex number with a zero imaginary part using complex().
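A sketch of the distinction being described (results may vary between Octave versions, but this is the documented behaviour of complex()):

```
isreal (1 + 0i)          % true: the zero imaginary part is narrowed away
isreal (complex (1, 0))  % false: complex() forces complex storage,
                         % bypassing the narrowing, so isreal sees the type
real (complex (1, 0))    % extracts the real part explicitly
```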
 
Personally, I prefer to be explicit when a narrowing is expected e.g.
use real() when I want it to happen.


The automatic narrowing is a feature of Matlab. Dropping the narrowing entirely is something that IMHO wants a separate discussion. All that I'm saying is that if the automatic narrowing is done, constraining it to positive zeros makes little sense.
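Being explicit, as Judd suggests, might look like this hypothetical sketch; real() discards the imaginary part unconditionally, so the result's type no longer depends on any narrowing rule:

```
z = sqrt (-4) * sqrt (-1);   % complex arithmetic; the value happens to be -2
w = real (z);                % explicitly discard the (zero) imaginary part
class (w)                    % real ("double") regardless of narrowing rules
```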
 
From a performance tweaking/optimization point of view, I don't like
that type conversions become data-driven rather than programmer-driven,
if you know what I mean.

I know what you mean. Data-driven conversions are typical in the Matlab/Octave language.
 
Won't it be confusing if some functions run
faster or use more memory on some input data compared to others?
 
This is happening all the time. For example, ranges try to act like row vectors wherever they can; however, in a lot of places they work much faster than the equivalent row vector. Ask yourself if you find that confusing; I don't. You just need to keep in mind that 1:10 and [1:10] are *not* the same thing, and the first one is usually more efficient.
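The range example can be sketched as follows (memory figures are illustrative; a range is stored lazily as base, increment, and length until an operation forces it to expand):

```
r = 1:1e6;          % kept internally as a range: base, increment, length
v = [1:1e6];        % concatenation forces expansion into a full row vector
class (r)           % "double" -- the range masquerades as a row vector
class (v)           % "double" as well; the difference is internal
whos r v            % r occupies a few bytes, v several megabytes
```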

If you're doing something time critical, how do you know if you've found
the worst case? It would suck if things crashed or failed seemingly at random
based on the data representation of the input.

If you're doing something time critical in Octave, you should really understand a bit about how the internals work. There's no way out.

--
RNDr. Jaroslav Hajek
computing expert & GNU Octave developer
Aeronautical Research and Test Institute (VZLU)
Prague, Czech Republic
url: www.highegg.matfyz.cz
