octave-maintainers
Re: julia language


From: Levente Torok
Subject: Re: julia language
Date: Sat, 19 May 2012 12:11:01 +0200

On Wed, May 16, 2012 at 4:54 AM, Max Brister <address@hidden> wrote:
> Hi Levente,
>
>> Hi Max,
>>
>> [snip]
>>>>>> I am also thinking if octave could introduce reference type of
>>>>>> variables (matlab doesn't have this).
>>>>>> This would be a great thing, indeed.
>>>>>
>>>>> I don't think that introducing a reference type in Octave is a good
>>>>> idea. Doing this would require a major rework of the interpreter, and
>>>>> it is not clear what the syntax should look like. If call by value is
>>>>> really causing performance issues, I advocate using live range
>>>>> analysis to further reduce copies instead of a reference type.
>>>>
>>>> Well. I can imagine it in a way that wouldn't imply rewriting the 
>>>> interpreter.
>>>> The first thing that comes to my mind is to have a cast-like type
>>>> modifier similar to uint32, i.e.:
>>>>  a = ref(a)
>>>
>>> That would end up breaking most existing scripts. For example,
>>>
>>> function foo (a)
>>>  b = a
>>>  a(1) = b(1) + 1
>>>  if a(1) == b(1)
>>>    error ("My script is broken!")
>>>  endif
>>> endfunction
>>>
>>> If a were passed to foo as a reference, we would see an unexpected
>>> failure.
>>
>> Only if there is parallel execution present, right?
>
> This does not depend on parallel execution (but maybe we are assuming
> ref has different semantics?). Take the following code for example
>
> bar0 = [1 2 3];
> bar = ref (bar0);
> foo (bar);
>
> In foo we would then get both a and b as references to bar0. The line
> a(1) = b(1) + 1
> then changes bar0(1) to 2. We then have a(1) == b(1) and get a
> failure. Without the ref function we will never see a(1) == b(1) and
> there will be no failure.

True.
Well, it comes down to the question of whether a copy of a reference
should itself be a reference or a value.
But in fact you are right; this is probably not the best way to go about it.
Still, I maintain that we would need some notion of a reference in the
Octave language.
Don't you think so?

>
>>> We could instead introduce a new keyword for function
>>> arguments like in C++. Something like
>>>
>>> function foo (ref a)
>>>  # now a is a reference
>>> endfunction
>>>
>>> But this makes the Octave language significantly more complex, as
>>> users now must check function signatures to see if their input
>>> variables are modified. Instead, if we make the interpreter recognize
>>> patterns like:
>>> a = foo (a);
>>> We can get the same performance benefit of pass by reference in the
>>> current system without having to change the language at all.
>>
>> Plus, if a function operates on the variable in a read only way, and
>> we exclude async access (multi thread/core) to the data we can use
>> reference w/o any worry, I think.
>
> We are already using a reference internally in this case. A copy is
> only created if you mutate the argument.

I see, that is good to hear.
If I pass it on to another function, does it still remain a reference?
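
As a sketch of the copy-on-write behaviour being discussed (illustrative
only; the function names are made up, and the actual copy decisions are
internal to the interpreter):

```octave
% A read-only use of the argument: no copy should be made internally.
function s = peek (a)
  s = sum (a);            % a is only read
endfunction

% The first write to the argument is where a copy would be created.
function a = bump (a)
  a(1) = a(1) + 1;        % mutation triggers copy-on-write
endfunction

x = rand (1, 1000);
y = peek (x);             % x shared by reference internally
x = bump (x);             % the 'a = foo (a)' pattern: a candidate for in-place update
```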


>> [snip]
>>>>
>>>> And what do you think of Julia arithmetic such as:
>>>> nheads = @parallel (+) for i=1:100000000
>>>>  randbit()
>>>> end
>>>> which would farm the iterations out to all computing nodes and sum the results together.
>>>
>>> I really like how Julia supports parallel computation. Matlab has
>>> something similar, parfor, but we currently don't parallelize it in
>>> Octave. There are several underlying technical issues in the
>>> interpreter which make this sort of parallel execution difficult.
>>>
>>> There exist Octave Forge packages that provide some parallelization
>>> support [3] [4]. However, it would be nice to parallelize the
>>> interpreter.
>>
>> I don't think Octave really needs to focus on parfor, since
>> interpreted languages are not known for running (for) loops efficiently.
>> Currently I am quite happy with parcellfun, but I can imagine
>> something that is easier to use in daily life.
>
> Actually, parfor does support reduction (like @parallel in Julia).
> Writing the Julia example with parfor looks like
> nheads = 0;
> parfor i=1:100000000
>  nheads = nheads + randbit ();
> end
>
> It should currently be possible to create a function like parcellfun
> that does reductions like Julia's @parallel (I haven't taken a look at
> the code for parcellfun though).
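
As a sketch of that idea (assuming parcellfun from the Octave Forge
general package; randbit () does not exist in Octave, so rand () > 0.5
stands in for it, and the chunking is illustrative):

```octave
pkg load general               % parcellfun lives in the Octave Forge general package

nproc = 4;
n = 1e6;                       % scaled down from the 1e8 in the example above
edges = round (linspace (0, n, nproc + 1));
counts = num2cell (edges(2:end) - edges(1:end-1));   % iterations per worker

% each worker computes a partial sum of its share of the random bits
partial = parcellfun (nproc, @(c) sum (rand (c, 1) > 0.5), counts);
nheads = sum (partial);        % final reduction of the partial sums
```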

I wonder if it is worth creating parallel and/or bigData versions of the
unique, intersect, find, findrows, and intersectrows functions.
In my job they are the workhorses of vectorized code.

What do you think?
(I have already made a bigData findrows which breaks the data into
chunks and processes them, but the code is not clean enough yet.)
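
A rough sketch of the chunking idea (chunked_findrows and the exact
row-matching rule here are illustrative, not the actual code mentioned
above; the broadcast comparison needs Octave 3.6 or later):

```octave
% Process A in chunks of 'chunk' rows, returning the global indices of
% rows equal to 'pattern' (a 1-by-columns(A) row vector).
function idx = chunked_findrows (A, pattern, chunk)
  idx = [];
  for k = 1:chunk:rows (A)
    r2 = min (k + chunk - 1, rows (A));
    % broadcasting compares pattern against every row of the chunk
    hit = find (all (A(k:r2, :) == pattern, 2));
    idx = [idx; k - 1 + hit];   % shift local hits to global row indices
  endfor
endfunction
```

Usage would be along the lines of `idx = chunked_findrows (data, [1 2 3], 100000);`.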

Lev

