Hi Rowan,

see below.

On 4/13/20 4:57 PM, Rowan Cannaday wrote:

"I added a new primitive in GNU APL which computes the QR factorization of a real or complex matrix"
Looking through the source code I see the following in `src/LApack.hh`:

```
// Compute QR factorization with column pivoting of A:
// A * P = Q * R
//
char * pivot_tau_tmp = new char[N*(sizeof(ShapeItem) + 4*sizeof(APL_Float))];
```
Which is part of the parent function:

```
int scaled_gelsy(Matrix<T> & A, Matrix<T> & B, double rcond)
```
Is this exposed to the APL interpreter? Or is it used internally in dyadic ⌹? If it's used internally, I will do some digging later to see whether dyadic ⌹ can be used to solve the eigenvalue problem.
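As an aside, a QR factorization by itself already opens the door to eigenvalues via the classic unshifted QR iteration (A_k = Q_k R_k, then A_{k+1} = R_k Q_k). The following is an illustrative sketch only, not GNU APL source code; it shows the idea for a 2x2 symmetric matrix, where the QR step reduces to a single Givens rotation:

```cpp
#include <cmath>

// Illustrative sketch (not GNU APL code): given any QR factorization,
// the unshifted QR iteration
//     A_k = Q_k * R_k,   A_{k+1} = R_k * Q_k
// drives a symmetric matrix towards a diagonal matrix whose diagonal
// entries are the eigenvalues of A.  For a 2x2 matrix the QR step is a
// single Givens rotation.
void qr_iteration_2x2(double A[2][2], int iterations)
{
    for (int k = 0; k < iterations; ++k)
    {
        // Givens rotation G = [[c, s], [-s, c]] zeroing A[1][0]: G*A = R
        const double r = std::sqrt(A[0][0]*A[0][0] + A[1][0]*A[1][0]);
        if (r == 0.0)
            return;                 // first column already zero: done
        const double c = A[0][0] / r;
        const double s = A[1][0] / r;

        const double R[2][2] = {    // R = G * A (upper triangular)
            {  c*A[0][0] + s*A[1][0],  c*A[0][1] + s*A[1][1] },
            { -s*A[0][0] + c*A[1][0], -s*A[0][1] + c*A[1][1] } };

        // next iterate A := R * Q  with  Q = G^T = [[c, -s], [s, c]]
        A[0][0] = R[0][0]*c + R[0][1]*s;   A[0][1] = -R[0][0]*s + R[0][1]*c;
        A[1][0] = R[1][0]*c + R[1][1]*s;   A[1][1] = -R[1][0]*s + R[1][1]*c;
    }
}
```

For example, starting from the symmetric matrix ((2 1)(1 2)), the iterates converge to a diagonal matrix with the eigenvalues 3 and 1 on the diagonal.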
My linear algebra experience is mostly from college and I am out of practice; I've mostly been piecing things together from various Wikipedia articles.
Actually, no. gelsy and friends are part of my dyadic ⌹ implementation, back in 1.4. The new factorization is in Bif_F12_DOMINO.cc (function householder). It is exported to APL as monadic ⌹ with axis (Z←⌹[EPS] B, see https://www.gnu.org/software/apl/apl.html#Section-3_002e18).
My initial plan was to replace the QR factorization in LApack.hh with the QR factorization in Bif_F12_DOMINO.cc, but it seems that the old implementation is still a bit faster than the new one, for reasons that I do not yet fully understand. The old implementation uses quite a few sin(), cos() and sqrt() calls and also several memory allocations, while the new implementation is entirely based on the 4 basic arithmetic operations and a single memory allocation. So I expected the new implementation to be faster, but apparently it is not.
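For comparison, here is what a plain textbook Householder QR looks like. This is a sketch of the general technique, not the code in Bif_F12_DOMINO.cc or LApack.hh, and note that this naive version still needs one sqrt() per column for the reflector norm:

```cpp
#include <cmath>

// Textbook Householder QR of a small real matrix -- an illustrative
// sketch, NOT the GNU APL implementation.  Each column k is reflected
// onto the k-th axis by H = I - 2*v*v'/(v'*v); the reflections are
// accumulated into Q, and A is overwritten with R (upper triangular),
// so that Q * R equals the original A.
enum { M = 3, N = 3 };

void householder_qr(double A[M][N], double Q[M][M])
{
    for (int i = 0; i < M; ++i)             // Q := identity
        for (int j = 0; j < M; ++j)
            Q[i][j] = (i == j) ? 1.0 : 0.0;

    for (int k = 0; k < N && k < M - 1; ++k)
    {
        double v[M] = { 0.0 };              // reflector for column k
        double norm = 0.0;
        for (int i = k; i < M; ++i)
        {   v[i] = A[i][k];   norm += v[i]*v[i];   }
        norm = std::sqrt(norm);             // one sqrt() per column
        if (norm == 0.0)
            continue;                       // column already zero
        v[k] += (v[k] >= 0.0) ? norm : -norm;
        double vtv = 0.0;
        for (int i = k; i < M; ++i)
            vtv += v[i]*v[i];

        for (int j = 0; j < N; ++j)         // A := H * A
        {
            double s = 0.0;
            for (int i = k; i < M; ++i)   s += v[i]*A[i][j];
            s = 2.0*s/vtv;
            for (int i = k; i < M; ++i)   A[i][j] -= s*v[i];
        }
        for (int i = 0; i < M; ++i)         // Q := Q * H
        {
            double s = 0.0;
            for (int j = k; j < M; ++j)   s += Q[i][j]*v[j];
            s = 2.0*s/vtv;
            for (int j = k; j < M; ++j)   Q[i][j] -= s*v[j];
        }
    }
}
```

Besides the square roots, a straightforward version like this also touches every matrix element once per column, so the constant factors (rather than the operation counts alone) may well decide which implementation wins.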
Thanks for the ACM article; I scanned it briefly and it seems to be very helpful. I agree that reading the equations in APL has made them abundantly clearer, so I have no problem pursuing an APL-based solution, especially if it can benefit the community.
That was my point: we could establish APL as a language for describing algorithms. I was thinking along the lines of Iverson's "Notation as a Tool of Thought", which is also free now:
https://dl.acm.org/doi/10.1145/358896.358899
Any additional ACM APL articles are appreciated!

- Rowan
Hi again,

sorry, but I have to correct myself: it was actually Peter Teeson who pointed us to the ACM papers.

Best Regards,
Jürgen
On 4/13/20 1:17 PM, Dr. Jürgen Sauermann wrote:

Hi Rowan,

I would like to share some thoughts of mine with you.
First of all, some history. As you have correctly noticed, LAPACK was a build requirement for GNU APL up to and including version 1.4. Looking at the configure.ac in 1.4:

AC_CHECK_LIB([lapack], [dgelsy_])
AC_CHECK_LIB([blas], [dcopy_])
#AC_CHECK_LIB([gfortran], [_gfortran_transpose])
So, I only needed a single function, dgelsy (in order to implement ⌹). dgelsy was kindly provided by liblapack, liblapack in turn depended on libblas, and libblas depended on libgfortran. At that time I wasn't sure if these libraries would be available on all platforms. My suspicion was that libgfortran might limit the use of GNU APL to GNU-only platforms. Also, installing these libraries was rather tedious at that time.
In order to simplify the GNU APL build process, I decided to manually translate the FORTRAN version of dgelsy_ (and all functions that it called, and all functions that they called...) to C++. Before doing that (which was a major task) I looked at existing ports of LAPACK to C or C++, but none of them really convinced me. You can still see the result of the manual translation in file src/LApack.hh of GNU APL.
So, as a summary, reintroducing a dependency on liblapack is not an idea that would make me happy. The situation here is different, though: unlike ⌹, which is a built-in primitive of every APL interpreter, LAPACK support would be an optional feature that could be dropped on all platforms that cause problems, similar to the emacs and SQL functions in GNU APL today.
A second thought is the following. As you may have noticed, I added a new primitive in GNU APL which computes the QR factorization of a real or complex matrix. Before implementing that primitive I browsed through quite a number of linear algebra publications on the web, most of them concerning the Householder transformation, which is the core algorithm of both ⌹ and QR factorization. I should at this point admit that I never fully understood what my own code in LApack.hh was doing. Rather, I translated every LAPACK function needed from FORTRAN to C++ in a 1:1 fashion, and the result seemed to be OK. For that reason I was never too happy with the ⌹ implementation in GNU APL, even though it apparently does what it should.
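In standard textbook notation (not taken from the GNU APL sources), the Householder transformation mentioned above reflects a column x onto the first coordinate axis:

```latex
v = x + \operatorname{sign}(x_1)\,\lVert x \rVert\, e_1,
\qquad
H = I - 2\,\frac{v v^{\mathsf T}}{v^{\mathsf T} v},
\qquad
H x = -\operatorname{sign}(x_1)\,\lVert x \rVert\, e_1 .
```

Applying one such reflection per column zeroes the subdiagonal entries (giving R), and the product of the reflections (each H is orthogonal and its own inverse) forms Q.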
Then, while I was looking into QR factorization, Jay Foad pointed us to the now available ACM papers. I had earlier looked for some of these papers because they were cited in the ISO APL standard, but at that time I could not get them without paying for them (I am not an ACM member). On the other hand, the papers were not that important either. While browsing through the now freely available papers, I stumbled upon a paper, "THE HOUSEHOLDER ALGORITHM AND APPLICATIONS", written by Garry Helzer in the 1970s. That paper contains in its appendix a 12-line APL function implementing the algorithm that I was trying to implement. Suddenly, simply by looking at those 12 lines of APL code, the matter became so crystal clear to me that I could implement it right away.
The point that I am trying to make here is the following. Suppose we were to create a good translation from (parts of) LAPACK to APL. The resulting APL code would then be fairly valuable, not only for those who want to solve a particular problem (like eigenvalues) but also for those who want to understand the mathematics behind the problem. Such a translation would not be limited to GNU APL but would be useful for all APL programmers.
Of course a translation of LAPACK to APL will perform worse than a direct implementation in, say, C/C++. But that is the only drawback that I can currently see. Given the many advantages that a translation of LAPACK to APL could have, that would be my preferred option (= the last option in your "way forward" list below).

Best Regards,
Jürgen
On 4/12/20 6:08 PM, Rowan Cannaday wrote:

I've been mulling over methods of bringing linear algebra methods into gnu-apl, specifically eigenvalues & eigenvectors (leaving open the possibility for more). The canonical methods for this are from LAPACK (which was formerly a compilation dependency). Specifically:

dgeev
dsyev
zheev
zgeev
There are a few paths forward I can think of:
- include LAPACK as an optional dependency and wrap the functions (similar to how ⌹ was implemented before it was ported to C++), possibly with a ⎕FOO wrapper
- Port (or find) C/C++ versions of the functions and use the ⎕FX interface
- Implement the underlying algorithms as APL libraries
Mostly looking for advice on how to proceed in a way that would be:
1. Simple to integrate
2. 'okay' in performance
3. Possible to expand to more LAPACK functionality
4. Free of unnecessary compilation challenges

Since this is something I suspect has been considered, I figured I'd reach out to the broader list before attempting something.

Thanks,
- Rowan