[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: [Axiom-developer] directions
Sat, 4 Jul 2015 11:47:01 -0500
>I don't have pointers applying FPGA techniques to symbolic
>computations. However, the trend is to use something
>similar to FPGA all the way down to the manufacture level.
>For some background, see
>Qualcomm, for example already licenses this technology, as
>do many other companies. Their idea is to tailor each
>individual chip at the single consumer level to enhance
I'm actually working with FPGAs to do hardware security with some
people at CMU.
>By the way, the link
>is not readable by Adobe or Preview on Macs.
I cloned the repo and can read the thesis.pdf file.
>So this is mainly a numerical set up. Graphics processors
>have been used for numerical scientific work for a long
>time already since GPUs have improved tremendously in
>power (my layman observation is the advance is faster than
>CPUs in terms of number of cores, speed, and low power
>Since Axiom is software, I am not sure how the technique
>may be applied, unless you are thinking about an Axiom
>chip, or some hybrid numerical-symbolic approach is used.
>However, Axiom is a relatively small system (compared to
>modern mammoth bloated software),
The speculation is that Intel will merge the Altera FPGA fabric
into the CPU. This already exists on my FPGA board (dual hardware
CPUs, dual firmware CPUs, and a large FPGA fabric in one chip).
Think of this as a "programmable ALU" where you can temporarily
(or permanently) run an ALU to do computational mathematics. At
the moment this undergrad thesis shows that one can do a good deal
of numerical ODE computation (exponential, simple harmonic, hyperbolic
cosine, simple forced harmonic).
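As a rough software analogy (not the thesis code, and obviously not
hardware), the kind of fixed-step ODE evaluation being mapped onto the
FPGA fabric looks like this; the function name and parameters below are
made up for illustration:

```python
import math

def harmonic(omega=1.0, x0=1.0, v0=0.0, dt=0.001, steps=1000):
    """Semi-implicit Euler integration of the simple harmonic
    oscillator x'' = -omega^2 * x, the sort of tight update loop
    that maps naturally onto a pipelined FPGA datapath."""
    x, v = x0, v0
    for _ in range(steps):
        v -= omega * omega * x * dt   # velocity update from acceleration
        x += v * dt                   # position update from new velocity
    return x

# After 1000 steps of dt=0.001 (t = 1.0), x approximates cos(1.0).
```

On hardware the loop body becomes a fixed arithmetic pipeline clocked
once per step, which is where the speed comes from.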
The extrapolation would be to do a symbolic computation, compile
the result to the "FPGA ALU", and do blindingly fast evaluations.
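A minimal sketch of that symbolic-then-compile workflow, in plain
Python rather than Spad, with Python's own byte-compiler standing in
for the hypothetical "FPGA ALU" back end (the helper name and the
sample formula are assumptions, not Axiom output):

```python
import math

def compile_expr(expr_src):
    """Compile a closed-form expression in x into a fast callable,
    analogous to loading a symbolically derived formula onto a
    programmable ALU for repeated numeric evaluation."""
    code = compile(expr_src, "<expr>", "eval")
    return lambda x: eval(code, {"math": math, "x": x})

# Suppose symbolic work produced the solution x(t) = exp(-t)*cos(t);
# the compiled form can then be evaluated cheaply at many points:
f = compile_expr("math.exp(-x) * math.cos(x)")
```

The point is the division of labor: the symbolic system does the hard
algebra once, and the compiled artifact does the blindingly fast
evaluations.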
I'm already pushing forward on the BLAS work (bookvol10.5).
Given a programmable "FPGA ALU", this could really change the game.
> and I think the priority
>should be to make Axiom's learning curve less steep than
>make Axiom run faster, or even more securely.
I'm still pushing forward on "making the learning curve less steep".
I have been collecting permissions from authors to use their work as
part of the books, the most recent being work on Clifford Algebra.
I'm trying to collect permissions for each of the domains. These will
be used to introduce domains in a book-like readable treatment of the
Spad code so people can understand the algorithms while reading the
code. I have material to write, a document structure that will
organize it, and the permission to use it. It just takes time.
I'm also pulling yet more of the code into the books, eliminating dead
code, providing signatures for the lisp code, organizing the code into
chapters and sections, documenting the data structures, developing
help, )display examples, and input files for domains, and working on
both an enlarged computer algebra test suite and program proof
technology. It just takes time.
The incremental changes are hard to spot until there is a major
completion point. These have happened (e.g. replace noweb with
straight latex, remove boot and aldor, continuous integration with
docker, restructure the algebra, a new browser front end, additional
algebra, merged lisp code, enlarged test suite, etc.) It just takes time.
Eventually Axiom will be a system that can be maintained, modified,
extended, and taught by people who are not the original authors
or one of the three gurus. It just takes time.
Fortunately Axiom has a 30 year horizon.