
Re: [Axiom-developer] ISSAC and CAFE trip report

From: C Y
Subject: Re: [Axiom-developer] ISSAC and CAFE trip report
Date: Mon, 17 Jul 2006 13:50:12 -0700 (PDT)

--- root <address@hidden> wrote:

> Yep. Stephen and Mike Dewar are discussing the license details.
> Stephen has been away in a series of conferences and had not yet
> returned home so there was no progress in the last 2 weeks. But
> conversations are happening.

Yay!  Fingers crossed...

By the way, has anyone tried building the Aldor stuff with the silver
branch?  I hit a couple of snags (which are probably my fault) but I
was curious if it was a one-off issue with my machine or if others have
had trouble.

> > Did you distribute the DoyenCD while at the conference? If so was
> > there much interest in this format?
> I gave out 100 copies of the CD as well as 10 copies of the
> tutorial. And I had long, detailed conversations with virtually
> everyone I knew at the conference (roughly 20 people, all the old
> timers) and some I didn't know. Mostly pushing literate programming.

Thank you Tim (I think I can safely say from all of us) for your
investment of time and materials in promoting Axiom.

> I want the next ISSAC CD to be a Doyen CD. I helped to make the ISSAC
> CD for the last three years. I think we need to get a demo version
> running by december if we're to have a hope of replacing the static

That's a neat idea - what can we do to help?

> Several points arose. I claim that this isn't a science if we cannot
> build on each other's work. In computational math it is not
> sufficient to show an algorithm in a paper. In that case I have to
> write a program to implement your algorithm before I can modify it
> to improve it. Thus I'm forced to start from nothing rather than
> build on your work.

Yes and no.  It might be argued that programming the algorithm yourself
is similar to a physical scientist having to duplicate the lab equipment
needed to run an experiment.  (I'm sure you heard all the arguments at
ISSAC ;-).

I would tend to view it as follows: since computational math cannot
realistically be separated from the computer, and the "apparatus" can be
duplicated at very low cost (compared to an XRD analysis system, for
example), it is inefficient to force researchers to spend time
duplicating a setup already available to them.  (The catch, of course,
being that it is only available if the license/cost permits it - or
sometimes the software is in-house only at the university and never
released.)

One benefit of independent implementations is that they can expose
problems with the original experimental setup (e.g. the CAS itself).
But I think the way forward there is to incorporate and interface with
proof systems when a particular result needs to be verified correct
with a high degree of certainty.  This effectively "tests" the CAS by
putting its result up for review by pure logic systems, which (if they
do THEIR job) will provide a high degree of confidence in the answer.
(Of course, you might suspect the proof software too, but human
reviewers aren't perfect either, so SOMETHING has to be trusted.)
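To make the cross-checking idea concrete, here's a minimal sketch in
Python (purely illustrative - a numeric spot-check is of course a much
weaker stand-in for a real proof system like Coq or ACL2, and the
functions here are my own toy example, not anything out of Axiom): a
CAS-style claimed antiderivative is checked independently by
differentiating the original numerically at sample points.

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Central finite difference, used as an independent check on a symbolic claim."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Claim (as a CAS might report it): d/dx [x * sin(x)] = sin(x) + x * cos(x)
original = lambda x: x * math.sin(x)
claimed = lambda x: math.sin(x) + x * math.cos(x)

# Spot-check the claim at several sample points
for x in [0.0, 0.5, 1.3, 2.7]:
    assert abs(numeric_derivative(original, x) - claimed(x)) < 1e-5
print("claim survives the spot-check")
```

A real proof system would replace the sampled check with a machine-checked
symbolic argument, but the workflow is the same: the CAS proposes, an
independent system disposes.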

Anyway.  Pardon the rant - I'm sure everyone here is more fluent in
such matters than I am. (setf *deliver-sermon* nil) ;-)

> Plus I don't feel that complexity proofs are valid without the code.
> People argue things like "my algorithm is O(n^2) over the Integers".
> But my computer does not have Integers, it has fixnums and bignums.
> If the coefficients become bignums then an O(n^2) algorithm becomes
> at least O(n^3) or worse.

I think I would put it that a complexity proof in an ideal environment
is of limited utility when it comes to application of the algorithm in
question.  I have always thought it would be a useful (and difficult)
exercise to build "successive complexity analysis" into the design of
an OS and compiler - that way you can find out what the real overhead
is at all levels in the system, from core OS bootstrapping and
processing routines on up through integration algorithms.  I suppose
that's not possible practically though, unless we re-create a new
super-lisp machine from the circuits on up.  
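Tim's bignum point can be made concrete with a toy experiment (a Python
sketch of my own, not Axiom code): multiply out (1 + x)^n with the naive
O(n^2) algorithm and watch the coefficients outgrow any fixnum, so the
supposedly unit-cost coefficient multiplies are really bignum multiplies
whose cost grows with n.

```python
import math

def poly_mul(p, q):
    """Naive polynomial product: one integer multiply per coefficient pair."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# Build (1 + x)^200 by repeated naive multiplication.
p = [1]
for _ in range(200):
    p = poly_mul(p, [1, 1])

# Sanity check: the coefficients are the binomial coefficients.
assert p[100] == math.comb(200, 100)

# The central coefficient needs nearly 200 bits -- far past any fixnum,
# so the "O(n^2) coefficient operations" hide a growing per-operation cost.
print(max(c.bit_length() for c in p))
```

Counting bit operations instead of coefficient operations is exactly the
gap between the paper's O(n^2) and what the machine actually does.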

It has always bothered me that people are capable of writing more
efficient assembler routines than a compiler - what are they seeing
that the compiler isn't?  Why can't a compiler see it?  I suppose it's
a question of how much time you want to spend compiling, but given how
long some binaries are run in commercial applications (and how many
systems will run them every day) I would think there would be some
payoff in letting the compiler spend far more time optimizing.

> And, of course, this literate programming solution to keeping the
> research and the code together got a lot of voice-time. 

Was it well received, on the whole?

> I had a conversation with Carlo Traverso, head of the math dept at
> Univ. of Pisa. He gave a talk at Calculemus the day before detailing
> literate programs being submitted to an "Active Journal". He gave me
> all of his implementation source code and slides which I'll be adding
> to the Doyen CD. Carlo is working to create an Active Journal for
> this field so people can submit their literate research for review.

I guess the trick is getting peer review and reputation established. 
Is his presentation available online anywhere?

> We might finally have permission to use Manuel Bronstein's algebra
> code. There is only one step left and it should complete shortly
> (in principle). This will mean a lot of work for me but it's the
> only way to keep Manuel's work alive.

Excellent!  Can we help you with the work?

