Deep Learning for Symbolic Mathematics
From: Tim Daly
Subject: Deep Learning for Symbolic Mathematics
Date: Thu, 30 Jul 2020 08:24:41 -0400
I'm Tim Daly, lead developer of Axiom, a computer algebra
system originally developed at IBM and now open source.
http://axiom-developer.org
https://en.wikipedia.org/wiki/Axiom_(computer_algebra_system)
http://daly.axiom-developer.org
I read your paper and watched your video with great interest.
It is excellent work. I have a few comments and a request.
A) In the short term (the last month)...
I'm using a tree generator written in Axiom's SPAD language
to produce test data.
Some of the integrals match the original equation. Some of
the integrals differ by a constant. Some of the integrals
require heavy simplification machinery (complex normalization)
to show equivalence.
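The "differs by a constant" check can be sketched as follows. This is a minimal illustration using SymPy in place of Axiom's SPAD (an assumption made purely so the example is self-contained; the actual testing uses Axiom's own simplification machinery):

```python
# Minimal sketch of the "differs by a constant" check, using SymPy
# in place of Axiom's SPAD (illustrative assumption only).
import sympy as sp

x = sp.symbols('x')

def equivalent_up_to_constant(f1, f2, x):
    """Antiderivatives of the same integrand may differ by a constant:
    their difference should simplify to an expression free of x."""
    residual = sp.simplify(f1 - f2)
    return x not in residual.free_symbols

# Two correct antiderivatives of sin(x)*cos(x):
F1 = sp.sin(x)**2 / 2
F2 = -sp.cos(x)**2 / 2   # differs from F1 by the constant 1/2

print(equivalent_up_to_constant(F1, F2, x))  # True
```

Note that a check like this is only as strong as the simplifier behind it: the cases above that need heavy normalization can defeat a generic simplify call, which is exactly the difficulty being described.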
Some of the results fail. One of the problems is that Axiom
and Mathematica share one definition of the trig formulas,
while Maple and Maxima use a different definition. That
affects simplification. Simplification of roots is also a
problem, as is the handling of field extensions.
A second effect is to find bugs in Axiom, which is quite useful.
B) In the longer term...
I expect to spend several months on your data.
I intend to try to get your system running from the GitHub
archive so I can run more extensive test sets.
I intend to automate testing against your test suite.
C) You mention in the paper that you would be interested
in data which you did not generate. Axiom has a computer
algebra test suite (CATS) at
http://axiom-developer.org/axiom-website/CATS/index.html
There you will find the integrals from Schaum's book,
the ordinary differential equations from Kamke's book,
the Charlwood test suite, and Albert Rich's integration
test suite. The sources are provided on that page and
are free for you to use.
D) Axiom has extensive documentation, and we are trying
to incorporate historically interesting documents along with
Axiom-related information. Axiom uses literate programming,
so the actual code is embedded in (and extracted from) the
LaTeX documents.
E) It would be interesting to apply your approach to the
simplification problem. Simplification is very hard and not
well defined.
For example, your final slide shows 10 different
ODE solutions. Can these be simplified to 0 by taking pairwise
differences of the results? If not, why not, since they should
be mathematically equivalent?
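The pairwise-difference check can be sketched as follows, again using SymPy rather than Axiom (the expressions below are illustrative stand-ins, not the actual solutions from the slide):

```python
# Sketch of the pairwise-difference equivalence check, using SymPy.
# The three expressions are illustrative stand-ins for multiple
# syntactically different forms of the same solution.
import itertools
import sympy as sp

x = sp.symbols('x', real=True)

# Three syntactically different but mathematically equal expressions:
solutions = [
    x,
    sp.log(sp.exp(x)),            # reduces to x for real x
    (x**3 - x) / (x**2 - 1),      # cancels to x (away from x = +/-1)
]

# Every pairwise difference should simplify to 0 if the expressions
# are truly equivalent; a nonzero residual flags a simplifier gap.
residuals = [sp.simplify(a - b)
             for a, b in itertools.combinations(solutions, 2)]
print(residuals)  # [0, 0, 0]
```

The interesting cases for training a "simplifier" would be exactly the pairs where such a residual fails to reach 0 even though the expressions are equivalent.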
You could do a beam search that generated multiple
correct solutions and use that to train a "simplifier",
which would be of great interest.
I would like your permission to quote your paper, with proper
acknowledgements, as part of the current test suite effort.
Many thanks,
Tim Daly
http://daly.axiom-developer.org