texmacs-dev

## [Texmacs-dev] Tensor indices

From: grozin
Subject: [Texmacs-dev] Tensor indices
Date: Mon, 20 May 2013 16:26:23 +0700 (NOVT)
User-agent: Alpine 2.00 (LNX 1167 2008-08-23)

Hello *,


In tensor calculus (as used in general relativity and in other areas of physics) tensors with many upper and lower indices are often used. Their order is important even if some indices are upper and some lower: a tensor can have, say, the first index upper, the second one lower, the third upper, etc. As an exception, if a tensor is symmetric in a pair of adjacent indices, and one of them is upper and the other one is lower, they are traditionally written with the indices on top of each other (a typical example is the Kronecker \delta_\mu^\nu - it is easy to write in TeXmacs).

So, I've written the following Scheme function:

(tm-define (tensor . args)
  (define (process-args l u d)
    (if (null? l)
        `(concat (rsup ,(apply tmconcat (reverse u)))
                 (rsub ,(apply tmconcat (reverse d))))
        (let ((x (tree->stree (car l))))
          (cond ((== (car x) 'rsup)
                 ;; upper index: the index goes to the upper row,
                 ;; a phantom of it keeps the lower row aligned
                 (process-args (cdr l)
                               (cons (cadr x) u)
                               (cons `(hphantom ,(cadr x)) d)))
                ((== (car x) 'rsub)
                 ;; lower index: phantom above, index below
                 (process-args (cdr l)
                               (cons `(hphantom ,(cadr x)) u)
                               (cons (cadr x) d)))
                ((== (car x) 'concat)
                 ;; a pair like ^a_b (or _b^a): stack the two indices
                 (let ((a (cadr x)) (b (caddr x)))
                   (cond ((== (car a) 'rsup)
                          (if (== (car b) 'rsub)
                              (process-args (cdr l)
                                            (cons (cadr a) u)
                                            (cons (cadr b) d))
                              (process-args (cdr l)
                                            (cons (cadr a) u)
                                            (cons `(hphantom ,(cadr a)) d))))
                         ((== (car a) 'rsub)
                          (if (== (car b) 'rsup)
                              (process-args (cdr l)
                                            (cons (cadr b) u)
                                            (cons (cadr a) d))
                              (process-args (cdr l)
                                            (cons `(hphantom ,(cadr a)) u)
                                            (cons (cadr a) d))))
                         (else (process-args (cdr l) u d)))))
                (else (process-args (cdr l) u d))))))
  (tm->tree (process-args args '() '())))


If you put it in your ~/.TeXmacs/progs/my-init-texmacs.scm and start TeXmacs, you can type (in math mode, of course), e.g.,


R \extern <tab> tensor <tab> <tab> ^ m <tab> <right> <tab> _ n <tab> <right> <tab> ^ a <tab> <right> <tab> _ b <tab> <right> <enter>


and obtain a nice Riemann curvature tensor (with some particular positions of its 4 indices).
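To make the mechanism concrete, here is roughly the markup the function builds for that example (up to whatever simplifications tmconcat performs): every upper index leaves a phantom copy of itself in the lower row and vice versa, so the two rows stay horizontally aligned.

```scheme
;; Sketch of the stree produced for R^m_n^a_b (before tm->tree):
`(concat (rsup (concat "m" (hphantom "n") "a" (hphantom "b")))
         (rsub (concat (hphantom "m") "n" (hphantom "a") "b")))
```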


Several things should be done before this can become a perfect (or at least good) solution to the problem of writing tensor expressions in TeXmacs.


1. How can it be called from the editor without typing \extern and other long stuff? According to the documentation, xmacro accepts an arbitrary number of arguments, but I have failed to connect an xmacro to the extern Scheme function tensor. If I put in the preamble

(assign "tensor" (xmacro "args" (extern "tensor" (arg "args"))))

(in scheme form), and then do

\tensor <enter> <tab> <some indices> <enter>


TeXmacs segfaults. So, at the moment I don't see how to assign a keyboard shortcut to this function.
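One possible workaround, sketched below under the assumption that the xmacro route keeps crashing, is to bind a math-mode key sequence in my-init-texmacs.scm that inserts the extern call directly. This is untested; the key sequence and the exact inserted tree are assumptions and may need adjustment.

```scheme
;; Sketch only: insert the extern call via a keyboard shortcut instead
;; of an xmacro. The cursor path '(1 0) is a guess at where editing of
;; the first argument should start.
(kbd-map
  (:mode in-math?)
  ("t e n s o r" (insert-go-to '(extern "tensor" "") '(1 0))))
```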


2. Error handling in this function is poor. It accepts any number of arguments, and each argument should have one of 3 forms:

^ <upper index>
_ <lower index>
^ <upper index> _ <lower index> (in any order)


If an argument contains something different, the function tensor can misbehave (some forms of garbage in arguments are ignored by the current code, but I have not checked this extensively).
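Such a check could be factored into a small predicate on the stree of each argument. The helper below is hypothetical (it is not part of the code above) and assumes the TeXmacs utility functions ==, != and in? behave as usual; process-args could call it and reject (or skip) malformed arguments explicitly instead of silently misbehaving.

```scheme
;; Hypothetical validator: accept (rsup i), (rsub i), or a concat of
;; one rsup and one rsub in either order.
(define (valid-tensor-arg? x)
  (define (script? s) (and (pair? s) (in? (car s) '(rsup rsub))))
  (or (script? x)
      (and (pair? x) (== (car x) 'concat)
           (script? (cadr x)) (script? (caddr x))
           (!= (car (cadr x)) (car (caddr x))))))
```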


3. ^ <upper index> _ <lower index> means that these indices are placed on top of each other. But indices can have different widths (an index may be \mu_{100}, for example); such pairs are currently typeset very poorly. What's really needed is the following: suppose some argument of the function tensor is ^a_b; then both a and b should be typeset at a width equal to the maximum of the widths of a and b. At the moment I don't know how to do this.


4. Currently, indices are not editable in place. You have to press <backspace> after them and then edit them in the passive form. It would be nice to be able to replace, say, mu by nu directly in the typeset tensor.


Problem 3 is not very important in practice: widely used symmetric tensors (like \delta, the Ricci tensor, and the energy-momentum tensor) have 2 indices, and they can be typeset in the standard TeXmacs way without using the function tensor. Theoretically, there could be a tensor with 10 indices that is symmetric in the 5th and 6th ones; if I want the 5th index upper and the 6th lower, the function tensor has to be used, and it currently can produce poor results. But such cases seem rather exotic.

I'll be grateful for suggestions of improvements,

Andrey
