The Wolfram System offers a large number of functions to efficiently manipulate lists, matrices, and arrays of any depth and dimension. Among them are functions that perform algebraic operations, such as sums, products, inner and outer products, and transpositions. The Wolfram System also has powerful algorithms to manipulate algebraic combinations of expressions representing those arrays. Such expressions are called symbolic arrays or symbolic tensors. By assuming given properties of those symbolic arrays (mainly rank, dimensions, and symmetry), you can construct and prove results that are valid for arbitrary members of large domains of arrays obeying those properties.
Matrices are rank 2 arrays and can be symmetric, antisymmetric, or have no symmetry at all. Higher-rank tensors can be fully symmetric or fully antisymmetric, but they can also have many other types of symmetries under transposition of their levels or slots. Relevant tensors in physics and mathematics usually have symmetry: the symmetric inertia tensor, the antisymmetric electromagnetic field tensor, the rank 4 stiffness tensor in elasticity, the rank 4 Riemann curvature tensor of a manifold, the fully antisymmetric volume forms, etc. Even when you work with elementary objects without symmetry, such as vectors, their repeated use leads to the appearance of symmetry. The Wolfram System introduces a general language to describe arbitrary transposition symmetries of arrays of any depth and dimension, both for ordinary arrays and for symbolic arrays. See "Tensor Symmetries" for a description of the language of symmetries.
A given symbolic expression expr will be taken to belong to a given domain adom of arrays by using an assumption of the form Element[expr,adom], where adom specifies the properties shared by all arrays of that domain.
Arrays[dims,dom,sym] — arrays of given dimensions, component type, and symmetry
Matrices[dims,dom,sym] — matrices of given dimensions, component type, and symmetry
Vectors[dim,dom] — vectors of given dimension and component type
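For instance, declaring a symbolic matrix to be symmetric makes that information available to functions such as TensorReduce. A minimal sketch (the symbol m and the dimension n are illustrative):

```wolfram
(* declare m to be a symmetric n×n real matrix; m and n are illustrative names *)
$Assumptions = m \[Element] Matrices[{n, n}, Reals, Symmetric[{1, 2}]];

(* transposing a symmetric matrix gives back the matrix itself *)
TensorReduce[TensorTranspose[m, {2, 1}] - m]
(* 0 *)
```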
A general symbolic tensor expression can be understood as a linear combination of terms formed by combining the symbolic tensors using three basic operations: tensor products, transpositions, and contractions. Other basic algebraic operations can be decomposed in terms of these three.
TensorProduct[t1,t2,…] — tensor product of the tensors ti
TensorTranspose[t,perm] — transposition of tensor t by the permutation perm
TensorContract[t,pairs] — contraction of the given pairs of slots of the tensor t
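As an example of such a decomposition, the matrix product Dot can be written as a tensor product followed by a contraction. A sketch, with a and b assumed to be symbolic 3×3 matrices:

```wolfram
$Assumptions = (a | b) \[Element] Matrices[{3, 3}];

(* a.b contracts the second slot of a with the first slot of b *)
TensorReduce[a.b - TensorContract[TensorProduct[a, b], {{2, 3}}]]
(* 0 *)
```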
As usual in computer algebra, one of the most important computational steps is converting general expressions into a canonical form, if possible. When only transposition symmetries are involved, it is always possible to bring symbolic tensor polynomials into such a canonical form, using specialized group theory algorithms. However, for complicated cases involving large ranks, this can consume considerable time and memory.
TensorExpand[expr] — expand tensor sums and products in expr
TensorReduce[expr] — canonicalize the terms of expr with respect to symmetry
TensorExpand expands sums in products and applies basic identities:
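A sketch of that expansion, with a, b, and c assumed to be symbolic vectors of dimension 3:

```wolfram
$Assumptions = (a | b | c) \[Element] Vectors[3];

(* the tensor product distributes over the sum *)
TensorExpand[TensorProduct[a + b, c]]
(* TensorProduct[a, c] + TensorProduct[b, c] *)
```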
TensorReduce applies the same operations, sorts the tensors lexicographically, and uses the symmetry information. In this example, the contraction term disappears because it involves the contraction of a symmetric and an antisymmetric tensor:
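A sketch of such a vanishing contraction, with s assumed symmetric and a assumed antisymmetric, both 3×3 real matrices:

```wolfram
$Assumptions = s \[Element] Matrices[{3, 3}, Reals, Symmetric[{1, 2}]] &&
   a \[Element] Matrices[{3, 3}, Reals, Antisymmetric[{1, 2}]];

(* full contraction of a symmetric with an antisymmetric matrix vanishes *)
TensorReduce[TensorContract[TensorProduct[s, a], {{1, 3}, {2, 4}}]]
(* 0 *)
```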
TensorReduce always moves TensorProduct inside TensorContract or TensorTranspose:
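A sketch of that reordering, with a and b assumed to be symbolic 3×3 matrices; the trace of a is written as TensorContract[a, {{1, 2}}]:

```wolfram
$Assumptions = (a | b) \[Element] Matrices[{3, 3}];

(* the product of b with the trace of a is rewritten as a
   single TensorContract applied to one TensorProduct *)
TensorReduce[TensorProduct[b, TensorContract[a, {{1, 2}}]]]
```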
TensorReduce always moves TensorContract inside TensorTranspose:
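A sketch of that reordering, with t assumed to be a symbolic rank 4 array:

```wolfram
$Assumptions = t \[Element] Arrays[{3, 3, 3, 3}];

(* the transposition is pushed inside, acting on t before the contraction *)
TensorReduce[TensorTranspose[TensorContract[t, {{1, 2}}], {2, 1}]]
```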
The next example explores the trace of powers of an antisymmetric matrix. For such a matrix A in any dimension, Tr[MatrixPower[A,n]] vanishes for odd n but not for even n. This is illustrated by constructing the power and trace in terms of TensorProduct and TensorContract and then canonicalizing the expression using TensorReduce.
Construct a representation using TensorContract and TensorProduct:
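A sketch of that construction, with A assumed antisymmetric in a symbolic dimension d. The trace Tr[MatrixPower[A,3]] corresponds to the chained contraction of three copies of A:

```wolfram
$Assumptions = A \[Element] Matrices[{d, d}, Reals, Antisymmetric[{1, 2}]];

(* Tr[A.A.A] as a chain of slot contractions; vanishes for the odd power *)
TensorReduce[TensorContract[TensorProduct[A, A, A], {{2, 3}, {4, 5}, {6, 1}}]]
(* 0 *)

(* the even power Tr[A.A] canonicalizes to a nonzero term instead *)
TensorReduce[TensorContract[TensorProduct[A, A], {{2, 3}, {4, 1}}]]
```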
A more complicated example is given by the canonicalization of scalar polynomials in the Riemann tensor. Here, only its transposition symmetries are used (also known as permutation symmetries or monoterm symmetries), and not the Riemann cyclic symmetries.
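A sketch using only those monoterm symmetries, expressed as generator–phase pairs in the Wolfram symmetry language (the symbol R and the dimension d are illustrative):

```wolfram
(* antisymmetric in slots {1,2} and {3,4}, symmetric under exchange of the two pairs *)
riemannSym = {{Cycles[{{1, 2}}], -1}, {Cycles[{{3, 4}}], -1},
   {Cycles[{{1, 3}, {2, 4}}], 1}};
$Assumptions = R \[Element] Arrays[{d, d, d, d}, Reals, riemannSym];

(* contracting the first, antisymmetric pair of slots gives zero *)
TensorReduce[TensorContract[R, {{1, 2}}]]
(* 0 *)
```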