In mathematics, and in particular linear algebra, the transpose of a matrix is another matrix, produced by turning rows into columns and vice versa. Informally, the transpose of a square matrix is obtained by reflecting it along the main diagonal (which runs from the top left to the bottom right of the matrix). The transpose of the matrix A is written as Atr, tA, or AT, the latter notation being preferred in Wikipedia.
Formally, the transpose of the m-by-n matrix A is the n-by-m matrix AT defined by AT[i, j] = A[j, i] for 1 ≤ i ≤ n and 1 ≤ j ≤ m.
For example, the transpose of the 3-by-2 matrix whose rows are (1, 2), (3, 4), and (5, 6) is the 2-by-3 matrix whose rows are (1, 3, 5) and (2, 4, 6).
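The defining rule AT[i, j] = A[j, i] translates directly into code. The following is a minimal sketch in plain Python (the helper name transpose is our own, not from any library):

```python
def transpose(a):
    """Return the transpose of a matrix stored as a list of rows."""
    # Entry [i][j] of the result is entry [j][i] of the input,
    # so an m-by-n input yields an n-by-m output.
    return [[a[j][i] for j in range(len(a))] for i in range(len(a[0]))]

a = [[1, 2],
     [3, 4],
     [5, 6]]            # a 3-by-2 matrix
print(transpose(a))     # [[1, 3, 5], [2, 4, 6]]
```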
The transpose operation is self-inverse, i.e., taking the transpose of the transpose amounts to doing nothing: (AT)T = A.
If A is an m-by-n and B an n-by-k matrix, then we have (AB)T = (BT)(AT). Note that the order of the factors switches. From this one can deduce that a square matrix A is invertible if and only if AT is invertible, and in this case we have (A-1)T = (AT)-1.
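Both identities above can be spot-checked numerically. The sketch below uses hand-rolled transpose and matmul helpers (our own names, chosen for illustration) on a small example:

```python
def transpose(a):
    return [[row[i] for row in a] for i in range(len(a[0]))]

def matmul(a, b):
    # (m-by-n) times (n-by-k) gives an m-by-k matrix.
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

a = [[1, 2, 0],
     [0, 1, 1]]     # 2-by-3
b = [[1, 0],
     [2, 1],
     [3, 4]]        # 3-by-2

# (AB)^T equals B^T A^T -- note the reversed order of the factors.
assert transpose(matmul(a, b)) == matmul(transpose(b), transpose(a))
# Transposing twice gives back the original matrix.
assert transpose(transpose(a)) == a
print("identities verified")
```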
The dot product of two vectors x and y, expressed as columns of their coordinates, can be computed as x · y = xTy, where the 1-by-1 matrix xTy is identified with its single entry.
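As a sketch, with vectors stored as plain coordinate lists, the dot product is exactly the sum that the single entry of xTy expands to:

```python
def dot(x, y):
    # The single entry of x^T y: a sum of coordinate-wise products.
    return sum(xi * yi for xi, yi in zip(x, y))

x = [1, 2, 3]
y = [4, -5, 6]
print(dot(x, y))  # 1*4 + 2*(-5) + 3*6 = 12
```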
If A is an arbitrary m-by-n matrix with real entries, then ATA is a positive semidefinite matrix.
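Positive semidefiniteness means xT(ATA)x ≥ 0 for every real vector x; since xT(ATA)x = (Ax)T(Ax) = ||Ax||2, this is easy to spot-check numerically. The matrix below is an arbitrary choice of ours:

```python
import random

def matvec(a, x):
    # Multiply a matrix (list of rows) by a column vector.
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in a]

a = [[1.0, 2.0],
     [0.0, 3.0],
     [4.0, -1.0]]   # an arbitrary real 3-by-2 matrix

random.seed(0)
for _ in range(100):
    x = [random.uniform(-10, 10) for _ in range(2)]
    ax = matvec(a, x)
    # x^T (A^T A) x equals (Ax)^T (Ax) = ||Ax||^2, never negative.
    quadratic_form = sum(v * v for v in ax)
    assert quadratic_form >= 0.0
print("A^T A is positive semidefinite on all sampled vectors")
```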
A square matrix whose transpose is also its inverse is called an orthogonal matrix, i.e. G is orthogonal iff GTG = GGT = I, where I denotes the identity matrix.
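A standard instance is a 2-by-2 rotation matrix. The sketch below verifies GTG = I up to floating-point rounding:

```python
import math

theta = math.pi / 6
g = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]   # a rotation matrix

gt = [[g[j][i] for j in range(2)] for i in range(2)]
prod = [[sum(gt[i][t] * g[t][j] for t in range(2)) for j in range(2)]
        for i in range(2)]
# G^T G should be the 2-by-2 identity, up to rounding error.
for i in range(2):
    for j in range(2):
        expected = 1.0 if i == j else 0.0
        assert abs(prod[i][j] - expected) < 1e-12
print("G^T G = I: G is orthogonal")
```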
A square matrix whose transpose is equal to its negative is called skew-symmetric, i.e. A is skew-symmetric iff AT = -A.
The conjugate transpose of the complex matrix A, written as A*, is obtained by taking the transpose of A and then taking the complex conjugate of each entry.
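In Python's built-in complex arithmetic this is a one-line variation on the plain transpose; the helper name conjugate_transpose is our own:

```python
def conjugate_transpose(a):
    # Transpose, then conjugate each entry (the two steps commute).
    return [[a[j][i].conjugate() for j in range(len(a))]
            for i in range(len(a[0]))]

a = [[1 + 2j, 3 - 1j],
     [1j, 4 + 0j]]
assert conjugate_transpose(a) == [[1 - 2j, -1j],
                                  [3 + 1j, 4 - 0j]]
print("A* computed entrywise")
```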
If the matrix A describes a linear map with respect to two bases, then the matrix AT describes the transpose of that linear map with respect to the dual bases. See dual space for more details on this.
wikipedia.org dumped 2003-03-17 with terodump