The Einstein summation convention is used throughout.
By far the most important way to multiply matrices is the usual matrix multiplication. It is defined between two matrices only if the number of columns of the first matrix equals the number of rows of the second. If A is an m-by-n matrix and B is an n-by-p matrix, then their product AB is the m-by-p matrix given by

(AB)[i,j] = A[i,k] * B[k,j]

for each pair i and j, with the sum over k = 1, ..., n implied by the summation convention.
The following example shows how to calculate an entry of AB. To obtain the (AB)[1,2] entry, the entries of row 1 of A are paired off with the entries of column 2 of B; each pair is multiplied and the products are added. The position of the resulting number in AB corresponds to the row of A and the column of B that were used:

\begin{bmatrix} 1 & 0 & 2 \\ -1 & 3 & 1 \end{bmatrix}\times
\begin{bmatrix} 3 & 1 \\ 2 & 1 \\ 1 & 0 \end{bmatrix}
=
\begin{bmatrix} 1\cdot 3 + 0\cdot 2 + 2\cdot 1 & 1\cdot 1 + 0\cdot 1 + 2\cdot 0 \\ -1\cdot 3 + 3\cdot 2 + 1\cdot 1 & -1\cdot 1 + 3\cdot 1 + 1\cdot 0 \end{bmatrix}
=
\begin{bmatrix} 5 & 1 \\ 4 & 2 \end{bmatrix}
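The definition translates directly into code. Below is a minimal Python sketch (matrices as nested lists; the name matmul is just an illustrative choice, not a library function) that reproduces the example above:

def matmul(A, B):
    # (AB)[i][j] is the sum of A[i][k] * B[k][j] over k.
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must match rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 0, 2],
     [-1, 3, 1]]
B = [[3, 1],
     [2, 1],
     [1, 0]]
print(matmul(A, B))  # [[5, 1], [4, 2]], as above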
This notion of multiplication is important because if A and B are interpreted as linear transformations (which is almost universally done), then the matrix product AB corresponds to the composition of the two linear transformations, with B being applied first.
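As a quick numerical check of this correspondence (a sketch assuming NumPy; the vector x is an arbitrary choice), applying B first and then A agrees with applying AB directly:

import numpy as np

A = np.array([[1, 0, 2],
              [-1, 3, 1]])
B = np.array([[3, 1],
              [2, 1],
              [1, 0]])
x = np.array([5, -2])

# B maps x first, then A acts on the result ...
via_composition = A @ (B @ x)
# ... which agrees with the single map given by the product AB.
via_product = (A @ B) @ x
assert (via_composition == via_product).all()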
In general, matrix multiplication is not commutative, i.e., AB is not equal to BA.
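For instance, the following two 2-by-2 matrices give different products in the two orders:

\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}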
For two matrices of the same dimensions, we have the Hadamard product, or entrywise product. The Hadamard product of two m-by-n matrices A and B, denoted by A · B, is the m-by-n matrix given by (A·B)[i,j] = A[i,j] * B[i,j]. For instance
\begin{bmatrix} 1 & 3 & 2 \\ 1 & 0 & 0 \\ 1 & 2 & 2 \end{bmatrix}\cdot
\begin{bmatrix} 0 & 0 & 2 \\ 7 & 5 & 0 \\ 2 & 1 & 1 \end{bmatrix}
=
\begin{bmatrix} 0 & 0 & 4 \\ 7 & 0 & 0 \\ 2 & 2 & 2 \end{bmatrix}
Note that the Hadamard product is a submatrix of the Kronecker product (see below). The Hadamard product is studied by matrix theorists, but it is virtually untouched by linear algebraists.
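Numerically, the Hadamard product is plain elementwise multiplication; in NumPy it is the * operator (np.multiply). A short sketch with the matrices above:

import numpy as np

A = np.array([[1, 3, 2],
              [1, 0, 0],
              [1, 2, 2]])
B = np.array([[0, 0, 2],
              [7, 5, 0],
              [2, 1, 1]])
print(A * B)  # entrywise product, equal to the result above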
For any two arbitrary matrices A = (a_{ij}) and B, we have the direct product or Kronecker product A ⊗ B defined as
\begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{bmatrix}
Note that if A is m-by-n and B is p-by-r, then A ⊗ B is an mp-by-nr matrix. Again, this multiplication is not commutative.
For example
\begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix}\otimes
\begin{bmatrix} 0 & 3 \\ 2 & 1 \end{bmatrix}
=
\begin{bmatrix} 0 & 3 & 0 & 6 \\ 2 & 1 & 4 & 2 \\ 0 & 9 & 0 & 3 \\ 6 & 3 & 2 & 1 \end{bmatrix}.
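NumPy exposes the Kronecker product directly as np.kron; a short sketch reproducing this example and the mp-by-nr shape rule:

import numpy as np

A = np.array([[1, 2],
              [3, 1]])
B = np.array([[0, 3],
              [2, 1]])
K = np.kron(A, B)
print(K.shape)  # (4, 4), i.e. mp-by-nr with m = n = p = r = 2
print(K)        # matches the 4-by-4 matrix above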
If A and B represent linear transformations V1 → W1 and V2 → W2, respectively, then A ⊗ B represents the tensor product of the two maps, V1 ⊗ V2 → W1 ⊗ W2.
All three notions of matrix multiplication are associative: A(BC) = (AB)C.
The scalar multiplication of a matrix A = (a_{ij}) and a scalar r gives the product rA = (r a_{ij}).
If we are concerned with matrices over a ring, then the above multiplication is sometimes called the left multiplication, while the right multiplication is defined to be Ar = (a_{ij} r).
When the underlying ring is commutative, for example the field of real or complex numbers, the two multiplications are the same. However, if the ring is not commutative, as with the quaternions, they may differ. For example
i\begin{bmatrix} i & 0 \\ 0 & j \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & k \end{bmatrix} \neq \begin{bmatrix} -1 & 0 \\ 0 & -k \end{bmatrix} = \begin{bmatrix} i & 0 \\ 0 & j \end{bmatrix} i
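This can be checked directly with a small pure-Python sketch; the representation of quaternions as (w, x, y, z) tuples and the helper names qmul and scale are illustrative choices, not a library API:

def qmul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples.
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def scale(r, A, side):
    # Multiply every entry of the matrix A (nested lists of quaternions)
    # by the quaternion r, on the given side.
    if side == "left":
        return [[qmul(r, a) for a in row] for row in A]
    return [[qmul(a, r) for a in row] for row in A]

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
zero = (0, 0, 0, 0)
A = [[i, zero], [zero, j]]

print(scale(i, A, "left"))   # diagonal entries i*i = -1 and i*j = k
print(scale(i, A, "right"))  # diagonal entries i*i = -1 and j*i = -k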