# How would you describe a matrix intuitively?

Contents:

- Preliminary remark
- The dimension
- The transposed matrix
- Square matrices
- The rank
- Symmetric and skew-symmetric matrices

#### Preliminary remark

Colloquially, when you first look at a matrix, you think of a number grid or a table; depending on the application, this intuition is not entirely wrong. We have rows and columns with numbers entered in them. This results in matrices like the following,

\begin{align*}
\begin{pmatrix}
0 & 1 & 2 \\
3 & 4 & 5
\end{pmatrix}, \quad
\begin{pmatrix}
0 & 1 & -1 \\
-1 & 0 & 1 \\
1 & -1 & 0
\end{pmatrix}, \quad
\begin{pmatrix}
0 & 288 & 178 \\
288 & 0 & 249 \\
178 & 249 & 0
\end{pmatrix}
\end{align*}

some of which stem from very obvious applications. In the following we try to formalize this "appearance" in order to do mathematics with it. The basic definition of a matrix and the rules of calculation can be found here (LINK).

#### The dimension

The dimension \((n \times m)\) of a matrix is its defining property. We will see that the dimension decides whether two matrices can be added or multiplied (or neither). Whether our matrix has dimension \((n \times m)\) or \((m \times n)\) makes a big difference. We have seen that for the addition of matrices the dimensions of the matrices must match exactly. We "often" multiply matrices too, but here a different requirement for well-definedness applies: the number of columns of the first factor must equal the number of rows of the second. Everything depends on the dimension!
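These dimension rules can be sketched in a few lines of Python. This is a minimal illustration, not library code: the helper names `shape`, `can_add`, and `can_multiply` are made up here, and matrices are represented as plain lists of rows.

```python
def shape(M):
    """Return (rows, columns) of a matrix given as a list of rows."""
    return (len(M), len(M[0]))

def can_add(A, B):
    # Addition requires identical dimensions: (n x m) == (n x m).
    return shape(A) == shape(B)

def can_multiply(A, B):
    # A * B is defined when A is (n x m) and B is (m x k):
    # the column count of A must equal the row count of B.
    return shape(A)[1] == shape(B)[0]

A = [[0, 1, 2],
     [3, 4, 5]]                 # a (2 x 3) matrix
B = [[1, 0], [0, 1], [1, 1]]   # a (3 x 2) matrix

print(can_add(A, B))       # False: (2 x 3) != (3 x 2)
print(can_multiply(A, B))  # True: inner dimensions match (3 == 3)
print(can_multiply(B, A))  # True as well: (3 x 2) * (2 x 3)
```

Note that `can_multiply` is not symmetric in general: the order of the factors matters, exactly because \((n \times m)\) and \((m \times n)\) are different dimensions.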

#### The transposed matrix

The transposed matrix \(A^t\) is, colloquially, the matrix \(A\) tilted over its diagonal. Rows and columns are swapped, so \(a_{i,j} = a^t_{j,i}\) holds for the entries of the transposed matrix \(A^t\). The matrix is not simply rotated 90 degrees (counterclockwise); let's look at some examples first.
\begin{align*}
A = \begin{pmatrix}
0 & 1 & 2 \\
3 & 4 & 5
\end{pmatrix}, \quad A^t =
\begin{pmatrix}
0 & 3 \\
1 & 4 \\
2 & 5
\end{pmatrix}, \quad
B = \begin{pmatrix}
1 & 2 \\
3 & 4
\end{pmatrix}, \quad B^t =
\begin{pmatrix}
1 & 3 \\
2 & 4
\end{pmatrix}
\end{align*}
or, in general,
\begin{align*}
A =
\begin{pmatrix}
a_{1,1} & a_{1,2} & \dots & a_{1,m} \\
a_{2,1} & a_{2,2} & \dots & a_{2,m} \\
\vdots & \ddots & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \dots & a_{n,m}
\end{pmatrix}, \quad A^t =
\begin{pmatrix}
a_{1,1} & a_{2,1} & \dots & a_{n,1} \\
a_{1,2} & a_{2,2} & \dots & a_{n,2} \\
\vdots & \ddots & \ddots & \vdots \\
a_{1,m} & a_{2,m} & \dots & a_{n,m}
\end{pmatrix}
\end{align*}

For an \((n \times m)\) matrix \(A\), the transpose \(A^t\) has dimension \((m \times n)\); more precisely, \(A^t = (a_{j,i})_{i,j}\). We will see that the transposed matrix often helps us in applications.
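The swap of rows and columns is easy to sketch in Python (a minimal illustration; the helper name `transpose` is made up, and matrices are again lists of rows):

```python
def transpose(A):
    """Swap rows and columns: entry (i, j) of A^t is entry (j, i) of A."""
    # zip(*A) walks the columns of A, turning each column into a row.
    return [list(col) for col in zip(*A)]

A = [[0, 1, 2],
     [3, 4, 5]]        # the (2 x 3) matrix A from the example above
At = transpose(A)
print(At)              # [[0, 3], [1, 4], [2, 5]] -- a (3 x 2) matrix
print(transpose(At) == A)  # transposing twice gives A back: True
```

The second print illustrates \((A^t)^t = A\), which follows directly from swapping the indices twice.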

#### Square matrices

Square matrices have the simple property that they have as many rows as columns. For every natural number \(n\) there is a vector space \(\mathbb{R}^{(n \times n)}\). Examples are
\begin{align*}
\left(
\begin{array}{ccc}
-5 & -4 & 7 \\
3 & -7 & 2 \\
-7 & 1 & -1 \\
\end{array}
\right), \left(
\begin{array}{ccc}
5 & -9 & -8 \\
3 & -8 & -3 \\
2 & -2 & 9 \\
\end{array}
\right)
\end{align*}
from \(\mathbb{R}^{(3 \times 3)}\), or
\begin{align*}
\left(
\begin{array}{cc}
7 & 3 \\
-9 & 9 \\
\end{array}
\right), \left(
\begin{array}{cc}
-2 & -1 \\
1 & 0 \\
\end{array}
\right)
\end{align*}
from \(\mathbb{R}^{(2 \times 2)}\). The special thing is that on each of these vector spaces \(\mathbb{R}^{(n \times n)}\) a closed multiplication can be defined, because the product of two matrices of dimension \((n \times n)\) again has dimension \((n \times n)\)! In formulas: for \(A, B \in \mathbb{R}^{(n \times n)}\) it holds that
\begin{align*}
A \cdot B \in \mathbb{R}^{(n \times n)}.
\end{align*}
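This closedness can be seen directly from how matrix multiplication is computed. A minimal sketch (the helper name `matmul` is made up for illustration):

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows; inner dimensions must match."""
    n, m, k = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "columns of A must equal rows of B"
    # Entry (i, j) of the product is the dot product of row i of A
    # with column j of B.
    return [[sum(A[i][l] * B[l][j] for l in range(m)) for j in range(k)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = matmul(A, B)          # product of two (2 x 2) matrices ...
print(C)                  # [[2, 1], [4, 3]]
print(len(C), len(C[0]))  # ... is again (2 x 2): prints 2 2
```

Since an \((n \times n)\) times \((n \times n)\) product always has \(n\) rows and \(n\) columns, the result never leaves \(\mathbb{R}^{(n \times n)}\).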
We will see that this multiplication is very important in many areas, and that square matrices therefore play a special role among matrices. This multiplication does not, however, make \(\mathbb{R}^{(n \times n)}\) a field! For example, multiplicative inverses are generally missing; there is no multiplicative inverse matrix for
\begin{align*}
\begin{pmatrix}
0 & 1\\
0 & 0
\end{pmatrix}
\end{align*}
Nonetheless, square matrices are very important because of this multiplication. In the lecture on linear algebra you learn that the so-called rank can be used to determine whether a multiplicative inverse exists.
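Why does this particular matrix have no inverse? A quick computational check: since its bottom row is zero, the bottom row of any product stays zero, so the product can never be the identity. A small sketch (matrices as lists of rows; `times` is a made-up helper name):

```python
N = [[0, 1],
     [0, 0]]

def times(N, M):
    """(2 x 2) matrix product, written out by hand."""
    return [[sum(N[i][k] * M[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[7, -3], [2, 5]]   # an arbitrary candidate for an inverse
print(times(N, M))      # [[2, 5], [0, 0]] -- the bottom row stays zero,
                        # so N * M can never equal the identity matrix
```

The same argument works for every candidate \(M\), which is why no multiplicative inverse exists.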

#### The rank

If one considers the rows (or columns) of a matrix as vectors, one can define the rank: the rank is the number of linearly independent column vectors. Let's look at an example; the identity matrix
\begin{align*}
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
\end{pmatrix}
\end{align*}
consists of the three column vectors
\begin{align*}
\begin{pmatrix}
1 \\ 0 \\ 0
\end{pmatrix}, \begin{pmatrix}
0 \\ 1 \\ 0
\end{pmatrix},
\begin{pmatrix}
0 \\ 0 \\ 1
\end{pmatrix}.
\end{align*}
These form the standard basis of \(\mathbb{R}^3\), so they are obviously linearly independent; the matrix has rank 3. An \((n \times m)\) matrix can trivially have rank at most \(m\), since it only has \(m\) columns and thus only \(m\) column vectors. But the rank can also be smaller, as the matrices have
\begin{align*}
A = \begin{pmatrix}
1 & 0 & 3\\
0 & 1 & 1
\end{pmatrix}, \quad B = \begin{pmatrix}
0 & 1\\
0 & 0
\end{pmatrix}
\end{align*}
only rank 2 and rank 1, respectively. Why? Without calculating: the column vectors of \(A\) are elements of \(\mathbb{R}^2\), where there can be at most two linearly independent vectors; the first two columns are obviously independent, and the third column \(\binom{3}{1}\) is a linear combination of them. In the second example we have the zero vector as a column, which is linearly dependent on every vector.

One also shows in linear algebra that it does not matter whether one defines the rank via rows or via columns; the two always coincide. In the same way one proves the important theorem that a square matrix is invertible, i.e. has a multiplicative inverse, exactly when it has full rank. Therefore, as mentioned, \(B\) above is not invertible: it has rank 1 instead of full rank 2.
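The rank can be computed by Gaussian elimination: after row reduction, the number of nonzero pivot rows is the rank. A minimal sketch under the same list-of-rows convention (the helper name `rank` and the tolerance `1e-12` are choices made for illustration):

```python
def rank(A):
    """Rank of A, computed by Gaussian elimination on a copy of A."""
    M = [[float(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        # Find a nonzero pivot in column c, at or below row r.
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-12), None)
        if pivot is None:
            continue  # column c adds no new independent direction
        M[r], M[pivot] = M[pivot], M[r]
        # Eliminate the entries below the pivot.
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A = [[1, 0, 3], [0, 1, 1]]
B = [[0, 1], [0, 0]]
print(rank(I3), rank(A), rank(B))  # 3 2 1

# Invertibility test for a square matrix: full rank means invertible.
print(rank(B) == len(B))           # False -- B has no inverse
```

This matches the examples above: the identity has full rank 3, while \(A\) and \(B\) have rank 2 and 1.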

#### Symmetric and skew-symmetric matrices

Symmetric matrices, more precisely matrices that are mirror-symmetric with respect to the main diagonal, are immediately recognizable:

\begin{align*}
\begin{pmatrix}
0 & 288 & 178 \\
288 & 0 & 249 \\
178 & 249 & 0
\end{pmatrix},
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}, \begin{pmatrix}
1 & 2\\
2 & 1
\end{pmatrix}.
\end{align*}

The defining formula is \(a_{ij} = a_{ji}\), which can also be written as \(A = A^t\). Symmetric matrices are particularly common in applications; they also have particularly nice properties and are therefore of great importance. Not quite as "important" are skew-symmetric matrices; we saw one at the beginning,
\begin{align*}
\begin{pmatrix}
0 & 1 & -1 \\
-1 & 0 & 1 \\
1 & -1 & 0
\end{pmatrix}.
\end{align*}
A matrix is called skew-symmetric if \(a_{ij} = -a_{ji}\), i.e. \(A^t = -A\), holds. You come across them in game theory, for example.
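Both defining conditions translate directly into entry-wise checks. A minimal sketch (the helper names `is_symmetric` and `is_skew_symmetric` are made up; matrices are lists of rows):

```python
def is_symmetric(A):
    """A = A^t, i.e. a[i][j] == a[j][i] for all entries."""
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

def is_skew_symmetric(A):
    """A^t = -A, i.e. a[i][j] == -a[j][i]; this forces zeros on the diagonal."""
    n = len(A)
    return all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))

S = [[0, 288, 178], [288, 0, 249], [178, 249, 0]]  # symmetric example above
K = [[0, 1, -1], [-1, 0, 1], [1, -1, 0]]           # skew-symmetric example

print(is_symmetric(S), is_skew_symmetric(S))  # True False
print(is_symmetric(K), is_skew_symmetric(K))  # False True
```

Note that the skew-symmetry condition applied to a diagonal entry gives \(a_{ii} = -a_{ii}\), so the main diagonal of a skew-symmetric matrix is always zero, as in the example.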