An introduction to MATRICES




Definitions

Matrix

A matrix is an ordered set of numbers listed in rectangular form.

Example. Let A denote the matrix

 
        [2  5  7  8]
        [5  6  8  9]
        [3  9  0  1]

This matrix A has three rows and four columns. We say it is a 3 x 4 matrix.

We denote the element in the second row and fourth column by a2,4.
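
As an illustration, a matrix can be stored in Python as a list of rows; this is only a sketch, and note that Python indices start at 0.

    # The 3 x 4 matrix A above, stored as a list of rows (illustrative sketch).
    A = [[2, 5, 7, 8],
         [5, 6, 8, 9],
         [3, 9, 0, 1]]

    rows = len(A)        # 3
    cols = len(A[0])     # 4

    # Python counts from 0, so a2,4 (second row, fourth column) is:
    a_2_4 = A[1][3]      # 9
    print(rows, cols, a_2_4)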

Square matrix

If a matrix A has n rows and n columns then we say it's a square matrix.

In a square matrix the elements ai,i , with i = 1,2,3,... , are called diagonal elements.
Remark. There is no difference between a 1 x 1 matrix and an ordinary number.

Diagonal matrix

A diagonal matrix is a square matrix in which all non-diagonal elements are 0.
A diagonal matrix is completely determined by its diagonal elements.
Example.
 
        [7  0  0]
        [0  5  0]
        [0  0  6]

This matrix is denoted by diag(7, 5, 6).
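
A small Python sketch of building such a matrix from its diagonal elements (the helper name diag is ours, chosen to mirror the notation above).

    def diag(*elements):
        """Build a diagonal matrix from the given diagonal elements (illustrative)."""
        n = len(elements)
        return [[elements[i] if i == j else 0 for j in range(n)] for i in range(n)]

    print(diag(7, 5, 6))
    # [[7, 0, 0], [0, 5, 0], [0, 0, 6]]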

Row matrix

A matrix with one row is called a row matrix.

[2 5 -1 5]

Column matrix

A matrix with one column is called a column matrix.
 
      [2]
      [4]
      [3]
      [0]

Matrices of the same kind

Matrices A and B are of the same kind if and only if
A has as many rows as B and as many columns as B.

 
        [7  1  2]       [4  0  3]
        [0  5  6]  and  [1  1  4]
        [3  4  6]       [8  6  2]

The transposed matrix of a matrix

The n x m matrix B is the transposed matrix of the m x n matrix A if and only if
The ith row of A = the ith column of B for (i = 1,2,3,..m)
So ai,j = bj,i

 
The transposed matrix of A is denoted T(A) or A^T.

        [7  1]
        [0  5]^T  =  [7  0  3]
        [3  4]       [1  5  4]
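
A minimal Python sketch of the rule ai,j = bj,i (the helper name transpose is ours).

    def transpose(A):
        """Return the transposed matrix B with b[j][i] = a[i][j] (illustrative)."""
        return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

    A = [[7, 1],
         [0, 5],
         [3, 4]]
    print(transpose(A))
    # [[7, 0, 3], [1, 5, 4]]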

0-matrix

When all the elements of a matrix A are 0, we call A a 0-matrix.
For short, we write 0 for a 0-matrix.

An identity matrix I

An identity matrix I is a diagonal matrix with all the diagonal elements = 1.

 
 [1]

 [1 0]
 [0 1]

 [1 0 0]
 [0 1 0]
 [0 0 1]

 ...

A scalar matrix S

A scalar matrix S is a diagonal matrix whose diagonal elements all contain the same scalar value.
a1,1 = ai,i for (i = 1,2,3,..n)

 
        [7  0  0]
        [0  7  0]
        [0  0  7]

The opposite matrix of a matrix

If we change the sign of all the elements of a matrix A, we have the opposite matrix -A.
If A' is the opposite of A, then a'i,j = -ai,j for all i and j.

A symmetric matrix

A square matrix is called symmetric if it is equal to its transpose.
Then ai,j = aj,i , for all i and j.

 
        [7  1  5]
        [1  3  0]
        [5  0  7]

A skew-symmetric matrix

A square matrix is called skew-symmetric if it is equal to the opposite of its transpose.
Then ai,j = -aj,i , for all i and j.

 
        [ 0  1 -5]
        [-1  0  0]
        [ 5  0  0]
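
A small Python check of both conditions ai,j = aj,i and ai,j = -aj,i, applied to the two example matrices above (the function names are ours).

    def is_symmetric(A):
        n = len(A)
        return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

    def is_skew_symmetric(A):
        n = len(A)
        return all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))

    S = [[7, 1, 5], [1, 3, 0], [5, 0, 7]]
    K = [[0, 1, -5], [-1, 0, 0], [5, 0, 0]]
    print(is_symmetric(S), is_skew_symmetric(K))   # True True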

The sum of matrices of the same kind

Sum of matrices

To add two matrices of the same kind, we simply add the corresponding elements.
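
A sketch of this element-wise addition in Python (the helper name matrix_add is ours).

    def matrix_add(A, B):
        """Add two matrices of the same kind element by element (illustrative)."""
        return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

    A = [[7, 1, 2], [0, 5, 6], [3, 4, 6]]
    B = [[4, 0, 3], [1, 1, 4], [8, 6, 2]]
    print(matrix_add(A, B))
    # [[11, 1, 5], [1, 6, 10], [11, 10, 8]]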

Sum properties

Consider the set S of all n x m matrices (n and m fixed), with A, B and C in S.
From the properties of real numbers it's immediate that

    A + B = B + A
    (A + B) + C = A + (B + C)
    A + 0 = A
    A + (-A) = 0

Scalar multiplication

Definition

To multiply a matrix by a real number, we multiply each element by this number.

Properties

Consider the set S of all n x m matrices (n and m fixed). A and B are in S; r and s are real numbers.
It is not difficult to see that:

 
    r(A+B) = rA+rB
    (r+s)A = rA+sA
    (rs)A = r(sA)
    (rA)^T = r.A^T
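
A sketch of scalar multiplication in Python, with a numeric spot-check of the first property r(A+B) = rA+rB on one example (the helper names are ours).

    def scalar_multiply(r, A):
        """Multiply each element of A by the real number r (illustrative)."""
        return [[r * x for x in row] for row in A]

    def matrix_add(A, B):
        return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    r = 3
    print(scalar_multiply(r, matrix_add(A, B)) ==
          matrix_add(scalar_multiply(r, A), scalar_multiply(r, B)))   # True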

Sums in math

Because the properties of sums are used intensively in what follows, readers who are not familiar with these properties should first read Sums in math.
Remark. In this html document, for convenience, we'll write the word sum instead of the sigma sign.

Multiplication of a row matrix by a column matrix

This multiplication is only possible if the row matrix and the column matrix have the same number of elements. The result is an ordinary number (a 1 x 1 matrix).
To multiply the row by the column, multiply all the corresponding elements and then add up the results.
Example.

 
         [1]
[2 1 3]. [2] = [19]
         [5]
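
A sketch of this rule in Python: multiply corresponding elements and add the results (the helper name is ours).

    def row_times_column(row, column):
        """Multiply a row matrix by a column matrix; the result is one number (illustrative)."""
        return sum(r * c for r, c in zip(row, column))

    print(row_times_column([2, 1, 3], [1, 2, 5]))   # 2*1 + 1*2 + 3*5 = 19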

Multiplication of two matrices A.B

This product is defined only if A is an (l x m) matrix and B is an (m x n) matrix.
So the number of columns of A has to be equal to the number of rows of B.
The product C = A.B is then an (l x n) matrix.
The element in the i-th row and j-th column of the product is found by multiplying the i-th row of A by the j-th column of B.
 
        ci,j = sum_k ( ai,k . bk,j )
Examples.
 
[1 2][1 3] = [5 7]
[2 1][2 2]   [4 8]

[1 3][1 2] = [7 5]
[2 2][2 1]   [6 6]

[1 1][2    2] = [0 0]
[1 1][-2  -2]   [0 0]

[ 1, 3, 2 ] [ 3,  -1, 4  ]    [ 1, 16, 5  ]
[ 4, 5, 3 ] [ -2, 3,  1  ] =  [ 8, 23, 18 ]
[ 2, 2, 1 ] [ 2,  4,  -1 ]    [ 4, 8,  9  ]
From these examples we see that the product is not commutative and that there are zero divisors. Zero divisors are matrices, each different from a zero matrix, whose product is nevertheless a zero matrix.
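
A Python sketch of the rule ci,j = sum_k ( ai,k . bk,j ), applied to the first two examples above (the helper name matrix_multiply is ours).

    def matrix_multiply(A, B):
        """Multiply an (l x m) matrix A by an (m x n) matrix B (illustrative)."""
        l, m, n = len(A), len(B), len(B[0])
        return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(n)]
                for i in range(l)]

    A = [[1, 2], [2, 1]]
    B = [[1, 3], [2, 2]]
    print(matrix_multiply(A, B))   # [[5, 7], [4, 8]]
    print(matrix_multiply(B, A))   # [[7, 5], [6, 6]]  -- not commutative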

Application

A matrix A is called idempotent if and only if A^2 = A.

Given:

 
     [1  b  c]
 A = [0  0  2]
     [0  0  1]
Find the set of all 3 x 3 matrices of type A such that A is idempotent.

Solution:

We calculate A^2.

 
[1  b  2c+2b]
[0  0    2  ]
[0  0    1  ]

     A^2 = A
<=>
    2c + 2b = c
<=>
    c = -2b
All requested matrices are
 
[1  b  -2b]
[0  0   2 ]  with b in R
[0  0   1 ]
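
A quick numeric check of this result in Python, taking for instance b = 3 and reusing the illustrative matrix_multiply sketch from above.

    def matrix_multiply(A, B):
        m = len(B)
        return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(len(B[0]))]
                for i in range(len(A))]

    b = 3                                  # any real b works
    A = [[1, b, -2 * b],
         [0, 0, 2],
         [0, 0, 1]]
    print(matrix_multiply(A, A) == A)      # True: A is idempotent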

Properties of multiplication of matrices

Associativity

If the multiplication is defined then A(B.C) = (A.B)C holds for all matrices A,B and C.
Proof:
We'll show that an element of A(B.C) is equal to the corresponding element of (A.B)C
First we calculate the element of the ith row and jth column of A(B.C)
 
Let D denote B.C, then
        dk,j = sum_p ( bk,p . cp,j )        (1)

Let E denote A.D, then
        ei,j = sum_k ( ai,k . dk,j )        (2)

(1) in (2) gives
        ei,j = sum_k ( ai,k . sum_p ( bk,p . cp,j ) )

<=>     ei,j = sum_k,p ( ai,k . bk,p . cp,j )

So the element of the ith row and jth column of A(B.C) is
        sum_k,p ( ai,k . bk,p . cp,j )               (3)
Now we calculate the element of the ith row and jth column of (A.B)C
 
Let D' denote A.B, then
        d'i,p = sum_k ( ai,k . bk,p )        (4)

Let E' denote D'.C, then
        e'i,j = sum_p ( d'i,p . cp,j )       (5)

(4) in (5) gives
        e'i,j = sum_p ( sum_k ( ai,k . bk,p ) . cp,j )

<=>     e'i,j = sum_k,p ( ai,k . bk,p . cp,j )

So the element of the ith row and jth column of (A.B)C is
        sum_k,p ( ai,k . bk,p . cp,j )               (6)

From (3) and (6)  => A(B.C) = (A.B)C
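
A numeric spot-check of associativity on small matrices in Python (again using the illustrative matrix_multiply helper).

    def matrix_multiply(A, B):
        m = len(B)
        return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(len(B[0]))]
                for i in range(len(A))]

    A = [[1, 2], [3, 4]]
    B = [[0, 1], [1, 1]]
    C = [[2, 0], [1, 3]]
    print(matrix_multiply(A, matrix_multiply(B, C)) ==
          matrix_multiply(matrix_multiply(A, B), C))   # True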

Distributivity

If the multiplication is defined then A(B+C) = A.B+A.C and (A+B).C = A.C+B.C hold for all matrices A,B and C. This theorem can be proved in the same way as above.

Theorem 1

For each A, there is always an identity matrix E and an identity matrix E' so that A.E = A and E'.A = A.
If A is a square matrix then E = E'.

Theorem 2

 
        (A.B)^T = B^T . A^T
This theorem can be proved in the same way as above.
Example :
If we transpose the product
 
  [ 2 4 ] [x]
  [ 3 8 ] [y]
we get
 
  [x y ] [ 2 3 ]
         [ 4 8 ]

Theorem 3

If the multiplication is defined then we have for any A
 
        A.0 = 0 = 0.A

Theorem 4

r and s are real numbers and A, B are matrices. If the multiplication is defined, then (rA)(sB) = (rs)(AB). This theorem can be proved in the same way as above.

Theorem 5

 
If D = diag(a,b,c) then D.D = diag( a^2 , b^2 , c^2 )
                        D.D.D = diag( a^3 , b^3 , c^3 )
                        .....
This property can be generalized for D = diag(a,b,c,d,e,...,l).
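
A small Python check of this property (illustrative helpers from the earlier sketches).

    def diag(*elements):
        n = len(elements)
        return [[elements[i] if i == j else 0 for j in range(n)] for i in range(n)]

    def matrix_multiply(A, B):
        m = len(B)
        return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(len(B[0]))]
                for i in range(len(A))]

    D = diag(2, 3, 5)
    print(matrix_multiply(D, D) == diag(4, 9, 25))   # True: D.D = diag(2^2, 3^2, 5^2)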

Next steps

  1. Determinants constitute a necessary intermediate step towards deeper insight into systems.

  2. Systems of linear equations:
     rank of a matrix,
     Cramer's rule,
     classification of systems of linear equations,
     investigation of systems with a parameter



