Matrix Algebra - Review

 

  1. Matrix:

     A matrix is a rectangular array of elements arranged in rows and columns. We usually use uppercase boldface letters (e.g. A, B, …) to denote matrices. The element in the ith row and jth column of the matrix A is denoted by a_ij.

     We sometimes use the square bracket notation [a_ij] to denote a matrix. That is, the matrix A with r rows and c columns may be equivalently represented thus:

     A ≡ [a_ij]; i=1,…,r; j=1,…,c.

     A matrix with r rows and c columns is said to have dimension r × c.

  2. Square Matrix:

     A matrix of dimension r × c is said to be square if r = c.

  3. Vector:

     A matrix of dimension r × c is referred to as a column vector (or simply a vector) if c = 1.

     A matrix of dimension r × c is referred to as a row vector if r = 1.

  4. Transpose:

     The transpose of a matrix A is another matrix, denoted A^T, that is obtained by interchanging the rows and columns of A.

     Note that the transpose of a column vector is therefore a row vector, and vice versa.

  5. Equality:

     Let A and B be matrices of the same dimension; then A = B ⇔ a_ij = b_ij ∀ i, j.

  6. Addition & Subtraction:

     Let A and B be matrices of the same dimension. If C is the sum or difference of A and B, then C is another matrix of the same dimension as A and B:

     C = A + B = [a_ij + b_ij]; i=1,…,r; j=1,…,c.

     C = A - B = [a_ij - b_ij]; i=1,…,r; j=1,…,c.

  7. Multiplication:

     1. Scalar multiplication:

        Let A = [a_ij]; i=1,…,r; j=1,…,c, and let λ denote some scalar; then λA = [λ a_ij] ∀ i, j.

     2. Matrix multiplication:

        Let A be a matrix of dimension r × c and B be a matrix of dimension c × s. The product AB is a matrix of dimension r × s. Let C be the matrix that represents this product; then the element in the ith row and jth column of C is given by c_ij = Σ a_ik b_kj; k=1,…,c.

        In general, the product AB is only defined when the number of columns of A equals the number of rows of B, as sketched below.
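        As a quick check of the elementwise formula above, here is a minimal sketch in Python with NumPy (an assumed tool; these notes do not themselves use any software):

        import numpy as np

        A = np.array([[1, 2, 3],
                      [4, 5, 6]])        # dimension r x c = 2 x 3
        B = np.array([[1, 0],
                      [2, 1],
                      [0, 3]])           # dimension c x s = 3 x 2

        # c_ij = sum over k = 1,...,c of a_ik * b_kj
        C = np.array([[sum(A[i, k] * B[k, j] for k in range(3))
                       for j in range(2)]
                      for i in range(2)])

        print(np.array_equal(C, A @ B))  # True: matches NumPy's built-in product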

     

  8. Special Matrices:

     1. Symmetric Matrix:

        If A = A^T then A is said to be symmetric. Symmetric matrices are necessarily square.

     2. Diagonal Matrix:

        A diagonal matrix is a square matrix whose off-diagonal elements are all zero.

     3. Identity Matrix:

        An identity matrix, denoted I, is a diagonal matrix whose diagonal elements are all ones.

        Note: Let A be a square matrix of dimension r × r; then AI = IA = A.

     4. Scalar Matrix:

        A scalar matrix is a diagonal matrix whose diagonal elements are all the same. A scalar matrix can be expressed as λI, where λ is the scalar.

     5. Unity Matrix:

        A unity matrix is a matrix where a_ij = 1 ∀ i, j. If the matrix is a column vector then it is denoted by 1, and if it is a square matrix then it is denoted by J.

        Note that for the n × 1 unity vector, 1^T 1 = n and 1 1^T = J (here J is n × n), as the sketch below verifies.
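        A minimal NumPy sketch of these two identities (Python is an assumed tool here):

        import numpy as np

        n = 4
        one = np.ones((n, 1))     # the n x 1 unity vector 1

        print(one.T @ one)        # [[4.]]: 1^T 1 = n
        print(one @ one.T)        # the 4 x 4 unity matrix: 1 1^T = J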

     6. Zero Vector:

        A zero vector is a vector whose elements are all zero, i.e. a_i = 0 ∀ i.

  9. Linear Dependence & Rank:

     1. Linear Dependence:

        To illustrate the idea of linear dependence, consider some matrix A of dimension 3 × 4. Let us think of the columns of this matrix (i.e. C_1, C_2, C_3, C_4) as vectors. Now let us say that C_1^T = [1 2 3], C_2^T = [2 2 4], C_3^T = [5 10 15], C_4^T = [1 6 1]. Notice that C_3 = 5 C_1.

        We say that the columns of A are linearly dependent, since one of the columns can be obtained as a linear combination of the others.

        In general, let C_1, …, C_c be the c column vectors of a matrix of dimension r × c and let λ_1, …, λ_c be c scalars, not all zero. If λ_1 C_1 + λ_2 C_2 + … + λ_c C_c = 0, where 0 denotes the zero vector, then the c column vectors are linearly dependent. If the only set of scalars for which the equality holds is λ_1 = λ_2 = … = λ_c = 0, then the set of c columns is linearly independent. In the example above, taking λ_1 = 5, λ_2 = 0, λ_3 = -1, λ_4 = 0 gives:

        λ_1 C_1 + λ_2 C_2 + λ_3 C_3 + λ_4 C_4 = 0.

        Notice that λ_j = 0 for j = 2, 4. For linear dependence it is only required that not all λ_j = 0. A numerical check follows.
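        A minimal NumPy check of the combination above (Python is an assumed tool):

        import numpy as np

        C1 = np.array([1, 2, 3])
        C2 = np.array([2, 2, 4])
        C3 = np.array([5, 10, 15])
        C4 = np.array([1, 6, 1])

        # lambda_1 = 5, lambda_2 = 0, lambda_3 = -1, lambda_4 = 0
        print(5*C1 + 0*C2 - 1*C3 + 0*C4)   # [0 0 0]: a nontrivial combination equals 0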

     2. Rank:

        The rank of a matrix is defined as the maximum number of linearly independent columns in the matrix. Note that for the example above, the rank of A is clearly not 4, since we have shown that one column may be obtained from the others. As it turns out, the rank of A is 3, since columns C_1, C_2, and C_4 are linearly independent.

        Note: The rank of a matrix is unique and can equivalently be defined as the maximum number of linearly independent rows. It follows that the rank of an r × c matrix cannot exceed min(r, c).
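        The same example, checked with NumPy's rank routine (a sketch, not part of the original notes):

        import numpy as np

        A = np.array([[1, 2,  5, 1],
                      [2, 2, 10, 6],
                      [3, 4, 15, 1]])       # columns C1, C2, C3, C4 from above

        print(np.linalg.matrix_rank(A))     # 3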

  10. Inverse:

      The inverse of a matrix A is another matrix, denoted A^-1, such that:

      A^-1 A = A A^-1 = I

      Note: The inverse of a matrix is defined for square matrices only, and many square matrices do not have an inverse. The inverse of a square matrix of dimension r × r exists iff the rank of the matrix is r. Such a matrix is said to be nonsingular, whereas an r × r matrix with rank < r is said to be singular. If a matrix has an inverse, then that inverse is unique.
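      A minimal NumPy sketch of both cases (the matrices here are made up for illustration):

        import numpy as np

        A = np.array([[2.0, 1.0],
                      [1.0, 1.0]])               # rank 2, so nonsingular
        A_inv = np.linalg.inv(A)
        print(np.allclose(A_inv @ A, np.eye(2))) # True: A^-1 A = I

        S = np.array([[1.0, 2.0],
                      [2.0, 4.0]])               # rank 1, so singular
        try:
            np.linalg.inv(S)
        except np.linalg.LinAlgError:
            print("S has no inverse")            # inv raises because S is singular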

  11. Determinants:

      The determinant of the k × k matrix A, denoted by |A|, is the scalar:

      1. |A| = a_11; k = 1
      2. |A| = Σ_{j=1}^{k} a_1j |A_1j| (-1)^(1+j); k > 1

      where A_1j is the (k-1) × (k-1) matrix obtained by deleting the first row and jth column of the k × k matrix A. A recursive sketch of this expansion follows.
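      Here is that cofactor expansion written out in Python/NumPy (a sketch; np.linalg.det is used only to confirm the result):

        import numpy as np

        def det(A):
            # Cofactor expansion along the first row, as defined above.
            k = A.shape[0]
            if k == 1:
                return A[0, 0]                  # |A| = a_11 when k = 1
            total = 0.0
            for j in range(k):                  # j is 0-based, so (-1)^(1+j) becomes (-1)^j
                A1j = np.delete(np.delete(A, 0, axis=0), j, axis=1)
                total += A[0, j] * det(A1j) * (-1) ** j
            return total

        A = np.array([[1.0, 2.0, 1.0],
                      [2.0, 2.0, 6.0],
                      [3.0, 4.0, 1.0]])
        print(det(A), np.linalg.det(A))         # both approximately 12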

      Note:

      1. If I is the k × k identity matrix, then |I| = 1.
      2. If A, B are k × k matrices, then:
         1. |A| = |A^T|
         2. If each element of a row or column of A is zero, then |A| = 0.
         3. If two rows of A are identical, then |A| = 0.
         4. |AB| = |A||B|
         5. If c is a scalar, then |cA| = c^k |A|.

      Note: We may now provide an expression for A^-1. In general, A^-1 has (j, i)th entry (-1)^(i+j) |A_ij| / |A|, where A_ij is the (k-1) × (k-1) matrix obtained by deleting the ith row and jth column of the k × k matrix A. A sketch of this construction follows.
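      The cofactor formula above, written out in Python/NumPy (a sketch; np.linalg.inv is used only to confirm the result):

        import numpy as np

        def inverse(A):
            # (A^-1)_ji = (-1)^(i+j) |A_ij| / |A|  (0-based i, j; the sign is unchanged)
            k = A.shape[0]
            detA = np.linalg.det(A)
            inv = np.empty((k, k))
            for i in range(k):
                for j in range(k):
                    Aij = np.delete(np.delete(A, i, axis=0), j, axis=1)
                    inv[j, i] = (-1) ** (i + j) * np.linalg.det(Aij) / detA
            return inv

        A = np.array([[2.0, 1.0],
                      [1.0, 1.0]])
        print(np.allclose(inverse(A), np.linalg.inv(A)))   # True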

  12. Trace:

      Let A be a k × k matrix. The trace of A, denoted tr(A), is the sum of the diagonal elements of A. That is, tr(A) = Σ_{i=1}^{k} a_ii.

      If A, B are k × k matrices and c is a scalar, then (see the numerical check below):

      1. tr(cA) = c tr(A)
      2. tr(A + B) = tr(A) + tr(B)
      3. tr(A - B) = tr(A) - tr(B)
      4. tr(AB) = tr(BA)
      5. tr(B^-1 A B) = tr(A), for nonsingular B
      6. tr(A A^T) = Σ_{i=1}^{k} Σ_{j=1}^{k} a_ij^2
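      A quick NumPy spot-check of properties 4-6 on random matrices (a sketch, not a proof):

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.normal(size=(4, 4))
        B = rng.normal(size=(4, 4))    # a random B is nonsingular with probability 1

        print(np.isclose(np.trace(A @ B), np.trace(B @ A)))                  # property 4
        print(np.isclose(np.trace(np.linalg.inv(B) @ A @ B), np.trace(A)))   # property 5
        print(np.isclose(np.trace(A @ A.T), (A ** 2).sum()))                 # property 6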

  13. Orthogonal Matrix:

      A square matrix A is said to be orthogonal if its rows, considered as vectors, are mutually perpendicular and have unit length.

      Note that this means:

      A A^T = I

      Also, A is orthogonal iff:

      A^-1 = A^T
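      A 2 × 2 rotation matrix is a standard example; here is a minimal NumPy check (the angle is arbitrary):

        import numpy as np

        t = np.pi / 6
        A = np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])    # rows are orthonormal

        print(np.allclose(A @ A.T, np.eye(2)))     # True: A A^T = I
        print(np.allclose(np.linalg.inv(A), A.T))  # True: A^-1 = A^T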

  14. General Properties:

      Let A, B, C be matrices of appropriate dimension (and, where inverses appear, nonsingular), and let λ be a scalar. A numerical spot-check of two of these properties follows the list.

      1. A + B = B + A
      2. (A + B) + C = A + (B + C)
      3. (AB)C = A(BC)
      4. C(A + B) = CA + CB
      5. λ(A + B) = λA + λB
      6. (A^T)^T = A
      7. (A + B)^T = A^T + B^T
      8. (AB)^T = B^T A^T
      9. (ABC)^T = C^T B^T A^T
      10. (AB)^-1 = B^-1 A^-1
      11. (ABC)^-1 = C^-1 B^-1 A^-1
      12. (A^-1)^-1 = A
      13. (A^T)^-1 = (A^-1)^T
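      A NumPy spot-check of properties 8 and 10 on random matrices (a sketch, not a proof):

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.normal(size=(3, 3))
        B = rng.normal(size=(3, 3))    # random matrices are nonsingular with probability 1

        print(np.allclose((A @ B).T, B.T @ A.T))                  # property 8
        print(np.allclose(np.linalg.inv(A @ B),
                          np.linalg.inv(B) @ np.linalg.inv(A)))   # property 10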

  15. SLR & Matrix Algebra:

      Recall that the Simple Linear Regression model states that, given a set of n linearly related pairs (x_i, y_i), we may express the y_i's in terms of the x_i's thus:

      y_i = β_0 + β_1 x_i + ε_i; i=1,…,n

      where, for a particular value of x_i, the corresponding error terms ε_i:

      1. Are normally distributed.
      2. Are homoscedastic (i.e. σ_εi = σ_εj ∀ i, j; i ≠ j).
      3. Are unbiased (i.e. E(ε_i) = 0).

      Let Y be the vector of the n y_i values. Let X be an n × 2 matrix, where the first column of X is a column of n 1's and the second column of X is a column of the n x_i values. Let β be the vector of coefficients (i.e. β_0, β_1). Let ε be the vector of the n error terms (i.e. ε_i). The SLR model may be expressed in matrix terms thus:

      Y = Xβ + ε
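      As a sketch of how the matrix form is used, the code below builds X and Y from hypothetical data and computes the usual least-squares estimate b = (X^T X)^-1 X^T Y (the estimator itself is not derived in this section):

        import numpy as np

        # Hypothetical data, roughly y = 1 + 2x plus noise.
        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        Y = np.array([3.1, 4.9, 7.2, 8.8, 11.1])

        X = np.column_stack([np.ones_like(x), x])   # first column all 1's, second the x_i's

        b = np.linalg.inv(X.T @ X) @ X.T @ Y        # least-squares estimate of beta
        print(b)                                    # approximately [1.05, 1.99]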