23. May 2024

Matrix basics

Understanding basic matrix computations

Introduction

Matrix computation is a fundamental concept in mathematics, with broad applications in fields such as physics, engineering, computer science, economics, and statistics. Understanding the basics of matrix computation provides a foundation for solving complex problems in these areas.

Basic Operations

✔ Addition and Subtraction

Matrices can be added or subtracted if they have the same dimensions. The operation is performed element-wise.

\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix} + \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} a+e & b+f \\ c+g & d+h \end{pmatrix} \]
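As a quick illustration, element-wise addition can be written in a few lines of plain Python, representing matrices as nested lists (the function name `mat_add` is just a label for this sketch):

```python
def mat_add(A, B):
    # Element-wise sum; A and B must have the same dimensions.
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
```

Subtraction works the same way with `-` in place of `+`.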

✔ Scalar Multiplication

Each element of a matrix is multiplied by a scalar, i.e. a single number.

\[ k \cdot \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} ka & kb \\ kc & kd \end{pmatrix} \]
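In code, scalar multiplication is a single comprehension over every entry (again a plain-Python sketch with nested lists):

```python
def scalar_mul(k, A):
    # Multiply every element of A by the scalar k.
    return [[k * x for x in row] for row in A]

print(scalar_mul(3, [[1, 2], [3, 4]]))  # [[3, 6], [9, 12]]
```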

✔ Matrix Multiplication

For two matrices to be multiplied, the number of columns in the first matrix must equal the number of rows in the second. The element in the ith row and jth column of the resulting matrix is computed as the dot product of the ith row of the first matrix and the jth column of the second matrix.

\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} ae + bg & af + bh \\ ce + dg & cf + dh \end{pmatrix} \]
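The row-by-column rule translates directly into three nested loops (written here as comprehensions; a minimal sketch, not an efficient implementation):

```python
def mat_mul(A, B):
    # Entry (i, j) is the dot product of row i of A and column j of B.
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Note that matrix multiplication is generally not commutative: `mat_mul(A, B)` and `mat_mul(B, A)` usually differ.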

✔ Transpose

The transpose of a matrix is obtained by swapping rows with columns.

\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix}^T = \begin{pmatrix} a & c \\ b & d \end{pmatrix} \]
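In Python, `zip(*A)` swaps rows and columns in one step:

```python
def transpose(A):
    # zip(*A) iterates over the columns of A.
    return [list(row) for row in zip(*A)]

print(transpose([[1, 2], [3, 4]]))  # [[1, 3], [2, 4]]
```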

✔ Determinant

The determinant is a scalar value that is a function of the entries of a square matrix.
For a 2x2 matrix:

\[ \text{det} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc \]
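The 2x2 formula is a one-liner (larger determinants are usually computed via LU decomposition rather than cofactor expansion):

```python
def det2(A):
    # Determinant of a 2x2 matrix: ad - bc.
    (a, b), (c, d) = A
    return a * d - b * c

print(det2([[1, 2], [3, 4]]))  # -2
```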

✔ Inverse

The inverse of a matrix \(A\) is denoted as \(A^{-1}\) and is defined such that \(AA^{-1} = A^{-1}A = I\), where \(I\) is the identity matrix. Not all matrices have inverses; a matrix must be square and have a non-zero determinant to have an inverse.
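For the 2x2 case there is a closed form: swap the diagonal entries, negate the off-diagonal entries, and divide by the determinant. A plain-Python sketch:

```python
def inv2(A):
    # Inverse of a 2x2 matrix via the adjugate formula.
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix has no inverse")
    return [[d / det, -b / det], [-c / det, a / det]]

print(inv2([[4, 7], [2, 6]]))  # [[0.6, -0.7], [-0.2, 0.4]]
```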

Eigenvalues and Eigenvectors

Given a square matrix \(A\), an eigenvector \(v\) and its corresponding eigenvalue \(\lambda\) satisfy the equation:

\[ A v = \lambda v \]

This equation can be rewritten as:

\[ (A - \lambda I)v = 0 \]

where \(I\) is the identity matrix. Non-trivial solutions exist if and only if the determinant of \((A - \lambda I)\) is zero:

\[ \text{det}(A - \lambda I) = 0 \]

This equation, called the characteristic equation, can be solved to find the eigenvalues \(\lambda\).
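For a 2x2 matrix the characteristic equation is the quadratic \(\lambda^2 - \text{tr}(A)\lambda + \text{det}(A) = 0\), which can be solved directly (a sketch using `cmath` so that complex eigenvalues are handled too):

```python
import cmath

def eig2(A):
    # Roots of lambda^2 - tr(A)*lambda + det(A) = 0.
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

print(eig2([[2, 1], [1, 2]]))  # eigenvalues 3 and 1 (returned as complex numbers)
```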

Matrix Decompositions

Matrix decompositions simplify complex matrix computations and are used in numerous numerical algorithms. Some common decompositions include:

✔ LU Decomposition

Factorizes a matrix \(A\) into a lower triangular matrix \(L\) and an upper triangular matrix \(U\) such that \(A = LU\).

\[ A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = LU \] \[ L = \begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix}, U = \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix} \]
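The factors can be built row by row with Doolittle's method. The sketch below omits pivoting, so it assumes no zero pivots arise; production code would use partial pivoting (as in \(PA = LU\)):

```python
def lu(A):
    # Doolittle LU decomposition without pivoting: A = L U,
    # with unit diagonal on L. Assumes all pivots U[i][i] are nonzero.
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

L, U = lu([[4.0, 3.0], [6.0, 3.0]])
print(L)  # [[1.0, 0.0], [1.5, 1.0]]
print(U)  # [[4.0, 3.0], [0.0, -1.5]]
```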

✔ QR Decomposition

Decomposes a matrix \(A\) into an orthogonal matrix \(Q\) and an upper triangular matrix \(R\) such that \(A = QR\).

\[ A = QR \]
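One classical way to compute \(Q\) and \(R\) is Gram-Schmidt orthogonalization of the columns of \(A\). A minimal sketch, assuming \(A\) has full column rank (numerically robust code would use Householder reflections instead):

```python
import math

def qr(A):
    # Classical Gram-Schmidt on the columns of A (assumes full column rank).
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(len(A[0]))]
    Q_cols, R = [], [[0.0] * len(cols) for _ in range(len(cols))]
    for j, v in enumerate(cols):
        w = v[:]
        for i, q in enumerate(Q_cols):
            R[i][j] = sum(q[k] * v[k] for k in range(n))   # projection coefficient
            w = [w[k] - R[i][j] * q[k] for k in range(n)]  # remove that component
        R[j][j] = math.sqrt(sum(x * x for x in w))
        Q_cols.append([x / R[j][j] for x in w])
    Q = [[Q_cols[j][i] for j in range(len(Q_cols))] for i in range(n)]
    return Q, R

Q, R = qr([[1.0, 1.0], [0.0, 1.0]])
print(Q)  # [[1.0, 0.0], [0.0, 1.0]]
print(R)  # [[1.0, 1.0], [0.0, 1.0]]
```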

✔ Singular Value Decomposition (SVD)

Decomposes a matrix \(A\) into three matrices \(U\), \(\Sigma\), and \(V^T\) such that \(A = U \Sigma V^T\). \(U\) and \(V\) are orthogonal matrices, and \(\Sigma\) is a diagonal matrix of singular values.

\[ A = U \Sigma V^T \]
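The singular values themselves are the square roots of the eigenvalues of \(A^T A\). For a 2x2 matrix this reduces to solving a quadratic, which the following sketch does directly (general SVD routines use iterative algorithms):

```python
import math

def singular_values_2x2(A):
    # Singular values = square roots of the eigenvalues of A^T A.
    (a, b), (c, d) = A
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d  # entries of A^T A
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return math.sqrt((tr + disc) / 2), math.sqrt((tr - disc) / 2)

print(singular_values_2x2([[3, 0], [0, 4]]))  # (4.0, 3.0)
```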

✔ Cholesky Decomposition

Cholesky decomposition is a factorization of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose. For a matrix \(A\):

\[ A = LL^* \]
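For real matrices the conjugate transpose is just the transpose, so \(A = LL^T\). The standard recurrence fills in \(L\) column by column (a sketch for real symmetric positive-definite input; `math.sqrt` will fail if the matrix is not positive-definite):

```python
import math

def cholesky(A):
    # Cholesky factor L of a real symmetric positive-definite matrix: A = L L^T.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

print(cholesky([[4.0, 2.0], [2.0, 5.0]]))  # [[2.0, 0.0], [1.0, 2.0]]
```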

✔ Eigenvalue Decomposition

For a diagonalizable square matrix \(A\), eigenvalue decomposition factors \(A\) in terms of its eigenvectors and eigenvalues:

\[ A = V \Lambda V^{-1} \]

where \(V\) is a matrix whose columns are the eigenvectors of \(A\), and \(\Lambda\) is a diagonal matrix whose diagonal elements are the eigenvalues of \(A\).
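The identity can be checked numerically. The matrix \(A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\) has eigenpairs \(\lambda_1 = 3\) with \(v_1 = (1, 1)\) and \(\lambda_2 = 1\) with \(v_2 = (1, -1)\); multiplying \(V \Lambda V^{-1}\) back together reproduces \(A\):

```python
# Eigenvectors as columns of V, eigenvalues on the diagonal of Lam.
V = [[1, 1], [1, -1]]
Lam = [[3, 0], [0, 1]]
V_inv = [[0.5, 0.5], [0.5, -0.5]]  # inverse of V (det(V) = -2)

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

print(mat_mul(mat_mul(V, Lam), V_inv))  # [[2.0, 1.0], [1.0, 2.0]]
```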

Conjugate Transpose

The conjugate transpose of a matrix is an operation that involves taking the transpose of a matrix and then taking the complex conjugate of each entry. The conjugate transpose of a matrix \(A\) is often denoted by \(A^*\) or \(A^H\).

Given a matrix \(A\), the conjugate transpose \(A^*\) is formed by two steps:

  1. Transpose

    Flip the matrix over its diagonal, switching the row and column indices of each element.

  2. Complex Conjugate

    Replace each element with its complex conjugate.

If \(A\) is a matrix with elements \(a_{ij}\):

\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \]

The conjugate transpose \(A^*\) is given by:

\[ A^* = \begin{pmatrix} \overline{a_{11}} & \overline{a_{21}} & \cdots & \overline{a_{m1}} \\ \overline{a_{12}} & \overline{a_{22}} & \cdots & \overline{a_{m2}} \\ \vdots & \vdots & \ddots & \vdots \\ \overline{a_{1n}} & \overline{a_{2n}} & \cdots & \overline{a_{mn}} \end{pmatrix} \]
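Both steps can be combined into a single comprehension, using Python's built-in `.conjugate()` method on numbers (a plain-Python sketch with nested lists):

```python
def conj_transpose(A):
    # Swap row/column indices and conjugate each entry in one pass.
    return [[A[i][j].conjugate() for i in range(len(A))]
            for j in range(len(A[0]))]

A = [[1 + 2j, 3], [4j, 5 - 1j]]
print(conj_transpose(A))
```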

Special Matrix Forms

✔ Hermitian Matrix

A Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose. Formally, a matrix \(A\) is Hermitian if:

\[ A = A^* \]

\[ A_{example} = \begin{pmatrix} a & b + ci \\ b - ci & d \end{pmatrix}, \quad a, b, c, d \in \mathbb{R} \]
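Checking the Hermitian property amounts to comparing each entry with the conjugate of its mirror entry (a short sketch; the function name `is_hermitian` is just a label):

```python
def is_hermitian(A):
    # A is Hermitian iff A[i][j] equals the conjugate of A[j][i] for all i, j.
    n = len(A)
    return all(A[i][j] == A[j][i].conjugate()
               for i in range(n) for j in range(n))

print(is_hermitian([[2, 1 + 1j], [1 - 1j, 3]]))  # True
```

Note that the diagonal entries of a Hermitian matrix must be real, since each must equal its own conjugate.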

✔ Jordan Form

The Jordan form of a matrix is a block diagonal matrix consisting of Jordan blocks, which simplifies many matrix computations. Any square matrix \(A\) can be decomposed into:

\[ A = PJP^{-1} \]

where \(J\) is the Jordan matrix and \(P\) is the matrix of generalized eigenvectors.
