Unless stated otherwise, a scalar is a complex number; the real numbers are a subset of the complex numbers. Let matrices A=[aij] and B=[bij] be of the same size, say m×n. The sum of A and B, denoted by A+B, is defined by
A+B=[aij+bij].
Example: Suppose A = [1 2; 3 4] and B = [5 6; 7 8] (rows separated by semicolons). Then A+B = [1+5 2+6; 3+7 4+8] = [6 8; 10 12].
Properties of Matrix Addition:
For any m×n matrices A, B, and C, the following properties hold:
A+B is again an m×n matrix. (Closure Property)
A+B=B+A. (Commutative Property)
(A+B)+C=A+(B+C). (Associative Property)
There is a unique m×n matrix O, called the zero matrix, such that A+O=A for all m×n matrices A. (Additive Identity)
There is a unique m×n matrix −A such that A+(−A)=O. This matrix −A is called the negative of A. (Additive Inverse)
The scalar multiplication of a matrix A by a scalar c is denoted by cA and is defined by
cA=[caij].
Example: Suppose A = [1 2; 3 4] and c=2. Then cA = [2×1 2×2; 2×3 2×4] = [2 4; 6 8].
Properties of Scalar Multiplication:
For any m×n matrices A and B and scalars α and β, the following properties hold:
αA is again an m×n matrix. (Closure Property)
α(βA)=(αβ)A. (Associative Property)
α(A+B)=αA+αB. (Distributive Property)
(α+β)A=αA+βA. (Distributive Property)
There is a scalar 1 such that 1A=A for all m×n matrices A. This scalar is called the multiplicative identity.
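The scalar-multiplication properties above are easy to check numerically. A minimal NumPy sketch (the matrices and scalars here are just illustrative values):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
alpha, beta = 2, 3

# cA multiplies every entry of A by the scalar c
cA = 2 * A

# associative and distributive properties hold entrywise
assert np.array_equal(alpha * (beta * A), (alpha * beta) * A)
assert np.array_equal(alpha * (A + B), alpha * A + alpha * B)
assert np.array_equal((alpha + beta) * A, alpha * A + beta * A)
print(cA)
```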
3.2 Matrix Multiplication
The product of an m×n matrix A and an n×p matrix B is the m×p matrix denoted by AB and defined entrywise by
(AB)ij = ai1b1j + ai2b2j + ⋯ + ainbnj.
Example: Suppose there are two 2×2 matrices A = [1 2; 3 4] and B = [5 6; 7 8]. Then AB = [1×5+2×7 1×6+2×8; 3×5+4×7 3×6+4×8] = [19 22; 43 50].
Remark: Matrix multiplication is not commutative. That is, in general, AB≠BA.
Matrix Multiplication and Addition by using Python
Although small examples are easy to work out by hand, it is good practice to check them in Python. The following code performs matrix multiplication and addition using NumPy.
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

AB = A @ B      # matrix product (equivalently np.dot(A, B))
AplusB = A + B  # elementwise sum

print(AB)
print(AplusB)
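To see the non-commutativity remark concretely, one can also compare AB with BA for the same matrices (a small sketch):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

AB = A @ B
BA = B @ A
print(AB)  # [[19 22], [43 50]]
print(BA)  # [[23 34], [31 46]]
assert not np.array_equal(AB, BA)  # AB differs from BA
```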
3.2 Transposition and Symmetric Matrices
Transposition is a basic matrix operation that is not derived from scalar multiplication and matrix addition. The transpose of a matrix A=[aij] is denoted by AT and is defined by
AT=[aji].
Example: Suppose A = [1 2; 3 4]. Then AT = [1 3; 2 4].
Sometimes a matrix may include complex numbers. In this case, we also take the complex conjugate of each entry. The conjugate transpose of a matrix A is denoted by A∗ and is defined by
A∗=[āji].
Example: Suppose A = [1−4i 2+3i; 3+2i 4−i]. Then A∗ = [1+4i 3−2i; 2−3i 4+i].
Properties of Transposition:
For any m×n matrix A and n×p matrix B, and scalar c, the following properties hold:
(AT)T=A.
(cA)T=cAT.
(A+B)T=AT+BT (when A and B have the same size).
(AB)T=BTAT.
For the complex conjugate transpose, the following properties hold:
(A∗)∗=A.
(cA)∗=cˉA∗.
(A+B)∗=A∗+B∗ (when A and B have the same size).
(AB)∗=B∗A∗.
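In NumPy, the conjugate transpose can be formed with .conj().T; a short sketch using the complex matrix from the example above:

```python
import numpy as np

A = np.array([[1 - 4j, 2 + 3j],
              [3 + 2j, 4 - 1j]])

# conjugate transpose: transpose, then conjugate every entry
A_star = A.conj().T
print(A_star)

# (A*)* = A
assert np.array_equal(A_star.conj().T, A)
```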
Sometimes the transpose of a matrix is the same as the original matrix. In this case, the matrix is called a symmetric matrix. That is, a matrix A is symmetric if A=AT.
Example: Suppose A = [1 2; 2 3]. Then AT = [1 2; 2 3], which is A again.
Definition: Let A=[aij] be a square matrix.
A is said to be a symmetric matrix if A=AT.
A is said to be a skew-symmetric matrix if A=−AT.
A is said to be a Hermitian matrix if A=A∗. This is the complex analog of a symmetric matrix.
A is said to be a skew-Hermitian matrix if A=−A∗. This is the complex analog of a skew-symmetric matrix.
Transposition and Symmetric Matrices by using Python
The following code shows how to perform transposition and check if a matrix is symmetric using Python.
import numpy as np

A = np.array([[1, 2], [2, 3]])
AT = A.T  # or np.transpose(A)

if np.array_equal(A, AT):
    print("The matrix is symmetric.")
else:
    print("The matrix is not symmetric.")
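The same pattern extends to the complex definitions: a sketch checking whether a matrix is Hermitian (the example matrix is an assumed illustration, not taken from the text):

```python
import numpy as np

# equal to its own conjugate transpose, hence Hermitian
H = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])

if np.array_equal(H, H.conj().T):
    print("The matrix is Hermitian.")
else:
    print("The matrix is not Hermitian.")
```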
3.3 Linearity
The concept of linearity is the underlying theme of our subject. In elementary
mathematics the term “linear function” refers to straight lines, but in higher
mathematics linearity means something much more general. Recall that a function f is simply a rule for associating points in one set D, called the domain of f, to points in another set R, the range of f. A linear function is a particular type of function characterized by the following two properties.
Additivity: For any two points x and y in the domain of f, the value of f at the sum x+y is the sum of the values of f at x and y. In symbols, f(x+y)=f(x)+f(y).
Homogeneity: For any point x in the domain of f and any scalar c, the value of f at the product cx is the product of the value of f at x and c. In symbols, f(cx)=cf(x).
These two properties may be combined into a single property called linearity. A function f is linear if it satisfies the following property:
f(cx+y)=cf(x)+f(y)
for all points x and y in the domain of f and all scalars c. The linearity of a function is a fundamental concept in mathematics. It is the key to understanding the behavior of many physical systems and is the basis for the development of the calculus of variations, which is a powerful tool for solving optimization problems.
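Matrix-vector multiplication is the prototypical linear function: f(x)=Ax satisfies the combined linearity property. A quick numerical sketch (the matrix, vectors, and scalar are illustrative values):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

def f(x):
    # f(x) = Ax is linear in x
    return A @ x

x = np.array([1.0, -2.0])
y = np.array([0.5, 3.0])
c = 7.0

# f(cx + y) = c f(x) + f(y)
assert np.allclose(f(c * x + y), c * f(x) + f(y))
print("linearity verified")
```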
There are also two more terms I would like to introduce here.
The trace of a square matrix A=[aij] is denoted by tr(A) and is defined by
tr(A) = a11 + a22 + ⋯ + ann (the sum of the diagonal entries).
A linear combination of matrices A1, A2, …, An with scalar coefficients c1, c2, …, cn is the sum
c1A1 + c2A2 + ⋯ + cnAn.
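Both the trace and linear combinations map directly onto NumPy operations; a brief sketch with illustrative matrices:

```python
import numpy as np

A1 = np.array([[1, 2], [3, 4]])
A2 = np.array([[5, 6], [7, 8]])

# trace: sum of the diagonal entries
t = np.trace(A1)
print(t)  # 1 + 4 = 5

# linear combination c1*A1 + c2*A2 with c1=2, c2=-1
combo = 2 * A1 + (-1) * A2
print(combo)
```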
3.4 Matrix Inversion
Recall that the inverse of a square matrix A is denoted by A−1 and is defined by
AA−1=A−1A=I,
where I is the identity matrix. The inverse of a matrix may not always exist. If it does exist, then the matrix is said to be invertible or nonsingular. If the inverse does not exist, then the matrix is said to be noninvertible or singular.
Existence of Inverse
For an n×n matrix A, the following statements are equivalent:
A is invertible which means A−1 exists.
rank(A)=n.
Ax=0 implies x=0.
A can be transformed into the identity matrix by a sequence of elementary row operations (Gauss-Jordan elimination).
Properties of Inverse
For any invertible n×n matrices A and B, the following properties hold:
(A−1)−1=A.
The product of two invertible matrices is invertible and (AB)−1=B−1A−1.
The inverse of a transpose is the transpose of the inverse, i.e. (AT)−1=(A−1)T. Similarly, for the conjugate transpose, (A∗)−1=(A−1)∗.
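These properties can be checked numerically with np.linalg.inv (the matrices below are assumed illustrative values, chosen to be invertible):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 0.0], [1.0, 2.0]])

A_inv = np.linalg.inv(A)

# A A^{-1} = I and rank(A) = n confirm invertibility
assert np.allclose(A @ A_inv, np.eye(2))
assert np.linalg.matrix_rank(A) == 2

# reverse-order law: (AB)^{-1} = B^{-1} A^{-1}
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))

# transpose law: (A^T)^{-1} = (A^{-1})^T
assert np.allclose(np.linalg.inv(A.T), A_inv.T)
print("inverse properties verified")
```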
3.5 Inverses of Sums and Sensitivity
From the previous section, we have the reverse-order law for inverses of products:
(AB)−1=B−1A−1.
But the inverse of a sum is not as simple as the inverse of a product. Since the derivation is not trivial, we will skip it here; in practice, the Sherman-Morrison formula is used to invert sums of a special form.
The Sherman-Morrison formula states that for any invertible n×n matrix A and n×1 vectors u and v, if A+uvT is invertible, then
(A+uvT)−1 = A−1 − (A−1 u vT A−1) / (1 + vT A−1 u).
It is important to note that the Sherman-Morrison formula is not a general formula for the inverse of a sum. It is a special formula that applies only when the sum is of a particular form.
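The formula is easy to verify numerically by comparing it against a direct inversion of A+uvT (the matrix and vectors below are assumed illustrative values):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
u = np.array([[1.0], [2.0]])  # n×1 column vectors
v = np.array([[3.0], [1.0]])

A_inv = np.linalg.inv(A)

# Sherman-Morrison: (A + u v^T)^{-1}
#   = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)
denom = 1.0 + (v.T @ A_inv @ u).item()
sm = A_inv - (A_inv @ u @ v.T @ A_inv) / denom

# compare against forming the sum and inverting directly
direct = np.linalg.inv(A + u @ v.T)
assert np.allclose(sm, direct)
print("Sherman-Morrison verified")
```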
Recall that we have talked about ill-conditioned matrices in the previous chapter. We know that when we perturb the constant vector b in the linear system Ax=b, the solution x will also be perturbed.
Therefore, we define the following:
Definition: A nonsingular matrix A is said to be ill-conditioned if a small perturbation in the matrix A results in a large change in the inverse of A. The degree of ill-conditioning of a matrix is measured by the condition number of the matrix. We denote the condition number of a matrix A by κ(A) and it is defined by
κ(A)=∥A∥∥A−1∥,
where ∥⋅∥ is a matrix norm.
A matrix norm is a generalization of the vector norm. One common choice, the ∞-norm, is defined by
∥A∥ = maxi ∑j |aij| = maximum absolute row sum.
The condition number measures how well-conditioned or ill-conditioned a matrix is: it is a nonnegative number, and the larger it is, the more ill-conditioned the matrix.
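With NumPy, the condition number in the ∞-norm (maximum absolute row sum) can be computed either from the definition or with np.linalg.cond; a sketch using a nearly singular matrix as an assumed illustration:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])  # nearly singular

# kappa(A) = ||A|| * ||A^{-1}|| in the infinity norm
norm_A = np.linalg.norm(A, np.inf)
norm_A_inv = np.linalg.norm(np.linalg.inv(A), np.inf)
kappa = norm_A * norm_A_inv
print(kappa)  # large value -> ill-conditioned

# NumPy computes the same quantity directly
assert np.isclose(kappa, np.linalg.cond(A, np.inf))
```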
3.6 LU Decomposition
We have now come full circle, and we are back to where the text began—solving
a nonsingular system of linear equations using Gaussian elimination with back
substitution. This time, however, the goal is to describe and understand the
process in the context of matrices.
If Ax=b is a system of linear equations, then we can write it as Ax=LUx=b, where L is a lower triangular matrix and U is an upper triangular matrix. The process of decomposing a matrix A into the product of a lower triangular matrix L and an upper triangular matrix U is called the LU decomposition.
The LU decomposition is a fundamental concept in numerical linear algebra. It is used to solve systems of linear equations, compute the inverse of a matrix, and calculate the determinant of a matrix. The LU decomposition is also used in the Cholesky decomposition, which is used to solve systems of linear equations with symmetric positive definite matrices.
Theorem: Let A be a nonsingular n×n matrix. Then A has an LU decomposition if and only if all leading principal minors of A are nonzero.
Algorithm for LU Decomposition:
Start with the matrix A.
Perform Gaussian elimination to obtain an upper triangular matrix U.
The lower triangular matrix L has ones on its diagonal; its entry below the diagonal in position (i, j) is the multiplier used during elimination, i.e. the multiple of row j that was subtracted from row i.
The LU decomposition of A is given by A=LU.
The system of linear equations Ax=b can be solved by solving the two systems of linear equations Ly=b and Ux=y.
Formally, the solution is x=U−1L−1b, although in practice the two substitution steps above are used rather than explicit inverses.
The inverse of the matrix A is given by A−1=U−1L−1.
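The steps above can be sketched directly in NumPy with a small Doolittle-style factorization (no pivoting, so every pivot is assumed nonzero; the helper name lu_no_pivot is hypothetical, not a library routine):

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization without pivoting; assumes
    every pivot (leading principal minor) is nonzero."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # elimination multiplier
            U[i, :] -= L[i, k] * U[k, :]  # zero out entry (i, k)
    return L, U

A = np.array([[2.0, 2.0, 2.0],
              [4.0, 7.0, 7.0],
              [6.0, 18.0, 22.0]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)

# solve Ax = b via Ly = b, then Ux = y
b = np.array([6.0, 18.0, 46.0])
y = np.linalg.solve(L, b)   # forward substitution in practice
x = np.linalg.solve(U, y)   # back substitution in practice
assert np.allclose(A @ x, b)
print(L)
print(U)
```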
Example: Suppose we have the 3×3 matrix
A = [2 2 2; 4 7 7; 6 18 22].
We try to find the LU decomposition of A. Applying Gaussian elimination to A, we obtain
U = [2 2 2; 0 3 3; 0 0 4].
For the lower triangular matrix L, we apply the following: