Note for Linear Algebra 3
Lecture 17: Determinants and Their Properties
Determinant
The determinant of matrix A is a number associated with the matrix, denoted $\det A$ or $|A|$
Properties of determinants
The determinant of the identity matrix is 1 (property one)
Exchanging two rows reverses the sign of the determinant (property two)
The determinant of a permutation matrix is 1 or -1, depending on whether the number of row exchanges is even or odd
If two rows are equal, the determinant is 0 (deduced directly from property two: exchanging the two equal rows flips the sign but leaves the matrix unchanged, so $\det = -\det = 0$)
Elimination (subtracting a multiple of one row from another) does not change the determinant (proof below)
If a row is all zeros, the determinant is 0 (multiply that zero row by the scalar 0: the matrix is unchanged, but linearity in the row multiplies the determinant by 0)
$\det A = 0$ when and only when A is a singular matrix (proof below)
The determinant is linear in each row separately, but the determinant itself is not a linear function of the matrix: $\det(A+B) \neq \det A + \det B$ in general
Proof that elimination does not change the determinant
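A sketch for the 2×2 case (the general case works the same way, one row at a time): subtract $\ell$ times row 1 from row 2, split by linearity in row 2, and use the equal-rows property:

$$\begin{vmatrix} a & b \\ c-\ell a & d-\ell b \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} - \ell \begin{vmatrix} a & b \\ a & b \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix}$$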
Proof that the transpose does not change the determinant
Write $A = LU$; then $A^{T} = U^{T}L^{T}$, so $|A^{T}| = |U^{T}||L^{T}|$ while $|A| = |L||U|$. The four matrices $L$, $U$, $L^{T}$, $U^{T}$ are all triangular, each determinant equals the product of its diagonal entries, and transposing does not touch the diagonal, so the two products are equal and $|A^{T}| = |A|$.
Triangular matrix determinant
- The determinant of a triangular matrix U is the product of the entries on the diagonal (the product of the pivots)
- Why do the other entries of the triangular matrix not matter? Because elimination can reduce it to a matrix with only diagonal entries, and elimination does not change the determinant
- Why the product of the diagonal entries? Because after elimination the diagonal entries can be factored out one at a time (linearity in each row), yielding $\det = d_1 d_2 \cdots d_n \det I = d_1 d_2 \cdots d_n$, where the determinant of the identity matrix is 1
- The determinant of a singular matrix is 0: elimination produces a row of all zeros. The determinant of an invertible matrix is not 0: it reduces to a triangular matrix, and the determinant is the product of that triangular matrix's diagonal entries
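A quick numerical sanity check of the triangular case; a minimal numpy sketch (the example matrix is mine):

```python
import numpy as np

# For a triangular matrix, det equals the product of the diagonal
# entries; the off-diagonal entries do not matter.
U = np.array([[2.0, 5.0, -3.0],
              [0.0, 4.0,  7.0],
              [0.0, 0.0,  0.5]])
print(np.linalg.det(U))        # 4.0
print(np.prod(np.diag(U)))     # 4.0 = 2 * 4 * 0.5
```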
A little more
- Odd and even permutations always give different determinants ($-1$ vs $+1$), so a permutation matrix reached by an odd number of exchanges can never equal one reached by an even number: permutations strictly separate into odd and even, i.e. parity is well defined
Lecture 18: Determinant Formulas and Cofactors
Determinant formula
Derive the 2x2 determinant by splitting each row with linearity:
$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = \begin{vmatrix} a & 0 \\ c & d \end{vmatrix} + \begin{vmatrix} 0 & b \\ c & d \end{vmatrix} = \begin{vmatrix} a & 0 \\ c & 0 \end{vmatrix} + \begin{vmatrix} a & 0 \\ 0 & d \end{vmatrix} + \begin{vmatrix} 0 & b \\ c & 0 \end{vmatrix} + \begin{vmatrix} 0 & b \\ 0 & d \end{vmatrix} = ad - bc$$
(the two terms with a repeated column vanish)
The method: take one row at a time and split it using linearity in that row, factor out the entries, reduce each surviving piece to a permutation of the identity matrix by row exchanges, and read off the answer from properties one ($\det I = 1$) and two (each exchange flips the sign)
Expanded to a 3x3 matrix, the first row splits into three parts, each of which splits into three more for the second row, giving 27 parts in all. The parts that are not zero are exactly those matrices with one entry in each row and each column.
For example, keep the entries in positions (1,1), (2,3), (3,2):
$$\begin{vmatrix} a_{11} & 0 & 0 \\ 0 & 0 & a_{23} \\ 0 & a_{32} & 0 \end{vmatrix}$$
Extract the factors to obtain
$$a_{11}a_{23}a_{32}\begin{vmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{vmatrix}$$
and swap the second and third rows to get the identity matrix; since one row exchange was performed, the term is $-a_{11}a_{23}a_{32}$.
An $n \times n$ determinant splits into $n!$ such parts: the first row offers n column choices, the second row $n-1$ (its column must not repeat), and so on. The determinant formula is the sum of these $n!$ signed parts:
$$\det A = \sum_{\sigma} \pm\, a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{n\sigma(n)}$$
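The big formula translates directly into code; a minimal numpy sketch (the function name `det_big_formula` is mine), checked against `np.linalg.det`:

```python
import numpy as np
from itertools import permutations

def det_big_formula(A):
    """Sum of n! signed terms, one entry from each row and column."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # Sign = parity of the permutation, counted by inversions.
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        sign = -1.0 if inversions % 2 else 1.0
        term = sign
        for row, col in enumerate(perm):
            term *= A[row, col]
        total += term
    return total

A = np.random.rand(4, 4)
print(det_big_formula(A), np.linalg.det(A))  # the two should agree
```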
Cofactors
- $\det A = a_{11}(a_{22}a_{33} - a_{23}a_{32}) + a_{12}(\dots) + a_{13}(\dots)$
- Factor out an entry of the first row; what remains multiplying it, i.e. the content inside the parentheses, is its cofactor
- From the matrix perspective: select an entry; its cofactor is (with a sign attached) the determinant of the submatrix obtained by deleting that entry's row and column
- The cofactor of $a_{ij}$ is written $C_{ij}$
- Pay attention to the sign: $C_{ij} = (-1)^{i+j}\det M_{ij}$, where $M_{ij}$ is the submatrix with row i and column j deleted. If $i+j$ is even the sign is positive; if odd, negative. The sign here means the sign placed in front of the submatrix's normally computed determinant
- Expanding along the first row: $\det A = a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n}$
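Cofactor expansion is naturally recursive; a minimal sketch (function name mine, numpy used only for checking):

```python
import numpy as np

def det_cofactor(A):
    """Expand along the first row: det A = sum of a_1j * C_1j."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j; the cofactor carries (-1)^(0+j).
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.random.rand(4, 4)
print(det_cofactor(A), np.linalg.det(A))  # the two should agree
```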
Lecture 19: Cramer’s Rule, Inverse Matrix, Volume
Invertible matrix
A matrix is invertible only when its determinant is nonzero
Formula for the inverse: $A^{-1} = \frac{1}{\det A}C^{T}$, where C is the cofactor matrix
$C_{ij}$ is the cofactor of $a_{ij}$. Proof: it suffices to show $AC^{T} = (\det A)\,I$.
On the diagonal of $AC^{T}$ every entry is $\det A$, because each one is a cofactor expansion like $\det A = a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n}$. Every off-diagonal entry is 0, because multiplying the entries of row a by the cofactors of row b computes the determinant of a matrix whose rows a and b are equal, and such a determinant is 0.
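A small numerical check of $AC^{T} = (\det A)\,I$; a numpy sketch, with `cofactor_matrix` a helper I introduce here:

```python
import numpy as np

def cofactor_matrix(A):
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.random.rand(3, 3)
C = cofactor_matrix(A)
print(A @ C.T)             # ~ det(A) * I: det A on the diagonal, 0 elsewhere
print(np.linalg.det(A))
print(np.allclose(np.linalg.inv(A), C.T / np.linalg.det(A)))  # True
```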
Cramer’s Rule
$Ax = b$, so $x = A^{-1}b = \frac{1}{\det A}C^{T}b$.
Cramer’s rule states that $x_i = \frac{\det B_i}{\det A}$, where $B_i$ is obtained by replacing the i-th column of A with b, while keeping the rest unchanged.
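Cramer’s rule in a few lines; a minimal sketch (conceptually useful, though in practice `np.linalg.solve` is far cheaper):

```python
import numpy as np

def cramer_solve(A, b):
    """x_i = det(B_i) / det(A), with B_i = A with column i replaced by b."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        B = A.copy()
        B[:, i] = b
        x[i] = np.linalg.det(B) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b), np.linalg.solve(A, b))  # both give [0.8, 1.4]
```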
Volume
The determinant of A can represent a volume, for example, the determinant of a 3x3 matrix represents a volume within a three-dimensional space
Each row of the matrix represents one edge of a box (originating from the same vertex), and the determinant is the volume of the box; the sign of the determinant represents the left-hand or right-hand system.
(1) The identity matrix corresponds to the unit cube, with volume 1
For an orthogonal matrix Q, $Q^{T}Q = I$, so $(\det Q)^2 = 1$ and $\det Q = \pm 1$; the volume of its box is also 1.
The box corresponding to Q is the unit cube of the identity matrix rotated by some angle in space.
(3a) If a row of a matrix is doubled, i.e., one set of edges of the box is doubled, the volume is also doubled. From the perspective of determinants, the factor can be factored out, so the determinant is also doubled
(2) Swapping two rows of the matrix permutes the edges but does not change the volume of the box (only the sign of the determinant changes)
(3b) If a row of the matrix is split into a sum, the box correspondingly splits into two boxes whose volumes add
So the defining properties of the determinant (1, 2, 3a, 3b) can all be verified for volume, which is why the determinant measures volume
Lecture 20: Eigenvalues and Eigenvectors
Eigenvectors
- Given matrix A, matrix A can be regarded as a function acting on a vector x, resulting in the vector Ax
- When $Ax$ is parallel to $x$, i.e. $Ax = \lambda x$, we call $x$ an eigenvector and $\lambda$ an eigenvalue
- If A is a singular matrix, $\lambda = 0$ is an eigenvalue (there is an $x \neq 0$ with $Ax = 0 = 0x$)
Several examples
If A is a projection matrix (onto a plane), its eigenvectors can be seen directly: $Ax$ projects $x$ onto the plane, and any vector already on the plane is unchanged by the projection, so $Ax = x$ and the eigenvalue is 1. If a vector is perpendicular to the plane, $Ax = 0 = 0x$, and the eigenvalue is 0. Therefore the eigenvectors of the projection matrix A fall into two cases, with eigenvalues 1 or 0.
Another example: the permutation matrix $A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ has eigenvector $(1,1)$ with eigenvalue 1 and eigenvector $(-1,1)$ with eigenvalue $-1$.
An n*n matrix has n eigenvalues
The sum of the eigenvalues equals the sum of the diagonal entries; this sum is called the trace
How to solve $Ax = \lambda x$: rewrite it as $(A - \lambda I)x = 0$.
For this equation to have nonzero solutions, $A - \lambda I$ must be singular, i.e. $\det(A - \lambda I) = 0$ (the characteristic equation); solve it for the $\lambda$'s, then find each eigenvector in the null space of $A - \lambda I$.
Because adding a multiple of the identity matrix changes nothing else, the eigenvectors remain the same and each eigenvalue grows by the coefficient of the identity: $(A + cI)x = Ax + cx = (\lambda + c)x$
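A quick numpy check of the shift property (the example matrix and shift are mine):

```python
import numpy as np

# Adding cI keeps the eigenvectors and shifts every eigenvalue by c.
A = np.array([[3.0, 1.0], [1.0, 3.0]])
vals, vecs = np.linalg.eig(A)              # eigenvalues 4 and 2
for k in range(2):
    v = vecs[:, k]
    # (A + 4I) v should equal (lambda + 4) v
    print(np.allclose((A + 4 * np.eye(2)) @ v, (vals[k] + 4.0) * v))
```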
The eigenvalues of A+B are not necessarily the sum of the eigenvalues of A and B, because their eigenvectors may not be the same. Similarly, the eigenvalues of AB are not necessarily the product of their eigenvalues.
For another example, consider the 90° rotation matrix $Q = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$. The trace says $\lambda_1 + \lambda_2 = 0$ and the determinant says $\lambda_1\lambda_2 = 1$.
However, it can be seen that $\det(Q - \lambda I) = \lambda^2 + 1 = 0$ has no real solutions: the eigenvalues are the complex pair $\pm i$ (a rotated vector is never parallel to itself). The further a matrix is from symmetric, the harder it is to obtain real eigenvalues.
An even worse case is the triangular matrix $A = \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix}$: the eigenvalue 3 repeats, and the eigenvectors fall short (see the next lecture).
Lecture 21: Diagonalization and Powers of A
Diagonalization
Assuming A has n linearly independent eigenvectors, arranged as columns to form the matrix S, i.e., the eigenvector matrix
All discussions about matrix diagonalization presented here are under the premise that S is invertible, i.e., the n eigenvectors are linearly independent
Assuming S is invertible, i.e. the n eigenvectors are linearly independent, we can compute
$$AS = A\begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} = \begin{bmatrix} \lambda_1 x_1 & \lambda_2 x_2 & \cdots & \lambda_n x_n \end{bmatrix} = S\Lambda$$
so $S^{-1}AS = \Lambda$ and $A = S\Lambda S^{-1}$, where $\Lambda$ is the diagonal matrix of eigenvalues. Here we obtain a matrix decomposition other than $A = LU$ and $A = QR$.
The two equations $A^2x = \lambda Ax = \lambda^2 x$ and $A^2 = S\Lambda S^{-1}S\Lambda S^{-1} = S\Lambda^2 S^{-1}$ say the same thing: squaring leaves the eigenvectors unchanged and squares the eigenvalues, and similarly $A^k = S\Lambda^k S^{-1}$ for the k-th power. Eigenvalues and eigenvectors thus help us understand matrix powers: decompose A into the eigenvector matrix times a diagonal matrix, and in the k-fold product the inner $S^{-1}S$ pairs cancel, as the formula above shows.
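A numpy check that $A^k = S\Lambda^k S^{-1}$ (the example matrix is mine):

```python
import numpy as np

A = np.array([[0.9, 0.3], [0.1, 0.7]])
vals, S = np.linalg.eig(A)      # columns of S are the eigenvectors

k = 5
Ak = S @ np.diag(vals ** k) @ np.linalg.inv(S)        # S Lambda^k S^{-1}
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True
```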
What kind of matrix has powers tending to 0 (stable)? One whose eigenvalues all satisfy $|\lambda_i| < 1$, since $A^k = S\Lambda^k S^{-1}$ and then $\Lambda^k \to 0$
Which matrices can be diagonalized? If all eigenvalues are different, A has n independent eigenvectors and can be diagonalized; with repeated eigenvalues there may or may not be enough eigenvectors
If matrix A is already diagonal, then $\Lambda$ is the same as A.
The number of times an eigenvalue repeats is called its algebraic multiplicity. For triangular matrices, such as
$$A = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}$$
the eigenvalue $\lambda = 2$ has algebraic multiplicity 2, but the geometric multiplicity is 1: $A - 2I = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ has only the eigenvector direction $(1, 0)$. Therefore this triangular matrix cannot be diagonalized; two linearly independent eigenvectors do not exist.
Powers of A
Most matrices have a full set of linearly independent eigenvectors and can be diagonalized. When diagonalization is possible, we focus on how to compute the powers of A.
Because the n eigenvectors are linearly independent, they serve as a basis covering the whole n-dimensional space, so naturally any $u_0$ can be represented as a linear combination of the eigenvectors, $u_0 = Sc$, with c the vector of coefficients. Then
$$A^k u_0 = c_1\lambda_1^k x_1 + c_2\lambda_2^k x_2 + \cdots + c_n\lambda_n^k x_n = S\Lambda^k c$$
This formula is the method for computing matrix powers; the next example applies it to the Fibonacci sequence.
How fast does the Fibonacci sequence grow? The eigenvalues decide. We construct a vector that turns the Fibonacci recurrence into a matrix iteration.
Define the vector $u_k = \begin{bmatrix} F_{k+1} \\ F_k \end{bmatrix}$
Using this vector, the two equations $F_{k+2} = F_{k+1} + F_k$ and $F_{k+1} = F_{k+1}$ can be written in matrix form:
$$u_{k+1} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}u_k = Au_k$$
$\det(A - \lambda I) = \lambda^2 - \lambda - 1 = 0$ gives the two eigenvalues $\lambda_{1,2} = \frac{1 \pm \sqrt 5}{2}$, and the corresponding eigenvectors are easily found: $x_{1,2} = \begin{bmatrix} \lambda_{1,2} \\ 1 \end{bmatrix}$
Returning to the Fibonacci sequence: its growth rate is determined by the eigenvalues of the "sequence update matrix" we constructed, as can be seen from
$$A^{100}u_0 = c_1\lambda_1^{100}x_1 + c_2\lambda_2^{100}x_2$$
so $F_{100}$ can be read from this expansion. The initial values give $u_0 = \begin{bmatrix} F_1 \\ F_0 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$; expanding $u_0 = c_1x_1 + c_2x_2$ in the eigenvectors $x_1, x_2$ determines $c_1, c_2$, and then $F_{100}$ can be obtained.
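The eigenvalue route to $F_{100}$ in a few lines of numpy, checked against direct iteration (floating point keeps about 16 digits, so the two agree only approximately):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 0.0]])
u0 = np.array([1.0, 0.0])            # u0 = (F1, F0)

vals, S = np.linalg.eig(A)           # lambda = (1 +/- sqrt(5)) / 2
c = np.linalg.solve(S, u0)           # u0 = S c
F100 = (S @ (c * vals ** 99))[0]     # u_99 = (F100, F99)
print(F100)                          # ~ 3.5422e20

# Cross-check by direct integer iteration
a, b = 1, 0
for _ in range(99):
    a, b = a + b, a
print(a)                             # 354224848179261915075
```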
Summary
- We find that when A has n independent eigenvectors (so that S is invertible), A can be decomposed into the form $A = S\Lambda S^{-1}$
- This form makes powers easy to compute: $A^k = S\Lambda^k S^{-1}$, i.e. the eigenvalue matrix of $A^k$ is the k-th power of the eigenvalue matrix of A
- We attempt to apply this feature in solving the Fibonacci sequence, first converting the sequence update into a matrix form
- Determine the eigenvalues and eigenvectors of the matrix
- From the expansion of the powers of A, the result is dominated by the largest eigenvalue, therefore $F_{100} \approx c_1{\left(\frac{1 + \sqrt 5}{2}\right)}^{100}$
- The initial values $F_0 = 0$, $F_1 = 1$ determine $c_1, c_2$, and $F_{100}$ can be computed
F{100}$ - This is an example of a difference equation; the next section will discuss differential equations
Lecture 22: Differential Equations and exp(At)
Differential Equation
Linear differential equations with constant coefficients have solutions in exponential form; once we posit the exponential form, linear algebra determines the exponents and the coefficients
For example:
$$\frac{du_1}{dt} = -u_1 + 2u_2, \qquad \frac{du_2}{dt} = u_1 - 2u_2$$
First we write down the coefficient matrix $A = \begin{bmatrix} -1 & 2 \\ 1 & -2 \end{bmatrix}$ and find its eigenvalues and eigenvectors: A is singular, so $\lambda_1 = 0$ is an eigenvalue, and from the trace ($-3$) the second eigenvalue is $\lambda_2 = -3$; the two eigenvectors are $x_1 = (2, 1)$ and $x_2 = (1, -1)$. The general solution of the differential equation will be
$$u(t) = c_1e^{\lambda_1 t}x_1 + c_2e^{\lambda_2 t}x_2$$
Why? Substitute a pure exponential $u = e^{\lambda t}x$ into $\frac{du}{dt} = Au$: the left side is $\lambda e^{\lambda t}x$ and the right side is $e^{\lambda t}Ax$, equal exactly when $Ax = \lambda x$. In the scalar differential equation $\frac{du}{dt} = au$, the form of the solution is $u = ce^{at}$; in the matrix differential equation $\frac{du}{dt} = Au$, each eigen-direction behaves like its own scalar equation, giving the combination above.
The constants are solved from the initial values: $c_1x_1 + c_2x_2 = u(0)$, i.e. the eigenvector matrix S times the coefficient vector c yields the initial values, $Sc = u(0)$. With $u(0) = (1, 0)$ here, $c_1 = c_2 = \frac{1}{3}$.
It can be seen that as t approaches infinity, the $e^{-3t}$ term dies out and the solution of the example equation reduces to only the steady-state part: $u(\infty) = c_1x_1 = \frac{1}{3}\begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 2/3 \\ 1/3 \end{bmatrix}$
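The whole example in numpy; a minimal sketch:

```python
import numpy as np

A = np.array([[-1.0, 2.0], [1.0, -2.0]])
u0 = np.array([1.0, 0.0])

vals, S = np.linalg.eig(A)    # eigenvalues 0 and -3
c = np.linalg.solve(S, u0)    # u(0) = S c

def u(t):
    # u(t) = c1 e^{l1 t} x1 + c2 e^{l2 t} x2
    return S @ (c * np.exp(vals * t))

print(u(0.0))     # [1. 0.]
print(u(100.0))   # ~ [0.667, 0.333], the steady state
```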
When does the solution tend towards 0? All eigenvalues must be negative, because every factor $e^{\lambda t}$ needs to tend towards 0.
What if the eigenvalues are complex? For $\lambda = a + bi$, the factor $e^{ibt}$ has magnitude 1, so $|e^{\lambda t}| = e^{at}$: the solution still tends towards 0 provided the real part is negative.
When does a steady state exist? When $\lambda = 0$ is an eigenvalue and all the other eigenvalues have negative real part, as in the example above.
When does the solution fail to converge? When any eigenvalue has a real part greater than 0.
If the sign of the coefficient matrix is flipped, the eigenvalues also change sign: a steady-state solution remains steady-state, but a convergent solution becomes divergent.
How to determine directly from a (2x2) matrix whether the solution converges, i.e. whether all eigenvalues have real part less than 0?
The trace of the matrix should be less than 0, since it equals $\lambda_1 + \lambda_2$; but a negative trace alone does not guarantee convergence, as $\begin{bmatrix} -2 & 0 \\ 0 & 1 \end{bmatrix}$ shows: the trace is $-1$, yet the eigenvalue 1 makes the solution blow up.
Therefore another condition is required: the value of the determinant is the product of the eigenvalues, so the determinant should be greater than 0.
exp(At)
- Can the solution be expressed in the form $u(t) = e^{At}u(0)$? Yes: $e^{At}$ is the matrix exponential, defined by the power series $e^{At} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots$
- The matrix A couples the components of u; to decouple, we first diagonalize: set $u = Sv$, giving $\frac{dv}{dt} = \Lambda v$, so $v(t) = e^{\Lambda t}v(0)$ and $u(t) = Se^{\Lambda t}S^{-1}u(0)$, i.e. $e^{At} = Se^{\Lambda t}S^{-1}$
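A check that $e^{At} = Se^{\Lambda t}S^{-1}$, comparing against scipy's matrix exponential (assuming scipy is available):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0], [1.0, -2.0]])
t = 1.5

vals, S = np.linalg.eig(A)
via_eig = S @ np.diag(np.exp(vals * t)) @ np.linalg.inv(S)  # S e^{Lt} S^{-1}
print(np.allclose(expm(A * t), via_eig))                    # True
```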
Lecture 23: Markov Matrices; Fourier Series
Markov matrix
A typical Markov matrix: $A = \begin{bmatrix} 0.1 & 0.01 & 0.3 \\ 0.2 & 0.99 & 0.3 \\ 0.7 & 0 & 0.4 \end{bmatrix}$
Every entry is greater than or equal to 0, every column sums to 1, and all powers of a Markov matrix are again Markov matrices.
$\lambda = 1$ is always an eigenvalue, and the absolute values of the other eigenvalues are all less than 1. In the previous lecture we saw that powers expand as
$$A^ku_0 = c_1\lambda_1^kx_1 + c_2\lambda_2^kx_2 + \cdots + c_n\lambda_n^kx_n$$
When A is a Markov matrix with a single eigenvalue 1 and all other eigenvalues smaller than 1 in magnitude, the terms with $|\lambda| < 1$ tend to 0 as k grows, and only the $\lambda = 1$ term survives; the entries of the corresponding eigenvector are all greater than 0, giving the steady state.
When every column sums to 1, $\lambda = 1$ necessarily exists as an eigenvalue.
Proof: 1 is an eigenvalue exactly when $A - I$ is singular. Every column of $A - I$ sums to 0 (each column of A summed to 1 and we subtracted 1 from it), so the rows add up to the zero row: the row vectors are linearly dependent, the matrix is singular, and the all-ones vector $(1, 1, \dots, 1)$ lies in its left null space.
For the Markov matrix A, we study the iteration $u_{k+1} = Au_k$, i.e. $u_k = A^ku_0$.
An example: u holds the populations of California and Massachusetts, and A is the population mobility matrix
$$\begin{bmatrix} u_{\text{cal}} \\ u_{\text{mass}} \end{bmatrix}_{k+1} = \begin{bmatrix} 0.9 & 0.2 \\ 0.1 & 0.8 \end{bmatrix}\begin{bmatrix} u_{\text{cal}} \\ u_{\text{mass}} \end{bmatrix}_k$$
Each year (step k), 80% of the people in Massachusetts stay and 20% move to California, while 90% stay in California and 10% of Californians relocate to Massachusetts.
For this Markov matrix A, the eigenvalues are $\lambda_1 = 1$ and, from the trace ($1.7$), $\lambda_2 = 0.7$.
For the eigenvalue 1 the eigenvector is easily found as $(2, 1)$, and for the eigenvalue 0.7 the eigenvector is $(-1, 1)$. This gives the formula we are to study:
$$u_k = c_1 \cdot 1^k\begin{bmatrix} 2 \\ 1 \end{bmatrix} + c_2 \cdot (0.7)^k\begin{bmatrix} -1 \\ 1 \end{bmatrix}$$
Assuming there were initially 0 people in California and 1000 in Massachusetts, i.e. $u_0 = \begin{bmatrix} 0 \\ 1000 \end{bmatrix}$, substituting yields $c_1 = \frac{1000}{3}$ and $c_2 = \frac{2000}{3}$. After many years the populations stabilize at two-thirds of the 1000 people in California and one-third in Massachusetts.
(Making the rows, rather than the columns, sum to 1 is the other convention for defining a Markov matrix.)
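Iterating the population example in numpy shows the convergence:

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
u = np.array([0.0, 1000.0])   # (California, Massachusetts) at k = 0

for k in range(50):           # 0.7^50 is ~1e-8, so this has converged
    u = A @ u
print(u)                      # ~ [666.67, 333.33]: the 2/3 vs 1/3 split
```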
Fourier Series
First, a discussion of projection problems with an orthonormal basis
If $q_1, q_2, \dots, q_n$ is an orthonormal basis, any vector v is a linear combination of this basis:
$$v = x_1q_1 + x_2q_2 + \cdots + x_nq_n$$
We now need to determine the combination coefficients $x_1, \dots, x_n$. One method: take the inner product of $q_i$ with v; orthonormality kills every term except one, so $q_i^Tv = x_i$, and the coefficients are calculated one by one.
Written in matrix form: $v = Qx$, so $x = Q^{-1}v = Q^Tv$.
Now discussing Fourier series
We hope to decompose a function f as
$$f(x) = a_0 + a_1\cos x + b_1\sin x + a_2\cos 2x + b_2\sin 2x + \cdots$$
The key is that in this decomposition, $1, \cos x, \sin x, \cos 2x, \sin 2x, \dots$ constitute an infinite orthogonal basis for a function space, i.e. the inner products of distinct basis functions are 0. (The inner product of vectors is a discrete sum; the inner product of functions is a continuous integral, $f^Tg = \int_0^{2\pi} f(x)g(x)\,dx$.)
How do we calculate the Fourier coefficients?
Exactly as in the previous vector example: sequentially compute the inner product of f with each element of the orthogonal basis; orthogonality leaves the corresponding coefficient multiplied by the squared length of that basis function. For example:
$$\int_0^{2\pi} f(x)\cos x\,dx = a_1\int_0^{2\pi}\cos^2 x\,dx = a_1\pi \quad\Rightarrow\quad a_1 = \frac{1}{\pi}\int_0^{2\pi} f(x)\cos x\,dx$$
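Numerically recovering a coefficient for a test function of my choosing; a sketch using `np.trapz` for the integral:

```python
import numpy as np

# Recover a1 by integrating f against cos x over [0, 2*pi].
f = lambda x: 2.0 + 3.0 * np.cos(x) + 0.5 * np.sin(2 * x)

x = np.linspace(0.0, 2.0 * np.pi, 20001)
a1 = np.trapz(f(x) * np.cos(x), x) / np.pi
print(a1)   # ~ 3.0, the coefficient of cos x in f
```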
Lecture 24: Symmetric Matrices and Positive Definiteness
Symmetric matrix
The eigenvalues of a symmetric matrix are real numbers, and the eigenvectors corresponding to distinct eigenvalues are mutually orthogonal
For a general diagonalizable matrix, $A = S\Lambda S^{-1}$, where S is the matrix of eigenvectors.
For a symmetric matrix, $A = Q\Lambda Q^{-1} = Q\Lambda Q^{T}$, where Q is the matrix of orthonormal eigenvectors.
Why are all the eigenvalues real numbers?
Start from $Ax = \lambda x$ and conjugate both sides; since we are only considering a real matrix A, this gives $A\bar x = \bar\lambda\bar x$, so $\bar\lambda$ is also an eigenvalue with eigenvector $\bar x$. Now transpose both sides of that equation: $\bar x^{T}A^{T} = \bar\lambda\bar x^{T}$, and with $A^{T} = A$ this is $\bar x^{T}A = \bar\lambda\bar x^{T}$.
Multiply the first equation on the left by $\bar x^{T}$ and the last equation on the right by $x$:
$$\bar x^{T}Ax = \lambda\,\bar x^{T}x \qquad\text{and}\qquad \bar x^{T}Ax = \bar\lambda\,\bar x^{T}x$$
Since $\bar x^{T}x = \sum_i |x_i|^2 > 0$, comparing the two gives $\lambda = \bar\lambda$, i.e. the eigenvalues are real.
For complex matrices, the condition $A = \bar A^{T}$ is required in place of symmetry (A must equal its conjugate transpose).
is required to satisfy symmetryFor symmetric matrices
So every symmetric matrix is a combination of some mutually orthogonal projection matrices
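A numpy check of the spectral decomposition (the example matrix is mine):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
vals, Q = np.linalg.eigh(A)      # orthonormal eigenvectors for symmetric A

# Rebuild A as a combination of rank-one projections lambda_i q_i q_i^T
rebuilt = sum(vals[i] * np.outer(Q[:, i], Q[:, i]) for i in range(2))
print(np.allclose(A, rebuilt))   # True
```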
For symmetric matrices, the number of positive pivots equals the number of positive eigenvalues, and the product of the pivots equals the product of the eigenvalues, which equals the determinant of the matrix
Positive definiteness
- Positive definite matrices are a subclass of symmetric matrices: all their eigenvalues are positive, all their pivots are positive, and all their leading subdeterminants are positive
- The sign of eigenvalues is related to stability
- Eigenvalues, determinants, and pivots are unified here as one story in linear algebra