# Chapter 26 Linear Algebra in Differential Equations
Prerequisites: Eigenvalues and diagonalization (Ch6) · Jordan canonical form (Ch12) · Matrix functions (Ch13)
Chapter arc: \(\mathbf{x}' = A\mathbf{x}\) (homogeneous systems) → \(e^{At}\) (matrix exponential) → real parts of eigenvalues (stability) → phase plane (geometric classification) → companion matrix (higher-order to systems) → variation of parameters (nonhomogeneous) → Sturm-Liouville (PDEs) → Floquet (periodic systems)
Linear differential equations represent one of the most classical and important application domains of linear algebra. The solutions of linear constant-coefficient ODE systems can be completely described by the matrix exponential \(e^{At}\), while the long-term behavior of the system (stability, oscillation, decay) is entirely determined by the eigenvalues of the coefficient matrix \(A\). This chapter begins with first-order constant-coefficient linear systems and progresses through the matrix exponential, stability analysis, phase plane classification, higher-order equations, nonhomogeneous equations, eigenvalue problems in PDEs, and Floquet theory for periodic-coefficient systems.
## 26.1 Linear ODE Systems
Core structure: The solution space of \(\mathbf{x}' = A\mathbf{x}\) is an \(n\)-dimensional linear space → each column of the fundamental matrix \(\Phi(t)\) is a solution → \(\Phi(t) = e^{At}\) when \(\Phi(0) = I\)
Link: The dimension theorem from Ch4 (linear spaces) is directly manifested here
Linear constant-coefficient ODE systems are fundamental models in dynamical systems, control theory, and physics.
Definition 26.1 (Linear constant-coefficient ODE system)
Let \(A \in \mathbb{R}^{n \times n}\) be a constant matrix. A linear constant-coefficient ordinary differential equation system is

\[ \mathbf{x}'(t) = A\mathbf{x}(t), \]

where \(\mathbf{x}(t) \in \mathbb{R}^n\) is the state vector and \(t \ge 0\).
Definition 26.2 (Fundamental matrix)
An \(n \times n\) matrix-valued function \(\Phi(t)\) is called a fundamental matrix of the equation \(\mathbf{x}' = A\mathbf{x}\) if

\[ \Phi'(t) = A\Phi(t) \quad \text{and} \quad \det \Phi(t) \neq 0 \ \text{for all } t. \]

If furthermore \(\Phi(0) = I\), then \(\Phi(t)\) is called the principal fundamental matrix or state transition matrix.
Theorem 26.1 (Solution space structure)
The solution set of the homogeneous equation \(\mathbf{x}' = A\mathbf{x}\) forms an \(n\)-dimensional linear space over \(\mathbb{R}\). If \(\Phi(t)\) is any fundamental matrix, the general solution is

\[ \mathbf{x}(t) = \Phi(t)\mathbf{c}, \quad \mathbf{c} \in \mathbb{R}^n \ \text{arbitrary}. \]
The unique solution satisfying the initial condition \(\mathbf{x}(0) = \mathbf{x}_0\) is \(\mathbf{x}(t) = \Phi(t)\Phi(0)^{-1}\mathbf{x}_0\).
Proof
Linearity: If \(\mathbf{x}_1(t)\) and \(\mathbf{x}_2(t)\) are solutions, then \(\alpha \mathbf{x}_1(t) + \beta \mathbf{x}_2(t)\) is also a solution, since

\[ (\alpha \mathbf{x}_1 + \beta \mathbf{x}_2)' = \alpha A\mathbf{x}_1 + \beta A\mathbf{x}_2 = A(\alpha \mathbf{x}_1 + \beta \mathbf{x}_2). \]
Dimension is \(n\): The ODE existence and uniqueness theorem guarantees a unique solution for each initial value \(\mathbf{x}_0 \in \mathbb{R}^n\). The map from initial values to solutions \(\mathbf{x}_0 \mapsto \mathbf{x}(t)\) is a linear bijection, so the solution space is isomorphic to \(\mathbb{R}^n\) and has dimension \(n\).
The \(n\) columns of the fundamental matrix \(\Phi(t)\) are linearly independent (since \(\det \Phi(t) \neq 0\)), forming a basis for the solution space. \(\blacksquare\)
Example 26.1
Solution of a two-dimensional linear system. Consider

\[ \mathbf{x}' = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix} \mathbf{x}. \]

The characteristic equation \(\lambda^2 + 3\lambda + 2 = 0\) gives \(\lambda_1 = -1\), \(\lambda_2 = -2\). The corresponding eigenvectors are \(\mathbf{v}_1 = (1, -1)^T\), \(\mathbf{v}_2 = (1, -2)^T\). The general solution is

\[ \mathbf{x}(t) = c_1 e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} + c_2 e^{-2t} \begin{pmatrix} 1 \\ -2 \end{pmatrix}. \]
The fundamental matrix is \(\Phi(t) = \begin{pmatrix} e^{-t} & e^{-2t} \\ -e^{-t} & -2e^{-2t} \end{pmatrix}\), satisfying \(\Phi'(t) = A\Phi(t)\).
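As a numerical sanity check (a minimal sketch, assuming NumPy is available), we can verify that this \(\Phi(t)\) satisfies \(\Phi'(t) = A\Phi(t)\) by finite differences:

```python
import numpy as np

# Coefficient matrix of Example 26.1
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Eigenvalues should be -1 and -2
print(np.sort(np.linalg.eigvals(A)))   # [-2. -1.]

# Fundamental matrix built from the eigen-solutions
def Phi(t):
    return np.array([[np.exp(-t),       np.exp(-2*t)],
                     [-np.exp(-t), -2.0*np.exp(-2*t)]])

# Check Phi'(t) = A Phi(t) via central differences at t = 0.7
t, h = 0.7, 1e-6
dPhi = (Phi(t + h) - Phi(t - h)) / (2*h)
print(np.allclose(dPhi, A @ Phi(t), atol=1e-5))    # True
```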
Theorem 26.2 (Liouville's formula)
Let \(\Phi(t)\) be a fundamental matrix of \(\mathbf{x}' = A(t)\mathbf{x}\) (where \(A(t)\) may be time-varying). Then

\[ \det \Phi(t) = \det \Phi(0) \cdot \exp\left( \int_0^t \operatorname{tr} A(s)\, ds \right). \]
In particular, for constant-coefficient systems \(A(t) \equiv A\), we have \(\det \Phi(t) = \det \Phi(0) \cdot e^{t\, \operatorname{tr}(A)}\).
Proof
Let \(W(t) = \det \Phi(t)\) (the Wronskian). Differentiating the determinant row by row,

\[ W'(t) = \sum_{i=1}^{n} \det \Phi_i(t), \]
where \(\Phi_i(t)\) is obtained by replacing the \(i\)-th row of \(\Phi(t)\) with its derivative. Since \(\Phi' = A\Phi\), the derivative of the \(i\)-th row equals the product of the \(i\)-th row of \(A\) with \(\Phi\). After expansion, only the diagonal entries \(a_{ii}\) contribute to the determinant, yielding \(W'(t) = \operatorname{tr}(A(t)) W(t)\). Solving this scalar ODE gives the result. \(\blacksquare\)
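The constant-coefficient case is easy to check numerically (a sketch assuming NumPy and SciPy): for \(\Phi(t) = e^{At}\), the determinant should equal \(e^{t \operatorname{tr}(A)}\).

```python
import numpy as np
from scipy.linalg import expm

# Liouville's formula: det(e^{At}) = e^{t tr(A)} for any square A
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
t = 0.9
print(np.isclose(np.linalg.det(expm(A * t)),
                 np.exp(t * np.trace(A))))   # True
```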
## 26.2 Matrix Exponential and Solution Representation
Core formula: \(e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!}\) → when diagonalizable, \(e^{At} = P\, \text{diag}(e^{\lambda_i t})\, P^{-1}\) → for Jordan blocks, terms involve \(t^k e^{\lambda t}\)
Link: The most important special case of Ch13 matrix function theory
The matrix exponential is the central tool in linear ODE theory, perfectly unifying matrix algebra and differential equations.
Definition 26.3 (Matrix exponential)
For \(A \in \mathbb{R}^{n \times n}\), the matrix exponential is defined as

\[ e^{A} = \sum_{k=0}^{\infty} \frac{A^k}{k!} = I + A + \frac{A^2}{2!} + \cdots. \]
This series converges absolutely for all \(A\).
Theorem 26.3 (Matrix exponential and ODEs)
The unique solution of \(\mathbf{x}' = A\mathbf{x}\), \(\mathbf{x}(0) = \mathbf{x}_0\) is

\[ \mathbf{x}(t) = e^{At}\mathbf{x}_0. \]
\(e^{At}\) is the principal fundamental matrix: \(\frac{d}{dt}e^{At} = Ae^{At}\), \(e^{A \cdot 0} = I\).
Proof
Differentiating term by term:

\[ \frac{d}{dt} e^{At} = \sum_{k=1}^{\infty} \frac{k t^{k-1} A^k}{k!} = A \sum_{k=1}^{\infty} \frac{(At)^{k-1}}{(k-1)!} = A e^{At}. \]
Therefore \(\mathbf{x}(t) = e^{At}\mathbf{x}_0\) satisfies \(\mathbf{x}' = Ae^{At}\mathbf{x}_0 = A\mathbf{x}(t)\) and \(\mathbf{x}(0) = I\mathbf{x}_0 = \mathbf{x}_0\). By the uniqueness theorem, this is the unique solution. \(\blacksquare\)
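The theorem can also be verified numerically (a sketch assuming SciPy): integrating \(\mathbf{x}' = A\mathbf{x}\) with a general-purpose ODE solver should reproduce \(e^{At}\mathbf{x}_0\).

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # matrix of Example 26.1
x0 = np.array([1.0, 0.0])

# Integrate x' = A x numerically up to t = 2 ...
sol = solve_ivp(lambda t, x: A @ x, (0.0, 2.0), x0,
                rtol=1e-10, atol=1e-12)

# ... and compare with the closed form x(t) = e^{At} x0
print(np.allclose(sol.y[:, -1], expm(2.0 * A) @ x0, atol=1e-7))  # True
```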
Theorem 26.4 (Computing the matrix exponential)
Let \(A = PJP^{-1}\) be the Jordan decomposition, \(J = \operatorname{diag}(J_1, \ldots, J_s)\), where each \(J_i\) is a Jordan block. Then

\[ e^{At} = P e^{Jt} P^{-1}, \quad e^{Jt} = \operatorname{diag}(e^{J_1 t}, \ldots, e^{J_s t}). \]

For a \(k \times k\) Jordan block \(J_i = \lambda_i I + N\) (\(N\) nilpotent),

\[ e^{J_i t} = e^{\lambda_i t} \begin{pmatrix} 1 & t & \frac{t^2}{2!} & \cdots & \frac{t^{k-1}}{(k-1)!} \\ & 1 & t & \ddots & \vdots \\ & & \ddots & \ddots & \frac{t^2}{2!} \\ & & & 1 & t \\ & & & & 1 \end{pmatrix}. \]
Proof
From \(A = PJP^{-1}\) we get \(A^k = PJ^k P^{-1}\), hence

\[ e^{At} = \sum_{k=0}^{\infty} \frac{(PJP^{-1})^k t^k}{k!} = P \left( \sum_{k=0}^{\infty} \frac{J^k t^k}{k!} \right) P^{-1} = P e^{Jt} P^{-1}. \]
Since \(J\) is block diagonal, \(e^{Jt} = \operatorname{diag}(e^{J_1 t}, \ldots, e^{J_s t})\). For a Jordan block \(J_i = \lambda_i I + N\), since \(\lambda_i I\) and \(N\) commute, \(e^{J_i t} = e^{\lambda_i t} e^{Nt}\), where \(N^k = 0\) (for \(k \ge\) block size), so \(e^{Nt}\) is a finite sum, yielding the upper triangular matrix form. \(\blacksquare\)
Example 26.2
Matrix exponential of a system with repeated eigenvalues. Let

\[ A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}. \]

\(A\) has a single eigenvalue \(\lambda = 2\) with Jordan block \(J = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}\) (\(P = I\)), so

\[ e^{At} = e^{2t} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}. \]

For initial value \(\mathbf{x}_0 = (1, 3)^T\), the solution is

\[ \mathbf{x}(t) = e^{2t} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \end{pmatrix} = e^{2t} \begin{pmatrix} 1 + 3t \\ 3 \end{pmatrix}. \]
The presence of the \(te^{2t}\) term is the hallmark behavior of Jordan blocks (non-diagonalizable matrices).
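The closed form above can be compared against a general-purpose matrix exponential routine (a sketch assuming SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0], [0.0, 2.0]])   # the Jordan block of Example 26.2
t = 0.5

closed_form = np.exp(2*t) * np.array([[1.0, t],
                                      [0.0, 1.0]])
print(np.allclose(expm(A * t), closed_form))   # True

# The solution for x0 = (1, 3)^T shows the t e^{2t} growth
x0 = np.array([1.0, 3.0])
print(expm(A * t) @ x0)   # equals e^{2t} * (1 + 3t, 3)
```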
Definition 26.4 (Fundamental properties of the matrix exponential)
The matrix exponential satisfies the following properties:
- \(e^{O} = I\) (\(O\) is the zero matrix).
- \((e^{A})^{-1} = e^{-A}\).
- \(e^{(s+t)A} = e^{sA}e^{tA}\) (semigroup property).
- If \(AB = BA\), then \(e^{A+B} = e^{A}e^{B}\).
- \(\det(e^{A}) = e^{\operatorname{tr}(A)}\).
- In general (when \(AB \neq BA\)), \(e^{A+B} \neq e^{A}e^{B}\).
Example 26.3
Non-commutativity of the matrix exponential. Let \(A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\), \(B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\).
\(e^A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\), \(e^B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}\), \(e^A e^B = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}\).
But \(A + B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\) has eigenvalues \(\pm 1\), so

\[ e^{A+B} = \begin{pmatrix} \cosh 1 & \sinh 1 \\ \sinh 1 & \cosh 1 \end{pmatrix} \neq e^A e^B. \]
Since \(AB \neq BA\), the Baker-Campbell-Hausdorff formula introduces nontrivial correction terms.
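This failure of commutativity is easy to reproduce numerically (a sketch assuming SciPy):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

print(expm(A) @ expm(B))   # [[2, 1], [1, 1]]
print(expm(A + B))         # [[cosh 1, sinh 1], [sinh 1, cosh 1]]
print(np.allclose(expm(A) @ expm(B), expm(A + B)))   # False, since AB != BA
```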
## 26.3 Stability Analysis
Criterion chain: All \(\operatorname{Re}(\lambda_i) < 0\) \(\iff\) asymptotically stable → Lyapunov equation \(A^T P + PA = -Q\) has positive definite solution \(\iff\) Hurwitz matrix
Applications: Convergence of control systems, stability of filters in signal processing
The stability of linear systems is entirely determined by the eigenvalues of the coefficient matrix — one of the most profound applications of linear algebra in dynamical systems.
Definition 26.5 (Stability classification)
For the system \(\mathbf{x}' = A\mathbf{x}\), the equilibrium \(\mathbf{x} = \mathbf{0}\) is called:
- Asymptotically stable: if \(\lim_{t \to \infty} \mathbf{x}(t) = \mathbf{0}\) for all initial values \(\mathbf{x}_0\);
- Stable (Lyapunov stable): if for every \(\varepsilon > 0\), there exists \(\delta > 0\) such that \(\|\mathbf{x}_0\| < \delta\) implies \(\|\mathbf{x}(t)\| < \varepsilon\) for all \(t \ge 0\);
- Unstable: if it is not stable.
Theorem 26.5 (Eigenvalue stability criterion)
For the constant-coefficient system \(\mathbf{x}' = A\mathbf{x}\):
- Asymptotically stable \(\iff\) all eigenvalues of \(A\) satisfy \(\operatorname{Re}(\lambda_i) < 0\) (\(A\) is a Hurwitz matrix).
- Stable \(\iff\) all eigenvalues satisfy \(\operatorname{Re}(\lambda_i) \le 0\), and eigenvalues with \(\operatorname{Re}(\lambda_i) = 0\) have Jordan blocks of size \(1\).
- Unstable \(\iff\) there exists \(\operatorname{Re}(\lambda_i) > 0\), or there exists \(\operatorname{Re}(\lambda_i) = 0\) with Jordan block size \(> 1\).
Proof
From \(\mathbf{x}(t) = e^{At}\mathbf{x}_0\) and the Jordan decomposition \(e^{At} = Pe^{Jt}P^{-1}\), each Jordan block contributes terms of the form \(t^j e^{\lambda_i t}\) (\(0 \le j <\) block size).
- If \(\operatorname{Re}(\lambda_i) < 0\), then \(|t^j e^{\lambda_i t}| = t^j e^{\operatorname{Re}(\lambda_i) t} \to 0\) (\(t \to \infty\)), regardless of \(j\).
- If \(\operatorname{Re}(\lambda_i) = 0\) and \(j = 0\), then \(|e^{\lambda_i t}| = 1\), bounded but not tending to zero.
- If \(\operatorname{Re}(\lambda_i) = 0\) and \(j \ge 1\), then \(|t^j e^{\lambda_i t}| = t^j \to \infty\).
- If \(\operatorname{Re}(\lambda_i) > 0\), then \(|e^{\lambda_i t}| \to \infty\). \(\blacksquare\)
Theorem 26.6 (Lyapunov stability theorem)
\(A\) is a Hurwitz matrix if and only if for any positive definite matrix \(Q \succ 0\), the Lyapunov equation

\[ A^T P + PA = -Q \]
has a unique positive definite solution \(P \succ 0\). In this case, \(V(\mathbf{x}) = \mathbf{x}^T P \mathbf{x}\) is a Lyapunov function for the system.
Proof
Sufficiency: If \(P \succ 0\) satisfies \(A^T P + PA = -Q \prec 0\), define \(V(\mathbf{x}) = \mathbf{x}^T P \mathbf{x} > 0\). Differentiating along trajectories:

\[ \dot{V}(\mathbf{x}) = \dot{\mathbf{x}}^T P \mathbf{x} + \mathbf{x}^T P \dot{\mathbf{x}} = \mathbf{x}^T (A^T P + PA) \mathbf{x} = -\mathbf{x}^T Q \mathbf{x} < 0 \quad (\mathbf{x} \neq \mathbf{0}), \]

so \(V\) strictly decreases along trajectories, \(\mathbf{x}(t) \to \mathbf{0}\), and \(A\) is Hurwitz.
Necessity: If \(A\) is Hurwitz, define \(P = \int_0^{\infty} e^{A^T t} Q e^{At}\, dt\) (convergent since \(\operatorname{Re}(\lambda_i) < 0\)). Direct verification shows \(A^T P + PA = -Q\) and \(P \succ 0\). Uniqueness follows from the linearity of the Lyapunov equation and the Hurwitz condition. \(\blacksquare\)
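Numerically, this construction is available off the shelf (a sketch using SciPy's `solve_continuous_lyapunov`; note its convention solves \(aX + Xa^T = q\), so we pass \(a = A^T\) and \(q = -Q\)):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A Hurwitz matrix (eigenvalues -1 and -2)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)

# Solve A^T P + P A = -Q
P = solve_continuous_lyapunov(A.T, -Q)
print(np.allclose(A.T @ P + P @ A, -Q))    # True
print(np.all(np.linalg.eigvalsh(P) > 0))   # True: P is positive definite
```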
Example 26.4
Stability analysis of a circuit system. A series RLC circuit with capacitor charge \(q\) and current \(i = q'\) has the state equation

\[ \frac{d}{dt} \begin{pmatrix} q \\ i \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\frac{1}{LC} & -\frac{R}{L} \end{pmatrix} \begin{pmatrix} q \\ i \end{pmatrix}. \]

The characteristic equation is \(\lambda^2 + \frac{R}{L}\lambda + \frac{1}{LC} = 0\), with eigenvalues

\[ \lambda_{1,2} = -\frac{R}{2L} \pm \sqrt{\frac{R^2}{4L^2} - \frac{1}{LC}}. \]
Since \(R, L, C > 0\), we have \(\operatorname{Re}(\lambda_{1,2}) < 0\), so the system is asymptotically stable. Physical meaning: the resistor dissipates energy, and current and voltage eventually decay to zero. When \(R^2 > 4L/C\), the system is overdamped (two real negative eigenvalues); when \(R^2 < 4L/C\), it is underdamped (complex conjugate eigenvalues, decaying oscillation); when \(R^2 = 4L/C\), it is critically damped.
## 26.4 Phase Plane Analysis
Classification: Complete classification of \(2 \times 2\) systems depends on \(\operatorname{tr}(A)\) and \(\det(A)\) → the sign and real/imaginary nature of eigenvalues determine nodes/saddles/spirals/centers
Geometry: The qualitative shape of trajectories is completely determined by eigenvalues and eigenvectors
Phase plane analysis of two-dimensional linear systems \(\mathbf{x}' = A\mathbf{x}\) serves as the starting point for understanding higher-dimensional systems.
Definition 26.6 (Classification of critical points)
For the two-dimensional system \(\mathbf{x}' = A\mathbf{x}\) (\(A \in \mathbb{R}^{2 \times 2}\), \(\det A \neq 0\)), let the eigenvalues be \(\lambda_1, \lambda_2\). The type of the critical point at the origin is:
| Eigenvalue condition | Type |
|---|---|
| \(\lambda_1, \lambda_2 \in \mathbb{R}\), same sign | Node |
| \(\lambda_1, \lambda_2 \in \mathbb{R}\), opposite signs | Saddle point |
| \(\lambda_{1,2} = \alpha \pm \beta i\), \(\alpha \neq 0\) | Spiral/focus |
| \(\lambda_{1,2} = \pm \beta i\) (purely imaginary) | Center |
| \(\lambda_1 = \lambda_2 \in \mathbb{R}\), non-diagonalizable | Degenerate/improper node |
Theorem 26.7 (Trace-determinant classification)
Let \(\tau = \operatorname{tr}(A)\), \(\delta = \det(A)\), \(\Delta = \tau^2 - 4\delta\). Then:
- \(\delta < 0\): Saddle point (unstable).
- \(\delta > 0\), \(\Delta > 0\), \(\tau < 0\): Stable node.
- \(\delta > 0\), \(\Delta > 0\), \(\tau > 0\): Unstable node.
- \(\delta > 0\), \(\Delta < 0\), \(\tau < 0\): Stable spiral.
- \(\delta > 0\), \(\Delta < 0\), \(\tau > 0\): Unstable spiral.
- \(\delta > 0\), \(\tau = 0\): Center.
Proof
The eigenvalues are \(\lambda_{1,2} = \frac{\tau \pm \sqrt{\Delta}}{2}\), with \(\lambda_1 \lambda_2 = \delta\) and \(\lambda_1 + \lambda_2 = \tau\).
- \(\delta < 0\) means \(\lambda_1, \lambda_2\) have opposite signs.
- \(\delta > 0\), \(\Delta > 0\): two real roots of the same sign, with the sign of \(\tau\) determining stability.
- \(\delta > 0\), \(\Delta < 0\): complex conjugate roots \(\alpha \pm \beta i\) (\(\alpha = \tau/2\)), with the sign of \(\alpha\) determining stability.
- \(\tau = 0\) implies \(\alpha = 0\), giving purely imaginary eigenvalues. \(\blacksquare\)
Example 26.5
Phase plane of a spring-damper system. The spring-mass system \(mx'' + cx' + kx = 0\) becomes

\[ \begin{pmatrix} x \\ x' \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ -k/m & -c/m \end{pmatrix} \begin{pmatrix} x \\ x' \end{pmatrix}. \]
Here \(\tau = -c/m < 0\), \(\delta = k/m > 0\), \(\Delta = c^2/m^2 - 4k/m\).
- Underdamped (\(c^2 < 4mk\)): \(\Delta < 0\), stable spiral. Trajectories spiral toward the origin.
- Overdamped (\(c^2 > 4mk\)): \(\Delta > 0\), stable node. Trajectories approach the origin monotonically along eigenvector directions.
- Undamped (\(c = 0\)): \(\tau = 0\), center. Trajectories are ellipses centered at the origin, corresponding to energy conservation.
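Theorem 26.7 translates directly into code; the sketch below (assuming NumPy) classifies the three damping regimes of Example 26.5 with \(m = k = 1\):

```python
import numpy as np

def classify(A, tol=1e-12):
    """Trace-determinant classification of x' = Ax for 2x2 A (Theorem 26.7)."""
    tau, delta = np.trace(A), np.linalg.det(A)
    disc = tau**2 - 4*delta
    if delta < -tol:
        return "saddle (unstable)"
    if abs(tau) <= tol:
        return "center"
    kind = "node" if disc > tol else ("spiral" if disc < -tol else "degenerate node")
    return ("stable " if tau < 0 else "unstable ") + kind

# Spring-mass system x'' + c x' + x = 0 (m = k = 1), varying the damping c
for c, regime in [(0.5, "underdamped"), (3.0, "overdamped"), (0.0, "undamped")]:
    A = np.array([[0.0, 1.0], [-1.0, -c]])
    print(f"{regime}: {classify(A)}")
# underdamped: stable spiral / overdamped: stable node / undamped: center
```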
## 26.5 Higher-Order Linear ODEs and the Companion Matrix
Reduction: \(n\)-th order linear ODE → first-order \(n \times n\) system → the companion matrix \(C\)'s characteristic polynomial = the original equation's characteristic polynomial
Link: The characteristic polynomial theory of Ch6 is directly applied here
Higher-order linear ODEs can be converted to first-order systems by introducing state variables, with the corresponding coefficient matrix having a special companion matrix structure.
Definition 26.7 (Higher-order linear ODE)
An \(n\)-th order linear constant-coefficient ODE is

\[ y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0. \]
Definition 26.8 (Companion matrix)
The companion matrix corresponding to the above equation is

\[ C = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{pmatrix}. \]
The characteristic polynomial of \(C\) is exactly \(\lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0\).
Theorem 26.8 (Equivalence of higher-order ODEs and first-order systems)
Let \(x_1 = y\), \(x_2 = y'\), \(\ldots\), \(x_n = y^{(n-1)}\). Then the higher-order ODE is equivalent to the first-order system

\[ \mathbf{x}' = C \mathbf{x}, \]
where \(\mathbf{x} = (x_1, \ldots, x_n)^T\) and \(C\) is the companion matrix. The eigenvalues of \(C\) are exactly the characteristic roots of the original equation, and the Jordan structure of \(C\) determines the form of the solution.
Proof
By definition, \(x_i' = x_{i+1}\) (\(1 \le i \le n-1\)), and

\[ x_n' = y^{(n)} = -a_0 x_1 - a_1 x_2 - \cdots - a_{n-1} x_n. \]

This is precisely the form \(\mathbf{x}' = C\mathbf{x}\). For the characteristic polynomial of \(C\): expanding \(\det(\lambda I - C)\) along the last row and using the superdiagonal of \(1\)'s, induction on \(n\) yields \(\lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0\). \(\blacksquare\)
Example 26.6
A fourth-order equation as a system. The equation \(y^{(4)} + 2y''' + 3y'' + 2y' + y = 0\) corresponds to the companion matrix

\[ C = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & -2 & -3 & -2 \end{pmatrix}. \]
The characteristic polynomial is \(\lambda^4 + 2\lambda^3 + 3\lambda^2 + 2\lambda + 1 = (\lambda^2 + \lambda + 1)^2\). The eigenvalues are \(\lambda = \frac{-1 \pm i\sqrt{3}}{2}\) (each with multiplicity 2). The solutions include \(e^{-t/2}\cos(\frac{\sqrt{3}}{2}t)\), \(e^{-t/2}\sin(\frac{\sqrt{3}}{2}t)\), and their products with \(t\).
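The agreement between the eigenvalues of \(C\) and the roots of the characteristic polynomial can be verified directly (a sketch assuming NumPy):

```python
import numpy as np

# Companion matrix of y'''' + 2y''' + 3y'' + 2y' + y = 0
a = np.array([1.0, 2.0, 3.0, 2.0])   # a_0, a_1, a_2, a_3
n = len(a)
C = np.zeros((n, n))
C[:-1, 1:] = np.eye(n - 1)           # superdiagonal of 1's
C[-1, :] = -a                        # last row: -a_0, ..., -a_{n-1}

# Eigenvalues of C vs. roots of lambda^4 + 2l^3 + 3l^2 + 2l + 1
print(np.sort_complex(np.linalg.eigvals(C)))
print(np.sort_complex(np.roots([1, 2, 3, 2, 1])))
# Both give (-1 +/- i*sqrt(3))/2, each with multiplicity 2
```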
## 26.6 Nonhomogeneous Equations and Variation of Parameters
General solution of nonhomogeneous = general homogeneous solution + particular solution → particular solution given by convolution integral with \(e^{At}\) (variation of parameters) → essentially a Green's function / impulse response
Link: A direct manifestation of Ch4 quotient spaces and affine subspaces
The solution of nonhomogeneous linear systems \(\mathbf{x}' = A\mathbf{x} + \mathbf{f}(t)\) relies on the matrix exponential and variation of parameters.
Definition 26.9 (Nonhomogeneous linear ODE system)
A nonhomogeneous linear ODE system is

\[ \mathbf{x}' = A\mathbf{x} + \mathbf{f}(t), \]
where \(\mathbf{f}(t) \in \mathbb{R}^n\) is a given continuous forcing term.
Theorem 26.9 (Variation of parameters formula)
The solution of the nonhomogeneous system \(\mathbf{x}' = A\mathbf{x} + \mathbf{f}(t)\), \(\mathbf{x}(0) = \mathbf{x}_0\) is

\[ \mathbf{x}(t) = e^{At}\mathbf{x}_0 + \int_0^t e^{A(t-s)} \mathbf{f}(s)\, ds. \]
The first term is the homogeneous solution (free response), and the second is a particular solution (forced response) — the convolution of the matrix exponential with the forcing term.
Proof
Set \(\mathbf{x}(t) = e^{At}\mathbf{c}(t)\) (the core Ansatz of variation of parameters). Substituting into the equation:

\[ Ae^{At}\mathbf{c}(t) + e^{At}\mathbf{c}'(t) = Ae^{At}\mathbf{c}(t) + \mathbf{f}(t). \]

Canceling \(Ae^{At}\mathbf{c}(t)\) yields \(e^{At}\mathbf{c}'(t) = \mathbf{f}(t)\), i.e., \(\mathbf{c}'(t) = e^{-At}\mathbf{f}(t)\). Integrating:

\[ \mathbf{c}(t) = \mathbf{x}_0 + \int_0^t e^{-As} \mathbf{f}(s)\, ds. \]
Therefore \(\mathbf{x}(t) = e^{At}\mathbf{x}_0 + \int_0^t e^{A(t-s)}\mathbf{f}(s)\, ds\). \(\blacksquare\)
Example 26.7
A forced oscillation system. Consider the system

\[ \mathbf{x}' = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \mathbf{x} + \begin{pmatrix} 0 \\ \cos t \end{pmatrix}. \]
The homogeneous part has eigenvalues \(\pm i\) (center), with \(e^{At} = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}\).
The forced response is

\[ \mathbf{x}_p(t) = \int_0^t \begin{pmatrix} \cos(t-s) & \sin(t-s) \\ -\sin(t-s) & \cos(t-s) \end{pmatrix} \begin{pmatrix} 0 \\ \cos s \end{pmatrix} ds = \int_0^t \begin{pmatrix} \sin(t-s)\cos s \\ \cos(t-s)\cos s \end{pmatrix} ds. \]
Computing the first component: \(\int_0^t \sin(t-s)\cos s\, ds = \frac{t}{2}\sin t\) (resonance). Since the driving frequency matches the natural frequency, the amplitude grows linearly — this is the linear algebra explanation of the resonance phenomenon.
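The linear growth of the resonant amplitude is easy to observe numerically (a sketch assuming SciPy; with zero initial condition the response is purely forced):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forced oscillator of Example 26.7 with x(0) = 0 (purely forced response)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
rhs = lambda t, x: A @ x + np.array([0.0, np.cos(t)])

t_eval = np.linspace(0.0, 40.0, 401)
sol = solve_ivp(rhs, (0.0, 40.0), [0.0, 0.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-11)

# First component should match the resonant solution (t/2) sin t
print(np.allclose(sol.y[0], 0.5 * t_eval * np.sin(t_eval), atol=1e-4))  # True
```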
## 26.7 Eigenvalue Problems in Linear PDEs
Separation of variables → spatial part becomes eigenvalue problem \(\mathcal{L}u = \lambda u\) → Sturm-Liouville theory guarantees real eigenvalues, orthogonal eigenfunctions → spectral theorem on infinite-dimensional inner product spaces
Link: An infinite-dimensional generalization of the Ch8 spectral theorem for inner product spaces
The method of separation of variables in partial differential equations converts PDEs into ODE eigenvalue problems. Sturm-Liouville theory is the infinite-dimensional generalization of the finite-dimensional spectral theorem.
Definition 26.10 (Sturm-Liouville problem)
The Sturm-Liouville problem asks for \(\lambda\) and nonzero functions \(y(x)\) (\(x \in [a, b]\)) such that

\[ -\frac{d}{dx}\left( p(x) \frac{dy}{dx} \right) + q(x) y = \lambda w(x) y, \]

subject to boundary conditions (e.g., \(y(a) = y(b) = 0\)), where \(p(x) > 0\) and the weight function \(w(x) > 0\).
Theorem 26.10 (Sturm-Liouville spectral theorem)
Under appropriate regularity conditions, the Sturm-Liouville problem has the following properties:
- There exist countably infinitely many real eigenvalues \(\lambda_1 < \lambda_2 < \cdots\), with \(\lambda_n \to \infty\).
- The corresponding eigenfunctions \(\{y_n(x)\}\) are orthogonal with respect to the weight function \(w(x)\):
  \[ \int_a^b y_m(x) y_n(x) w(x)\, dx = 0, \quad m \neq n. \]
- \(\{y_n\}\) forms a complete orthogonal basis for \(L^2_w([a, b])\) (analogous to the orthogonal eigenbasis of a finite-dimensional symmetric matrix).
Proof
Self-adjointness: Define the operator \(\mathcal{L}y = \frac{1}{w}[-(py')' + qy]\). Through integration by parts and the boundary conditions, one can verify that \(\langle \mathcal{L}y, z \rangle_w = \langle y, \mathcal{L}z \rangle_w\), i.e., \(\mathcal{L}\) is self-adjoint with respect to the \(L^2_w\) inner product.
Orthogonality: If \(\mathcal{L}y_m = \lambda_m y_m\) and \(\mathcal{L}y_n = \lambda_n y_n\) (\(\lambda_m \neq \lambda_n\)), then

\[ \lambda_m \langle y_m, y_n \rangle_w = \langle \mathcal{L}y_m, y_n \rangle_w = \langle y_m, \mathcal{L}y_n \rangle_w = \lambda_n \langle y_m, y_n \rangle_w. \]
Therefore \((\lambda_m - \lambda_n)\langle y_m, y_n \rangle_w = 0\), i.e., \(\langle y_m, y_n \rangle_w = 0\). This proof is exactly parallel to showing that eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal. \(\blacksquare\)
Example 26.8
Separation of variables for the heat equation. Consider the one-dimensional heat equation

\[ u_t = u_{xx}, \quad 0 < x < \pi, \quad u(0, t) = u(\pi, t) = 0. \]
Setting \(u(x, t) = X(x)T(t)\), separation of variables gives \(T'/T = X''/X = -\lambda\).
Spatial problem: \(X'' = -\lambda X\), \(X(0) = X(\pi) = 0\). This is a Sturm-Liouville problem with \(p = w = 1\), \(q = 0\). Eigenvalues \(\lambda_n = n^2\), eigenfunctions \(X_n = \sin(nx)\) (\(n = 1, 2, 3, \ldots\)).
Temporal problem: \(T_n' = -n^2 T_n\), solution \(T_n = e^{-n^2 t}\).
Complete solution:

\[ u(x, t) = \sum_{n=1}^{\infty} b_n e^{-n^2 t} \sin(nx), \]

where the coefficients \(b_n\) are determined by the Fourier sine expansion of the initial data \(u(x, 0)\).
Each mode \(n\) decays at rate \(n^2\) — higher-frequency components decay faster, explaining the smoothing effect of heat conduction. The eigenvalues \(n^2\) completely determine the time scales of diffusion.
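The analogy with the finite-dimensional spectral theorem can be made concrete by discretizing \(-d^2/dx^2\) (a sketch assuming NumPy): the second-difference matrix is symmetric, and its lowest eigenvalues approximate \(n^2\).

```python
import numpy as np

# Discretize -X'' = lambda X on (0, pi) with X(0) = X(pi) = 0:
# the tridiagonal second-difference matrix is the finite-dimensional
# symmetric analogue of the Sturm-Liouville operator.
N = 500
h = np.pi / (N + 1)
L = (np.diag(2.0 * np.ones(N))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

# Lowest eigenvalues approximate n^2 = 1, 4, 9, 16, 25
print(np.round(np.sort(np.linalg.eigvalsh(L))[:5], 3))
```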
## 26.8 Floquet Theory
Problem: \(\mathbf{x}' = A(t)\mathbf{x}\), \(A(t+T) = A(t)\) → solutions are no longer \(e^{At}\) → Floquet's theorem: \(\Phi(t) = P(t)e^{Bt}\) (periodic part \(\times\) exponential part) → stability determined by eigenvalues of \(B\)
Applications: Parametric resonance (Mathieu equation), Bloch's theorem in crystal lattices
When the coefficient matrix has periodicity \(A(t+T) = A(t)\), Floquet theory reduces the periodic system to a constant-coefficient problem.
Definition 26.11 (Monodromy matrix)
For the periodic system \(\mathbf{x}' = A(t)\mathbf{x}\) (\(A(t+T) = A(t)\)), let \(\Phi(t)\) be the fundamental matrix satisfying \(\Phi(0) = I\). The monodromy matrix is defined as

\[ M = \Phi(T). \]
The eigenvalues of \(M\) are called Floquet multipliers.
Definition 26.12 (Floquet exponents)
If \(\rho\) is a Floquet multiplier, then \(\mu = \frac{1}{T}\ln \rho\) (with appropriate branch) is called a Floquet exponent. These are the "effective eigenvalues" of the equivalent constant-coefficient system.
Theorem 26.11 (Floquet's theorem)
Let \(A(t)\) be a continuous \(T\)-periodic matrix. The fundamental matrix \(\Phi(t)\) (\(\Phi(0) = I\)) can be written as

\[ \Phi(t) = P(t) e^{Bt}, \]
where \(P(t+T) = P(t)\) is a \(T\)-periodic nonsingular matrix and \(B\) is a constant matrix satisfying \(e^{BT} = M = \Phi(T)\).
Proof
Key observation: \(\Psi(t) = \Phi(t+T)\) is also a fundamental matrix, since

\[ \Psi'(t) = \Phi'(t+T) = A(t+T)\Phi(t+T) = A(t)\Psi(t). \]
By uniqueness of fundamental matrices (up to right multiplication by a constant matrix), \(\Phi(t+T) = \Phi(t)M\).
Choose \(B\) such that \(e^{BT} = M\) (a possibly complex \(B\) always exists, since \(M\) is invertible and thus has a matrix logarithm). Define \(P(t) = \Phi(t)e^{-Bt}\). Then

\[ P(t+T) = \Phi(t+T) e^{-B(t+T)} = \Phi(t) M e^{-BT} e^{-Bt} = \Phi(t) e^{-Bt} = P(t). \]
Therefore \(P(t)\) is \(T\)-periodic. \(\blacksquare\)
Theorem 26.12 (Floquet stability criterion)
The periodic system \(\mathbf{x}' = A(t)\mathbf{x}\) is asymptotically stable at the equilibrium if and only if all Floquet multipliers satisfy \(|\rho_i| < 1\) (equivalently, all Floquet exponents satisfy \(\operatorname{Re}(\mu_i) < 0\)).
Proof
From \(\Phi(t+T) = \Phi(t)M\) and \(\Phi(0) = I\) we get \(\Phi(nT) = M^n\), and \(M^n \to 0\) if and only if all eigenvalues (Floquet multipliers) of \(M\) have modulus less than \(1\). Since \(P(t)\) is continuous and \(T\)-periodic, it is bounded together with its inverse, so \(\|\Phi(t)\| \to 0\) as \(t \to \infty\) if and only if \(|\rho_i| < 1\) for all \(i\). \(\blacksquare\)
Example 26.9
The Mathieu equation and parametric resonance. The Mathieu equation

\[ y'' + (\delta + \varepsilon \cos 2t)\, y = 0 \]
becomes the system \(\mathbf{x}' = A(t)\mathbf{x}\) with \(A(t) = \begin{pmatrix} 0 & 1 \\ -\delta - \varepsilon\cos 2t & 0 \end{pmatrix}\), period \(T = \pi\).
In the parameter plane \((\delta, \varepsilon)\), there exist stability regions (\(|\rho_i| \le 1\)) and instability regions/resonance tongues (\(|\rho_i| > 1\)). When \(\varepsilon = 0\), the instability regions degenerate to points \(\delta = n^2\) (\(n = 1, 2, \ldots\)). When \(\varepsilon > 0\), the instability regions expand into "tongue"-shaped domains.
Physical application: An inverted pendulum can be stabilized under appropriate frequency and amplitude of vertical oscillation (the Kapitza pendulum) — Floquet analysis gives the precise stability conditions.
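Computing Floquet multipliers numerically is straightforward (a sketch assuming SciPy): integrate the two columns of \(\Phi(t)\) over one period to form the monodromy matrix \(M = \Phi(T)\), then take its eigenvalues.

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_multipliers(delta, eps, T=np.pi):
    """Floquet multipliers of y'' + (delta + eps*cos 2t) y = 0."""
    rhs = lambda t, x: [x[1], -(delta + eps * np.cos(2*t)) * x[0]]
    # Columns of the monodromy matrix M = Phi(T), with Phi(0) = I
    cols = [solve_ivp(rhs, (0.0, T), e, rtol=1e-10, atol=1e-12).y[:, -1]
            for e in ([1.0, 0.0], [0.0, 1.0])]
    return np.linalg.eigvals(np.column_stack(cols))

# delta = 1 lies inside the first resonance tongue for eps = 0.2: unstable
print(np.abs(floquet_multipliers(1.0, 0.2)))   # one multiplier has |rho| > 1
# delta = 3 lies between tongues: stable, both |rho| ~ 1
print(np.abs(floquet_multipliers(3.0, 0.2)))
```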
Example 26.10
Floquet theory and Bloch's theorem. In solid-state physics, the Schrödinger equation for an electron in a one-dimensional lattice

\[ -\frac{\hbar^2}{2m} \psi''(x) + V(x)\psi(x) = E\psi(x), \quad V(x+a) = V(x), \]

is a periodic-coefficient ODE (with \(x\) playing the role of time). Floquet's theorem guarantees the existence of solutions in the Bloch wave form

\[ \psi(x) = e^{ikx} u(x), \quad u(x+a) = u(x), \]
where \(k\) is the wave number (the imaginary part of the Floquet exponent). The energies \(E(k)\) for different values of \(k\) form the band structure: allowed bands correspond to energies whose Floquet multipliers have unit modulus, and band gaps to energies whose multipliers do not.