The Eigenvalue Method

Solving linear systems using linear algebra

This chapter reveals why eigenvalues and eigenvectors matter so profoundly. They are not just abstract concepts from linear algebra. They are the key to understanding how dynamical systems behave, predicting whether solutions grow or decay, and writing down explicit formulas for trajectories in phase space.

The Central Question

We have a linear system:

$$\mathbf{x}' = A\mathbf{x}$$

where $A$ is a constant $2 \times 2$ matrix. We want to find all solutions. But how?

For a scalar equation $x' = ax$, the solution is simply $x(t) = x_0 e^{at}$. The exponential function solves it because differentiation pulls down a factor of $a$: $(e^{at})' = ae^{at}$.

Can we find something similar for systems? The answer is yes, and the key lies in finding special directions in which the matrix $A$ acts like simple scalar multiplication.

Eigenvectors: The Special Directions

An eigenvector $\mathbf{v}$ of a matrix $A$ is a nonzero vector that is simply scaled when multiplied by $A$:

$$A\mathbf{v} = \lambda \mathbf{v}$$

The scalar $\lambda$ is called the eigenvalue associated with $\mathbf{v}$. In this special direction, the matrix $A$ acts like multiplication by the number $\lambda$.

Here is the crucial insight: if we can find an eigenvector of $A$, we can construct a solution to $\mathbf{x}' = A\mathbf{x}$.
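As a quick numerical check of the definition, the sketch below (assuming NumPy is available; the matrix is a hypothetical example) computes the eigenpairs of a $2 \times 2$ matrix and verifies $A\mathbf{v} = \lambda\mathbf{v}$ for each:

```python
import numpy as np

# Example matrix (a hypothetical choice; any real 2x2 matrix works).
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for i in range(2):
    lam = eigenvalues[i]
    v = eigenvectors[:, i]
    # The defining property: multiplying by A is the same as scaling by lambda.
    assert np.allclose(A @ v, lam * v)
```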

Why Eigenvectors Give Solutions

Suppose $\mathbf{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$. Consider the function:

$$\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$$

Let us verify that this is a solution. The left side of $\mathbf{x}' = A\mathbf{x}$ is:

$$\mathbf{x}' = \frac{d}{dt}\left(e^{\lambda t}\mathbf{v}\right) = \lambda e^{\lambda t}\mathbf{v}$$

The right side is:

$$A\mathbf{x} = A\left(e^{\lambda t}\mathbf{v}\right) = e^{\lambda t}(A\mathbf{v}) = e^{\lambda t}(\lambda \mathbf{v}) = \lambda e^{\lambda t}\mathbf{v}$$

Both sides are equal. We have found a solution.

This is the essence of the eigenvalue method: eigenvectors point in directions along which the motion stays. A solution starting on an eigenvector line simply grows or decays exponentially along that line, depending on the sign of $\lambda$.
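The verification above can also be done numerically. A minimal sketch (assuming NumPy) compares a finite-difference derivative of $\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$ against $A\mathbf{x}(t)$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
lam = 3.0                      # an eigenvalue of A
v = np.array([1.0, 1.0])       # its eigenvector: A @ v == 3 * v

def x(t):
    # Candidate solution x(t) = e^(lambda t) v
    return np.exp(lam * t) * v

# Central-difference approximation of x'(t) at a sample time.
t, h = 0.5, 1e-6
x_prime = (x(t + h) - x(t - h)) / (2 * h)

# x' agrees with A x, confirming x(t) solves the system.
assert np.allclose(x_prime, A @ x(t), atol=1e-5)
```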

Interactive: Eigenvectors as Straight-Line Solutions


Both eigenvalues are negative: solutions decay to zero along both eigendirections (stable node).

Notice how trajectories that start along an eigenvector direction remain on that line forever. They simply scale: growing if $\lambda > 0$, decaying if $\lambda < 0$. These are the simplest possible solutions to the system.

Finding Eigenvalues

To find eigenvalues, we start from $A\mathbf{v} = \lambda \mathbf{v}$, which can be rewritten as:

$$(A - \lambda I)\mathbf{v} = \mathbf{0}$$

For this to have a nonzero solution $\mathbf{v}$, the matrix $A - \lambda I$ must be singular. This happens exactly when its determinant is zero:

$$\det(A - \lambda I) = 0$$

This is the characteristic equation. For a $2 \times 2$ matrix

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

the characteristic equation becomes:

$$\det \begin{bmatrix} a - \lambda & b \\ c & d - \lambda \end{bmatrix} = (a - \lambda)(d - \lambda) - bc = 0$$

Expanding:

$$\lambda^2 - (a + d)\lambda + (ad - bc) = 0$$

This is a quadratic in $\lambda$. Its roots are the eigenvalues.
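The coefficients of this quadratic are the trace $a + d$ and the determinant $ad - bc$, so the eigenvalues follow from the quadratic formula. A minimal sketch with example entries (a hypothetical matrix, assuming a positive discriminant):

```python
import math

# Entries of A (example values, chosen hypothetically).
a, b, c, d = 1.0, 2.0, 2.0, 1.0

trace = a + d          # coefficient of -lambda
det = a * d - b * c    # constant term

# Roots of lambda^2 - trace*lambda + det = 0.
disc = trace**2 - 4 * det
lam1 = (trace + math.sqrt(disc)) / 2
lam2 = (trace - math.sqrt(disc)) / 2
# For these entries: lam1 = 3.0, lam2 = -1.0
```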

Finding Eigenvectors

Once we have an eigenvalue $\lambda$, we find its eigenvector by solving:

$$(A - \lambda I)\mathbf{v} = \mathbf{0}$$

This gives a system of linear equations. Since the matrix is singular, the equations are dependent, and we get a family of solutions forming a line through the origin. We typically pick one nonzero vector from this line as our eigenvector.
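For the $2 \times 2$ case there is a handy shortcut: the first row of $A - \lambda I$ is $(a - \lambda,\ b)$, so the vector $(b,\ \lambda - a)$ annihilates it, and singularity guarantees it annihilates the second row too. A sketch (assuming NumPy; the helper name is illustrative, and the all-zero-row edge case is handled only roughly):

```python
import numpy as np

def eigenvector_2x2(A, lam):
    # Shortcut for 2x2 matrices: (b, lam - a) satisfies (A - lam*I) v = 0
    # whenever it is nonzero; otherwise fall back to the second row.
    a, b = A[0]
    c, d = A[1]
    if not (np.isclose(b, 0) and np.isclose(lam, a)):
        return np.array([b, lam - a])
    return np.array([lam - d, c])

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
v = eigenvector_2x2(A, 3.0)        # gives (2.0, 2.0), along the line (1, 1)
assert np.allclose(A @ v, 3.0 * v)
```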

The General Solution: Distinct Real Eigenvalues

If $A$ has two distinct real eigenvalues $\lambda_1$ and $\lambda_2$ with corresponding eigenvectors $\mathbf{v}_1$ and $\mathbf{v}_2$, then:

$$\mathbf{x}_1(t) = e^{\lambda_1 t}\mathbf{v}_1 \quad \text{and} \quad \mathbf{x}_2(t) = e^{\lambda_2 t}\mathbf{v}_2$$

are two independent solutions. The general solution is their linear combination:

$$\mathbf{x}(t) = c_1 e^{\lambda_1 t}\mathbf{v}_1 + c_2 e^{\lambda_2 t}\mathbf{v}_2$$

The constants $c_1$ and $c_2$ are determined by the initial condition $\mathbf{x}(0)$.

Interactive: Building the General Solution

$$\mathbf{x}(t) = 1.00\, e^{-0.5t} \mathbf{v}_1 + 0.50\, e^{-1.5t} \mathbf{v}_2$$

The general solution is a linear combination of the two eigenvector solutions. Each component decays along its eigenvector direction at a rate determined by its eigenvalue.


The general solution is a superposition of the two eigenvector solutions. Each component evolves independently: the component along $\mathbf{v}_1$ grows or decays at rate $\lambda_1$, while the component along $\mathbf{v}_2$ grows or decays at rate $\lambda_2$.

Applying Initial Conditions

Given an initial condition $\mathbf{x}(0) = \mathbf{x}_0$, we need to find $c_1$ and $c_2$ such that:

$$\mathbf{x}_0 = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2$$

This is just expressing $\mathbf{x}_0$ as a linear combination of the eigenvectors. Since the eigenvectors form a basis (they are linearly independent), there is exactly one such decomposition.

Example: Consider $\mathbf{x}' = A\mathbf{x}$ where:

$$A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$$

The characteristic equation is:

$$(1 - \lambda)^2 - 4 = 0 \implies \lambda^2 - 2\lambda - 3 = 0 \implies (\lambda - 3)(\lambda + 1) = 0$$

So $\lambda_1 = 3$ and $\lambda_2 = -1$.

For $\lambda_1 = 3$: solving $(A - 3I)\mathbf{v} = \mathbf{0}$ gives $\mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.

For $\lambda_2 = -1$: solving $(A + I)\mathbf{v} = \mathbf{0}$ gives $\mathbf{v}_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$.

The general solution is:

$$\mathbf{x}(t) = c_1 e^{3t} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + c_2 e^{-t} \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

If $\mathbf{x}(0) = \begin{bmatrix} 2 \\ 0 \end{bmatrix}$, then $c_1 + c_2 = 2$ and $c_1 - c_2 = 0$, giving $c_1 = c_2 = 1$.
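This worked example can be reproduced mechanically. A sketch (assuming NumPy): put the eigenvectors in the columns of a matrix $V$ and solve $V\mathbf{c} = \mathbf{x}_0$ for the coefficients:

```python
import numpy as np

v1 = np.array([1.0, 1.0])      # eigenvector for lambda1 = 3
v2 = np.array([1.0, -1.0])     # eigenvector for lambda2 = -1
x0 = np.array([2.0, 0.0])      # initial condition

# Solve c1*v1 + c2*v2 = x0, i.e. V c = x0 with the eigenvectors as columns.
V = np.column_stack([v1, v2])
c1, c2 = np.linalg.solve(V, x0)   # gives c1 = 1.0, c2 = 1.0, as above

def x(t):
    return c1 * np.exp(3 * t) * v1 + c2 * np.exp(-t) * v2

# The solution satisfies the initial condition.
assert np.allclose(x(0.0), x0)
```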

Complex Eigenvalues: Spiraling Solutions

The discriminant of the characteristic equation may be negative, giving complex eigenvalues. For real matrices, complex eigenvalues always come in conjugate pairs:

$$\lambda = \alpha \pm \beta i$$

where $\alpha$ and $\beta$ are real. The real part $\alpha$ controls growth or decay. The imaginary part $\beta$ controls the rotation frequency.

Deriving the real solutions: The complex eigenvalue $\lambda = \alpha + \beta i$ has a complex eigenvector $\mathbf{v}$. The complex solution $e^{\lambda t}\mathbf{v} = e^{\alpha t}e^{i\beta t}\mathbf{v}$ involves complex exponentials. Using Euler's formula $e^{i\beta t} = \cos\beta t + i\sin\beta t$, we can write:

$$e^{\lambda t}\mathbf{v} = e^{\alpha t}(\cos\beta t + i\sin\beta t)\mathbf{v}$$

Separating this into real and imaginary parts gives two real, linearly independent solutions. If $\mathbf{v} = \mathbf{p} + i\mathbf{q}$, where $\mathbf{p}$ and $\mathbf{q}$ are real vectors, the two real solutions are:

$$\mathbf{x}_1(t) = e^{\alpha t}(\mathbf{p}\cos\beta t - \mathbf{q}\sin\beta t)$$

$$\mathbf{x}_2(t) = e^{\alpha t}(\mathbf{p}\sin\beta t + \mathbf{q}\cos\beta t)$$

The general solution is $\mathbf{x}(t) = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t)$.

The key geometric insight is simpler: solutions spiral around the origin.
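A sketch (assuming NumPy) that builds the real solution $\mathbf{x}_1$ from a complex eigenpair of an example matrix with complex eigenvalues, then checks it against the system by finite differences:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])          # eigenvalues -0.5 +/- i*sqrt(7)/2

eigenvalues, eigenvectors = np.linalg.eig(A)
lam = eigenvalues[0]                  # alpha + beta*i
alpha, beta = lam.real, lam.imag
p, q = eigenvectors[:, 0].real, eigenvectors[:, 0].imag   # v = p + i q

def x1(t):
    # Real part of e^(lambda t) v: e^(alpha t) (p cos(beta t) - q sin(beta t))
    return np.exp(alpha * t) * (p * np.cos(beta * t) - q * np.sin(beta * t))

# Finite-difference check that x1' = A x1 at a sample time.
t, h = 0.7, 1e-6
deriv = (x1(t + h) - x1(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x1(t), atol=1e-5)
```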

Interactive: Complex Eigenvalues and Spiraling

$$\lambda = -0.20 \pm 1.00i$$

Stable spiral: solutions spiral inward

The real part $\alpha$ controls decay/growth. The imaginary part $\beta$ controls the rotation frequency.

The real part determines the fate of the spiral:

  • $\alpha < 0$: stable spiral (solutions spiral inward toward the origin)
  • $\alpha > 0$: unstable spiral (solutions spiral outward to infinity)
  • $\alpha = 0$: center (solutions orbit in closed curves, neither approaching nor fleeing)

The imaginary part $\beta$ determines how fast the spiral rotates.

Repeated Eigenvalues

When the characteristic equation has a repeated root $\lambda$ (discriminant equal to zero), we need a second, independent solution. The approach depends on whether there are one or two linearly independent eigenvectors. If there are two, then for a $2 \times 2$ matrix $A = \lambda I$, every vector is an eigenvector, and every solution is simply $\mathbf{x}(t) = e^{\lambda t}\mathbf{x}(0)$.

If there is only one independent eigenvector $\mathbf{v}$, the general solution takes the form:

$$\mathbf{x}(t) = c_1 e^{\lambda t}\mathbf{v} + c_2 e^{\lambda t}(t\mathbf{v} + \mathbf{w})$$

where $\mathbf{w}$ is a generalized eigenvector satisfying $(A - \lambda I)\mathbf{w} = \mathbf{v}$.

The factor of $t$ appears, just as it did for repeated roots in second-order equations. This is the deficient case, producing a degenerate node.
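A sketch of the deficient case (assuming NumPy), using the classic example $A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$, which has the repeated eigenvalue $\lambda = 1$ and only one eigenvector direction:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # repeated eigenvalue lambda = 1, eigenvector (1, 0)
lam = 1.0
v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])     # generalized eigenvector: (A - lam*I) w = v

assert np.allclose((A - lam * np.eye(2)) @ w, v)

def x(t):
    # Second independent solution: e^(lambda t) (t v + w)
    return np.exp(lam * t) * (t * v + w)

# Finite-difference check that x' = A x.
t, h = 0.3, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t), atol=1e-5)
```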

The Complete Picture: From Matrix to Phase Portrait

The eigenvalues of $A$ completely determine the qualitative behavior of the phase portrait:

| Eigenvalues | Behavior |
| --- | --- |
| $\lambda_1 < \lambda_2 < 0$ | Stable node |
| $\lambda_1 > \lambda_2 > 0$ | Unstable node |
| $\lambda_1 < 0 < \lambda_2$ | Saddle point |
| $\alpha \pm \beta i$ with $\alpha < 0$ | Stable spiral |
| $\alpha \pm \beta i$ with $\alpha > 0$ | Unstable spiral |
| $\pm \beta i$ (pure imaginary) | Center |
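This classification can be turned into a small function (a sketch assuming NumPy; the borderline cases of zero or repeated eigenvalues are deliberately omitted):

```python
import numpy as np

def classify(A):
    # Classify the phase portrait of x' = A x from the eigenvalues of A.
    # Borderline cases (repeated or zero eigenvalues) are not handled here.
    lam1, lam2 = np.linalg.eigvals(A)
    if abs(lam1.imag) > 1e-12:                      # complex conjugate pair
        if abs(lam1.real) < 1e-12:
            return "center"
        return "stable spiral" if lam1.real < 0 else "unstable spiral"
    l1, l2 = lam1.real, lam2.real
    if l1 * l2 < 0:
        return "saddle point"
    return "stable node" if max(l1, l2) < 0 else "unstable node"

assert classify(np.array([[1.0, 2.0], [2.0, 1.0]])) == "saddle point"
assert classify(np.array([[0.0, 1.0], [-2.0, -1.0]])) == "stable spiral"
assert classify(np.array([[0.0, 1.0], [-1.0, 0.0]])) == "center"
```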

Interactive: From Matrix to Phase Portrait

$$A = \begin{bmatrix} 0.0 & 1.0 \\ -2.0 & -1.0 \end{bmatrix}$$

Eigenvalues: $\lambda_1 = -0.50 + 1.32i$, $\lambda_2 = -0.50 - 1.32i$ (stable spiral).

Complex eigenvalues produce spiraling motion. The real part determines growth/decay.

Adjust the matrix entries and watch how the eigenvalues change, and with them the entire character of the phase portrait. The connection between algebra (eigenvalues) and geometry (trajectories) is direct and powerful.

The Eigenvalue Method: Summary

The complete algorithm for solving $\mathbf{x}' = A\mathbf{x}$:

  1. Find eigenvalues: Solve $\det(A - \lambda I) = 0$
  2. Find eigenvectors: For each eigenvalue $\lambda$, solve $(A - \lambda I)\mathbf{v} = \mathbf{0}$
  3. Write general solution:
    • Distinct real: $\mathbf{x}(t) = c_1 e^{\lambda_1 t}\mathbf{v}_1 + c_2 e^{\lambda_2 t}\mathbf{v}_2$
    • Complex $\alpha \pm \beta i$: Use $e^{\alpha t}\cos\beta t$ and $e^{\alpha t}\sin\beta t$
    • Repeated: Include a factor of $t$ and use a generalized eigenvector if needed
  4. Apply initial conditions: Solve for $c_1$ and $c_2$ from $\mathbf{x}(0) = \mathbf{x}_0$
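The steps above combine into one short routine (a sketch assuming NumPy and distinct eigenvalues; the function name is illustrative):

```python
import numpy as np

def solve_linear_system(A, x0, t):
    # Steps 1-2: eigenvalues and eigenvectors (columns of V).
    eigenvalues, V = np.linalg.eig(A)
    # Step 4: decompose x0 in the eigenbasis to get the coefficients c.
    c = np.linalg.solve(V, x0)
    # Step 3: superpose c_i * e^(lambda_i t) * v_i; taking the real part
    # handles the complex-conjugate case, where imaginary parts cancel.
    return (V @ (c * np.exp(eigenvalues * t))).real

A = np.array([[1.0, 2.0], [2.0, 1.0]])
x0 = np.array([2.0, 0.0])

# Matches the closed form x(t) = e^(3t)(1,1) + e^(-t)(1,-1) from the example.
t = 0.5
expected = np.exp(3 * t) * np.array([1.0, 1.0]) + np.exp(-t) * np.array([1.0, -1.0])
assert np.allclose(solve_linear_system(A, x0, t), expected)
```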

Why This Matters

The eigenvalue method is more than a technique for solving equations. It reveals why linear systems behave the way they do.

Eigenvalues tell you about stability: are solutions attracted to an equilibrium or repelled from it? Eigenvectors tell you about geometry: in which directions does the attraction or repulsion occur?

This framework extends far beyond $2 \times 2$ systems. In higher dimensions, the same principles apply: eigenvalues classify behavior, and eigenvectors identify the special directions where that behavior is most apparent.

The next chapter will classify all possible behaviors systematically, connecting the eigenvalue structure to named patterns like nodes, spirals, saddles, and centers.

Key Takeaways

  • If $A\mathbf{v} = \lambda \mathbf{v}$, then $\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$ solves $\mathbf{x}' = A\mathbf{x}$
  • Eigenvectors define directions where solutions move in straight lines, simply scaling by $e^{\lambda t}$
  • Eigenvalues are found from the characteristic equation $\det(A - \lambda I) = 0$
  • For distinct real eigenvalues, the general solution is $c_1 e^{\lambda_1 t}\mathbf{v}_1 + c_2 e^{\lambda_2 t}\mathbf{v}_2$
  • Complex eigenvalues $\alpha \pm \beta i$ produce spiraling solutions; the real part controls decay/growth, the imaginary part controls rotation
  • The sign of the eigenvalues determines stability: negative means decay toward equilibrium, positive means growth away from it
  • Initial conditions determine the coefficients $c_1$ and $c_2$ by decomposing the initial state into eigenvector components