Differential Equations · Chapter 4

Wronskian & Cramer's Rule

How do you know if two solutions are truly different building blocks — or just the same thing in disguise?


The Hero Idea

The Big Question

If I have two candidate solution functions, how do I know whether they are truly two different building blocks, or just the same thing wearing a fake mustache?

Everything in this topic flows from that one question. Think of it like building with Lego bricks. If your two "different" bricks are actually identical, you haven't gained anything — you still only have one shape to work with.

For a second-order differential equation, the general solution needs two genuinely different pieces. If your two solutions secretly point in the same direction, you're stuck with half a toolkit.

The Wronskian is the detector. Cramer's Rule is the solver. Together they form the engine of variation of parameters — one of the most powerful methods for attacking nonhomogeneous ODEs.

\[ \underbrace{\text{Linear Independence}}_{\text{the concept}} \;\longrightarrow\; \underbrace{W(y_1,y_2)}_{\text{the test}} \;\longrightarrow\; \underbrace{\text{Cramer's Rule}}_{\text{the solver}} \;\longrightarrow\; \underbrace{y_p}_{\text{particular solution}} \]

Scaffolding — Setting the Stage

🎯
Objective

Determine whether two functions are genuinely independent — and understand exactly how the Wronskian and Cramer's Rule reveal that.

📦
Our Variables

Two candidate solution functions \(y_1(x)\) and \(y_2(x)\). Later, unknown functions \(u_1(x)\) and \(u_2(x)\) for variation of parameters.

Known Form

General solution of a second-order linear homogeneous ODE: \(y = c_1 y_1 + c_2 y_2\) — but only if \(y_1, y_2\) are independent.

⚠️
The Constraint

If \(y_2 = k \cdot y_1\) for some constant \(k\), we don't have two directions — just one rescaled. That breaks the general solution.

We assume all functions are differentiable to the order we need. That's it. No other exotic assumptions.

Linear Independence — Before the Formula

Before touching the Wronskian formula, you need the idea behind it. This is where intuition lives.

Formal Definition

Two functions \(y_1\) and \(y_2\) are linearly independent if the equation \(c_1 y_1(x) + c_2 y_2(x) = 0\) for all \(x\) forces \(c_1 = 0\) and \(c_2 = 0\). Only the trivial combination kills them both.

Think of it like two arrows in space. If you can only make them cancel by pointing no arrow at all, they're truly different directions. If you can cancel them with nonzero amounts, one is just a flipped version of the other.

Dependent Example — Same Direction

Take \(y_1 = e^x\) and \(y_2 = 5e^x\). These are dependent because:

\[ y_2 = 5 \cdot y_1 \]

Same shape, just stretched. You're not getting a new building block.

Independent Example — Different Directions

Take \(y_1 = e^x\) and \(y_2 = xe^x\). No constant multiple of \(e^x\) ever gives you \(xe^x\) — the \(x\) makes it fundamentally different. These are genuinely independent.


Dependent → collapse to a line. Independent → span a real area.

The Wronskian — The Big Move

For second-order ODEs, the "state" of a solution at any point \(x\) is captured by two numbers: the value and the slope. So instead of just looking at \(y_1\) and \(y_2\), we package each into a column vector that carries both:

\[ \begin{bmatrix} y_1(x) \\ y_1'(x) \end{bmatrix} \qquad \text{and} \qquad \begin{bmatrix} y_2(x) \\ y_2'(x) \end{bmatrix} \]

Now we form the matrix made of those two columns and take its determinant. That determinant is the Wronskian:

\[ W(y_1, y_2)(x) = \begin{vmatrix} y_1(x) & y_2(x) \\ y_1'(x) & y_2'(x) \end{vmatrix} = y_1(x)\,y_2'(x) - y_2(x)\,y_1'(x) \]

Expanding the \(2 \times 2\) determinant: top-left times bottom-right, minus top-right times bottom-left. That's it.
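The recipe is easy to check by machine. Here is a minimal sketch using sympy (the helper name `wronskian2` is ours, chosen for illustration), applied to the pairs that appear in the worked examples below:

```python
import sympy as sp

x = sp.symbols('x')

def wronskian2(y1, y2):
    """2x2 Wronskian: y1*y2' - y2*y1'."""
    return sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))

# Independent pair: e^x and x e^x
print(wronskian2(sp.exp(x), x * sp.exp(x)))   # exp(2*x)

# Dependent pair: e^x and 5 e^x
print(wronskian2(sp.exp(x), 5 * sp.exp(x)))   # 0
```

A nonzero symbolic result signals independence; an identically zero result signals dependence.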

Why These Rows?

A second-order ODE cares about position and velocity at each point. Packing both into the columns means the Wronskian tests independence at exactly the right depth — the level that actually matters for the equation.

The Verdict

W ≠ 0

Functions are linearly independent. They point in genuinely different directions. Safe to use as a fundamental solution pair.

W = 0

Functions are linearly dependent — one is just a scaled copy of the other in the value-slope space, so you don't have two real building blocks. (Strictly, a vanishing Wronskian guarantees dependence when \(y_1, y_2\) solve the same linear homogeneous ODE; Abel's theorem below explains why.)

Why It Works — The Geometry

Let's see exactly why a dependent pair kills the Wronskian. Suppose \(y_2 = c \cdot y_1\). Differentiate both sides:

\[ y_2' = c \cdot y_1' \]

Now the two columns of our Wronskian matrix become:

\[ \text{Column 1: } \begin{bmatrix} y_1 \\ y_1' \end{bmatrix} \qquad \text{Column 2: } \begin{bmatrix} y_2 \\ y_2' \end{bmatrix} = c \begin{bmatrix} y_1 \\ y_1' \end{bmatrix} \]

Column 2 is just a scalar multiple of Column 1. A fundamental theorem of linear algebra says: whenever one column is a scalar multiple of another, the determinant is zero.

\[ W = y_1(c y_1') - y_2(y_1') = c\,y_1 y_1' - c\,y_1 y_1' = 0 \]
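That cancellation is easy to confirm symbolically, with \(y_1\) left as a completely arbitrary function. A minimal sympy sketch:

```python
import sympy as sp

x, c = sp.symbols('x c')
y1 = sp.Function('y1')(x)   # an arbitrary differentiable function
y2 = c * y1                 # a dependent partner: scalar multiple of y1

# W = y1*y2' - y2*y1' cancels term by term, exactly as in the algebra above
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))
print(W)   # 0
```

No matter what \(y_1\) is, the two terms are identical and the Wronskian vanishes.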
Geometric Heart

A \(2 \times 2\) determinant equals the signed area of the parallelogram formed by its two column vectors. If the columns lie on the same line, the parallelogram collapses flat — zero area. The Wronskian is literally an area test on value-slope data.

Dependent columns: area \(= 0\), so \(W = 0\). Independent columns: area \(\neq 0\), so \(W \neq 0\).

Abel's Theorem — The Smarter Shortcut

The Big Advantage

Abel's theorem lets you understand the Wronskian without computing the whole determinant directly. It tells you the shape the Wronskian must have, using only the equation's coefficient — not the solutions themselves.

What the Wronskian Does vs. What Abel Gives You

The Wronskian is the direct test. You build the determinant from your solutions and their derivatives, compute it, and check whether it is nonzero. It is the measurement itself.

Abel's theorem is different. For a second-order linear homogeneous equation in standard form

\[ y'' + P(x)\,y' + Q(x)\,y = 0, \]

it tells you that the Wronskian of any two solutions must take the form

\[ W(x) = C\,e^{-\int P(x)\,dx}. \]

That is a powerful statement. You do not need to grind out

\[ W = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix}. \]

The coefficient \(P(x)\) alone tells you the entire functional form of \(W\).

Three Practical Wins

🧮
Win 1 — Less Algebra

If the candidate solutions are ugly, expanding and simplifying the determinant is tedious. Abel sidesteps all of that. You only need \(P(x)\).

🔍
Win 2 — Uses the Equation, Not the Solutions

Abel's result depends only on the coefficient \(P(x)\), so it gives you information about the Wronskian even before you know the full pair of solutions.

🌐
Win 3 — Global Structural Insight

For solutions of a linear homogeneous equation, \(W\) is either identically zero or never zero on an interval. Abel explains why, and that is a much deeper fact than checking one point.

Why the Wronskian Can Never Be "Sometimes Zero"

This is the structural gem. From Abel's formula,

\[ W(x) = C\,e^{-\int P(x)\,dx}. \]

An exponential function never equals zero — it can get very small, but it never touches zero. So the only way \(W(x)\) can be zero anywhere is if \(C = 0\), which forces it to be zero everywhere. There is no in-between. The Wronskian lives on one of two roads: always zero, or never zero.
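Abel's prediction can be cross-checked against a direct computation. Take \(y'' - 2y' + y = 0\), whose solutions are exactly Example A's pair \(e^x\) and \(xe^x\); here \(P(x) = -2\), so Abel predicts \(W = Ce^{2x}\). A minimal sympy sketch (the equation is our illustrative choice):

```python
import sympy as sp

x, C = sp.symbols('x C')

# y'' - 2y' + y = 0 has solutions y1 = e^x, y2 = x e^x; P(x) = -2
y1, y2 = sp.exp(x), x * sp.exp(x)

# Direct Wronskian
W_direct = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))

# Abel's prediction: W = C * exp(-∫P dx)
P = -2
W_abel = C * sp.exp(-sp.integrate(P, x))

print(W_direct)   # exp(2*x)
print(W_abel)     # C*exp(2*x)  — matches the direct result with C = 1
```

The direct determinant lands exactly on the shape Abel promised, never touching zero.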

Why This Is Deeper Than a Point Check

When you compute \(W\) at one value of \(x\) and get a nonzero number, you have actually confirmed it is nonzero everywhere on the interval — because Abel tells you the whole shape is \(Ce^{(\cdots)}\), and exponentials never vanish. One point tells you the whole story.

The Mental Model

⚡ Wronskian vs. Abel — Two Different Roles
🔌 Wronskian — checking the car directly with a voltmeter: a measurement.
Abel's theorem — knowing the wiring rule of the whole car, so you already know what the voltage pattern must look like: a law.
One is a single measurement. The other is the rule governing all measurements: the structure.

When to Still Use the Wronskian Directly

Direct computation is still the right move when the solutions are simple, when you need a quick independence check, or when you are doing variation of parameters — where \(W\) appears explicitly in the numerator and denominator of Cramer's Rule and must be computed anyway.

Abel is better when you want structural understanding rather than brute force: less computation, more theory, and immediate insight into whether \(W\) can ever cross zero.

\[ \textbf{Wronskian} = \text{direct test} \qquad \textbf{Abel} = \text{shortcut} + \text{theory behind the test} \]

Cramer's Rule — The Bridge

Cramer's Rule is a determinant-based way to solve a square linear system. Suppose you have the system:

\[ \begin{cases} a_1 u + b_1 v = r_1 \\ a_2 u + b_2 v = r_2 \end{cases} \]

In matrix form this is \(A \mathbf{x} = \mathbf{r}\). If \(\det(A) \neq 0\), the system has a unique solution. Cramer's Rule gives us that solution without full row reduction:

\[ u = \frac{\det(A_u)}{\det(A)}, \qquad v = \frac{\det(A_v)}{\det(A)} \]

Where \(A_u\) is the matrix with the first column replaced by the right-hand side \(\mathbf{r}\), and \(A_v\) is the matrix with the second column replaced.

The Key Rule for Constructing the Modified Matrices

\[ A_u = \begin{bmatrix} r_1 & b_1 \\ r_2 & b_2 \end{bmatrix}, \qquad A_v = \begin{bmatrix} a_1 & r_1 \\ a_2 & r_2 \end{bmatrix} \]
Why This Connects to the Wronskian

The Wronskian is also a \(2 \times 2\) determinant. When variation of parameters produces its linear system, the coefficient matrix is exactly the Wronskian matrix. So Cramer's Rule and the Wronskian snap together naturally.

If \(\det(A) = 0\), Cramer's Rule breaks down — the system might have no solution or infinitely many. This is exactly why a zero Wronskian signals trouble: the whole solving machinery fails.
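The rule fits in a few lines of plain Python. A minimal sketch for the \(2 \times 2\) case (the function name `cramer_2x2` and the sample numbers are ours):

```python
def cramer_2x2(a1, b1, r1, a2, b2, r2):
    """Solve  a1*u + b1*v = r1,  a2*u + b2*v = r2  by Cramer's Rule."""
    det_A = a1 * b2 - b1 * a2          # determinant of the coefficient matrix
    if det_A == 0:
        raise ValueError("det(A) = 0: no unique solution, Cramer breaks down")
    u = (r1 * b2 - b1 * r2) / det_A    # first column replaced by the RHS
    v = (a1 * r2 - r1 * a2) / det_A    # second column replaced by the RHS
    return u, v

# 2u + v = 5  and  u + 3v = 10
print(cramer_2x2(2, 1, 5, 1, 3, 10))   # (1.0, 3.0)
```

Note how the zero-determinant case is not a numerical nuisance but a hard stop — the same hard stop a zero Wronskian imposes.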

Worked Examples

Three examples — each one walks through the computation step by step.

Example A
\(y_1 = e^x,\quad y_2 = xe^x\) — Independent pair
Step 1 — Differentiate

\(y_1' = e^x\)

\(y_2' = e^x + xe^x = e^x(1+x)\)

Product rule on \(xe^x\): derivative of \(x\) times \(e^x\), plus \(x\) times derivative of \(e^x\).

Step 2 — Set up the Wronskian
\[ W = \begin{vmatrix} e^x & xe^x \\ e^x & e^x(1+x) \end{vmatrix} \]
Step 3 — Factor out common terms

Factor \(e^x\) from each column:

\[ W = e^{2x} \begin{vmatrix} 1 & x \\ 1 & 1+x \end{vmatrix} \]

Each column had one factor of \(e^x\), so both come out front as \(e^{2x}\).

Step 4 — Expand the 2×2 determinant
\[ W = e^{2x}\Big(1 \cdot (1+x) - x \cdot 1\Big) = e^{2x}(1 + x - x) = e^{2x} \]
Result
\(W = e^{2x} \neq 0\) for all \(x\) — functions are linearly independent ✓
Example B
\(y_1 = e^x,\quad y_2 = 5e^x\) — Dependent pair
Step 1 — Differentiate

\(y_1' = e^x, \quad y_2' = 5e^x\)

The derivative of a constant times a function is the constant times the derivative.

Step 2 — Compute
\[ W = \begin{vmatrix} e^x & 5e^x \\ e^x & 5e^x \end{vmatrix} = e^x(5e^x) - 5e^x(e^x) = 5e^{2x} - 5e^{2x} \]

Notice the two rows are identical — that's the visual signal that the determinant must be zero.

Result
\(W = 0\) everywhere — functions are linearly dependent ✗
Example C
\(y_1 = \cos x,\quad y_2 = \sin x\) — Classic pair
Step 1 — Differentiate

\(y_1' = -\sin x, \quad y_2' = \cos x\)

Step 2 — Set up and expand
\[ W = \begin{vmatrix} \cos x & \sin x \\ -\sin x & \cos x \end{vmatrix} = \cos x \cdot \cos x - \sin x \cdot (-\sin x) \]
Step 3 — Apply the Pythagorean identity
\[ W = \cos^2 x + \sin^2 x = 1 \]

The Pythagorean identity \(\cos^2 + \sin^2 = 1\) appears here naturally. This is why sine and cosine are the perfect fundamental pair for \(y'' + y = 0\).

Result
\(W = 1 \neq 0\) everywhere — perfect independent pair ✓

This is exactly why \(\cos x\) and \(\sin x\) form the fundamental solution set for \(y'' + y = 0\). The Wronskian gives them its blessing.

Visual Intuition — The Arrow Picture

At any fixed \(x\), the Wronskian uses two arrows in the value-slope plane:

\[ \vec{v}_1 = \begin{bmatrix} y_1 \\ y_1' \end{bmatrix}, \qquad \vec{v}_2 = \begin{bmatrix} y_2 \\ y_2' \end{bmatrix} \]

Each arrow says: "here is where I am, and here is where I'm headed." The Wronskian measures whether those two arrows span a real 2D area, or collapse into a single line.

Independent

Arrows point in different directions. They form a parallelogram with real area. \(W \neq 0\).

Dependent

One arrow sits on top of the other. Parallelogram collapses to a line segment. Area \(= 0\), so \(W = 0\).

The Deep Reason

The \(2 \times 2\) determinant of a matrix equals the signed area of the parallelogram spanned by its columns. So the Wronskian is fundamentally a geometry question: do my value-slope vectors span a plane, or do they degenerate to a line?
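A quick numeric check of the area claim, using numpy on two made-up value-slope vectors (the numbers are hypothetical, chosen only for illustration):

```python
import numpy as np

# Two value-slope vectors as columns of a 2x2 matrix
v1 = np.array([3.0, 1.0])
v2 = np.array([1.0, 2.0])

# Determinant = signed area of the parallelogram spanned by the columns
area = np.linalg.det(np.column_stack([v1, v2]))       # 3*2 - 1*1 = 5

# Make the second column a multiple of the first: the parallelogram flattens
area_flat = np.linalg.det(np.column_stack([v1, 2 * v1]))

print(round(area, 6), round(abs(area_flat), 6))   # 5.0 0.0
```

Collinear columns collapse the area to zero, which is precisely the geometric content of \(W = 0\).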

Bridge to Differential Equations

For a second-order linear homogeneous ODE, we want the general solution:

\[ y = c_1 y_1 + c_2 y_2 \]

But this only works as a full general solution if \(y_1\) and \(y_2\) are genuinely independent. If one is a copy of the other, you've only got one real parameter — the two constants \(c_1\) and \(c_2\) collapse into one effective constant, and you can't satisfy arbitrary initial conditions.

🧱
W ≠ 0

You have two real, distinct building blocks. The general solution is complete. Any initial condition can be matched.

🚫
W = 0

One solution is redundant. You cannot satisfy the full set of initial conditions. You need to find a genuinely new solution.

The Wronskian is therefore not just a test you do for fun. It's the gatekeeper that confirms your solution set is actually complete.

Cramer's Rule in Variation of Parameters

Now we see the payoff. Consider the nonhomogeneous equation:

\[ y'' + P(x)y' + Q(x)y = f(x) \]

Let \(y_1\) and \(y_2\) be two independent solutions of the associated homogeneous equation. Variation of parameters says: try a particular solution of the form

\[ y_p = u_1(x)\,y_1(x) + u_2(x)\,y_2(x) \]

Here \(u_1\) and \(u_2\) are unknown functions — not constants. We need to find them.

The Simplification Constraint

To keep the algebra manageable, we impose one condition by choice (it's a clever trick, not a requirement from nature):

\[ u_1' y_1 + u_2' y_2 = 0 \]

This eliminates messy second-derivative terms. After differentiating \(y_p\) again and substituting into the ODE, we get a second equation:

\[ u_1' y_1' + u_2' y_2' = f(x) \]

The Linear System

Now the unknowns are \(u_1'\) and \(u_2'\). We have a 2×2 system:

System to Solve
\[ \begin{bmatrix} y_1 & y_2 \\ y_1' & y_2' \end{bmatrix} \begin{bmatrix} u_1' \\ u_2' \end{bmatrix} = \begin{bmatrix} 0 \\ f(x) \end{bmatrix} \]

The determinant of this coefficient matrix is exactly \(W(y_1, y_2)\).

The coefficient matrix is the Wronskian matrix. Its determinant is \(W\). Now apply Cramer's Rule:

Solve for u₁'

Replace column 1 with the RHS \([0, f(x)]^T\):

\[ W_1 = \begin{vmatrix} 0 & y_2 \\ f(x) & y_2' \end{vmatrix} = -y_2 f(x) \]
\[ u_1' = \frac{W_1}{W} = \frac{-y_2 f(x)}{W} \]
Solve for u₂'

Replace column 2 with the RHS \([0, f(x)]^T\):

\[ W_2 = \begin{vmatrix} y_1 & 0 \\ y_1' & f(x) \end{vmatrix} = y_1 f(x) \]
\[ u_2' = \frac{W_2}{W} = \frac{y_1 f(x)}{W} \]
The Double Duty of W

The Wronskian does two jobs in one: it certifies independence of \(y_1\) and \(y_2\), and it appears as the denominator in the Cramer's Rule step. If \(W = 0\), both jobs fail simultaneously — you can't certify independence, and you can't solve the system. That's elegant and not a coincidence.

Integrate to Get \(u_1\) and \(u_2\)

Once you have \(u_1'\) and \(u_2'\), integrate:

\[ u_1 = \int \frac{-y_2 f(x)}{W}\,dx, \qquad u_2 = \int \frac{y_1 f(x)}{W}\,dx \]

Then the particular solution is:

\[ y_p = u_1(x)\,y_1(x) + u_2(x)\,y_2(x) \]

And the full general solution is:

\[ y = c_1 y_1 + c_2 y_2 + y_p \]
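The whole pipeline can be exercised end to end with sympy. Here is a minimal sketch for the concrete equation \(y'' + y = \sec x\) — our illustrative choice, pairing Example C's fundamental set \(\cos x, \sin x\) with a right-hand side that variation of parameters handles cleanly:

```python
import sympy as sp

x = sp.symbols('x')

# Homogeneous pair for y'' + y = 0, with f(x) = sec(x) = 1/cos(x)
y1, y2 = sp.cos(x), sp.sin(x)
f = 1 / sp.cos(x)

# Wronskian (Example C): cos^2 + sin^2 = 1
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))

# Cramer's Rule outputs, then integrate
u1 = sp.integrate(-y2 * f / W, x)   # -sin/cos = -tan  ->  log(cos(x))
u2 = sp.integrate(y1 * f / W, x)    # cos/cos  = 1     ->  x

y_p = u1 * y1 + u2 * y2             # cos(x)*log(cos(x)) + x*sin(x)

# Check: substituting back, y_p'' + y_p - f should vanish
residual = sp.simplify(sp.diff(y_p, x, 2) + y_p - f)
print(y_p, residual)   # residual is 0
```

The final assertion that the residual simplifies to zero is exactly the statement that \(y_p\) solves the nonhomogeneous equation.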

Lock-in Summary

Concept
Linear Independence

\(c_1 y_1 + c_2 y_2 = 0\) for all \(x\) only when \(c_1 = c_2 = 0\).

Formula
Wronskian

\(W = y_1 y_2' - y_2 y_1'\). Nonzero means independent. Zero means dependent.

Theorem
Abel's Theorem

\(W(x) = Ce^{-\int P(x)\,dx}\). The form of \(W\) is fully determined by \(P(x)\) alone — no need to compute the determinant directly.

Geometry
Area Test

W measures the signed area of the parallelogram formed by the value-slope column vectors.

Solver
Cramer's Rule

Replace one column with the RHS, compute the ratio of determinants. W is the denominator.

Application
Var. of Parameters

Uses both simultaneously: W certifies the pair is good, then appears as the denominator in Cramer's step.

Warning
W = 0

Certifies nothing. Cramer breaks. The pair is no good. Find a genuinely independent partner.

⚡ Battle Card — The Whole Machine
01 Need two real solution building blocks → check independence
02 Check independence → use Wronskian
03 \(W \neq 0\) → independent ✓
04 \(W = 0\) → dependent warning ✗
05 Variation of parameters → need \(u_1', u_2'\) → set up system
06 Solve system → Cramer's Rule
07 Denominator in Cramer's Rule = Wronskian
Same \(W\) does both: independence test and system solver.