§ 5.6 · Second-Order Linear ODEs · Engineering Mathematics

Reduction of Order

You already hold one key. This page shows exactly how to use it to find the second — deriving every formula from scratch, with every algebraic step visible.

5.6.1 — Find \(y_2\) given \(y_1\) · Wronskian · Abel's Theorem · 3 Worked Examples · Interactive Solver
§ 1

The Big Idea First

Think of a second-order ODE as a two-lock safe. Someone has already handed you the key for lock #1 — that's \(y_1\). You don't need to crack it from scratch. You just need lock #2.

A second-order linear homogeneous ODE (on any interval where its coefficients are continuous) always has two linearly independent solutions. The general solution is their combination:

$$ y = c_1\,y_1 + c_2\,y_2 $$

If someone hands you \(y_1\), you need a method to find \(y_2\) that is genuinely different — not just a scaled copy of \(y_1\).

Core idea

We look for \(y_2\) in the form \(y_2 = v(x)\,y_1(x)\). We take the known solution and multiply it by an unknown function \(v(x)\). Think of \(v(x)\) as a dimmer dial attached to \(y_1\) — instead of a fixed brightness (a constant), it's free to vary as \(x\) changes. Plugging this form into the ODE will tell us exactly what shape \(v\) needs to have. The key payoff: a 2nd-order ODE in \(y\) becomes a 1st-order ODE in \(v'\). The order has been reduced by one — hence the name.

Know \(y_1\) (given) → Guess \(y_2 = v\,y_1\) (ansatz) → Substitute (2nd order → 1st order) → Solve for \(v'(x)\) (integrate) → Get \(y_2\) (done)
§ 2

The General Derivation — No Steps Skipped

Start with any second-order linear homogeneous ODE. We write it in standard form, meaning the coefficient in front of \(y''\) is exactly 1:

$$ y'' + P(x)\,y' + Q(x)\,y = 0 $$

We are told that \(y_1\) is a solution. That means, if you plug \(y_1\) in, you get zero:

$$ y_1'' + P(x)\,y_1' + Q(x)\,y_1 = 0 \quad \text{(this is our secret weapon)} $$

Step 1 — Make the Ansatz (educated guess)

We propose that the second solution has the form:

$$ y_2 = v(x)\,y_1(x) $$

Here \(v(x)\) is completely unknown. It's not a constant — it's a whole function we need to discover. Our goal is to find an equation that \(v\) must satisfy.

Step 2 — Differentiate twice using the product rule

Since both \(v\) and \(y_1\) depend on \(x\), every differentiation requires the product rule: \(\tfrac{d}{dx}[fg] = f'g + fg'\).

$$ y_2 = v\,y_1 $$ $$ y_2' = v'\,y_1 + v\,y_1' $$

Differentiate \(y_2'\) again. There are two terms, each a product, so product rule applies to each:

$$ y_2'' = \frac{d}{dx}(v'\,y_1) + \frac{d}{dx}(v\,y_1') $$ $$ = v''\,y_1 + v'\,y_1' + v'\,y_1' + v\,y_1'' $$ $$ = v''\,y_1 + 2v'\,y_1' + v\,y_1'' $$
Why three terms in \(y_2''\)?

Each of the two terms in \(y_2'\) is itself a product of two functions. Applying the product rule to each gives two new terms per term — but the middle terms \(v'\,y_1'\) add up into \(2v'\,y_1'\). That doubling is exactly why the coefficient is 2, not 1.

Step 3 — Substitute into the ODE

Now plug \(y_2, y_2', y_2''\) into \(y_2'' + P\,y_2' + Q\,y_2 = 0\). Write each piece out fully before grouping:

$$ \underbrace{(v''\,y_1 + 2v'\,y_1' + v\,y_1'')}_{y_2''} + P\underbrace{(v'\,y_1 + v\,y_1')}_{y_2'} + Q\underbrace{(v\,y_1)}_{y_2} = 0 $$

Step 4 — Group by what multiplies \(v''\), \(v'\), and \(v\)

Collect terms by the "flavor" of derivative of \(v\) they involve:

$$ \underbrace{y_1}_{\text{coeff. of }v''}\cdot v'' \;+\; \underbrace{(2y_1' + P\,y_1)}_{\text{coeff. of }v'}\cdot v' \;+\; \underbrace{(y_1'' + P\,y_1' + Q\,y_1)}_{\text{this is the ODE for }y_1}\cdot v = 0 $$
The magic cancellation

Look at the coefficient of \(v\). It's exactly \(y_1'' + P\,y_1' + Q\,y_1\) — precisely the ODE applied to \(y_1\). But \(y_1\) is a solution, so that whole bracket equals zero. The \(v\)-term disappears entirely.

What remains after the cancellation:

$$ y_1\,v'' + (2y_1' + P\,y_1)\,v' = 0 $$

This contains only \(v''\) and \(v'\) — no bare \(v\). That means if we rename \(w = v'\), we get a first-order ODE in \(w\). A second-order problem has been reduced to a first-order one.

Step 5 — Substitute \(w = v'\) and solve

$$ y_1\,w' + (2y_1' + P\,y_1)\,w = 0 $$

Divide both sides by \(y_1\) (assuming \(y_1 \neq 0\)):

$$ w' + \left(\frac{2y_1'}{y_1} + P\right)w = 0 $$

Separate variables — move everything with \(w\) to the left side, divide by \(w\):

$$ \frac{dw}{w} = -\left(\frac{2y_1'}{y_1} + P\right)dx $$

Integrate both sides. For the left side: \(\int dw/w = \ln|w|\). For the right side, note that \(\int \tfrac{2y_1'}{y_1}\,dx = 2\ln|y_1|\) (this is just the chain rule read backwards: the derivative of \(\ln|y_1|\) is \(y_1'/y_1\)):

$$ \ln|w| = -2\ln|y_1| - \int P\,dx $$

Exponentiate both sides (we drop the integration constant — any one particular \(w\) will do). Use \(e^{-2\ln|y_1|} = e^{\ln(y_1^{-2})} = y_1^{-2}\):

$$ w = v' = \frac{e^{-\int P\,dx}}{y_1^2} $$

Integrate one more time to find \(v\), then multiply by \(y_1\):

$$ \boxed{y_2 = y_1 \int \frac{e^{-\int P\,dx}}{y_1^2}\,dx} $$

This is the closed-form formula for the second solution. Every symbol has a job: the exponential in the numerator is an integrating factor (it carries information about the "friction" coefficient \(P\)); the \(y_1^2\) in the denominator normalizes away the known solution; and the outer \(y_1\) builds \(y_2 = v \cdot y_1\).
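The boxed formula is also directly computable. Here is a minimal sketch in Python using sympy (our choice of tool for illustration; only the formula itself comes from the text):

```python
import sympy as sp

x = sp.Symbol('x', positive=True)

def second_solution(y1, P):
    """Reduction-of-order formula: y2 = y1 * Integral(e^{-Integral(P)} / y1^2).

    Assumes the ODE is already in standard form y'' + P(x) y' + Q(x) y = 0
    and that y1 solves it. Integration constants are dropped (any one
    particular v will do).
    """
    mu = sp.exp(-sp.integrate(P, x))   # integrating factor e^{-∫P dx}
    v = sp.integrate(mu / y1**2, x)    # v = ∫ e^{-∫P dx} / y1^2 dx
    return sp.simplify(v * y1)         # y2 = v * y1

# Repeated-root equation y'' - 4y' + 4y = 0, known solution y1 = e^{2x}
print(second_solution(sp.exp(2*x), -4))  # x*exp(2*x)
```

The same function applied to the Euler equation of § 6 (\(y_1 = x^2\), \(P = -3/x\)) returns \(x^2\ln x\).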

§ 3

Why Do the \(v\)-Terms Always Cancel?

This is the heart of the whole method, so it deserves its own explanation.

The built-in memory of the equation

When we write \(y_2 = v \cdot y_1\) and substitute, the ODE is being applied to a product. The \(v\)-terms gather into exactly \((\text{ODE applied to }y_1) \cdot v\). Since \(y_1\) satisfies the ODE, that expression is zero — the equation has a "memory" of its own solution baked in, and it automatically kills any attempt to reproduce \(y_1\) in the substitution.

Here's an analogy: think of the ODE as a filter that kills its own known solutions. When you feed it \(v \cdot y_1\), the filter erases all the \(y_1\)-shaped content, and what's left (the equation in \(v'\) and \(v''\)) describes how the multiplier \(v\) must behave. You're seeing the "residue" after stripping out the known part.

Why this always works — no exceptions

The cancellation is algebraic and universal. It doesn't depend on what \(y_1\) is, what \(P(x)\) and \(Q(x)\) are, or whether the ODE has nice coefficients. As long as \(y_1\) genuinely solves the ODE, the \(v\)-terms will always vanish. The method never fails on homogeneous second-order linear ODEs.
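The cancellation can even be verified without choosing any particular \(P\), \(Q\), or \(y_1\): a computer algebra system confirms it for fully symbolic functions. A sympy sketch (illustration only, not part of the page's derivation):

```python
import sympy as sp

x = sp.Symbol('x')
v = sp.Function('v')(x)
y1 = sp.Function('y1')(x)
P = sp.Function('P')(x)
Q = sp.Function('Q')(x)

# Substitute the ansatz y2 = v*y1 into y'' + P y' + Q y, then read off
# what multiplies the bare v
y2 = v * y1
expr = sp.expand(sp.diff(y2, x, 2) + P*sp.diff(y2, x) + Q*y2)
coeff_v = expr.coeff(v)

# coeff_v is exactly y1'' + P y1' + Q y1: the ODE applied to y1,
# which is zero whenever y1 is a solution
print(sp.simplify(coeff_v - (sp.diff(y1, x, 2) + P*sp.diff(y1, x) + Q*y1)))  # 0
```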

§ 4

Why Is \(y_2\) Independent of \(y_1\)? — The Wronskian

There's a worry: what if the \(y_2\) we produce is just a multiple of \(y_1\) in disguise? If that happened, we'd have gained nothing. We need to prove this can't occur.

Wronskian — what it is and why it matters

The Wronskian of two functions is defined as the determinant:

$$W(y_1, y_2) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix} = y_1\,y_2' - y_2\,y_1'$$

Two solutions are linearly independent if and only if \(W \neq 0\) on the interval. Think of the Wronskian as a test for "are these two curves really going in different directions?" — if \(W = 0\), one curve is just a scaled version of the other; if \(W \neq 0\), they are genuinely independent.

Computing \(W(y_1, y_2)\) for our pair

We have \(y_2 = v\,y_1\), so \(y_2' = v'\,y_1 + v\,y_1'\). Plug into the definition:

$$ W = y_1\,(v'\,y_1 + v\,y_1') - (v\,y_1)\,y_1' $$ $$ = y_1^2\,v' + v\,y_1\,y_1' - v\,y_1\,y_1' $$ $$ = y_1^2\,v' $$

The two \(v\,y_1\,y_1'\) terms cancel. Now recall the formula we derived: \(v' = e^{-\int P\,dx}/y_1^2\). Substitute:

$$ W = y_1^2 \cdot \frac{e^{-\int P\,dx}}{y_1^2} = e^{-\int P\,dx} $$
Guaranteed independence

The Wronskian equals \(e^{-\int P\,dx}\). An exponential is never zero for any finite \(x\). Therefore \(y_1\) and \(y_2\) are always linearly independent — the method is guaranteed to produce a genuinely new solution, every single time.
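Both facts are easy to confirm on concrete pairs. A sympy sketch (the example pairs are the ones derived later on this page):

```python
import sympy as sp

x = sp.Symbol('x', positive=True)

def wronskian(f, g):
    """W(f, g) = f*g' - g*f'."""
    return sp.simplify(f*sp.diff(g, x) - g*sp.diff(f, x))

# Pair from y'' - 4y' + 4y = 0 (P = -4): expect W = e^{-∫P dx} = e^{4x}, never zero
print(wronskian(sp.exp(2*x), x*sp.exp(2*x)))  # exp(4*x)

# Pair from 2x^2 y'' + x y' - 3y = 0 (P = 1/(2x)): Abel predicts W = C*x^{-1/2}
print(wronskian(1/x, x**sp.Rational(3, 2)))
```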

Bonus: Abel's Theorem falls out for free

Abel's theorem

We just showed that \(W(x) = C\,e^{-\int P(x)\,dx}\) for some constant \(C\). (For our constructed pair, \(C = 1\); for any two solutions, differentiating \(W\) and using the ODE gives \(W' = -P\,W\), which has exactly this solution.) This means: the Wronskian of any two solutions either is identically zero on an interval, or it is never zero on that interval. It cannot gradually drift from nonzero to zero. This is Abel's theorem — and we didn't need to look it up. It appeared automatically from the derivation.

§ 5

Full Worked Example — Every Line Shown

Let's apply everything to a concrete Euler-type equation:

$$ 2t^2\,y'' + t\,y' - 3y = 0, \qquad \text{given } y_1 = t^{-1} $$
What we need to do

A second-order equation needs two independent solutions to write the complete general solution. We have \(y_1 = t^{-1}\). Our job is to find \(y_2\) — without guessing — using the method we just derived.

Step 1 — The ansatz: multiply \(y_1\) by an unknown function

Write \(y_2 = v(t) \cdot t^{-1}\). Here \(v(t)\) is completely unknown. Multiplying by a fixed constant would just give another copy of \(y_1\); a varying \(v(t)\) is how we escape.

$$ y_2 = v(t)\cdot t^{-1} $$

Step 2 — Differentiate: apply the product rule

Start with the definition. Two functions of \(t\) multiplied together:

$$ y_2 = v(t) \cdot t^{-1} $$

Every time we differentiate, both parts contribute — hence product rule throughout.

Product rule: \(\tfrac{d}{dt}[fg] = f'g + fg'\). Here \(f = v\) and \(g = t^{-1}\):

$$ y_2' = v' \cdot t^{-1} + v \cdot (t^{-1})' $$

Evaluate the derivative of the power: \((t^{-1})' = -t^{-2}\):

$$ y_2' = t^{-1}v' - t^{-2}v $$

Differentiate \(y_2' = t^{-1}v' - t^{-2}v\). Each term is a product, so apply product rule to each separately.

Term 1: \(\dfrac{d}{dt}(t^{-1}v')\)

$$ (t^{-1})'\cdot v' + t^{-1}\cdot v'' = -t^{-2}v' + t^{-1}v'' $$

Term 2: \(\dfrac{d}{dt}(-t^{-2}v)\)

$$ (-t^{-2})'\cdot v + (-t^{-2})\cdot v' = 2t^{-3}v - t^{-2}v' $$

Add both results:

$$ y_2'' = t^{-1}v'' - 2t^{-2}v' + 2t^{-3}v $$

Step 3 — Substitute into the DE and watch the collapse

Drop \(y_2, y_2', y_2''\) into \(2t^2\,y'' + t\,y' - 3y = 0\):

Expand every term — full distribution
$$ 2t^2\!\left(t^{-1}v'' - 2t^{-2}v' + 2t^{-3}v\right) + t\!\left(t^{-1}v' - t^{-2}v\right) - 3(t^{-1}v) = 0 $$

Distribute \(2t^2\) into each term in the first bracket:

$$ 2t^2 \cdot t^{-1}v'' = 2tv'' \qquad 2t^2 \cdot (-2t^{-2}v') = -4v' \qquad 2t^2 \cdot 2t^{-3}v = 4t^{-1}v $$

Distribute \(t\) into the second bracket:

$$ t \cdot t^{-1}v' = v' \qquad t \cdot (-t^{-2}v) = -t^{-1}v $$

Third term: \(-3\cdot t^{-1}v = -3t^{-1}v\). Now collect by derivative type:

$$ (2tv'') + (-4v' + v') + (4t^{-1}v - t^{-1}v - 3t^{-1}v) = 0 $$

After collecting, here is what happens to each group:

  • \(v''\) group: \(2t\,v''\) — stays
  • \(v'\) group: \(-4v' + v' = -3v'\) — stays
  • \(v\) group: \(4t^{-1}v - t^{-1}v - 3t^{-1}v = 0\) — cancels!
$$ 2t\,v'' - 3v' = 0 $$
Why did the \(v\)-terms cancel?

Because \(t^{-1}\) was already a solution of the original ODE. The equation has a built-in memory of this fact — substituting a known solution always kills the zeroth-order terms. This is the mechanism that makes the method work.

Step 4 — Reduce the order: let \(w = v'\)

We now have \(2t\,v'' - 3v' = 0\). Notice: it contains \(v''\) and \(v'\) but no bare \(v\). It is really an equation about the function \(v'\). Give it its own name:

$$ w = v' \qquad \Longrightarrow \qquad w' = v'' $$
Velocity analogy

Imagine you have an equation involving only velocity and acceleration, but not position. You don't need position yet. Call velocity \(w\), solve for it, then integrate at the end to recover position. Here \(v\) plays the role of "position," \(v'\) is "velocity," and we only need velocity for now.

The equation becomes first-order in \(w\):

$$ 2t\,w' - 3w = 0 $$
1 — Separate variables

Rearrange to put \(w\)'s on the left and \(t\)'s on the right:

$$ \frac{w'}{w} = \frac{3}{2t} \qquad \Longrightarrow \qquad \frac{1}{w}\,dw = \frac{3}{2t}\,dt $$
2 — Integrate both sides

Left side: \(\int dw/w = \ln|w|\). Right side: \(\int \tfrac{3}{2t}\,dt = \tfrac{3}{2}\ln|t|\).

$$ \ln|w| = \frac{3}{2}\ln|t| + C $$
3 — Exponentiate (convert out of logarithm form)

Apply \(e^{(\cdot)}\) to both sides. Use \(e^{\frac{3}{2}\ln|t|} = |t|^{3/2} = t^{3/2}\) (for \(t > 0\)) and absorb \(e^C\) into a new constant \(C\):

$$ w = C\,t^{3/2} $$

Step 5 — Integrate \(w\) to find \(v\)

Since \(w = v'\):

$$ v' = C\,t^{3/2} $$

Integrate using the power rule \(\bigl(\int t^n\,dt = \tfrac{t^{n+1}}{n+1}\bigr)\):

$$ v = C\int t^{3/2}\,dt = C \cdot \frac{t^{5/2}}{5/2} = \frac{2C}{5}\,t^{5/2} $$

Add the constant of integration and rename constants for cleanliness:

$$ v = A\,t^{5/2} + K $$

Step 6 — Build \(y_2\) and discard the redundant piece

Recall \(y_2 = v \cdot t^{-1}\). Substitute \(v = At^{5/2} + K\):

$$ y_2 = \left(At^{5/2} + K\right)t^{-1} = A\,t^{3/2} + K\,t^{-1} $$
  • First term: \(A\,t^{3/2}\) — new; keep it
  • Second term: \(K\,t^{-1}\) — a copy of \(y_1\); discard
Why discard the \(Kt^{-1}\) piece?

The \(Kt^{-1}\) term is just a copy of the original \(y_1\) hiding inside \(y_2\). If we kept it, the "second" solution would be \(At^{3/2} + Kt^{-1}\) — but the \(Kt^{-1}\) part is already covered by \(c_1 y_1\) in the general solution. Keeping it would be double-counting. Strip it out. What remains — \(t^{3/2}\) — traces a completely different curve that cannot be made from \(t^{-1}\) by any scaling.

$$ y_2 = t^{3/2} $$

Step 7 — The general solution

General Solution $$ y(t) = C_1\,t^{-1} + C_2\,t^{3/2} $$

Step 8 — Verify \(y_2 = t^{3/2}\) satisfies the DE

Never skip verification — it's insurance against algebra errors.

Solution + derivatives
$$ y = t^{3/2} $$ $$ y' = \tfrac{3}{2}\,t^{1/2} $$ $$ y'' = \tfrac{3}{4}\,t^{-1/2} $$
Substitute into \(2t^2y'' + ty' - 3y\)
$$ 2t^2 \cdot \tfrac{3}{4}t^{-1/2} + t \cdot \tfrac{3}{2}t^{1/2} - 3t^{3/2} $$ $$ = \tfrac{3}{2}t^{3/2} + \tfrac{3}{2}t^{3/2} - 3t^{3/2} $$ $$ = 0 \;\checkmark $$

All three terms are proportional to \(t^{3/2}\) and cancel exactly to zero. The solution is confirmed.
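The same verification can be done mechanically for both solutions at once; a sympy sketch:

```python
import sympy as sp

t = sp.Symbol('t', positive=True)

def residual(y):
    """Left-hand side of 2t^2 y'' + t y' - 3y evaluated at a candidate y."""
    return sp.simplify(2*t**2*sp.diff(y, t, 2) + t*sp.diff(y, t) - 3*y)

# Both members of the general solution must give residual 0
print(residual(1/t), residual(t**sp.Rational(3, 2)))  # 0 0
```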

Interactive — Feel What \(C_2\,t^{3/2}\) Adds

[Interactive plot: the solution family \(y = C_1 t^{-1} + C_2 t^{3/2}\), with sliders for \(C_1\) and \(C_2\). Dashed curve: \(C_1 t^{-1}\) alone; solid curve: the full solution.]

When \(C_2 = 0\), you only see \(y_1 = t^{-1}\). As soon as \(C_2 \neq 0\), the second solution switches on and bends the curve in a genuinely new direction. That is linear independence made visible — two shapes so different that no scaling of one can produce the other.

§ 6

More Worked Examples — Using the Formula

Now that we have the formula \(y_2 = y_1 \int \tfrac{e^{-\int P\,dx}}{y_1^2}\,dx\), we can apply it efficiently. The key preparatory step is always: rewrite in standard form first (divide so the leading coefficient of \(y''\) is 1) to read off \(P(x)\).

Ex 1 Euler-type: \(x^2y'' - 3xy' + 4y = 0\), given \(y_1 = x^2\)

Standard form: Divide by \(x^2\):

$$ y'' - \frac{3}{x}\,y' + \frac{4}{x^2}\,y = 0 \quad \Rightarrow \quad P(x) = -\frac{3}{x} $$

Verify \(y_1 = x^2\): compute \(y_1' = 2x,\; y_1'' = 2\). Substitute into the original ODE:

$$ x^2(2) - 3x(2x) + 4(x^2) = 2x^2 - 6x^2 + 4x^2 = 0 \;\checkmark $$
  • 1. Compute \(\displaystyle e^{-\int P\,dx}\). Here \(P = -3/x\), so \(-P = 3/x\): $$ e^{-\int P\,dx} = e^{\int 3/x\,dx} = e^{3\ln x} = x^3 $$ (the negative sign in front of \(P\) flips \(-3/x\) to \(+3/x\); integrating \(3/x\) gives \(3\ln x\))
  • 2. Form \(v'\): $$ v' = \frac{e^{-\int P\,dx}}{y_1^2} = \frac{x^3}{(x^2)^2} = \frac{x^3}{x^4} = \frac{1}{x} $$ (direct substitution into the formula; \((x^2)^2 = x^4\))
  • 3. Integrate: \(\displaystyle v = \int \frac{1}{x}\,dx = \ln x\) (standard integral; drop the integration constant — we only need one particular \(v\))
  • 4. Build \(y_2 = v\,y_1 = (\ln x)\cdot x^2 = x^2 \ln x\) (multiply \(v\) back by \(y_1 = x^2\))
General solution

\(\displaystyle y = c_1\,x^2 + c_2\,x^2\ln x\)

The \(x^2 \ln x\) structure is the Euler-equation version of the "repeated root" pattern. Just as a repeated root in constant-coefficient ODEs gives \(e^{rx}(c_1 + c_2 x)\), here the Euler equation gives \(x^r(c_1 + c_2 \ln x)\). The logarithm is forced by the degeneracy.
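A quick mechanical check that \(x^2 \ln x\) really satisfies the original (non-standard-form) equation, sketched in sympy:

```python
import sympy as sp

x = sp.Symbol('x', positive=True)
y2 = x**2 * sp.log(x)

# Residual of x^2 y'' - 3x y' + 4y at the candidate second solution
res = sp.simplify(x**2*sp.diff(y2, x, 2) - 3*x*sp.diff(y2, x) + 4*y2)
print(res)  # 0
```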

Ex 2 Legendre-type: \((1-x^2)y'' - 2xy' + 2y = 0\), given \(y_1 = x\)

Standard form: Divide by \((1-x^2)\):

$$ y'' - \frac{2x}{1-x^2}\,y' + \frac{2}{1-x^2}\,y = 0 \quad \Rightarrow \quad P(x) = -\frac{2x}{1-x^2} $$

Verify \(y_1 = x\): \(y_1' = 1,\; y_1'' = 0\). Then \((1-x^2)(0) - 2x(1) + 2(x) = 0\). ✓

  • 1. Compute \(-\int P\,dx = \int \dfrac{2x}{1-x^2}\,dx\). Let \(u = 1-x^2\), so \(du = -2x\,dx\). Then: $$ \int \frac{2x}{1-x^2}\,dx = -\int \frac{du}{u} = -\ln|1-x^2| $$ (u-substitution: \(2x\,dx = -du\), so the numerator flips sign)
  • 2. \(\displaystyle e^{-\int P\,dx} = e^{-\ln|1-x^2|} = \frac{1}{1-x^2}\) (\(e^{-\ln f} = 1/f\) by definition of the logarithm)
  • 3. \(\displaystyle v' = \frac{1/(1-x^2)}{x^2} = \frac{1}{x^2(1-x^2)}\) (divide by \(y_1^2 = x^2\))
  • 4. Partial fraction decomposition: write $$ \frac{1}{x^2(1-x^2)} = \frac{A}{x} + \frac{B}{x^2} + \frac{C}{1-x} + \frac{D}{1+x} $$ Multiply both sides by \(x^2(1-x^2)\) and match coefficients. Solving gives \(A=0,\; B=1,\; C=1/2,\; D=1/2\): $$ v' = \frac{1}{x^2} + \frac{1/2}{1-x} + \frac{1/2}{1+x} $$ (decompose into pieces whose antiderivatives we know)
  • 5. Integrate term by term: $$ v = -\frac{1}{x} - \frac{1}{2}\ln|1-x| + \frac{1}{2}\ln|1+x| = -\frac{1}{x} + \frac{1}{2}\ln\left|\frac{1+x}{1-x}\right| $$
  • 6. \(\displaystyle y_2 = v\,y_1 = x\!\left(-\frac{1}{x} + \frac{1}{2}\ln\left|\frac{1+x}{1-x}\right|\right) = -1 + \frac{x}{2}\ln\left|\frac{1+x}{1-x}\right|\) (multiply through by \(y_1 = x\); the constant \(-1\) is not a multiple of \(y_1\), so it stays as part of \(y_2\))
General solution

\(\displaystyle y = c_1\,x + c_2\!\left(\frac{x}{2}\ln\left|\frac{1+x}{1-x}\right| - 1\right)\)

The logarithmic term is what makes this genuinely different from the polynomial \(y_1 = x\). No amount of scaling of \(x\) could produce a logarithm.
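Both the partial-fraction step and the final answer can be double-checked in sympy, whose `apart` function performs the decomposition mechanically (sketch valid on \(-1 < x < 1\)):

```python
import sympy as sp

x = sp.Symbol('x')

# Mechanical partial-fraction decomposition of v'
print(sp.apart(1/(x**2*(1 - x**2)), x))

# Residual of (1-x^2) y'' - 2x y' + 2y at the derived second solution
y2 = -1 + (x/2)*sp.log((1 + x)/(1 - x))
res = sp.simplify((1 - x**2)*sp.diff(y2, x, 2) - 2*x*sp.diff(y2, x) + 2*y2)
print(res)  # 0
```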

Ex 3 Constant-coefficient repeated root: \(y'' - 4y' + 4y = 0\), given \(y_1 = e^{2x}\)

The characteristic equation is \(r^2 - 4r + 4 = (r-2)^2 = 0\), giving \(r = 2\) as a double root. We can derive the second solution from first principles — no guessing required.

Standard form is already achieved: \(P(x) = -4\).

  • 1. \(\displaystyle e^{-\int P\,dx} = e^{-\int(-4)\,dx} = e^{4x}\) (\(-P = +4\); integrating gives \(4x\))
  • 2. \(\displaystyle v' = \frac{e^{4x}}{(e^{2x})^2} = \frac{e^{4x}}{e^{4x}} = 1\) (the numerator and denominator are the exact same exponential — they cancel perfectly)
  • 3. \(\displaystyle v = \int 1\,dx = x\)
  • 4. \(\displaystyle y_2 = v\,y_1 = x \cdot e^{2x} = x\,e^{2x}\) (this is the famous second solution for repeated roots — now we know exactly why the factor \(x\) appears)
The deeper story behind \(x\,e^{rx}\)

This is where the \(xe^{rx}\) formula for repeated roots actually comes from. In every repeated-root case, the integrating factor \(e^{-\int P\,dx}\) will always cancel with \(y_1^2 = e^{2rx}\), leaving \(v' = 1\), giving \(v = x\), giving \(y_2 = x\,e^{rx}\). It's not magic — it's the algebraic structure of the equation forcing this outcome every time.
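The "every time" claim can be tested with a symbolic root \(r\) instead of a number. A sympy sketch under the stated setup (double root \(r\), hence \(P = -2r\)):

```python
import sympy as sp

x, r = sp.symbols('x r', real=True)

# Double root r means characteristic polynomial (s - r)^2 = s^2 - 2rs + r^2,
# so the standard-form coefficient is P = -2r
y1 = sp.exp(r*x)
mu = sp.exp(-sp.integrate(-2*r, x))   # e^{-∫P dx} = e^{2rx}
vprime = sp.simplify(mu / y1**2)      # e^{2rx} / e^{2rx}

print(vprime)  # 1  (so v = x and y2 = x*e^{rx}, for every r)
```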

§ 7

Patterns & Special Cases

Reduction of order always works. But certain equation types produce clean patterns worth recognizing.

| Equation type | Given \(y_1\) | \(v'\) simplifies to | Result \(y_2\) |
|---|---|---|---|
| Constant coeff., repeated root \(r\) | \(e^{rx}\) | \(1\) | \(x\,e^{rx}\) |
| Euler equation, repeated root \(m\) | \(x^m\) | \(1/x\) | \(x^m \ln x\) |
| Legendre \(P_1\) (degree 1) | \(x\) | \(\dfrac{1}{x^2(1-x^2)}\) | \(\tfrac{x}{2}\ln\left\lvert\tfrac{1+x}{1-x}\right\rvert - 1\) |
| Bessel order 0 | \(J_0(x)\) | \(\dfrac{1}{x\,J_0^2(x)}\) | \(Y_0(x)\) (Neumann function), up to normalization |

(For Bessel's equation of order 0 in standard form, \(P = 1/x\), so the integrating factor \(e^{-\int dx/x} = 1/x\) supplies the extra factor of \(1/x\) in \(v'\).)
Why do repeated roots always produce a factor of \(x\) or \(\ln x\)?

When a root is repeated, the two solution "slots" have collapsed onto each other. The second solution that should be "pointing in a different direction" has no other direction to go — it's stuck on the same exponential or power track as the first. The only way to restore independence is to introduce a slowly-growing factor that tilts the solution away: \(x\) for exponential families, \(\ln x\) for power families. These are precisely what the integral \(\int v'\,dx\) produces in the degenerate case, because \(v'\) collapses to a constant or \(1/x\) respectively. Degeneracy forces a logarithm (or linear factor) — every time.

When \(P(x) = 0\)

If the ODE has no \(y'\) term at all — i.e., \(P(x) = 0\) everywhere — the formula simplifies dramatically:

$$ v' = \frac{1}{y_1^2} \qquad \Rightarrow \qquad y_2 = y_1 \int \frac{dx}{y_1^2} $$

No integrating factor to compute. This is the cleanest form and shows up frequently in physics (e.g., Sturm–Liouville problems, Bessel's equation near the origin).
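A classic instance of this shortcut: \(y'' + y = 0\) with \(y_1 = \cos x\) gives \(y_2 = \cos x \int \sec^2 x \, dx = \cos x \tan x = \sin x\). The same computation in sympy:

```python
import sympy as sp

x = sp.Symbol('x')

# P = 0 shortcut: y2 = y1 * Integral(dx / y1^2), with y1 = cos(x)
y1 = sp.cos(x)
y2 = sp.simplify(y1 * sp.integrate(1/y1**2, x))
print(y2)  # sin(x)
```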

§ 8

Constant-Coefficient Explorer

[Interactive explorer: for \(ay'' + by' + cy = 0\), enter the coefficients; the tool computes the discriminant, identifies the root type, and applies reduction of order in real time, showing both solutions and the graph.]
§ 9

Full Summary

The method in one line

If \(y_1\) solves \(y'' + Py' + Qy = 0\), then \(\displaystyle y_2 = y_1 \int \frac{e^{-\int P\,dx}}{y_1^2}\,dx\) is a second, independent solution.

The 5-Step Recipe

1. Rewrite the ODE in standard form \(y'' + Py' + Qy = 0\) and read off \(P(x)\).
2. Set \(y_2 = v\,y_1\) and differentiate twice with the product rule.
3. Substitute into the ODE; the bare-\(v\) terms cancel because \(y_1\) is a solution.
4. Let \(w = v'\), giving a first-order separable ODE; solve for \(w\).
5. Integrate \(w\) to get \(v\), form \(y_2 = v\,y_1\), and discard any leftover multiple of \(y_1\).

When it works

Any homogeneous second-order linear ODE where you know one solution \(y_1\). No restrictions on the coefficient functions \(P\) and \(Q\).

Watch out for

Always divide to standard form first: the formula reads \(P\) from \(y'' + Py' + Qy = 0\), not from the raw equation. The integrals are taken on an interval where \(y_1 \neq 0\). Drop the integration constant when finding \(v\) — you only need one particular \(v\).