You already hold one key. This page shows exactly how to use it to find the second — deriving every formula from scratch, with every algebraic step visible.
Think of a second-order ODE as a two-lock safe. Someone has already handed you the key for lock #1 — that's \(y_1\). You don't need to crack it from scratch. You just need lock #2.
A second-order linear homogeneous ODE always has two linearly independent solutions. The general solution is their combination:

$$y = c_1\,y_1 + c_2\,y_2$$
If someone hands you \(y_1\), you need a method to find \(y_2\) that is genuinely different — not just a scaled copy of \(y_1\).
We look for \(y_2\) in the form \(y_2 = v(x)\,y_1(x)\). We take the known solution and multiply it by an unknown function \(v(x)\). Think of \(v(x)\) as a dimmer dial attached to \(y_1\) — instead of a fixed brightness (a constant), it's free to vary as \(x\) changes. Plugging this form into the ODE will tell us exactly what shape \(v\) needs to have. The key payoff: a 2nd-order ODE in \(y\) becomes a 1st-order ODE in \(v'\). The order has been reduced by one — hence the name.
Start with any second-order linear homogeneous ODE. We write it in standard form, meaning the coefficient in front of \(y''\) is exactly 1:

$$y'' + P(x)\,y' + Q(x)\,y = 0$$
We are told that \(y_1\) is a solution. That means, if you plug \(y_1\) in, you get zero:

$$y_1'' + P(x)\,y_1' + Q(x)\,y_1 = 0$$
We propose that the second solution has the form:

$$y_2 = v(x)\,y_1(x)$$
Here \(v(x)\) is completely unknown. It's not a constant — it's a whole function we need to discover. Our goal is to find an equation that \(v\) must satisfy.
Since both \(v\) and \(y_1\) depend on \(x\), every differentiation requires the product rule: \(\tfrac{d}{dx}[fg] = f'g + fg'\).

$$y_2' = v'\,y_1 + v\,y_1'$$
Differentiate \(y_2'\) again. There are two terms, each a product, so the product rule applies to each:

$$y_2'' = v''\,y_1 + 2v'\,y_1' + v\,y_1''$$
Each of the two terms in \(y_2'\) is itself a product of two functions. Applying the product rule to each gives two new terms per term — but the middle terms \(v'\,y_1'\) add up into \(2v'\,y_1'\). That doubling is exactly why the coefficient is 2, not 1.
Now plug \(y_2, y_2', y_2''\) into \(y_2'' + P\,y_2' + Q\,y_2 = 0\). Write each piece out fully before grouping:

$$\bigl(v''\,y_1 + 2v'\,y_1' + v\,y_1''\bigr) + P\bigl(v'\,y_1 + v\,y_1'\bigr) + Q\,v\,y_1 = 0$$
Collect terms by the "flavor" of derivative of \(v\) they involve:

$$v''\,y_1 + v'\bigl(2y_1' + P\,y_1\bigr) + v\bigl(y_1'' + P\,y_1' + Q\,y_1\bigr) = 0$$
Look at the coefficient of \(v\). It's exactly \(y_1'' + P\,y_1' + Q\,y_1\) — precisely the ODE applied to \(y_1\). But \(y_1\) is a solution, so that whole bracket equals zero. The \(v\)-term disappears entirely.
What remains after the cancellation:

$$v''\,y_1 + v'\bigl(2y_1' + P\,y_1\bigr) = 0$$
This contains only \(v''\) and \(v'\) — no bare \(v\). That means if we rename \(w = v'\), we get a first-order ODE in \(w\). A second-order problem has been reduced to a first-order one.
Divide both sides by \(y_1\) (assuming \(y_1 \neq 0\)) and write \(w = v'\):

$$w' + \left(\frac{2y_1'}{y_1} + P\right)w = 0$$
Separate variables — move everything with \(w\) to the left side, divide by \(w\):

$$\frac{dw}{w} = -\left(\frac{2y_1'}{y_1} + P\right)dx$$
Integrate both sides. For the left side: \(\int dw/w = \ln|w|\). For the right side, note that \(\int \tfrac{2y_1'}{y_1}\,dx = 2\ln|y_1|\) (this is just the chain rule read backwards: the derivative of \(\ln|y_1|\) is \(y_1'/y_1\)):

$$\ln|w| = -2\ln|y_1| - \int P\,dx + C$$
Exponentiate both sides. Use \(e^{-2\ln|y_1|} = e^{\ln(y_1^{-2})} = y_1^{-2}\):

$$w = v' = C\,\frac{e^{-\int P\,dx}}{y_1^2}$$

(We may take \(C = 1\): any nonzero constant multiple of a solution is again a solution.)
Integrate one more time to find \(v\), then multiply by \(y_1\):

$$y_2 = y_1 \int \frac{e^{-\int P\,dx}}{y_1^2}\,dx$$
This is the closed-form formula for the second solution. Every symbol has a job: the exponential in the numerator is an integrating factor (it carries information about the "friction" coefficient \(P\)); the \(y_1^2\) in the denominator normalizes away the known solution; and the outer \(y_1\) builds \(y_2 = v \cdot y_1\).
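The formula can be sketched in a few lines of sympy (assumed available; `second_solution` is an illustrative helper, not a library function):

```python
import sympy as sp

x = sp.symbols('x')

def second_solution(P, y1):
    """Reduction-of-order formula y2 = y1 * integral of e^{-int P dx} / y1^2,
    for an ODE already in standard form y'' + P y' + Q y = 0."""
    v_prime = sp.exp(-sp.integrate(P, x)) / y1**2   # v' = e^{-int P dx} / y1^2
    return sp.simplify(y1 * sp.integrate(v_prime, x))

# Example: y'' - y = 0 (so P = 0) with known solution y1 = e^x.
y2 = second_solution(0, sp.exp(x))
print(y2)   # -exp(-x)/2, a constant multiple of exp(-x)

# Constant multiples are harmless: check y2 really solves y'' - y = 0.
assert sp.simplify(sp.diff(y2, x, 2) - y2) == 0
```

The \(-\tfrac{1}{2}\) factor comes from sympy's choice of antiderivative; since any constant multiple of a solution is a solution, \(e^{-x}\) is the clean second solution.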
This is the heart of the whole method, so it deserves its own explanation.
When we write \(y_2 = v \cdot y_1\) and substitute, the ODE is being applied to a product. The \(v\)-terms gather into exactly \((\text{ODE applied to }y_1) \cdot v\). Since \(y_1\) satisfies the ODE, that expression is zero — the equation has a "memory" of its own solution baked in, and it automatically kills any attempt to reproduce \(y_1\) in the substitution.
Here's an analogy: think of the ODE as a filter that kills its own known solutions. When you feed it \(v \cdot y_1\), the filter erases all the \(y_1\)-shaped content, and what's left (the equation in \(v'\) and \(v''\)) describes how the multiplier \(v\) must behave. You're seeing the "residue" after stripping out the known part.
The cancellation is algebraic and universal. It doesn't depend on what \(y_1\) is, what \(P(x)\) and \(Q(x)\) are, or whether the ODE has nice coefficients. As long as \(y_1\) genuinely solves the ODE, the \(v\)-terms will always vanish. The method never fails on homogeneous second-order linear ODEs.
There's a worry: what if the \(y_2\) we produce is just a multiple of \(y_1\) in disguise? If that happened, we'd have gained nothing. We need to prove this can't occur.
The Wronskian of two functions is defined as the determinant:
$$W(y_1, y_2) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix} = y_1\,y_2' - y_2\,y_1'$$
Two solutions are linearly independent if and only if \(W \neq 0\) on the interval. Think of the Wronskian as a test for "are these two curves really going in different directions?" — if \(W = 0\), one curve is just a scaled version of the other; if \(W \neq 0\), they are genuinely independent.
We have \(y_2 = v\,y_1\), so \(y_2' = v'\,y_1 + v\,y_1'\). Plug into the definition:

$$W = y_1\bigl(v'\,y_1 + v\,y_1'\bigr) - v\,y_1\,y_1' = v'\,y_1^2 + v\,y_1\,y_1' - v\,y_1\,y_1'$$
The two \(v\,y_1\,y_1'\) terms cancel, leaving \(W = v'\,y_1^2\). Now recall the formula we derived: \(v' = e^{-\int P\,dx}/y_1^2\). Substitute:

$$W = y_1^2 \cdot \frac{e^{-\int P\,dx}}{y_1^2} = e^{-\int P\,dx}$$
The Wronskian equals \(e^{-\int P\,dx}\). An exponential is never zero for any finite \(x\). Therefore \(y_1\) and \(y_2\) are always linearly independent — the method is guaranteed to produce a genuinely new solution, every single time.
We just showed that \(W(x) = C\,e^{-\int P(x)\,dx}\) for some constant \(C\). This means: the Wronskian of any two solutions either is identically zero on an interval, or it is never zero on that interval. It cannot gradually drift from nonzero to zero. This is Abel's theorem — and we didn't need to look it up. It appeared automatically from the derivation.
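Abel's statement is easy to check on a concrete case. A minimal sympy sketch (sympy assumed available) for \(y'' - y = 0\), where \(P = 0\) and the Wronskian must therefore be a nonzero constant:

```python
import sympy as sp

x = sp.symbols('x')

# For y'' - y = 0 the standard form has P = 0, so Abel's theorem predicts
# W = C * e^{-int 0 dx} = constant. Check with the solutions e^x and e^{-x}:
y1, y2 = sp.exp(x), sp.exp(-x)
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))
print(W)   # -2: a nonzero constant, so y1 and y2 are independent everywhere
```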
Let's apply everything to a concrete Euler-type equation:

$$2t^2\,y'' + t\,y' - 3y = 0, \qquad y_1 = t^{-1}$$
A second-order equation needs two independent solutions to write the complete general solution. We have \(y_1 = t^{-1}\). Our job is to find \(y_2\) — without guessing — using the method we just derived.
Write \(y_2 = v(t) \cdot t^{-1}\). Here \(v(t)\) is completely unknown. Multiplying by a fixed constant would just give another copy of \(y_1\); a varying \(v(t)\) is how we escape.
Start with the definition. Two functions of \(t\) multiplied together:

$$y_2 = v(t)\cdot t^{-1}$$
Every time we differentiate, both parts contribute — hence product rule throughout.
Product rule: \(\tfrac{d}{dt}[fg] = f'g + fg'\). Here \(f = v\) and \(g = t^{-1}\):

$$y_2' = v'\,t^{-1} + v\,(t^{-1})'$$
Evaluate the derivative of the power: \((t^{-1})' = -t^{-2}\):

$$y_2' = t^{-1}\,v' - t^{-2}\,v$$
Differentiate \(y_2' = t^{-1}v' - t^{-2}v\). Each term is a product, so apply product rule to each separately.
Term 1: \(\dfrac{d}{dt}(t^{-1}v') = t^{-1}\,v'' - t^{-2}\,v'\)
Term 2: \(\dfrac{d}{dt}(-t^{-2}v) = -t^{-2}\,v' + 2t^{-3}\,v\)
Add both results:

$$y_2'' = t^{-1}\,v'' - 2t^{-2}\,v' + 2t^{-3}\,v$$
Drop \(y_2, y_2', y_2''\) into \(2t^2\,y'' + t\,y' - 3y = 0\):

$$2t^2\bigl(t^{-1}v'' - 2t^{-2}v' + 2t^{-3}v\bigr) + t\bigl(t^{-1}v' - t^{-2}v\bigr) - 3\,t^{-1}v = 0$$
Distribute \(2t^2\) into each term in the first bracket:

$$2t\,v'' - 4v' + 4t^{-1}v$$
Distribute \(t\) into the second bracket:

$$v' - t^{-1}v$$
Third term: \(-3\cdot t^{-1}v = -3t^{-1}v\). Now collect by derivative type:
After collecting, here is what happens to each group:
| Group | Coefficient | Outcome |
|---|---|---|
| \(v''\) group | \(2t\,v''\) | stays |
| \(v'\) group | \(-4v' + v' = -3v'\) | stays |
| \(v\) group | \(4t^{-1}v - t^{-1}v - 3t^{-1}v = 0\) | cancels! |
The \(v\)-group cancels because \(t^{-1}\) was already a solution of the original ODE. The equation has a built-in memory of this fact — substituting a known solution always kills the zeroth-order terms. This is the mechanism that makes the method work.
We now have \(2t\,v'' - 3v' = 0\). Notice: it contains \(v''\) and \(v'\) but no bare \(v\). It is really an equation about the function \(v'\). Give it its own name:

$$w = v'$$
Imagine you have an equation involving only velocity and acceleration, but not position. You don't need position yet. Call velocity \(w\), solve for it, then integrate at the end to recover position. Here \(v\) plays the role of "position," \(v'\) is "velocity," and we only need velocity for now.
The equation becomes first-order in \(w\):

$$2t\,w' - 3w = 0$$
Rearrange to put \(w\)'s on the left and \(t\)'s on the right:

$$\frac{dw}{w} = \frac{3}{2t}\,dt$$
Left side: \(\int dw/w = \ln|w|\). Right side: \(\int \tfrac{3}{2t}\,dt = \tfrac{3}{2}\ln|t|\).

$$\ln|w| = \frac{3}{2}\ln|t| + C$$
Apply \(e^{(\cdot)}\) to both sides. Use \(e^{\frac{3}{2}\ln|t|} = |t|^{3/2} = t^{3/2}\) (for \(t > 0\)) and absorb \(e^C\) into a new constant \(C\):

$$w = C\,t^{3/2}$$
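As a cross-check on the separation of variables, sympy's `dsolve` (sympy assumed available) recovers the same \(w\):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
w = sp.Function('w')

# Solve the reduced first-order equation 2t w' - 3w = 0 directly.
sol = sp.dsolve(sp.Eq(2*t*w(t).diff(t) - 3*w(t), 0), w(t))
print(sol)   # w(t) = C1*t**(3/2), matching the hand computation
```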
Since \(w = v'\):

$$v' = C\,t^{3/2}$$
Integrate using the power rule \(\bigl(\int t^n\,dt = \tfrac{t^{n+1}}{n+1}\bigr)\):

$$v = \frac{2C}{5}\,t^{5/2}$$
Add the constant of integration and rename constants for cleanliness:

$$v = A\,t^{5/2} + K, \qquad A = \tfrac{2C}{5}$$
Recall \(y_2 = v \cdot t^{-1}\). Substitute \(v = At^{5/2} + K\):

$$y_2 = \bigl(A\,t^{5/2} + K\bigr)\,t^{-1} = A\,t^{3/2} + K\,t^{-1}$$
| Term | Value | Verdict |
|---|---|---|
| First term | \(A\,t^{3/2}\) | new — keep it |
| Second term | \(K\,t^{-1}\) | = old \(y_1\) — discard |
The \(Kt^{-1}\) term is just a copy of the original \(y_1\) hiding inside \(y_2\). If we kept it, the "second" solution would be \(At^{3/2} + Kt^{-1}\) — but the \(Kt^{-1}\) part is already covered by \(c_1 y_1\) in the general solution. Keeping it would be double-counting. Strip it out. What remains — \(t^{3/2}\) — traces a completely different curve that cannot be made from \(t^{-1}\) by any scaling.
Never skip verification — it's insurance against algebra errors. Substitute \(y_2 = t^{3/2}\), with \(y_2' = \tfrac{3}{2}t^{1/2}\) and \(y_2'' = \tfrac{3}{4}t^{-1/2}\), into the ODE:

$$2t^2\cdot\tfrac{3}{4}t^{-1/2} + t\cdot\tfrac{3}{2}t^{1/2} - 3t^{3/2} = \tfrac{3}{2}t^{3/2} + \tfrac{3}{2}t^{3/2} - 3t^{3/2} = 0$$
All three terms are proportional to \(t^{3/2}\) and cancel exactly to zero. The solution is confirmed.
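A quick numerical spot check, independent of any algebra: plain-Python central differences (the step size `h` is a tuning choice) drive the residual of \(2t^2y'' + ty' - 3y\) to roughly zero at \(y = t^{3/2}\):

```python
# Finite-difference spot check that y = t^(3/2) solves 2t^2 y'' + t y' - 3y = 0.
def residual(y, t, h=1e-4):
    d1 = (y(t + h) - y(t - h)) / (2 * h)            # central first difference
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2    # central second difference
    return 2 * t**2 * d2 + t * d1 - 3 * y(t)

y2 = lambda t: t ** 1.5
for t in (0.5, 1.0, 2.0, 5.0):
    print(t, residual(y2, t))   # all within floating-point noise of 0
```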
When \(C_2 = 0\), you only see \(y_1 = t^{-1}\) (dashed). As soon as \(C_2 \neq 0\), the second solution switches on and bends the curve in a genuinely new direction. That is linear independence made visible — two shapes so different that no scaling of one can produce the other.
Dashed = \(C_1 t^{-1}\) only · Solid amber = full solution \(C_1 t^{-1} + C_2 t^{3/2}\)
Now that we have the formula \(y_2 = y_1 \int \tfrac{e^{-\int P\,dx}}{y_1^2}\,dx\), we can apply it efficiently. The key preparatory step is always: rewrite in standard form first (divide so the leading coefficient of \(y''\) is 1) to read off \(P(x)\).
The equation is \(x^2\,y'' - 3x\,y' + 4y = 0\), with known solution \(y_1 = x^2\). Standard form: divide by \(x^2\):

$$y'' - \frac{3}{x}\,y' + \frac{4}{x^2}\,y = 0, \qquad P(x) = -\frac{3}{x}$$
Verify \(y_1 = x^2\): compute \(y_1' = 2x,\; y_1'' = 2\). Substitute into the original ODE:

$$x^2(2) - 3x(2x) + 4(x^2) = 2x^2 - 6x^2 + 4x^2 = 0 \;\checkmark$$

With \(P(x) = -3/x\) from the standard form, \(e^{-\int P\,dx} = e^{3\ln x} = x^3\), so \(v' = \dfrac{x^3}{(x^2)^2} = \dfrac{1}{x}\), giving \(v = \ln x\) and \(y_2 = x^2\ln x\).
\(\displaystyle y = c_1\,x^2 + c_2\,x^2\ln x\)
The \(x^2 \ln x\) structure is the Euler-equation version of the "repeated root" pattern. Just as a repeated root in constant-coefficient ODEs gives \(e^{rx}(c_1 + c_2 x)\), here the Euler equation gives \(x^r(c_1 + c_2 \ln x)\). The logarithm is forced by the degeneracy.
The equation is Legendre's equation \((1-x^2)\,y'' - 2x\,y' + 2y = 0\), with known solution \(y_1 = x\). Standard form: divide by \((1-x^2)\):

$$y'' - \frac{2x}{1-x^2}\,y' + \frac{2}{1-x^2}\,y = 0, \qquad P(x) = -\frac{2x}{1-x^2}$$
Verify \(y_1 = x\): \(y_1' = 1,\; y_1'' = 0\). Then \((1-x^2)(0) - 2x(1) + 2(x) = 0\). ✓
\(\displaystyle y = c_1\,x + c_2\!\left(\frac{x}{2}\ln\left|\frac{1+x}{1-x}\right| - 1\right)\)
The logarithmic term is what makes this genuinely different from the polynomial \(y_1 = x\). No amount of scaling of \(x\) could produce a logarithm.
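The logarithmic solution is easy to mistype, so a symbolic check is worthwhile. A sympy sketch (sympy assumed available) confirms it satisfies \((1-x^2)y'' - 2xy' + 2y = 0\):

```python
import sympy as sp

x = sp.symbols('x')

# Candidate second solution from reduction of order on Legendre's equation.
y2 = sp.Rational(1, 2) * x * sp.log((1 + x) / (1 - x)) - 1
residual = sp.simplify((1 - x**2) * sp.diff(y2, x, 2)
                       - 2 * x * sp.diff(y2, x) + 2 * y2)
print(residual)   # 0
```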
Consider \(y'' - 4y' + 4y = 0\). The characteristic equation is \(r^2 - 4r + 4 = (r-2)^2 = 0\), giving \(r = 2\) as a double root, so \(y_1 = e^{2x}\). We can derive the second solution from first principles — no guessing required.
Standard form is already achieved: \(P(x) = -4\). Then:

$$v' = \frac{e^{-\int(-4)\,dx}}{\bigl(e^{2x}\bigr)^2} = \frac{e^{4x}}{e^{4x}} = 1 \quad\Rightarrow\quad v = x \quad\Rightarrow\quad y_2 = x\,e^{2x}$$
This is where the \(xe^{rx}\) formula for repeated roots actually comes from. In every repeated-root case, the integrating factor \(e^{-\int P\,dx}\) will always cancel with \(y_1^2 = e^{2rx}\), leaving \(v' = 1\), giving \(v = x\), giving \(y_2 = x\,e^{rx}\). It's not magic — it's the algebraic structure of the equation forcing this outcome every time.
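A sympy sketch (sympy assumed available) makes the cancellation concrete for \(y'' - 4y' + 4y = 0\): the integrating factor cancels \(y_1^2\) exactly, forcing \(v' = 1\):

```python
import sympy as sp

x = sp.symbols('x')

# Repeated root r = 2 of y'' - 4y' + 4y = 0: y1 = e^{2x}, P = -4.
r = 2
y1 = sp.exp(r * x)
v_prime = sp.simplify(sp.exp(-sp.integrate(-4, x)) / y1**2)
print(v_prime)                       # 1, so v = x
y2 = sp.integrate(v_prime, x) * y1
print(y2)                            # x*exp(2*x)
```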
Reduction of order always works. But certain equation types produce clean patterns worth recognizing.
| Equation type | Given \(y_1\) | \(v'\) simplifies to | Result \(y_2\) |
|---|---|---|---|
| Constant coeff., repeated root \(r\) | \(e^{rx}\) | \(1\) | \(x\,e^{rx}\) |
| Euler equation, repeated root \(m\) | \(x^m\) | \(1/x\) | \(x^m \ln x\) |
| Legendre \(P_1\) (degree 1) | \(x\) | \(\dfrac{1}{x^2(1-x^2)}\) | \(\tfrac{x}{2}\ln\left|\tfrac{1+x}{1-x}\right| - 1\) |
| Bessel order 0 | \(J_0(x)\) | \(\dfrac{1}{J_0^2(x)}\) | \(Y_0(x)\) (Neumann function) |
When a root is repeated, the two solution "slots" have collapsed onto each other. The second solution that should be "pointing in a different direction" has no other direction to go — it's stuck on the same exponential or power track as the first. The only way to restore independence is to introduce a slowly-growing factor that tilts the solution away: \(x\) for exponential families, \(\ln x\) for power families. These are precisely what the integral \(\int v'\,dx\) produces in the degenerate case, because \(v'\) collapses to a constant or \(1/x\) respectively. Degeneracy forces a logarithm (or linear factor) — every time.
If the ODE has no \(y'\) term at all — i.e., \(P(x) = 0\) everywhere — the formula simplifies dramatically:

$$y_2 = y_1 \int \frac{dx}{y_1^2}$$
No integrating factor to compute. This is the cleanest form and shows up frequently in physics (e.g., Sturm–Liouville problems, Bessel's equation near the origin).
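For instance, \(y'' + y = 0\) has \(P = 0\); with \(y_1 = \cos x\), the simplified formula hands back \(\sin x\) directly (a sympy sketch, sympy assumed available):

```python
import sympy as sp

x = sp.symbols('x')

# With no y' term, y2 = y1 * integral of dx / y1^2.
# Take y'' + y = 0 with y1 = cos(x): the integral is tan(x).
y1 = sp.cos(x)
y2 = sp.simplify(y1 * sp.integrate(1 / y1**2, x))
print(y2)   # sin(x)
```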
For \(ay'' + by' + cy = 0\), enter the coefficients. The explorer computes the discriminant, identifies the root type, and applies reduction of order in real time — showing you both solutions and the graph.
If \(y_1\) solves \(y'' + Py' + Qy = 0\), then \(\displaystyle y_2 = y_1 \int \frac{e^{-\int P\,dx}}{y_1^2}\,dx\) is a second, independent solution.
Any homogeneous second-order linear ODE where you know one solution \(y_1\). No restrictions on the coefficient functions \(P\) and \(Q\).
Always divide to standard form first. Drop the integration constant when finding \(v\) — you only need one particular \(v\).