Cauchy–Euler Equations
What happens when an ODE has a built-in sense of scale — and why that forces you to think in power laws instead of exponentials.
§1 What makes Cauchy–Euler special?
The standard second-order Cauchy–Euler equation looks like this: \[ax^2y'' + bxy' + cy = 0, \qquad a \neq 0.\]
At first glance, it might just look like a random ODE with variable coefficients. But look more carefully at the pattern. The coefficient in front of \(y''\) is \(x^2\). The coefficient in front of \(y'\) is \(x^1\). The coefficient in front of \(y\) is \(x^0 = 1\). In every term, the power of \(x\) exactly matches how many derivatives you took. That is not a coincidence — it is the fingerprint of Cauchy–Euler.
In a Cauchy–Euler equation, the \(k\)-th derivative is always multiplied by \(x^k\). This means differentiating and multiplying by \(x\) cancel each other out, degree by degree. That cancellation is what makes the equation solvable with a neat trick.
Why power functions are the right trial solution
For constant-coefficient ODEs, the trick is to guess \(y = e^{rx}\). Why? Because derivatives of \(e^{rx}\) stay proportional to \(e^{rx}\), so when you substitute into the equation, every term has the same exponential factor and you can cancel it out.
For Cauchy–Euler, the same logic applies — but the building block changes. Try \(y = x^m\) and watch what happens to the derivatives: \[y = x^m, \qquad y' = mx^{m-1}, \qquad y'' = m(m-1)x^{m-2}.\]
Now multiply each derivative by the equation's coefficient: \[ax^2 \cdot m(m-1)x^{m-2} = am(m-1)\,x^m, \qquad bx \cdot mx^{m-1} = bm\,x^m, \qquad c \cdot x^m = c\,x^m.\]
Every single term becomes a constant times \(x^m\). The \(x\)-dependence factors right out, and you're left with a pure algebraic equation in \(m\). This is exactly what happened with \(e^{rx}\) in constant-coefficient equations — the ODE collapses into algebra.
A geometric way to see it: scale symmetry
Constant-coefficient ODEs are symmetric under translation: if you shift \(x \to x + c\), the coefficients don't change. Exponentials are the natural functions for that kind of symmetry.
Cauchy–Euler ODEs are symmetric under scaling: if you replace \(x \to kx\), the equation transforms in a self-similar way. Power laws \(x^m\) are the natural functions for that kind of symmetry — multiply \(x\) by \(k\) and \(x^m\) just multiplies by \(k^m\), a constant factor.
Shift-invariant. Exponentials \(e^{rx}\) are the eigenfunctions of translation.
Scale-invariant. Power laws \(x^m\) are the eigenfunctions of scaling.
§2 Deriving the auxiliary equation
Start with the general equation and substitute \(y = x^m\) all the way through, with no steps hidden: \[ax^2\,m(m-1)x^{m-2} + bx\,mx^{m-1} + cx^m = 0 \;\Longrightarrow\; \left[am(m-1) + bm + c\right]x^m = 0.\] Since \(x^m \neq 0\) on \(x > 0\), the bracket must vanish. Expanding \(am(m-1) = am^2 - am\) gives the auxiliary equation:
\[am^2 + (b-a)m + c = 0\] This is also sometimes written directly as \(am(m-1) + bm + c = 0\) — they are the same thing. Notice the middle coefficient is \((b-a)\), not just \(b\). This trips people up often.
The auxiliary equation for Cauchy–Euler is not \(am^2 + bm + c = 0\). You must expand \(am(m-1)\) to get the correct quadratic. The constant-coefficient characteristic equation is structurally different.
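As a quick numerical check on that \((b-a)\) middle coefficient, the expanded quadratic can be solved directly. A minimal sketch in Python (the function name is mine, not from any library):

```python
import cmath

def cauchy_euler_roots(a, b, c):
    """Roots of a*m*(m-1) + b*m + c = 0, expanded to
    a*m^2 + (b - a)*m + c = 0 -- note the (b - a) middle coefficient."""
    disc = (b - a) ** 2 - 4 * a * c
    root = cmath.sqrt(disc)  # cmath also handles the complex-root case
    return ((a - b + root) / (2 * a), (a - b - root) / (2 * a))

# x^2 y'' - 2x y' - 4y = 0  ->  m^2 - 3m - 4 = 0  ->  m = 4 and m = -1
m1, m2 = cauchy_euler_roots(1, -2, -4)
print(m1, m2)  # (4+0j) (-1+0j)
```

Using the wrong quadratic \(am^2 + bm + c\) here would shift both roots, which is exactly the mistake the warning above is about.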
§3 The three cases — same trinity as before
The auxiliary equation is a quadratic in \(m\). Quadratics have exactly three root patterns, and each one leads to a fundamentally different shape for the general solution.
Two distinct real roots \(m_1 \neq m_2\). Pure power laws.
One repeated real root \(m_0\). Power law plus a logarithm.
Complex conjugate roots \(\alpha \pm i\beta\). Oscillation in \(\ln x\).
This is the same structure as constant-coefficient ODEs. The forms change, but the logic of three cases is identical. That parallel is worth holding onto.
§4 Case I — Distinct real roots
If the auxiliary equation gives two different real roots \(m_1\) and \(m_2\), then two independent solutions are simply: \[y_1 = x^{m_1}, \qquad y_2 = x^{m_2}.\]
They are linearly independent on \(x > 0\) because you can't write \(x^{m_1}\) as a constant multiple of \(x^{m_2}\) when \(m_1 \neq m_2\). The general solution follows directly from the superposition principle: \[y = c_1 x^{m_1} + c_2 x^{m_2}.\]
Each power law \(x^m\) is a "mode" of the system with its own scaling behavior. One mode might decay like \(x^{-1}\) (shrinks as \(x\) grows), another might grow like \(x^4\). The full solution is a mixture of these two modes. The constants \(c_1, c_2\) set how much of each mode is present — and initial conditions determine those constants.
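One way to make the "modes" concrete is to plug each power law into the left-hand side with finite differences and confirm the residual is essentially zero. A sketch (the helper name `residual` is mine):

```python
def residual(y, x, a, b, c, h=1e-4):
    """Central-difference evaluation of a*x^2*y'' + b*x*y' + c*y at x."""
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return a * x ** 2 * d2 + b * x * d1 + c * y(x)

# Modes of x^2 y'' - 2x y' - 4y = 0: one grows like x^4, one decays like 1/x.
print(abs(residual(lambda x: x ** 4, 2.0, 1, -2, -4)))  # ~0
print(abs(residual(lambda x: 1 / x, 2.0, 1, -2, -4)))   # ~0
```

A function that is not a mode, say \(x^2\), leaves a residual that is visibly nonzero, which is a useful sanity check on the method itself.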
§5 Case II — Repeated real root
If the discriminant is zero, the auxiliary equation gives a single repeated root \(m = m_0\). Then \(y_1 = x^{m_0}\) is one solution, but the equation is second-order — we need two linearly independent solutions.
Where does the second one come from?
The "squeezed roots" intuition
Imagine starting with two distinct roots \(m_1\) and \(m_2\) and slowly letting them merge together: \(m_2 \to m_1\). When they were still distinct, the two solutions were \(x^{m_1}\) and \(x^{m_2}\). As they squeeze together, those two functions collapse into each other and we lose independence. Something new has to appear.
The thing that appears is the derivative with respect to the root parameter. Define: \[y_2 = \lim_{m_2 \to m_1} \frac{x^{m_2} - x^{m_1}}{m_2 - m_1} = \left.\frac{\partial}{\partial m}\, x^m \right|_{m = m_1} = x^{m_1}\ln x.\]
For constant-coefficient ODEs, a repeated root \(r\) gives the second solution \(xe^{rx}\) — a factor of \(x\) appears. For Cauchy–Euler, a repeated root \(m_0\) gives the second solution \(x^{m_0}\ln x\) — a factor of \(\ln x\) appears. In both cases, something extra attaches to the original solution when the two modes collapse into one.
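The squeezed-roots limit can also be watched numerically: as \(m_2 \to m_1\), the normalized difference of the two power laws approaches \(x^{m_1}\ln x\). A small sketch, using \(m_1 = 2\) and \(x = 3\) as arbitrary test values:

```python
import math

m1, x = 2.0, 3.0
target = x ** m1 * math.log(x)  # the claimed limit: x^{m1} * ln x

for eps in (1e-1, 1e-3, 1e-5):
    m2 = m1 + eps
    quotient = (x ** m2 - x ** m1) / (m2 - m1)
    print(eps, abs(quotient - target))  # the error shrinks with eps
```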
Formal proof that \(x^{m_0}\ln x\) actually works
Let the auxiliary polynomial be \(p(m) = am(m-1) + bm + c\). A repeated root means \(p(m_0) = 0\) and \(p'(m_0) = 0\). We already know \(y_1 = x^{m_0}\) satisfies \(L[y_1] = 0\). Now verify \(y_2 = x^{m_0}\ln x\). The product rule gives \[y_2' = m_0 x^{m_0-1}\ln x + x^{m_0-1}, \qquad y_2'' = m_0(m_0-1)x^{m_0-2}\ln x + (2m_0-1)x^{m_0-2}.\]
Now substitute into \(L[y] = ax^2y'' + bxy' + cy\) and collect by terms with and without \(\ln x\): \[L[y_2] = \underbrace{\left[am_0(m_0-1) + bm_0 + c\right]}_{p(m_0)}\, x^{m_0}\ln x \;+\; \underbrace{\left[a(2m_0-1) + b\right]}_{p'(m_0)}\, x^{m_0}.\]
The \(\ln x\) coefficient equals \(p(m_0)\), which is zero because \(m_0\) is a root. The constant coefficient equals \(p'(m_0)\), which is zero because \(m_0\) is a repeated root (a root of both \(p\) and its derivative). That is why the repeated-root condition is exactly what you need for \(x^{m_0}\ln x\) to work — not just any root, but a double root.
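The same finite-difference check used for Case I confirms this numerically. Taking \(4x^2y'' + 8xy' + y = 0\), whose auxiliary equation \((2m+1)^2 = 0\) has the double root \(m_0 = -\tfrac{1}{2}\), a sketch (helper name `lhs` is mine):

```python
import math

def lhs(y, x, a, b, c, h=1e-4):
    """Central-difference value of a*x^2*y'' + b*x*y' + c*y at x."""
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return a * x ** 2 * d2 + b * x * d1 + c * y(x)

# 4x^2 y'' + 8x y' + y = 0 has the double root m0 = -1/2.
y1 = lambda x: x ** -0.5                 # the obvious solution x^{m0}
y2 = lambda x: x ** -0.5 * math.log(x)   # the log-augmented second solution
print(abs(lhs(y1, 3.0, 4, 8, 1)), abs(lhs(y2, 3.0, 4, 8, 1)))  # both ~0
```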
§6 Case III — Complex conjugate roots
If the auxiliary equation yields complex roots \(m = \alpha \pm i\beta\) with \(\beta \neq 0\), the two complex solutions can be converted into two real solutions using Euler's formula: \[x^{\alpha \pm i\beta} = x^\alpha e^{\pm i\beta\ln x} = x^\alpha\left(\cos(\beta\ln x) \pm i\sin(\beta\ln x)\right),\] and taking real and imaginary parts gives the general solution \[y = x^\alpha\left(c_1\cos(\beta\ln x) + c_2\sin(\beta\ln x)\right).\]
The argument of the trig functions is \(\beta\ln x\), not \(\beta x\). This means the solution doesn't repeat at fixed intervals of \(x\) — it repeats at fixed intervals of \(\ln x\). Every time \(x\) grows by a multiplicative factor of \(e\), the oscillation completes another cycle. This is what "periodicity in the log scale" means.
Compare to constant-coefficient complex roots: those give \(e^{\alpha x}\cos(\beta x)\), which oscillates at fixed intervals of \(x\). Cauchy–Euler instead gives \(x^\alpha\cos(\beta\ln x)\) — the same structure, carried over by the substitution \(x = e^t\) (equivalently \(t = \ln x\)).
\(e^{\alpha x}\cos(\beta x)\) — oscillates with a fixed period in \(x\).
\(x^\alpha\cos(\beta\ln x)\) — oscillates with a fixed period in \(\ln x\).
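"Fixed period in \(\ln x\)" has a concrete numerical meaning: the zeros of \(\cos(\beta\ln x)\) are equally spaced in \(\ln x\), hence geometrically spaced in \(x\), so consecutive zeros have a constant ratio. A sketch with \(\beta = 1\):

```python
import math

beta = 1.0
# cos(beta * ln x) = 0 when ln x = (k + 1/2) * pi / beta
zeros = [math.exp((k + 0.5) * math.pi / beta) for k in range(5)]
ratios = [zeros[k + 1] / zeros[k] for k in range(4)]
print(ratios)  # every consecutive ratio equals e^pi: a multiplicative period
```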
§7 The hidden engine: \(t = \ln x\)
There is a deeper reason everything works out the way it does. The substitution \(t = \ln x\) (equivalently \(x = e^t\)) transforms any Cauchy–Euler equation into a constant-coefficient equation in \(t\). This is not just a trick — it reveals that the two equation types are actually the same equation written in different coordinate systems.
Cauchy–Euler is a constant-coefficient ODE in disguise. The disguise is the coordinate change \(t = \ln x\), which turns the multiplicative structure of \(x\) into the additive structure of \(t\).
Full derivation of the substitution
Let \(t = \ln x\), so \(x = e^t\). Write \(y(x) = Y(t)\). By the chain rule, \[\frac{dy}{dx} = \frac{dY}{dt}\frac{dt}{dx} = \frac{1}{x}\frac{dY}{dt} \;\Longrightarrow\; xy' = \frac{dY}{dt},\] and differentiating once more, \[x^2y'' = \frac{d^2Y}{dt^2} - \frac{dY}{dt}.\] Substituting into \(ax^2y'' + bxy' + cy = 0\) gives \[a\frac{d^2Y}{dt^2} + (b-a)\frac{dY}{dt} + cY = 0.\]
The characteristic equation of this constant-coefficient ODE in \(t\) is: \[ar^2 + (b-a)r + c = 0\] This is identical to the Cauchy–Euler auxiliary equation with \(r = m\). So the root analysis — real, repeated, complex — is the same in both frameworks. The solution forms look different only because \(e^{rt}\) in \(t\)-space becomes \(e^{r\ln x} = x^r\) back in \(x\)-space. The underlying algebra is the same.
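The claim can be spot-checked numerically: define \(Y(t) = y(e^t)\) for a known Cauchy–Euler solution and confirm it satisfies \(aY'' + (b-a)Y' + cY = 0\). A sketch using \(y = x^4\), a solution of \(x^2y'' - 2xy' - 4y = 0\), with finite differences at an arbitrary point \(t = 0.7\):

```python
import math

a, b, c = 1, -2, -4
y = lambda x: x ** 4           # a solution of x^2 y'' - 2x y' - 4y = 0
Y = lambda t: y(math.exp(t))   # the same function in t = ln x coordinates

h, t = 1e-4, 0.7
d1 = (Y(t + h) - Y(t - h)) / (2 * h)
d2 = (Y(t + h) - 2 * Y(t) + Y(t - h)) / h ** 2
print(abs(a * d2 + (b - a) * d1 + c * Y(t)))  # ~0: constant-coefficient in t
```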
§8 Worked Examples
Six fully worked examples, one for each combination of root type and IVP / no IVP. Every step is shown.
Distinct Real Roots — \(x^2y'' - 2xy' - 4y = 0\)
Identify \(a=1,\; b=-2,\; c=-4\). Assume \(y = x^m\). The auxiliary equation is \[m(m-1) - 2m - 4 = m^2 - 3m - 4 = (m-4)(m+1) = 0,\] so \(m_1 = 4\) and \(m_2 = -1\), giving \[y = c_1 x^4 + c_2 x^{-1}.\]
One mode grows fast \((x^4)\), the other decays \((x^{-1})\). The general solution balances both.
Repeated Root — \(4x^2y'' + 8xy' + y = 0\)
Identify \(a=4,\; b=8,\; c=1\). Assume \(y = x^m\). The auxiliary equation is \[4m(m-1) + 8m + 1 = 4m^2 + 4m + 1 = (2m+1)^2 = 0,\] a double root at \(m_0 = -\tfrac{1}{2}\), giving \[y = c_1 x^{-1/2} + c_2 x^{-1/2}\ln x.\]
Complex Roots — \(x^2y'' + xy' + y = 0\)
Identify \(a=1,\; b=1,\; c=1\). Assume \(y = x^m\). The auxiliary equation is \[m(m-1) + m + 1 = m^2 + 1 = 0,\] so \(m = \pm i\), i.e. \(\alpha = 0,\ \beta = 1\), giving \[y = c_1\cos(\ln x) + c_2\sin(\ln x).\]
Since \(\alpha = 0\), the \(x^\alpha\) factor is just 1. The oscillation in \(\ln x\) has no power-law envelope — it neither grows nor decays.
Distinct Roots + IVP — \(x^2y'' - xy' - 3y = 0,\quad y(1)=2,\; y'(1)=5\)
Identify \(a=1,\; b=-1,\; c=-3\). The auxiliary equation \(m(m-1) - m - 3 = m^2 - 2m - 3 = (m-3)(m+1) = 0\) gives \(m = 3, -1\), so \(y = c_1 x^3 + c_2 x^{-1}\). The conditions \(y(1) = c_1 + c_2 = 2\) and \(y'(1) = 3c_1 - c_2 = 5\) give \(c_1 = \tfrac{7}{4},\ c_2 = \tfrac{1}{4}\): \[y = \tfrac{7}{4}x^3 + \tfrac{1}{4}x^{-1}.\]
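The constants in this IVP come from a 2×2 linear system; a quick sketch solves it and checks both initial conditions (variable names are mine):

```python
# x^2 y'' - x y' - 3y = 0:  m(m-1) - m - 3 = (m-3)(m+1) = 0, so m = 3, -1.
# General solution y = c1*x^3 + c2/x with
#   y(1)  = c1 + c2   = 2
#   y'(1) = 3*c1 - c2 = 5
c1 = (2 + 5) / 4                    # adding the two equations: 4*c1 = 7
c2 = 2 - c1
y = lambda x: c1 * x ** 3 + c2 / x
dy = lambda x: 3 * c1 * x ** 2 - c2 / x ** 2
print(c1, c2, y(1.0), dy(1.0))      # 1.75 0.25 2.0 5.0
```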
Repeated Root + IVP — \(x^2y'' - 3xy' + 4y = 0,\quad y(1)=1,\; y'(1)=0\)
Identify \(a=1,\; b=-3,\; c=4\). The auxiliary equation \(m(m-1) - 3m + 4 = m^2 - 4m + 4 = (m-2)^2 = 0\) gives the double root \(m_0 = 2\), so \(y = c_1 x^2 + c_2 x^2\ln x\). Then \(y(1) = c_1 = 1\) and \(y'(1) = 2c_1 + c_2 = 0\) give \(c_2 = -2\): \[y = x^2 - 2x^2\ln x.\]
Complex Roots + IVP — \(x^2y'' - 2xy' + 5y = 0,\quad y(1)=3,\; y'(1)=1\)
Identify \(a=1,\; b=-2,\; c=5\). The auxiliary equation \(m(m-1) - 2m + 5 = m^2 - 3m + 5 = 0\) has roots \(m = \tfrac{3}{2} \pm \tfrac{\sqrt{11}}{2}i\), so \(\alpha = \tfrac{3}{2},\ \beta = \tfrac{\sqrt{11}}{2}\) and \(y = x^{3/2}\left(c_1\cos(\beta\ln x) + c_2\sin(\beta\ln x)\right)\). The conditions \(y(1) = c_1 = 3\) and \(y'(1) = \alpha c_1 + \beta c_2 = 1\) give \(c_2 = -\tfrac{7}{\sqrt{11}}\): \[y = x^{3/2}\left(3\cos\!\left(\tfrac{\sqrt{11}}{2}\ln x\right) - \tfrac{7}{\sqrt{11}}\sin\!\left(\tfrac{\sqrt{11}}{2}\ln x\right)\right).\]
§9 Interactive Explorer
Adjust the coefficients \(a\), \(b\), \(c\) and watch the roots, solution type, and graph update in real time. The plot shows \(y(x)\) for \(x \in (0.1, 5]\) with initial conditions \(y(1)=1, y'(1)=0\).
§10 Common Mistakes
The auxiliary equation for \(ax^2y'' + bxy' + cy = 0\) is \(am(m-1) + bm + c = 0\), which expands to \(am^2 + (b-a)m + c = 0\). The middle coefficient is \((b-a)\), not just \(b\). Writing \(am^2 + bm + c = 0\) is the constant-coefficient equation, not Cauchy–Euler.
Exponentials are the trial solution for constant-coefficient ODEs. Cauchy–Euler needs power functions. The whole method is built on \(y = x^m\).
When differentiating \(y_2 = x^{m_0}\ln x\), you must use the product rule. The derivative is \(m_0 x^{m_0-1}\ln x + x^{m_0-1}\), not just \(m_0 x^{m_0-1}\ln x\).
Differentiating \(\cos(\beta\ln x)\) gives \(-\sin(\beta\ln x)\cdot\dfrac{\beta}{x}\). The \(\dfrac{1}{x}\) factor from differentiating \(\ln x\) must appear. Leaving it out gives the wrong derivative.
Since \(\ln x\) only makes sense for \(x > 0\), the formulas as written apply on \((0, \infty)\). On \((-\infty, 0)\), replace \(x\) with \(|x|\) throughout, including in \(\ln|x|\).
§11 Summary Reference
The master equation is \(ax^2y'' + bxy' + cy = 0\). Try \(y = x^m\) and get the auxiliary equation: \[am(m-1) + bm + c = 0 \quad\Longleftrightarrow\quad am^2 + (b-a)m + c = 0.\]
| Root Type | Condition | General Solution |
|---|---|---|
| Distinct Real | \(m_1 \neq m_2\), both real | \(c_1 x^{m_1} + c_2 x^{m_2}\) |
| Repeated Real | discriminant \(= 0\), root \(m_0\) | \(c_1 x^{m_0} + c_2 x^{m_0}\ln x\) |
| Complex | \(m = \alpha \pm i\beta\), \(\beta \neq 0\) | \(x^\alpha\!\left(c_1\cos(\beta\ln x) + c_2\sin(\beta\ln x)\right)\) |
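The table maps directly onto the sign of the auxiliary discriminant \((b-a)^2 - 4ac\); a small classifier sketch (the function is illustrative, not from any library):

```python
def solution_form(a, b, c):
    """Name the Cauchy-Euler solution type from the sign of the
    auxiliary discriminant (b - a)^2 - 4ac."""
    disc = (b - a) ** 2 - 4 * a * c
    if disc > 0:
        return "distinct real"
    if disc == 0:
        return "repeated real"
    return "complex"

print(solution_form(1, -2, -4))  # distinct real
print(solution_form(4, 8, 1))    # repeated real
print(solution_form(1, 1, 1))    # complex
```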
Constant-coefficient ODEs have translation symmetry → exponentials \(e^{rx}\) are the natural solutions. Cauchy–Euler ODEs have scale symmetry → power laws \(x^m\) are the natural solutions. The substitution \(t = \ln x\) converts between the two worlds, and the auxiliary equations are identical.
Every solution form in Cauchy–Euler is the direct image of a constant-coefficient solution form under \(x = e^t\):
\(e^{m_1 t}\) and \(e^{m_2 t}\) \(\;\longrightarrow\;\) \(x^{m_1}\) and \(x^{m_2}\)
\(e^{m_0 t}\) and \(te^{m_0 t}\) \(\;\longrightarrow\;\) \(x^{m_0}\) and \(x^{m_0}\ln x\)
\(e^{\alpha t}\cos(\beta t)\) and \(e^{\alpha t}\sin(\beta t)\) \(\;\longrightarrow\;\) \(x^\alpha\cos(\beta\ln x)\) and \(x^\alpha\sin(\beta\ln x)\)