Ordinary Differential Equations

Cauchy–Euler Equations

What happens when an ODE has a built-in sense of scale — and why that forces you to think in power laws instead of exponentials.


§1 What makes Cauchy–Euler special?

The standard second-order Cauchy–Euler equation looks like this:

\[ ax^2 y'' + bxy' + cy = 0 \]

At first glance, it might just look like a random ODE with variable coefficients. But look more carefully at the pattern. The coefficient in front of \(y''\) is \(x^2\). The coefficient in front of \(y'\) is \(x^1\). The coefficient in front of \(y\) is \(x^0 = 1\). In every term, the power of \(x\) exactly matches how many derivatives you took. That is not a coincidence — it is the fingerprint of Cauchy–Euler.

Core Idea — The Matching Pattern

In a Cauchy–Euler equation, the \(k\)-th derivative is always multiplied by \(x^k\). This means differentiating and multiplying by \(x\) cancel each other out, degree by degree. That cancellation is what makes the equation solvable with a neat trick.

Why power functions are the right trial solution

For constant-coefficient ODEs, the trick is to guess \(y = e^{rx}\). Why? Because derivatives of \(e^{rx}\) stay proportional to \(e^{rx}\), so when you substitute into the equation, every term has the same exponential factor and you can cancel it out.

For Cauchy–Euler, the same logic applies — but the building block changes. Try \(y = x^m\) and watch what happens to the derivatives:

\(\displaystyle y = x^m\)
trial guess
\(\displaystyle y' = mx^{m-1}\)
one derivative drops the power by 1
\(\displaystyle y'' = m(m-1)x^{m-2}\)
two derivatives drop it by 2

Now multiply each derivative by the equation's coefficient:

\(\displaystyle x^2 \cdot y'' = x^2 \cdot m(m-1)x^{m-2} = m(m-1)\,x^m\)
x² cancels the x^{m−2}
\(\displaystyle x \cdot y' = x \cdot mx^{m-1} = m\,x^m\)
x cancels the x^{m−1}
\(\displaystyle 1 \cdot y = x^m\)
plain x^m stays x^m
The Payoff

Every single term becomes a constant times \(x^m\). The \(x\)-dependence factors right out, and you're left with a pure algebraic equation in \(m\). This is exactly what happened with \(e^{rx}\) in constant-coefficient equations — the ODE collapses into algebra.
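The collapse is easy to confirm numerically. A minimal sanity check in Python: compute \(ax^2y'' + bxy' + cy\) for \(y = x^m\) using the exact derivative formulas, divide by \(x^m\), and verify the result is the same constant at every \(x\). The coefficient values here are arbitrary samples, not from any particular example.

```python
import math

# For y = x**m, each term of a*x^2*y'' + b*x*y' + c*y is a constant
# times x**m, so dividing the whole expression by x**m must give a
# number that does not depend on x. Sample coefficients, chosen freely.
a, b, c, m = 2.0, -3.0, 5.0, 1.7

def lhs_over_xm(x):
    y   = x**m
    yp  = m * x**(m - 1)
    ypp = m * (m - 1) * x**(m - 2)
    return (a*x**2*ypp + b*x*yp + c*y) / x**m

vals = [lhs_over_xm(x) for x in (0.5, 1.0, 2.0, 7.3)]
expected = a*m*(m - 1) + b*m + c          # the auxiliary polynomial at m
assert all(abs(v - expected) < 1e-9 for v in vals)
```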

A geometric way to see it: scale symmetry

Constant-coefficient ODEs are symmetric under translation: if you shift \(x \to x + c\), the coefficients don't change. Exponentials are the natural functions for that kind of symmetry.

Cauchy–Euler ODEs are symmetric under scaling: if you replace \(x \to kx\), the equation transforms in a self-similar way. Power laws \(x^m\) are the natural functions for that kind of symmetry — multiply \(x\) by \(k\) and \(x^m\) just multiplies by \(k^m\), a constant factor.

Constant Coefficient

Shift-invariant. Exponentials \(e^{rx}\) are the eigenfunctions of translation.

Cauchy–Euler

Scale-invariant. Power laws \(x^m\) are the eigenfunctions of scaling.


§2 Deriving the auxiliary equation

Start with the general equation and substitute \(y = x^m\) all the way through, with no steps hidden.

\[ ax^2 y'' + bxy' + cy = 0 \]
\(\displaystyle y = x^m,\quad y' = mx^{m-1},\quad y'' = m(m-1)x^{m-2}\)
trial solution and its derivatives
\(\displaystyle ax^2\bigl(m(m-1)x^{m-2}\bigr) + bx\bigl(mx^{m-1}\bigr) + c\bigl(x^m\bigr) = 0\)
direct substitution
\(\displaystyle am(m-1)x^m + bmx^m + cx^m = 0\)
simplify each power of x
\(\displaystyle x^m\bigl(am(m-1) + bm + c\bigr) = 0\)
factor out x^m
\(\displaystyle am(m-1) + bm + c = 0\)
x^m ≠ 0 for x > 0, so drop it
\(\displaystyle am^2 - am + bm + c = 0\)
expand am(m−1)
\(\displaystyle am^2 + (b-a)m + c = 0\)
collect like terms — this is the auxiliary equation
The Auxiliary Equation

\[am^2 + (b-a)m + c = 0\] This is also sometimes written directly as \(am(m-1) + bm + c = 0\) — they are the same thing. Notice the middle coefficient is \((b-a)\), not just \(b\). This often trips people up.

Watch Out

The auxiliary equation for Cauchy–Euler is not \(am^2 + bm + c = 0\). You must expand \(am(m-1)\) to get the correct quadratic. The constant-coefficient characteristic equation is structurally different.
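Solving the auxiliary equation is just the quadratic formula applied to \(am^2 + (b-a)m + c = 0\). A small helper sketch in Python; `auxiliary_roots` is an illustrative name, not a library function, and it returns complex numbers so all three root cases come out of the same code path.

```python
import cmath

# Roots of the Cauchy–Euler auxiliary equation a*m^2 + (b-a)*m + c = 0.
# Note the (b - a) middle coefficient, exactly as derived above.
def auxiliary_roots(a, b, c):
    disc = (b - a)**2 - 4*a*c
    sq = cmath.sqrt(disc)          # cmath handles a negative discriminant
    return ((-(b - a) + sq) / (2*a), (-(b - a) - sq) / (2*a))

# x^2 y'' - 2x y' - 4y = 0  →  m^2 - 3m - 4 = 0  →  m = 4, -1
r1, r2 = auxiliary_roots(1, -2, -4)
assert {round(r1.real), round(r2.real)} == {4, -1}
```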


§3 The three cases — same trinity as before

The auxiliary equation is a quadratic in \(m\). Quadratics have exactly three root patterns, and each one leads to a fundamentally different shape for the general solution.

Case I

Two distinct real roots \(m_1 \neq m_2\). Pure power laws.

Case II

One repeated real root \(m_0\). Power law plus a logarithm.

Case III

Complex conjugate roots \(\alpha \pm i\beta\). Oscillation in \(\ln x\).

This is the same structure as constant-coefficient ODEs. The forms change, but the logic of three cases is identical. That parallel is worth holding onto.
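The three-way split is nothing more than a discriminant test on the auxiliary quadratic. A minimal classifier sketch (the function name is illustrative; the exact-zero comparison is fine for integer coefficients like the examples below, but real data would need a tolerance):

```python
# Classify the root case of a*m^2 + (b-a)*m + c = 0 by its discriminant.
def root_case(a, b, c):
    disc = (b - a)**2 - 4*a*c
    if disc > 0:
        return "distinct real"
    if disc == 0:
        return "repeated real"
    return "complex conjugate"

assert root_case(1, -2, -4) == "distinct real"     # x²y'' - 2xy' - 4y = 0
assert root_case(4, 8, 1)   == "repeated real"     # 4x²y'' + 8xy' + y = 0
assert root_case(1, 1, 1)   == "complex conjugate" # x²y'' + xy' + y = 0
```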


§4 Case I — Distinct real roots

If the auxiliary equation gives two different real roots \(m_1\) and \(m_2\), then two independent solutions are simply:

\[ y_1 = x^{m_1}, \qquad y_2 = x^{m_2} \]

They are linearly independent on \(x > 0\) because you can't write \(x^{m_1}\) as a constant multiple of \(x^{m_2}\) when \(m_1 \neq m_2\). The general solution follows directly from the superposition principle:

General Solution — Distinct Real Roots
\[ y = c_1 x^{m_1} + c_2 x^{m_2} \]
Intuition

Each power law \(x^m\) is a "mode" of the system with its own scaling behavior. One mode might decay like \(x^{-1}\) (shrinks as x grows), another might grow like \(x^4\). The full solution is a mixture of these two modes. The constants \(c_1, c_2\) set how much of each mode is present — and initial conditions determine those constants.


§5 Case II — Repeated real root

If the discriminant is zero, the auxiliary equation gives a single repeated root \(m = m_0\). Then \(y_1 = x^{m_0}\) is one solution, but the equation is second-order — we need two linearly independent solutions.

Where does the second one come from?

The "squeezed roots" intuition

Imagine starting with two distinct roots \(m_1\) and \(m_2\) and slowly letting them merge together: \(m_2 \to m_1\). When they were still distinct, the two solutions were \(x^{m_1}\) and \(x^{m_2}\). As they squeeze together, those two functions collapse into each other and we lose independence. Something new has to appear.

The thing that appears is the derivative with respect to the root parameter. Define:

\(\displaystyle f(m) = x^m\)
our one-parameter family of solutions
\(\displaystyle \frac{\partial}{\partial m}\,x^m = x^m \ln x\)
differentiate with respect to m, using x^m = e^{m ln x}
\(\displaystyle \text{At }m = m_0:\quad y_2 = x^{m_0}\ln x\)
the new independent direction at a repeated root
The Beautiful Structural Rhyme

For constant-coefficient ODEs, a repeated root \(r\) gives the second solution \(xe^{rx}\) — a factor of \(x\) appears. For Cauchy–Euler, a repeated root \(m_0\) gives the second solution \(x^{m_0}\ln x\) — a factor of \(\ln x\) appears. In both cases, something extra attaches to the original solution when the two modes collapse into one.

General Solution — Repeated Real Root
\[ y = c_1 x^{m_0} + c_2 x^{m_0}\ln x \]

Formal proof that \(x^{m_0}\ln x\) actually works

Let the auxiliary polynomial be \(p(m) = am(m-1) + bm + c\). A repeated root means \(p(m_0) = 0\) and \(p'(m_0) = 0\). We already know \(y_1 = x^{m_0}\) satisfies \(L[y_1] = 0\). Now verify \(y_2 = x^{m_0}\ln x\).

\(\displaystyle y_2 = x^{m_0}\ln x\)
the candidate second solution
\(\displaystyle y_2' = m_0\,x^{m_0-1}\ln x + x^{m_0-1}\)
product rule: (x^{m_0})' ln x + x^{m_0} (ln x)'
\(\displaystyle y_2' = x^{m_0-1}(m_0\ln x + 1)\)
factor out x^{m_0−1}
\(\displaystyle y_2'' = (m_0-1)x^{m_0-2}(m_0\ln x + 1) + x^{m_0-1}\cdot\frac{m_0}{x}\)
product rule on y_2' = x^{m_0−1}(m_0 ln x + 1)
\(\displaystyle y_2'' = x^{m_0-2}\bigl((m_0-1)(m_0\ln x + 1) + m_0\bigr)\)
factor out x^{m_0−2}
\(\displaystyle y_2'' = x^{m_0-2}\bigl(m_0(m_0-1)\ln x + (2m_0-1)\bigr)\)
expand (m_0−1)(m_0 ln x + 1) + m_0

Now substitute into \(L[y] = ax^2y'' + bxy' + cy\) and collect by terms with and without \(\ln x\):

\(\displaystyle ax^2 y_2'' = ax^{m_0}\bigl(m_0(m_0-1)\ln x + (2m_0-1)\bigr)\)
multiply x^2 into y_2''
\(\displaystyle bxy_2' = bx^{m_0}(m_0\ln x + 1)\)
multiply x into y_2'
\(\displaystyle cy_2 = cx^{m_0}\ln x\)
plain multiplication
\(\displaystyle L[y_2] = x^{m_0}\bigl[\underbrace{(am_0(m_0-1)+bm_0+c)}_{\ln x \text{ coeff}}\ln x + \underbrace{(a(2m_0-1)+b)}_{\text{const coeff}}\bigr]\)
add all three terms, group by ln x
\(\text{coefficient of }\ln x = p(m_0) = 0\)
because m_0 is a root of the auxiliary equation
\(\text{constant coefficient} = a(2m_0-1)+b = 2am_0 + (b-a) = p'(m_0) = 0\)
because m_0 is a repeated root, so p'(m_0) = 0
\(\displaystyle L[y_2] = x^{m_0}[0 \cdot \ln x + 0] = 0\quad\checkmark\)
both coefficients vanish — y_2 is a solution
Why Both Coefficients Vanish

The \(\ln x\) coefficient equals \(p(m_0)\), which is zero because \(m_0\) is a root. The constant coefficient equals \(p'(m_0)\), which is zero because \(m_0\) is a repeated root (a root of both \(p\) and its derivative). That is why the repeated-root condition is exactly what you need for \(x^{m_0}\ln x\) to work — not just any root, but a double root.
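If you would rather trust arithmetic than algebra, the proof can be spot-checked numerically. The sketch below picks an arbitrary \(a\) and double root \(m_0\), backs out \(b\) and \(c\) so that the auxiliary quadratic is exactly \(a(m - m_0)^2\), and confirms that \(y_2 = x^{m_0}\ln x\) makes the left-hand side vanish.

```python
import math

# A double root m0 of a*m^2 + (b-a)*m + c forces b = a*(1 - 2*m0) and
# c = a*m0**2, so the quadratic factors as a*(m - m0)^2. Sample values:
a, m0 = 3.0, 1.25
b, c = a*(1 - 2*m0), a*m0**2

def residual(x):
    L = math.log(x)
    y   = x**m0 * L                              # candidate y2
    yp  = x**(m0-1) * (m0*L + 1)                 # from the derivation above
    ypp = x**(m0-2) * (m0*(m0-1)*L + 2*m0 - 1)
    return a*x**2*ypp + b*x*yp + c*y

assert all(abs(residual(x)) < 1e-8 for x in (0.5, 1.0, 3.0))
```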


§6 Case III — Complex conjugate roots

If the auxiliary equation yields complex roots \(m = \alpha \pm i\beta\) with \(\beta \neq 0\), the two complex solutions can be converted into two real solutions using Euler's formula.

\(\displaystyle x^{\alpha + i\beta} = x^\alpha \cdot x^{i\beta}\)
split real and imaginary parts of the exponent
\(\displaystyle x^{i\beta} = e^{i\beta \ln x}\)
since x = e^{ln x}, so x^{iβ} = e^{iβ ln x}
\(\displaystyle e^{i\beta\ln x} = \cos(\beta\ln x) + i\sin(\beta\ln x)\)
Euler's formula with θ = β ln x
\(\displaystyle x^{\alpha+i\beta} = x^\alpha\bigl[\cos(\beta\ln x) + i\sin(\beta\ln x)\bigr]\)
combine — this is one complex solution
\(\displaystyle y_1 = x^\alpha\cos(\beta\ln x),\qquad y_2 = x^\alpha\sin(\beta\ln x)\)
take real and imaginary parts — both are real solutions
General Solution — Complex Roots \(\alpha \pm i\beta\)
\[ y = x^\alpha\bigl(c_1\cos(\beta\ln x) + c_2\sin(\beta\ln x)\bigr) \]
Oscillation in Log Scale

The argument of the trig functions is \(\beta\ln x\), not \(\beta x\). This means the solution doesn't repeat at fixed intervals of \(x\) — it repeats at fixed intervals of \(\ln x\). Every time \(x\) grows by a multiplicative factor of \(e\), the oscillation completes another cycle. This is what "periodicity in the log scale" means.

Compare to constant-coefficient complex roots: those give \(e^{\alpha x}\cos(\beta x)\), which oscillates at fixed intervals of \(x\). Cauchy–Euler instead gives \(x^\alpha\cos(\beta\ln x)\) — the same structure, but with \(x \leftrightarrow e^x\) throughout.

Constant Coeff Complex

\(e^{\alpha x}\cos(\beta x)\) — oscillates with a fixed period in \(x\).

Cauchy–Euler Complex

\(x^\alpha\cos(\beta\ln x)\) — oscillates with a fixed period in \(\ln x\).
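Both claims can be checked numerically: the identity \(x^{i\beta} = \cos(\beta\ln x) + i\sin(\beta\ln x)\), and that the real parts actually solve the ODE. The sketch below uses the equation \(x^2y'' + xy' + y = 0\), whose roots are \(\pm i\) (so \(\alpha = 0\), \(\beta = 1\)).

```python
import math

# Check 1: x**(i*beta) equals cos(beta*ln x) + i*sin(beta*ln x).
beta, x0 = 0.8, 2.5
z = x0 ** (1j * beta)   # Python supports complex exponents on floats
assert abs(z - complex(math.cos(beta*math.log(x0)),
                       math.sin(beta*math.log(x0)))) < 1e-12

# Check 2: y = cos(ln x) solves x^2 y'' + x y' + y = 0.
def residual(x):
    L = math.log(x)
    y   = math.cos(L)
    yp  = -math.sin(L) / x                     # chain rule: d(ln x)/dx = 1/x
    ypp = (-math.cos(L) + math.sin(L)) / x**2  # differentiate yp again
    return x**2*ypp + x*yp + y

assert all(abs(residual(t)) < 1e-12 for t in (0.5, 1.0, 3.0))
```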


§7 The hidden engine: \(t = \ln x\)

There is a deeper reason everything works out the way it does. The substitution \(t = \ln x\) (equivalently \(x = e^t\)) transforms any Cauchy–Euler equation into a constant-coefficient equation in \(t\). This is not just a trick — it reveals that the two equation types are actually the same equation written in different coordinate systems.

The Link

Cauchy–Euler is a constant-coefficient ODE in disguise. The disguise is the coordinate change \(t = \ln x\), which turns the multiplicative structure of \(x\) into the additive structure of \(t\).

Full derivation of the substitution

Let \(t = \ln x\), so \(x = e^t\). Write \(y(x) = Y(t)\).

\(\displaystyle \frac{dt}{dx} = \frac{1}{x}\)
derivative of ln x
\(\displaystyle y' = \frac{dY}{dt}\cdot\frac{dt}{dx} = \frac{1}{x}\,\dot Y\)
chain rule, dot notation for d/dt
\(\displaystyle y'' = \frac{d}{dx}\!\left(\frac{\dot Y}{x}\right) = \frac{\ddot Y}{x}\cdot\frac{1}{x} - \frac{\dot Y}{x^2}\)
product rule + chain rule applied again
\(\displaystyle y'' = \frac{\ddot Y - \dot Y}{x^2}\)
combine over x²
\(\displaystyle x^2 y'' = \ddot Y - \dot Y, \qquad xy' = \dot Y\)
the x² and x cancel perfectly — that's the miracle
\(\displaystyle a(\ddot Y - \dot Y) + b\dot Y + cY = 0\)
substitute into ax²y'' + bxy' + cy = 0
\(\displaystyle a\ddot Y + (b-a)\dot Y + cY = 0\)
constant-coefficient ODE in t — same auxiliary equation!

The characteristic equation of this constant-coefficient ODE in \(t\) is: \[ar^2 + (b-a)r + c = 0\] This is identical to the Cauchy–Euler auxiliary equation with \(r = m\). So the root analysis — real, repeated, complex — is the same in both frameworks. The solution forms look different only because \(e^{rt}\) in \(t\)-space becomes \(e^{r\ln x} = x^r\) back in \(x\)-space. The underlying algebra is the same.
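Two quick numeric checks of the correspondence, with an arbitrary sample root \(r\): first, that \(e^{rt}\) in \(t\)-space really is \(x^r\) in \(x\)-space; second, that the key cancellation \(x^2y'' = \ddot Y - \dot Y\) holds when tested on \(y = x^r\), for which both sides equal \((r^2 - r)x^r\).

```python
import math

r = 1.37   # any real root works here

# e^{r t} with t = ln x is exactly x^r.
for x in (0.5, 1.0, 2.0, 9.0):
    t = math.log(x)
    assert abs(math.exp(r*t) - x**r) < 1e-9

# x^2 y'' = Ÿ - Ẏ, tested on y = x^r (so Y(t) = e^{r t}):
# left side is r(r-1) x^r, right side is (r^2 - r) e^{r t}.
for x in (0.5, 2.0):
    t = math.log(x)
    assert abs(r*(r - 1)*x**r - (r*r - r)*math.exp(r*t)) < 1e-9
```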

Diagram: the Cauchy–Euler equation \(ax^2y'' + bxy' + cy = 0\) (trial \(x^m\)) corresponds to the constant-coefficient equation \(a\ddot Y + (b-a)\dot Y + cY = 0\) (trial \(e^{rt}\)) under the change of variables \(t = \ln x\), \(x = e^t\).

§8 Worked Examples

Six fully worked examples, one for each combination of root type and IVP / no IVP. Every step is shown.

Example 1

Distinct Real Roots — \(x^2y'' - 2xy' - 4y = 0\)

Identify \(a=1,\; b=-2,\; c=-4\). Assume \(y = x^m\).

\(\displaystyle x^2\cdot m(m-1)x^{m-2} - 2x\cdot mx^{m-1} - 4x^m = 0\)
substitute y = x^m
\(\displaystyle m(m-1)x^m - 2mx^m - 4x^m = 0\)
simplify powers of x
\(\displaystyle x^m\bigl(m(m-1) - 2m - 4\bigr) = 0\)
factor x^m
\(\displaystyle m^2 - m - 2m - 4 = 0\)
expand m(m−1), x^m ≠ 0
\(\displaystyle m^2 - 3m - 4 = 0\)
collect like terms
\(\displaystyle (m-4)(m+1) = 0\)
factor the quadratic
\(\displaystyle m_1 = 4,\qquad m_2 = -1\)
two distinct real roots
\[ y = c_1 x^4 + c_2 x^{-1} \]

One mode grows fast \((x^4)\), the other decays \((x^{-1})\). The general solution balances both.
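A numeric spot-check of the result: plug the general solution and its derivatives back into the ODE at a few points. The constants \(c_1, c_2\) are arbitrary, since any combination of the two modes must work.

```python
# Check that y = c1*x**4 + c2*x**(-1) solves x^2 y'' - 2x y' - 4y = 0.
c1, c2, m1, m2 = 1.3, -0.7, 4.0, -1.0

def residual(x):
    y   = c1*x**m1 + c2*x**m2
    yp  = c1*m1*x**(m1-1) + c2*m2*x**(m2-1)
    ypp = c1*m1*(m1-1)*x**(m1-2) + c2*m2*(m2-1)*x**(m2-2)
    return x**2*ypp - 2*x*yp - 4*y

assert all(abs(residual(x)) < 1e-8 for x in (0.5, 1.0, 2.0, 4.0))
```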

Example 2

Repeated Root — \(4x^2y'' + 8xy' + y = 0\)

Identify \(a=4,\; b=8,\; c=1\). Assume \(y = x^m\).

\(\displaystyle 4m(m-1)x^m + 8mx^m + x^m = 0\)
substitute and simplify
\(\displaystyle 4m(m-1) + 8m + 1 = 0\)
divide through by x^m
\(\displaystyle 4m^2 - 4m + 8m + 1 = 0\)
expand 4m(m−1)
\(\displaystyle 4m^2 + 4m + 1 = 0\)
collect terms
\(\displaystyle (2m+1)^2 = 0\)
recognize perfect square
\(\displaystyle m_0 = -\tfrac{1}{2}\quad\text{(repeated)}\)
double root
\[ y = c_1 x^{-1/2} + c_2 x^{-1/2}\ln x \]
Example 3

Complex Roots — \(x^2y'' + xy' + y = 0\)

Identify \(a=1,\; b=1,\; c=1\). Assume \(y = x^m\).

\(\displaystyle m(m-1)x^m + mx^m + x^m = 0\)
substitute and simplify
\(\displaystyle m(m-1) + m + 1 = 0\)
divide by x^m
\(\displaystyle m^2 - m + m + 1 = 0\)
expand m(m−1)
\(\displaystyle m^2 + 1 = 0\)
the −m and +m cancel
\(\displaystyle m = \pm\, i\quad\Rightarrow\quad \alpha = 0,\quad \beta = 1\)
complex conjugate roots
\[ y = c_1\cos(\ln x) + c_2\sin(\ln x) \]

Since \(\alpha = 0\), the \(x^\alpha\) factor is just 1. The oscillation in \(\ln x\) has no power-law envelope — it neither grows nor decays.

Example 4

Distinct Roots + IVP — \(x^2y'' - xy' - 3y = 0,\quad y(1)=2,\; y'(1)=5\)

\(\displaystyle m(m-1) - m - 3 = 0\)
auxiliary equation from a=1, b=−1, c=−3
\(\displaystyle m^2 - 2m - 3 = 0\)
expand and collect
\(\displaystyle (m-3)(m+1) = 0\)
factor
\(\displaystyle m = 3,\quad m = -1\)
distinct real roots
\(\displaystyle y = c_1 x^3 + c_2 x^{-1}\)
general solution
\(\displaystyle y' = 3c_1 x^2 - c_2 x^{-2}\)
differentiate general solution
\(\displaystyle y(1): \quad c_1 + c_2 = 2\)
apply first initial condition
\(\displaystyle y'(1): \quad 3c_1 - c_2 = 5\)
apply second initial condition
\(\displaystyle (c_1 + c_2) + (3c_1 - c_2) = 2 + 5\)
add the two equations
\(\displaystyle 4c_1 = 7\)
simplify left and right sides
\(\displaystyle c_1 = \tfrac{7}{4}\)
solve for c₁
\(\displaystyle c_2 = 2 - \tfrac{7}{4} = \tfrac{1}{4}\)
back-substitute into first equation
\[ y = \tfrac{7}{4}x^3 + \tfrac{1}{4}x^{-1} \]
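Verifying both the ODE and the initial conditions numerically, using the exact derivatives of the closed-form answer:

```python
# Spot-check Example 4: y = (7/4)x^3 + (1/4)/x against
# x^2 y'' - x y' - 3y = 0 with y(1) = 2 and y'(1) = 5.
def y(x):   return 7/4*x**3 + 1/4/x
def yp(x):  return 21/4*x**2 - 1/4/x**2
def ypp(x): return 21/2*x + 1/2/x**3

assert abs(y(1) - 2) < 1e-12 and abs(yp(1) - 5) < 1e-12
assert all(abs(x**2*ypp(x) - x*yp(x) - 3*y(x)) < 1e-9 for x in (0.5, 1.0, 2.0))
```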
Example 5

Repeated Root + IVP — \(x^2y'' - 3xy' + 4y = 0,\quad y(1)=1,\; y'(1)=0\)

\(\displaystyle m(m-1) - 3m + 4 = 0\)
auxiliary equation, a=1, b=−3, c=4
\(\displaystyle m^2 - 4m + 4 = 0\)
expand and collect
\(\displaystyle (m-2)^2 = 0\)
perfect square
\(\displaystyle m_0 = 2\quad\text{(repeated)}\)
double root
\(\displaystyle y = c_1 x^2 + c_2 x^2\ln x\)
general solution for repeated root
\(\displaystyle \frac{d}{dx}(x^2\ln x) = 2x\ln x + x\)
product rule on the second term
\(\displaystyle y' = 2c_1 x + c_2(2x\ln x + x)\)
differentiate general solution
\(\displaystyle y(1):\quad c_1\cdot 1 + c_2\cdot 1\cdot\ln 1 = 1\)
ln 1 = 0, so this reduces simply
\(\displaystyle c_1 = 1\)
first constant found
\(\displaystyle y'(1):\quad 2(1) + c_2(0 + 1) = 0\)
2x ln x at x=1 gives 0; x at x=1 gives 1
\(\displaystyle 2 + c_2 = 0\)
simplify
\(\displaystyle c_2 = -2\)
second constant found
\[ y = x^2 - 2x^2\ln x \]
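The same kind of spot-check for the repeated-root answer, where the log term makes hand-differentiation easy to fumble:

```python
import math

# Spot-check Example 5: y = x^2 - 2x^2 ln x against
# x^2 y'' - 3x y' + 4y = 0 with y(1) = 1 and y'(1) = 0.
def y(x):   return x**2 - 2*x**2*math.log(x)
def yp(x):  return -4*x*math.log(x)        # the 2x terms cancel exactly
def ypp(x): return -4*math.log(x) - 4

assert abs(y(1) - 1) < 1e-12 and abs(yp(1)) < 1e-12
assert all(abs(x**2*ypp(x) - 3*x*yp(x) + 4*y(x)) < 1e-9 for x in (0.5, 1.0, 2.0))
```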
Example 6

Complex Roots + IVP — \(x^2y'' - 2xy' + 5y = 0,\quad y(1)=3,\; y'(1)=1\)

\(\displaystyle m(m-1) - 2m + 5 = 0\)
auxiliary equation, a=1, b=−2, c=5
\(\displaystyle m^2 - 3m + 5 = 0\)
expand and collect
\(\displaystyle m = \frac{3 \pm \sqrt{9-20}}{2}\)
quadratic formula
\(\displaystyle m = \frac{3 \pm \sqrt{-11}}{2} = \frac{3}{2} \pm \frac{\sqrt{11}}{2}\,i\)
simplify — discriminant is negative → complex roots
\(\displaystyle \alpha = \tfrac{3}{2},\qquad \beta = \tfrac{\sqrt{11}}{2}\)
read off real and imaginary parts
\(\displaystyle y = x^{3/2}\!\left(c_1\cos\!\left(\tfrac{\sqrt{11}}{2}\ln x\right) + c_2\sin\!\left(\tfrac{\sqrt{11}}{2}\ln x\right)\right)\)
general solution for complex roots
\(\displaystyle \text{Let }k = \tfrac{\sqrt{11}}{2}\text{ for compactness}\)
shorthand to keep the work readable
\(\displaystyle y(1):\quad 1^{3/2}(c_1\cos 0 + c_2\sin 0) = 3\)
at x=1: ln 1 = 0, cos 0 = 1, sin 0 = 0
\(\displaystyle c_1 = 3\)
first constant
\(\displaystyle y' = \tfrac{3}{2}x^{1/2}(c_1\cos(k\ln x) + c_2\sin(k\ln x)) + x^{3/2}\cdot\frac{k}{x}(-c_1\sin(k\ln x) + c_2\cos(k\ln x))\)
product rule: u = x^{3/2}, v = (trig part)
\(\displaystyle y'(1):\quad \tfrac{3}{2}(c_1) + k(c_2) = 1\)
at x=1: sin 0 = 0, cos 0 = 1, x^{1/2} = 1
\(\displaystyle \tfrac{3}{2}(3) + kc_2 = 1\)
substitute c₁ = 3
\(\displaystyle \tfrac{9}{2} + kc_2 = 1\)
simplify left side
\(\displaystyle kc_2 = 1 - \tfrac{9}{2} = -\tfrac{7}{2}\)
isolate kc₂
\(\displaystyle c_2 = \frac{-7/2}{k} = \frac{-7/2}{\sqrt{11}/2} = \frac{-7}{\sqrt{11}}\)
divide through by k = √11/2
\[ y = x^{3/2}\!\left(3\cos\!\left(\tfrac{\sqrt{11}}{2}\ln x\right) - \frac{7}{\sqrt{11}}\sin\!\left(\tfrac{\sqrt{11}}{2}\ln x\right)\right) \]
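And the complex-root answer, where both the product rule and the chain rule appear at once. The helper `trig` returns the trig combination and its derivative with respect to \(u = k\ln x\), which keeps the derivative formulas compact.

```python
import math

# Spot-check Example 6: y = x^(3/2)*(3 cos(k ln x) - (7/√11) sin(k ln x))
# against x^2 y'' - 2x y' + 5y = 0 with y(1) = 3 and y'(1) = 1.
k  = math.sqrt(11)/2
A, B, al = 3.0, -7/math.sqrt(11), 1.5

def trig(x):
    # value and u-derivative of A cos u + B sin u at u = k ln x
    u = k*math.log(x)
    return A*math.cos(u) + B*math.sin(u), -A*math.sin(u) + B*math.cos(u)

def y(x):
    f, _ = trig(x)
    return x**al * f

def yp(x):
    f, g = trig(x)
    return x**(al-1) * (al*f + k*g)        # product + chain rule

def ypp(x):
    f, g = trig(x)
    return x**(al-2) * ((al-1)*(al*f + k*g) + k*(al*g - k*f))

assert abs(y(1) - 3) < 1e-12 and abs(yp(1) - 1) < 1e-12
assert all(abs(x**2*ypp(x) - 2*x*yp(x) + 5*y(x)) < 1e-9 for x in (0.5, 1.0, 2.0))
```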

§9 Interactive Explorer

Adjust the coefficients \(a\), \(b\), \(c\) and watch the roots, solution type, and graph update in real time. The plot shows \(y(x)\) for \(x \in (0.1, 5]\) with initial conditions \(y(1)=1, y'(1)=0\).



§10 Common Mistakes

Mistake 1 — Wrong auxiliary equation

The auxiliary equation for \(ax^2y'' + bxy' + cy = 0\) is \(am(m-1) + bm + c = 0\), which expands to \(am^2 + (b-a)m + c = 0\). The middle coefficient is \((b-a)\), not just \(b\). Writing \(am^2 + bm + c = 0\) is the constant-coefficient equation, not Cauchy–Euler.

Mistake 2 — Guessing \(e^{mx}\) instead of \(x^m\)

Exponentials are the trial solution for constant-coefficient ODEs. Cauchy–Euler needs power functions. The whole method is built on \(y = x^m\).

Mistake 3 — Forgetting the product rule on \(x^m\ln x\)

When differentiating \(y_2 = x^{m_0}\ln x\), you must use the product rule. The derivative is \(m_0 x^{m_0-1}\ln x + x^{m_0-1}\), not just \(m_0 x^{m_0-1}\ln x\).

Mistake 4 — Forgetting the chain rule on \(\cos(\beta\ln x)\)

Differentiating \(\cos(\beta\ln x)\) gives \(-\sin(\beta\ln x)\cdot\dfrac{\beta}{x}\). The \(\dfrac{1}{x}\) factor from differentiating \(\ln x\) must appear. Leaving it out gives the wrong derivative.

Mistake 5 — Applying the formula on \((-\infty, 0)\) without adjustment

Since \(\ln x\) only makes sense for \(x > 0\), the formulas as written apply on \((0, \infty)\). On \((-\infty, 0)\), replace \(x\) with \(|x|\) throughout, including in \(\ln|x|\).
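The \(|x|\) adjustment can be confirmed numerically. A sketch using the equation from Example 1 with the root \(m = -1\): the candidate \(y = |x|^m\) is checked on both sides of the origin, with the sign factor from \(\frac{d}{dx}|x|\) written out explicitly.

```python
import math

# y = |x|^m solves x^2 y'' - 2x y' - 4y = 0 on x < 0 as well as x > 0.
# Here m = -1 is one of the roots from Example 1.
m = -1.0

def residual(x):
    s = abs(x)
    y   = s**m
    yp  = m*s**(m-1) * math.copysign(1, x)   # d|x|/dx = sign(x)
    ypp = m*(m-1)*s**(m-2)                   # sign(x)^2 = 1
    return x**2*ypp - 2*x*yp - 4*y

assert all(abs(residual(x)) < 1e-9 for x in (-2.0, -0.5, 0.5, 2.0))
```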


§11 Summary Reference

The master equation is \(ax^2y'' + bxy' + cy = 0\). Try \(y = x^m\) and get the auxiliary equation:

\[ am^2 + (b-a)m + c = 0 \]
Root Type Condition General Solution
Distinct Real \(m_1 \neq m_2\), both real \(c_1 x^{m_1} + c_2 x^{m_2}\)
Repeated Real discriminant \(= 0\), root \(m_0\) \(c_1 x^{m_0} + c_2 x^{m_0}\ln x\)
Complex \(m = \alpha \pm i\beta\), \(\beta \neq 0\) \(x^\alpha\!\left(c_1\cos(\beta\ln x) + c_2\sin(\beta\ln x)\right)\)
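The whole table can be folded into one small dispatcher. A sketch in Python; `general_solution` is an illustrative name, and the returned strings are just human-readable templates of the three solution forms.

```python
import math

# Map (a, b, c) to the general-solution template from the table above.
def general_solution(a, b, c):
    p, q = (b - a)/a, c/a              # monic form m^2 + p*m + q = 0
    disc = p*p - 4*q
    half = (a - b)/(2*a)               # -p/2, written to avoid -0.0
    if disc > 0:
        s = math.sqrt(disc)/2
        return f"c1*x^{half + s:g} + c2*x^{half - s:g}"
    if disc == 0:
        return f"c1*x^{half:g} + c2*x^{half:g}*ln(x)"
    be = math.sqrt(-disc)/2
    return f"x^{half:g}*(c1*cos({be:g}*ln x) + c2*sin({be:g}*ln x))"

assert general_solution(1, -2, -4) == "c1*x^4 + c2*x^-1"            # Example 1
assert general_solution(4, 8, 1)   == "c1*x^-0.5 + c2*x^-0.5*ln(x)" # Example 2
assert general_solution(1, 1, 1)   == "x^0*(c1*cos(1*ln x) + c2*sin(1*ln x))"
```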
The Core Mental Model

Constant-coefficient ODEs have translation symmetry → exponentials \(e^{rx}\) are the natural solutions. Cauchy–Euler ODEs have scale symmetry → power laws \(x^m\) are the natural solutions. The substitution \(t = \ln x\) converts between the two worlds, and the auxiliary equations are identical.

Solution Building Block Correspondence

Every solution form in Cauchy–Euler is the direct image of a constant-coefficient solution form under \(x = e^t\):

\(e^{m_1 t}\) and \(e^{m_2 t}\) \(\;\longrightarrow\;\) \(x^{m_1}\) and \(x^{m_2}\)

\(e^{m_0 t}\) and \(te^{m_0 t}\) \(\;\longrightarrow\;\) \(x^{m_0}\) and \(x^{m_0}\ln x\)

\(e^{\alpha t}\cos(\beta t)\) and \(e^{\alpha t}\sin(\beta t)\) \(\;\longrightarrow\;\) \(x^\alpha\cos(\beta\ln x)\) and \(x^\alpha\sin(\beta\ln x)\)