How do we know a solution exists? And if it does, is it the only one?
Foundation
Why Should We Even Ask?
Imagine you write down a differential equation — maybe you're modeling a rocket, a chemical reaction, a swinging pendulum. You plug in an initial condition and ask: "what happens next?"
Before doing any computation, two completely separate questions deserve an answer:
Question 1 — Existence
Does a solution exist at all? Could the equation be so weird or broken that no function satisfies it?
Question 2 — Uniqueness
If a solution exists, is it the only one? Could there be two different futures starting from the same present?
For linear equations, the answers came for free. But for nonlinear equations — things like \(y' = y^2\) or \(y' = \sqrt{y}\) — the story is trickier. Solutions can blow up to infinity in finite time. Multiple solutions can branch from the same point. Things get weird.
The Big Idea
We need theorems — guarantees — that tell us under what conditions we can trust that a solution exists and is unique. These theorems are like a safety check before you run your calculation.
The function \(f(t, y)\) is the rule — it says "at position \(y\) and time \(t\), the rate of change is \(f(t,y)\)." The initial condition \(y(t_0) = y_0\) pins a specific starting point.
A solution is a function \(y(t)\) that:
1. Passes through the starting point: \(y(t_0) = y_0\)
2. Has its derivative match the rule at every time: \(y'(t) = f(t,\, y(t))\)
3. Does so on some open interval \((a, b)\) around \(t_0\)
Existence means: at least one such function \(y(t)\) actually exists. The equation isn't asking the impossible.
Why Could It Fail?
If \(f(t, y)\) is wildly discontinuous — jumping all over the place — there may be no smooth curve \(y(t)\) that can follow the rules everywhere. Think of asking someone to walk a path where the slope instruction flickers between \(+\infty\) and \(-\infty\) randomly. No one can do it.
A slope field shows the direction \(f(t,y)\) at every point. A solution curve \(y(t)\) is one that flows along those arrows. Existence says: can you draw such a curve through \((t_0, y_0)\)?
Objective 2.3.1
The Existence Theorem
We need a condition on \(f(t,y)\) that guarantees a solution can be drawn. The condition turns out to be simple: just make sure \(f\) isn't too crazy — specifically, that it's continuous.
Theorem — Peano Existence
Existence Theorem
Let \(f(t, y)\) be continuous on a closed rectangle
\[R = \{(t,y) : |t - t_0| \leq a,\; |y - y_0| \leq b\}\]
centered at the initial point \((t_0, y_0)\). Then the IVP
\[\frac{dy}{dt} = f(t,y),\qquad y(t_0)=y_0\]
has at least one solution \(y(t)\) defined on some open interval \((t_0 - h,\; t_0 + h)\) with \(h \leq a\).
What the Theorem Is Really Saying
Think of the rectangle \(R\) as a safe zone on the \((t,y)\) plane centered at your initial point. As long as \(f\) is nice and continuous inside that zone, a solution curve is guaranteed to exist — at least locally, near \(t_0\).
Analogy — Traffic Flow
Imagine \(f(t,y)\) is a traffic direction sign at every intersection of a city grid. Existence says: as long as every sign in your neighborhood actually points somewhere defined (no blank signs, no contradictions), you can always take a valid route from your starting block. The route exists.
Why Continuity is the Key
Continuity means \(f(t, y)\) doesn't jump abruptly. Nearby points get nearby slopes. So if you're at \((t_0, y_0)\) and you take a tiny step, the new direction you need to follow is close to the old one. You can always make progress.
If \(f\) is discontinuous — if it can jump from \(+1000\) to \(-1000\) in an instant — then after taking a tiny step you might face an impossible direction reversal, and no smooth curve can honor that.
Sketch of Proof
Proof Idea (Euler's Method Argument)
The full proof uses a compactness argument applied to Euler polygons. (The Picard–Lindelöf iteration proves the stronger existence-plus-uniqueness result, but it requires a Lipschitz condition, which comes later.) Here is the geometric skeleton that builds the key intuition:
\(\textbf{Step 1.}\) The initial slope is \(f(t_0, y_0)\). Use it to take a small step: \(y(t_0 + \Delta t) \approx y_0 + f(t_0,y_0)\,\Delta t\).
Euler approximation from IC
\(\textbf{Step 2.}\) Because \(f\) is continuous on the compact rectangle \(\bar{R}\), it is bounded: \(|f(t,y)| \leq M\) for some \(M > 0\).
Extreme Value Theorem
\(\textbf{Step 3.}\) Taking steps of size \(\Delta t\), the curve stays inside \(R\) for at least \(h = \min\!\left(a,\, \tfrac{b}{M}\right)\) time units, since the vertical excursion is at most \(M \cdot h \leq b\).
Bounding the drift
\(\textbf{Step 4.}\) As \(\Delta t \to 0\), the family of Euler polygons has a convergent subsequence (Arzelà–Ascoli theorem). The limit is a genuine differentiable solution.
Compactness argument
∎
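The Euler construction in the proof sketch is easy to run. A minimal illustration (not the proof itself): for \(y' = y\), \(y(0) = 1\), the polygon's endpoint value approaches the exact solution \(e^t\) at \(t = 1\) as the step count grows.

```python
import math

def euler_polygon(f, t0, y0, h, n):
    """Build the Euler polygon for y' = f(t, y), y(t0) = y0,
    on [t0, t0 + h] using n equal steps."""
    dt = h / n
    ts, ys = [t0], [y0]
    for _ in range(n):
        t, y = ts[-1], ys[-1]
        ys.append(y + f(t, y) * dt)   # follow the local slope
        ts.append(t + dt)
    return ts, ys

# y' = y, y(0) = 1: the polygon endpoint (1 + 1/n)^n tends to e.
for n in (10, 100, 1000):
    _, ys = euler_polygon(lambda t, y: y, 0.0, 1.0, 1.0, n)
    print(n, ys[-1], abs(ys[-1] - math.e))  # error shrinks as n grows
```

This is exactly Step 1 of the sketch iterated; the theorem's content is that some subsequence of these polygons converges to a true solution as \(n \to \infty\).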
⚠ Important Caveat
Existence is only guaranteed locally — on a possibly tiny interval around \(t_0\). The solution might not last forever. It might blow up to infinity at some finite time. Existence just says: it starts.
Building Intuition
What Does "Uniqueness" Mean?
Existence answers "does a solution exist?" Uniqueness asks a harder question: "could two different solution curves pass through the same initial point?"
If the answer is yes — if two curves both satisfy the IVP — then knowing the initial condition doesn't tell you where you're headed. The future is ambiguous. That's bad for modeling physical reality.
Two different curves, both passing through \((t_0, y_0)\) and both satisfying \(y' = f(t,y)\). Without uniqueness, the future is a fork — the same initial conditions lead to different outcomes.
Physical Meaning
In Newtonian physics, if you know a particle's position and velocity at one moment, the future is determined. This is precisely the uniqueness guarantee — identical initial states must evolve identically. If uniqueness failed, physics would be non-deterministic at the classical level.
A Famous Example Where Uniqueness Fails
Consider the IVP:
\[\frac{dy}{dt} = y^{1/3}, \qquad y(0) = 0\]
One obvious solution is \(y(t) = 0\) for all \(t\). The zero function. But there's another one:
\[y(t) = \left(\frac{2t}{3}\right)^{3/2}, \quad t \geq 0\]
Two different solutions from the same initial condition. Uniqueness breaks. Why? Because \(f(t,y) = y^{1/3}\) has an infinite derivative with respect to \(y\) at \(y = 0\). That's the clue — we need an extra condition on \(f\) beyond just continuity.
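Both claimed solutions can be checked directly. A small verification sketch: \(y_2(t) = (2t/3)^{3/2}\) has derivative \((2t/3)^{1/2}\), which equals \(y_2^{1/3}\), and the zero function trivially works too.

```python
# Two distinct solutions of y' = y**(1/3), y(0) = 0:
#   y1(t) = 0   and   y2(t) = (2t/3)**(3/2)  for t >= 0.
def f(y):
    return y ** (1.0 / 3.0)

def y2(t):
    return (2.0 * t / 3.0) ** 1.5

def dy2(t):
    # exact derivative of y2: (3/2)(2t/3)^(1/2) * (2/3) = (2t/3)^(1/2)
    return (2.0 * t / 3.0) ** 0.5

# Both satisfy the ODE at sample times, yet share y(0) = 0.
for t in (0.5, 1.0, 2.0):
    assert abs(dy2(t) - f(y2(t))) < 1e-12
assert f(0.0) == 0.0          # the zero solution trivially works
print("both solutions satisfy the IVP")
```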
Objective 2.3.2
The Uniqueness Theorem
The fix is to demand that \(f\) not only be continuous, but that it also be "controlled" in the \(y\)-direction. The precise tool is called a Lipschitz condition.
What is Lipschitz Continuity?
A function \(f(t, y)\) satisfies a Lipschitz condition in \(y\) on a rectangle \(R\) if there is a constant \(L > 0\) such that for all \((t, y_1)\) and \((t, y_2)\) in \(R\):
\[|f(t, y_1) - f(t, y_2)| \leq L\,|y_1 - y_2|\]
If you nudge \(y\) a little bit, the output \(f\) can only change by at most \(L\) times as much. The function can't have infinite sensitivity to \(y\). It can't be infinitely steep in the \(y\)-direction.
Easiest Way to Check: Use the Partial Derivative
If \(\partial f / \partial y\) exists and is continuous and bounded on \(R\), then \(f\) is Lipschitz in \(y\) on \(R\). This is the practical test you'll use almost every time.
Why?
By the Mean Value Theorem: \(f(t,y_1) - f(t,y_2) = \frac{\partial f}{\partial y}(t, \xi)\,(y_1 - y_2)\) for some \(\xi\) between \(y_1\) and \(y_2\). If \(|\partial f/\partial y| \leq L\) everywhere in \(R\), that immediately gives the Lipschitz bound.
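The practical test also suggests a crude numerical estimate of \(L\): maximize \(|\partial f/\partial y|\) over a grid on \(R\). A sketch only (the grid resolution is illustrative, and a coarse grid can miss the true maximum):

```python
# Estimate a Lipschitz constant for f in y on a rectangle by
# maximizing |df/dy| over a grid of sample points.
def lipschitz_estimate(dfdy, t_range, y_range, n=200):
    t0, t1 = t_range
    y0, y1 = y_range
    best = 0.0
    for i in range(n + 1):
        t = t0 + (t1 - t0) * i / n
        for j in range(n + 1):
            y = y0 + (y1 - y0) * j / n
            best = max(best, abs(dfdy(t, y)))
    return best

# f(t, y) = t * y**2  =>  df/dy = 2*t*y; on [-1,1] x [-2,2] the max is 4,
# attained at the corners (which are grid points).
L = lipschitz_estimate(lambda t, y: 2 * t * y, (-1, 1), (-2, 2))
print(L)  # -> 4.0
```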
Sketch of Uniqueness Proof
\(\textbf{Steps 1–3.}\) Suppose \(y_1(t)\) and \(y_2(t)\) both solve the IVP, and set \(u(t) = |y_1(t) - y_2(t)|\). Writing each solution in integral form \(y_i(t) = y_0 + \int_{t_0}^{t} f(s,\, y_i(s))\,ds\), subtracting, and applying the Lipschitz bound inside the integral gives \(u(t) \leq L\int_{t_0}^t u(s)\,ds\).
\(\textbf{Step 4.}\) By Gronwall's Inequality: \(u(t) \leq u(t_0)\,e^{L(t-t_0)} = 0 \cdot e^{L(t-t_0)} = 0\)
\(u(t_0) = |y_1(t_0)-y_2(t_0)| = 0\)
\(\textbf{Step 5.}\) Since \(u(t) = |y_1(t)-y_2(t)| \geq 0\) and \(u(t) \leq 0\), we get \(u(t) = 0\), i.e., \(y_1(t) = y_2(t)\).
Squeeze: nonneg. ≤ 0
∎
The Core Mechanism
The Lipschitz condition means: if two solutions ever "agree" at a point (which they must, at \(t_0\)), they can never drift apart — because any drift would require the equation to "feel" a difference larger than the Lipschitz bound allows. The Gronwall inequality is the mathematical expression of "drifts get squashed back to zero."
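The "drifts get squashed" picture can be seen numerically. Under a Lipschitz bound \(L\), two solutions started a distance \(d\) apart separate by at most \(d\,e^{L(t-t_0)}\). A sketch using \(y' = \sin y\), for which \(|\partial f/\partial y| = |\cos y| \leq 1\), so \(L = 1\):

```python
import math

def euler(f, t0, y0, t1, n):
    """Basic Euler integration of y' = f(t, y) from t0 to t1."""
    dt = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += f(t, y) * dt
        t += dt
    return y

rhs = lambda t, y: math.sin(y)     # Lipschitz in y with L = 1
d = 1e-6                           # initial separation
ya = euler(rhs, 0.0, 1.0, 2.0, 100000)
yb = euler(rhs, 0.0, 1.0 + d, 2.0, 100000)
print(abs(ya - yb), d * math.exp(1.0 * 2.0))   # separation <= d * e^(L t)
```

With \(d = 0\) (same starting point), the Gronwall bound forces the separation to stay exactly zero, which is uniqueness.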
Back to the failure example: for \(f(t,y) = y^{1/3}\), the partial derivative is \(\frac{\partial f}{\partial y} = \frac{1}{3}y^{-2/3}\), which blows up as \(y \to 0\). So \(\partial f/\partial y\) is not continuous (or even bounded) near \(y = 0\). The Lipschitz condition fails. The theorem gives no uniqueness guarantee, and indeed uniqueness fails.
Objective 2.3.3
Finding the Points & Intervals
In practice, you're given an IVP and asked: for which initial points \((t_0, y_0)\) does the solution exist and is it unique? And on which interval is that guaranteed?
The Recipe
1. Check continuity of \(f(t,y)\). Find where \(f\) is continuous — typically all of \(\mathbb{R}^2\) except where a denominator is zero, or a root is negative, etc.
2. Compute \(\partial f / \partial y\) and check its continuity. Find where \(\partial f/\partial y\) is continuous — same idea, possibly additional constraints.
3. Identify the good region. The "good" region is where both \(f\) and \(\partial f/\partial y\) are continuous. Existence and uniqueness hold in any open rectangle inside this region.
4. Find the interval. Given initial point \((t_0, y_0)\), the guaranteed interval is the largest open interval containing \(t_0\) on which the solution stays inside the good region.
Key Insight About the Interval
The theorem guarantees existence/uniqueness on some interval. But we want the largest possible interval — up to the first point where conditions break down. A solution can be extended as long as it stays in the good region.
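The recipe can be sanity-checked numerically. A rough heuristic sketch (the function and tolerances here are illustrative, and this is no substitute for the analysis): sample \(f\) and a finite-difference \(\partial f/\partial y\) near the initial point and flag blow-ups.

```python
# Heuristic check of the recipe: sample f and a finite-difference
# df/dy near (t0, y0) and flag non-finite or exploding values.
def check_point(f, t0, y0, eps=1e-4):
    try:
        vals = [f(t0 + dt, y0 + dy)
                for dt in (-eps, 0, eps) for dy in (-eps, 0, eps)]
        dfdy = (f(t0, y0 + eps) - f(t0, y0 - eps)) / (2 * eps)
    except ZeroDivisionError:
        return False                       # f undefined nearby
    return all(abs(v) < 1e8 for v in vals) and abs(dfdy) < 1e8

f = lambda t, y: t**2 / (y - 1)
print(check_point(f, 0, 3))   # True: (0, 3) is in the good region
print(check_point(f, 0, 1))   # False: y = 1 breaks continuity
```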
Comparison: Existence Only vs. Existence + Uniqueness

| Scenario | Guarantee | Conclusion |
|---|---|---|
| \(f\) continuous, \(\partial f/\partial y\) not continuous | Existence only (Peano) | At least one solution exists. May not be unique. |
| Both \(f\) and \(\partial f/\partial y\) continuous | Existence + Uniqueness (Picard–Lindelöf) | Exactly one solution exists locally. |
| \(f\) not continuous | Neither theorem applies | No guarantee. May have zero, one, or many solutions. |
Worked Examples
Examples — Full Walkthrough
Example 1
Does a unique solution exist? \(\;y' = \dfrac{t^2}{y-1},\quad y(0) = 3\)
Here \(f(t,y) = \dfrac{t^2}{y-1}\).
1. Continuity of \(f\): \(f\) is continuous everywhere except where the denominator is zero: \(y - 1 = 0 \Rightarrow y = 1\). So \(f\) is continuous on \(\{(t,y) : y \neq 1\}\).
2. Compute \(\partial f/\partial y\):
\[\frac{\partial f}{\partial y} = \frac{\partial}{\partial y}\left(\frac{t^2}{y-1}\right) = -\frac{t^2}{(y-1)^2}\]
This is also continuous except at \(y = 1\).
3. Initial point: \((t_0, y_0) = (0, 3)\). Is \(y_0 = 3 \neq 1\)? Yes. So the initial point is in the good region.
4. Conclusion: By the Existence & Uniqueness Theorem, a unique solution exists in some open interval around \(t = 0\).
5. Interval: The solution is guaranteed to exist as long as \(y(t) \neq 1\). We can't tell from the theorem alone when/if \(y\) hits 1 — that requires solving the equation.
Result
A unique solution exists in some open interval containing \(t = 0\). The relevant good region is the half-plane \(y > 1\) (since our IC has \(y_0 = 3 > 1\)), and the solution remains valid as long as it stays away from \(y = 1\).
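To watch Example 1's conclusion in action, here is a basic Euler integration (an illustration, not part of the theorem). For \(y > 1\) the slope \(t^2/(y-1)\) is nonnegative, so in forward time the computed solution climbs away from the bad line \(y = 1\):

```python
def forward_euler(f, t0, y0, t1, n):
    """Basic Euler integration of y' = f(t, y) from t0 to t1 in n steps."""
    dt = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += f(t, y) * dt
        t += dt
    return y

# y' = t**2 / (y - 1), y(0) = 3: slope >= 0 for y > 1, so y increases.
y_end = forward_euler(lambda t, y: t**2 / (y - 1), 0.0, 3.0, 1.0, 10000)
print(y_end)   # rises above 3; the separable exact value is 1 + sqrt(14/3)
assert y_end > 3.0
```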
Example 2 — Where Does Uniqueness Fail?
\(y' = (y-2)^{2/3},\quad y(0) = 2\)
Here \(f(t,y) = (y-2)^{2/3}\).
1. Continuity of \(f\): \((y-2)^{2/3}\) is continuous for all real \(y\). ✓
2. Compute \(\partial f/\partial y\):
\[\frac{\partial f}{\partial y} = \frac{2}{3}(y-2)^{-1/3}\]
At \(y = 2\) this is undefined: it blows up as \(y \to 2\), so \(\partial f/\partial y\) is not continuous at \(y = 2\).
3. Initial point: \((0, 2)\). This is exactly on the line \(y = 2\) where \(\partial f/\partial y\) blows up.
4. Conclusion: Existence is guaranteed (since \(f\) is continuous), but uniqueness is not. Indeed, both \(y(t) = 2\) and \(y(t) = 2 + \left(\tfrac{t}{3}\right)^3\) solve the IVP.
Result
At \(y_0 = 2\), existence holds but uniqueness fails. You can verify that \(y = 2\) is the "flat" solution (stays at 2 forever), while the other grows away. Two valid futures from the same start.
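Both solutions can be verified directly. A small check (the fractional power is taken via a real-valued root so the expression stays real):

```python
# Verifying both solutions of y' = (y - 2)**(2/3), y(0) = 2:
#   constant solution y = 2, and  y = 2 + (t/3)**3.
def g(y):
    return abs(y - 2) ** (2.0 / 3.0)    # real-valued (y-2)^(2/3)

def sol(t):
    return 2 + (t / 3) ** 3

def dsol(t):
    # exact derivative of sol: 3*(t/3)**2 * (1/3) = t**2 / 9
    return t ** 2 / 9

for t in (0.0, 1.0, 2.0, 3.0):
    # (t/3)**2 equals ((t/3)**3)**(2/3), so the ODE holds exactly
    assert abs(dsol(t) - g(sol(t))) < 1e-12
assert g(2) == 0                        # the constant solution also works
print("two solutions through (0, 2)")
```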
Example 3 — Finding the Interval
\(y' = t\,y^2,\quad y(0) = 1\)
Here \(f(t,y) = t\,y^2\).
1. Check conditions: \(f = ty^2\) is continuous everywhere. \(\partial f/\partial y = 2ty\) is also continuous everywhere. The entire \(ty\)-plane is the good region.
2. Solve explicitly to find the interval: This is separable:
\[\int y^{-2}\,dy = \int t\,dt \implies -\frac{1}{y} = \frac{t^2}{2} + C\]
Applying \(y(0) = 1\) gives \(C = -1\), so \(-\dfrac{1}{y} = \dfrac{t^2}{2} - 1\), i.e. \(y(t) = \dfrac{2}{2 - t^2}\).
3. Blow-up time: The denominator \(2 - t^2 = 0\) at \(t = \pm\sqrt{2}\). So the solution blows up at \(t = \pm\sqrt{2}\), and the maximal interval of existence is \((-\sqrt{2},\, \sqrt{2})\).
Key Lesson
Even though conditions were met everywhere, the solution still blows up in finite time. The theorem guarantees existence locally; the actual interval depends on the specific solution. Always check whether the solution escapes to infinity.
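A quick numerical confirmation of the blow-up, checking that \(y = 2/(2 - t^2)\) really satisfies \(y' = ty^2\) and explodes near \(\sqrt{2} \approx 1.414\):

```python
# Example 3's solution y = 2 / (2 - t**2) blows up at t = sqrt(2),
# even though f(t, y) = t * y**2 is smooth everywhere.
def y_exact(t):
    return 2.0 / (2.0 - t ** 2)

def dy_exact(t):
    # derivative of 2*(2 - t^2)^(-1) is 4t / (2 - t^2)^2
    return 4.0 * t / (2.0 - t ** 2) ** 2

# The formula satisfies the ODE: y' = t * y**2.
for t in (0.0, 0.5, 1.0, 1.3):
    assert abs(dy_exact(t) - t * y_exact(t) ** 2) < 1e-9

# And it explodes as t approaches sqrt(2) from below:
for t in (1.40, 1.41, 1.414):
    print(t, y_exact(t))   # values grow without bound
```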
Self-Check
Test Your Intuition
Q1: What's the difference between existence and uniqueness?
Existence means at least one solution curve passes through \((t_0, y_0)\). It answers "does anything work?"
Uniqueness means only one solution does so. It answers "is there just one?"
You can have existence without uniqueness (like \(y' = y^{1/3}\) at \(y_0 = 0\)): multiple solutions exist. You cannot have uniqueness without existence (a unique "nothing" isn't a solution). For physics and engineering, we almost always need both.
Q2: Why does continuity of \(\partial f/\partial y\) give uniqueness?
Continuity of \(\partial f/\partial y\) on a closed bounded rectangle implies it's bounded there: \(|\partial f/\partial y| \leq L\). By the Mean Value Theorem, this is exactly the Lipschitz condition: \(|f(t,y_1) - f(t,y_2)| \leq L|y_1 - y_2|\).
The Lipschitz condition prevents solutions from "separating." If two solutions start at the same point, the Lipschitz bound keeps their difference from growing — in fact, Gronwall's inequality drives that difference to exactly zero.
Q3: A solution can blow up even if uniqueness holds. Why?
The theorem guarantees existence/uniqueness locally — in a small rectangle around \((t_0, y_0)\). If the solution escapes that rectangle, the theorem says nothing about what happens next. In Example 3, \(f = ty^2\) is nice everywhere, so conditions hold in every rectangle — but the solution \(y = 2/(2-t^2)\) nevertheless hits \(+\infty\) at \(t = \sqrt{2}\).
Uniqueness just means you follow one path. But that one path can still lead off a cliff at finite time. "Unique" doesn't mean "nice forever."
Q4: For \(y' = \sin(ty)/t\), where does uniqueness hold?
Here \(f(t,y) = \sin(ty)/t\). The function \(f\) is undefined at \(t = 0\) but \(\lim_{t\to 0} \sin(ty)/t = y\), so if we define \(f(0,y) = y\) by continuity, \(f\) is continuous everywhere.
Compute \(\partial f/\partial y = \cos(ty)\), which is continuous everywhere.
So both conditions hold everywhere (with the patched definition at \(t=0\)), and uniqueness is guaranteed at every initial point.
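The patch at \(t = 0\) can be verified numerically (a sketch; the sample points and tolerances are illustrative):

```python
import math

# Q4's patched right-hand side: f(t, y) = sin(t*y)/t for t != 0, f(0, y) = y.
def f(t, y):
    if t == 0:
        return y              # continuous extension: sin(ty)/t -> y as t -> 0
    return math.sin(t * y) / t

# The patch matches the limit, and df/dy = cos(t*y) at sampled points.
for y in (-2.0, 0.0, 1.5):
    assert abs(f(1e-8, y) - f(0, y)) < 1e-8       # continuity across t = 0
    eps = 1e-6
    dfdy = (f(0.7, y + eps) - f(0.7, y - eps)) / (2 * eps)
    assert abs(dfdy - math.cos(0.7 * y)) < 1e-6   # matches cos(t*y)
print("patched f is continuous and df/dy = cos(t*y)")
```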