§ 0 · Foundation
What Are We Actually Solving?
Picture a mass hanging on a spring. You pull it and let go — it bounces up and down on its own. That's called a homogeneous problem: the system is responding to its own initial state, with nothing from the outside pushing on it.
Now imagine somebody is also shaking the ceiling while the mass bounces. Now an external force is being added at every instant. That external push is called the forcing term or nonhomogeneous term \( g(t) \). The differential equation describing this is:
\[ ay'' + by' + cy = g(t) \]
The left side \( ay'' + by' + cy \) is the system talking to itself — its own inertia, damping, and restoring force. The right side \( g(t) \) is whatever is being pumped in from outside.
When \( g(t) = 0 \), you have the homogeneous version. When \( g(t) \neq 0 \), the external world is pushing. Our job is to find \( y(t) \) — the total motion of the system at every instant.
Homogeneous
\( ay'' + by' + cy = 0 \)
The system vibrates freely under its own dynamics; no outside push. The solution is called \( y_c \), the complementary or natural solution.
Nonhomogeneous
\( ay'' + by' + cy = g(t) \)
Something external is driving the system at every moment. We need an extra piece of the solution: \( y_p \), the particular or forced response.
The engineering split
In systems language: \( y_c \) is the natural response (what the system does on its own) and \( y_p \) is the forced response (what the system does because of the input). The full answer is always their sum: \( y = y_c + y_p \).
§ 1 · Anatomy
Dissecting the Equation
Before solving anything, understand what each piece of the equation means. Take the standard form:
\[ ay'' + by' + cy = g(t) \]
\( y(t) \): The unknown function we want to find. It depends on time \( t \). Every derivative is a derivative of this function.
\( a, b, c \): Constants that describe the system's physical properties. In a spring: \( a \) is mass, \( b \) is damping, \( c \) is spring stiffness. Crucially, they do not depend on \( t \) — this is what makes the equation "constant coefficient."
\( L[y] \): Think of \( L \) as a machine: you feed it a function \( y \), it spits out \( ay'' + by' + cy \). Writing \( L[y] \) instead of the full expression is just shorthand that keeps equations readable. The critical property of \( L \) is that it is linear:
\[ L[y_1 + y_2] = L[y_1] + L[y_2] \quad \text{and} \quad L[ky] = kL[y] \ \text{for any constant } k \]
This linearity is the entire engine behind why the method works.
\( g(t) \): Also called the driving term, input, or nonhomogeneous term. It is the external signal being injected into the system. The shape of \( g(t) \) determines what kind of guess we make for \( y_p \).
Classification of this equation:
It is second-order (highest derivative is \( y'' \)), linear (\( y, y', y'' \) appear only to the first power and are never multiplied together), constant-coefficient (coefficients \( a, b, c \) don't depend on \( t \)), and nonhomogeneous (right side is not zero).
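The linearity of \( L \) is easy to check symbolically. A minimal sympy sketch (the coefficient values \( a=1, b=-3, c=2 \) are arbitrary choices for illustration):

```python
import sympy as sp

t, k = sp.symbols('t k')

def L(y, a=1, b=-3, c=2):
    """Apply the operator L[y] = a*y'' + b*y' + c*y."""
    return a*sp.diff(y, t, 2) + b*sp.diff(y, t) + c*y

y1, y2 = sp.sin(t), sp.exp(2*t)   # two arbitrary test functions

# Additivity and scaling: the two halves of linearity
assert sp.simplify(L(y1 + y2) - (L(y1) + L(y2))) == 0
assert sp.simplify(L(k*y1) - k*L(y1)) == 0
```

Because \( L \) passes through sums and constant multiples, solutions can be assembled piece by piece; that is exactly what the split \( y = y_c + y_p \) will exploit.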
§ 2 · Solution Architecture
Why the Solution Splits into Two Parts
This is the deepest idea in the whole topic. Don't rush past it. Everything else is just mechanics that follows from it.
Suppose \( y_p \) is any one function that satisfies the nonhomogeneous equation:
\[ L[y_p] = g(t) \]
Now let \( y_h \) be any solution to the homogeneous equation (the one with zero on the right):
\[ L[y_h] = 0 \]
What happens if we add them together? Define \( y = y_h + y_p \) and plug into \( L \):
\[ L[y_h + y_p] = L[y_h] + L[y_p] = 0 + g(t) = g(t) \]
So the sum is again a solution of the nonhomogeneous equation.
Now the crucial question: is every solution of this form? Yes. Here's why. Let \( \tilde{y} \) be any solution whatsoever. Look at the difference \( u = \tilde{y} - y_p \):
\[ L[u] = L[\tilde{y}] - L[y_p] = g(t) - g(t) = 0 \]
So \( u \) solves the homogeneous equation, which means every solution \( \tilde{y} = y_p + u \) is the particular solution plus some homogeneous solution.
The Structural Theorem
\[ \boxed{y = y_c + y_p} \]
where \( y_c \) is the general solution to the homogeneous equation (containing all the free constants \( c_1, c_2 \)) and \( y_p \) is any one particular solution to the nonhomogeneous equation. This form is both necessary and sufficient — it captures all solutions and nothing else.
We only need one particular solution
It doesn't matter which \( y_p \) you find — any one will do. The \( y_c \) piece absorbs all the freedom. Pick the simplest one.
§ 3 · Step One
Solving the Homogeneous Equation
Before we can find \( y_p \), we need \( y_c \). Start with:
\[ ay'' + by' + cy = 0 \]
Why try \( y = e^{rt} \)? Because exponentials are the only functions that reproduce themselves under differentiation — they're eigenfunctions of the derivative operator. Differentiating \( e^{rt} \) gives back \( e^{rt} \), just scaled by \( r \):
\[ \frac{d}{dt}\,e^{rt} = re^{rt} \]
That means if we substitute \( y = e^{rt} \) into the ODE, every term comes out with a factor of \( e^{rt} \), which we can cancel — converting the differential equation into a pure algebra problem.
Substituting \( y = e^{rt} \), \( y' = re^{rt} \), \( y'' = r^2 e^{rt} \) into \( ay'' + by' + cy = 0 \):
\[ ar^2 e^{rt} + br e^{rt} + c e^{rt} = e^{rt}\,(ar^2 + br + c) = 0 \]
Since \( e^{rt} \) is never zero, we can cancel it, leaving the characteristic equation \( ar^2 + br + c = 0 \).
Solve this quadratic for \( r \). The nature of the roots determines the form of \( y_c \):
Two distinct real roots \( r_1 \neq r_2 \)
\[ y_c = C_1 e^{r_1 t} + C_2 e^{r_2 t} \]Each root gives one independent solution. Two roots, two independent pieces.
Repeated real root \( r_1 = r_2 = r \)
\[ y_c = (C_1 + C_2 t)\,e^{rt} \]When the quadratic has a double root, the second independent solution picks up a factor of \( t \) — a key pattern that will reappear in the resonance glitch.
Complex roots \( r = \alpha \pm \beta i \)
\[ y_c = e^{\alpha t}(C_1 \cos\beta t + C_2 \sin\beta t) \]Euler's formula converts complex exponentials into real sines and cosines. The real part \( \alpha \) controls growth or decay; the imaginary part \( \beta \) controls the oscillation frequency.
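The case analysis mechanizes directly: find the roots, inspect their multiplicity, assemble \( y_c \). A sketch (the helper `complementary` is our own name; sympy reports complex roots in exponential form, which Euler's formula converts to the sine/cosine form shown above):

```python
import sympy as sp

t, r = sp.symbols('t r')
C1, C2 = sp.symbols('C1 C2')

def complementary(a, b, c):
    """Build y_c from the roots of the characteristic equation a r^2 + b r + c = 0."""
    rts = sp.roots(a*r**2 + b*r + c, r)      # dict {root: multiplicity}
    if list(rts.values()) == [2]:            # double root: second solution picks up a factor t
        (rr,) = rts
        return (C1 + C2*t)*sp.exp(rr*t)
    r1, r2 = rts                             # two distinct (possibly complex) roots
    return C1*sp.exp(r1*t) + C2*sp.exp(r2*t)

yc = complementary(1, -3, 2)   # roots 1 and 2 -> C1*e^t + C2*e^(2t)
```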
Why do we need \( y_c \) before guessing \( y_p \)?
Because we need to check whether our guess for \( y_p \) accidentally overlaps with \( y_c \). If it does, the method fails silently — the guess gets annihilated by the operator. You must have \( y_c \) in hand before writing down \( y_p \).
§ 4 · Particular Solution
Guessing the Form of \( y_p \)
Here is the core insight: differentiation keeps you in the same family. Differentiate a polynomial and you get a polynomial. Differentiate an exponential and you get an exponential. Differentiate a sine or cosine and you stay in the sin/cos world. So if \( g(t) \) is built from these, then \( y_p \) must be too — otherwise the two sides of the equation can never match.
| Form of \( g(t) \) | Guess for \( y_p \) | Why it makes sense |
|---|---|---|
| \( Ke^{rt} \) | \( Ae^{rt} \) | Derivatives of \( e^{rt} \) are just constants times \( e^{rt} \) — so the left side stays in the same family |
| \( P_n(t) \) — polynomial degree \( n \) | \( A_n t^n + \cdots + A_1 t + A_0 \) | You need the full degree-\(n\) polynomial because derivatives lower the degree, and all lower-degree terms survive via the \( cy_p \) part |
| \( K\cos(\beta t) \) or \( K\sin(\beta t) \) | \( A\cos(\beta t) + B\sin(\beta t) \) | Differentiation mixes them: \( (\cos)' = -\sin \), \( (\sin)' = \cos \). Starting with one creates the other — you need both |
| \( P_n(t)e^{\alpha t} \) | \( (A_n t^n + \cdots + A_0)e^{\alpha t} \) | Product rule mixes polynomial and exponential but keeps you in their product family |
| \( e^{\alpha t}\cos(\beta t) \) or \( e^{\alpha t}\sin(\beta t) \) | \( e^{\alpha t}(A\cos\beta t + B\sin\beta t) \) | Decaying oscillation stays in its own family under differentiation |
| \( P_n(t)e^{\alpha t}\cos(\beta t) \) (most general) | \( e^{\alpha t}\!\big[(A_n t^n\!+\!\cdots\!+\!A_0)\cos\beta t + (B_n t^n\!+\!\cdots\!+\!B_0)\sin\beta t\big] \) | Full combination of all three families; must include both trig terms at every polynomial power |
Why "undetermined coefficients"?
We know the shape of the answer before we know the size. The letters \( A, B, A_0, \ldots \) are the sizes — the unknowns. We commit to the correct shape first, then determine the coefficients by substituting into the ODE and matching both sides. The name comes from the fact that the coefficients are initially undetermined.
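The commit-to-a-shape-then-solve-for-sizes step is pure algebra, so it mechanizes cleanly. A sketch for \( y'' - 3y' + 2y = e^{3t} \) (an equation of our own choosing; \( r = 3 \) is not a characteristic root, so the naive guess is safe):

```python
import sympy as sp

t, A = sp.symbols('t A')
g  = sp.exp(3*t)
yp = A*sp.exp(3*t)            # the shape, committed before the size is known

# Substitute the guess into L[y] - g and force the residual to vanish
residual = sp.diff(yp, t, 2) - 3*sp.diff(yp, t) + 2*yp - g
Aval = sp.solve(sp.simplify(residual), A)[0]   # -> 1/2, so y_p = e^(3t)/2
```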
§ 5 · The Critical Complication
The Resonance Glitch
Here is the famous failure mode. If your guessed \( y_p \) happens to already be part of \( y_c \) — a natural mode of the system — then substituting it into the ODE gives zero on the left no matter what coefficient you put in front of it. The operator kills it. Your guess is useless.
Why does this happen? Because \( y_c \) is exactly the collection of functions that \( L \) maps to zero. If your guess lives inside that collection, \( L[y_p] = 0 \), not \( g(t) \). The forcing can never be matched.
The Overlap / Resonance Rule
After writing your initial guess, compare every term with \( y_c \). If any term in the guess also appears in \( y_c \), multiply the entire guess by \( t \). If there's still overlap, multiply by \( t^2 \). Keep going until no term in the guess overlaps with \( y_c \).
Example of the glitch: Consider \( y'' - 3y' + 2y = e^t \). The characteristic roots are \( r = 1, 2 \), so:
\[ y_c = C_1 e^t + C_2 e^{2t} \]
The forcing is \( g(t) = e^t \), so the naive guess is \( y_p = Ae^t \). But \( e^t \) is already in \( y_c \)! Let's see what happens when we try it anyway:
\[ (Ae^t)'' - 3(Ae^t)' + 2(Ae^t) = Ae^t - 3Ae^t + 2Ae^t = 0 \neq e^t \]
No choice of \( A \) can make \( 0 \) equal \( e^t \).
The fix: multiply by \( t \). Use \( y_p = Ate^t \) instead. The factor \( t \) pushes the guess outside \( y_c \)'s span. It's no longer a natural mode. Now the operator can produce a nonzero result.
The general pattern for how many times to multiply by \( t \):
If \( r \) is a root of the characteristic equation of multiplicity \( s \) (i.e., the root appears \( s \) times), multiply your guess by \( t^s \). For a simple root (\( s=1 \)), multiply once. For a double root (\( s=2 \)), multiply twice, etc.
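The annihilation is easy to watch. A sketch using \( y'' - 3y' + 2y \) (characteristic roots \( r = 1, 2 \)): the resonant guess is mapped to zero, while one factor of \( t \) escapes the kernel:

```python
import sympy as sp

t, A = sp.symbols('t A')
L = lambda y: sp.diff(y, t, 2) - 3*sp.diff(y, t) + 2*y   # roots r = 1, 2

killed   = sp.simplify(L(A*sp.exp(t)))     # e^t is a natural mode
survives = sp.simplify(L(A*t*sp.exp(t)))   # the t-multiplied guess

assert killed == 0       # no coefficient A can ever match g(t)
assert survives != 0     # equals -A*e^t, which CAN match a forcing K*e^t
```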
§ 6 · Combat Sequence
The Full Step-by-Step Algorithm
Every nonhomogeneous ODE of this type gets solved in the same order. Lock this in.
Method of Undetermined Coefficients — Full Sequence
1. Put the equation in standard form \( ay'' + by' + cy = g(t) \).
2. Solve the characteristic equation \( ar^2 + br + c = 0 \) and write \( y_c \).
3. Read the form of \( g(t) \) and write the trial \( y_p \) from the table in § 4.
4. Compare every term of the guess with \( y_c \); multiply by \( t^s \) until nothing overlaps.
5. Substitute \( y_p \) into the ODE and match coefficients to find the unknowns.
6. Assemble \( y = y_c + y_p \); if initial conditions are given, apply them to the full solution.
§ 7 · Worked Example 1
Polynomial Forcing
Solve the equation:
\[ y'' - 3y' + 2y = t^2 \]
Objective: Find the general solution \( y(t) \).
Known: Constant-coefficient linear ODE. Forcing \( g(t) = t^2 \) is a degree-2 polynomial.
Strategy: Find \( y_c \) from the characteristic equation, guess a degree-2 polynomial for \( y_p \), substitute, match coefficients.
Step 1 — Homogeneous solution. Solve \( y'' - 3y' + 2y = 0 \). Try \( y = e^{rt} \):
\[ r^2 - 3r + 2 = (r-1)(r-2) = 0 \quad\Rightarrow\quad r = 1,\ 2 \quad\Rightarrow\quad y_c = C_1 e^t + C_2 e^{2t} \]
Step 2 — Guess for \( y_p \). Since \( g(t) = t^2 \) is degree 2, we guess a full degree-2 polynomial. We need all three terms because differentiating \( t^2 \) produces \( t \) and then \( 1 \), and those lower terms survive through the \( cy_p \) part of the ODE.
\[ y_p = At^2 + Bt + C \]
Overlap check: does any of \( \{t^2, t, 1\} \) appear in \( y_c = C_1 e^t + C_2 e^{2t} \)? No — \( y_c \) is purely exponential. No modification needed.
Step 3 — Compute derivatives.
\[ y_p' = 2At + B, \qquad y_p'' = 2A \]
Step 4 — Substitute into the ODE. The ODE requires \( y_p'' - 3y_p' + 2y_p = t^2 \). Plug in:
\[ 2A - 3(2At + B) + 2(At^2 + Bt + C) = t^2 \]
\[ 2At^2 + (2B - 6A)t + (2A - 3B + 2C) = t^2 + 0{\cdot}t + 0 \]
Step 5 — Match coefficients. Two polynomials are equal for all \( t \) if and only if coefficients of every power match:
\( t^2 \): \( 2A = 1 \Rightarrow A = \tfrac{1}{2} \)
\( t^1 \): \( 2B - 6A = 0 \Rightarrow B = 3A = \tfrac{3}{2} \)
\( t^0 \): \( 2A - 3B + 2C = 0 \Rightarrow 1 - \tfrac{9}{2} + 2C = 0 \Rightarrow C = \tfrac{7}{4} \)
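The matching step can be cross-checked symbolically; a sketch (`residual` is our own helper name):

```python
import sympy as sp

t, A, B, C = sp.symbols('t A B C')
yp = A*t**2 + B*t + C
residual = sp.expand(sp.diff(yp, t, 2) - 3*sp.diff(yp, t) + 2*yp - t**2)

# The residual must vanish for all t, so every power's coefficient is zero
eqs = [residual.coeff(t, k) for k in range(3)]
sol = sp.solve(eqs, [A, B, C])   # -> {A: 1/2, B: 3/2, C: 7/4}
```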
General solution
\[ y(t) = C_1 e^t + C_2 e^{2t} + \frac{1}{2}t^2 + \frac{3}{2}t + \frac{7}{4} \]
§ 8 · Worked Example 2
Exponential Forcing with Resonance Glitch
Solve:
\[ y'' - 3y' + 2y = e^t \]
Step 1 — Homogeneous solution. Same characteristic equation as before: \( r = 1, 2 \).
\[ y_c = C_1 e^t + C_2 e^{2t} \]
Step 2 — Guess for \( y_p \). Since \( g(t) = e^t \), the naive guess is \( y_p = Ae^t \).
Overlap check: Is \( e^t \) already in \( y_c \)? Yes — it's the \( C_1 e^t \) term. The operator will kill this guess. We must multiply by \( t \):
\[ y_p = Ate^t \]
Step 3 — Compute derivatives. Use the product rule carefully:
\[ y_p' = Ae^t(1 + t), \qquad y_p'' = Ae^t(2 + t) \]
Step 4 — Substitute into the ODE. Require \( y_p'' - 3y_p' + 2y_p = e^t \):
\[ Ae^t(2 + t) - 3Ae^t(1 + t) + 2Ate^t = Ae^t\,[\,2 + t - 3 - 3t + 2t\,] = -Ae^t = e^t \]
\[ \Rightarrow A = -1, \qquad y_p = -te^t \]
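A quick symbolic cross-check of the resonant computation (sketch):

```python
import sympy as sp

t, A = sp.symbols('t A')
yp = A*t*sp.exp(t)    # the t-multiplied guess
res = sp.simplify(sp.diff(yp, t, 2) - 3*sp.diff(yp, t) + 2*yp)   # -> -A*e^t
Aval = sp.solve(sp.Eq(res, sp.exp(t)), A)[0]
assert Aval == -1     # so y_p = -t*e^t
```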
General solution
\[ y(t) = C_1 e^t + C_2 e^{2t} - te^t \]
§ 9 · Worked Example 3
Trig Forcing with Resonance
Solve:
\[ y'' + 4y = \cos(2t) \]
Step 1 — Homogeneous solution. Solve \( y'' + 4y = 0 \). Characteristic equation:
\[ r^2 + 4 = 0 \quad\Rightarrow\quad r = \pm 2i \quad\Rightarrow\quad y_c = C_1\cos(2t) + C_2\sin(2t) \]
Step 2 — Guess for \( y_p \). Since \( g(t) = \cos(2t) \), the naive guess is \( y_p = A\cos(2t) + B\sin(2t) \).
Overlap check: Is \( \cos(2t) \) in \( y_c \)? Yes. Is \( \sin(2t) \) in \( y_c \)? Yes. Both terms are natural modes — the forcing is exactly at the system's natural frequency. This is pure resonance. Multiply the entire guess by \( t \):
\[ y_p = t\big(A\cos(2t) + B\sin(2t)\big) \]
Step 3 — Compute \( y_p' \). Apply the product rule to each term separately:
\[ y_p' = A\cos 2t + B\sin 2t + t\big({-2A}\sin 2t + 2B\cos 2t\big) \]
Step 4 — Compute \( y_p'' \). Differentiate \( y_p' \) term by term again:
\[ y_p'' = -4A\sin 2t + 4B\cos 2t + t\big({-4A}\cos 2t - 4B\sin 2t\big) \]
Step 5 — Substitute into \( y_p'' + 4y_p = \cos(2t) \). The \( t \)-proportional terms cancel, leaving
\[ -4A\sin(2t) + 4B\cos(2t) = \cos(2t) \]
Step 6 — Match coefficients:
\( \sin(2t) \): \( -4A = 0 \Rightarrow A = 0 \)
\( \cos(2t) \): \( 4B = 1 \Rightarrow B = \tfrac{1}{4} \)
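The cancellation of the \( t \)-terms and the resulting match can be verified in a few lines (sketch):

```python
import sympy as sp

t, A, B = sp.symbols('t A B')
yp = t*(A*sp.cos(2*t) + B*sp.sin(2*t))
lhs = sp.expand(sp.diff(yp, t, 2) + 4*yp)   # the t-proportional terms cancel

eqs = [lhs.coeff(sp.sin(2*t)),              # sin coefficient must be 0
       lhs.coeff(sp.cos(2*t)) - 1]          # cos coefficient must be 1
sol = sp.solve(eqs, [A, B])                 # -> {A: 0, B: 1/4}
```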
General solution
\[ y(t) = C_1\cos(2t) + C_2\sin(2t) + \frac{1}{4}\,t\sin(2t) \]
§ 10 · Initial Value Problems
Locking Down the Constants
The general solution \( y = y_c + y_p \) still has free constants \( C_1, C_2 \). Initial conditions pin those down to give the unique solution for a specific physical scenario. The critical rule:
Always apply ICs to the complete solution
Initial conditions apply to \( y = y_c + y_p \), not to \( y_c \) alone. This is the single most common and most damaging mistake. The particular part \( y_p \) contributes to the value of \( y \) and \( y' \) at the initial point and must be included.
Find the general solution \( y = y_c + y_p \)
Complete the full method: \( y_c \) from the characteristic equation, \( y_p \) from undetermined coefficients.
Differentiate the full solution to get \( y' \)
You will need \( y(t_0) = y_0 \) and \( y'(t_0) = y_0' \), so compute \( y' \) symbolically with \( C_1, C_2 \) still free.
Substitute \( t = t_0 \) into \( y \) and \( y' \), set equal to given values
You now have two algebraic equations in two unknowns \( C_1, C_2 \).
Solve the 2×2 system for \( C_1, C_2 \)
Standard algebra. Substitute back to write the unique solution.
Worked example: \( y'' - 3y' - 4y = 3e^{2t} \) with \( y(0) = 1,\ y'(0) = 2 \). The complementary solution is \( y_c = C_1 e^{4t} + C_2 e^{-t} \) (characteristic roots \( 4, -1 \)). Since \( r = 2 \) is not a root, the guess \( Ae^{2t} \) needs no modification; substituting gives \( -6Ae^{2t} = 3e^{2t} \), so \( A = -\tfrac{1}{2} \) and \( y_p = -\tfrac{1}{2}e^{2t} \).
General solution:
\[ y = C_1 e^{4t} + C_2 e^{-t} - \tfrac{1}{2}e^{2t} \]
Differentiate:
\[ y' = 4C_1 e^{4t} - C_2 e^{-t} - e^{2t} \]
Apply \( y(0) = 1 \):
\[ C_1 + C_2 - \tfrac{1}{2} = 1 \quad\Rightarrow\quad C_1 + C_2 = \tfrac{3}{2} \quad \text{(I)} \]
Apply \( y'(0) = 2 \):
\[ 4C_1 - C_2 - 1 = 2 \quad\Rightarrow\quad 4C_1 - C_2 = 3 \quad \text{(II)} \]
Solve the 2×2 system. Add (I) and (II):
\[ 5C_1 = \tfrac{9}{2} \quad\Rightarrow\quad C_1 = \tfrac{9}{10}, \qquad C_2 = \tfrac{3}{2} - \tfrac{9}{10} = \tfrac{3}{5} \]
Final answer:
\[ y(t) = \tfrac{9}{10}e^{4t} + \tfrac{3}{5}e^{-t} - \tfrac{1}{2}e^{2t} \]
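The whole IVP can be cross-checked with sympy's ODE solver, which runs the method and the constant-solving in one call. A sketch (the equation \( y'' - 3y' - 4y = 3e^{2t} \) is reconstructed from the roots and particular solution quoted above, so treat it as an assumption):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Reconstructed IVP: y'' - 3y' - 4y = 3e^(2t), y(0) = 1, y'(0) = 2
ode = sp.Eq(y(t).diff(t, 2) - 3*y(t).diff(t) - 4*y(t), 3*sp.exp(2*t))
sol = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 2})
# the constants come out C1 = 9/10, C2 = 3/5
```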
Physical interpretation
As \( t \to \infty \), the \( e^{4t} \) term dominates — the system is unstable because the characteristic root \( r = 4 \) is positive. The \( e^{-t} \) term is a transient that decays. The \( e^{2t} \) particular piece is dominated by the natural mode \( e^{4t} \) at long times.
§ 11 · Mixed Practice
Additional Examples — All Cases
Equation: \( y'' + 2y' + y = t^2 + 1 \).
Homogeneous: \( r^2 + 2r + 1 = (r+1)^2 = 0 \Rightarrow r = -1 \) repeated. So \( y_c = (C_1 + C_2 t)e^{-t} \).
Guess: \( g(t) = t^2 + 1 \) is degree 2. Guess \( y_p = At^2 + Bt + C \). Overlap? \( y_c \) contains \( e^{-t} \) and \( te^{-t} \), nothing polynomial. No modification.
Derivatives: \( y_p' = 2At + B,\quad y_p'' = 2A \).
Substitute into \( y_p'' + 2y_p' + y_p = t^2 + 1 \):
\[ 2A + 2(2At + B) + (At^2 + Bt + C) = t^2 + 1 \] \[ At^2 + (4A+B)t + (2A + 2B + C) = t^2 + 0{\cdot}t + 1 \]Match coefficients:
\( t^2 \): \( A = 1 \)
\( t^1 \): \( 4A + B = 0 \Rightarrow B = -4 \)
\( t^0 \): \( 2A + 2B + C = 1 \Rightarrow 2 - 8 + C = 1 \Rightarrow C = 7 \)
Result: \( y_p = t^2 - 4t + 7 \). General solution: \( y = (C_1 + C_2 t)e^{-t} + t^2 - 4t + 7 \).
Equation: \( y'' - 4y' + 4y = e^{2t} \).
Homogeneous: \( r^2 - 4r + 4 = (r-2)^2 = 0 \Rightarrow r = 2 \) repeated. So \( y_c = (C_1 + C_2 t)e^{2t} \).
Guess: Naive guess for \( e^{2t} \) is \( Ae^{2t} \). Overlap? \( e^{2t} \in y_c \). Multiply by \( t \): guess \( Ate^{2t} \). Still overlaps? \( te^{2t} \in y_c \) too! Multiply again by \( t \): guess \( y_p = At^2 e^{2t} \). Now check: is \( t^2 e^{2t} \in y_c \)? No — \( y_c \) only goes up to \( te^{2t} \). We're safe.
This is because \( r = 2 \) is a root of multiplicity 2, so we multiply by \( t^2 \).
Derivatives (product rule twice):
\[ y_p' = 2Ate^{2t} + 2At^2 e^{2t} = Ae^{2t}(2t + 2t^2) \] \[ y_p'' = Ae^{2t}(2 + 8t + 4t^2) \]Substitute into \( y_p'' - 4y_p' + 4y_p = e^{2t} \):
\[ Ae^{2t}(2 + 8t + 4t^2) - 4Ae^{2t}(2t + 2t^2) + 4At^2 e^{2t} = e^{2t} \] \[ Ae^{2t}[(2 + 8t + 4t^2) - (8t + 8t^2) + 4t^2] = e^{2t} \] \[ Ae^{2t}[2 + (8t - 8t) + (4t^2 - 8t^2 + 4t^2)] = e^{2t} \] \[ 2Ae^{2t} = e^{2t} \Rightarrow A = \tfrac{1}{2} \]Result: \( y_p = \tfrac{1}{2}t^2 e^{2t} \). General solution: \( y = (C_1 + C_2 t)e^{2t} + \tfrac{1}{2}t^2 e^{2t} \).
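The double-root bookkeeping can be verified by letting the operator act on all three candidate guesses (sketch):

```python
import sympy as sp

t, A = sp.symbols('t A')
L = lambda y: sp.diff(y, t, 2) - 4*sp.diff(y, t) + 4*y   # (r - 2)^2 = 0

assert sp.simplify(L(A*sp.exp(2*t))) == 0      # killed: first natural mode
assert sp.simplify(L(A*t*sp.exp(2*t))) == 0    # killed: second natural mode
res = sp.simplify(L(A*t**2*sp.exp(2*t)))       # survives: 2A*e^(2t)
assert sp.solve(sp.Eq(res, sp.exp(2*t)), A)[0] == sp.Rational(1, 2)
```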
Equation: \( y'' + 4y = e^t\cos(2t) \).
Homogeneous: \( r^2 + 4 = 0 \Rightarrow r = \pm 2i \), so \( y_c = C_1\cos 2t + C_2\sin 2t \).
Guess: \( g(t) = e^t\cos(2t) \) has form \( e^{\alpha t}\cos(\beta t) \) with \( \alpha = 1, \beta = 2 \). Guess: \( y_p = e^t(A\cos 2t + B\sin 2t) \). Overlap? \( y_c \) has no \( e^t \) factor — no modification needed.
Derivatives (product rule twice):
\begin{align} y_p' &= e^t[(A+2B)\cos 2t + (B-2A)\sin 2t]\\ y_p'' &= e^t[(-3A+4B)\cos 2t + (-4A-3B)\sin 2t] \end{align}Substitute into \( y_p'' + 4y_p = e^t\cos 2t \):
\[ e^t[(-3A+4B+4A)\cos 2t + (-4A-3B+4B)\sin 2t] = e^t\cos 2t \] \[ e^t[(A+4B)\cos 2t + (-4A+B)\sin 2t] = e^t\cos 2t \]Match: \( A + 4B = 1 \) and \( -4A + B = 0 \Rightarrow B = 4A \). Substitute: \( A + 16A = 1 \Rightarrow A = \tfrac{1}{17},\ B = \tfrac{4}{17} \).
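The same match can be done symbolically by stripping the common \( e^t \) factor first (sketch):

```python
import sympy as sp

t, A, B = sp.symbols('t A B')
yp = sp.exp(t)*(A*sp.cos(2*t) + B*sp.sin(2*t))
lhs = sp.expand(sp.diff(yp, t, 2) + 4*yp)

reduced = sp.expand(lhs / sp.exp(t))    # every term carries e^t; cancel it
eqs = [reduced.coeff(sp.cos(2*t)) - 1,  # cos coefficient must be 1
       reduced.coeff(sp.sin(2*t))]      # sin coefficient must be 0
sol = sp.solve(eqs, [A, B])             # -> {A: 1/17, B: 4/17}
```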
Result: \( y_p = e^t\!\left(\dfrac{\cos 2t + 4\sin 2t}{17}\right) \).
Equation: \( y'' + 3y' + 2y = te^{-t} \) (identify the correct guess only).
Roots: \( (r+1)(r+2) = 0 \Rightarrow r = -1, -2 \). So \( y_c = C_1 e^{-t} + C_2 e^{-2t} \).
Initial guess for \( g = te^{-t} \): degree-1 polynomial times \( e^{-t} \). Standard guess: \( (At + B)e^{-t} \).
Overlap check: The \( B e^{-t} \) part — is \( e^{-t} \in y_c \)? Yes! Multiply entire guess by \( t \):
\[ y_p = t(At + B)e^{-t} = (At^2 + Bt)e^{-t} \]Is \( te^{-t} \in y_c \)? No — \( y_c \) only has \( e^{-t} \) and \( e^{-2t} \). Safe. Correct guess: \( \boxed{(At^2 + Bt)e^{-t}} \).
Note: we don't need to solve this fully — the point is to identify the correct guess shape before starting any algebra.
Equation: \( y'' - 4y = e^{2t} \) with \( y(0) = 0,\ y'(0) = 0 \).
Homogeneous: \( r^2 - 4 = 0 \Rightarrow r = \pm 2 \), so \( y_c = C_1 e^{2t} + C_2 e^{-2t} \).
Guess: \( g = e^{2t} \). Naive guess \( Ae^{2t} \) — but \( e^{2t} \in y_c \). Multiply by \( t \): \( y_p = Ate^{2t} \). No more overlap.
Derivatives:
\[ y_p' = Ae^{2t}(1 + 2t), \qquad y_p'' = Ae^{2t}(4 + 4t) \]Substitute into \( y_p'' - 4y_p \):
\[ Ae^{2t}(4 + 4t) - 4Ate^{2t} = Ae^{2t}[(4 + 4t) - 4t] = 4Ae^{2t} = e^{2t} \] \[ \Rightarrow A = \tfrac{1}{4},\quad y_p = \tfrac{t}{4}e^{2t} \]General solution: \( y = C_1 e^{2t} + C_2 e^{-2t} + \tfrac{t}{4}e^{2t} \).
Apply \( y(0) = 0 \): \( C_1 + C_2 = 0 \Rightarrow C_2 = -C_1 \).
Differentiate: \( y' = 2C_1 e^{2t} - 2C_2 e^{-2t} + \tfrac{1}{4}e^{2t} + \tfrac{t}{2}e^{2t} \).
Apply \( y'(0) = 0 \): \( 2C_1 - 2C_2 + \tfrac{1}{4} = 0 \Rightarrow 4C_1 = -\tfrac{1}{4} \Rightarrow C_1 = -\tfrac{1}{16},\ C_2 = \tfrac{1}{16} \).
Final answer:
\[ y = -\frac{1}{16}e^{2t} + \frac{1}{16}e^{-2t} + \frac{t}{4}e^{2t} = \frac{1}{16}(e^{-2t} - e^{2t}) + \frac{t}{4}e^{2t} \]
Zero initial conditions but still grows
The system starts from complete rest — yet the response grows unboundedly. This happens because the forcing \( e^{2t} \) resonates with the natural mode \( e^{2t} \). The \( te^{2t} \) term grows faster than \( e^{2t} \) alone, so the response escapes to infinity no matter how you start.
§ 12 · Error Catalog
Where Students Go Wrong
Not solving the homogeneous equation first. You need \( y_c \) before you can detect overlap. Jumping straight to the guess is like walking onto a minefield without a map.
Guessing too small for polynomials. If \( g(t) = t^2 \), guessing only \( At^2 \) is wrong. You must include all lower-degree terms: \( At^2 + Bt + C \). The ODE mixes all powers when you substitute — lower-degree coefficients survive through the \( cy_p \) term.
Guessing only cosine when the forcing is cosine. You always need both \( A\cos(\beta t) + B\sin(\beta t) \) because differentiation mixes the two. Start with just cosine and you'll immediately generate a sine term you didn't account for.
Forgetting to check for the resonance glitch. If your guess is part of \( y_c \), substituting it will give zero on the left, and no value of \( A \) will fix it. Always compare your guess against \( y_c \) before differentiating.
Applying initial conditions to \( y_c \) alone. The ICs apply to the total function \( y = y_c + y_p \). Applying them only to \( y_c \) gives the wrong constants and therefore the wrong solution.
Algebra slop after substitution. The guess-and-substitute phase is conceptually simple but algebraically messy. Missing a negative sign, forgetting a term in the product rule, or miscopying a coefficient will silently corrupt the answer. Write every term.
§ 13 · Quick Reference
The Full Roadmap
The game in one line:
1. Solve \( L[y]=0 \) to get \( y_c \). 2. Guess \( y_p \) from \( g(t) \), fix overlap. 3. Substitute and match to find the constants in \( y_p \). 4. Write \( y = y_c + y_p \). 5. Apply ICs to the whole thing.
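For checking your own work, sympy's `dsolve` runs this whole sequence internally. A sketch on the pure-resonance example \( y'' + 4y = \cos 2t \) from § 9:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) + 4*y(t), sp.cos(2*t))

# Steps 1-4: general solution y = y_c + y_p with C1, C2 still free
general = sp.dsolve(ode, y(t))
assert sp.checkodesol(ode, general)[0]

# Step 5: initial conditions pin the constants (here: start from rest)
unique = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 0})
assert sp.simplify(unique.rhs - t*sp.sin(2*t)/4) == 0   # resonant growth
```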
| If \( g(t) \) looks like... | Guess for \( y_p \) | Modify if... |
|---|---|---|
| \( Ke^{rt} \) | \( Ae^{rt} \) | \( e^{rt} \in y_c \) → use \( Ate^{rt} \) |
| \( P_n(t) \) (deg-\( n \) poly) | \( A_n t^n + \cdots + A_0 \) | \( 1 \in y_c \) (only if \( c=0 \)) → multiply by \( t \) |
| \( K\cos\beta t \) or \( K\sin\beta t \) | \( A\cos\beta t + B\sin\beta t \) | trig terms \( \in y_c \) → multiply by \( t \) |
| \( P_n(t)e^{\alpha t} \) | \( (A_n t^n + \cdots + A_0)e^{\alpha t} \) | \( e^{\alpha t} \in y_c \) → multiply by \( t^s \) |
| \( e^{\alpha t}(\cos\beta t \text{ or } \sin\beta t) \) | \( e^{\alpha t}(A\cos\beta t + B\sin\beta t) \) | overlap with \( y_c \) → multiply by \( t \) |
The single most important formula
\[ y_{\text{general}} = y_c + y_p \]
where \( y_c \) is the general homogeneous solution (with free constants) and \( y_p \) is any single particular solution. Everything else in this topic is just how to find each piece efficiently.
Practice set
These four problems cover every case: \( y'' - y' - 2y = 3t + 1 \) (polynomial), \( y'' - 4y' + 4y = e^{2t} \) (double-root resonance), \( y'' + 9y = \sin(3t) \) (trig resonance), \( y'' - 2y' + y = te^t \) (combo resonance). Work them in order.