eleven9Silicon · Calculus Reference Series

The Integration
Master Reference

Every core method, every proof, every differential — built from scratch, Feynman-style. No steps skipped.

Calc II → Calc III → Signals → EMAG
01

Fundamental Theorem of Calculus

⚡ Core Intuition

Derivative = local rate of change. Integral = total accumulated effect. The FTC is the punchline: these two operations are inverses of each other. Measuring how fast something changes and measuring how much it changes over an interval — same coin, two sides.

Before we prove anything, here's the movie in one sentence: if you keep track of a running total \(F(x) = \int_a^x f(t)\,dt\), then the rate at which that total changes is exactly \(f(x)\). That's it. That's the whole theorem.

Part I — The Derivative of an Accumulation Function

Define the accumulation function:

\[ F(x) = \int_a^x f(t)\,dt \]

What does this mean? We're sweeping from \(a\) to \(x\) and adding up all the tiny \(f(t)\,dt\) contributions. \(F(x)\) is the running total. Now ask: what's \(F'(x)\)?

📐 Proof — FTC Part I
Start from the definition of derivative. We want to know how \(F\) changes near \(x\): \[ F'(x) = \lim_{h \to 0} \frac{F(x+h) - F(x)}{h} \]
Expand using the integral definition. \(F(x+h) - F(x)\) is just the integral from \(x\) to \(x+h\): \[ F(x+h) - F(x) = \int_a^{x+h} f(t)\,dt - \int_a^x f(t)\,dt = \int_x^{x+h} f(t)\,dt \]
So our difference quotient is: \[ F'(x) = \lim_{h \to 0} \frac{1}{h} \int_x^{x+h} f(t)\,dt \]
Geometric intuition: the integral \(\int_x^{x+h} f(t)\,dt\) is the area of a thin sliver of width \(h\). If \(f\) is continuous near \(x\), that sliver's height is approximately \(f(x)\). So the area is approximately \(f(x) \cdot h\). Formally (Mean Value Theorem for Integrals): there exists \(c \in [x, x+h]\) such that \[ \int_x^{x+h} f(t)\,dt = f(c) \cdot h \]
As \(h \to 0\), \(c \to x\), and \(f(c) \to f(x)\) by continuity. Therefore: \[ F'(x) = \lim_{h \to 0} \frac{f(c) \cdot h}{h} = f(x) \quad \blacksquare \]

Part II — The Evaluation Theorem

This is the one you actually use to compute integrals. If \(F'(x) = f(x)\), then:

\[ \int_a^b f(x)\,dx = F(b) - F(a) \]
📐 Proof — FTC Part II
Let \(G(x) = \int_a^x f(t)\,dt\). By Part I, \(G'(x) = f(x)\).
Suppose \(F\) is any antiderivative of \(f\), so \(F'(x) = f(x) = G'(x)\). Then \((F - G)' = 0\), so \(F(x) - G(x) = C\) (constant) for all \(x\).
Evaluate at \(x = a\): \(G(a) = \int_a^a f = 0\), so \(F(a) - 0 = C\), giving \(C = F(a)\). Therefore \(G(x) = F(x) - F(a)\).
Evaluate at \(x = b\): \[ \int_a^b f(t)\,dt = G(b) = F(b) - F(a) \quad \blacksquare \]
✎ Example

\(\displaystyle\int_1^3 x^2\,dx\). We need \(F(x)\) such that \(F'(x) = x^2\). By power rule in reverse: \(F(x) = \frac{x^3}{3}\).

\[ \int_1^3 x^2\,dx = \frac{3^3}{3} - \frac{1^3}{3} = 9 - \frac{1}{3} = \frac{26}{3} \]
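Both parts of the theorem can be checked numerically. Below is a minimal Python sketch (the helper name `F` and the step sizes are my own choices, not from any library): it builds the accumulation function \(F(x) = \int_1^x t^2\,dt\) as a midpoint Riemann sum, confirms \(F(3) = \frac{26}{3}\) (Part II), and confirms that the difference quotient of the running total recovers \(f(x) = x^2\) (Part I).

```python
def F(x, n=100_000):
    """Accumulation function F(x) = ∫_1^x t^2 dt via a midpoint Riemann sum."""
    h = (x - 1.0) / n
    return sum((1.0 + (i + 0.5) * h) ** 2 for i in range(n)) * h

# FTC Part II: ∫_1^3 x^2 dx = 26/3
print(F(3.0))                      # ≈ 8.6667

# FTC Part I: the rate of change of the running total is f(x) = x^2
h = 1e-4
print((F(2.0 + h) - F(2.0)) / h)   # ≈ 4.0
```

The second print is exactly the difference quotient from the Part I proof, with a small but finite \(h\).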
02

Antiderivatives — The Home Base

An antiderivative \(F(x)\) of \(f(x)\) is any function satisfying \(F'(x) = f(x)\). We always add \(+C\) because the derivative of a constant is zero — you can't recover the constant from the derivative alone.

Power Rule (Reversed)

📐 Where it comes from

We know \(\frac{d}{dx}[x^{n+1}] = (n+1)x^n\). To undo that:

\[ \int x^n\,dx = \frac{x^{n+1}}{n+1} + C, \quad n \neq -1 \]

The \(n+1\) in the denominator cancels the one that appears when you differentiate back. The exception \(n = -1\) is \(\int \frac{1}{x}\,dx = \ln|x| + C\), because \(\frac{d}{dx}[\ln x] = \frac{1}{x}\).

Core Antiderivative Table

| Function \(f(x)\) | Antiderivative \(F(x)\) | Why |
| --- | --- | --- |
| \(x^n\) | \(\dfrac{x^{n+1}}{n+1}+C\) | Reverse power rule |
| \(\dfrac{1}{x}\) | \(\ln\lvert x\rvert+C\) | \(\frac{d}{dx}\ln\lvert x\rvert=\frac{1}{x}\) |
| \(e^x\) | \(e^x+C\) | \(e^x\) is its own derivative |
| \(a^x\) | \(\dfrac{a^x}{\ln a}+C\) | \(\frac{d}{dx}[a^x]=a^x\ln a\) |
| \(\sin x\) | \(-\cos x+C\) | \(\frac{d}{dx}[-\cos x]=\sin x\) |
| \(\cos x\) | \(\sin x+C\) | \(\frac{d}{dx}[\sin x]=\cos x\) |
| \(\sec^2 x\) | \(\tan x+C\) | \(\frac{d}{dx}[\tan x]=\sec^2 x\) |
| \(\sec x\tan x\) | \(\sec x+C\) | \(\frac{d}{dx}[\sec x]=\sec x\tan x\) |
| \(\dfrac{1}{\sqrt{1-x^2}}\) | \(\arcsin x+C\) | Inverse trig derivative |
| \(\dfrac{1}{1+x^2}\) | \(\arctan x+C\) | Inverse trig derivative |
03

Differentials — Real Objects, Not Decoration

🔑 Why this section exists

Most courses treat \(dx\) as "just notation." That's a trap. Understanding \(dx\) as a real linear object is the key that unlocks u-substitution, change of variables, Jacobians, line integrals, and surface integrals. Everything flows from this.

Layer 1 — Single Variable: \(dx\) as a tiny input change

If \(y = f(x)\), the differential \(dy\) is defined as:

\[ dy = f'(x)\,dx \]

Mechanically: \(dx\) is an infinitesimally small change in the input. \(dy\) is the corresponding linear approximation to the change in output. This is not the actual change \(\Delta y\); it's the change predicted by the tangent line.

⚡ Feynman View

Think of \(dx\) as a tiny ruler. The function \(f\) stretches that ruler by a factor of \(f'(x)\) to give \(dy = f'(x)\,dx\). Substitution works because when you change variables, you're just relabeling the ruler.

Layer 2 — Substitution: how differentials transform

If \(u = g(x)\), then differentiating both sides:

\[ du = g'(x)\,dx \]

This is not a trick — it's the chain rule applied to the differential. When we "substitute," we're literally replacing the \(g'(x)\,dx\) piece inside the integral with \(du\). The differential is the bridge.

Layer 3 — Multivariable: Total Differential

If \(z = f(x, y)\), the total differential captures how \(z\) changes due to changes in both \(x\) and \(y\):

\[ dz = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy \]

Each term \(\frac{\partial f}{\partial x}\,dx\) says: "how much does \(z\) change if only \(x\) wiggles by \(dx\)?" The total differential adds these contributions. This is the multivariable tangent plane approximation:

\[ \Delta z \approx f_x\,\Delta x + f_y\,\Delta y \]

The Grand Payoff — Area Elements

When you change coordinates, the differential area element \(dA\) transforms too. This is why:

  • Polar: \(dA = r\,dr\,d\theta\) (the \(r\) comes from stretching)
  • Cylindrical: \(dV = r\,dz\,dr\,d\theta\)
  • Spherical: \(dV = \rho^2\sin\phi\,d\rho\,d\theta\,d\phi\)

Those scale factors aren't magic — they're the determinants of the Jacobian matrix of the coordinate transformation. More on this in Section 10.

04

u-Substitution

⚡ Core Intuition

You're not doing a trick. You are relabeling the input variable so the integral matches the natural geometry of the expression. It's the chain rule run in reverse.

The Mechanism — Chain Rule Reversed

The chain rule says: \(\frac{d}{dx}[F(g(x))] = F'(g(x))\cdot g'(x)\). So integrating both sides:

\[ \int F'(g(x))\cdot g'(x)\,dx = F(g(x)) + C \]

Now set \(u = g(x)\), so \(du = g'(x)\,dx\). The integral becomes:

\[ \int F'(u)\,du = F(u) + C \]

That's it. The \(g'(x)\,dx\) piece in the original integrand must be present (up to a constant) for u-sub to work. You're recognizing that the derivative of the inner function is sitting there, waiting.

Algorithm

1. Identify an inner function \(g(x)\) whose derivative (up to a constant) also appears in the integrand. Set \(u = g(x)\).
2. Compute \(du = g'(x)\,dx\) and solve for \(dx = \frac{du}{g'(x)}\).
3. Substitute \(u\) and \(du\) throughout. The \(g'(x)\) factors should cancel. If they don't, u-sub might not be the right tool.
4. Integrate in terms of \(u\).
5. Back-substitute \(u = g(x)\) to return to \(x\). (For definite integrals, change bounds instead.)

Definite Integral Bound Change

When limits are given, transform them: if \(u = g(x)\), then the limits \(x = a, b\) become \(u = g(a), g(b)\). No need to back-substitute.

✎ Example 1 — Basic

Compute \(\displaystyle\int 2x\cos(x^2)\,dx\).

  • \(u = x^2\) (inner function spotted)
  • \(du = 2x\,dx\) (the \(2x\,dx\) is already in the integrand)
  • \(\displaystyle\int \cos(u)\,du\) (clean substitution)
  • \(= \sin(u) + C = \sin(x^2) + C\) (back-substitute)
✎ Example 2 — Definite

Compute \(\displaystyle\int_0^1 3x^2 e^{x^3}\,dx\).

  • \(u = x^3\) (inner function)
  • \(du = 3x^2\,dx\) (present in integrand)
  • Bounds: \(x=0 \Rightarrow u=0\), \(x=1 \Rightarrow u=1\) (transform bounds)
  • \(\displaystyle\int_0^1 e^u\,du = e^1 - e^0 = e - 1\) (no back-sub needed)
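Don't take the substitution on faith: a brute-force sum of the original integrand should land on the same number as the \(u\)-world answer \(e - 1\). A quick Python sketch (the helper `riemann` is my own, not a library function):

```python
import math

def riemann(f, a, b, n=200_000):
    """Midpoint Riemann sum for ∫_a^b f(x) dx."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Example 2 above: ∫_0^1 3x² e^{x³} dx; after u = x³ this is ∫_0^1 e^u du = e - 1
numeric = riemann(lambda x: 3 * x**2 * math.exp(x**3), 0.0, 1.0)
closed_form = math.e - 1
print(numeric, closed_form)   # both ≈ 1.71828
```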
⚠ Common Mistake

You cannot use u-sub and just leave stray \(x\) terms behind. If after substituting you still have \(x\)'s in the integrand that aren't part of \(du\), either use algebra to express them in terms of \(u\), or try a different method.

05

Integration by Parts

⚡ Core Intuition

This is the product rule run backward. When you have a product of two functions, you can trade the hard integral for a (hopefully) easier one by differentiating one factor and integrating the other.

Proof from the Product Rule

📐 Derivation
Start with the product rule for \((u \cdot v)\): \[ \frac{d}{dx}[uv] = u'v + uv' \]
Rearrange for \(uv'\): \[ uv' = \frac{d}{dx}[uv] - u'v \]
Integrate both sides with respect to \(x\): \[ \int uv'\,dx = uv - \int u'v\,dx \]
In differential notation, \(v'\,dx = dv\) and \(u'\,dx = du\): \[ \boxed{\int u\,dv = uv - \int v\,du} \quad \blacksquare \]

How to Choose \(u\) and \(dv\) — LIATE Guide

Set \(u\) = the function that comes first in this priority list (it gets differentiated, so you want it to simplify):

| Letter | Type | Example |
| --- | --- | --- |
| L | Logarithms | \(\ln x, \log x\) |
| I | Inverse trig | \(\arctan x, \arcsin x\) |
| A | Algebraic | \(x^n, \sqrt{x}\) |
| T | Trig | \(\sin x, \cos x\) |
| E | Exponential | \(e^x, a^x\) |

LIATE is a heuristic, not a law. The goal: pick \(u\) that simplifies when differentiated, and \(dv\) that you can actually integrate.

✎ Example 1 — \(\int x e^x\,dx\)
  • \(u = x\), \(dv = e^x\,dx\) (Algebraic before Exponential in LIATE)
  • \(du = dx\), \(v = e^x\) (differentiate \(u\), integrate \(dv\))
  • \(\int x e^x\,dx = x e^x - \int e^x\,dx\) (apply the IBP formula)
  • \(= x e^x - e^x + C = e^x(x-1)+C\) (done)
✎ Example 2 — \(\int \ln x\,dx\)

This one looks like a single function, not a product. Trick: write it as \(\ln x \cdot 1\).

  • \(u = \ln x\), \(dv = 1\,dx\) (Log before Algebraic)
  • \(du = \frac{1}{x}\,dx\), \(v = x\) (differentiate \(u\), integrate \(dv\))
  • \(\int \ln x\,dx = x\ln x - \int x \cdot \frac{1}{x}\,dx\) (IBP formula)
  • \(= x\ln x - \int 1\,dx = x\ln x - x + C\) (simplify and integrate)

Repeated IBP

Sometimes you apply IBP twice (or more). In cases like \(\int x^2 e^x\,dx\), each round reduces the power of \(x\) by one. A tabular method organizes this:

| Sign | Differentiate (u-side) | Integrate (dv-side) |
| --- | --- | --- |
| \(+\) | \(x^2\) | \(e^x\) |
| \(-\) | \(2x\) | \(e^x\) |
| \(+\) | \(2\) | \(e^x\) |
| \(-\) | \(0\) | \(e^x\) |

Multiply diagonally with alternating signs: \(\int x^2 e^x\,dx = x^2 e^x - 2x e^x + 2e^x + C = e^x(x^2 - 2x + 2)+C\).

⚠ The "Circular" Trick

For \(\int e^x\sin x\,dx\), applying IBP twice gives back the original integral. Don't panic — call the original integral \(I\), solve the equation \(I = (\text{expression}) - I\) to get \(2I = \text{expression}\), thus \(I = \frac{1}{2}(\text{expression})\).
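Carrying out the two IBP rounds for \(\int e^x\sin x\,dx\) and solving the circular equation gives the standard result \(I = \frac{1}{2}e^x(\sin x - \cos x) + C\). The sketch below (my own helper names) checks that closed form against a brute-force Riemann sum on \([0, \pi]\):

```python
import math

def riemann(f, a, b, n=200_000):
    """Midpoint Riemann sum for ∫_a^b f(x) dx."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Solving I = (expression) - I gives I = (1/2) e^x (sin x - cos x) + C
antideriv = lambda x: 0.5 * math.exp(x) * (math.sin(x) - math.cos(x))

numeric = riemann(lambda x: math.exp(x) * math.sin(x), 0.0, math.pi)
closed_form = antideriv(math.pi) - antideriv(0.0)   # = (e^π + 1)/2
print(numeric, closed_form)   # both ≈ 12.0703
```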

06

Trig Integrals

⚡ Core Intuition

Trig integrals are about pattern recognition + strategic use of Pythagorean identities to create something you can substitute. The game: look at the powers, decide what to peel off, use an identity to rewrite the rest in terms of the "other" trig function, then u-sub.

Essential Pythagorean Identities (always keep these hot)

\[\sin^2 x + \cos^2 x = 1 \qquad 1 + \tan^2 x = \sec^2 x \qquad 1 + \cot^2 x = \csc^2 x\]
\[\sin^2 x = \frac{1 - \cos 2x}{2} \qquad \cos^2 x = \frac{1 + \cos 2x}{2}\]

Family 1

\[ \int \sin^m x\,\cos^n x\,dx \]
| Situation | Strategy | Why it works |
| --- | --- | --- |
| \(m\) odd | Peel one \(\sin x\), write \(\sin^{m-1}x = (1-\cos^2 x)^{(m-1)/2}\), let \(u=\cos x\) | \(du = -\sin x\,dx\) absorbs the peeled sin |
| \(n\) odd | Peel one \(\cos x\), write \(\cos^{n-1}x = (1-\sin^2 x)^{(n-1)/2}\), let \(u=\sin x\) | \(du = \cos x\,dx\) absorbs the peeled cos |
| Both even | Use half-angle identities to reduce powers | Converts to lower-degree trig integrals |
✎ Example — \(m=3\) odd

Compute \(\displaystyle\int \sin^3 x\cos^2 x\,dx\).

  • \(\sin^3 x = \sin^2 x \cdot \sin x = (1-\cos^2 x)\sin x\) (peel one sin, use identity)
  • \(\int (1-\cos^2 x)\cos^2 x\sin x\,dx\) (rewritten integrand)
  • \(u = \cos x, \; du = -\sin x\,dx\) (u-sub; peeled sin absorbs into \(du\))
  • \(-\int (1-u^2)u^2\,du = -\int (u^2 - u^4)\,du\) (clean polynomial)
  • \(= -\frac{u^3}{3} + \frac{u^5}{5} + C = -\frac{\cos^3 x}{3} + \frac{\cos^5 x}{5} + C\) (integrate, back-sub)
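You can always audit a trig antiderivative by differentiating it back. This Python sketch (names are my own) checks the result above with a central difference at a few sample points:

```python
import math

# Result from the example above
F = lambda x: -math.cos(x)**3 / 3 + math.cos(x)**5 / 5
f = lambda x: math.sin(x)**3 * math.cos(x)**2

# F'(x) should equal f(x); check with a central difference
h = 1e-6
for x in (0.3, 1.0, 2.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    print(x, abs(deriv - f(x)))   # differences ≈ 0
```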

Family 2

\[ \int \tan^m x\,\sec^n x\,dx \]
| Situation | Strategy |
| --- | --- |
| \(n\) even | Peel \(\sec^2 x\), write remaining \(\sec^{n-2}x\) in terms of \(\tan x\) via \(\sec^2 x = 1+\tan^2 x\), let \(u=\tan x\) |
| \(m\) odd | Peel \(\sec x\tan x\), write remaining \(\tan^{m-1}x\) in terms of \(\sec x\), let \(u=\sec x\) |
07

Trigonometric Substitution

⚡ Core Intuition

Radicals like \(\sqrt{a^2 - x^2}\) encode a circle. You're choosing a variable that "fits the geometry" of the radical — exploiting a Pythagorean identity to kill the square root. The right triangle is the key picture.

The Three Core Substitutions

📋 Substitution Table
| Radical Form | Substitution | Identity Used | Right Triangle |
| --- | --- | --- | --- |
| \(\sqrt{a^2 - x^2}\) | \(x = a\sin\theta\) | \(a^2 - a^2\sin^2\theta = a^2\cos^2\theta\) | hyp \(a\), opp \(x\), adj \(\sqrt{a^2-x^2}\) |
| \(\sqrt{a^2 + x^2}\) | \(x = a\tan\theta\) | \(a^2 + a^2\tan^2\theta = a^2\sec^2\theta\) | adj \(a\), opp \(x\), hyp \(\sqrt{a^2+x^2}\) |
| \(\sqrt{x^2 - a^2}\) | \(x = a\sec\theta\) | \(a^2\sec^2\theta - a^2 = a^2\tan^2\theta\) | hyp \(x\), adj \(a\), opp \(\sqrt{x^2-a^2}\) |

Why the Identities Kill the Radical

For \(x = a\sin\theta\):

  • \(\sqrt{a^2 - x^2} = \sqrt{a^2 - a^2\sin^2\theta}\) (substitute)
  • \(= \sqrt{a^2(1 - \sin^2\theta)}\) (factor out \(a^2\))
  • \(= \sqrt{a^2\cos^2\theta} = a\cos\theta\) (\(1-\sin^2\theta = \cos^2\theta\); radical gone. Dropping the absolute value is valid because \(\cos\theta \geq 0\) on the standard range \(-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}\).)
✎ Example — \(\displaystyle\int \frac{dx}{\sqrt{9-x^2}}\)
\(a = 3\), use \(x = 3\sin\theta\), so \(dx = 3\cos\theta\,d\theta\).
\(\sqrt{9-x^2} = \sqrt{9-9\sin^2\theta} = 3\cos\theta\)
\(\displaystyle\int \frac{3\cos\theta\,d\theta}{3\cos\theta} = \int d\theta = \theta + C\)
Back-substitute: \(\sin\theta = \frac{x}{3}\) so \(\theta = \arcsin\left(\frac{x}{3}\right)\). Answer: \(\arcsin\!\left(\frac{x}{3}\right) + C\).
✎ Example 2 — \(\displaystyle\int \sqrt{1-x^2}\,dx\) (area of half-circle)
\(x = \sin\theta\), \(dx = \cos\theta\,d\theta\), \(\sqrt{1-x^2}=\cos\theta\).
\(\displaystyle\int\cos^2\theta\,d\theta = \int\frac{1+\cos 2\theta}{2}\,d\theta = \frac{\theta}{2} + \frac{\sin 2\theta}{4} + C\)
\(\sin 2\theta = 2\sin\theta\cos\theta = 2x\sqrt{1-x^2}\). Back-sub: \(\frac{\arcsin x}{2} + \frac{x\sqrt{1-x^2}}{2} + C\). This is geometrically the area of a circular segment — beautiful.
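A numerical cross-check of Example 2 (illustrative Python sketch; the helper `riemann` is my own): the closed form should match a brute-force sum on \([0, 0.5]\), and as a bonus, integrating over \([-1, 1]\) should give the half-disk area \(\pi/2\).

```python
import math

def riemann(f, a, b, n=200_000):
    """Midpoint Riemann sum for ∫_a^b f(x) dx."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Antiderivative from Example 2: arcsin(x)/2 + x·√(1-x²)/2
F = lambda x: math.asin(x) / 2 + x * math.sqrt(1 - x * x) / 2
circle = lambda x: math.sqrt(1 - x * x)

print(riemann(circle, 0.0, 0.5), F(0.5) - F(0.0))   # both ≈ 0.4783
# sanity check: full half-disk area ∫_{-1}^{1} √(1-x²) dx = π/2
print(riemann(circle, -1.0, 1.0), math.pi / 2)      # both ≈ 1.5708
```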
08

Partial Fractions

⚡ Core Intuition

Break one ugly rational function into a sum of simpler fractions whose antiderivatives you already know. It's the reverse of adding fractions — you're decomposing instead of combining. Critical for Laplace transforms and control systems.

When to Use

When the integrand is a rational function \(\frac{P(x)}{Q(x)}\). First check: if degree of \(P \geq\) degree of \(Q\), do polynomial long division first to get \(\frac{P}{Q} = (\text{polynomial}) + \frac{R(x)}{Q(x)}\) where \(\deg R < \deg Q\).

The Four Cases

Case 1 — Distinct Linear Factors

If \(Q(x) = (x-a)(x-b)\cdots\), write:

\[ \frac{P(x)}{(x-a)(x-b)} = \frac{A}{x-a} + \frac{B}{x-b} \]

Multiply both sides by \((x-a)(x-b)\), then plug in \(x = a\) and \(x = b\) to solve for \(A\) and \(B\) directly (the "cover-up method").

Case 2 — Repeated Linear Factors

For \((x-a)^k\), you need one term for each power:

\[ \frac{P(x)}{(x-a)^2} = \frac{A_1}{x-a} + \frac{A_2}{(x-a)^2} \]

Case 3 — Irreducible Quadratics

For a factor \(x^2 + bx + c\) with no real roots:

\[ \frac{P(x)}{x^2+bx+c} = \frac{Ax+B}{x^2+bx+c} \]

The numerator needs to be linear (one degree below the denominator).

Case 4 — Repeated Irreducible Quadratics

\[ \frac{P(x)}{(x^2+1)^2} = \frac{Ax+B}{x^2+1} + \frac{Cx+D}{(x^2+1)^2} \]
✎ Example — \(\displaystyle\int \frac{x+5}{x^2+x-2}\,dx\)
Factor denominator: \(x^2+x-2 = (x+2)(x-1)\).
Decompose: \(\dfrac{x+5}{(x+2)(x-1)} = \dfrac{A}{x+2} + \dfrac{B}{x-1}\).
Multiply through: \(x+5 = A(x-1) + B(x+2)\).
Plug \(x=1\): \(6 = 3B \Rightarrow B=2\).
Plug \(x=-2\): \(3 = -3A \Rightarrow A=-1\).
\(\displaystyle\int\left(\frac{-1}{x+2}+\frac{2}{x-1}\right)dx = -\ln|x+2| + 2\ln|x-1| + C\)
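The decomposition itself (and the cover-up method) is easy to audit in code. A minimal Python sketch of the example above, with the cover-up evaluations written out explicitly:

```python
# Decomposition from the example: (x+5)/((x+2)(x-1)) = -1/(x+2) + 2/(x-1)
original = lambda x: (x + 5) / ((x + 2) * (x - 1))
decomposed = lambda x: -1 / (x + 2) + 2 / (x - 1)

for x in (0.0, 3.0, -5.0, 10.0):
    print(x, abs(original(x) - decomposed(x)))   # differences ≈ 0

# Cover-up method as code: A = (x+5)/(x-1) at x = -2, B = (x+5)/(x+2) at x = 1
A = (-2 + 5) / (-2 - 1)   # -1
B = (1 + 5) / (1 + 2)     #  2
print(A, B)
```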
09

Improper Integrals

⚡ Core Intuition

What happens when the interval is infinite, or the function blows up somewhere? We can't just plug in. We replace the bad endpoint with a limit and ask: does that limit exist? If yes, the integral converges. If no, it diverges.

Type I — Infinite Limits

\[ \int_a^\infty f(x)\,dx = \lim_{t\to\infty}\int_a^t f(x)\,dx \]
\[ \int_{-\infty}^b f(x)\,dx = \lim_{t\to-\infty}\int_t^b f(x)\,dx \]

For doubly infinite integrals, split at any convenient point \(c\): both halves must converge independently.

✎ Example — \(\displaystyle\int_1^\infty \frac{1}{x^p}\,dx\)

This is the \(p\)-integral, fundamental to series convergence.

For \(p \neq 1\): \(\displaystyle\int_1^t x^{-p}\,dx = \left[\frac{x^{1-p}}{1-p}\right]_1^t = \frac{t^{1-p}-1}{1-p}\)
As \(t\to\infty\): if \(p > 1\), then \(1-p < 0\) so \(t^{1-p}\to 0\). Integral \(= \frac{1}{p-1}\). Converges.
If \(p < 1\), then \(t^{1-p}\to\infty\). Diverges.
For \(p = 1\): \(\int_1^\infty \frac{dx}{x} = \lim_{t\to\infty}\ln t = \infty\). Diverges.
Key Result
\[\int_1^\infty \frac{1}{x^p}\,dx \text{ converges} \iff p > 1\]
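You can watch this convergence/divergence happen by computing the partial integrals \(\int_1^t x^{-p}\,dx\) for growing \(t\). A Python sketch (the helper `tail` is my own name):

```python
def tail(p, t, n=200_000):
    """Midpoint Riemann sum for the partial integral ∫_1^t x^{-p} dx."""
    h = (t - 1.0) / n
    return sum((1.0 + (i + 0.5) * h) ** (-p) for i in range(n)) * h

for t in (10.0, 100.0, 1000.0):
    print(t, tail(2.0, t), tail(0.5, t))
# p = 2 column approaches 1/(p-1) = 1 (converges)
# p = 1/2 column keeps growing without bound (diverges)
```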

Type II — Vertical Asymptote

If \(f\) blows up at \(x = c \in [a,b]\):

\[ \int_a^b f(x)\,dx = \lim_{t\to c^-}\int_a^t f(x)\,dx + \lim_{t\to c^+}\int_t^b f(x)\,dx \]

Both limits must converge. A common mistake: forgetting the singularity in the middle and computing as if the function were bounded.

⚠ Classic Trap

\(\displaystyle\int_{-1}^1 \frac{1}{x^2}\,dx\) — the integrand blows up at \(x=0\). Naively applying FTC gives \(-1/x \big|_{-1}^1 = -2\), which is wrong (a positive function can't have a negative integral). You must split at 0, and both halves diverge. The integral diverges.

10

Change of Variables & Jacobians

⚡ Core Intuition

When you change coordinates, you're stretching, squishing, and rotating the tiny area/volume elements. The Jacobian determinant measures exactly how much a coordinate transformation stretches infinitesimal patches. It's the multivariable version of \(du = g'(x)\,dx\).

1D Case — Sanity Check

You already know this: if \(u = g(x)\), then \(du = g'(x)\,dx\). The "Jacobian" in 1D is just \(g'(x)\). It tells you how the length element stretches.

2D Jacobian — The Core Construction

Suppose we change from \((x,y)\) to \((u,v)\) via \(x = x(u,v)\), \(y = y(u,v)\). A tiny rectangle \(du\,dv\) in \((u,v)\)-space maps to a parallelogram in \((x,y)\)-space. The area of that parallelogram is:

\[ dA_{xy} = |J|\,du\,dv \]

where the Jacobian determinant is:

\[ J = \frac{\partial(x,y)}{\partial(u,v)} = \begin{vmatrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} \end{vmatrix} = \frac{\partial x}{\partial u}\frac{\partial y}{\partial v} - \frac{\partial x}{\partial v}\frac{\partial y}{\partial u} \]
📐 Why it's a determinant

A tiny step \(du\) in the \(u\)-direction maps to the vector \(\left(\frac{\partial x}{\partial u}, \frac{\partial y}{\partial u}\right)du\). A tiny step \(dv\) in the \(v\)-direction maps to \(\left(\frac{\partial x}{\partial v}, \frac{\partial y}{\partial v}\right)dv\). The parallelogram spanned by these two vectors has area equal to the absolute value of their cross product — which is exactly the determinant of the Jacobian matrix.

Polar Coordinates — Full Derivation

Let \(x = r\cos\theta\), \(y = r\sin\theta\). Compute the Jacobian:

  • \(\dfrac{\partial x}{\partial r} = \cos\theta, \quad \dfrac{\partial x}{\partial\theta} = -r\sin\theta\) (partial derivatives of \(x\))
  • \(\dfrac{\partial y}{\partial r} = \sin\theta, \quad \dfrac{\partial y}{\partial\theta} = r\cos\theta\) (partial derivatives of \(y\))
  • \(J = \begin{vmatrix}\cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta\end{vmatrix} = r\cos^2\theta + r\sin^2\theta = r\) (determinant; Pythagorean identity)
  • \(dA = r\,dr\,d\theta\) (the \(r\) appears naturally, not by magic)
🔑 This is where the polar area element comes from

The factor \(r\) in \(dA = r\,dr\,d\theta\) is the Jacobian of the polar coordinate transformation. A thin ring of radius \(r\) and width \(dr\) has arc length \(r\,d\theta\), so its area is \(r\,dr\,d\theta\). The Jacobian formalizes this geometric stretching — it's not magic, it's a determinant.

General Change of Variables Formula

\[ \iint_R f(x,y)\,dA = \iint_S f(x(u,v),\,y(u,v))\,\left|\frac{\partial(x,y)}{\partial(u,v)}\right|\,du\,dv \]
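Here is the formula in action on a classic example of my own choosing: \(\iint e^{-(x^2+y^2)}\,dA\) over the unit disk. In polar the integrand becomes \(e^{-r^2}\), the Jacobian supplies the factor \(r\), and the inner integral falls to the u-sub \(u = r^2\), giving \(\pi(1 - e^{-1})\). A Python sketch confirming this:

```python
import math

# ∬_{unit disk} e^{-(x²+y²)} dA in polar: ∫_0^{2π} ∫_0^1 e^{-r²} · r dr dθ
def polar_integral(n=200_000):
    dr = 1.0 / n
    # midpoint sum over r of e^{-r²} · r (the r is the Jacobian factor)
    radial = sum(math.exp(-((i + 0.5) * dr) ** 2) * ((i + 0.5) * dr)
                 for i in range(n)) * dr
    return 2 * math.pi * radial   # integrand is θ-free, so θ contributes 2π

closed_form = math.pi * (1 - math.exp(-1))   # inner integral via u = r²
print(polar_integral(), closed_form)         # both ≈ 1.9859
```

Note there is no clean way to do this integral in Cartesian coordinates; the polar Jacobian is what makes it elementary.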
11

Double & Triple Integrals

⚡ Core Intuition

A single integral sums \(f(x)\,dx\) — tiny contributions along a line. A double integral sums \(f(x,y)\,dA\) — tiny contributions over a region. A triple integral sums \(f(x,y,z)\,dV\) over a solid. Same movie, one more dimension each time.

Fubini's Theorem — Iterated Integration

For a rectangle \(R = [a,b]\times[c,d]\):

\[ \iint_R f(x,y)\,dA = \int_a^b\int_c^d f(x,y)\,dy\,dx = \int_c^d\int_a^b f(x,y)\,dx\,dy \]

You can switch the order of integration for a rectangle (as long as \(f\) is continuous). The intuition: you're summing along columns then rows, or rows then columns — same total.

Non-Rectangular Regions — Type I and Type II

Type I (bounded by functions of \(x\)): \(R = \{(x,y): a \leq x \leq b,\; g_1(x) \leq y \leq g_2(x)\}\):

\[ \iint_R f\,dA = \int_a^b\int_{g_1(x)}^{g_2(x)} f(x,y)\,dy\,dx \]

Type II (bounded by functions of \(y\)): integrate in the other order.

Geometric Meaning

| Integral | When \(f=1\) | When \(f = \rho\) (density) |
| --- | --- | --- |
| \(\iint_R dA\) | Area of \(R\) | |
| \(\iint_R f\,dA\) | Volume under surface \(z=f\) | Mass of 2D plate |
| \(\iiint_E dV\) | Volume of \(E\) | |
| \(\iiint_E f\,dV\) | 4D "volume" | Mass of 3D solid |

Average Value

\[ f_{\text{avg}} = \frac{1}{\text{Area}(R)}\iint_R f(x,y)\,dA \]
✎ Example — \(\iint_R x^2 y\,dA\) over \(R = [0,2]\times[1,3]\)
\(\displaystyle\int_0^2\int_1^3 x^2 y\,dy\,dx\)
Inner integral (treat \(x\) as constant): \(\displaystyle\int_1^3 x^2 y\,dy = x^2\left[\frac{y^2}{2}\right]_1^3 = x^2\cdot\frac{9-1}{2} = 4x^2\)
Outer integral: \(\displaystyle\int_0^2 4x^2\,dx = 4\cdot\frac{x^3}{3}\Big|_0^2 = \frac{32}{3}\)
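The iterated structure translates directly into nested sums. This Python sketch (helper name mine) reproduces \(32/3\) and, by swapping the roles of \(x\) and \(y\), illustrates Fubini's order-independence:

```python
# ∬_R x² y dA over R = [0,2]×[1,3] via iterated midpoint sums
def double_midpoint(f, a, b, c, d, n=400):
    hx, hy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):               # outer sum over x
        x = a + (i + 0.5) * hx
        for j in range(n):           # inner sum over y
            y = c + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

f = lambda x, y: x**2 * y
print(double_midpoint(f, 0.0, 2.0, 1.0, 3.0))                      # ≈ 32/3
# Fubini: integrating in the other order gives the same total
print(double_midpoint(lambda y, x: f(x, y), 1.0, 3.0, 0.0, 2.0))   # ≈ 32/3
```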
12

Curvilinear Coordinates

⚡ Core Intuition

When your region has circular, cylindrical, or spherical symmetry, the Cartesian grid is fighting the geometry. Switch to a coordinate system that matches the shape — the bounds simplify, and the Jacobian handles the area/volume scaling automatically.

Polar (2D)

\[ x = r\cos\theta, \quad y = r\sin\theta, \quad dA = r\,dr\,d\theta \]

Use when region is a disk, annulus, wedge, or any boundary described by \(r = f(\theta)\).

✎ Example — Area of disk radius \(a\)
\[\iint_{\text{disk}} dA = \int_0^{2\pi}\int_0^a r\,dr\,d\theta = 2\pi\cdot\frac{a^2}{2} = \pi a^2 \checkmark\]

Cylindrical (3D)

\[ x = r\cos\theta, \quad y = r\sin\theta, \quad z = z, \quad dV = r\,dz\,dr\,d\theta \]

Use for cylinders, cones, any solid with circular cross-sections. The Jacobian is the same \(r\) as polar — the \(z\) direction doesn't get distorted.

Spherical (3D) — Full Jacobian Derivation

\[ x = \rho\sin\phi\cos\theta, \quad y = \rho\sin\phi\sin\theta, \quad z = \rho\cos\phi \]

Variables: \(\rho\) = radial distance from origin, \(\phi\) = polar angle from \(+z\)-axis (\(0 \leq \phi \leq \pi\)), \(\theta\) = azimuthal angle in \(xy\)-plane.

📐 Why \(dV = \rho^2\sin\phi\,d\rho\,d\theta\,d\phi\)

The Jacobian matrix \(\partial(x,y,z)/\partial(\rho,\theta,\phi)\) is 3×3 (rows = components of \((x,y,z)\), columns = partial derivatives with respect to \(\rho\), \(\theta\), \(\phi\)). Computing its determinant:

\[ J = \begin{vmatrix} \sin\phi\cos\theta & -\rho\sin\phi\sin\theta & \rho\cos\phi\cos\theta \\ \sin\phi\sin\theta & \rho\sin\phi\cos\theta & \rho\cos\phi\sin\theta \\ \cos\phi & 0 & -\rho\sin\phi \end{vmatrix} = -\rho^2\sin\phi \]

The minus sign just reflects this particular ordering of the variables; volume scaling uses the absolute value. Since \(\sin\phi \geq 0\) on \(0 \leq \phi \leq \pi\), we get \(|J| = \rho^2\sin\phi\), and therefore \(dV = \rho^2\sin\phi\,d\rho\,d\theta\,d\phi\).

Geometric interpretation: a tiny spherical box at \((\rho,\theta,\phi)\) has sides \(d\rho\) (radial), \(\rho\sin\phi\,d\theta\) (latitudinal arc), and \(\rho\,d\phi\) (longitudinal arc). Volume = \(\rho^2\sin\phi\,d\rho\,d\theta\,d\phi\). The \(\sin\phi\) factor shrinks the boxes near the poles, where circles of constant latitude are smaller.

✎ Example — Volume of sphere radius \(a\)
\[\int_0^{2\pi}\int_0^\pi\int_0^a \rho^2\sin\phi\,d\rho\,d\phi\,d\theta = 2\pi \cdot \frac{a^3}{3} \cdot \int_0^\pi\sin\phi\,d\phi = 2\pi\cdot\frac{a^3}{3}\cdot 2 = \frac{4\pi a^3}{3} \checkmark\]
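The same computation as a triple midpoint sum in Python (illustrative sketch; `sphere_volume` is my own name). The \(\theta\) integral is factored out as \(2\pi\) since the integrand does not depend on \(\theta\):

```python
import math

# Triple sum with dV = ρ² sinφ dρ dφ dθ for a sphere of radius a
def sphere_volume(a=1.0, n=400):
    dr, dphi = a / n, math.pi / n
    s = 0.0
    for i in range(n):
        rho = (i + 0.5) * dr
        for j in range(n):
            phi = (j + 0.5) * dphi
            s += rho**2 * math.sin(phi) * dr * dphi
    return 2 * math.pi * s   # θ-free integrand: θ contributes a factor 2π

print(sphere_volume(), 4 * math.pi / 3)   # both ≈ 4.18879
```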
13

Line & Surface Integrals

⚡ Core Intuition

Same idea as always: \(\text{tiny contribution} \times \text{how many tiny pieces}\). Now the "tiny pieces" are arc-length elements along a curve or area patches on a surface. The vector field version asks: how much does the field push along the path (work) or through the surface (flux)?

Line Integrals — Scalar Field

Integrate \(f(x,y,z)\) along a curve \(C\) parametrized by \(\textbf{r}(t) = (x(t), y(t), z(t))\) for \(t \in [a,b]\):

\[ \int_C f\,ds = \int_a^b f(\textbf{r}(t))\,|\textbf{r}'(t)|\,dt \]

The \(ds = |\textbf{r}'(t)|\,dt\) is the arc-length element — the tiny piece of path length. Geometrically this is the "curtain area" under \(f\) draped along \(C\).

Line Integrals — Vector Field (Work)

If \(\textbf{F}(x,y,z)\) is a vector field (e.g., a force), the work done along \(C\) is:

\[ \int_C \textbf{F}\cdot d\textbf{r} = \int_a^b \textbf{F}(\textbf{r}(t))\cdot\textbf{r}'(t)\,dt \]

Here \(d\textbf{r} = \textbf{r}'(t)\,dt\) is a tiny displacement vector along the path. The dot product picks out the component of \(\textbf{F}\) in the direction of motion — only that component does work.

📋 Component Form
\[\int_C \textbf{F}\cdot d\textbf{r} = \int_C (F_1\,dx + F_2\,dy + F_3\,dz)\]

Where \(dx = x'(t)\,dt\), etc. This is the differential form version — each \(F_i\,dx_i\) is a contribution from one direction.
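Here is the component form as code, for an example of my own choosing: the rotational field \(\textbf{F}(x,y) = (-y, x)\) pushed once around the unit circle. Since \(\textbf{F}\) always points along the direction of motion with unit strength here, the work is the circumference, \(2\pi\).

```python
import math

# Work of F(x, y) = (-y, x) once around r(t) = (cos t, sin t), t ∈ [0, 2π]
def work(n=100_000):
    dt = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t), math.cos(t)   # components of r'(t)
        Fx, Fy = -y, x
        total += (Fx * dx + Fy * dy) * dt    # F·r'(t) dt = F₁ dx + F₂ dy
    return total

print(work(), 2 * math.pi)   # both ≈ 6.28319
```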

Surface Integrals — Scalar Field (Surface Area)

For a surface parametrized by \(\textbf{r}(u,v)\) over a region \(D\):

\[ \iint_S f\,dS = \iint_D f(\textbf{r}(u,v))\,|\textbf{r}_u \times \textbf{r}_v|\,dA \]

The cross product \(\textbf{r}_u \times \textbf{r}_v\) gives a vector perpendicular to the surface, and its magnitude is the area of the tiny parallelogram patch. When \(f=1\), this is just the surface area.

Surface Integrals — Vector Field (Flux)

The flux of \(\textbf{F}\) through surface \(S\) measures how much \(\textbf{F}\) flows through \(S\):

\[ \iint_S \textbf{F}\cdot d\textbf{S} = \iint_S \textbf{F}\cdot\hat{\textbf{n}}\,dS = \iint_D \textbf{F}\cdot(\textbf{r}_u\times\textbf{r}_v)\,dA \]

Here \(\hat{\textbf{n}}\) is the unit normal to the surface and \(d\textbf{S} = \hat{\textbf{n}}\,dS\). Only the component of \(\textbf{F}\) perpendicular to the surface contributes to flux.

🔑 The Grand Unification
| Integral Type | What you sum | Tiny piece |
| --- | --- | --- |
| Single \(\int_a^b f\,dx\) | Area under curve | \(dx\) = length element |
| Double \(\iint f\,dA\) | Volume under surface | \(dA\) = area element |
| Triple \(\iiint f\,dV\) | 4D "hypervolume" | \(dV\) = volume element |
| Line \(\int_C f\,ds\) | Curtain area along path | \(ds = \lvert\textbf{r}'\rvert\,dt\) |
| Line vector \(\int_C \textbf{F}\cdot d\textbf{r}\) | Work done by field | \(d\textbf{r} = \textbf{r}'\,dt\) |
| Surface scalar \(\iint_S f\,dS\) | Surface area / mass | \(dS = \lvert\textbf{r}_u\times\textbf{r}_v\rvert\,dA\) |
| Surface vector \(\iint_S \textbf{F}\cdot d\textbf{S}\) | Flux through surface | \(d\textbf{S} = (\textbf{r}_u\times\textbf{r}_v)\,dA\) |
Ref

Integration Roadmap

| Method | Trigger | Key Idea | Priority |
| --- | --- | --- | --- |
| FTC + Power Rule | Polynomial, standard functions | Reverse derivative | Tier 1 |
| u-Substitution | Composite function + its derivative present | Chain rule reversed; differential relabeling | Tier 1 |
| Integration by Parts | Product of two different function families | Product rule reversed | Tier 1 |
| Trig Integrals | \(\sin^m x\cos^n x\) powers | Peel + Pythagorean identity + u-sub | Tier 1 |
| Trig Substitution | Radicals \(\sqrt{a^2\pm x^2}\), \(\sqrt{x^2-a^2}\) | Geometric coordinate fit | Tier 1 |
| Partial Fractions | Rational function \(P(x)/Q(x)\) | Decompose into simple fractions | Tier 1 |
| Improper Integrals | Infinite limit or vertical asymptote | Replace with limit; convergence test | Tier 1 |
| Change of Variables + Jacobian | Multivariable with natural new coords | Scale factor = \(\lvert\det J\rvert\) | Tier 2 |
| Double / Triple Integrals | Region in 2D or 3D | Iterated 1D integrals; Fubini | Tier 2 |
| Polar / Cylindrical / Spherical | Circular/spherical symmetry | Coordinate match; Jacobian gives scale factor | Tier 2 |
| Line Integrals | Integration along a curve | Arc length element \(ds = \lvert\textbf{r}'\rvert\,dt\) | Tier 2 |
| Surface Integrals | Integration over a surface | Area patch \(dS = \lvert\textbf{r}_u\times\textbf{r}_v\rvert\,dA\) | Tier 2 |
| Numerical Integration | No closed-form antiderivative | Approximate with simple shapes | Tier 3 |
Ref

Numerical Integration

When a function has no closed-form antiderivative (e.g., \(e^{-x^2}\)), we approximate using simple shapes.

| Rule | Formula (\(n\) subintervals, \(h=(b-a)/n\)) | Error Order |
| --- | --- | --- |
| Midpoint | \(\displaystyle h\sum_{i=0}^{n-1}f\!\left(a+\left(i+\tfrac{1}{2}\right)h\right)\) | \(O(h^2)\) |
| Trapezoidal | \(\displaystyle\frac{h}{2}\left[f(a)+2\sum_{i=1}^{n-1}f(x_i)+f(b)\right]\) | \(O(h^2)\) |
| Simpson's | \(\displaystyle\frac{h}{3}\left[f(a)+4f(x_1)+2f(x_2)+\cdots+f(b)\right]\) | \(O(h^4)\) |
⚡ Why Simpson's is so good

Trapezoidal uses straight lines (degree 1) to approximate \(f\). Simpson's fits a parabola (degree 2) through every three consecutive points. A parabola matches more curvature, so the error drops from \(O(h^2)\) to \(O(h^4)\) — way better convergence per step.