Advanced Differential Equations
Differential equations are the language of science and engineering. From the motion of planets to heat flow through solids, population dynamics, and quantum mechanics, ODEs and PDEs model continuous processes throughout the physical world. This guide covers the full spectrum from first-order techniques through partial differential equations and nonlinear dynamics.
1. First-Order Ordinary Differential Equations
A first-order ODE relates a function y(x) to its first derivative y'. The general form is F(x, y, y') = 0 or equivalently y' = f(x, y). Several structural types admit systematic solution methods.
Separable Equations
A separable equation has the form dy/dx = g(x) h(y). Dividing both sides by h(y) (assuming h(y) is nonzero) and integrating gives the implicit solution integral(dy / h(y)) = integral(g(x) dx) + C. This method works whenever variables can be cleanly separated to opposite sides of the equation; constant solutions where h(y) = 0 can be lost in the division and should be checked separately.
Example: Exponential Growth
dy/dx = ky separates to dy/y = k dx
Integrating: ln|y| = kx + C
Solution: y = A e^(kx), where A = e^C
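A quick numerical sanity check using only the standard library: a classical RK4 integrator (a generic sketch; the values k = 0.5 and A = 3 are illustrative choices) reproduces the closed-form solution y = A e^(kx).

```python
import math

def rk4(f, x0, y0, x_end, n_steps):
    """Integrate dy/dx = f(x, y) from x0 to x_end with classical RK4."""
    h = (x_end - x0) / n_steps
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h*k1/2)
        k3 = f(x + h/2, y + h*k2/2)
        k4 = f(x + h, y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
    return y

# dy/dx = k*y with y(0) = A has closed form y = A * exp(k*x)
k, A = 0.5, 3.0
numeric = rk4(lambda x, y: k * y, 0.0, A, 2.0, 1000)
exact = A * math.exp(k * 2.0)
print(numeric, exact)  # agree to ~10 significant digits
```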
Linear First-Order Equations and Integrating Factors
The standard form is y' + P(x)y = Q(x). The integrating factor method multiplies both sides by mu(x) = exp(integral of P(x) dx), which makes the left side an exact derivative d/dx[mu(x) y]. Integrating both sides yields the general solution directly.
Algorithm
- Step 1: Write in standard form y' + P(x)y = Q(x)
- Step 2: Compute mu = exp(integral P(x) dx)
- Step 3: Multiply through by mu
- Step 4: Left side becomes d/dx[mu * y]
- Step 5: Integrate both sides; solve for y
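The five steps can be checked on a concrete instance, here y' + y = x (so P = 1, Q = x, mu = e^x; the equation and initial condition are illustrative choices). Carrying out the steps by hand gives y = x - 1 + C e^(-x), and a finite-difference check confirms the ODE is satisfied:

```python
import math

# For y' + y = x: mu = e^x, so (e^x * y)' = x e^x; integrating gives
# e^x y = (x - 1) e^x + C, i.e. y = x - 1 + C e^{-x}.
def y(x, C=2.0):          # C = 2 matches the initial condition y(0) = 1
    return x - 1 + C * math.exp(-x)

# Check the ODE y' + y = x using a central finite difference for y'.
h = 1e-6
for x in [0.0, 0.7, 2.5]:
    dy = (y(x + h) - y(x - h)) / (2 * h)
    print(x, dy + y(x) - x)  # residual ~0 at every test point
```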
Exact Equations
An equation M(x,y) dx + N(x,y) dy = 0 is exact if the partial derivative of M with respect to y equals the partial derivative of N with respect to x. When exact, there exists a potential function F(x,y) with F_x = M and F_y = N, and the solution is F(x,y) = C. If the equation is not exact, one may seek an integrating factor mu(x) or mu(y) that makes it exact.
Exactness Test
M dx + N dy = 0 is exact if and only if partial M / partial y equals partial N / partial x, provided M and N have continuous first partial derivatives on a simply connected domain.
Bernoulli Equations
The Bernoulli equation y' + P(x)y = Q(x) y^n (n not 0 or 1) is nonlinear but reducible. The substitution v = y^(1-n) transforms it into a linear equation v' + (1-n)P(x)v = (1-n)Q(x), which is solvable by the integrating factor method. This technique applies to many biological and physical growth models.
Riccati Equations
The Riccati equation y' = P(x) + Q(x)y + R(x) y^2 is generally not solvable in closed form. However, if one particular solution y1(x) is known, the substitution y = y1 + 1/v converts it into a linear equation for v. Finding a particular solution often requires inspection, series methods, or recognizing special structure.
2. Second-Order Linear ODEs
The general second-order linear ODE is y'' + P(x)y' + Q(x)y = f(x). The associated homogeneous equation (f = 0) has a two-dimensional solution space. The general solution to the nonhomogeneous equation is y = y_h + y_p, where y_h is the general homogeneous solution and y_p is any particular solution.
Constant Coefficient Equations: Characteristic Equation
For ay'' + by' + cy = 0, the trial solution y = e^(rx) leads to the characteristic equation ar^2 + br + c = 0. The nature of the roots determines the form of the general solution.
Two Real Distinct Roots r1, r2
y = C1 e^(r1 x) + C2 e^(r2 x)
Repeated Root r (b^2 = 4ac)
y = (C1 + C2 x) e^(rx)
Complex Roots alpha +/- i*beta
y = e^(alpha x)[C1 cos(beta x) + C2 sin(beta x)]
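A small pure-Python helper makes the three cases concrete. The ODE y'' + 2y' + 5y = 0 (complex roots -1 +/- 2i) is an illustrative choice, and the resulting solution is verified by finite differences:

```python
import cmath, math

def char_roots(a, b, c):
    """Roots of a r^2 + b r + c = 0 via the quadratic formula."""
    disc = cmath.sqrt(b*b - 4*a*c)
    return (-b + disc) / (2*a), (-b - disc) / (2*a)

# y'' + 2y' + 5y = 0: roots -1 +/- 2i, so y = e^{-x}(C1 cos 2x + C2 sin 2x)
r1, r2 = char_roots(1, 2, 5)
print(r1, r2)  # (-1+2j) (-1-2j)

# Spot-check: y = e^{-x} cos(2x) should satisfy the ODE.
def y(x): return math.exp(-x) * math.cos(2*x)
h = 1e-4
x = 0.8
ypp = (y(x + h) - 2*y(x) + y(x - h)) / h**2
yp = (y(x + h) - y(x - h)) / (2*h)
print(ypp + 2*yp + 5*y(x))  # ~0
```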
The Wronskian and Linear Independence
For two solutions y1 and y2 of the homogeneous equation, the Wronskian is W(x) = y1(x)y2'(x) - y2(x)y1'(x). The solutions form a fundamental set of solutions (are linearly independent) if and only if W is nonzero at some (hence every) point of the interval. Abel's theorem gives the Wronskian explicitly without solving the ODE: W(x) = W(x0) * exp(-integral from x0 to x of P(t) dt).
Abel's Theorem
If y1, y2 solve y'' + P(x)y' + Q(x)y = 0, then
W(x) = W(x0) * exp(-integral from x0 to x of P(t) dt)
Consequence: W is either identically 0 or never 0 on the interval.
Method of Undetermined Coefficients
For ay'' + by' + cy = f(x) where f is a polynomial, exponential, sine, cosine, or a product of these, guess a particular solution of the same form. If the guess overlaps the homogeneous solution, multiply by x (or x^2 if necessary). This method is efficient when applicable but limited to constant-coefficient equations with special right-hand sides.
Common Guesses
- f = polynomial degree n → y_p = polynomial degree n
- f = e^(ax) → y_p = A e^(ax)
- f = cos(bx) or sin(bx) → y_p = A cos(bx) + B sin(bx)
- f = e^(ax) cos(bx) → y_p = e^(ax)[A cos(bx) + B sin(bx)]
- Overlap with y_h → multiply by x (or x^2)
Variation of Parameters
Variation of parameters finds a particular solution for any continuous right-hand side f(x), even when undetermined coefficients does not apply. Given fundamental solutions y1 and y2, assume y_p = u1(x) y1 + u2(x) y2 and impose the condition u1' y1 + u2' y2 = 0. This yields the system for u1' and u2', which is solved using the Wronskian.
Variation of Parameters Formulas
u1'(x) = -y2(x) f(x) / W(x)
u2'(x) = y1(x) f(x) / W(x)
y_p = y1 * integral(-y2 f / W dx) + y2 * integral(y1 f / W dx)
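As a worked instance of these formulas (the classic example y'' + y = sec(x), chosen for illustration): y1 = cos x, y2 = sin x, W = 1 give u1' = -tan x and u2' = 1, hence y_p = cos(x) ln(cos x) + x sin(x). A finite-difference check in pure Python:

```python
import math

# Variation of parameters for y'' + y = sec(x) on (-pi/2, pi/2):
# u1' = -sin(x) sec(x) = -tan(x)  ->  u1 = ln(cos x)
# u2' =  cos(x) sec(x) = 1        ->  u2 = x
# y_p = cos(x) ln(cos x) + x sin(x)
def yp(x):
    return math.cos(x) * math.log(math.cos(x)) + x * math.sin(x)

# Verify y_p'' + y_p = sec(x) by a central second difference.
h = 1e-4
for x in [0.3, 0.7, 1.0]:
    ypp = (yp(x + h) - 2*yp(x) + yp(x - h)) / h**2
    print(ypp + yp(x) - 1/math.cos(x))  # ~0 at each point
```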
Cauchy-Euler Equations
The equation x^2 y'' + ax y' + by = 0 has non-constant coefficients, but the substitution x = e^t (or equivalently y = x^r) reduces it to a constant-coefficient equation. The trial solution y = x^r leads to the indicial equation r(r-1) + ar + b = 0, with the same three cases (distinct real, repeated, complex roots) as constant-coefficient equations.
3. Systems of Ordinary Differential Equations
Many physical phenomena are described by coupled ODEs. A system of n first-order linear ODEs can be written in matrix form x' = Ax + g(t), where x is a vector of unknowns and A is an n-by-n matrix. Systems also arise naturally from converting higher-order ODEs into first-order systems.
Matrix Form and the Eigenvalue Method
For the homogeneous system x' = Ax, seek solutions of the form x = v e^(lambda t) where v is a constant vector. Substituting gives Av = lambda v, so lambda must be an eigenvalue of A and v the corresponding eigenvector. The general solution is a linear combination of all such solutions over all eigenvalues.
Cases for 2x2 Systems
- Real distinct eigenvalues lambda1, lambda2: x = C1 v1 e^(lambda1 t) + C2 v2 e^(lambda2 t)
- Repeated eigenvalue lambda, one eigenvector: x = C1 v e^(lambda t) + C2 (v t + w) e^(lambda t) where w is a generalized eigenvector satisfying (A - lambda I)w = v
- Complex eigenvalues alpha +/- i beta: Take real and imaginary parts of the complex solution to get two real solutions
Phase Plane Analysis
The phase plane for a 2D autonomous system x' = f(x,y), y' = g(x,y) shows trajectories in the (x,y) plane parametrized by time. The direction field is tangent to each trajectory. Equilibrium points (where f = g = 0) are classified by linearization: the Jacobian at the equilibrium determines whether it is a stable node, unstable node, saddle, stable spiral, unstable spiral, or center.
Classification by Eigenvalues
- Both negative real: stable node
- Both positive real: unstable node
- Real opposite signs: saddle point
- Complex with negative real part: stable spiral
- Complex with positive real part: unstable spiral
- Pure imaginary: center
Stability Criteria (Trace-Determinant)
- det(A) = lambda1 * lambda2
- tr(A) = lambda1 + lambda2
- Stable if tr(A) less than 0 and det(A) greater than 0
- Saddle if det(A) less than 0
- Discriminant = tr^2 - 4 det determines node vs spiral
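These criteria translate directly into a small classifier; the test values of trace and determinant below are illustrative choices:

```python
def classify(tr, det):
    """Classify the equilibrium of x' = Ax from tr(A) and det(A)."""
    if det < 0:
        return "saddle"
    if tr == 0 and det > 0:
        return "center"
    disc = tr*tr - 4*det
    kind = "node" if disc >= 0 else "spiral"
    stability = "stable" if tr < 0 else "unstable"
    return f"{stability} {kind}"

print(classify(-3, 2))   # stable node   (eigenvalues -1, -2)
print(classify(1, -6))   # saddle        (eigenvalues 3, -2)
print(classify(-2, 5))   # stable spiral (eigenvalues -1 +/- 2i)
print(classify(0, 4))    # center        (eigenvalues +/- 2i)
```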
Fundamental Matrix and Matrix Exponential
A fundamental matrix Phi(t) is a matrix whose columns are n linearly independent solutions of x' = Ax. The general solution is x = Phi(t) c for constant vector c. For constant-coefficient systems, the fundamental matrix equals the matrix exponential e^(At), which can be computed via eigendecomposition (if A is diagonalizable) or the Cayley-Hamilton theorem. The matrix exponential satisfies d/dt[e^(At)] = A e^(At) and e^(A*0) = I.
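A minimal sketch of the matrix exponential for the 2x2 case, summing the Taylor series sum over k of (At)^k / k! directly (adequate for small matrices and moderate t; production libraries use scaling-and-squaring instead; the matrix A below, with eigenvalues -1 and -2, is an illustrative choice):

```python
def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def expm(A, t, terms=40):
    """e^{At} for a 2x2 matrix by summing the Taylor series (At)^k / k!."""
    At = [[t*x for x in row] for row in A]
    result = [[1.0, 0.0], [0.0, 1.0]]   # k = 0 term: the identity
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_mul(term, At)                     # term <- term * At
        term = [[x / k for x in row] for row in term]  # ... / k
        result = mat_add(result, term)
    return result

# x' = Ax with A = [[0, 1], [-2, -3]] (eigenvalues -1 and -2).
A = [[0.0, 1.0], [-2.0, -3.0]]
E = expm(A, 1.0)
print(E)
```

The first column of e^(At) is exactly the solution of x' = Ax with x(0) = (1, 0)^T; by eigenvector expansion that solution is (2e^(-t) - e^(-2t), -2e^(-t) + 2e^(-2t)).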
4. Laplace Transforms
The Laplace transform converts a function of time t into a function of complex frequency s, transforming differential equations into algebraic equations. It is especially powerful for initial value problems and for handling discontinuous or impulsive forcing functions.
Definition and Basic Properties
For a function f(t) defined on t greater than or equal to 0, the Laplace transform is L(f)(s) = integral from 0 to infinity of e^(-st) f(t) dt, provided the integral converges. The transform exists whenever f is piecewise continuous and of exponential order (|f(t)| less than or equal to M e^(at) for some M, a, and all large t).
Essential Transform Table
| f(t) | L(f)(s) |
|---|---|
| 1 | 1/s |
| t^n | n! / s^(n+1) |
| e^(at) | 1/(s-a) |
| sin(bt) | b / (s^2 + b^2) |
| cos(bt) | s / (s^2 + b^2) |
| t e^(at) | 1 / (s-a)^2 |
| e^(at) sin(bt) | b / ((s-a)^2 + b^2) |
| e^(at) cos(bt) | (s-a) / ((s-a)^2 + b^2) |
| delta(t - a) | e^(-as) |
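The table entries can be spot-checked by evaluating the defining integral numerically. This sketch truncates the integral at t = T and uses the trapezoid rule (the choices s = 2, b = 3, and a = -1 are arbitrary test values):

```python
import math

def laplace(f, s, T=40.0, n=100000):
    """Approximate L(f)(s) = integral_0^T e^(-st) f(t) dt by the trapezoid
    rule; the tail beyond T is negligible when e^(-sT) is tiny."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s * t) * f(t)
    return total * h

s, b = 2.0, 3.0
print(laplace(lambda t: math.cos(b * t), s), s / (s**2 + b**2))  # both ~0.15385
print(laplace(lambda t: t * math.exp(-t), s), 1 / (s + 1)**2)    # both ~0.11111
```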
Transform Properties for Derivatives
The power of Laplace transforms for ODEs comes from the derivative property: L(f')(s) = s L(f)(s) - f(0), and L(f'')(s) = s^2 L(f)(s) - s f(0) - f'(0). These formulas encode the initial conditions automatically, transforming the ODE into an algebraic equation in Y(s) = L(y)(s).
Convolution Theorem
The convolution of two functions is (f * g)(t) = integral from 0 to t of f(tau) g(t - tau) d tau. The convolution theorem states L(f * g)(s) = L(f)(s) L(g)(s): convolution in the time domain corresponds to ordinary multiplication in the s-domain. This is fundamental for inverting transforms that factor as products, expressing the solution as a convolution integral. It also gives the response of a linear system to an arbitrary input in terms of the impulse response.
Partial Fractions for Inverse Transforms
After transforming an IVP, Y(s) is typically a rational function of s. Partial fraction decomposition breaks Y(s) into simpler terms that match the Laplace table. For a root at s = a with multiplicity m, the partial fraction expansion includes terms A_1/(s-a), A_2/(s-a)^2, ..., up to A_m/(s-a)^m. Complex conjugate roots a +/- ib contribute terms of the form (As + B) / ((s-a)^2 + b^2).
Heaviside Step Function and Discontinuous Forcing
The Heaviside unit step u(t - a) equals 0 for t less than a and 1 for t greater than or equal to a. It models a forcing function that switches on at time a. The second shifting theorem gives L(u(t-a) f(t-a))(s) = e^(-as) F(s). This enables closed-form Laplace treatment of piecewise-defined forcing functions, such as a hammer blow or a switch being thrown at a specific time.
5. Power Series Solutions
When an ODE has non-constant coefficients that do not yield to elementary methods, power series solutions provide systematic access to the solution near a specific point. The key distinction is whether the expansion point is ordinary or singular.
Ordinary vs. Singular Points
For y'' + P(x) y' + Q(x) y = 0, a point x0 is ordinary if P and Q are analytic at x0 (representable as convergent power series). At an ordinary point, two independent power series solutions exist and converge in a disk whose radius is at least the distance to the nearest singularity of P or Q in the complex plane. A singular point is one where P or Q fails to be analytic. A singular point x0 is regular if (x-x0)P(x) and (x-x0)^2 Q(x) are analytic at x0; otherwise it is irregular.
Classification Summary
- Ordinary point: standard power series y = sum(a_n (x-x0)^n) works
- Regular singular point: Frobenius method (extended power series) required
- Irregular singular point: Frobenius may fail; asymptotic methods needed
The Frobenius Method
At a regular singular point x0, assume a solution of the form y = (x-x0)^r * sum over n of a_n (x-x0)^n with a_0 not 0. Substituting into the ODE and setting the coefficient of each power of (x-x0) to zero, the lowest power gives the indicial equation, a quadratic in r. The two roots r1 and r2 (with r1 greater than or equal to r2) of the indicial equation determine the structure of the solutions:
- Case 1 (r1 - r2 not an integer): Two independent Frobenius series y1 (exponent r1) and y2 (exponent r2)
- Case 2 (r1 = r2): One Frobenius series y1 and y2 = y1 ln(x-x0) + (x-x0)^r1 * sum(b_n (x-x0)^n)
- Case 3 (r1 - r2 = positive integer): y1 is a Frobenius series (exponent r1); y2 may include a logarithmic term
Bessel's Equation
Bessel's equation x^2 y'' + x y' + (x^2 - nu^2) y = 0 arises in problems with circular or cylindrical symmetry (drum vibrations, heat flow in cylinders, electromagnetic waveguides). The origin x = 0 is a regular singular point with indicial equation r^2 - nu^2 = 0, giving roots r = +/- nu. The Frobenius series with r = nu is the Bessel function of the first kind J_nu(x). For integer nu = n, the second solution Y_n(x) (Bessel function of the second kind or Neumann function) involves a logarithmic term.
Bessel Function J_0(x)
J_0(x) = sum over m of (-1)^m (x/2)^(2m) / (m!)^2
= 1 - x^2/4 + x^4/64 - x^6/2304 + ...
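The partial sums of this series converge rapidly and can be evaluated in a few lines of pure Python (the reference values J_0(1) = 0.76519... and the first zero of J_0 near 2.4048 are standard):

```python
import math

def j0(x, terms=25):
    """Partial sum of J_0(x) = sum_m (-1)^m (x/2)^{2m} / (m!)^2."""
    total = 0.0
    for m in range(terms):
        total += (-1)**m * (x/2)**(2*m) / math.factorial(m)**2
    return total

print(j0(1.0))                 # ~0.7651976866, i.e. J_0(1)
print(j0(2.404825557695773))   # ~0: the first zero of J_0
```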
Legendre's Equation
Legendre's equation (1-x^2)y'' - 2xy' + n(n+1)y = 0 arises in problems with spherical symmetry (gravitational potentials, quantum mechanics of hydrogen). The points x = +/- 1 are regular singular points; x = 0 is ordinary. For non-negative integer n, one solution is the Legendre polynomial P_n(x) of degree n. The first few are: P_0 = 1, P_1 = x, P_2 = (3x^2 - 1)/2, P_3 = (5x^3 - 3x)/2. Legendre polynomials are orthogonal on [-1, 1] with weight 1.
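The orthogonality claim is easy to verify numerically; recall also that the integral of P_n^2 over [-1, 1] equals 2/(2n+1). A trapezoid-rule check in pure Python, using P_2 and P_3 from the list above:

```python
# Orthogonality of Legendre polynomials on [-1, 1] with weight 1:
# integral P_m P_n dx = 0 for m != n, and integral P_n^2 dx = 2/(2n+1).
def P2(x): return (3*x**2 - 1) / 2
def P3(x): return (5*x**3 - 3*x) / 2

def integrate(f, a, b, n=100000):
    """Composite trapezoid rule on [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i*h)
    return total * h

print(integrate(lambda x: P2(x) * P3(x), -1, 1))  # ~0 (orthogonal)
print(integrate(lambda x: P2(x)**2, -1, 1))       # ~0.4 = 2/5
```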
6. Sturm-Liouville Theory
Sturm-Liouville theory provides a unified framework for the orthogonal function expansions that arise throughout applied mathematics. It generalizes the familiar Fourier series to a broad class of second-order eigenvalue problems.
The Sturm-Liouville Problem
The regular Sturm-Liouville problem consists of the ODE d/dx[p(x) y'] - q(x) y + lambda w(x) y = 0 on [a, b], with boundary conditions alpha1 y(a) + alpha2 y'(a) = 0 and beta1 y(b) + beta2 y'(b) = 0. Here p, q, w are given functions with p and w positive on [a, b]. The parameter lambda is the eigenvalue; nontrivial solutions y_n are eigenfunctions.
Self-Adjoint Operators and Orthogonality
The Sturm-Liouville operator L[y] = -(1/w)(d/dx[p y'] - q y) is self-adjoint (symmetric) with respect to the weighted inner product (f, g) = integral from a to b of f(x) g(x) w(x) dx. As a consequence, all eigenvalues are real, and eigenfunctions corresponding to distinct eigenvalues are orthogonal with respect to this weighted inner product.
Key Theorems
- Eigenvalue reality: All eigenvalues lambda_n are real
- Ordering: Eigenvalues form an increasing sequence lambda_1 less than lambda_2 less than ... tending to infinity
- Orthogonality: integral from a to b of y_m y_n w dx = 0 for m not equal to n
- Completeness: Any piecewise smooth function on [a, b] has a convergent eigenfunction expansion
Generalized Fourier Series
By completeness, any suitable function f on [a, b] can be written as f(x) = sum over n of c_n y_n(x), where the coefficients are c_n = integral from a to b of f(x) y_n(x) w(x) dx / integral from a to b of y_n(x)^2 w(x) dx. This is the Sturm-Liouville version of the Fourier series. Standard Fourier sine and cosine series, Bessel series, and Legendre series are all special cases.
7. Partial Differential Equations: Classification
A second-order PDE in two independent variables x and y has the general form Au_xx + Bu_xy + Cu_yy + Du_x + Eu_y + Fu = G. The discriminant B^2 - 4AC classifies the PDE, analogous to the classification of conic sections.
Elliptic
B^2 - 4AC less than 0
Steady-state problems. No real characteristics. Solutions are smooth (harmonic functions). Prototype: Laplace equation u_xx + u_yy = 0.
Parabolic
B^2 - 4AC = 0
Evolution problems with smoothing. One family of real characteristics. Solutions become instantaneously smooth. Prototype: heat equation u_t = k u_xx.
Hyperbolic
B^2 - 4AC greater than 0
Wave propagation. Two families of real characteristics. Discontinuities can propagate. Prototype: wave equation u_tt = c^2 u_xx.
Method of Characteristics for First-Order PDEs
For a first-order PDE a(x,y,u) u_x + b(x,y,u) u_y = c(x,y,u), characteristics are curves in the (x,y) plane along which the PDE reduces to an ODE. The characteristic equations are dx/ds = a, dy/ds = b, du/ds = c. By solving these ODEs along each characteristic curve, one propagates the solution from initial data. The method applies to quasi-linear equations directly and extends to fully nonlinear first-order PDEs via characteristic strips (Charpit's equations).
Example: Advection Equation
u_t + c u_x = 0 with u(x, 0) = f(x)
Characteristics: dx/dt = c, so x - ct = constant
Solution: u(x, t) = f(x - ct) (traveling wave, speed c)
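This traveling-wave solution can be checked directly: for any smooth profile f, u = f(x - ct) satisfies the advection equation. A finite-difference check with the illustrative choices f(x) = e^(-x^2) and c = 2:

```python
import math

# u(x, t) = f(x - ct) for any differentiable profile f; here f(x) = e^{-x^2}.
c = 2.0
def u(x, t): return math.exp(-(x - c*t)**2)

# Check u_t + c u_x = 0 by central finite differences.
h = 1e-5
x, t = 0.4, 0.1
u_t = (u(x, t + h) - u(x, t - h)) / (2*h)
u_x = (u(x + h, t) - u(x - h, t)) / (2*h)
print(u_t + c * u_x)  # ~0
```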
8. The Heat Equation
The heat equation u_t = k u_xx describes diffusion of heat (or any diffusing quantity) along a rod. Here k is the thermal diffusivity. The equation is parabolic; solutions are smooth for t greater than 0 regardless of initial data.
Separation of Variables
Assume u(x,t) = X(x) T(t). Substituting into the heat equation and dividing by X T gives T'/(kT) = X''/X = -lambda. Both sides equal the same constant (the separation constant). This yields two ODEs: T' + k lambda T = 0 and X'' + lambda X = 0. The spatial ODE with boundary conditions determines the allowable values of lambda (eigenvalues) and eigenfunctions X_n.
Solution on [0, L] with Dirichlet BCs u(0,t) = u(L,t) = 0
Eigenvalues: lambda_n = (n pi / L)^2, n = 1, 2, 3, ...
Eigenfunctions: X_n = sin(n pi x / L)
Time factors: T_n = exp(-k (n pi / L)^2 t)
General solution: u = sum over n of b_n sin(n pi x / L) exp(-k (n pi / L)^2 t)
Fourier coefficients: b_n = (2/L) integral from 0 to L of f(x) sin(n pi x / L) dx
Neumann and Mixed Boundary Conditions
Neumann boundary conditions u_x(0,t) = u_x(L,t) = 0 (insulated ends) give eigenfunctions cos(n pi x / L) with n = 0, 1, 2, ... The n = 0 term is a_0/2 (constant), representing the conserved average temperature. Insulated BCs preserve the total thermal energy: d/dt integral from 0 to L of u dx = 0.
The Fundamental Solution (Heat Kernel)
On the full real line, the heat equation with initial data u(x,0) = f(x) has solution u(x,t) = integral from -infinity to infinity of G(x-y, t) f(y) dy, where the heat kernel is G(x,t) = (1 / sqrt(4 pi k t)) exp(-x^2 / (4kt)). The heat kernel is itself a Gaussian that broadens over time, reflecting diffusive spreading. As t approaches 0 from above, G approaches the Dirac delta distribution.
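Two properties of the heat kernel are easy to confirm numerically: its total integral is 1 for every t > 0 (conservation of heat), and it satisfies the heat equation itself. A pure-Python check (k = 1, t = 0.5, and the sample point are illustrative choices):

```python
import math

k = 1.0
def G(x, t):
    """Heat kernel G(x,t) = exp(-x^2/(4kt)) / sqrt(4 pi k t)."""
    return math.exp(-x*x / (4*k*t)) / math.sqrt(4*math.pi*k*t)

# 1. Total heat: integral of G over the real line is 1 for any t > 0.
h, L = 0.001, 20.0
mass = sum(G(-L + i*h, 0.5) for i in range(int(2*L/h))) * h
print(mass)  # ~1.0

# 2. G satisfies G_t = k G_xx (finite differences at a sample point).
d = 1e-4
x, t = 0.7, 0.5
G_t = (G(x, t + d) - G(x, t - d)) / (2*d)
G_xx = (G(x + d, t) - 2*G(x, t) + G(x - d, t)) / d**2
print(G_t - k * G_xx)  # ~0
```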
9. The Wave Equation
The wave equation u_tt = c^2 u_xx models vibrating strings, acoustic pressure waves, and electromagnetic fields. It is hyperbolic; disturbances propagate at finite speed c with no smoothing effect.
D'Alembert's Solution
The general solution on the infinite line is u(x,t) = F(x + ct) + G(x - ct), a superposition of waves traveling left and right at speed c. With initial conditions u(x,0) = f(x) (initial displacement) and u_t(x,0) = g(x) (initial velocity), d'Alembert's formula gives the solution explicitly without series.
D'Alembert's Formula
u(x,t) = [f(x+ct) + f(x-ct)] / 2
+ (1/(2c)) integral from (x-ct) to (x+ct) of g(s) ds
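The formula can be verified against the PDE directly. With the illustrative choices f(x) = e^(-x^2), g = 0, and c = 1.5, d'Alembert's solution reduces to u = [f(x+ct) + f(x-ct)]/2, and finite differences confirm u_tt = c^2 u_xx:

```python
import math

# d'Alembert with f(x) = e^{-x^2}, g = 0: u = [f(x+ct) + f(x-ct)] / 2.
c = 1.5
def f(x): return math.exp(-x*x)
def u(x, t): return (f(x + c*t) + f(x - c*t)) / 2

# Verify u_tt = c^2 u_xx by second central differences.
h = 1e-4
x, t = 0.3, 0.9
u_tt = (u(x, t + h) - 2*u(x, t) + u(x, t - h)) / h**2
u_xx = (u(x + h, t) - 2*u(x, t) + u(x - h, t)) / h**2
print(u_tt - c**2 * u_xx)  # ~0
# Initial conditions: u(x, 0) = f(x) and u_t(x, 0) = 0.
print(u(0.3, 0.0) - f(0.3))  # 0
```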
Standing Waves on a Bounded String
For a string fixed at both ends (u(0,t) = u(L,t) = 0), separation of variables gives standing wave solutions u_n(x,t) = sin(n pi x / L) [A_n cos(n pi c t / L) + B_n sin(n pi c t / L)]. The natural frequencies are omega_n = n pi c / L. The general solution is a superposition of all modes; the Fourier coefficients A_n and B_n are determined by initial displacement and velocity.
Energy Conservation
The total energy of the wave equation system is E(t) = (1/2) integral from 0 to L of [u_t^2 + c^2 u_x^2] dx. For homogeneous Dirichlet or Neumann boundary conditions, dE/dt = 0: the wave equation conserves energy. This contrasts sharply with the heat equation, which dissipates energy.
Domain of Dependence and Influence
The value u(x0, t0) depends only on initial data in the interval [x0 - ct0, x0 + ct0], called the domain of dependence. Conversely, the initial data at x0 influences the solution only in the cone |x - x0| less than or equal to c t, called the domain of influence. This finite propagation speed is a fundamental feature of hyperbolic equations.
10. The Laplace Equation and Harmonic Functions
The Laplace equation u_xx + u_yy = 0 (or Delta u = 0 in higher dimensions) governs electrostatic potentials, steady-state heat distribution, and incompressible irrotational fluid flow. Solutions are called harmonic functions and possess remarkable regularity properties.
Mean Value Property
If u is harmonic in a domain, then at any interior point the value u(x0, y0) equals the average of u over any circle centered at that point lying in the domain. In three dimensions, the average is over a sphere. This property characterizes harmonic functions and has profound consequences: a harmonic function cannot have an interior maximum or minimum unless it is constant.
Maximum Principle
A nonconstant harmonic function on a bounded domain attains its maximum and minimum values only on the boundary, not in the interior. The strong maximum principle states that if the maximum is attained at any interior point, the function is identically constant. This implies uniqueness for the Dirichlet problem: two harmonic functions with the same boundary data must be equal.
Separation of Variables in Polar Coordinates
On a disk of radius a, the Laplace equation in polar coordinates (r, theta) is u_rr + (1/r) u_r + (1/r^2) u_theta_theta = 0. Separation gives u = R(r) Theta(theta), leading to Euler equations for R and periodic Theta. The bounded solution is u(r, theta) = A_0/2 + sum over n of r^n [A_n cos(n theta) + B_n sin(n theta)], with coefficients determined by the boundary condition u(a, theta) = f(theta) via Fourier series in theta.
Green's Functions
The Green's function G(x, y; x0, y0) for a domain is the response to a unit point source at (x0, y0) with zero boundary data. Once known, the solution to Delta u = f with zero Dirichlet data is u(x0, y0) = double integral of G(x,y; x0,y0) f(x,y) dA. For the upper half-plane, the Green's function is constructed by the method of images. For more complex geometries, conformal mapping can transform the domain to one where the Green's function is known.
11. Nonlinear ODEs and Qualitative Analysis
Most real-world ODEs are nonlinear and have no closed-form solutions. Qualitative analysis — studying the long-term behavior without explicit formulas — is essential. Phase plane methods, nullclines, and stability theory provide rich information about the dynamics.
Nullclines and the Phase Plane
For the autonomous system x' = f(x, y), y' = g(x, y), the x-nullclines are curves where f = 0 (so x' = 0), and the y-nullclines are curves where g = 0 (so y' = 0). Their intersections are equilibrium points. The sign of f and g in each region between nullclines gives the direction of flow, allowing a qualitative sketch of trajectories without solving the system.
Linearization at Equilibria
Near an equilibrium (x*, y*), the nonlinear system is approximated by its linearization: the system with coefficient matrix equal to the Jacobian J evaluated at (x*, y*). The Hartman-Grobman theorem guarantees that the local phase portrait of the nonlinear system near a hyperbolic equilibrium (eigenvalues with nonzero real parts) is topologically equivalent to the phase portrait of the linearization.
Jacobian Matrix
J = | f_x f_y |
| g_x g_y |
evaluated at equilibrium (x*, y*)
Poincaré-Bendixson Theorem
The Poincaré-Bendixson theorem is the principal existence result for limit cycles in the plane. If a trajectory of a continuously differentiable autonomous system remains in a compact region that contains no equilibrium point, then the trajectory's omega-limit set is a closed orbit. This theorem establishes the existence of periodic solutions in many biological and mechanical models.
Bendixson's Negative Criterion
If the divergence of the vector field (f_x + g_y) is of constant sign (never zero) on a simply connected domain, then the system has no closed orbits or limit cycles in that domain. This is a useful tool for ruling out periodic behavior.
Limit Cycles
A limit cycle is an isolated closed trajectory in the phase plane. A stable limit cycle attracts nearby trajectories as t approaches infinity. The van der Pol oscillator x'' - mu(1 - x^2)x' + x = 0 (mu greater than 0) is the canonical example: it has exactly one stable limit cycle. For small mu the cycle is nearly circular; for large mu the oscillation becomes a relaxation oscillation with rapid transitions and slow drift.
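The van der Pol limit cycle is easy to observe numerically. This sketch integrates the equivalent first-order system with a hand-rolled RK4 stepper (mu = 0.5, the step size, and the run lengths are illustrative choices) and measures the amplitude after transients die out; the amplitude lands near 2, as the theory predicts:

```python
# x'' - mu(1 - x^2) x' + x = 0 as a first-order system:
# x' = v, v' = mu (1 - x^2) v - x
mu = 0.5

def step(x, v, dt):
    """One RK4 step for the van der Pol system."""
    def fv(x, v): return mu * (1 - x*x) * v - x
    k1x, k1v = v, fv(x, v)
    k2x, k2v = v + dt*k1v/2, fv(x + dt*k1x/2, v + dt*k1v/2)
    k3x, k3v = v + dt*k2v/2, fv(x + dt*k2x/2, v + dt*k2v/2)
    k4x, k4v = v + dt*k3v/2, fv(x + dt*k3x/2, v + dt*k3v/2)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    return x, v

# Start from a tiny perturbation; the trajectory spirals OUT to the cycle.
x, v, dt = 0.01, 0.0, 0.01
for _ in range(20000):        # run long enough to settle onto the cycle
    x, v = step(x, v, dt)
amplitude = 0.0
for _ in range(2000):         # a few periods on the cycle; record max |x|
    x, v = step(x, v, dt)
    amplitude = max(amplitude, abs(x))
print(amplitude)  # close to 2, the limit-cycle amplitude for moderate mu
```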
12. Applications: Population Models, Vibrations, and Circuits
Differential equations are indispensable tools in biology, mechanical engineering, and electrical engineering. The following canonical models illustrate the power and range of the methods developed above.
Logistic Population Model
The logistic equation dP/dt = rP(1 - P/K) models population growth with carrying capacity K. Unlike pure exponential growth, the logistic model saturates: as P approaches K the growth rate approaches zero. Separating variables yields the explicit solution P(t) = K / (1 + ((K - P0)/P0) e^(-rt)), which is an S-shaped (sigmoid) curve. The equilibria P = 0 (unstable) and P = K (stable) are confirmed by phase line analysis.
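The closed-form solution can be verified directly against the ODE, along with its limiting behavior (the parameter values r, K, P0 below are illustrative):

```python
import math

r, K, P0 = 0.8, 100.0, 5.0

def P(t):
    """Closed-form logistic solution P(t) = K / (1 + ((K-P0)/P0) e^{-rt})."""
    return K / (1 + ((K - P0) / P0) * math.exp(-r * t))

# Check dP/dt = r P (1 - P/K) by a central finite difference.
h = 1e-6
for t in [0.0, 3.0, 10.0]:
    dP = (P(t + h) - P(t - h)) / (2 * h)
    print(dP - r * P(t) * (1 - P(t) / K))  # ~0 at each point

print(P(0.0))   # 5.0: the initial population
print(P(50.0))  # ~100: saturates at the carrying capacity K
```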
Lotka-Volterra Predator-Prey System
The system x' = ax - bxy (prey), y' = -cy + dxy (predator) models the coupled dynamics of prey (population x) and predator (population y). The constants a, b, c, d are positive. There are two equilibria: the trivial equilibrium (0, 0) (saddle point) and the coexistence equilibrium (c/d, a/b) (center in the linearization). All trajectories in the positive quadrant are closed orbits encircling the coexistence equilibrium — the populations oscillate periodically without ever reaching a stable steady state.
Conservation Law
V(x, y) = d x - c ln(x) + b y - a ln(y) = constant
This first integral is conserved along all trajectories and explains
why Lotka-Volterra orbits are closed (centers, not spirals).
Mechanical Vibrations: Mass-Spring-Damper
Newton's second law for a mass m on a spring (constant k) with damping (coefficient b) gives mx'' + bx' + kx = F(t). Dividing by m: x'' + 2 gamma x' + omega_0^2 x = f(t), where omega_0 = sqrt(k/m) is the natural frequency and gamma = b/(2m) is the damping constant. The system's behavior depends on how gamma compares to omega_0:
- Underdamped (gamma less than omega_0): Oscillatory decay, x_h = e^(-gamma t)[A cos(omega_d t) + B sin(omega_d t)], where omega_d = sqrt(omega_0^2 - gamma^2)
- Critically damped (gamma = omega_0): Fastest approach to equilibrium without oscillation: x_h = (C1 + C2 t) e^(-gamma t)
- Overdamped (gamma greater than omega_0): Slow exponential approach to equilibrium with two real decay rates
- Resonance (F(t) = F0 cos(omega_0 t), undamped): Particular solution grows as t sin(omega_0 t) without bound
RLC Circuit Analysis
A series RLC circuit with resistance R, inductance L, and capacitance C driven by voltage E(t) satisfies L Q'' + R Q' + Q/C = E(t), where Q is the charge on the capacitor. This is mathematically identical to the mass-spring equation (L plays the role of m, R the role of damping, 1/C the role of the spring constant). The natural frequency is omega_0 = 1/sqrt(LC), and the damping constant is gamma = R/(2L). Laplace transforms are particularly efficient for circuit analysis with switched sources and initial charge/current conditions.
13. Practice Problems with Solutions
Work through these problems, then compare your work against the solutions to check your understanding.
Problem 1: Separable ODE with Initial Condition
Solve dy/dx = xy / (1 + x^2) with y(0) = 3.
Solution
Separate: dy/y = x dx / (1 + x^2)
Integrate left side: ln|y| + C1
Integrate right side: (1/2) ln(1 + x^2) + C2
Combine: ln|y| = (1/2) ln(1 + x^2) + C
Exponentiate: y = A sqrt(1 + x^2)
Apply y(0) = 3: 3 = A sqrt(1) = A
Solution: y = 3 sqrt(1 + x^2)
Problem 2: Linear First-Order with Integrating Factor
Solve y' + (2/x) y = x^2 on x greater than 0.
Solution
P(x) = 2/x, so mu = exp(integral 2/x dx) = exp(2 ln x) = x^2
Multiply through by x^2: x^2 y' + 2x y = x^4
Left side is d/dx[x^2 y], so d/dx[x^2 y] = x^4
Integrate: x^2 y = x^5/5 + C
Solution: y = x^3/5 + C/x^2
Problem 3: Second-Order Constant Coefficient IVP
Solve y'' - 5y' + 6y = 0 with y(0) = 1, y'(0) = 4.
Solution
Characteristic equation: r^2 - 5r + 6 = 0
Factor: (r - 2)(r - 3) = 0, so r = 2 or r = 3
General solution: y = C1 e^(2x) + C2 e^(3x)
y(0) = C1 + C2 = 1
y'(x) = 2C1 e^(2x) + 3C2 e^(3x)
y'(0) = 2C1 + 3C2 = 4
From first equation C1 = 1 - C2; substituting: 2(1-C2) + 3C2 = 4
2 + C2 = 4, so C2 = 2, C1 = -1
Solution: y = -e^(2x) + 2 e^(3x)
Problem 4: Undetermined Coefficients
Find a particular solution to y'' + 4y = 3 cos(2x).
Solution
Homogeneous solution: y_h = C1 cos(2x) + C2 sin(2x)
The forcing 3 cos(2x) matches y_h, so there is resonance.
Multiply guess by x: try y_p = x[A cos(2x) + B sin(2x)]
y_p' = A cos(2x) + B sin(2x) + x[-2A sin(2x) + 2B cos(2x)]
y_p'' = -4A sin(2x) + 4B cos(2x) + x[-4A cos(2x) - 4B sin(2x)]
y_p'' + 4 y_p = -4A sin(2x) + 4B cos(2x)
Match 3 cos(2x): 4B = 3 so B = 3/4; -4A = 0 so A = 0
Particular solution: y_p = (3/4) x sin(2x)
Problem 5: Laplace Transform IVP
Solve y'' + y = sin(t) with y(0) = 0, y'(0) = 0 using Laplace transforms.
Solution
Transform: s^2 Y - 0 - 0 + Y = 1/(s^2+1)
Y(s^2 + 1) = 1/(s^2+1)
Y = 1/(s^2+1)^2
Partial fractions / table: L^(-1)(1/(s^2+1)^2) = (sin(t) - t cos(t))/2
Solution: y(t) = (sin(t) - t cos(t)) / 2
Note: the t cos(t) term reflects resonance (omega = 1 is the natural frequency)
Problem 6: Phase Plane Equilibrium Classification
Find and classify the equilibria of x' = x - xy, y' = -y + xy. (This is the Lotka-Volterra system with a = c = 1, b = d = 1.)
Solution
Set x - xy = 0 and -y + xy = 0
From first: x(1 - y) = 0, so x = 0 or y = 1
From second: y(x - 1) = 0, so y = 0 or x = 1
Equilibria: (0, 0) and (1, 1)
Jacobian J = | 1-y -x |
| y x-1 |
At (0,0): J = | 1 0 |, eigenvalues 1 and -1: saddle point
| 0 -1 |
At (1,1): J = | 0 -1 |, tr = 0, det = 1, eigenvalues +/- i
| 1 0 |
(0,0) is a saddle; (1,1) is a center (linearization) — closed orbits expected
Problem 7: Heat Equation Fourier Coefficients
Solve u_t = u_xx on [0, pi] with u(0,t) = u(pi,t) = 0 and u(x,0) = x(pi - x).
Solution
With L = pi and k = 1, the general solution is
u(x,t) = sum over n=1 to inf of b_n sin(nx) e^(-n^2 t)
b_n = (2/pi) integral from 0 to pi of x(pi-x) sin(nx) dx
Integrate by parts twice:
integral from 0 to pi of x(pi - x) sin(nx) dx = 2(1 - (-1)^n) / n^3
So b_n = (2/pi) * 2(1 - (-1)^n) / n^3 = 4(1 - (-1)^n) / (pi n^3)
The factor (1 - (-1)^n) is 0 for even n and 2 for odd n, so
b_n = 8/(n^3 pi) for odd n, and b_n = 0 for even n
u(x,t) = sum over odd n of (8/(n^3 pi)) sin(nx) e^(-n^2 t)
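The coefficient formula can be double-checked by numerical quadrature of the defining integral (trapezoid rule; the endpoint terms vanish because x(pi - x) is zero there):

```python
import math

# Numerical check of b_n = (2/pi) * integral_0^pi x(pi - x) sin(nx) dx.
def b(n, m=20000):
    h = math.pi / m
    total = 0.0
    for i in range(1, m):          # integrand vanishes at both endpoints
        x = i * h
        total += x * (math.pi - x) * math.sin(n * x)
    return (2 / math.pi) * total * h

for n in [1, 2, 3, 4, 5]:
    closed = 8 / (n**3 * math.pi) if n % 2 == 1 else 0.0
    print(n, b(n), closed)  # quadrature matches 8/(pi n^3) odd, 0 even
```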
Quick Reference: Key Formulas
First-Order Methods
- Separable: integral dy/h(y) = integral g(x) dx
- Linear: y = (1/mu) integral mu Q dx, mu = e^(int P dx)
- Exact: F_x = M, F_y = N, solution F = C
- Bernoulli: v = y^(1-n) linearizes
Second-Order Formulas
- Char. roots: r = (-b +/- sqrt(b^2-4ac)) / (2a)
- Wronskian: W = y1 y2' - y2 y1'
- VoP: u1' = -y2 f/W, u2' = y1 f/W
- Abel: W(x) = W(x0) exp(-int P dx)
Laplace Essentials
- L(f') = s F(s) - f(0)
- L(f'') = s^2 F(s) - s f(0) - f'(0)
- L(f*g) = F(s) G(s) (convolution)
- L(u(t-a) f(t-a)) = e^(-as) F(s)
PDE Solutions
- Heat: u = sum b_n sin(n pi x/L) e^(-k(n pi/L)^2 t)
- Wave: u = F(x+ct) + G(x-ct)
- Laplace (disk): u = sum r^n [A_n cos(n th) + B_n sin(n th)]
- Separation constant links X''/X = T'/(kT) = -lambda
Frequently Asked Questions
What is the difference between an ODE and a PDE?
An ordinary differential equation (ODE) involves a function of a single independent variable and its ordinary derivatives. A partial differential equation (PDE) involves a function of two or more independent variables and partial derivatives with respect to each. ODEs govern dynamics of lumped systems (a single point mass, a circuit), while PDEs govern distributed systems (temperature in a rod, pressure in a fluid, displacement in a membrane). The solution theory for PDEs is substantially richer and more complex than for ODEs.
When should I use Laplace transforms vs. undetermined coefficients?
Use undetermined coefficients when the right-hand side is a polynomial, exponential, sine, cosine, or product of these, and the equation has constant coefficients. The method is fast once you recognize the form. Use Laplace transforms when: initial conditions are given (Laplace incorporates them automatically), the right-hand side is piecewise-defined or involves a Heaviside step function or Dirac delta, or when you need the particular solution for a specific initial value problem rather than the general solution. For systems of ODEs with initial conditions, Laplace transforms are especially powerful.
Why does the Frobenius method involve logarithms in some cases?
Logarithms appear when the two roots of the indicial equation are equal, or when they differ by a positive integer and a certain compatibility condition fails. In these cases, the second solution cannot be expressed as a pure power series — the standard Frobenius ansatz produces only one independent solution. The second solution is obtained by differentiating the first solution with respect to the root parameter r and evaluating at the repeated root, which naturally introduces ln(x). Bessel functions Y_n (integer order) and the second solutions of Legendre's equation at the singular points x = +/- 1 are canonical examples.
What makes the heat equation fundamentally different from the wave equation?
The heat equation is parabolic and irreversible: solutions instantly become smooth for any t greater than 0, energy dissipates over time, and information propagates at infinite speed (a temperature perturbation anywhere in the rod is felt everywhere instantaneously, though exponentially small at large distances). The wave equation is hyperbolic and time-reversible: solutions preserve the regularity of initial data (no smoothing), energy is conserved, and disturbances travel at finite speed c. The heat equation's high-frequency modes decay exponentially fast (factor exp(-k n^2 t)), while the wave equation's modes oscillate without decay.
How do you determine if a nonlinear equilibrium is stable?
For a hyperbolic equilibrium (Jacobian eigenvalues have nonzero real parts), the Hartman-Grobman theorem guarantees the nonlinear stability matches the linearization: stable if all eigenvalues have negative real parts, unstable if any eigenvalue has positive real part. For non-hyperbolic equilibria (some eigenvalues purely imaginary, as in the Lotka-Volterra center), linearization is inconclusive and one needs a Lyapunov function — a positive definite function V(x,y) whose time derivative along solutions is non-positive (for stability) or negative definite (for asymptotic stability). Finding a Lyapunov function is an art; energy functions and conserved quantities are natural candidates.
Continue Learning
Real Analysis
Rigorous foundations: completeness, sequences, continuity, integration
Linear Algebra
Matrices, eigenvalues, vector spaces — essential tools for systems of ODEs
Fourier Analysis
Fourier series, transforms, and spectral methods for PDE solutions
Complex Analysis
Contour integration, residues, and conformal mapping for PDEs