MATH 20D cheat sheet
I don't know why it works. Just trust it.
Assumes x or t is the independent variable and y is dependent.
Separable equations
\frac{dy}{dx} = g(x) \, p(y)
Solve
- Multiply by dx, divide by p(y).
- Integrate both sides.
  \int \frac{1}{p(y)} \, dy = \int g(x) \, dx
- Solve for y if possible.
- y = D is a hidden solution when p(D) = 0.
Identify
Find g(x) and p(y).
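For example (a standard textbook case), with g(x) = x and p(y) = y:
\frac{dy}{dx} = xy \implies \int \frac{dy}{y} = \int x \, dx \implies \ln|y| = \frac{x^2}{2} + C \implies y = Ae^{x^2/2}
The hidden solution is y = 0, since p(0) = 0 (we divided by y).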
Linear equations
\frac{dy}{dx} + p(x) \, y = Q(x)
Solve
- Calculate the integrating factor. Ignore C.
  \mu(x) = e^{\int p(x) \, dx}
- Multiply both sides by \mu(x). Note that \mu'(x) = \mu(x) \, p(x). You need to show this step.
  \mu y' + \mu' y = (\mu y)' = \mu(x) \, Q(x)
- Integrate to solve for y.
  y = \frac{1}{\mu} \int \mu(x) \, Q(x) \, dx
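For example, y' + 2y = e^{-x} has p(x) = 2 and Q(x) = e^{-x}:
\mu = e^{\int 2 \, dx} = e^{2x}, \quad (e^{2x} y)' = e^{x}, \quad e^{2x} y = e^x + C, \quad y = e^{-x} + Ce^{-2x}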
Identify and uniqueness
y and its derivatives can't be multiplied by each other or appear inside another function; they can only be multiplied by something in terms of x.
An initial value problem has a unique solution on an interval where p and Q are continuous.
Exact equations
M(x, y) + N(x, y) \cdot \frac{dy}{dx} = 0
Solve
- F = \int M \, dx + g(y)
- N = \frac{\partial F}{\partial y} = \text{(something)} + g'(y)
  Find g'(y). You have two expressions for N.
- g(y) = \int g'(y) \, dy + C
- Plug g(y) into F. Then F(x, y) = C is an implicit solution.
Identify
Show that
\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}
If not, then multiply both M and N by a special integrating factor \mu, and find what \mu makes the equation exact. The form of \mu is either given (HW 2 Q3) or
- a function only of x. Ignore C.
  \mu = e^{\displaystyle \int \dfrac{\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}}{N} \, dx}
- a function only of y. Ignore C.
  \mu = e^{\displaystyle \int \dfrac{\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}}{M} \, dy}
\mu may cause gained or lost solutions. We aren't really told how, but it probably has to do with things like \mu's denominator being zero.
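For example, 2xy + x^2 \, \frac{dy}{dx} = 0 has M = 2xy and N = x^2, with \frac{\partial M}{\partial y} = 2x = \frac{\partial N}{\partial x}, so it's already exact:
F = \int 2xy \, dx + g(y) = x^2 y + g(y), \quad \frac{\partial F}{\partial y} = x^2 + g'(y) = N = x^2 \implies g'(y) = 0
So x^2 y = C is an implicit solution.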
Homogeneous equations
ay'' + by' + cy = 0
Solve
- Find the roots of the characteristic equation.
  ar^2 + br + c = 0
- y(t) = C_1 e^{r_1 t} + C_2 e^{r_2 t}
- If there is a repeated root, multiply the repeated root's term by t.
  y(t) = C_1 e^{rt} + C_2 t e^{rt} + C_3 t^2 e^{rt}
If the roots are complex r = \alpha \pm i\beta, they want a real solution.
- e^{(\alpha + i\beta)t} = e^{\alpha t} e^{i\beta t} = e^{\alpha t} (\cos \beta t + i \sin \beta t)
  You should show this use of Euler's formula.
- y(t) = C_1 e^{\alpha t} \cos \beta t + C_2 e^{\alpha t} \sin \beta t
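For example, y'' + 2y' + 5y = 0:
r^2 + 2r + 5 = 0 \implies r = -1 \pm 2i \implies y(t) = C_1 e^{-t} \cos 2t + C_2 e^{-t} \sin 2t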
Uniqueness and springs
For a second-order equation with a \neq 0, an initial value problem has a unique solution. This also works for higher-order equations.
Spring word problems use this equation for the spring's motion, with mass m, damping constant b, and stiffness constant k.
my'' + by' + ky = F_{ext}(t)
Nonhomogeneous equations
ay'' + by' + cy = f(t)
The following finds a particular solution y_p(t).
Undetermined coefficients
- Find the solution y_c(t) for the homogeneous part.
  ay_c'' + by_c' + cy_c = 0
- Guess y_p based on the terms in f(t) and its derivatives. This sometimes doesn't work, so you'll have to use the other method.
- Multiply terms by t until there is no repeated term between y_p and y_c.
- Solve for the coefficients of y_p by plugging it into the original equation.
  ay_p'' + by_p' + cy_p = f(t)
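For example, y'' - y = e^t has y_c = C_1 e^t + C_2 e^{-t}, so the guess Ae^t repeats a term of y_c. Multiply by t and plug in:
y_p = Ate^t \implies y_p'' - y_p = 2Ae^t = e^t \implies A = \tfrac{1}{2} \implies y_p = \tfrac{1}{2} te^t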
Variation of parameters
- Find the complementary solution y_c(t) to find y_1 and y_2. Drop the constants.
  ay_c'' + by_c' + cy_c = 0 \to y_c(t) = y_1 + y_2
- Set up a system for v_1' and v_2'.
  \begin{cases} v_1' y_1 + v_2' y_2 = 0 \\ v_1' y_1' + v_2' y_2' = \dfrac{f(t)}{a} \end{cases}
- Find v_1 and v_2. Drop the integration constants.
  v_1 = \int v_1' \, dt, \quad v_2 = \int v_2' \, dt
- y_p = v_1 y_1 + v_2 y_2
This also works with variable-coefficient equations.
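For example, y'' + y = \sec t (undetermined coefficients can't handle \sec t): here y_1 = \cos t, y_2 = \sin t, and the system gives
v_1' = -\sin t \sec t = -\tan t, \quad v_2' = \cos t \sec t = 1 \implies v_1 = \ln|\cos t|, \quad v_2 = t
y_p = \cos t \ln|\cos t| + t \sin t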
Then, to find the general solution,
y(t) = y_c(t) + y_p(t)
Cauchy–Euler equations
at^2 y'' + bty' + cy = f(t)
Solve (homogeneous)
- Find the roots of the characteristic equation.
  ar^2 + (b - a)r + c = 0
- y(t) = C_1 t^{r_1} + C_2 t^{r_2}
- If r = \alpha + i\beta,
  y(t) = C_1 t^\alpha \cos(\beta \ln t) + C_2 t^\alpha \sin(\beta \ln t)
- If r is repeated,
  y(t) = C_1 t^r + C_2 t^r \ln t
- If the equation is nonhomogeneous, use variation of parameters.
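For example, t^2 y'' - 2y = 0 has a = 1, b = 0, c = -2:
r^2 - r - 2 = 0 \implies r = 2, -1 \implies y(t) = C_1 t^2 + C_2 t^{-1}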
Variable-coefficient equations' uniqueness
y''(t) + p(t) \, y'(t) + q(t) \, y(t) = g(t)
An initial value problem of the above form has a unique solution if p, q, and g are continuous on an interval (a, b) containing the initial point.
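For example, if p(t) = \frac{1}{t} and q, g are continuous everywhere, an initial condition at t_0 = 1 has a unique solution on (0, \infty), since \frac{1}{t} is continuous there.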
Linear independence
Method 1
- Let C_1 \vec{x_1} + \dots + C_n \vec{x_n} = 0.
- Show C_1 = \dots = C_n = 0. One way to do this is to use the matrix form, then determine whether its inverse exists by calculating its determinant. If it's nonzero for some t, then the vectors are linearly independent. If it's always zero, you'll have to use a different approach.
  \begin{bmatrix} \vec{x_1} & \dots & \vec{x_n} \end{bmatrix} \begin{bmatrix} C_1 \\ \vdots \\ C_n \end{bmatrix} = 0 \to \det \begin{bmatrix} \vec{x_1} & \dots & \vec{x_n} \end{bmatrix} \neq 0
Method 2
This only works if the problem says the vectors are solutions of \vec{x}' = A\vec{x}.
- Show that for every t (for solutions, the Wronskian is either always zero or never zero, so checking one t is enough),
  w(\vec{x_1}, \dots, \vec{x_n}) = \det \begin{bmatrix} x_{11}(t) & \dots & x_{n1}(t) \\ \vdots & & \vdots \\ x_{1n}(t) & \dots & x_{nn}(t) \end{bmatrix} \neq 0
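For example,
\vec{x_1} = \begin{bmatrix} e^t \\ e^t \end{bmatrix}, \quad \vec{x_2} = \begin{bmatrix} e^{-t} \\ -e^{-t} \end{bmatrix} \implies w = \det \begin{bmatrix} e^t & e^{-t} \\ e^t & -e^{-t} \end{bmatrix} = -1 - 1 = -2 \neq 0
so they're linearly independent.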
Linear systems with a constant-coefficient matrix
\vec{x}' = A\vec{x}
Solve
- Solve for the eigenvalues r.
  \det(A - rI) = 0
If r_1 \neq r_2,
- For each eigenvalue r, find its eigenvector \vec{u}.
  (A - rI) \vec{u} = 0
- \vec{x}(t) = C_1 e^{r_1 t} \vec{u_1} + \dots + C_n e^{r_n t} \vec{u_n}
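For example, A = \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}:
\det(A - rI) = (1 - r)(2 - r) = 0 \implies r_1 = 1, \; r_2 = 2
(A - I)\vec{u_1} = 0 \implies \vec{u_1} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad (A - 2I)\vec{u_2} = 0 \implies \vec{u_2} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
\vec{x}(t) = C_1 e^t \begin{bmatrix} 1 \\ 0 \end{bmatrix} + C_2 e^{2t} \begin{bmatrix} 1 \\ 1 \end{bmatrix}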
For complex eigenvalues r_1 = \alpha + i\beta,
- You only need to find one eigenvector \vec{u_1} = \vec{a} + i\vec{b} because \vec{u_2} = \vec{a} - i\vec{b}.
- Use Euler's formula and collect the real and imaginary terms.
  \begin{aligned} e^{(\alpha + i\beta) t} \vec{u_1} &= e^{\alpha t} (\cos \beta t + i \sin \beta t) \vec{u_1} \\ &= e^{\alpha t} \left( \underbrace{\left(\cos(\beta t) \vec{a} - \sin(\beta t) \vec{b} \right)}_{\vec{x_1}} + i \underbrace{\left(\cos(\beta t) \vec{b} + \sin(\beta t) \vec{a} \right)}_{\vec{x_2}} \right) \end{aligned}
- \vec{x}(t) = C_1 e^{\alpha t} \vec{x_1} + C_2 e^{\alpha t} \vec{x_2}
If there is a repeated eigenvalue on the homework or midterm, you are probably wrong. But for your information,
- Solve for \vec{v}.
  (A - rI) \vec{v} = \vec{u}
- The second solution is \vec{x_2}(t) = te^{rt} \vec{u} + e^{rt} \vec{v} (the first is \vec{x_1}(t) = e^{rt} \vec{u}).
Fundamentals
With a general solution
\vec{x}(t) = C_1 \vec{x_1} + \dots + C_n \vec{x_n}
the fundamental set is
\mathscr{S} = \{ \vec{x_1}, \dots, \vec{x_n} \}
and the fundamental matrix is
X(t) = \begin{bmatrix}
\vec{x_1} (t) & \dots & \vec{x_n} (t)
\end{bmatrix}
Laplace transform
\mathscr{L}\{f\}(s) = F(s) = \int_0^\infty e^{-st} f(t) \,dt = \lim_{N \to \infty} \int_0^N e^{-st} f(t) \,dt
Examples
\begin{aligned}
f(t) &\longleftrightarrow F(s) \\
1 &\longleftrightarrow \frac{1}{s} && \text{for} ~ s > 0 \\
e^{at} &\longleftrightarrow \frac{1}{s - a} && \text{for} ~ s > a \\
\sin bt &\longleftrightarrow \frac{b}{s^2 + b^2} && \text{for} ~ s > 0 \\
\cos bt &\longleftrightarrow \frac{s}{s^2 + b^2} \\
\end{aligned}
Properties
\begin{aligned}
f(t) &\longleftrightarrow F(s) && \text{for} ~ s > \alpha \\
af_1(t) + bf_2(t) &\longleftrightarrow aF_1(s) + bF_2(s) \\
e^{at}f(t) &\longleftrightarrow F(s - a) && \text{for} ~ s > \alpha + a \\
f'(t) &\longleftrightarrow sF(s) - f(0) && \text{for} ~ s > \alpha \\
f^{(n)}(t) &\longleftrightarrow s^nF(s) - s^{n-1} f(0) - s^{n-2} f'(0) - \dots - f^{(n-1)}(0) \\
t^n f(t) &\longleftrightarrow (-1)^n \frac{d^n F(s)}{ds^n} \\
\end{aligned}
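For example, combining the table with the t^n f(t) property:
\mathscr{L}\{t e^{at}\}(s) = -\frac{d}{ds} \frac{1}{s - a} = \frac{1}{(s - a)^2} \quad \text{for} ~ s > a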
Partial fractions
For \dfrac{p(s)}{Q(s)},
- Nonrepeated linear factors. If the roots are all distinct,
  \frac{p(s)}{Q(s)} = \frac{A_1}{s - r_1} + \dots + \frac{A_n}{s - r_n}
- Repeated linear factors. If Q(s) = (s - r)^m,
  \frac{p(s)}{Q(s)} = \frac{A_1}{s - r} + \dots + \frac{A_m}{(s - r)^m}
- Quadratic factors. If Q(s) = \left((s - \alpha)^2 + \beta^2\right)^m,
  \frac{p(s)}{Q(s)} = \frac{A_1(s - \alpha) + B_1 \beta}{(s-\alpha)^2 + \beta^2} + \dots + \frac{A_m(s - \alpha) + B_m \beta}{\left((s-\alpha)^2 + \beta^2\right)^m}
The test will not ask for more than these three cases.
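For example, with distinct roots r_1 = 0 and r_2 = -1:
\frac{1}{s(s + 1)} = \frac{A_1}{s} + \frac{A_2}{s + 1} = \frac{1}{s} - \frac{1}{s + 1}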
Method
- If the initial value condition is w(k) where k \neq 0, then let
  \begin{cases} y(t) = w(t + k) \\ y'(t) = w'(t + k) \\ y''(t) = w''(t + k) \end{cases}
- Take the Laplace transform of both sides.
  a\left(s^2 Y - sy(0) - y'(0)\right) + b(sY - y(0)) + cY = \mathscr{L}\{f\}
- Apply properties to solve for Y(s).
- Apply an inverse transform to find y(t). Don't forget w(t) = y(t - k) if necessary.
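For example, y'' + y = 0 with y(0) = 0 and y'(0) = 1:
s^2 Y - s \cdot 0 - 1 + Y = 0 \implies Y = \frac{1}{s^2 + 1} \implies y(t) = \sin t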
Power series
Ratio test
For
\sum_{n=0}^\infty a_n (x - x_0)^n
the radius of convergence
L = \lim_{n \to \infty} \left| \frac{a_n}{a_{n + 1}} \right|
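For example, for \sum_{n=0}^\infty \frac{x^n}{n!},
L = \lim_{n \to \infty} \left| \frac{1/n!}{1/(n+1)!} \right| = \lim_{n \to \infty} (n + 1) = \infty
so it converges for all x.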
Taylor series
At x = 0:
f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} x^n
Singular points
y'' + p(x) y' + q(x) y = 0
The singular points are the points where p or q isn't defined.
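For example, (1 - x^2) y'' - 2x y' + 2y = 0 becomes y'' - \frac{2x}{1 - x^2} y' + \frac{2}{1 - x^2} y = 0, so the singular points are x = \pm 1.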
Method
- Plug in
  \begin{cases} y(x) = \displaystyle\sum\limits_{n=0}^\infty a_n x^n \\ y'(x) = \displaystyle\sum\limits_{n=1}^\infty n a_n x^{n-1} \\ y''(x) = \displaystyle\sum\limits_{n=2}^\infty n(n-1) a_n x^{n-2} \end{cases}
- Simplify by moving all the terms inside summations.
- Shift indices so that all summations are in terms of x^n.
- Merge all summations into a single summation in terms of x^n. Set this all equal to the right-hand side.
- Solve for a_n. Kisun Lee wants us to collect terms into the following form:
  y(x) = a_0 \underbrace{(\dots)}_{y_1} + a_1 \underbrace{(\dots)}_{y_2}
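For example, y'' + y = 0: after shifting and merging,
\sum_{n=0}^\infty \left( (n+2)(n+1) a_{n+2} + a_n \right) x^n = 0 \implies a_{n+2} = \frac{-a_n}{(n+2)(n+1)}
y(x) = a_0 \underbrace{\left( 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots \right)}_{y_1} + a_1 \underbrace{\left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots \right)}_{y_2}
(which you might recognize as a_0 \cos x + a_1 \sin x).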