Joe Herbert - Further Maths

# Further Maths

## Pure

• Formula
• Vectors
• Matrices
• Complex Numbers
• Proof
• Further Algebra
• Further Functions
• Numerical Methods
• Hyperbolic Functions
• Polar Coordinates
• Further Calculus

## Formula

Formula booklet given
Formula to learn

## Vectors

A vector has both magnitude and direction. Notation: $$\overrightarrow{AB} \text{ or } \underset{\sim}{a} \text{ or } \begin{pmatrix}x\\y\end{pmatrix}$$

### Magnitude

The magnitude of 2D vector $\underset{\sim}{a} = \begin{pmatrix}x\\y\end{pmatrix}$ is $|\underset{\sim}{a}| = \sqrt{x^2 + y^2}$
The magnitude of 3D vector $\underset{\sim}{a} = \begin{pmatrix}x\\y\\z\end{pmatrix}$ is $|\underset{\sim}{a}| = \sqrt{x^2 + y^2 + z^2}$
Multiplying a vector by a scalar alters its magnitude but not its direction.
A unit vector is a vector whose magnitude is 1.
e.g. $\begin{pmatrix}3\\4\end{pmatrix}$ in unit vectors is $\frac{1}{5}\begin{pmatrix}3\\4\end{pmatrix}$
$\underset{\sim}{a}$ in unit vectors is $\frac{1}{|\underset{\sim}{a}|}\underset{\sim}{a}$
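The normalisation rule above can be checked numerically; a minimal sketch using NumPy, with the (3, 4) example from above:

```python
import numpy as np

# Normalising a vector: divide it by its magnitude.
# Uses the (3, 4) example from above, whose magnitude is 5.
a = np.array([3.0, 4.0])
magnitude = np.linalg.norm(a)   # sqrt(3^2 + 4^2) = 5
unit_a = a / magnitude          # the unit vector (1/5)(3, 4)
```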

### Parallel Vectors

If $\underset{\sim}{a}$ is a vector, then $\lambda\underset{\sim}{a}$ is a vector parallel to $\underset{\sim}{a}$, where $\lambda$ is a scalar.

### Cartesian Straight Lines

i = unit vector in x-axis = $\begin{pmatrix}1\\0\\0\end{pmatrix}$
j = unit vector in y-axis = $\begin{pmatrix}0\\1\\0\end{pmatrix}$
k = unit vector in z-axis = $\begin{pmatrix}0\\0\\1\end{pmatrix}$
xi + yj + zk = $\begin{pmatrix}x\\y\\z\end{pmatrix}$ e.g. $\begin{pmatrix}-2\\0\\3\end{pmatrix} = -2i + 3k$

### Dot Product

The dot product (or scalar product) $\underset{\sim}{a} \cdot \underset{\sim}{b}$ of two vectors is the sum of the products of the components. In 2D, vector $\underset{\sim}{a} = \begin{pmatrix}a_1\\a_2\end{pmatrix}$, vector $\underset{\sim}{b} = \begin{pmatrix}b_1\\b_2\end{pmatrix}$, $\underset{\sim}{a} \cdot \underset{\sim}{b} = a_1b_1 + a_2b_2$
In 3D, vector $\underset{\sim}{a} = \begin{pmatrix}a_1\\a_2\\a_3\end{pmatrix}$, vector $\underset{\sim}{b} = \begin{pmatrix}b_1\\b_2\\b_3\end{pmatrix}$, $\underset{\sim}{a} \cdot \underset{\sim}{b} = a_1b_1 + a_2b_2 + a_3b_3$

### Angle Between Two Vectors

$\underset{\sim}{a} \cdot \underset{\sim}{b} = |\underset{\sim}{a}| \times |\underset{\sim}{b}| \times \cos \theta$ where $\theta$ is the angle between the two vectors.
To find the angle between two lines, find the angle between their direction vectors, not their position vectors, and take the acute angle.
If two vectors are perpendicular their dot product is 0.
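Both facts can be checked numerically; a minimal sketch using NumPy (the example vectors are arbitrary illustrations, not from the notes):

```python
import numpy as np

# cos(theta) = (a . b) / (|a| |b|)
a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.degrees(np.arccos(cos_theta))   # 45 degrees for these vectors

# Perpendicular vectors have a dot product of 0:
perp_dot = np.dot(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```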

### Converting between Vector and Parametric Equations

If you have $\underset{\sim}{r} = \begin{pmatrix}3\\2\\1\end{pmatrix} + t \begin{pmatrix}-3\\4\\-1\end{pmatrix}$, and $\underset{\sim}{r}$ is a general position vector, then
$\begin{pmatrix}x\\y\\z\end{pmatrix} = \begin{pmatrix}3\\2\\1\end{pmatrix} + t \begin{pmatrix}-3\\4\\-1\end{pmatrix}$
$x = 3 - 3t$
$y = 2 + 4t$
$z = 1 - t$
This is the parametric equation of the line.

### Converting between Parametric and Cartesian Equations

Rearrange to make $t$ the subject.
$t = \frac{3-x}{3}$
$t = \frac{y-2}{4}$
$t = 1 - z$
$\frac{3-x}{3} = \frac{y-2}{4} = 1 - z$ is the equation of this line.
To convert between vector and Cartesian equations you must convert to parametric form in between.

### Intersection of Lines

Parallel lines have direction vectors that are scalar multiples of each other and do not intersect.
Intersecting lines have different direction vectors and there is one point that lies on both lines.
Skew lines have different direction vectors and don't intersect. These are only possible in 3D.
To find the intersection of two lines, convert them to parametric equations, then solve the $x$ and $y$ equations simultaneously to find the two scalar parameters. Substitute back to find $x$ and $y$, then substitute in to find $z$ (checking both lines give the same value).
e.g. Find the intersection of the lines $r = i + j + k + \lambda (2i + j + k)$ and $r = -i - 3j + \mu (2i + 2j + k)$
Parametric Equations for x values: $x = 1 + 2 \lambda = -1 + 2 \mu$
Parametric Equations for y values: $y = 1 + \lambda = -3 + 2 \mu$
$\therefore \lambda = 2$ and $\mu = 3$
$z = 1 + \lambda = 3$ and $z = \mu = 3$. Equations have the same z values which shows they're not skew lines and they do intersect.
$\therefore$ the lines intersect at (5, 3, 3).
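The worked example can be re-checked numerically; a sketch using NumPy to solve the $x$ and $y$ component equations for $\lambda$ and $\mu$:

```python
import numpy as np

# x: 1 + 2*lam = -1 + 2*mu   ->   2*lam - 2*mu = -2
# y: 1 +   lam = -3 + 2*mu   ->     lam - 2*mu = -4
A = np.array([[2.0, -2.0],
              [1.0, -2.0]])
rhs = np.array([-2.0, -4.0])
lam, mu = np.linalg.solve(A, rhs)   # lam = 2, mu = 3

# Substitute back into both lines; matching points confirm they intersect.
p1 = np.array([1.0, 1.0, 1.0]) + lam * np.array([2.0, 1.0, 1.0])
p2 = np.array([-1.0, -3.0, 0.0]) + mu * np.array([2.0, 2.0, 1.0])
```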

### Minimum Distance between Line and Point

The minimum distance is the magnitude of the vector from the point perpendicular to the line.
To find this:
• Find the parametrics of the lines.
• Find the general vector by subtracting the position vector of the point from the parametrics.
• This general vector must be perpendicular to the original line, so their dot product is 0.
• Solve this to find the scalar of the direction vector.
• Plug this back into the general vector to get the vector between the line and the point.
• Then find the magnitude of this vector to get the distance.
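The steps above can be sketched in code (the line and point here are arbitrary illustrations, not from the notes):

```python
import numpy as np

a = np.array([0.0, 0.0, 0.0])   # position vector of a point on the line
d = np.array([1.0, 0.0, 0.0])   # direction vector of the line
p = np.array([2.0, 3.0, 4.0])   # the external point

# General vector from p to the line: (a + t*d) - p.  Setting its dot
# product with d to zero gives t = (p - a).d / d.d
t = np.dot(p - a, d) / np.dot(d, d)
foot = a + t * d                      # foot of the perpendicular
distance = np.linalg.norm(foot - p)   # minimum distance
```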

### Distance between Two Parallel Lines

This is done in the same way as between a point and a line. Find a point on one of the lines by equating the scalar to 0 and then do the normal method.

### Cross Product

The cross product of $\underset{\sim}{a} \times \underset{\sim}{b}$ gives a vector perpendicular to both $\underset{\sim}{a}$ and $\underset{\sim}{b}$.
To find the cross product, expand this 3x3 determinant along its top row:
$$\underset{\sim}{a} \times \underset{\sim}{b} = \begin{vmatrix}i & j & k\\a_1 & a_2 & a_3\\b_1 & b_2 & b_3\end{vmatrix} = i \begin{vmatrix}a_2 & a_3\\b_2 & b_3\end{vmatrix} - j \begin{vmatrix}a_1 & a_3\\b_1 & b_3\end{vmatrix} + k \begin{vmatrix}a_1 & a_2\\b_1 & b_2\end{vmatrix}$$
For vectors $\underset{\sim}{a}$ and $\underset{\sim}{b}$, and angle $\theta$ between them:
$|\underset{\sim}{a} \times \underset{\sim}{b}| = |\underset{\sim}{a}||\underset{\sim}{b}|\sin \theta$
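A quick NumPy check of both properties (arbitrary example vectors, chosen at 90° so the magnitude is easy to predict):

```python
import numpy as np

a = np.array([2.0, 0.0, 0.0])
b = np.array([0.0, 3.0, 0.0])
c = np.cross(a, b)   # perpendicular to both a and b

# |a x b| = |a||b|sin(theta); here theta = 90 degrees, so |c| = 2*3 = 6
magnitude = np.linalg.norm(c)
```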

### Area of Vector Triangles and Parallelograms

Area of a triangle with sides $\underset{\sim}{a}$ and $\underset{\sim}{b}$: $A = \frac{1}{2}|\underset{\sim}{b} \times \underset{\sim}{a}|$
Area of a parallelogram with sides $\underset{\sim}{a}$ and $\underset{\sim}{b}$: $A = |\underset{\sim}{b} \times \underset{\sim}{a}|$

### Alternate Equation of a Line

The cross product of two vectors in the same direction is 0. For any vector $\underset{\sim}{r}$ on the line, $\underset{\sim}{r}-\underset{\sim}{a}$ is the same direction as vector $\underset{\sim}{b}$:
$(\underset{\sim}{r} - \underset{\sim}{a}) \times \underset{\sim}{b} = 0$

### Distance Between Two Skew Lines

Method 1 - Using Dot Product:
• Find the general vector between the two lines.
• Both directions must be perpendicular to the general vector so their dot products must be 0.
• Solve these to find the scalars.
• Plug the scalars back into the general vector and the shortest distance is the magnitude of this vector.
Method 2 - Using Cross Product:
The shortest vector must be parallel to the cross product.
• Find the cross product of the two directions.
• Find the general vector and set it equal to $k$ times the cross product, as they are parallel.
• Solve this to find $k$.
• Find the magnitude of $k$ times the cross product to get the distance.
Method 3 - The Formula
Consider $\underset{\sim}{r_1} = p_1 + \lambda d_1$ and $\underset{\sim}{r_2} = p_2 + \mu d_2$
Distance between skew lines: $$\frac{|(p_1 - p_2) \cdot (d_1 \times d_2)|}{|d_1 \times d_2|}$$
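Method 3 can be written as a short function; a sketch with two arbitrary skew lines (the x-axis, and a line through (0, 0, 1) in the y direction, which are clearly distance 1 apart):

```python
import numpy as np

def skew_distance(p1, d1, p2, d2):
    # |(p1 - p2) . (d1 x d2)| / |d1 x d2|
    n = np.cross(d1, d2)
    return abs(np.dot(p1 - p2, n)) / np.linalg.norm(n)

dist = skew_distance(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0]))
```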

### Normal Vector

The normal vector to a plane is a direction vector perpendicular to the plane at every point.

### Vector Planes

Let $a$ be the position vector of a fixed point in the plane.
Let $n$ be the normal vector to the plane at $a$.
Let $r$ be the position vector of any point in the plane.
The vector from the point with position vector $a$ to the point with position vector $r$ is $r - a$.
As $n$ is normal, and $r - a$ is a vector in the plane, the angle between them is 90°.
$\therefore (r-a) \cdot n = 0$
$r \cdot n - a \cdot n = 0$
$r \cdot n = a \cdot n$
e.g. Find the vector equation of a plane with normal $\begin{pmatrix}1\\2\\-1\end{pmatrix}$ through point $\begin{pmatrix}1\\2\\3\end{pmatrix}$:
$r \cdot \begin{pmatrix}1\\2\\-1\end{pmatrix} = \begin{pmatrix}1\\2\\3\end{pmatrix} \cdot \begin{pmatrix}1\\2\\-1\end{pmatrix}$
$r \cdot \begin{pmatrix}1\\2\\-1\end{pmatrix} = 2$. This is the equation of the plane.

### Cartesian Planes

Substituting $\begin{pmatrix}x\\y\\z\end{pmatrix}$ as $\underset{\sim}{r}$ allows us to find a cartesian equation for a plane.
e.g. If $r \cdot \begin{pmatrix}1\\2\\-1\end{pmatrix} = 2$ then
$\begin{pmatrix}x\\y\\z\end{pmatrix} \cdot \begin{pmatrix}1\\2\\-1\end{pmatrix} = 2$, giving us $x + 2y - z = 2$

Finding the normal from a cartesian equation:
The co-efficients of $x$, $y$ and $z$ give the normal.
e.g. $3x + 2y - 4z \rightarrow \begin{pmatrix}3\\2\\-4\end{pmatrix}$
e.g. $5x + 2z \rightarrow \begin{pmatrix}5\\0\\2\end{pmatrix}$

### Two Vectors In A Plane

One vector isn't enough to define the unique direction of a plane.
A plane can be written as $\underset{\sim}{r} = a + \lambda b + \mu c$ where $\lambda, \mu ∈ \mathbb R$, $a$ is the position vector of a point in the plane, and $b$ and $c$ are direction vectors completely contained within the plane.
You can find the normal by taking $n = b \times c$

### Minimum Distance Between a Point and a Plane

To find the distance:
• Find the normal vector to the plane.
• Find the vector equation with the given point as the position vector and the direction vector as the normal vector.
• Calculate where that vector intersects the plane.
• Find the distance between the points.
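The steps above can be sketched with the plane $r \cdot \begin{pmatrix}1\\2\\-1\end{pmatrix} = 2$ from the earlier example and an arbitrary external point:

```python
import numpy as np

n = np.array([1.0, 2.0, -1.0])   # normal to the plane r.n = 2
d = 2.0
p = np.array([3.0, 3.0, 1.0])    # arbitrary external point

# The line p + t*n meets the plane when (p + t*n).n = d
t = (d - np.dot(p, n)) / np.dot(n, n)
foot = p + t * n                      # intersection with the plane
distance = np.linalg.norm(foot - p)   # equals |p.n - d| / |n|
```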

### Angle Between Two Planes

The angle between two planes is equal to the angle between their normal vectors.

## Matrices

The dimensions of matrices are given by (the no. of rows x the no. of columns).
A matrix with one column is a vector.
A matrix is square if it has the same number of rows as columns.
A zero matrix is one in which all its elements are 0.
An identity matrix $I$ is a square matrix which has 1s in the leading diagonal (starting top left) and 0 elsewhere: $$\begin{pmatrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}$$ $AI = IA = A$ for all matrices $A$

### Matrix Multiplication

Start with the first row and first column, and sum the products of each pair.
Then repeat with the first row and second column, then for the next row and first column, and for the second column, etc.
e.g. $$\begin{pmatrix}1 & 0 & 3 & -2\\2 & 8 & 4 & 3\\7 & -1 & 0 & 2\end{pmatrix} \begin{pmatrix}5 & 1\\1 & 7\\0 & 3\\8 & -3\end{pmatrix} = \begin{pmatrix} & \\ & \\ & \end{pmatrix}$$
(1 x 5) + (0 x 1) + (3 x 0) + (-2 x 8) = -11
$$\begin{pmatrix}1 & 0 & 3 & -2\\2 & 8 & 4 & 3\\7 & -1 & 0 & 2\end{pmatrix} \begin{pmatrix}5 & 1\\1 & 7\\0 & 3\\8 & -3\end{pmatrix} = \begin{pmatrix} \color{red}{-11} & \\ & \\ & \end{pmatrix}$$
(1 x 1) + (0 x 7) + (3 x 3) + (-2 x -3) = 16
$$\begin{pmatrix}1 & 0 & 3 & -2\\2 & 8 & 4 & 3\\7 & -1 & 0 & 2\end{pmatrix} \begin{pmatrix}5 & 1\\1 & 7\\0 & 3\\8 & -3\end{pmatrix} = \begin{pmatrix} -11 & \color{red}{16}\\ & \\ & \end{pmatrix}$$
(2 x 5) + (8 x 1) + (4 x 0) + (3 x 8) = 42
$$\begin{pmatrix}1 & 0 & 3 & -2\\2 & 8 & 4 & 3\\7 & -1 & 0 & 2\end{pmatrix} \begin{pmatrix}5 & 1\\1 & 7\\0 & 3\\8 & -3\end{pmatrix} = \begin{pmatrix} -11 & 16\\\color{red}{42} & \\ & \end{pmatrix}$$
(2 x 1) + (8 x 7) + (4 x 3) + (3 x -3) = 61
$$\begin{pmatrix}1 & 0 & 3 & -2\\2 & 8 & 4 & 3\\7 & -1 & 0 & 2\end{pmatrix} \begin{pmatrix}5 & 1\\1 & 7\\0 & 3\\8 & -3\end{pmatrix} = \begin{pmatrix} -11 & 16\\42 & \color{red}{61}\\ & \end{pmatrix}$$
(7 x 5) + (-1 x 1) + (0 x 0) + (2 x 8) = 50
$$\begin{pmatrix}1 & 0 & 3 & -2\\2 & 8 & 4 & 3\\7 & -1 & 0 & 2\end{pmatrix} \begin{pmatrix}5 & 1\\1 & 7\\0 & 3\\8 & -3\end{pmatrix} = \begin{pmatrix} -11 & 16\\42 & 61\\\color{red}{50} & \end{pmatrix}$$
(7 x 1) + (-1 x 7) + (0 x 3) + (2 x -3) = -6
$$\begin{pmatrix}1 & 0 & 3 & -2\\2 & 8 & 4 & 3\\7 & -1 & 0 & 2\end{pmatrix} \begin{pmatrix}5 & 1\\1 & 7\\0 & 3\\8 & -3\end{pmatrix} = \begin{pmatrix} -11 & 16\\42 & 61\\50 & \color{red}{-6}\end{pmatrix}$$
Two matrices A and B can be multiplied (as AB) only if the number of columns in A equals the number of rows in B. The dimensions of AB will be (the rows of A) x (the columns of B). This means only square matrices can be raised to a power.
Matrix multiplication is not commutative, so AB $\ne$ BA.
Matrix multiplication is associative, so (AB)C = A(BC).
Matrix multiplication distributes over addition, so A(B + C) = AB + AC.
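These three rules are easy to check numerically with small arbitrary matrices:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])

not_commutative = not np.array_equal(A @ B, B @ A)   # AB != BA in general
associative = np.array_equal((A @ B) @ C, A @ (B @ C))
distributive = np.array_equal(A @ (B + C), A @ B + A @ C)
```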

In an adjacency matrix, the number in the $i^{th}$ row and $j^{th}$ column is the number of edges directly connecting node $i$ to node $j$.

### Transpose

$A^T$ is the transpose of a matrix A, where the rows and columns are interchanged. An m x n matrix becomes n x m dimensions. e.g.
$$\begin{pmatrix}1 & 2 & 3\\4 & 5 & 6\end{pmatrix}^T = \begin{pmatrix}1 & 4\\2 & 5\\3 & 6\end{pmatrix}$$ A matrix is symmetric if $A^T = A$. All symmetric matrices must be square.
If you transpose a matrix twice, you get back to where you started: $(A^T)^T = A$.
In general $A^TA \ne AA^T$, but both $A^TA$ and $AA^T$ are symmetric matrices.
$(AB)^T = B^T A^T$

### Determinant of a Matrix

$A = \begin{pmatrix}a & b\\c & d\end{pmatrix}, det(A) = |A| = ad - bc$
If det(A) = 0, A is a singular matrix and doesn't have an inverse.
If det(A) $\ne 0$, A is a non-singular matrix and it has an inverse.

### Determinant of a 3D Matrix

$\begin{vmatrix}a & b & c\\d & e & f\\g & h & i\end{vmatrix} = a\begin{vmatrix}e & f\\h & i\end{vmatrix} - b\begin{vmatrix}d & f\\g & i\end{vmatrix} + c\begin{vmatrix}d & e\\g & h\end{vmatrix}$

### Inverse 2x2 Matrices

If $A = \begin{pmatrix}a & b\\c & d\end{pmatrix}$ then $A^{-1} = \frac{1}{det(A)}\begin{pmatrix}d & -b\\-c & a\end{pmatrix}$
If $Ax = y$, then $x = A^{-1}y$
$AA^{-1} = A^{-1}A = I$

### Inverse 3x3 Matrices

1. Find det(A)
2. Form a matrix of minors, M. The Minor of an element is the determinant of the 2x2 matrix that remains after the row and column containing that element have been crossed out.
3. Form a matrix of cofactors, C, by changing all the signs in the pattern $\begin{pmatrix}+ & - & +\\- & + & -\\+ & - & +\end{pmatrix}$
4. $A^{-1} = \frac{1}{det(A)} C^T$
If the determinant is zero there is no inverse matrix.
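The four steps can be implemented directly and checked against `np.linalg.inv`; a sketch using an arbitrary non-singular matrix:

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])

# 1. Determinant
det = np.linalg.det(A)

# 2. Matrix of minors: determinant of the 2x2 left after
#    deleting row i and column j
minors = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
        minors[i, j] = np.linalg.det(sub)

# 3. Matrix of cofactors: apply the alternating sign pattern
signs = np.array([[1, -1, 1], [-1, 1, -1], [1, -1, 1]])
cofactors = signs * minors

# 4. Inverse = (1/det) * transpose of the cofactor matrix
A_inv = cofactors.T / det
```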

### Matrices for Simultaneous Equations

$\begin{pmatrix}a & b & c\\d & e & f\\g & h & i\end{pmatrix} \begin{pmatrix}x\\y\\z\end{pmatrix} = \begin{pmatrix}j\\k\\l\end{pmatrix}$ gives three simultaneous equations: $\begin{matrix}ax + by + cz = j\\dx + ey + fz = k\\gx + hy + iz = l\end{matrix}$
A consistent system of equations is one which has one or more solutions.

### Transformations

A transformation that can be represented by matrix multiplication is called a linear transformation. A transformation can be represented by matrix multiplication when the centre of rotation or centre of enlargement is the origin, or the line of reflection passes through the origin.
To transform a shape using a matrix you multiply the vectors representing the vertices by that matrix and the resultant vectors represent the co-ordinates of the image.
To undo a transformation, transform the shape using the inverse matrix.
The determinant of the matrix is the scale factor of area or volume between the original shape and its image.

Finding Transformation Matrices
Use the standard basis vectors to find the transformation matrix.
• Anticlockwise rotation by $\theta$ about the origin: $\begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta\end{pmatrix}$
• Reflection in the line $y = (\tan \theta) x$: $\begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta\end{pmatrix}$
• Reflection in the x-axis: $\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}$
• Reflection in the y-axis: $\begin{pmatrix} -1 & 0 \\ 0 & 1\end{pmatrix}$
• Enlargement by scale factor $a$: $\begin{pmatrix} a & 0 \\ 0 & a\end{pmatrix}$
• Stretch by scale factor $a$ parallel to the x-axis: $\begin{pmatrix} a & 0 \\ 0 & 1\end{pmatrix}$
• Stretch by scale factor $a$ parallel to the y-axis: $\begin{pmatrix} 1 & 0 \\ 0 & a\end{pmatrix}$
Doing a transformation $A$ followed by a transformation $B$ is the same as doing the single transformation $BA$ (work from right to left).
The matrices representing rotations about the axes in 3D are in the formula booklet.
Reflections in planes normal to the axes in 3D:
$\begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0\\0 & 0 & 1\end{pmatrix}$ for a reflection in $x = 0$
$\begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0\\0 & 0 & 1\end{pmatrix}$ for a reflection in $y = 0$
$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0\\0 & 0 & -1\end{pmatrix}$ for a reflection in $z = 0$

### Factorising Determinants

You can factorise out any factor of an entire row or column from a determinant.
$\begin{vmatrix} a & a & a \\ 2 & 0 & 4\\0 & 1 & 3\end{vmatrix} = a\begin{vmatrix} 1 & 1 & 1 \\ 2 & 0 & 4\\0 & 1 & 3\end{vmatrix}$
You can add or subtract any amount of any other row to any row without changing the determinant. The same can be done with columns. e.g. applying $C1 \rightarrow C1 - C3$ and $C2 \rightarrow C2 - C3$:
$\begin{vmatrix} 1 & 1 & 1 \\ x & y & 1\\x^2 & y^2 & 1\end{vmatrix} = \begin{vmatrix} 0 & 0 & 1 \\ x-1 & y-1 & 1\\x^2-1 & y^2-1 & 1\end{vmatrix}$

### Eigenvalues and Eigenvectors

An eigenvector is a vector that stays in the same direction after a matrix transformation.
An eigenvalue is the factor by which that vector is stretched. A negative eigenvalue means the vector is reversed (it still lies along the same line, so it is considered the same direction).
A normalised eigenvector is one with magnitude 1. To normalise a vector just divide all the components by the magnitude of that vector.
Equations:
$Av = \lambda v$, where $v$ is an eigenvector of a matrix $A$ and $\lambda$ is the corresponding eigenvalue.
The characteristic equation for a matrix is: $det(A-\lambda I) = 0$
These can be used to find all the eigenvalues and eigenvectors.
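A sketch verifying $Av = \lambda v$ with NumPy (arbitrary matrix; note that `np.linalg.eig` returns the eigenvectors already normalised, as columns):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

v0 = eigenvectors[:, 0]           # first eigenvector (a column)
lhs = A @ v0                      # A v
rhs = eigenvalues[0] * v0         # lambda v
unit_length = np.linalg.norm(v0)  # normalised, so this is 1
```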

### Invariant Lines

An eigenvector will give an invariant line through the origin. If the corresponding eigenvalue is 1 then it will be a line of invariant points.
To find the Cartesian equations of the invariant lines passing through the origin, for each eigenvalue form equations and solve for $y$ from:
$M \begin{pmatrix} x\\y\end{pmatrix} = \lambda \begin{pmatrix} x\\y\end{pmatrix}$

### Diagonalising a Matrix

Matrix $M = U D U^{-1}$ where $D$ is the diagonal matrix and $U$ is formed from the eigenvectors.
1. Find the eigenvalues of M
2. Put the eigenvalues in the leading diagonal of a new matrix, with 0s everywhere else, to get $D$
3. Find the corresponding eigenvectors for the eigenvalues and put these as the columns of $U$, in the same order as the eigenvalues were in $D$
4. Find the inverse of U
$M^{n} = (UDU^{-1})^n = UD^nU^{-1}$
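A sketch of this shortcut for powers, checked against `np.linalg.matrix_power` (the matrix is arbitrary but diagonalisable):

```python
import numpy as np

M = np.array([[1.0, -1.0],
              [0.0, 2.0]])

# Steps 1-3: eig gives the eigenvalues for D and the eigenvectors for U
eigenvalues, U = np.linalg.eig(M)
D = np.diag(eigenvalues)

# M^5 = U D^5 U^{-1}; D^5 just raises each diagonal entry to the 5th power
M5 = U @ np.linalg.matrix_power(D, 5) @ np.linalg.inv(U)
```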

## Complex Numbers

$\sqrt{-1} = i$

### Facts

If $f(x)$ is a polynomial with real coefficients, and $z = a+bi$ is a root of $f(x)$, then $z^* = a-bi$ is also a root of $f(x)$; $z^*$ is the complex conjugate of $z$.
The modulus of a complex number $z = a+bi$ is given by $|z| = \sqrt{a^2+b^2}$.
The argument of a complex number $z$ is given by $arg(z) = \theta$, where $\tan(\theta) = \frac{b}{a}$ (taking account of the quadrant $z$ lies in).
A point can be written as $(x, y)$, $(r, \theta)$ or $r(\cos\theta + i\sin\theta)$ where $|z| = r$ and $\theta = arg(z)$.
If $z = a+bi$, then $z \times z^* = a^2 + b^2 = |z|^2$ and $z+z^* = 2a$.
If $z_1 = (r_1,\theta_1)$ and $z_2 = (r_2, \theta_2)$, then $z_1z_2 = (r_1r_2, \theta_1+\theta_2)$ and $\frac{z_1}{z_2} = (\frac{r_1}{r_2}, \theta_1-\theta_2)$.
If $a + bi = c + di$, where $a, b, c, d ∈ \mathbb R$, then $a=c$ and $b=d$.

### Argand Diagrams

Argand Diagrams have the real part on the x-axis and the imaginary part on the y-axis. $|z| = k$ represents a circle with centre $O$ and radius $k$. $|z-z_1|=k$ represents a circle with centre $z_1$ and radius $k$.
$|z-z_1| = |z-z_2|$ represents a straight line, the perpendicular bisector of the line joining points $z_1$ and $z_2$.
$arg(z) = \alpha$ represents the half line through $O$ inclined at an angle $\alpha$ to the positive direction of $x$. $arg(z-z_1) = \alpha$ represents the half line through the point $z_1$ inclined at an angle $\alpha$ to the positive direction of $x$.

### More Facts

De Moivre's Theorem says: $$(\cos\theta + i\sin\theta)^n = \cos(n\theta) + i\sin(n\theta)$$ If $z=\cos\theta + i\sin\theta$, then $z^n + \frac{1}{z^n} = 2\cos(n\theta)$ and $z^n - \frac{1}{z^n} = 2i\sin(n\theta)$. $e^{i\theta} = \cos\theta + i\sin\theta \therefore$ complex numbers can be written as $z = re^{i\theta}$. $z^n = r^n e^{in\theta}$.

### Roots of Unity

The cube roots of unity are the solutions that satisfy the equation $z^3 - 1 = 0$. The cube roots of unity are $1, w, w^2$ where:$$w^3 = 1$$$$1+w+w^2=0$$ and the non-real roots are $\frac{-1\pm\sqrt{3}i}{2}$.
$e^0=1$.
$e^{2\pi i}=1$.
$e^{4\pi i}=1$.
In general, $e^{2k\pi i}=1$ for any integer $k$.
If $z^n=1$ then $z^n = e^{2k\pi i}$, so the equation $z^n=1$ has roots $z=e^\frac{{2k\pi i}}{n}$ where $k = 0, 1, ..., n-1$. On an Argand Diagram, the roots all lie on the circle $|z| = 1$ and are equally spaced around the circle at intervals of $\frac{2\pi}{n}$, starting at $(1, 0)$.
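The $n$-th roots of unity can be generated and checked with Python's built-in `cmath` (here $n = 5$, an arbitrary choice):

```python
import cmath

n = 5
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

on_unit_circle = all(abs(abs(z) - 1) < 1e-12 for z in roots)  # |z| = 1
satisfy_zn = all(abs(z ** n - 1) < 1e-9 for z in roots)       # z^n = 1
root_sum = sum(roots)                                         # sums to 0
```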

## Proof

A useful identity for summation proofs by induction: $$\sum_{i=1}^k i^2 = \sum_{i=1}^{k-1} i^2 + k^2$$

### The Four Steps of Induction:

1. Base case: Prove the statement is true for n = 1
2. Assumption: Assume the statement is true for n = k
3. Inductive step: Show that the statement is true for n = k + 1
4. Conclusion: The statement is then true for all positive integers n
An example conclusion would be:
Since true for n = 1 and if true for n = k, true for n = k + 1, $\therefore$ true for all $n ∈ \mathbb N$ by induction.

### Summation Proofs

e.g. Show $\sum_{i=1}^n i^2 = \frac{n}{6}(n+1)(2n+1)$ for all $n ∈ \mathbb N$
1. When n = 1, $\sum_{i=1}^n i^2 = 1$, and $\frac{1}{6}(1+1)(2(1)+1) = \frac{6}{6} = 1$
$1 = 1 \therefore$ statement is true for n = 1
2. Assume true for n = k:
$\sum_{i=1}^k i^2 = \frac{k}{6}(k+1)(2k+1)$
3. When n = k + 1: $$\sum_{i=1}^{k+1} i^2 = \sum_{i=1}^k i^2 + (k+1)^2$$ $$= \frac{k}{6} (k+1)(2k+1) + (k+1)^2$$ $$= \frac{k+1}{6} (k(2k+1) + 6(k+1))$$ $$= \frac{k+1}{6} (2k^2 + 7k + 3)$$ $$= \frac{k+1}{6} (2k + 3)(k + 2)$$ $$= \frac{k+1}{6} (2(k+1) + 1)((k+1) + 1)$$ This is the same as what we're trying to show, with k + 1 instead of n. Hence true when n = k + 1.
4. Since true for n = 1 and if true for n = k, true for n = k + 1, $\therefore$ true for all $n ∈ \mathbb N$ by induction.

### Divisibility Proofs

e.g. Prove by induction that $3^{2n} + 11$ is divisible by 4 for all $n ∈ \mathbb Z^+$.
1. When $n = 1, 3^{2 \times 1} + 11 = 3^2 + 11 = 20$ which is divisible by 4
2. Assume that for $n = k, f(k) = 3^{2k} + 11$ is divisible by 4 for all $k ∈ \mathbb Z^+$
3. $$f(k+1) = 3^{2(k+1)} + 11$$ $$= 3^{2k} \times 3^2 + 11$$ $$= 9(3^{2k})+11$$ $$= 3^{2k} + 11 + 8(3^{2k})$$ $$= f(k) + 8(3^{2k})$$ $\therefore$ f(n) is divisible by 4 when n = k+1 since f(k) is divisible by 4 and 8 is divisible by 4.
4. Since true for n = 1 and if true for n = k, true for n = k + 1, $\therefore$ true for all $n ∈ \mathbb Z^+$ by induction.
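Induction is the proof; still, a quick numerical sanity check of the claim over the first few values of $n$ is easy (a check, not a proof):

```python
# 3^(2n) + 11 should be divisible by 4 for every positive integer n
divisible = all((3 ** (2 * n) + 11) % 4 == 0 for n in range(1, 21))
```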

### Recurrence Relations

e.g. Given that $u_{n+1} = 3u_n + 4$ and $u_1 = 1$, prove by induction that $u_n = 3^n - 2$.
1. When $n = 1, u_1 = 3^1 - 2 = 1$ as given.
2. Assume that for $n = k, u_k = 3^k - 2$ is true for $k ∈ \mathbb Z^+$
3. Then $$u_{k+1} = 3u_k + 4$$ $$= 3(3^k - 2) + 4$$ $$= 3^{k+1} - 6 + 4$$ $$= 3^{k+1} - 2$$
4. Since true for n = 1 and if true for n = k, true for n = k + 1, $\therefore$ true for all $n ∈ \mathbb Z^+$ by induction.

### Second Order Recurrence Relations

e.g. Given that $u_{n+2} = 5u_{n+1} - 6u_n$ and $u_1 = 13, u_2 = 35$, prove by induction that $u_n = 2^{n+1} + 3^{n+1}$.
1. When n = 1, $u_1 = 2^2 + 3^2 = 13$ as given
When n = 2, $u_2 = 2^3 + 3^3 = 35$ as given
2. Assume that for n = k and n = k+1, $u_k = 2^{k+1} + 3^{k+1} \text{ and } u_{k+1} = 2^{k+2} + 3^{k+2}$ are true for $k ∈ \mathbb Z^+$
3. Let n = k+1 $$u_{k+2} = 5u_{k+1} - 6u_k$$ $$= 5(2^{k+1} + 3^{k+1}) - 6(2^k + 3^k)$$ $$= 5 \times 2^{k+1} + 5 \times 3^{k+1} - 3 \times 2 \times 2^k - 2 \times 3 \times 3^k$$ $$= 5 \times 2^{k+1} + 5 \times 3^{k+1} - 3 \times 2^{k+1} - 2 \times 3^{k+1}$$ $$= 2 \times 2^{k+1} + 3 \times 3^{k+1} = 2^{k+1+1} + 3^{k+1+1}$$
4. Since true for n = 1 and n = 2, and if true for n = k and n = k + 1, true for n = k + 2, $\therefore$ true for all $n ∈ \mathbb Z^+$

### Matrix Proofs

e.g. Prove by induction that $\begin{pmatrix}1 & -1\\0 & 2\end{pmatrix}^n = \begin{pmatrix}1 & 1-2^n\\0 & 2^n\end{pmatrix}$ for all $n ∈ \mathbb Z^+$.
1. $$\begin{pmatrix}1 & -1\\0 & 2\end{pmatrix}^1 = \begin{pmatrix}1 & -1\\0 & 2\end{pmatrix}$$ $$\begin{pmatrix}1 & 1-2^1\\0 & 2^1\end{pmatrix} = \begin{pmatrix}1 & -1\\0 & 2\end{pmatrix}$$ $\therefore$ true for n = 1
2. Assume true for n = k so $\begin{pmatrix}1 & -1\\0 & 2\end{pmatrix}^k = \begin{pmatrix}1 & 1-2^k\\0 & 2^k\end{pmatrix}$
3. When n = k+1, $$\begin{pmatrix}1 & -1\\0 & 2\end{pmatrix}^{k+1} = \begin{pmatrix}1 & -1\\0 & 2\end{pmatrix}^k \begin{pmatrix}1 & -1\\0 & 2\end{pmatrix} = \begin{pmatrix}1 & 1-2^k\\0 & 2^k\end{pmatrix}\begin{pmatrix}1 & -1\\0 & 2\end{pmatrix}$$ $$= \begin{pmatrix}1 & 1-2^{k+1}\\0 & 2^{k+1}\end{pmatrix}$$ $\therefore$ true when n = k + 1.
4. Since true for n = 1 and if true for n = k, true for n = k + 1 $\therefore$ true for all $n ∈ \mathbb Z^+$

## Further Algebra

### Roots of Polynomials

For Quadratics: If $ax^2+bx+c$ has roots $\alpha$ and $\beta$ then $$\alpha + \beta = - \frac{b}{a}$$$$\alpha \beta = \frac{c}{a}$$ For Cubics: If $ax^3 + bx^2 + cx + d$ has roots $\alpha, \beta$ and $\gamma$ then $$\alpha + \beta + \gamma = -\frac{b}{a}$$$$\alpha \beta \gamma = -\frac{d}{a}$$$$\alpha\beta + \beta\gamma + \alpha\gamma = \frac{c}{a}$$ In General: Adding the roots gives $-\frac{b}{a}$. Multiplying the roots gives $\frac{z}{a}$ for even degree polynomials and $-\frac{z}{a}$ for odd degree polynomials, where z is the constant at the end.

### Series Summation Formulae

$$\sum_{r=1}^n r = \frac{n(n+1)}{2}$$ $$\sum_{r=1}^n r^2 = \frac{1}{6} n (n+1)(2n+1)$$ $$\sum_{r=1}^n r^3 = \frac{1}{4} n^2 (n+1)^2$$ If $a$ is a constant:$$\sum a f(r) = a \sum f(r)$$ $$\sum (f(r) + g(r)) = \sum f(r) + \sum g(r)$$ $$\sum_{r=k}^n f(r) = \sum_{r=1}^n f(r) - \sum_{r=1}^{k-1} f(r)$$ $$\sum_{r=1}^n 1 = n$$
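The three standard results can be checked against direct summation (an arbitrary choice of $n$):

```python
n = 50
sum_r = sum(r for r in range(1, n + 1))
sum_r2 = sum(r ** 2 for r in range(1, n + 1))
sum_r3 = sum(r ** 3 for r in range(1, n + 1))

f1 = n * (n + 1) // 2                # formula for the sum of r
f2 = n * (n + 1) * (2 * n + 1) // 6  # formula for the sum of r^2
f3 = (n * (n + 1) // 2) ** 2         # formula for the sum of r^3
```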

### Series Summation Method of Differences

For $\sum_{r=1}^n u_r = u_1 + u_2 + ... + u_{n-1} + u_n$, if we can write $u_r = f(r) - f(r+1)$ for some function f, then: $$\sum_{r=1}^n u_r = f(1) - f(2) + f(2) - f(3) + ... + f(n) - f(n+1)$$$$= f(1) - f(n+1)$$ If instead $u_r = f(r+1) - f(r)$, the sum telescopes to $f(n+1) - f(1)$.

### Maclaurin Series

Expansions of functions such as $e^x, \sin x, \cos x$ and $\ln(1+x)$ are called Maclaurin series expansions.
Conditions of Maclaurin series:
• The function f(x) can be expressed in the form $a_0 + a_1x + a_2x^2 + a_3x^3 + ...$
• The function can be differentiated term by term
• The function and all of its derivatives exist at $x=0$
• The function must converge
The Maclaurin series for a function f(x) is given by $$f(x) = f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!}x^3 + ... + \frac{f^{(r)} (0)}{r!} x^r + ...$$ $e^x: \space 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \frac{x^5}{120} + ...$
$\sin x: \space x - \frac{x^3}{6} + \frac{x^5}{120} - ...$
$\cos x: \space 1 - \frac{x^2}{2} + \frac{x^4}{24} - \frac{x^6}{720} + ...$
$\ln (1+x): \space x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \frac{x^6}{6} + ...$
$(1+x)^n: \space 1 + nx + \frac{1}{2} n(n-1)x^2 + \frac{1}{6}n(n-1)(n-2)x^3 + \frac{1}{24} n(n-1)(n-2)(n-3)x^4 + \frac{1}{120}n(n-1)(n-2)(n-3)(n-4)x^5 + ...$
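The truncated expansions above can be compared with the exact functions at a small $x$, where the neglected terms are tiny:

```python
import math

x = 0.1
sin_series = x - x ** 3 / 6 + x ** 5 / 120
cos_series = 1 - x ** 2 / 2 + x ** 4 / 24 - x ** 6 / 720
exp_series = sum(x ** k / math.factorial(k) for k in range(6))
```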

### L'Hôpital's Rule

$$\text{If } \lim_{x\to a} \frac{f(x)}{g(x)} \text{ has the indeterminate form } \frac{0}{0} \text{ or } \frac{\infty}{\infty} \text{ then:}$$$$\lim_{x\to a} \frac{f(x)}{g(x)} = \lim_{x\to a} \frac{f'(x)}{g'(x)}$$

## Further Functions

### Asymptotes

Vertical asymptotes occur when the denominator of a fraction is 0, or in certain other cases like with logarithms.
Horizontal asymptotes occur if the graph converges to a particular value of $y$ as $x \to \pm \infty$. When you have a fraction, divide by the largest power of $x$ in the denominator to help determine the behaviour as $x \to \pm \infty$. Anything of the form $\frac{\text{constant}}{\text{some power of }x}$ will tend to zero.
Oblique asymptotes occur if the graph almost follows a straight line as $x \to \pm \infty$. They occur when the degree of the numerator is one more than the degree of the denominator. To find them, divide by the largest power in the denominator.

### Turning Points

You can find the turning point of $y=\frac{ax^2 + bx + c}{dx^2 + ex + f}$ without using differentiation, by using the fact that at a turning point the curve just touches the horizontal line through that $y$ value. So the discriminant of the equation formed when the curve is solved simultaneously with $y=k$ will be zero.
e.g. Find the turning point of $y=\frac{2}{x^2-4x}$ without using calculus.
$$\frac{2}{x^2-4x} = k$$ $$2 = k(x^2 - 4x)$$ $$kx^2 - 4kx - 2 = 0$$ $$(-4k)^2 - 4(k)(-2) = 0$$ $$16k^2 + 8k = 0$$ $$k=0 \text{ or } k = -\frac{1}{2}$$ 0 is the asymptote so the turning point must be when $y = -\frac{1}{2}$
$$-\frac{1}{2} = \frac{2}{x^2 - 4x} \therefore x = 2$$ $$\therefore \text{turning point is at } (2, -\frac{1}{2})$$

### Reciprocal Function

For a function $f(x)$ the reciprocal of the function is $\frac{1}{f(x)}$.
When you sketch the graph of $y = \frac{1}{f(x)}$ apply the following rules:
• A zero of $f(x)$ will become a vertical asymptote of $\frac{1}{f(x)}$
• A vertical asymptote of $f(x)$ will become a zero of $\frac{1}{f(x)}$
• The sign of $f(x)$ is the same as the sign of $\frac{1}{f(x)}$
• For all $x$ such that $f(x) = 1, \frac{1}{f(x)} = 1$ as well so the graphs of $y=f(x)$ and $y=\frac{1}{f(x)}$ will intersect here
• A maximum of $f(x)$ will become a minimum of $\frac{1}{f(x)}$ and vice versa

### Parabolas

$y^2 = 4ax$ or $x^2 = 4ay$
The focus of a parabola is the point to which incoming rays parallel to the axis of symmetry are reflected. The coordinates of the focus are $(a, 0)$ for $y^2 = 4ax$, or $(0, a)$ for $x^2 = 4ay$.

### Ellipses

$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$
The ellipse crosses the x-axis at $(\pm a, 0)$ and the y-axis at $(0, \pm b)$.

### Hyperbola

$\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1 \text { or } \frac{y^2}{a^2} - \frac{x^2}{b^2} = 1$
The first form crosses the x-axis at $(\pm a, 0)$ and the second crosses the y-axis at $(0, \pm a)$; neither crosses the other axis.
A hyperbola has oblique asymptotes. The oblique asymptotes of a hyperbola which is symmetric about the axes are $y=\pm \frac{b}{a}x$.
Rectangular Hyperbola
A rectangular hyperbola is one whose asymptotes are perpendicular, therefore:
$\frac{b}{a} \times \left(-\frac{b}{a}\right) = -1$ so $\frac{b^2}{a^2} = 1$ so $b^2 = a^2$.
Therefore, if both $a^2$ and $b^2$ are equal to the same number, $c^2$, the equation of a rectangular hyperbola is: $x^2 - y^2 = c^2$
Unsymmetrical Hyperbola
$y = \frac{k}{x}$ is a rectangular hyperbola but not symmetrical about the axes.

## Numerical Methods

### Mid-Ordinate Rule

The mid-ordinate rule is a method of estimating the area under a curve when it cannot be integrated.
The area is estimated by splitting it into rectangles, where the height of each rectangle is the $y$-value of the curve at the midpoint of that strip.
Formula: $$\int^b_a y \space dx \approx h(y_{\frac{1}{2}} + y_{\frac{3}{2}} + ... + y_{n-\frac{3}{2}} + y_{n-\frac{1}{2}})$$ where $h = \frac{b-a}{n}$ $h$ = the width of each rectangle
$b$ = the upper bound
$a$ = the lower bound
$n$ = the number of rectangles
$y_k$ = the $y$ value after k rectangles

e.g. Use the mid-ordinate rule with four strips to find an estimate for $\int^5_1 \ln x \space dx$, giving your answer to 3 sig figs.
$$h=\frac{b-a}{n}=\frac{5-1}{4}=1$$
| $x$ values | Mid-$x$ values | Mid-ordinates ($y = \ln x$) |
| --- | --- | --- |
| $x_0 = 1$, $x_1 = 2$ | $x_{\frac{1}{2}} = 1.5$ | $y_{\frac{1}{2}} = \ln 1.5 = 0.405465...$ |
| $x_1 = 2$, $x_2 = 3$ | $x_{\frac{3}{2}} = 2.5$ | $y_{\frac{3}{2}} = \ln 2.5 = 0.916291...$ |
| $x_2 = 3$, $x_3 = 4$ | $x_{\frac{5}{2}} = 3.5$ | $y_{\frac{5}{2}} = \ln 3.5 = 1.252762...$ |
| $x_3 = 4$, $x_4 = 5$ | $x_{\frac{7}{2}} = 4.5$ | $y_{\frac{7}{2}} = \ln 4.5 = 1.504077...$ |

The $x$ values used are $h$ apart. $$\int^5_1 \ln x \space dx \approx h(y_{\frac{1}{2}} + y_{\frac{3}{2}} + ... + y_{n-\frac{3}{2}} + y_{n-\frac{1}{2}})$$ $$\approx 1 \times (0.405465 + 0.916291 + 1.252762 + 1.504077)$$ $$\approx 4.078596 \approx 4.08$$ More strips will give a more accurate answer.
It is most accurate when there is a constant gradient.
You will get an underestimate if the curve is concave upwards.
You will get an overestimate if the curve is concave downwards.
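The rule is easy to check numerically. A minimal sketch in Python (the function name `mid_ordinate` is my own):

```python
import math

def mid_ordinate(f, a, b, n):
    """Estimate the integral of f over [a, b] with n mid-ordinate rectangles."""
    h = (b - a) / n
    # The height of each rectangle is the y-value at the midpoint of its strip.
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Worked example from above: integral of ln(x) from 1 to 5 with 4 strips.
estimate = mid_ordinate(math.log, 1, 5, 4)
print(round(estimate, 2))  # 4.08
```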

### Simpson's Rule

$$\int^b_a y \space dx \approx \frac{1}{3} h \{(y_0 + y_n) + 4(y_1 + y_3 + ... + y_{n-1}) + 2(y_2 + y_4 + ... + y_{n-2})\}$$ where $h = \frac{b-a}{n}$ and $n$ is even.
All letters represent the same as for the mid-ordinate rule.

e.g. Use Simpson's Rule with 5 ordinates (4 strips) to find an approximation to $\int^3_1 \frac{1}{\sqrt{1+x^3}} dx$ giving your answer to 3 sig figs.
$$h=\frac{b-a}{n}=\frac{3-1}{4}=\frac{1}{2}$$
| $x$ values | $y$ values |
| --- | --- |
| $x_0 = 1$ | $y_0 = \frac{1}{\sqrt{1 + 1^3}} = 0.707107$ |
| $x_1 = \frac{3}{2}$ | $y_1 = \frac{1}{\sqrt{1 + \left(\frac{3}{2}\right)^3}} = 0.478091$ |
| $x_2 = 2$ | $y_2 = \frac{1}{\sqrt{1 + 2^3}} = 0.333333$ |
| $x_3 = \frac{5}{2}$ | $y_3 = \frac{1}{\sqrt{1 + \left(\frac{5}{2}\right)^3}} = 0.245256$ |
| $x_4 = 3$ | $y_4 = \frac{1}{\sqrt{1 + 3^3}} = 0.188982$ |
$y_0 + y_n = 0.896089$
$y_1 + y_3 = 0.723347$
$$\int^3_1 \frac{1}{\sqrt{1+x^3}} dx \approx \frac{1}{3} h \{(y_0 + y_n) + 4(y_1 + y_3) + 2(y_2)\}$$ $$\approx \frac{1}{3} \times \frac{1}{2} [(0.896089) + 4(0.723347) + 2(0.333333)]$$ $$\approx 0.742691 \approx 0.743$$
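Simpson's Rule can be sketched the same way (the function name is my own):

```python
import math

def simpson(f, a, b, n):
    """Simpson's rule with n strips (n must be even)."""
    assert n % 2 == 0, "Simpson's rule needs an even number of strips"
    h = (b - a) / n
    ys = [f(a + k * h) for k in range(n + 1)]
    return h / 3 * (ys[0] + ys[n]
                    + 4 * sum(ys[1:n:2])        # odd-numbered ordinates
                    + 2 * sum(ys[2:n - 1:2]))   # even interior ordinates

# Worked example from above: 5 ordinates (4 strips) on 1/sqrt(1 + x^3).
estimate = simpson(lambda x: 1 / math.sqrt(1 + x**3), 1, 3, 4)
print(round(estimate, 3))  # 0.743
```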

### Euler's Method

Euler's Method approximates a curve with a series of straight lines when you have an expression for $\frac{dy}{dx}$. If you have a value of $y$ for a given $x$, you can use this to estimate the value of $y$ at another $x$.
For when $\frac{dy}{dx}$ is a function of $x$ only, $\frac{dy}{dx} = f(x)$, the recurrence relations for a small step $h$ are: $$y_{n+1} = y_n + hf(x_n), \quad x_{n+1} = x_n + h$$ For when $\frac{dy}{dx}$ is a function of both $x$ and $y$, $\frac{dy}{dx} = f(x, y)$: $$y_{n+1} = y_n + hf(x_n, y_n)$$ $h$ = the step size

e.g. Consider a function such that $f(1) = 2$ and $f'(x) = 6x^2 + 2x$. Use Euler's method with step size $\frac{1}{2}$ to find an approximation of $f(3)$:
| $x_n$ | $y_n$ | $f'(x_n) = 6x_n^2 + 2x_n$ |
| --- | --- | --- |
| 1 | 2 | $6 \times 1^2 + 2 \times 1 = 8$ |
| 1.5 | $2 + 0.5 \times 8 = 6$ | $6 \times 1.5^2 + 2 \times 1.5 = 16.5$ |
| 2 | $6 + 0.5 \times 16.5 = 14.25$ | $6 \times 2^2 + 2 \times 2 = 28$ |
| 2.5 | $14.25 + 0.5 \times 28 = 28.25$ | $6 \times 2.5^2 + 2 \times 2.5 = 42.5$ |
| 3 | $28.25 + 0.5 \times 42.5 = 49.5$ | |
$$\therefore f(3) \approx 49.5$$ Using a smaller step size will give an improved approximation.
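Euler's Method is a one-line recurrence; a sketch for the case $\frac{dy}{dx} = f(x)$ (names are my own):

```python
def euler(dydx, x0, y0, h, x_end):
    """Step from (x0, y0) to x_end using y_{n+1} = y_n + h*f(x_n)."""
    x, y = x0, y0
    while x < x_end - 1e-12:  # tolerance guards against floating-point drift
        y += h * dydx(x)
        x += h
    return y

# Worked example from above: f(1) = 2, f'(x) = 6x^2 + 2x, step 0.5, find f(3).
approx = euler(lambda x: 6 * x**2 + 2 * x, 1, 2, 0.5, 3)
print(approx)  # 49.5
```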

### Improved Midpoint Method

$y_{r+1} = y_{r-1} + 2hf(x_r, y_r), \quad x_{r+1} = x_r + h$
You may need to do one iteration of the regular Euler's Method to get the second $y$ value.

## Hyperbolic Functions

### Parametric Equations

If you take any $\theta$ and substitute it into $x = \cos \theta, y = \sin \theta$ you will get a point on the circle $x^2 + y^2 = 1$. This is known as the parametric equation of the circle.
There are similar functions $x = \cosh u, y = \sinh u$ which have the same property for the rectangular hyperbola $x^2 - y^2 = 1$. If you pick any $u$ then you will get a point on that rectangular hyperbola.

### Hyperbolic Functions

• Hyperbolic sine: $\sinh x = \frac{e^x - e^{-x}}{2}$. Domain: $x ∈ \mathbb R$. Range: $y ∈ \mathbb R$
• Hyperbolic cosine: $\cosh x = \frac{e^x + e^{-x}}{2}$. Domain: $x ∈ \mathbb R$. Range: $y ∈ [1, \infty)$
• Hyperbolic tangent: $\tanh x = \frac{\sinh x}{\cosh x} = \frac{e^{2x} - 1}{e^{2x} + 1}$. Domain: $x ∈ \mathbb R$. Range: $y ∈ (-1, 1)$
• Hyperbolic secant: $\text{sech} \space x = \frac{1}{\cosh x} = \frac{2}{e^{x} + e^{-x}}$. Domain: $x ∈ \mathbb R$. Range: $y ∈ (0, 1]$
• Hyperbolic cosecant: $\text{cosech} \space x = \frac{1}{\sinh x} = \frac{2}{e^{x} - e^{-x}}$. Domain: $x ∈ \mathbb R, x \ne 0$. Range: $y ∈ \mathbb R, y \ne 0$
• Hyperbolic cotangent: $\coth x = \frac{1}{\tanh x} = \frac{e^{2x} + 1}{e^{2x} - 1}$. Domain: $x ∈ \mathbb R, x \ne 0$. Range: $y ∈ (-\infty, -1) \cup (1, \infty)$

### Hyperbolic Identities

$\cosh^2 x - \sinh^2 x \equiv 1$
$\text{sech}^2 x \equiv 1 - \tanh^2 x$
$\text{cosech}^2 x \equiv \coth^2 x - 1$
$\cosh 2x \equiv \cosh^2 x + \sinh^2 x$
$\sinh 2x \equiv 2 \sinh x \cosh x$

### Hyperbolic Compound Angle Formulae

$\sinh (A \pm B) \equiv \sinh A \cosh B \pm \cosh A \sinh B$
$\cosh (A \pm B) \equiv \cosh A \cosh B \pm \sinh A \sinh B$
$\tanh (A \pm B) \equiv \frac{\tanh A \pm \tanh B}{1 \pm \tanh A \tanh B}$

### Osborn's Rule

When converting a trigonometric identity by replacing $\sin \rightarrow \sinh$ and $\cos \rightarrow \cosh$, negate any product of two sines (including implied products, e.g. those hidden in $\tan^2 x$).

### Inverse Hyperbolic Functions

$\sinh^{-1}x = \ln (x + \sqrt{x^2 + 1})$ $\cosh^{-1}x = \ln (x + \sqrt{x^2 - 1}), x \ge 1$ $\tanh^{-1}x = \frac{1}{2} \ln (\frac{1+x}{1-x}), |x| \lt 1$
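The logarithmic forms can be checked against Python's built-in inverse hyperbolic functions (a quick sketch, not part of the notes):

```python
import math

# Check each logarithmic form against the library implementation.
for x in [0.5, 1.0, 2.0, 10.0]:
    assert math.isclose(math.asinh(x), math.log(x + math.sqrt(x**2 + 1)))
for x in [1.0, 2.0, 10.0]:  # cosh^-1 needs x >= 1
    assert math.isclose(math.acosh(x), math.log(x + math.sqrt(x**2 - 1)))
for x in [-0.9, 0.5]:       # tanh^-1 needs |x| < 1
    assert math.isclose(math.atanh(x), 0.5 * math.log((1 + x) / (1 - x)))
print("all three identities hold")
```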

### Differentiating Hyperbolic Functions

$\frac{d}{dx} \sinh ax = a\cosh ax$
$\frac{d}{dx} \cosh ax = a\sinh ax$
$\frac{d}{dx} \tanh ax = a\text{sech}^2 ax$
$\frac{d}{dx} \text{cosech} \space ax = -a\text{cosech} \space ax \coth ax$
$\frac{d}{dx} \text{sech} \space ax = -a\text{sech} \space ax \tanh ax$
$\frac{d}{dx} \coth ax = -a\text{cosech}^2 ax$
$\frac{d}{dx} \sinh^{-1} ax = \frac{a}{\sqrt{(ax)^2 + 1}}$
$\frac{d}{dx} \cosh^{-1} ax = \frac{a}{\sqrt{(ax)^2 - 1}}$
$\frac{d}{dx} \tanh^{-1} ax = \frac{a}{1-(ax)^2}$

### Integration

$\int \sinh x \space dx = \cosh x + C$
$\int \cosh x \space dx = \sinh x + C$
$\int \tanh x \space dx = \ln \cosh x + C$
$\int f'(ax+b) \space dx = \frac{1}{a}f(ax+b) + C$
$\int f'(x)[f(x)]^n \space dx = \frac{[f(x)]^{n+1}}{n+1} + C$
$\int \frac{1}{\sqrt{a^2 + x^2}}dx = \text{arcsinh} \left(\frac{x}{a}\right) + C$
$\int \frac{1}{\sqrt{x^2 - a^2}}dx = \text{arccosh} \left(\frac{x}{a}\right) + C$

## Polar Coordinates

Polar coordinates are of the form $(r, \theta)$, where $r$ is the distance from the origin and $\theta$ is the anticlockwise angle from an initial line, often the positive horizontal axis.
$r$ and $\theta$ are like the modulus and argument in the Argand diagram.
There are multiple ways to write the same polar coordinates: you can keep adding $2\pi$ to the angle, or add $\pi$ to the angle and negate $r$.
$r^2=x^2+y^2$
$x=r\cos\theta$
$y=r\sin\theta$

### Maximum and Minimum Distance from Origin

When $\frac{dr}{d\theta}=0$ and $\frac{d^2r}{d\theta^2} \gt 0$ then it is the minimum distance from the origin. When $\frac{dr}{d\theta}=0$ and $\frac{d^2r}{d\theta^2} \lt 0$ then it is the maximum distance from the origin.

### Differentiating Polar Coordinates

When parallel to the initial line, $\frac{dy}{d\theta} = 0$
When perpendicular to the initial line, $\frac{dx}{d\theta} = 0$
A graph with a dimple has 3 tangents perpendicular to the initial line, whereas a circle only has 2.

### Integrating Polar Coordinates

The area of each small sector is approximately $\frac{1}{2} r^2 \space \delta\theta$.
The area bound between a polar curve and two half lines $\theta = \alpha$ and $\theta = \beta$ is $\frac{1}{2}\int^\beta_\alpha r^2 \space d\theta$
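The sector formula can be sanity-checked numerically; for the cardioid $r = 1 + \cos\theta$ the full area is known to be $\frac{3\pi}{2}$. A sketch (the function name is my own):

```python
import math

def polar_area(r, alpha, beta, n=100_000):
    """Approximate (1/2) * integral of r(theta)^2 between alpha and beta
    using a simple midpoint sum."""
    h = (beta - alpha) / n
    return 0.5 * h * sum(r(alpha + (k + 0.5) * h) ** 2 for k in range(n))

# Cardioid r = 1 + cos(theta): area over a full turn is 3*pi/2.
area = polar_area(lambda t: 1 + math.cos(t), 0, 2 * math.pi)
print(round(area, 6))  # 4.712389
```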

## Further Calculus

### Improper Integrals

An integral is improper when the area to be found is unbounded horizontally or vertically.
e.g. Evaluate $\int^\infty_1 \frac{1}{x^2} \space dx$:
$$\int^\infty_1 \frac{1}{x^2} \space dx = \lim_{c\to \infty} \int^c_1 x^{-2} \space dx$$ $$=\lim_{c\to\infty} [-x^{-1}]^c_1$$ $$=\lim_{c\to\infty} (-\frac{1}{c} + 1)$$ $$= 0 + 1$$ $$= 1$$
No Limit
If there is no limit, we say the integral does not have a value. There is no limit if it is oscillating or tends to infinity.
Vertically Unbounded
If there is a vertical asymptote then the integral is vertically unbounded and improper.
When an asymptote is in the middle of a region then split it into two integrals.

### Volumes of Revolution

Imagine a curve on a graph being rotated fully around the x-axis. The resulting shape is called a volume of revolution.
The volume of revolution formed by rotating a portion of a curve between $x = a$ and $x = b$ around the x-axis is given by: $$V = \pi \int^b_a y^2 \space dx$$ To revolve around the y-axis instead of the x-axis, integrate $x^2$ with respect to $y$.
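A numerical check of $V = \pi \int y^2 \space dx$ against a known solid: rotating $y = \sqrt{r^2 - x^2}$ for $-r \le x \le r$ gives a sphere of volume $\frac{4}{3}\pi r^3$. A sketch (names are my own):

```python
import math

def volume_of_revolution(y, a, b, n=100_000):
    """Approximate V = pi * integral of y(x)^2 dx with a midpoint sum."""
    h = (b - a) / n
    return math.pi * h * sum(y(a + (k + 0.5) * h) ** 2 for k in range(n))

# Rotating y = sqrt(r^2 - x^2) for -r <= x <= r gives a sphere of radius r.
r = 2
v = volume_of_revolution(lambda x: math.sqrt(r * r - x * x), -r, r)
print(round(v, 4))  # 33.5103  (= 4/3 * pi * 2^3)
```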

### Average Value of a Function

The average value of a function $f(x)$ over a given interval $[a,b]$ is given by: $f_{avg} = \frac{1}{b-a} \int^b_a f(x) \space dx$ Proof:
Break the interval $[a,b]$ into $n$ equal subdivisions: $$\Delta x = \frac{b-a}{n}$$ Choose a point $x_i$ in each subdivision, $i ∈ [1,n]$. Then the average of the function values is: $$f_{avg} \approx \frac{f(x_1) + f(x_2) + ... + f(x_n)}{n}$$ But $n = \frac{b-a}{\Delta x}$, so: $$f_{avg} \approx \sum_{i=1}^n \frac{f(x_i)\Delta x}{b-a} = \frac{1}{b-a}\sum_{i=1}^n f(x_i)\Delta x$$ As $n \to \infty$ the sum becomes the integral, giving $f_{avg} = \frac{1}{b-a} \int^b_a f(x) \space dx$.
E.g. Find the average value of $R(z) = \sin (2z) e^{1 - \cos (2z)}$ on the interval $[-\pi, \pi]$.
$$\frac{1}{\pi + \pi} \int^{\pi}_{-\pi} \sin (2z) e^{1- \cos (2z)} \space dz$$ $$\text{let } u = 1- \cos (2z)$$ $$dz = \frac{du}{2 \sin(2z)}$$ Both limits $z = \pm\pi$ give $u = 1 - \cos(\pm 2\pi) = 0$, so: $$\frac{1}{2\pi} \int \sin (2z) \space e^u \frac{du}{2 \sin (2z)} = \frac{1}{4\pi} \int^0_0 e^u \space du = 0$$
E.g. If the average value of a function $f(x) = x$ on [3,c] is 10, then find the value of c.
$$10 = \frac{1}{c-3} \int^c_3 x \space dx$$ $$10 = \frac{1}{c-3}\left[\frac{x^2}{2}\right]^c_3$$ $$10 = \frac{1}{c-3}\biggr(\frac{c^2}{2} - \frac{3^2}{2}\biggr)$$ $$10 = \frac{c^2 - 9}{2(c-3)}$$ $$20c - 60 = c^2 - 9$$ $$c^2 - 20c + 51 = 0$$ $$(c-17)(c-3) = 0$$ $$\enclose{updiagonalstrike}{c = 3} \text{ or } c = 17$$

### Differentiate Inverse Trig Functions

$\frac{d}{dx} \text{arcsin} \left(\frac{x}{a}\right) = \frac{1}{\sqrt{a^2 - x^2}}$
$\frac{d}{dx} \text{arccos} \left(\frac{x}{a}\right) = - \frac{1}{\sqrt{a^2 - x^2}}$
$\frac{d}{dx} \left[\frac{1}{a}\text{arctan} \left(\frac{x}{a}\right)\right] = \frac{1}{a^2+x^2}$

### Differential Equations

Differential Equations are equations which include two variables and their derivatives.
Separating Variables
$$\frac{dy}{dx} = f(x)g(y)$$ $$\frac{1}{g(y)}\frac{dy}{dx} = f(x)$$ $$\int \frac{1}{g(y)} \space dy = \int f(x) \space dx$$
General Solutions
General solutions are when we are just given a differential equation so there is a family of equations which could be the final answer.
e.g. General solution to $\frac{dy}{dx} = 2$ is $y = 2x + C$
When you are given values for $y$ and $x$ you can find the particular solution from the general solution.
Reversing Product Rule
You can only reverse the product rule if the coefficient of y is the derivative of the coefficient of $\frac{dy}{dx}$.
Whatever term ends up in front of the $\frac{dy}{dx}$ will be on the front of the $y$ in the integral.
e.g. $$x^3\frac{dy}{dx} + 3x^2 y = \sin x$$ $$\frac{d}{dx}(x^3 y) = \sin x$$ $$x^3 y = \int \sin x \space dx = - \cos x + C$$ $$y = \frac{C - \cos x}{x^3}$$
Integrating Factor
To solve $\frac{dy}{dx} + Py = Q$, you can multiply through by the integrating factor, which produces an equation which you can then use with the reverse product rule trick.
Integrating Factor = $e^{\int P \space dx}$
This rule only works if the original coefficient of $\frac{dy}{dx}$ is 1. If it isn't, you must divide through by the coefficient of $\frac{dy}{dx}$.
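As a sanity check of the method, take a sample equation of my own choosing (not from the notes), $\frac{dy}{dx} + y = x$, so $P = 1$ and $Q = x$. The integrating factor is $e^{\int 1 \space dx} = e^x$, giving $\frac{d}{dx}(e^x y) = xe^x$, so $e^x y = (x-1)e^x + C$ and $y = x - 1 + Ce^{-x}$. A sketch verifying this solution numerically:

```python
import math

# General solution of dy/dx + y = x found via the integrating factor e^x.
def y(x, C=3.0):
    return x - 1 + C * math.exp(-x)

# Verify dy/dx + y = x at a few points using a central difference.
h = 1e-6
for x in [0.0, 0.5, 2.0]:
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dydx + y(x) - x) < 1e-6
print("solution satisfies the ODE")
```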

### Second Order Differential Equations

Second order differential equations include the second derivative, and sometimes the first derivative as well.

A differential equation is homogeneous if the right-hand side is zero.
We know the solution of $a\frac{dy}{dx} + by = 0$ is $y = Ae^{-\frac{b}{a}x}$. Assuming the solution of $a\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy = 0$ is similar, and of the form $Ae^{mx}$:
Let $y = Ae^{mx}$
Then $\frac{dy}{dx} = Ame^{mx}$ and $\frac{d^2y}{dx^2} = Am^2e^{mx}$
Thus $a\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy = aAm^2e^{mx} + bAme^{mx} + cAe^{mx} = 0$
$\therefore Ae^{mx} (am^2 + bm + c) = 0$
Since $Ae^{mx} \ne 0$, $am^2 + bm + c = 0$

The equation $am^2 + bm + c = 0$ is called the auxiliary equation, and if $m$ is a root of the auxiliary equation then $y = Ae^{mx}$ is a solution of the differential equation $a\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy = 0$.
• When the auxiliary equation has two real distinct roots $\alpha$ and $\beta$, the general solution is $y = Ae^{\alpha x} + Be^{\beta x}$, where $A$ and $B$ are arbitrary constants.
• When the auxiliary equation has two equal roots $\alpha$, the general solution is $y = (A + Bx)e^{\alpha x}$.
• If the auxiliary equation has two pure imaginary roots $\pm i\omega$, the general solution is $y = A\cos (\omega x) + B\sin (\omega x)$, where $A$ and $B$ are arbitrary constants.
• If the auxiliary equation has two complex roots $p \pm iq$, the general solution is $y = e^{px}(A \cos (qx) + B \sin (qx))$, where $A$ and $B$ are arbitrary constants.
A differential equation is inhomogeneous if the right-hand side is not zero.
1. First you have to solve for when it is equal to 0. So for $a\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy = f(x)$, solve $a\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy = 0$. This is known as the Complementary Function.
2. Then solve $a\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy = f(x)$ by doing the following.
1. For a constant RHS, set $y = \lambda$. For a linear RHS, set $y = \lambda x + \mu$. For a quadratic RHS, set $y = \lambda x^2 + \mu x + \nu$. For an exponential RHS, e.g. $ke^{px}$, set $y = \lambda e^{px}$. For a trigonometric RHS, e.g. $13 \sin 3x$, set $y = \lambda \sin 3x + \mu \cos 3x$.
2. Then differentiate to find $\frac{dy}{dx}$ and $\frac{d^2y}{dx^2}$.
3. Plug back in to the differential equation given.
4. Then equate coefficients and solve for the necessary values of $\lambda$, $\mu$, $\nu$.
5. Plug these values back into $y$. This is known as the Particular Integral.
3. To find the general solution, $y = C.F. + P.I.$.
Note: Your P.I. can't be part of your C.F.. If it is, you need to set $y = x \times$ whatever $y$ would have been otherwise.

With boundary conditions (when you are given a point), you can find the constants of the C.F. First find the general solution and plug in $y$ and $x$. If you're given $\frac{dy}{dx}$ in the question, differentiate $y$ then plug in again. Then solve the simultaneous equations.
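The case analysis on the auxiliary equation can be sketched as a small classifier (the function name and output strings are my own convention):

```python
import cmath

def complementary_function(a, b, c):
    """Describe the C.F. of a*y'' + b*y' + c*y = 0 by classifying the
    roots of the auxiliary equation am^2 + bm + c = 0."""
    disc = b * b - 4 * a * c
    m1 = (-b + cmath.sqrt(disc)) / (2 * a)
    m2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:    # two real distinct roots
        return f"y = A e^({m1.real:g}x) + B e^({m2.real:g}x)"
    if disc == 0:   # repeated root
        return f"y = (A + Bx) e^({m1.real:g}x)"
    p, q = m1.real, abs(m1.imag)   # complex roots p +/- iq
    return f"y = e^({p:g}x)(A cos({q:g}x) + B sin({q:g}x))"

print(complementary_function(1, -3, 2))  # roots 2 and 1
print(complementary_function(1, 2, 1))   # repeated root -1
print(complementary_function(1, 0, 4))   # pure imaginary roots +/- 2i
```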

### Parametric Equations

$$\frac{dy}{dx} = \frac{dy}{dt} \times \frac{dt}{dx}$$ $$\therefore \frac{dt}{dx} = \frac{1}{\frac{dx}{dt}}$$ $$\frac{dy}{dx} = \frac{dy}{dt} \times \frac{1}{\frac{dx}{dt}} = \frac{\frac{dy}{dt}}{\frac{dx}{dt}}$$

### Arc Length

If $y=f(x)$, the length of the arc of the curve from the point where $x = a$ to the point where $x = b$ is given by: $$s = \int_a^b\sqrt{1 + \left(\frac{dy}{dx}\right)^2} \space dx$$ The length of arc of a curve in terms of a parameter $t$ is given by: $$s = \int_{t_1}^{t_2} \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2} \space dt$$ where $t_1$ and $t_2$ are the values of the parameter at each end of the arc.
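The parametric form can be checked against a known curve; half of a circle of radius 3 has length $3\pi$. A sketch (names are my own):

```python
import math

def arc_length_param(dxdt, dydt, t1, t2, n=100_000):
    """Approximate the integral of sqrt((dx/dt)^2 + (dy/dt)^2) dt
    with a midpoint sum."""
    h = (t2 - t1) / n
    return h * sum(math.hypot(dxdt(t1 + (k + 0.5) * h),
                              dydt(t1 + (k + 0.5) * h)) for k in range(n))

# x = 3cos(t), y = 3sin(t) for 0 <= t <= pi traces half a circle of radius 3.
s = arc_length_param(lambda t: -3 * math.sin(t), lambda t: 3 * math.cos(t),
                     0, math.pi)
print(round(s, 6))  # 9.424778
```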

### Surface Area

The area of surface of revolution obtained by rotating an arc of the curve $y = f(x)$ through $2\pi$ radians about the $x$-axis between the points where $x = a$ and $x = b$ is given by: $A = \int_a^b 2 \pi y \sqrt{1 + \left(\frac{dy}{dx}\right)^2} \space dx$

### Important Limits

When $x \rightarrow \infty, x^ke^{-x} \rightarrow 0$ for any real number $k$

## Discrete/Decision

### Graph Theory

• Vertex: A point on a graph
• Edge: A line between vertices
• Weight: A number associated with an edge, representing something like distance or time
• Network: A graph with weights
• Directed Graph (Digraph): A graph where edges have directions
• Walk: A sequence of edges where the end of one edge is the start of the next (no jumps between vertices)
• Trail: A walk in which no edge is repeated
• Closed Trail: A trail which starts and ends at the same vertex
• Path: A trail where no vertex is repeated
• Cycle: A closed path (a path which starts and ends at the same vertex)
• Connected: A graph is connected if there is a path between every pair of vertices (in particular, it has no isolated vertices)
• Multiple Edges: Where there is more than one edge between two vertices
• Loop: Where a vertex has an edge connecting it to itself
• Simple Graph: A graph with no multiple edges or loops
• Tree: A simple connected graph with no cycles
• Subgraph: A graph formed from some of the vertices and edges of another graph. Every edge must have a vertex at both ends but a vertex can be left isolated
• Spanning Tree: A subgraph, H, of a connected graph, G, is said to be a spanning tree of G if H is a tree and contains all the vertices of G. There may be many possible spanning trees for a connected graph
• Degree (Order): The degree of a vertex is the number of edges that join to it
• Semi-Eulerian: A graph which has a trail covering every edge, starting and ending at different vertices
• If a graph has exactly two odd degree vertices, then it can be shown to be Semi-Eulerian
• Eulerian: A graph which has a closed trail covering every edge (finishing at its starting vertex)
• If a graph has no odd degree vertices, then it can be shown to be Eulerian
• Hamiltonian Cycle (A Tour): A cycle which visits every vertex exactly once
• Hamiltonian Graph: A graph which possesses a Hamiltonian cycle
• Adjacency Matrix (Incidence Matrix): A graph can be displayed in a table for computers to understand.
• A loop counts as 2
• A simple graph will only have ones and zeros in its adjacency matrix
• If you add up a row or column of an adjacency matrix you get the corresponding degree
e.g. Consider the vertex set {A, B, C, D, E} and the edge set {(A, A), (A, B), (A, E), (B, C), (C, D)}. This gives the adjacency matrix below, where each entry is the number of edges between the vertices corresponding to that row and column (the loop at A counting as 2):

|  | A | B | C | D | E |
| --- | --- | --- | --- | --- | --- |
| A | 2 | 1 | 0 | 0 | 1 |
| B | 1 | 0 | 1 | 0 | 0 |
| C | 0 | 1 | 0 | 1 | 0 |
| D | 0 | 0 | 1 | 0 | 0 |
| E | 1 | 0 | 0 | 0 | 0 |

• Distortion: Where the vertices and edges of a graph are moved around, but no connections are changed. Its adjacency matrix will stay the same
• Planar: A graph which can be distorted so no edges cross each other
• Face: If a graph is both connected and planar, the plane containing the graph can be divided up into regions, known as faces, which are bounded by edges. You also include the infinite face with no boundary (the outside of the graph)
• Euler's Formula is true for all connected planar graphs:F + V = E + 2
Faces + Vertices = Edges + 2
• Complete Graph: A simple graph where every vertex is connected to every other vertex. A complete graph with $n$ vertices is denoted $K_n$
• Complement (Inverse): The complement of a simple graph is obtained by adding the edges necessary to make a complete graph, then removing the original edges. This may leave isolated vertices
• Subdivision: A subdividion of a graph is obtained by inserting a new vertex into an edge one or more times
• Bipartite: A graph where vertices are split into two sets, with edges joining each vertex to at least one vertex from the other set, but none from its own set
• Complete Bipartite: A bipartite graph where every vertex in one set is joined by exactly one edge to every vertex in the other set
• Isomorphic: Two graphs are isomorphic if one can be distorted to produce the other. They will have identical adjacency matrices, but the vertices may have been relabelled
• Kuratowski's Theorem: A graph is non-planar if and only if it contains a subgraph that is a subdivision of either $K_{3,3}$ or $K_5$
• $K_5$ and $K_{3,3}$ are shown below respectively

### Networks

• Network: A weighted graph
• Node: A vertex on a network
• Arc: An edge on a network

#### Minimum Connector Problem

Aim: To simplify a network down to a spanning tree with the minimum possible weight.
Method 1
Remove arcs in order of decreasing weight, ensuring the network remains connected. Arcs of the same weight may be arbitrarily chosen.
Prim's Algorithm
1. Choose any starting node.
2. Add the arc leading to the nearest node (the node connected by the arc with the least weight).
3. Add the arc leading (from any of the nodes collected so far) to the nearest uncollected node, and repeat.
4. Stop once all nodes have been collected.
Kruskal's Algorithm
1. Choose the shortest arc in the network.
2. Choose the next shortest arc, provided it doesn't create a cycle, and repeat. If two arcs are of equal length, either may be chosen.
These three methods should all give the same total weight, even though they may produce slightly different spanning trees. Prim's Algorithm is most suitable for computers as you don't have to check for cycles.
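Prim's Algorithm as listed above can be sketched in a few lines (the adjacency-dict representation and the example network are my own):

```python
def prim(graph, start):
    """Prim's algorithm on an adjacency dict {node: {neighbour: weight}}.
    Returns the arcs of a minimum spanning tree and its total weight."""
    collected = {start}
    tree, total = [], 0
    while len(collected) < len(graph):
        # Least-weight arc from any collected node to an uncollected node.
        w, u, v = min((graph[u][v], u, v)
                      for u in collected
                      for v in graph[u] if v not in collected)
        collected.add(v)
        tree.append((u, v, w))
        total += w
    return tree, total

network = {
    "A": {"B": 3, "C": 4},
    "B": {"A": 3, "C": 5, "D": 9},
    "C": {"A": 4, "B": 5, "D": 2},
    "D": {"B": 9, "C": 2},
}
tree, total = prim(network, "A")
print(total)  # 9  (arcs A-B, A-C, C-D)
```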

#### Route Inspection/Chinese Postman Problem

Establish whether the graph is Eulerian, semi-Eulerian or neither.

## Statistics

### Discrete Random Variables (DRVs)

A variable is discrete if it can only take a countable number of values, e.g. if you flip a coin three times, the number of times you get heads is 0, 1, 2 or 3.
A variable is random if it takes its values with given probabilities, and those probabilities sum to 1.

### Discrete Uniform Distribution

This is where equally spaced values are equally likely to happen, so all DRVs have the same probability.
e.g.
| $x$ | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| $P(X=x)$ | $\frac{1}{6}$ | $\frac{1}{6}$ | $\frac{1}{6}$ | $\frac{1}{6}$ | $\frac{1}{6}$ | $\frac{1}{6}$ |

If you have an n-sided fair die, then $$E(X) = \frac{n+1}{2}$$ $$Var(X) = \frac{n^2 - 1}{12} = \frac{(n+1)(n-1)}{12}$$
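These two formulas can be verified by direct summation over the distribution (a sketch using exact fractions):

```python
from fractions import Fraction

def die_mean_var(n):
    """E(X) and Var(X) for a fair n-sided die, by direct summation."""
    p = Fraction(1, n)  # each face has probability 1/n
    mean = sum(k * p for k in range(1, n + 1))
    var = sum(k * k * p for k in range(1, n + 1)) - mean ** 2
    return mean, var

for n in [6, 10, 20]:
    mean, var = die_mean_var(n)
    assert mean == Fraction(n + 1, 2)
    assert var == Fraction(n * n - 1, 12)
print("E(X) and Var(X) formulas check out")
```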