Various notes (crib sheet) from trying to understand various things... really just stuff you'd find in any old math text book. Oh, and this is a useful MathJax reference on StackExchange. And Detexify can be used to draw the symbol you're looking for! For an interactive view of your MathJax try here. For graphing functions try Desmos.
Page Contents
References and Resources
The following are absolutely amazing, completely free, well taught resources
that just put things in plain English and make concepts that much easier
to understand! Definitely worth a look!
The point of partial fractions is to do the reverse of:
$$\frac{5}{x-4}+\frac{3}{x+1}=\frac{8x-7}{{x}^{2}-3x-4}$$
In other words, given the following,
$$\frac{8x-7}{{x}^{2}-3x-4}$$
Partial fractions let us go back to,
$$\frac{5}{x-4}+\frac{3}{x+1}$$
We would start this by first factoring the above to get,
$$\frac{8x-7}{{x}^{2}-3x-4}=\frac{8x-7}{(x-4)(x+1)}$$
We know that we want to get to something like,
$$\frac{8x-7}{(x-4)(x+1)}=\frac{A}{(x-4)}+\frac{B}{(x+1)}$$
We can remove the denominators by multiplying through by $(x-4)(x+1)$ to give,
$$8x-7=A(x+1)+B(x-4)=(A+B)x+(A-4B)$$
Matching terms between $8x-7$ and $(A+B)x+(A-4B)$, we get,
$$A+B=8\text{, and}$$$$A-4B=-7$$
Solving these simultaneous linear equations (subtracting the second from the first gives $5B=15$), we get,
$$A=5\text{, and }B=3$$
And so we can get back to
$$\frac{8x-7}{(x-4)(x+1)}=\frac{5}{(x-4)}+\frac{3}{(x+1)}$$
Which is exactly the sum we started with.
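As a sanity check, the two unknowns can also be found with the "cover-up" shortcut: substitute each root of the denominator into the cleared equation so that one unknown drops out at a time. A minimal Python sketch using exact fractions:

```python
from fractions import Fraction

# Clearing denominators in (8x-7)/((x-4)(x+1)) = A/(x-4) + B/(x+1)
# gives 8x - 7 = A(x+1) + B(x-4). Substituting each root of the
# denominator isolates one unknown at a time ("cover-up" method):
A = Fraction(8*4 - 7, 4 + 1)        # x = 4:   25 = 5A   ->  A = 5
B = Fraction(8*(-1) - 7, -1 - 4)    # x = -1: -15 = -5B  ->  B = 3

# Check the decomposition at an arbitrary point:
x = Fraction(7, 2)
assert (8*x - 7) / ((x - 4)*(x + 1)) == A/(x - 4) + B/(x + 1)
print(A, B)  # prints: 5 3
```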
Summary Of Partial Fraction Rules
Numerator must be lower degree than denominator. If not, then first divide out.
Factorise denominator into its prime factors.
A linear factor $s+a$ gives partial fraction $\frac{A}{(s+a)}$
A repeated factor $(s+a{)}^{2}$ gives $\frac{A}{(s+a)}+\frac{B}{(s+a{)}^{2}}$
A quadratic factor $({s}^{2}+ps+q)$ gives $\frac{Ps+Q}{{s}^{2}+ps+q}$
A repeated quadratic factor $({s}^{2}+ps+q{)}^{2}$ gives $\frac{Ps+Q}{{s}^{2}+ps+q}+\frac{Rs+T}{({s}^{2}+ps+q{)}^{2}}$
For example,
$$\frac{3{s}^{2}-4s+11}{(s+3)(s-2{)}^{2}}$$
Has partial fractions of the form...
$$\frac{A}{(s+3)}+\frac{B}{(s-2)}+\frac{C}{(s-2{)}^{2}}$$
Difference Of Two Cubes
$${a}^{3}-{b}^{3}=(a-b)({a}^{2}+ab+{b}^{2})$$
So for example...
$${x}^{3}-27={x}^{3}-{3}^{3}=(x-3)({x}^{2}+3x+9)$$
Some trig functions are "complementary" in the sense that, in a pair of such functions, one equals the other shifted by 90 degrees ($\pi /2$ radians).
So, "cos" is "complementary sine", for example, and "cot" is "complementary tan". The pair whose naming doesn't quite fit this
pattern is "csc" and "sec", since the abbreviation "csc" (for cosecant, the complement of the secant) hides its "co-" prefix :'(...
$$\mathrm{sin}(\theta )=\mathrm{cos}(\frac{\pi}{2}-\theta ),\text{}\mathrm{cos}(\theta )=\mathrm{sin}(\frac{\pi}{2}-\theta )$$$$\mathrm{tan}(\theta )=\mathrm{cot}(\frac{\pi}{2}-\theta ),\text{}\mathrm{cot}(\theta )=\mathrm{tan}(\frac{\pi}{2}-\theta )$$$$\mathrm{sec}(\theta )=\mathrm{csc}(\frac{\pi}{2}-\theta ),\text{}\mathrm{csc}(\theta )=\mathrm{sec}(\frac{\pi}{2}-\theta )$$
Radians
Here is a fantastic page about the why of radians. In summary
radians define a circle such that the angle measured in radians is also the arc length around the unit circle for that arc.
$$\text{angle in radians}=\frac{\pi}{180}\times \text{angle in degrees}$$
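This conversion is exactly what Python's `math.radians` (and its inverse, `math.degrees`) implements:

```python
import math

# angle_in_radians = pi/180 * angle_in_degrees
assert math.isclose(math.radians(180), math.pi)
assert math.isclose(math.radians(60), math.pi / 3)
assert math.isclose(math.degrees(math.pi / 2), 90.0)  # the inverse conversion
```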
Consider summing the numbers in the series $1,2,3,4$. For each number in the series, $n$ imagine stacking $n$ blocks to the left of the last stack.... you'd build up the triangle shown to the left. It's basically one half of a square. Trouble is if you divide the square by 2 you would chop off the top half of each block at the top of its stack. Therefore for each number in the series you have "lost" half a block.
Therefore the total number of blocks is...
$$number\mathrm{\_}of\mathrm{\_}blocks=\frac{{n}^{2}}{2}+\frac{n}{2}=\frac{n(n+1)}{2}$$
Where ${n}^{2}/2$ is half of the square and $n/2$ is the total of all the halves that we "cut" off (but didn't mean to) when we took the half.
Therefore, we can say...
$$\sum _{k=1}^{n}k=\frac{n(n+1)}{2}$$
And...
$$\sum _{k=1}^{n}{k}^{2}=\frac{n(n+1)(2n+1)}{6}$$$$\sum _{k=1}^{n}{k}^{3}={\left(\frac{n(n+1)}{2}\right)}^{2}$$
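A quick brute-force check of these three closed forms against directly summing the series:

```python
def tri(n):    return n*(n + 1)//2              # sum of k
def sq_sum(n): return n*(n + 1)*(2*n + 1)//6    # sum of k^2
def cu_sum(n): return (n*(n + 1)//2)**2         # sum of k^3

for n in range(1, 100):
    ks = range(1, n + 1)
    assert tri(n)    == sum(ks)
    assert sq_sum(n) == sum(k*k for k in ks)
    assert cu_sum(n) == sum(k**3 for k in ks)

print(tri(4))  # the 1+2+3+4 block-stacking example -> 10
```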
$$x(n)=a,ar,a{r}^{2},a{r}^{3},\dots ,a{r}^{n},\dots $$$${S}_{n}=\frac{a(1-{r}^{n})}{1-r},\quad r\ne 1$$
If $|r|<1$, then as $n\to \mathrm{\infty}$, ${r}^{n}\to 0$,
$${S}_{\mathrm{\infty}}=\frac{a}{1-r}$$
Or, for a sum over negative powers of $r$, i.e. ${\sum }_{n=0}^{\mathrm{\infty}}a{r}^{-n}$ (useful for the z-transform), then provided $|r|>1$,
$${S}_{\mathrm{\infty}}=\frac{ar}{r-1}$$
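Numerically, the partial-sum formula and both closed forms check out. A sketch with arbitrarily chosen $a$ and $r$ (the negative-power case follows the $\sum_{n=0}^{\infty}ar^{-n}$ reading):

```python
import math

a, r = 3.0, 0.5
n = 20
# S_n = a(1 - r^n)/(1 - r) sums the n terms a, ar, ..., ar^(n-1):
assert math.isclose(sum(a*r**k for k in range(n)), a*(1 - r**n)/(1 - r))

# |r| < 1: S_inf = a/(1 - r) (approximate with many terms)
assert math.isclose(sum(a*r**k for k in range(500)), a/(1 - r))

# Negative powers with |r| > 1: sum of a*r^(-n) for n >= 0 is ar/(r - 1)
r2 = 3.0
assert math.isclose(sum(a*r2**(-k) for k in range(500)), a*r2/(r2 - 1))
```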
Binomial Theorem
The binomial theorem is summarised as...
$$(1+x{)}^{n}=1+nx+\frac{n(n-1)}{2!}{x}^{2}+\frac{n(n-1)(n-2)}{3!}{x}^{3}+\cdots +n{x}^{n-1}+{x}^{n}$$
Where the general term (the one in ${x}^{a}$) is given by the following equation.
$${u}_{a}=\frac{n!}{a!(n-a)!}{x}^{a}$$
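The coefficient $\frac{n!}{a!(n-a)!}$ is just the binomial coefficient, which Python exposes as `math.comb` (3.8+). A term-by-term check of the expansion with exact fractions:

```python
from fractions import Fraction
from math import comb

n = 7
x = Fraction(1, 3)
# (1+x)^n built term by term: u_a = n!/(a!(n-a)!) * x^a
expansion = sum(comb(n, a) * x**a for a in range(n + 1))
assert expansion == (1 + x)**n   # matches (4/3)^7 exactly
```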
Exponential Series
The exponential series is defined as...
$$e=\underset{n\to \mathrm{\infty}}{lim}{\left(1+\frac{1}{n}\right)}^{n}$$
This is why $e=2.71828\dots $
The constant $e$ can be expanded as follows...
$$e=1+1+\frac{1}{2!}+\frac{1}{3!}+\frac{1}{4!}+\cdots $$
And powers of $e$ as ...
$${e}^{x}=1+x+\frac{{x}^{2}}{2!}+\frac{{x}^{3}}{3!}+\cdots $$$${e}^{kx}=1+kx+\frac{(kx{)}^{2}}{2!}+\frac{(kx{)}^{3}}{3!}+\cdots $$
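The series converges quickly; a minimal sketch that builds each term from the previous one (since $\frac{x^{k+1}}{(k+1)!} = \frac{x^k}{k!}\cdot\frac{x}{k+1}$):

```python
import math

def exp_series(x, terms=40):
    """Sum 1 + x + x^2/2! + x^3/3! + ... term by term."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x / (k + 1)   # x^k/k!  ->  x^(k+1)/(k+1)!
    return total

assert math.isclose(exp_series(1.0), math.e)
assert math.isclose(exp_series(2.5), math.exp(2.5))
# e^(kx) is the same series with kx in place of x:
assert math.isclose(exp_series(3 * 0.7), math.exp(3 * 0.7))
```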
Maclaurin's Theorem
Attempts to express a function as a polynomial.
$$f(x)={a}_{0}+{a}_{1}x+{a}_{2}{x}^{2}+{a}_{3}{x}^{3}+\cdots +{a}_{n}{x}^{n}$$
Note that the series must be shown to converge.
Need to figure out the ${a}_{i}$ coefficients. Notice the following.
$$f(x)={a}_{0}+{a}_{1}x+{a}_{2}{x}^{2}+{a}_{3}{x}^{3}+{a}_{4}{x}^{4}+{a}_{5}{x}^{5}+{a}_{6}{x}^{6}+\cdots $$$${f}^{\prime}(x)={a}_{1}+2{a}_{2}x+3{a}_{3}{x}^{2}+4{a}_{4}{x}^{3}+5{a}_{5}{x}^{4}+6{a}_{6}{x}^{5}+\cdots $$$${f}^{\prime\prime}(x)=2{a}_{2}+3\cdot 2{a}_{3}x+4\cdot 3{a}_{4}{x}^{2}+5\cdot 4{a}_{5}{x}^{3}+6\cdot 5{a}_{6}{x}^{4}+\cdots $$$${f}^{\prime\prime\prime}(x)=3\cdot 2{a}_{3}+4\cdot 3\cdot 2{a}_{4}x+5\cdot 4\cdot 3{a}_{5}{x}^{2}+6\cdot 5\cdot 4{a}_{6}{x}^{3}+\cdots $$
Notice then...
$$f(0)={a}_{0}$$$${f}^{\prime}(0)={a}_{1}$$$${f}^{\prime\prime}(0)=2!{a}_{2}$$$${f}^{\prime\prime\prime}(0)=3!{a}_{3}\dots $$
And so on...
It thus looks like, and indeed is the case, that:
$${a}_{n}=\frac{1}{n!}{\left.\frac{{\mathrm{d}}^{n}f}{\mathrm{d}{x}^{n}}\right|}_{x=0}$$
The above definition can then be used to derive the expansion
of ${e}^{x}$, which is why we were able to say:
$${e}^{x}=1+x+\frac{{x}^{2}}{2!}+\frac{{x}^{3}}{3!}+\cdots $$
Remember, the series must be shown to converge! This is easily seen because
the denominator is growing at a faster rate than the numerator.
Using this we can derive Euler's Formula.
$$\mathrm{cos}(x)=1-\frac{{x}^{2}}{2!}+\frac{{x}^{4}}{4!}-\frac{{x}^{6}}{6!}+\frac{{x}^{8}}{8!}-\cdots $$$$\mathrm{sin}(x)=x-\frac{{x}^{3}}{3!}+\frac{{x}^{5}}{5!}-\frac{{x}^{7}}{7!}+\frac{{x}^{9}}{9!}-\cdots $$
Summing the above two expansions we get...
$$\mathrm{cos}(x)+\mathrm{sin}(x)=1+x-\frac{{x}^{2}}{2!}-\frac{{x}^{3}}{3!}+\frac{{x}^{4}}{4!}+\frac{{x}^{5}}{5!}-\frac{{x}^{6}}{6!}-\frac{{x}^{7}}{7!}+\cdots $$
Doing a similar expansion for ${e}^{jx}$ we get the following and can then see how we get Euler's formula...
$$\begin{array}{rl}{e}^{jx}& =1+jx+\frac{(jx{)}^{2}}{2!}+\frac{(jx{)}^{3}}{3!}+\frac{(jx{)}^{4}}{4!}+\cdots \\ & =1+jx-\frac{{x}^{2}}{2!}-\frac{j{x}^{3}}{3!}+\frac{{x}^{4}}{4!}+\frac{j{x}^{5}}{5!}-\cdots \\ & =\left(1-\frac{{x}^{2}}{2!}+\frac{{x}^{4}}{4!}-\cdots \right)+j\left(x-\frac{{x}^{3}}{3!}+\frac{{x}^{5}}{5!}-\cdots \right)\\ & =\mathrm{cos}(x)+j\mathrm{sin}(x)\end{array}$$
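Python's `cmath` implements exactly this relationship, so Euler's formula can be spot-checked directly:

```python
import cmath
import math

theta = 0.7
z = cmath.exp(1j * theta)                      # e^(j*theta)
assert math.isclose(z.real, math.cos(theta))   # real part is cos(theta)
assert math.isclose(z.imag, math.sin(theta))   # imaginary part is sin(theta)
```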
Imaginary & Complex Numbers
Intro
The definition of an imaginary number, $i$, is ${i}^{2}=-1$, which is also often seen written as $i=\sqrt{-1}$. It's called "imaginary"
because the square root of -1 doesn't exist in real terms: there is no real number that when multiplied by itself is negative. Thus, the
set of real numbers has been "extended". Sometimes $j$ is used instead of $i$ to denote an imaginary number.
A complex number is one that has a real and imaginary part, even if the real part is zero. Thus real numbers are a subset of complex numbers.
Rectangular Form
In rectangular form we write complex numbers like this: $z=a+bi$, where $a$ and $b$ are real numbers.
If $z=a+bi$ then it we can write $\mathrm{Re}(z)=a$ and $\mathrm{Im}(z)=b$.
The representation of a complex number as $z=a+bi$ is called rectangular form.
A complex number can be viewed graphically on an argand diagram, which is a little like a Cartesian diagram, except
the horizontal axis is the real component and the vertical axis is the imaginary component of the complex number:
To divide complex numbers we need complex conjugates. See below...
$${z}_{1}\div {z}_{2}\phantom{\rule{0ex}{0ex}}=\frac{({a}_{1}+{b}_{1}i)}{({a}_{2}+{b}_{2}i)}\phantom{\rule{0ex}{0ex}}=\frac{({a}_{1}+{b}_{1}i)}{({a}_{2}+{b}_{2}i)}\cdot \frac{({a}_{2}-{b}_{2}i)}{({a}_{2}-{b}_{2}i)}\phantom{\rule{0ex}{0ex}}=\frac{({a}_{1}+{b}_{1}i)({a}_{2}-{b}_{2}i)}{{a}_{2}^{2}+{b}_{2}^{2}}\phantom{\rule{0ex}{0ex}}=\frac{({a}_{1}{a}_{2}+{b}_{1}{b}_{2})+({a}_{2}{b}_{1}-{a}_{1}{b}_{2})i}{|{z}_{2}{|}^{2}}$$
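A quick check of the conjugate-division recipe against Python's built-in complex division:

```python
import cmath

z1, z2 = 3 + 2j, 1 - 4j
a1, b1 = z1.real, z1.imag
a2, b2 = z2.real, z2.imag

# Multiply top and bottom by the conjugate of z2:
denom = a2**2 + b2**2                       # |z2|^2
manual = complex((a1*a2 + b1*b2) / denom,   # real part
                 (a2*b1 - a1*b2) / denom)   # imaginary part
assert cmath.isclose(manual, z1 / z2)       # matches built-in division
```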
Complex conjugates. The complex conjugate of $z=a+bi$ is defined as
$\overline{z}={z}^{\ast}=a-bi$ (where $\overline{z}$ and ${z}^{\ast}$ are just two different notations
that mean the same thing). This can be visualised as shown below:
The next thing to talk about is the magnitude of a complex number. In the above
diagrams we can see that the vector forms a right-angled triangle with the horizontal axis.
The magnitude is the length of the vector and this is defined as follows:
$$|z|=\sqrt{\mathrm{Re}(z{)}^{2}+\mathrm{Im}(z{)}^{2}}$$
Multiplying By j Rotates 90 Degrees Anticlockwise
Multiplying a real or complex number by $j$ rotates it, in the complex plane, by 90 degrees
anticlockwise, whilst dividing by $j$ rotates it 90 degrees clockwise.
Polar Form
Complex numbers can also be written in polar form by specifying magnitude (distance
from the origin) and angle...
How do we go from rectangular form ($z=a+bi$) to polar form ($(r,\theta )$)?
Recall $\mathrm{tan}(\theta )=\frac{b}{a}$ and that therefore $\theta ={\mathrm{tan}}^{-1}(\frac{b}{a})$ (taking care to pick the angle in the correct quadrant).
We already know how to calculate $r$: it is the magnitude, $|z|$.
Lets do a quick example. Let $z=2+3i$:
$$\mathrm{tan}\theta =\frac{3}{2}$$$$\therefore \theta =0.983\text{}\text{radians, rounded to 3dp}$$$$|z|=r=\sqrt{13}$$
Thus, in polar form the complex number $z=2+3i$ is expressed as $(\sqrt{13},0.983)$.
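`cmath.polar` performs this conversion in one call; a check for the example above (`atan2` handles the quadrant for us):

```python
import cmath
import math

z = 2 + 3j
r, theta = cmath.polar(z)                 # (magnitude, angle in radians)
assert math.isclose(r, math.sqrt(13))
assert math.isclose(theta, math.atan2(3, 2))
print(round(theta, 3))  # 0.983
```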
Exponential Form
From polar form we go to exponential form.
$$\mathrm{cos}(\theta )=\frac{a}{r}\phantom{\rule{0ex}{0ex}}\therefore r\mathrm{cos}(\theta )=a\phantom{\rule{0ex}{0ex}}\phantom{\rule{0ex}{0ex}}\mathrm{sin}(\theta )=\frac{b}{r}\phantom{\rule{0ex}{0ex}}\therefore r\mathrm{sin}(\theta )=b$$
This means that we can re-write $z=a+bi$ as
$$\begin{array}{rl}z& =r\mathrm{cos}(\theta )+r\mathrm{sin}(\theta )i\\ & =r[\mathrm{cos}(\theta )+i\mathrm{sin}(\theta )]\end{array}$$
And using Euler's amazing formula ${e}^{i\theta}=\mathrm{cos}(\theta )+i\mathrm{sin}(\theta )$,
we can write...
$$z=r{e}^{i\theta}$$
This is the exponential form. It is especially useful if you want to do
division of complex numbers.
We can go back from exponential form to rectangular form using the same two
formulas we saw above: $r\mathrm{cos}(\theta )=a$ and $r\mathrm{sin}(\theta )=b$.
Taking the example from the previous section where we saw that we could represent $z=2+3i$
as $(\sqrt{13},0.983)$, we can now see this is easily converted to exponential form: $z=\sqrt{13}{e}^{0.983i}$.
And, if we wish to go back we can see that $a\approx \sqrt{13}\mathrm{cos}(0.983)\approx 2$ (approximately because the 0.983 value is rounded).
The exponential form is particularly useful in subjects like DSP for doing things like changing a signal's phase, which ends up just being a multiplication like this:
$${z}^{\prime}=z{e}^{j\alpha}$$
The number ${z}^{\prime}$ is $z$ rotated anticlockwise by $\alpha $ radians!
To explain a little more clearly, a discrete complex exponential is generally represented as:
$$A[\mathrm{cos}(\omega n+\theta )+j\mathrm{sin}(\omega n+\theta )]=A{e}^{j(\omega n+\theta )}$$
Where $\omega $ is the angular frequency in radians per sample period, $A$ is the amplitude and $\theta $ is the initial phase of the signal in radians. If we had to do a phase shift of the signal we'd either have to remember a whole load of trig identities or use the complex exponential form, which makes life easier...
We can see this very simply by noting the following:
$$\begin{array}{rl}\mathrm{cos}(\omega n+\theta )& =\mathcal{R}\{{e}^{j(\omega n+\theta )}\}\\ & =\mathcal{R}\{{e}^{j\omega n}{e}^{j\theta}\}\end{array}$$
The phase shift has become just multiplication.
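A minimal sketch of the phase-shift-as-multiplication idea: rotating $z=1$ by $\alpha=\pi/2$ should land exactly on $j$, and a phase shift never changes the magnitude.

```python
import cmath
import math

z = 1 + 0j
alpha = math.pi / 2
rotated = z * cmath.exp(1j * alpha)   # z' = z * e^(j*alpha)

# 90 degrees anticlockwise from 1 is j:
assert cmath.isclose(rotated, 1j, abs_tol=1e-12)
# The magnitude is unchanged by a pure phase shift:
assert math.isclose(abs(rotated), abs(z))
```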
Combinations & Permutations
Selecting $r$ samples from a set of $n$ samples at random. If the order of the elements matters then it is
a permutation, otherwise it is a combination.
$$Permutations={}_{n}{P}_{r}=\frac{n!}{(n-r)!}$$
If I have a set of $n$ objects and I select $r=2$ objects, on the first selections I can choose from
$n$ objects. For the second selection I can choose from $n-1$ objects. This looks like the beginning of
a factorial, but the factorial would look like this.
$$n!=n(n-1)(n-2)(n-3)...1$$
We only want the first 2 terms. To get rid of the remaining terms we need to divide by $(n-2)(n-3)...1$, in
other words, $(n-r)!$. Hence the formula above.
With combinations the order is not important. This time take $r=3$... for every choice I make there will be
$3\times 2$ permutations with the same elements, which count as one combination (as order is now not important). This
is the case because the 3 elements can be ordered in any way and still count as the same selection. $(a,b,c)$ and $(b,a,c)$
are, for example, no different. Out of all sets containing only these elements there must be $3!$ permutations. Therefore we need
to get rid of this count from ${}_{n}{P}_{r}$ by dividing by $r!$ in the general case...
$$Combinations={}_{n}{C}_{r}=\frac{{}_{n}{P}_{r}}{r!}=\frac{n!}{r!(n-r)!}$$
For a fairly explicit example of combinations v.s. permutations and their
application in statistics see
this little example.
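Python 3.8+ exposes both counts directly as `math.perm` and `math.comb`; a check against the factorial formulas above:

```python
from math import comb, factorial, perm

n, r = 10, 3
# Permutations: n!/(n-r)!  (10 * 9 * 8)
assert perm(n, r) == factorial(n) // factorial(n - r) == 720
# Combinations: divide out the r! orderings of each selection
assert comb(n, r) == perm(n, r) // factorial(r) == 120
```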
When we talk about the limit of a function, we are asking what value the function approaches as the
input gets closer and closer to some value, but without reaching that value. The animation below
tries to explain this:
What you should see in the animation is that we can always get closer to the point $x=3$ without ever
reaching $x=3$ by adding smaller and smaller amounts. In other words, you can get $\mathrm{f}(x)$ as
close to the limit (in the above case 9) as you like by getting $x$ sufficiently close to some input (in the
above case 3).
Intuitively,
$$\underset{x\to c}{lim}f(x)=L$$
If $f$ is defined near, but not necessarily at, $c$, then $\mathrm{f}(x)$ will approach $L$
as $x$ approaches $c$. More rigorously, let $f$ be defined at all $x$ in an open interval
containing $c$, except possibly at $c$ itself. Then,
$$\underset{x\to c}{lim}f(x)=L$$
If and only if for each $\epsilon >0$, there exists $\delta >0$ such that if $0<|x-c|<\delta $
then $|f(x)-L|<\epsilon $.
I.e., no matter how close to $L$ we get, there is always a value of $x$ arbitrarily close to the limit value, $c$, but which is not $c$, that yields this value close to $L$.
I.e. we can pick a window around the y-axis value $L$ and if we keep shrinking this window,
we will always find an $x$ value, either side of $c$, that will yield a y-value in this
error window. So, the error can be arbitrarily small and we will always find an $x$ either side
of $c$ to satisfy it :) This is the two-sided limit.
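A numeric sketch of the ε–δ game for $\lim_{x\to 3}x^2=9$: near $x=3$ we have $|x^2-9|=|x-3||x+3|<7|x-3|$ whenever $|x-3|<1$, so $\delta =\min(1,\epsilon /7)$ always works.

```python
def f(x):
    return x * x

for eps in (1.0, 0.1, 1e-3, 1e-6):
    delta = min(1.0, eps / 7)                  # |x+3| < 7 whenever |x-3| < 1
    for x in (3 - delta / 2, 3 + delta / 2):   # points near, but not at, 3
        assert 0 < abs(x - 3) < delta
        assert abs(f(x) - 9) < eps             # f(x) lands inside the window
```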
Limit Properties
Most of these are only true when the limits are finite...
Uniqueness:
If $\underset{x\to c}{lim}f(x)={L}_{1}$ and $\underset{x\to c}{lim}f(x)={L}_{2}$ then ${L}_{1}={L}_{2}$
If the limit is not of the form $x\to \mathrm{\infty}$, i.e., $x$ tends to some known number, first try plugging that number into the
equation. If you get a determinate answer that's great.
Generally though we get an indeterminate answer because we get a divide by zero...
If you can transform the function you're taking the limit of to something where there is no longer a divide by zero, note
that original function and the transform are not exactly identical. Let's say the original function is $f(x)$ and you have
transformed it to $g(x)$.
If $x\to n$ and $f(n)$ is indeterminate, but our transformed $g(n)$ is not, this means $f(x)$ and $g(x)$ are not the
same function. But because limits are only concerned with values near $n$ and not at $n$, this is okay
as the functions are identical everywhere else.
When the numerator is non-zero but the denominator is zero, there is at least one vertical asymptote in your function. Either the limit
will exist here, or you will only have a left- or right-sided limit because, on one side of the asymptote, the y-values tend in the opposite direction
to the other side.
Factor Everything
If $f(x)=\frac{2{x}^{2}+12x+10}{2x+2}$ then factor to $f(x)=\frac{(2x+2)(x+5)}{(2x+2)}$ and cancel out the
denominator. You can also use the difference of two cubes to help with more complex functions: ${a}^{3}-{b}^{3}=(a-b)({a}^{2}+ab+{b}^{2})$.
Get Rid Of Square Roots
For problems where $x\to a$, where a is not $\mathrm{\infty}$ you get rid of square roots by multiplying by the conjugate.
If you have $\sqrt{expr}+C$ you multiply by $\sqrt{expr}-C$. If you have $\sqrt{expr}-C$ you multiply by $\sqrt{expr}+C$:
$$\underset{x\to a}{lim}(\frac{\sqrt{\text{expr}}-C}{x-D}\times \frac{\sqrt{\text{expr}}+C}{\sqrt{\text{expr}}+C})=\underset{x\to a}{lim}\frac{\text{expr}-{C}^{2}}{(x-D)(\sqrt{\text{expr}}+C)}$$
Now when $x=a$, hopefully you won't have an indeterminate fraction!
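For instance, $\lim_{x\to 0}\frac{\sqrt{x+1}-1}{x}$ is $0/0$ as it stands, but multiplying by the conjugate turns it into $\frac{1}{\sqrt{x+1}+1}\to \frac{1}{2}$. A numeric sketch (this particular example is my own, not from the text above):

```python
import math

def f(x):
    return (math.sqrt(x + 1) - 1) / x     # indeterminate 0/0 at x = 0

def g(x):
    return 1 / (math.sqrt(x + 1) + 1)     # after the conjugate trick

for x in (1e-2, 1e-4, 1e-6):
    assert math.isclose(f(x), g(x), rel_tol=1e-6)   # same function away from 0
assert math.isclose(g(0), 0.5)                      # the limit itself
```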
Rational Functions
When $x\to \mathrm{\infty}$, the leading term dominates. If $p(x)={a}_{0}{x}^{n}+{a}_{1}{x}^{n-1}+\cdots +{a}_{n}$ then the leading term is ${a}_{0}{x}^{n}$. Putting ${p}_{L}(x)={a}_{0}{x}^{n}$, we say
that...
$$\underset{x\to \mathrm{\infty}}{lim}\frac{p(x)}{{p}_{L}(x)}=1$$
This does not mean that $p(x)$ ever equals ${p}_{L}(x)$, just that the ratio of the two tends to one as $x$ tends to
infinity!
If you are taking the limit of a rational function like the one below...
$$\underset{x\to \mathrm{\infty}}{lim}\left(\frac{n(x)}{d(x)}\right)$$
... you can't just substitute infinity for $x$ because you
get $\frac{\mathrm{\infty}}{\mathrm{\infty}}$ which doesn't make a lot of sense...
So do this...
$$\underset{x\to \mathrm{\infty}}{lim}\left(\frac{n(x)\times \frac{{n}_{L}(x)}{{n}_{L}(x)}}{d(x)\times \frac{{d}_{L}(x)}{{d}_{L}(x)}}\right)=\underset{x\to \mathrm{\infty}}{lim}\left(\frac{\frac{n(x)}{{n}_{L}(x)}\times {n}_{L}(x)}{\frac{d(x)}{{d}_{L}(x)}\times {d}_{L}(x)}\right)=\underset{x\to \mathrm{\infty}}{lim}(\frac{\frac{n(x)}{{n}_{L}(x)}}{\frac{d(x)}{{d}_{L}(x)}}\times \frac{{n}_{L}(x)}{{d}_{L}(x)})$$
We know that because the leading term dominates...
$$\text{As }x\to \mathrm{\infty}\text{ the fraction }\frac{n(x)}{{n}_{L}(x)}\text{ tends to }1\text{, as does }\frac{d(x)}{{d}_{L}(x)}$$
Which means that we will really be looking at the limit of...
$$\frac{{n}_{L}(x)}{{d}_{L}(x)}$$
... as $x$ tends to infinity!
A numerical example. Solve
$$\underset{x\to \mathrm{\infty}}{lim}\left(\frac{x+5{x}^{5}}{4{x}^{8}+7{x}^{4}+x+1234}\right)$$
So we do...
$$\underset{x\to \mathrm{\infty}}{lim}(\frac{\frac{x+5{x}^{5}}{5{x}^{5}}}{\frac{4{x}^{8}+7{x}^{4}+x+1234}{4{x}^{8}}}\times \frac{5{x}^{5}}{4{x}^{8}})$$
Everything tends to one except $\frac{5{x}^{5}}{4{x}^{8}}$, which is equal to $\frac{5}{4{x}^{3}}$, so we know that this tends to zero!
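A numeric sanity check of that example: the full rational function should track its leading-term ratio $\frac{5x^5}{4x^8}=\frac{5}{4x^3}$, which heads to zero.

```python
import math

def f(x):
    return (x + 5*x**5) / (4*x**8 + 7*x**4 + x + 1234)

def leading(x):
    return 5 / (4 * x**3)

for x in (10.0, 100.0, 1000.0):
    assert math.isclose(f(x), leading(x), rel_tol=1e-3)  # ratio tends to 1
assert abs(f(1000.0)) < 1e-8                             # and both tend to 0
```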
In general, for our polynomials $n(x)$ and $d(x)$:
If degree of n == degree of d, then limit is finite and nonzero as $x\to \pm \mathrm{\infty}$
If degree of n > degree of d, then limit is $\mathrm{\infty}$ or $-\mathrm{\infty}$ as $x\to \pm \mathrm{\infty}$
If degree of n < degree of d, then limit is 0 as $x\to \pm \mathrm{\infty}$
N-th Roots...
When there are square roots, or indeed n-th roots in the equation the leading term idea still works. All that happens is that
when you divide the n-th root by the leading term you bring it back under the square root.
Be careful when $x\to -\mathrm{\infty}$ because $\sqrt{{x}^{2}}\ne x$ when $x$ is negative! It equals $-x$. Use the following rule.
If you write...
$$\sqrt[n]{{x}^{\text{some power}}}={x}^{m}$$
You need a minus in front of ${x}^{m}$ when $n$ is even and $m$ is odd.
The Sandwich Principle / Squeeze Theorem
If $g(x)\le f(x)\le h(x)$ for all x near a, and $\underset{x\to a}{lim}g(x)=\underset{x\to a}{lim}h(x)=L$, then $\underset{x\to a}{lim}f(x)=L$ too.
Sal on Khan Academy gives a good example of using the squeeze theorem to solve the following limit.
$$\underset{\theta \to 0}{lim}\left(\frac{\mathrm{sin}\theta}{\theta}\right)=1$$
The function is not defined at $\theta =0$, so it is not continuous at that point, but it can still
have a limit as $\theta $ tends to 0, as $\theta $ does not have to equal 0... we only have to be able to get
arbitrarily close.
L'Hopital's Rule
Can be summarised as follows...
$$\underset{x\to a}{lim}\left\{\frac{f(x)}{g(x)}\right\}=\underset{x\to a}{lim}\left\{\frac{{f}^{\prime}(x)}{{g}^{\prime}(x)}\right\}$$
L'Hopital's rule can be used when the normal limit is indeterminate. For example...
$$\underset{x\to 0}{lim}\frac{\mathrm{cosh}(x)-{e}^{x}}{x}$$
Substituting in $0$ gives $(1-1)/0$, which is indeterminate. So, apply L'Hopital's rule by
differentiating numerator and denominator separately...
$$\underset{x\to 0}{lim}\frac{\mathrm{cosh}(x)-{e}^{x}}{x}=\underset{x\to 0}{lim}\frac{\mathrm{sinh}(x)-{e}^{x}}{1}$$
Substitute in $0$ for $x$ gives $-1$. Therefore...
$$\underset{x\to 0}{lim}\frac{\mathrm{cosh}(x)-{e}^{x}}{x}=-1$$
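Numerically the original $0/0$ expression does indeed settle at $-1$ near zero (a quick sketch):

```python
import math

def f(x):
    return (math.cosh(x) - math.exp(x)) / x   # indeterminate 0/0 at x = 0

# Approaching zero from either side, the value heads to -1:
for x in (1e-1, 1e-3, 1e-5, -1e-3):
    assert math.isclose(f(x), -1.0, rel_tol=1e-2)
```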
Trig functions
For small values of $x$, $y=\mathrm{sin}(x)$ and $y=x$ are approximately the same. For really small values, I'm not even
sure my computer has enough precision to be able to accurately show the difference. But, using a small python script
we can get the idea:
The above image was generated using the following script:
import matplotlib.pyplot as pl
import numpy as np
x = np.arange(-0.0001,0.0001, 0.000001)
y_lin = x
y_sin = np.sin(x)
pl.plot(x,y_sin, color="green")
pl.plot(x,y_lin, color="red")
pl.grid()
pl.legend(['y=sin(x)', 'y=x'])
pl.show()
In fact, if you print(y_sin/y_lin) you just get an array of 1's because the computer does
not have the precision to do any better. Even at small values of $x$ it is not the case
that $\mathrm{sin}(x)=x$, but it is very close. So in reality, even for tiny $x$:
$$\frac{\mathrm{sin}(x)}{x}\ne 1$$
But, the following does hold:
$$\underset{x\to 0}{lim}\frac{\mathrm{sin}(x)}{x}=1$$
The following also hold:
$$\underset{x\to 0}{lim}\mathrm{cos}(x)=1$$$$\underset{x\to 0}{lim}\frac{\mathrm{tan}(x)}{x}=1$$
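A quick numerical look at all three limits; the gap for $\sin(x)/x$ shrinks like $x^2/6$, so bounding it by $x^2$ is a comfortable margin:

```python
import math

for x in (0.1, 0.01, 0.001):
    assert abs(math.sin(x) / x - 1) < x * x    # sin(x)/x -> 1, error ~ x^2/6
    assert abs(math.tan(x) / x - 1) < x * x    # tan(x)/x -> 1, error ~ x^2/3
    assert abs(math.cos(x) - 1) < x * x        # cos(x)  -> 1, error ~ x^2/2
```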
"Continuous" In Terms Of Limits
Continuous At A Point
A function is continuous at a point if it can "be drawn without taking the pen off the
paper" [Ref]. This means, more formally, that
a function $f(x)$ is continuous at $x=a$ if $\underset{x\to a}{lim}f(x)=f(a)$.
As Adrian Banner says in his book "The Calculus Lifesaver", it is continuity that connects
the "near" with the "at" in terms of limits. It is what allows us to find
limits by direct substitution.
Continuity Over An Interval
If a function is continuous over the interval [a, b]:
The function is continuous at every point in (a,b).
The function is right-continuous at x = a: $f(a)$ exists and $\underset{x\to {a}^{+}}{lim}f(x)=f(a)$.
The function is left-continuous at x = b: $f(b)$ exists and $\underset{x\to {b}^{-}}{lim}f(x)=f(b)$.
Intermediate Value Theorem
If $\mathrm{f}$ is a function continuous at every point of the interval $[a,b]$, then:
$\mathrm{f}$ will take on every value between $\mathrm{f}(a)$ and $\mathrm{f}(b)$ over the interval, and
For any $L$ between the values $\mathrm{f}(a)$ and $\mathrm{f}(b)$, there exists a $c\in [a,b]$ such that
$\mathrm{f}(c)=L$
Min/Max Theorem
If $f$ is continuous on [a,b], then $f$ has at least one maximum and one minimum on [a,b].
Differentiation
Definition
Recall that a function is continuous at a point if it can "be drawn without taking the pen off the
paper" [Ref]. This means, more formally, that
a function $f(x)$ is continuous at $x=a$ if $\underset{x\to a}{lim}f(x)=f(a)$.
For a function to be differentiable at a point, it must be continuous at that point....
The derivative of a function $f(x)$ with respect to its variable $x$ is defined
by the following limit, assuming that limit exists.
$${f}^{\prime}(x)=\underset{h\to 0}{lim}\frac{f(x+h)-f(x)}{h}$$
If the limit does not exist the function does not have a derivative at that point.
This is why, for example the function $f(x)=|x|$ does not have a defined derivative
at $x=0$, because the limit (of the differentiation) does not exist at that point.
The function is continuous at that point because the left and right limits are the same, but the derivative function does not exist.
A function is continuous over an interval $[a,b]$ if:
it is continuous at every point in $(a,b)$,
it is right-continuous at $x=a$, i.e., $\underset{x\to {a}^{+}}{lim}f(x)=f(a)\ne \mathrm{\infty}$
it is left-continuous at $x=b$, i.e., $\underset{x\to {b}^{-}}{lim}f(x)=f(b)\ne \mathrm{\infty}$
If a function is differentiable then it must also be continuous.
If $h$ is replaced with $\mathrm{\Delta}x$ we would write
$${f}^{\prime}(x)=\underset{\mathrm{\Delta}x\to 0}{lim}\frac{f(x+\mathrm{\Delta}x)-f(x)}{\mathrm{\Delta}x}$$
Where $\mathrm{\Delta}$ just means a "small change in". This small change in $x$ leads
to a small change in $y$, which is given by $f(x+\mathrm{\Delta}x)-f(x)$. Therefore we can
write:
$${f}^{\prime}(x)=\underset{\mathrm{\Delta}x\to 0}{lim}\frac{\mathrm{\Delta}y}{\mathrm{\Delta}x}=\frac{\mathrm{d}y}{\mathrm{d}x}$$
I.e, if $y=f(x),{f}^{\prime}(x)$ can be written as $\frac{\mathrm{d}y}{\mathrm{d}x}$.
Now at school, as we'll see in the integration by parts section, $\frac{\mathrm{d}y}{\mathrm{d}x}$ was sometimes treated as a fraction, although my teacher always said that it wasn't a fraction. The above,
which I didn't know at the time, explains why. And, finally, thanks to the awesome book "The Calculus
Lifesaver" by Adrian Banner [Ref], I know why :)
Unfortunately neither $\mathrm{d}y$ or $\mathrm{d}x$ means anything by itself ...
... the quantity $\frac{\mathrm{d}y}{\mathrm{d}x}$ is not actually a fraction at all - it's the limit of the fraction $\frac{\mathrm{\Delta}y}{\mathrm{\Delta}x}$ as $\mathrm{\Delta}x\to 0$.
Example
We can use the definition above to take the derivative of $y=4{x}^{3}$:
$$\begin{array}{rl}{f}^{\prime}(x)& =\underset{h\to 0}{lim}\frac{4(x+h{)}^{3}-4{x}^{3}}{h}\\ & =4\underset{h\to 0}{lim}\frac{(x+h{)}^{3}-{x}^{3}}{h}\\ & =4\underset{h\to 0}{lim}\frac{3h{x}^{2}+3{h}^{2}x+{h}^{3}}{h}\\ & =4\underset{h\to 0}{lim}3{x}^{2}+3hx+{h}^{2}\end{array}$$
We know that this limit exists because the function $f$ is continuous by virtue of it being
a polynomial. We also know that $h$ is tending to zero, so those terms just disappear,
which leaves us with...
$$\begin{array}{rl}{f}^{\prime}(x)& =4\underset{h\to 0}{lim}3{x}^{2}+3hx+{h}^{2}\\ & =12{x}^{2}\end{array}$$
Thus,
$$\frac{d}{dx}(4{x}^{3})={f}^{\prime}(x)=12{x}^{2}$$
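The limit definition can also be checked numerically: for small $h$ the difference quotient of $f(x)=4x^3$ should approach $12x^2$.

```python
import math

def f(x):      return 4 * x**3
def fprime(x): return 12 * x**2    # the derivative we just derived

h = 1e-6
for x in (0.5, 2.0, -3.0):
    quotient = (f(x + h) - f(x)) / h          # (f(x+h) - f(x)) / h
    assert math.isclose(quotient, fprime(x), rel_tol=1e-4)
```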
Chain Rule
Use this when you need to differentiate a function that is made up
of the result of one function passed into the next and so on. I.e. if
$h(x)=f(g(x))$.
Quotient Rule
When you have a function $h(x)$ that can be expressed as a fraction, $\frac{f(x)}{g(x)}$,
use the quotient rule to allow you to differentiate $f(x)$ and $g(x)$ independently (easier!)
and then combine the results using this rule. $f(x)$ becomes $u$ and $g(x)$ becomes $v$:
$$\frac{dy}{dx}=\frac{v\frac{du}{dx}-u\frac{dv}{dx}}{{v}^{2}}$$
Solve the following:
$$y=\frac{3{x}^{4}+2{x}^{3}+1}{4-9x}$$
To do this we note that it is of the form $\frac{u}{v}$ where:
$$u=3{x}^{4}+2{x}^{3}+1\text{, and}v=4-9x$$
So we can calculate:
$$\frac{du}{dx}=12{x}^{3}+6{x}^{2}\text{, and}\frac{dv}{dx}=-9$$
Plugging these into the quotient rule formula we get,
$$\begin{array}{rl}\frac{dy}{dx}& =\frac{v\frac{du}{dx}-u\frac{dv}{dx}}{{v}^{2}}\\ & =\frac{v(12{x}^{3}+6{x}^{2})-u(-9)}{{v}^{2}}\\ & =\frac{(4-9x)(12{x}^{3}+6{x}^{2})+9(3{x}^{4}+2{x}^{3}+1)}{(4-9x{)}^{2}}\\ & =\frac{48{x}^{3}+24{x}^{2}-108{x}^{4}-54{x}^{3}+27{x}^{4}+18{x}^{3}+9}{(4-9x{)}^{2}}\\ & =\frac{-81{x}^{4}+12{x}^{3}+24{x}^{2}+9}{(4-9x{)}^{2}}\end{array}$$
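A numerical spot-check of the quotient rule for this example, comparing a central-difference estimate against the $\frac{v u' - u v'}{v^2}$ form:

```python
import math

def y(x):
    return (3*x**4 + 2*x**3 + 1) / (4 - 9*x)

def dy(x):
    u, v   = 3*x**4 + 2*x**3 + 1, 4 - 9*x
    du, dv = 12*x**3 + 6*x**2, -9.0
    return (v*du - u*dv) / v**2          # the quotient rule

h = 1e-6
for x in (0.0, 1.0, 2.0):
    numeric = (y(x + h) - y(x - h)) / (2 * h)   # central difference
    assert math.isclose(numeric, dy(x), rel_tol=1e-6)
```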
Product Rule
Use the product rule when trying to differentiate a quantity that is the
multiplication of two functions, i.e. when trying to find the derivative of
$h(x)$ when $h(x)=f(x)g(x)$. One of these functions becomes $u$, the
other $v$. Use this so you can differentiate the simpler functions $f(x)$ and
$g(x)$ independently and then combine the results.
For two functions...
$$\frac{dy}{dx}=u\frac{dv}{dx}+v\frac{du}{dx}$$
Or for three functions...
$$\frac{dy}{dx}=\frac{du}{dx}vw+u\frac{dv}{dx}w+uv\frac{dw}{dx}$$
Or, even, for any number of functions...
... write down the product $abc\dots uvw$ a total of $n$ times and put a $d/dx$ in front of a different factor in each term ...
The following is a really cool visual tutorial by Eugene Khutoryansky...
Example
A fairly simple example - find the derivative of $y=(2x+1{)}^{3}(x-1{)}^{4}$. Okay, so we could just expand this out but it would
be pretty tedious. The product rule helps make it easier!
In this case if we let $f(x)=(2x+1{)}^{3}$ and $g(x)=(x-1{)}^{4}$, then we can write $h(x)=f(x)g(x)=(2x+1{)}^{3}(x-1{)}^{4}$,
which is in the exact pattern we need for the product rule where we will label $f(x)$ as $u$ and $g(x)$ as $v$:
$$\begin{array}{rl}u& =(2x+1{)}^{3}\\ v& =(x-1{)}^{4}\end{array}$$
We can find their derivatives as follows. Let $a=(2x+1)$.
$$u={a}^{3}$$$$\therefore \frac{du}{da}=3{a}^{2}$$
And ...
$$\frac{da}{dx}=2$$
Therefore...
$$\frac{du}{dx}=\frac{da}{dx}\cdot \frac{du}{da}=2\cdot 3{a}^{2}=6{a}^{2}=6(2x+1{)}^{2}$$
Let $b=(x-1)$.
$$v={b}^{4}$$$$\therefore \frac{dv}{db}=4{b}^{3}$$
And...
$$\frac{db}{dx}=1$$
Therefore...
$$\frac{dv}{dx}=\frac{db}{dx}\cdot \frac{dv}{db}=1\cdot 4{b}^{3}=4(x-1{)}^{3}$$
Now that we have the above we can use the product rule:
$$\frac{dy}{dx}=u\frac{dv}{dx}+v\frac{du}{dx}$$$$\begin{array}{rl}\frac{dy}{dx}& =u(4(x-1{)}^{3})+v(6(2x+1{)}^{2})\\ & =(2x+1{)}^{3}4(x-1{)}^{3}+(x-1{)}^{4}6(2x+1{)}^{2}\\ & =2(2x+1{)}^{2}(x-1{)}^{3}[2(2x+1)+3(x-1)]\\ & =2(2x+1{)}^{2}(x-1{)}^{3}(7x-1)\end{array}$$
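And a numerical check of that final factored derivative against a central-difference estimate:

```python
import math

def f(x):  return (2*x + 1)**3 * (x - 1)**4
def df(x): return 2 * (2*x + 1)**2 * (x - 1)**3 * (7*x - 1)   # factored result

h = 1e-6
for x in (0.0, 2.0, -1.5):
    numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    assert math.isclose(numeric, df(x), rel_tol=1e-5)
```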
Integration By Substitution
Use variable substitution, for example, if the integral is
$$y=\int (ax+b{)}^{n}{\textstyle \phantom{\rule{0.278em}{0ex}}}\mathrm{d}x$$
Then put $u=ax+b$, giving...
$$y=\int {u}^{n}{\textstyle \phantom{\rule{0.278em}{0ex}}}\mathrm{d}x$$
But now we need to integrate with respect to $u$, not $x$! In other words we need to go from the above integral to
something like,
$$y=\int ???{\textstyle \phantom{\rule{0.278em}{0ex}}}\mathrm{d}u$$
We can use the chain rule as follows...
$$\frac{dy}{du}=\frac{dy}{dx}{\textstyle \phantom{\rule{0.278em}{0ex}}}\frac{dx}{du}$$
By integrating both sides of the equation with respect to $u$ we get....
$$y=\int \frac{dy}{dx}{\textstyle \phantom{\rule{0.278em}{0ex}}}\frac{dx}{du}{\textstyle \phantom{\rule{0.278em}{0ex}}}\mathrm{d}u$$
We now have an integral of some function with respect to $u$. We can find $dy/dx$
from the definition of the original integral (just differentiate both sides!). We can find $dx/du$
by firstly rearranging $u=ax+b$ to...
$$x=\frac{u-b}{a}$$
Then take the derivative with respect to $u$ to get
$$\frac{dx}{du}=\frac{1}{a}$$
Notice that, because the "thing" (factor) that is raised to a power in the integral is a linear function, the $x$ terms
will always disappear, leaving only a constant, which can be expressed w.r.t $u$. If $x$ terms remained after differentiating
the "thing" (factor), it would stop us doing our desired integration w.r.t $u$!
Now we can see that...
$$\frac{dy}{dx}{\textstyle \phantom{\rule{0.278em}{0ex}}}\frac{dx}{du}=(ax+b{)}^{n}\cdot \frac{1}{a}={u}^{n}\cdot \frac{1}{a}$$
Substitute this into our integral and we have...
$$y=\int {u}^{n}\cdot \frac{1}{a}{\textstyle \phantom{\rule{0.278em}{0ex}}}\mathrm{d}u$$
We now know how to integrate the above using the list of standard integrals.
$$y=\frac{{u}^{n+1}}{a(n+1)}+C$$
Now we can substitute back in for $u$ to obtain the answer
$$y=\frac{(ax+b{)}^{n+1}}{a(n+1)}+C$$
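We can sanity-check the result by differentiating it numerically: the derivative of the antiderivative should return the original integrand. A sketch with arbitrarily chosen $a$, $b$ and $n$:

```python
import math

a, b, n = 3.0, 2.0, 4

def F(x):  # the antiderivative found by substitution
    return (a*x + b)**(n + 1) / (a * (n + 1))

def f(x):  # the original integrand
    return (a*x + b)**n

h = 1e-6
for x in (0.0, 0.5, 2.0):
    numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert math.isclose(numeric, f(x), rel_tol=1e-6)
```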
One point to note is as follows. I always remember being taught to re-arrange the substitution that was made...
$$\frac{dx}{du}=\frac{1}{a}$$
To...
$$\mathrm{d}x=\frac{\mathrm{d}u}{a}$$
... and then substitute for the $\mathrm{d}x$ term in the integral. However, this is not strictly correct as far
as I understand, because a differential coefficient is not a fraction... it is a limit:
$$\frac{\mathrm{d}y}{\mathrm{d}x}=\underset{a\to 0}{lim}\frac{f(x+a)-f(x)}{a}$$
So, as we can see, $\mathrm{d}y/\mathrm{d}x$ is not really a fraction... hence the above method and explanation,
even if the "trick" I was taught works.
Integration By Parts
Phasors
This awesome GIF is produced by
RadarTutorial.eu, although I couldn't find it on their site.
I originally found the image on this forum thread and the
watermark bears RadarTutorial's site address (it's not too visible on the white background of this page).