Calculus
Power rule
We learn this in high school: the function x^n has the derivative n*x^(n-1).
Proof example
Let's take the square root function x^(1/2). The power rule says its derivative should be (1/2) * x^(-1/2).
We'll prove it using the limit "as delta x goes to zero". Some people like to call that variable h or d, but we'll go with delta x. The difference quotient is (√(x + Δx) - √x) / Δx.
We cannot set Δx to zero, because it's sitting in the denominator. Let's change the expression. Using the difference of squares (a + b)(a - b) = a^2 - b^2, we multiply numerator and denominator by (√(x + Δx) + √x).
Normally, when we cancel something from the denominator, we add a clause that it must be nonzero. That is irrelevant/implicit when you're working with limits, since Δx approaches zero without ever being zero.
We're left with the expression 1 / (√(x + Δx) + √x), and can now let Δx be zero without breaking anything. That's just 1 / (2√x), or (1/2) * x^(-1/2), which fits the power rule.
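To double-check the algebra, here's a small sympy sketch (assuming sympy is installed) that takes the limit of the difference quotient and compares it with the power rule:

```python
import sympy as sp

x, dx = sp.symbols('x dx', positive=True)

# Difference quotient (f(x + dx) - f(x)) / dx for f(x) = sqrt(x)
quotient = (sp.sqrt(x + dx) - sp.sqrt(x)) / dx

# Limit as dx -> 0 should match the power rule: (1/2) * x**(-1/2)
derivative = sp.limit(quotient, dx, 0)
power_rule = sp.Rational(1, 2) * x**sp.Rational(-1, 2)

print(derivative)                                  # 1/(2*sqrt(x))
print(sp.simplify(derivative - power_rule) == 0)   # True
```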
Chain rule
A function f(x) that is a composition of two functions g(x) and h(x), one inside the other as f(x) = g(h(x)), has the derivative f'(x) = g'(h(x)) * h'(x).
Note that in the first factor you leave the inner function untouched: you differentiate the outer function with respect to its argument.
Example: the derivative of sin(2x) is 2cos(2x), because you are differentiating two functions: sin(something), and 2x. The derivative of 2x is just 2.
Aside: As you know, the sine of x differentiates to the cosine of x, but there are two functions in sine of x: the sine, and the x itself. What gives? Indeed you should differentiate both. But x differentiates to 1, which you just don't need to write.
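A quick sympy check of the sin(2x) example, building the derivative from the two factors:

```python
import sympy as sp

x = sp.symbols('x')

# Outer function sin(u), inner function u = 2x
inner = 2 * x
outer_derivative = sp.cos(inner)      # sin'(u) evaluated at u = 2x, inner left untouched
inner_derivative = sp.diff(inner, x)  # derivative of 2x is 2

chain_rule = outer_derivative * inner_derivative
print(chain_rule)                     # 2*cos(2*x)
print(sp.diff(sp.sin(2 * x), x))      # 2*cos(2*x), same thing
```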
Product rule
A product of two functions f(x) g(x) has the derivative f'(x) g(x) + f(x) g'(x).
For a product of three functions, differentiate one factor at a time, leaving the rest untouched, and add up the resulting terms.
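A sketch of the three-factor case with sympy; the factors sin x, x^2 and e^x are arbitrary example choices:

```python
import sympy as sp

x = sp.symbols('x')
f, g, h = sp.sin(x), x**2, sp.exp(x)   # arbitrary example factors

# Differentiate one factor at a time, leaving the rest untouched
by_hand = sp.diff(f, x)*g*h + f*sp.diff(g, x)*h + f*g*sp.diff(h, x)

print(sp.simplify(by_hand - sp.diff(f*g*h, x)) == 0)  # True
```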
Function inputs
Take f(x).
- Reflect across the x-axis: -f(x). This should be intuitive.
- Reflect across the y-axis: f(-x)
- Reflect across both axes: -f(-x)
Example: Periodic functions
sin(2x) is twice as "fast" as sin(x); sin(x/2) is half as fast. 2sin(x) makes the curve twice as tall, but doesn't change the frequency of the oscillation; that is, it still reaches 0 after 1π of time (i.e. at x = 1π). sin(x) + 2 shifts the curve 2 steps upward, so that its "zero axis" is a horizontal line that crosses the y-axis at 2.
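A small sympy check of those claims (`periodicity` lives in sympy.calculus.util):

```python
import sympy as sp
from sympy.calculus.util import periodicity

x = sp.symbols('x')

print(periodicity(sp.sin(2*x), x))   # pi    (twice as fast as sin x, whose period is 2*pi)
print(periodicity(sp.sin(x/2), x))   # 4*pi  (half as fast)
print(periodicity(2*sp.sin(x), x))   # 2*pi  (taller, but same frequency)
print(sp.sin(sp.pi) + 2)             # 2     (sin(x) + 2 sits on its shifted "zero axis" at x = pi)
```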
TODO Plot a third-degree polynomial and paste images for the difference between f(x), -f(x), f(-x).
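Here's a minimal matplotlib sketch for that TODO; the polynomial x^3 - 2x^2 + 1 is just a placeholder choice:

```python
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    # arbitrary third-degree polynomial for illustration
    return x**3 - 2*x**2 + 1

x = np.linspace(-3, 3, 400)

plt.plot(x, f(x), label='f(x)')
plt.plot(x, -f(x), label='-f(x)  (reflected across the x-axis)')
plt.plot(x, f(-x), label='f(-x)  (reflected across the y-axis)')
plt.axhline(0, color='gray', linewidth=0.5)
plt.axvline(0, color='gray', linewidth=0.5)
plt.legend()
plt.show()
```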
Odd and even functions
An even function is one where f(-x) = f(x), i.e. it is symmetric under reflection across the y-axis. An odd function is one where -f(-x) = f(x), i.e. reflecting it across both axes gives back the same graph.
If there is a constant term (a y-intercept offset from the origin), the function cannot be odd, since an odd function must satisfy f(0) = -f(0), i.e. f(0) = 0.
A function is neither even nor odd when knowing f(x) tells you nothing about f(-x), i.e. when the graph has neither reflection symmetry.
The terminology comes from the exponents of the power functions x^n, but for a general function the exponents don't necessarily determine it.
For a polynomial: if every term has odd degree the function is odd, and if every term has even degree (a constant counts as degree 0) it is even; the highest-degree exponent alone does not decide it.
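A sketch of checking the symmetry with sympy; the example polynomials are arbitrary:

```python
import sympy as sp

x = sp.symbols('x')

def symmetry(f):
    """Classify f as even, odd, or neither by comparing f(-x) with f(x) and -f(x)."""
    reflected = f.subs(x, -x)
    if sp.simplify(reflected - f) == 0:
        return 'even'
    if sp.simplify(reflected + f) == 0:
        return 'odd'
    return 'neither'

print(symmetry(x**2 + 3))   # even: only even-degree terms
print(symmetry(x**3 + x))   # odd: only odd-degree terms
print(symmetry(x**3 + 1))   # neither: the constant term breaks oddness
```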
What's a function
All these are functions:
- sin x
- x^2
- 2x
- x
- ln x
- e^x
- 2^x
A function evaluates to only one value for a given x.
Elementary function
Euclid's similarity cases
△ABC ≅ △DEF means triangle ABC is congruent to (has the same measurements as) triangle DEF.
Compare two triangles:
- Two pairs of sides proportional and the angle between them equal => similar triangles (SAS).
- All three pairs of sides proportional => similar (SSS; see the sketch below).
- Two pairs of angles equal => similar (AA); the third angle is then automatically equal.
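A tiny Python sketch of the side-proportionality (SSS) check, with made-up side lengths:

```python
from fractions import Fraction

# Two triangles given by their three side lengths (made-up numbers)
t1 = (3, 4, 5)
t2 = (6, 8, 10)

# SSS similarity: all three pairs of corresponding sides must share one ratio
ratios = {Fraction(a, b) for a, b in zip(sorted(t1), sorted(t2))}
print(len(ratios) == 1)   # True -> similar, with ratio 1:2
```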
Partial fraction expansion
A way to rewrite a rational expression as a sum of simpler terms, which is easier for some purposes – like taking the derivative or integral, or doing a Laplace transform.
Part of elementary school in some countries…
Take an example.
First check what degree these polynomials are in. If the denominator has a lower degree than the numerator, you do a polynomial division first, and then do the partial fraction expansion on what remains. The technique is for expressions you can't divide any further, like the remainder left by a polynomial division.
Now observe the denominator. What x makes it reach zero? That's 2. So you can factor out (x - 2), right? (Scribble a polynomial division of the denominator by (x - 2).)
We get a factored denominator:
You could also try to factor the second-degree factor we just got, but a quick check in our head (negative discriminant) tells us it has no real roots. Skip it.
Now the aim of the technique is to introduce unknown constants A and B, sometimes C etcetera:
A is a constant because its denominator is first-degree. The rule: each numerator should be one degree less than its denominator, so the quadratic factor gets a numerator of the form Bx + C.
To put the right-hand side over the same denominator, expand the fractions. You know how…
Now you can just multiply away the denominator from both sides and expand the RHS. Then compare coefficients (or plug in convenient values of x) to solve for A, B and C.
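The example expression itself did not survive in these notes, so here is a hypothetical one of the same shape (one real root at x = 2 and one irreducible quadratic factor), decomposed with sympy's apart:

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical example: numerator of lower degree than the denominator,
# denominator with a root at x = 2 and an irreducible quadratic factor
expr = (x + 3) / ((x - 2) * (x**2 + 1))

print(sp.apart(expr, x))   # 1/(x - 2) - (x + 1)/(x**2 + 1)
```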
Logarithms
Addition: lg 5 + lg 5 = lg(5 * 5) = lg 25
Subtraction: lg 10 - lg 5 = lg(10/5) = lg 2
Exponents: lg 2^3 = 3 lg 2
Change of base formula
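A quick numeric sanity check of the rules above, plus the change-of-base formula log_b(x) = ln x / ln b (here checked for base 2):

```python
import math

# Addition and subtraction rules
print(math.isclose(math.log10(5) + math.log10(5), math.log10(25)))   # True
print(math.isclose(math.log10(10) - math.log10(5), math.log10(2)))   # True

# Exponent rule: lg 2^3 = 3 lg 2
print(math.isclose(math.log10(2**3), 3 * math.log10(2)))             # True

# Change of base: log_2(10) computed via natural logarithms
print(math.isclose(math.log(10) / math.log(2), math.log2(10)))       # True
```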
Limits
"What is the limit of when n goes to infinity?"
Solution: we want to do a variable substitution on the function's input. Rewrite the leading n so that the whole expression is written in terms of 1/n.
Then, substituting x for 1/n, we instead ask for the limit as x goes to zero – because 1/n goes to zero as n goes to infinity. We get a new expression in x.
What happens here? The answer lies in the standard limits below. The limit goes to 1 * 1 = 1.
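The original expression is missing here, so as an assumed stand-in, the same substitution technique applied to n * sin(1/n):

```python
import sympy as sp

n, x = sp.symbols('n x', positive=True)

# Assumed example: n * sin(1/n). Everything is written in terms of 1/n, then x = 1/n.
original = n * sp.sin(1 / n)
substituted = sp.sin(x) / x          # same expression with x = 1/n (so the leading n becomes 1/x)

print(sp.limit(original, n, sp.oo))  # 1
print(sp.limit(substituted, x, 0))   # 1  (the standard limit sin(x)/x -> 1)
```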
Standard limits
(Table of standard limits, with columns for x → 0 and x → ∞.)
Differential equation
Differential equations involve an unknown function and its derivatives. In a "first-order" differential equation, the highest derivative that appears is the first derivative. In an "ordinary" differential equation, the unknown function depends on only one variable.
If there is a requirement that the solution take on a particular value at a particular input, the exercise is called an initial value problem, `begynnelsevärdesproblem'.
You can plot the solutions to differential equations in the form of a grid of 'directions' in a coordinate system: the tangent slope that a solution function would have at a given (x,y). This is a direction field, `riktningsfält'.
Now, by a linear differential equation, the following is meant:
Take a generic equation y' + g(x)y = h(x), and name the LHS L(y). Linearity means that L(y1 + y2) = L(y1) + L(y2) and that L(c*y) = c*L(y). This is similar to linearity in other fields of math, no?
Solving a first-order differential equation
The simplest form of a first-order equation is y'(x) = h(x), which I will name A. A more general form is y'(x) + g(x)*y(x) = h(x). To solve that one, turn it into A.
Choose a primitive G(x) of the coefficient g(x) in front of y. Multiply both sides of the equation with the so-called integrating factor, `integrerande faktorn', e^G(x). (Because e^G(x) is nonzero for all x, the resulting equation is strictly equivalent!)
Notice that the LHS is now the derivative of e^G(x)*y(x)! That is neat since we are working on differential equations. Rewrite the equation as (e^G(x)*y(x))' = e^G(x)*h(x).
Now you can treat this as you would treat A. Namely, turn y' = h(x) into y = ∫ h(x) dx + C (see my seminar writeups for another example):
Applying this to the equation we got, we get e^G(x)*y(x) = ∫ e^G(x)*h(x) dx + C.
Now you can divide both sides by e^G(x) and get an expression for y. Insert whatever required values you were given for y and x to find out C, etc.
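A sketch of the recipe on an assumed concrete equation, y' + 2y = e^x, checked against sympy's dsolve:

```python
import sympy as sp

x, C = sp.symbols('x C')
y = sp.Function('y')

# Assumed example: y' + 2*y = exp(x), i.e. g(x) = 2 and h(x) = exp(x)
G = sp.integrate(2, x)              # a primitive of the coefficient in front of y: 2*x
factor = sp.exp(G)                  # the integrating factor e^G(x) = exp(2*x), never zero

# The equation becomes (exp(2*x)*y)' = exp(2*x)*exp(x); integrate the RHS and add C
rhs = sp.integrate(factor * sp.exp(x), x) + C

# Divide both sides by the integrating factor to isolate y
solution = rhs / factor             # equivalent to exp(x)/3 + C*exp(-2*x)

# Check that it satisfies the original equation, and compare with sympy's own solver
print(sp.simplify(solution.diff(x) + 2*solution - sp.exp(x)) == 0)   # True
print(sp.dsolve(sp.Eq(y(x).diff(x) + 2*y(x), sp.exp(x)), y(x)))      # same family of solutions
```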
Fundamental Theorem of Calculus
Proves the useful evaluation formula, insättningsformeln (PB 298): the integral of f(x) from a to b equals F(b) - F(a), where F is a primitive of f. This is just the basic rule of how to integrate over a given interval.
Can also be used for (PB 298)
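A small sympy check of the evaluation formula on an arbitrary example, the integral of sin x from 0 to π:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)
a, b = 0, sp.pi

F = sp.integrate(f, x)                # a primitive: -cos(x)
print(sp.integrate(f, (x, a, b)))     # 2, the definite integral
print(F.subs(x, b) - F.subs(x, a))    # 2, i.e. F(b) - F(a) gives the same value
```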
Optimization
Finding the "best" solution to a problem, like the maximum payoff or minimal material use. Involves finding asymptotes, zeros of derivatives, etc.
Continuity
A function is continuous at a point if it is defined there and doesn't jump to some unexpected value at exactly that point. Mathematically: the limit from both sides of that point is equal to the value of the function at that point.
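A sketch of that criterion on an assumed example that jumps at x = 0 (x + 1 for x < 0, x^2 for x ≥ 0):

```python
import sympy as sp

x = sp.symbols('x')
left_branch = x + 1     # formula used for x < 0
right_branch = x**2     # formula used for x >= 0

left = sp.limit(left_branch, x, 0, dir='-')     # 1
right = sp.limit(right_branch, x, 0, dir='+')   # 0
value = right_branch.subs(x, 0)                 # 0

# Continuous at 0 only if both one-sided limits equal the function value there
print(left == right == value)                   # False: this function jumps at x = 0
```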
Differentiability
Standard derivatives
The derivatives of common functions are as follows:
f(x) | f'(x) |
---|---|
sin x | cos x |
cos x | -sin x |
tan x | 1/cos^2 x |
cot x | -1/sin^2 x |
arcsin x | 1/√(1 - x^2) |
arccos x | -1/√(1 - x^2) |
arctan x | 1/(1 + x^2) |
arccot x | -1/(1 + x^2) |
Standard integrals
Implicit differentiation
When you differentiate an equation in which y appears as an unknown function of x, without first solving for y (treat y as y(x) and use the chain rule), it's called implicit differentiation. Nothing to be scared of.
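A sketch with sympy on the circle x^2 + y^2 = 1, treating y as an unknown function y(x):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The circle x**2 + y(x)**2 = 1 defines y implicitly as a function of x
equation = sp.Eq(x**2 + y(x)**2, 1)

# Differentiate both sides with respect to x, then solve for y'(x)
differentiated = sp.Eq(sp.diff(equation.lhs, x), sp.diff(equation.rhs, x))
dydx = sp.solve(differentiated, y(x).diff(x))[0]
print(dydx)    # -x/y(x)
```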
Absolute value
- |xy| = |x||y| Proof:
- For |x + y|, see Triangle inequality
Taking the absolute value of a complex number lets you get rid of the "i".
Absolute equation
Consider the graph of an absolute-value linear function like |x - 1|. It has two distinct line segments meeting in a corner. It's not one smooth function; it behaves like two different functions, depending on which x you're at.
So to rewrite |x - 1| without bar notation, make cases for different x. The quick way is to see where the inside becomes zero, which is at x = 1. For the case x ≥ 1, you can just remove the bars: x - 1. For the case x < 1, also flip the sign: -(x - 1) = 1 - x.
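A sympy check of the case split against the bar notation (sympy writes the bars as Abs):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Case split at the zero of the inside, x = 1
rewritten = sp.Piecewise((x - 1, x >= 1), (1 - x, x < 1))

# Spot-check a few values against the bar notation
for value in (-2, 0, 1, 3):
    print(rewritten.subs(x, value) == sp.Abs(value - 1))   # True each time
```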
It's easier with complex numbers. Consider a complex number inside bar notation, like |a + bi|. Viewed as a vector, all complex numbers with the same absolute value lie on a circle around the origin; which point on that circle we have is irrelevant - we want only the radius r in r*e^(iθ).
Now consider the case of some unknown complex number z, plus other stuff: |z + 1 + i|. The "vector" z is now measured from the point -1 - i, so this circle is centred off-origin.
Example solving the above: Set z := x + yi. Say the absolute value |z + 1 + i| is 2. Write the equation: |(x + 1) + (y + 1)i| = 2, i.e. √((x + 1)^2 + (y + 1)^2) = 2.
The last form can be recognized as Pythagoras.
After squaring, the equation resembles that of a circle: (x + 1)^2 + (y + 1)^2 = 4, a circle with centre (-1, -1) and radius 2.
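A sympy check of that example (multiplying by the conjugate to get the squared absolute value):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + y*sp.I

w = z + 1 + sp.I
modulus_squared = sp.expand(w * sp.conjugate(w))   # |z + 1 + i|^2
print(modulus_squared)            # x**2 + 2*x + y**2 + 2*y + 2, i.e. (x + 1)**2 + (y + 1)**2

# Setting |z + 1 + i| = 2 and squaring both sides gives the circle equation
print(sp.Eq(modulus_squared, 4))  # centre (-1, -1), radius 2
```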
Maclaurin/Taylor series
Standard expansions (standardutvecklingar)
In these cases, θ is a number between 0 and 1 that depends on x and n.
Proofs for the above (PB )
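The expansion formulas themselves did not survive in these notes; as an example, sympy can produce the polynomial part of such Maclaurin expansions (it reports only the order of the error term, not the θ form of the remainder):

```python
import sympy as sp

x = sp.symbols('x')

# Maclaurin expansions around x = 0, up to (but not including) degree 6
print(sp.series(sp.exp(x), x, 0, 6))   # 1 + x + x**2/2 + x**3/6 + x**4/24 + x**5/120 + O(x**6)
print(sp.series(sp.sin(x), x, 0, 6))   # x - x**3/6 + x**5/120 + O(x**6)
```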
Mean value theorem
Partial integration (integration by parts)
What links here
- Math