Calculus

Power rule

We learn this in high school: a function $x^n$ has the derivative $nx^{n-1}$.

Proof example

Let's take the square root function $x^{1/2}$. The power rule says this should result in $\frac{1}{2} x^{-1/2}$.

We'll prove it using the limit "as delta x goes to zero". Some people like to call that variable h or d, but we'll go with delta x.

$$\lim_{\Delta x \to 0} \frac{ \sqrt{x + \Delta x} - \sqrt{x} }{ \Delta x }$$

We cannot set $\Delta x$ to zero, because it's sitting in the denominator. Let's change the expression. Using the difference of squares $(a + b)(a - b) = a^2 - b^2$, we multiply numerator and denominator by the conjugate $\sqrt{x + \Delta x} + \sqrt{x}$ (the "$a + b$").

$$\lim_{\Delta x \to 0} \frac{ \sqrt{x + \Delta x} - \sqrt{x} }{ \Delta x } \cdot \frac{ \sqrt{x + \Delta x} + \sqrt{x} } { \sqrt{x + \Delta x} + \sqrt{x} } = \lim_{\Delta x \to 0} \frac{ \textcolor{red}{(x +} \Delta x\textcolor{red}{) - x} }{ \Delta x (\sqrt{x + \Delta x} + \sqrt{x})} = \lim_{\Delta x \to 0} \frac{ \textcolor{red}{\Delta x} }{ \textcolor{red}{\Delta x} (\sqrt{x + \Delta x} + \sqrt{x})}$$

Normally, when we cancel something from the denominator, we add a clause that it must be nonzero. With limits that's implicit: $\Delta x$ approaches zero but never equals it.

We're left with the expression $\frac{1}{\sqrt{x + \Delta x} + \sqrt{x}}$, and can now pretend that $\Delta x$ is zero without breaking the equation. That's just $\frac{1}{2\sqrt{x}}$, or $\frac{1}{2} x^{-\frac{1}{2}}$, which fits the power rule.
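
As a numeric sanity check, here's a minimal sketch (the helper name difference_quotient is mine): it evaluates the difference quotient for the square root at a sample point and watches it approach $\frac{1}{2\sqrt{x}}$.

    import math

    def difference_quotient(f, x, dx):
        """Forward difference quotient (f(x + dx) - f(x)) / dx."""
        return (f(x + dx) - f(x)) / dx

    x = 4.0
    for dx in (1e-2, 1e-4, 1e-6):
        print(dx, difference_quotient(math.sqrt, x, dx))
    # Approaches 1/(2*sqrt(4)) = 0.25 as dx shrinks.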

Chain rule

A function f(x) that composes two functions g(x) and h(x), one inside the other as f(x) = g(h(x)), has the derivative g'(h(x)) · h'(x).

Note that in the first factor you leave the inner function unmodified.

Example: the derivative of sin(2x) is 2cos(2x), because you are differentiating two functions: sin(something), and 2x. The derivative of 2x is just 2.

Aside: As you know, the sine of x differentiates to the cosine of x, but there are two functions in sin(x): the sine, and the x itself. What gives? Indeed you should differentiate both. But x differentiates to 1, which you just don't need to write.
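
A quick symbolic check with sympy (my sketch, not from the original notes):

    from sympy import symbols, sin, diff

    x = symbols('x')
    print(diff(sin(2*x), x))   # 2*cos(2*x), i.e. g'(h(x)) * h'(x)
    print(diff(sin(x), x))     # cos(x): the inner derivative is just 1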

Product rule

A product of two functions f(x) g(x) has the derivative f'(x) g(x) + f(x) g'(x).

For a product of three functions, differentiate one at a time, leaving the rest untouched, in each term: (fgh)' = f'gh + fg'h + fgh'. A check of this is sketched below.
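
A symbolic check of the three-factor rule (a sketch using abstract functions):

    from sympy import symbols, Function, diff, simplify

    x = symbols('x')
    f, g, h = (Function(name)(x) for name in 'fgh')

    lhs = diff(f*g*h, x)
    rhs = diff(f, x)*g*h + f*diff(g, x)*h + f*g*diff(h, x)
    print(simplify(lhs - rhs))   # 0: each term differentiates one factor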

Function inputs

Take f(x).

  • Reflect across the x-axis: -f(x). This should be intuitive.
  • Reflect across the y-axis: f(-x)
  • Reflect across both axes: -f(-x)

Example: Periodic functions

sin(2x) is twice as "fast" as sin(x); sin(x/2) is half as fast. 2sin(x) makes the curve twice as tall, but does not change the frequency of the oscillation, i.e. it still reaches 0 after π time (that is, at x = π). sin(x) + 2 shifts the curve 2 steps upward, so that its "zero axis" is a horizontal line that intersects y at 2.

TODO Plot a third-degree polynomial and paste images for the difference between f(x), -f(x), f(-x).
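
A sketch for that TODO (the polynomial x³ - 2x² + 1 is an arbitrary choice of mine):

    import numpy as np
    import matplotlib.pyplot as plt

    def f(x):
        return x**3 - 2*x**2 + 1   # any third-degree polynomial works

    xs = np.linspace(-2, 3, 400)
    for label, ys in [("f(x)", f(xs)), ("-f(x)", -f(xs)), ("f(-x)", f(-xs))]:
        plt.plot(xs, ys, label=label)
    plt.axhline(0, color="gray"); plt.axvline(0, color="gray")
    plt.legend(); plt.show()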

Odd and even functions

An even function is one where f(-x) = f(x), i.e. symmetric when reflected across the y-axis. An odd function is one where -f(-x) = f(x), i.e. if you reflect it across both axes, you end up with the same graph.

If there is a constant term, i.e. a y-intercept offset from the origin, the function fails to be odd, since an odd function must satisfy f(0) = -f(0) = 0.

A function is neither even nor odd when f(x) doesn't tell you anything about f(-x), i.e. when it isn't perfectly reflected across axes.

The terminology comes from the exponents of polynomial functions, but the highest-degree exponent alone doesn't determine it: a polynomial with only odd-degree terms is odd, one with only even-degree terms is even, and a mix of both is neither.
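
A symbolic parity check (the helper name parity is my own):

    from sympy import symbols, simplify

    x = symbols('x')

    def parity(expr):
        """Classify expr as 'even', 'odd', or 'neither'."""
        mirrored = expr.subs(x, -x)
        if simplify(mirrored - expr) == 0:
            return 'even'
        if simplify(mirrored + expr) == 0:
            return 'odd'
        return 'neither'

    print(parity(x**2), parity(x**3), parity(x**3 + 1))  # even odd neither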

What's a function

All these are functions:

  • sin x
  • $x^2$
  • 2x
  • x
  • ln x
  • $e^x$
  • $2^x$

A function evaluates to only one value for a given x.

Elementary function

Euclid's similarity cases

△ABC ≅ △DEF means triangle ABC is congruent to (has the same measurements as) triangle DEF. Similarity (△ABC ∼ △DEF) is weaker: same shape, i.e. equal angles and proportional sides, but not necessarily the same size.

Compare two triangles:

  1. Two pairs of sides proportional and the angle between them equal => similar triangles (SAS).
  2. All three pairs of sides proportional => similar (SSS).
  3. Two pairs of angles equal => similar (AA); the third angle is then forced, since angles sum to 180°.

Partial fraction expansion

A way to change a rational expression into multiple terms, which is then easier for some purposes – like taking the derivative or integral, or doing a Laplace transform.

Part of elementary school in some countries…

Take an example: $\dfrac{10x^2 + 12x + 20}{x^3 - 8}$

First check the degrees of the two polynomials. If the denominator has a lower degree than the numerator, do a polynomial division first, and then do the partial fraction expansion on what remains. The technique is for fractions you can't divide further, like the remainder left by a polynomial division.

Now observe the denominator. What x makes it reach zero? That's 2. So you can factor out (x - 2). (Scribble a polynomial division of $x^3 - 8$ by $x - 2$.)

We get an altered denominator: $\dfrac{10x^2 + 12x + 20}{(x - 2)(x^2 + 2x + 4)}$

You could also try to factor the second-degree expression we just got, but its discriminant is negative ($2^2 - 4 \cdot 4 = -12$), so it has no real roots. Skip it.

Now the aim of the technique is to set variables A and B, sometimes C etcetera:

$$\frac{10x^2 + 12x + 20}{(x - 2)(x^2 + 2x + 4)} = \frac{A}{x - 2} + \frac{Bx + C}{x^2 + 2x + 4}$$

A is a constant because its denominator is first-degree. You perceive the rule: each numerator should be one degree less than its denominator.

To put the right-hand side over the same denominator, expand the fractions. You know how…

$$\frac{10x^2 + 12x + 20}{(x - 2)(x^2 + 2x + 4)} = \frac{A(x^2 + 2x + 4)}{(x - 2)(x^2 + 2x + 4)} + \frac{(Bx + C)(x - 2)}{(x - 2)(x^2 + 2x + 4)}$$

Now you can just remove the denominator from both sides, and expand the RHS.
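
Finishing the example (my completion, not in the original notes): removing the denominator gives

$$10x^2 + 12x + 20 = A(x^2 + 2x + 4) + (Bx + C)(x - 2)$$

Setting x = 2 kills the second term: $40 + 24 + 20 = 12A$, so $A = 7$. Subtracting $7(x^2 + 2x + 4)$ from the LHS leaves $3x^2 - 2x - 8 = (Bx + C)(x - 2)$, and dividing by $(x - 2)$ gives $3x + 4$, so $B = 3$ and $C = 4$. sympy agrees:

    from sympy import symbols, apart

    x = symbols('x')
    print(apart((10*x**2 + 12*x + 20) / (x**3 - 8)))
    # (3*x + 4)/(x**2 + 2*x + 4) + 7/(x - 2)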

Logarithms

Addition: lg 5 + lg 5 = lg 25
Subtraction: lg 10 - lg 5 = lg(10/5) = lg 2

Exponents: lg 2³ = 3 lg 2

Change of base formula

$$\log_2 x = \frac{\log_{10} x}{\log_{10} 2}$$

$$\log_2 x = \frac{\ln x}{\ln 2}$$
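
Both identities are easy to spot-check numerically (x = 37 is an arbitrary pick):

    import math

    x = 37.0
    print(math.log2(x))                    # direct
    print(math.log10(x) / math.log10(2))   # via base-10 logs
    print(math.log(x) / math.log(2))       # via natural logs
    # All three agree up to float rounding.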

Limits

"What is the limit of ncos1nsin1nn \cos \frac1n \sin \frac1n when n goes to infinity?"

Solution: we want to do a variable substitution on the function inputs. We'll have to rewrite the n out front in terms of 1/n. The expression above is equal to $\frac{\cos \frac1n \sin \frac1n}{\frac1n}$.

Thereafter, substituting x for 1/n, we ask for the limit as x goes to zero – because 1/∞ goes to zero. We get the expression $\lim_{x\to 0} \frac{\sin x}{x} \cos x$

What happens here? The answer lies in the standard limits below: $\frac{\sin x}{x} \to 1$ and $\cos x \to 1$, so the limit is 1 · 1 = 1.
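
A numeric spot-check of the limit (a sketch):

    import math

    for n in (10, 1_000, 100_000):
        print(n, n * math.cos(1/n) * math.sin(1/n))
    # Approaches 1 as n grows.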

Standard limits

As x → 0:

  • $\frac{\sin x}{x} \to 1$
  • $(1+x)^{\frac1x} \to e \quad\Leftrightarrow\quad \dfrac{\ln (1+x)}{x} \to 1 \quad\Leftrightarrow\quad \dfrac{e^x - 1}{x} \to 1$

As n → ∞:

  • $(1 + \frac1n)^n \to e$

As x → +∞, for a > 1:

  • $\dfrac{x^n}{a^x} \to 0$

Differential equation

Differential equations involve an unknown function and its derivatives. In a "first-order" differential equation, the highest order of derivative is one. In an "ordinary" differential equation, the function is of one variable.

If there is a requirement that the solution take on a particular value at a particular input, the exercise is called an initial value problem (begynnelsevärdesproblem).

You can plot the solutions to differential equations as a grid of "directions" in a coordinate system: the tangent that a solution function would have at a given (x, y). This is a direction field (riktningsfält).

Now, by a linear differential equation, this is meant:

Take a generic equation $y' + g(x)y = h(x)$, and name the LHS $L(y)$. Linearity implies that $L(y_1 + y_2) = L(y_1) + L(y_2)$ and that $L(ky) = kL(y)$. This is the same notion of linearity as in other fields of math.

Solving a first-order differential equation

The simplest form of a first-order equation is $y' = h(x)$, which I will name A. A more general form is $y' + g(x)y = h(x)$. To solve that one, turn it into A.

Choose an antiderivative $G(x)$ of the coefficient $g(x)$ of y. Multiply both sides of the equation by the so-called integrating factor (integrerande faktorn) $e^{G(x)}$. (Because $e^{G(x)}$ is nonzero for all x, the resulting equation is strictly equivalent!)

$$y'e^{G(x)} + g(x)\,ye^{G(x)} = h(x)e^{G(x)}$$

Notice that the LHS is exactly the derivative of $ye^{G(x)}$, since $\left(ye^{G(x)}\right)' = y'e^{G(x)} + y\,G'(x)e^{G(x)}$ and $G'(x) = g(x)$. That is neat, since we are working on differential equations. Rewrite the equation as

$${\left(ye^{G(x)} \right)}' = h(x)e^{G(x)}$$

Now you can treat this as you would treat A. Namely, turn $y'$ into $\frac{dy}{dx}$ (see my seminar writeups for another example):

$$\frac{dy}{dx} = h(x) \iff dy = h(x) \, dx \iff \int dy = \int h(x) \, dx \iff y = H(x) + C$$

Applying this to the equation we got, we get

$$\frac{d\left(ye^{G(x)} \right)}{dx} = h(x)e^{G(x)} \iff d\left(ye^{G(x)} \right) = h(x)e^{G(x)} \, dx \iff \int d\left(ye^{G(x)} \right) = \int h(x)e^{G(x)} \, dx \iff ye^{G(x)} = \int h(x)e^{G(x)} \, dx + C$$

Now you can divide both sides by $e^{G(x)}$ and get an expression for y. Insert whatever required values you were given for y and x to find out C, etc.
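
A sketch with sympy's dsolve on a made-up instance, y' + 2y = x (my example, not from the text). Here g(x) = 2, so G(x) = 2x and the integrating factor is e^(2x):

    from sympy import symbols, Function, Eq, dsolve

    x = symbols('x')
    y = Function('y')

    ode = Eq(y(x).diff(x) + 2*y(x), x)
    print(dsolve(ode, y(x)))
    # Eq(y(x), C1*exp(-2*x) + x/2 - 1/4)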

Fundamental Theorem of Calculus

Proves the useful evaluation formula (insättningsformeln, PB 298), which is just the basic rule for how to integrate over a definite interval: $\int_a^b f(x)\,dx = F(b) - F(a)$, where F is an antiderivative of f.

Can also be used for (PB 298)

Optimization

Finding the "best" solution to a problem, like the maximum payoff or minimal material use. Involves finding asymptotes, zeros of derivatives, etc.

Continuity

A function is continuous at a point if it is neither undefined nor jumps to some unexpected value at exactly that point. Mathematically, the limit from both sides of the point equals the value of the function there: $\lim_{x \to a} f(x) = f(a)$.

Differentiability

Standard derivatives

The derivatives of common functions are as follows:

f(x)     | f'(x)
sin x    | $\cos x$
cos x    | $-\sin x$
tan x    | $\frac{1}{\cos^2 x}, \quad x \ne \frac{\pi}{2} + n\pi$
cot x    | $-\frac{1}{\sin^2 x}, \quad x \ne n\pi$
arcsin x | $\frac{1}{\sqrt{1-x^2}}$
arccos x | $-\frac{1}{\sqrt{1-x^2}}$
arctan x | $\frac{1}{1+x^2}$
arccot x | $-\frac{1}{1+x^2}$
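
The whole table can be verified symbolically (a sketch; it relies on sympy's simplification to match the $\frac{1}{\cos^2 x}$ and $\frac{1}{\sin^2 x}$ forms):

    from sympy import (symbols, diff, simplify, sqrt,
                       sin, cos, tan, cot, asin, acos, atan, acot)

    x = symbols('x')
    table = [
        (sin(x),   cos(x)),
        (cos(x),  -sin(x)),
        (tan(x),   1/cos(x)**2),
        (cot(x),  -1/sin(x)**2),
        (asin(x),  1/sqrt(1 - x**2)),
        (acos(x), -1/sqrt(1 - x**2)),
        (atan(x),  1/(1 + x**2)),
        (acot(x), -1/(1 + x**2)),
    ]
    for f, expected in table:
        assert simplify(diff(f, x) - expected) == 0, f
    print("all rows check out")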

Standard integrals

Implicit differentiation

When an equation relates x and y without giving y explicitly, you can still differentiate both sides with respect to x, treating y as an unknown function of x and applying the chain rule; this is called implicit differentiation. Nothing to be scared of. For example, differentiating $x^2 + y^2 = 1$ gives $2x + 2y\,y' = 0$, so $y' = -\frac{x}{y}$.
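
A sketch of that same computation in sympy, treating y as an unknown function y(x):

    from sympy import symbols, Function, diff, solve

    x = symbols('x')
    y = Function('y')

    # Differentiate x^2 + y(x)^2 - 1 = 0 and solve for y'
    expr = x**2 + y(x)**2 - 1
    print(solve(diff(expr, x), y(x).diff(x)))   # [-x/y(x)]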

Absolute value

Taking the absolute value of a complex number gets rid of the "i": $|a + bi| = \sqrt{a^2 + b^2}$, a real number.

Absolute equation

Consider the graph of an absolute-value linear function like |x - 1|. It has two distinct segments; the bar notation is what gives it that V shape. It's not one smooth function, it's two, depending on which x you're at.

So to rewrite |x - 1| without bar notation, make cases for different x. The quick way is to see where it becomes zero, which is at x = 1. For the case x ≥ 1, you can just remove the bars: x - 1. For the case x < 1, also flip the sign: -(x - 1) = 1 - x.
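
The two cases can be checked against Python's built-in abs (a minimal sketch; the function name is mine):

    def abs_x_minus_1(x):
        """|x - 1| written without bar notation, as two cases."""
        return x - 1 if x >= 1 else 1 - x

    for x in (-3.0, 0.5, 1.0, 4.2):
        assert abs_x_minus_1(x) == abs(x - 1)
    print("piecewise form matches abs()")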


It's easier with complex numbers. Consider the case of having a complex number inside bar notation, like |a + bi|. Rewriting it as a vector gives us a circle around the origin, any point of which may be touched by the vector, but which point is irrelevant: we want only the radius r in the polar form $re^{i\theta}$.

Now consider the case of some unknown complex number z, plus other stuff: |z + 1 + i|. The origin of the "vector" z will now be at -1 - i, so this circle is centred off-origin.

Example solving the above: Set z := x + yi. Say its absolute value is 2. Write the equation.

$$2 = |z + 1 + i| = |(x + 1) + (y + 1)i| = \sqrt{(x+1)^2 + (y+1)^2}$$

The last form can be recognized as Pythagoras.

After squaring, the equation is that of a circle with radius 2 centred at (-1, -1):

$$\iff 4 = (x + 1)^2 + (y+1)^2$$
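
A numeric check that the solutions really form that circle (a sketch using Python's complex numbers):

    import cmath

    center = -1 - 1j   # |z + 1 + i| = 2 puts z at distance 2 from -1 - i
    for k in range(8):
        z = center + 2 * cmath.exp(1j * cmath.pi * k / 4)
        assert abs(abs(z + 1 + 1j) - 2) < 1e-12
    print("all sampled z satisfy |z + 1 + i| = 2")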

[2024-04-22 Mon] Created Maclaurin/Taylor series

Standard expansions (standardutvecklingar)

In these cases, θ is a number between 0 and 1 that depends on x and n.

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots + \frac{x^n}{n!} + R_{n+1}(x), \quad \text{where } R_{n+1}(x) = \frac{e^{\theta x}}{(n+1)!} x^{n+1}$$

$$\ln(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \ldots$$

$$(1 + x)^{\alpha} = 1 + \alpha x + \frac{\alpha(\alpha - 1)}{2!}x^2 + \frac{\alpha(\alpha - 1)(\alpha - 2)}{3!}x^3 + \ldots$$

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \ldots + (-1)^{n-1} \frac{x^{2n-1}}{(2n-1)!} + R_{2n+1}(x), \quad \text{where } R_{2n+1}(x) = (-1)^n \frac{\cos(\theta x)}{(2n+1)!} x^{2n+1}$$

$$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \ldots$$

$$\arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \ldots$$

Proofs for the above (PB )
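
A quick look at how fast the $e^x$ expansion converges (partial sums only, remainder ignored; the helper name is mine):

    import math

    def exp_maclaurin(x, n):
        """Partial sum 1 + x + x^2/2! + ... + x^n/n!."""
        return sum(x**k / math.factorial(k) for k in range(n + 1))

    x = 0.5
    for n in (2, 5, 10):
        print(n, exp_maclaurin(x, n), math.exp(x))
    # The partial sums converge quickly to e^0.5.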

Mean value theorem

Partial integration
