
Curvature in Terms of r', r'', and Their Magnitudes

This article derives a simpler formula for the curvature in terms of the easy-to-compute quantities $r'$, $r''$, $|r'|$, and $|r''|$.

Let the curve $C$ be defined by $r(t)$ such that

$$r:[a,b] \subseteq \mathbb{R}  \rightarrow \mathbb{R}^n $$

Then we can define the Tangent, Normal, and Binormal unit vectors to be $$\begin{align*} T &= \frac{r'}{|r'|} \\  \\
T' &= \left( \frac{r'}{|r'|} \right)' \\
&= \frac{|r'| r'' - |r'|' r'}{|r'|^2} \\
|T'| &= \frac{\left| \, |r'| r'' - |r'|' r' \, \right|}{|r'|^2} \\ \\
N &= \frac{T'}{|T'|} \\
&= T' \frac{1}{|T'|} \\
&= \frac{|r'| r'' - |r'|' r'}{|r'|^2} \, \frac{|r'|^2}{\left| \, |r'| r'' - |r'|' r' \, \right|} \\
&= \frac{|r'| r'' - |r'|' r'}{\left| \, |r'| r'' - |r'|' r' \, \right|} \\ \\
B &= T \times N \\
&= \frac{r'}{|r'|} \times \frac{|r'| r'' - |r'|' r'}{\left| \, |r'| r'' - |r'|' r' \, \right|} \\
&= \frac{1}{|r'|} \frac{1}{\left| \, |r'| r'' - |r'|' r' \, \right|} \left( r' \times (|r'| r'' - |r'|' r') \right), \text{ factoring out scalars} \\
&= \frac{1}{|r'|} \frac{1}{\left| \, |r'| r'' - |r'|' r' \, \right|} \left( |r'| \, r' \times r'' - |r'|' \, r' \times r' \right), \text{ distributing the cross product} \\
&= \frac{1}{|r'|} \frac{1}{\left| \, |r'| r'' - |r'|' r' \, \right|} \left( |r'| \, r' \times r'' \right), \text{ since } r' \times r' = 0 \\
&= \frac{r' \times r''}{\left| \, |r'| r'' - |r'|' r' \, \right|}
\end{align*}$$
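As a sanity check, the frame above can be evaluated numerically. The sketch below builds $T$, $N$, and $B$ by central differences and confirms they are unit vectors, that $T \perp N$, and that $T \times N$ agrees with the closed form $\frac{r' \times r''}{\left| |r'| r'' - |r'|' r' \right|}$. The helix $r(t) = (\cos t, \sin t, t)$, the point $t_0 = 0.7$, and the step size $h$ are all illustrative assumptions, not part of the derivation.

```python
import math

# Numerical check of the T, N, B formulas on the helix r(t) = (cos t, sin t, t).
# The curve, the point t0, and the step size h are illustrative assumptions.

def r(t):
    return (math.cos(t), math.sin(t), t)

h = 1e-3

def deriv(f, t):
    """Central-difference derivative of a vector-valued function."""
    a, b = f(t + h), f(t - h)
    return tuple((x - y) / (2 * h) for x, y in zip(a, b))

def dot(u, v):   return sum(x * y for x, y in zip(u, v))
def norm(v):     return math.sqrt(dot(v, v))
def scale(c, v): return tuple(c * x for x in v)
def sub(u, v):   return tuple(x - y for x, y in zip(u, v))
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

t0 = 0.7
r1 = deriv(r, t0)                       # r'
r2 = deriv(lambda t: deriv(r, t), t0)   # r''
speed  = norm(r1)                       # |r'|
dspeed = deriv(lambda t: (norm(deriv(r, t)),), t0)[0]  # |r'|'

T = scale(1 / speed, r1)
num = sub(scale(speed, r2), scale(dspeed, r1))   # |r'| r'' - |r'|' r'
N = scale(1 / norm(num), num)
B = cross(T, N)
B_closed = scale(1 / norm(num), cross(r1, r2))   # (r' x r'') / | |r'| r'' - |r'|' r' |

assert abs(norm(T) - 1) < 1e-4 and abs(norm(N) - 1) < 1e-4
assert abs(dot(T, N)) < 1e-4            # T and N are orthogonal
assert norm(sub(B, B_closed)) < 1e-4    # both expressions for B agree
print("Frenet frame checks passed")
```

The tolerances are loose because nested central differences are only accurate to roughly $h^2$.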

Notice that $$|T| = 1 \\ \therefore |T|^2 = T\cdot T = 1$$

$$\begin{align*}
\because T \cdot T &= 1 \\
\therefore (T\cdot T)'  &= 0\\
T'\cdot T + T\cdot T' &= 0\\
2T\cdot T'  &= 0\\
T\cdot \frac{T'}{| T' |} &= 0\\
\therefore T\cdot N &= 0 
\end{align*}$$ This shows that $T$ and $N$ are orthogonal. 

Since $T \perp N$, we have $\sin(\theta_{T,N}) = 1$: $$\therefore |B| = \sin(\theta_{T,N})|T||N| = |T||N| = 1$$

$$\begin{align*}
\because |B| &= \frac{ \left| r' \times r'' \right| }{ \left| \, |r'| r'' - |r'|' r' \, \right| } = 1 \\ \\
\therefore \left| \, |r'| r'' - |r'|' r' \, \right| &= \left| r' \times r'' \right| \\
&= \sin(\theta_{r', r''})|r'||r''|
\end{align*}$$

The last step follows from the definition of the cross product, where $\theta_{r', r''}$ is the angle between the vectors $r'$ and $r''$.

Dividing both sides by $|r'|$,

$$\begin{align*}
\therefore \sin(\theta_{r', r''})|r''| &= \left| \, r'' - |r'|' \frac{r'}{|r'|} \, \right| \\
&= \left| \, r'' - |r'|' T \, \right|
\end{align*}$$

Arclength is defined as $$s(t) = \int_{a}^t |r'(u)| \, du \\ \therefore s' = |r'|$$ by the Fundamental Theorem of Calculus.

Curvature is originally defined as $$\begin{align*} 
\kappa &= \left | \frac{dT}{ds} \right | \\
&= \left | \frac{dT}{dt} \frac{dt}{ds}  \right | \\
&= \left | \frac{T'}{s'} \right | \\
\therefore \kappa &= \frac{ \left | T' \right | }{ \left | {r}' \right | }
\end{align*}$$
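The definition $\kappa = \frac{|T'|}{|r'|}$ can itself be checked numerically. Below is a minimal finite-difference sketch, again assuming the helix $r(t) = (\cos t, \sin t, t)$, whose curvature is known to be $\frac{a}{a^2 + b^2} = \frac{1}{1^2 + 1^2} = \frac{1}{2}$ at every point.

```python
import math

# Finite-difference check of kappa = |T'| / |r'| on the helix
# r(t) = (cos t, sin t, t), whose curvature is 1/2 everywhere.
# The curve, t0, and h are illustrative assumptions.

def r(t):
    return (math.cos(t), math.sin(t), t)

h = 1e-3

def deriv(f, t):
    """Central-difference derivative of a vector-valued function."""
    a, b = f(t + h), f(t - h)
    return tuple((x - y) / (2 * h) for x, y in zip(a, b))

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def T(t):
    """Unit tangent T = r' / |r'|."""
    v = deriv(r, t)
    n = norm(v)
    return tuple(x / n for x in v)

t0 = 0.7
kappa = norm(deriv(T, t0)) / norm(deriv(r, t0))
print(kappa)  # should be close to the known value 0.5
```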

We can express the second derivative of the curve in terms of the quantities above.

$$\begin{align*}
r'' &= (r')' \\
&= \left( |r'|T \right)' \\
&= |r'|' T + |r'| T' \\
&= |r'|' T + |r'| |T'| \frac{T'}{|T'|} \\
&= |r'|' T + |r'| |T'| N \\
&= |r'|' T + |r'| ^2\frac{|T'|}{|r'|} N \\
r'' &= |r'|' T + |r'| ^2\kappa N
\end{align*}$$

$$\therefore |r'| ^2\kappa N = r'' - |r'|' T $$
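This tangential/normal decomposition can be verified on a concrete curve. The sketch below checks, for the assumed helix $r(t) = (\cos t, \sin t, t)$ with known curvature $\kappa = \frac{1}{2}$, that $\left| r'' - |r'|' T \right| = |r'|^2 \kappa$, using the standard identity $|r'|' = \frac{r' \cdot r''}{|r'|}$ for the derivative of the speed.

```python
import math

# Check that r'' - |r'|' T has length |r'|^2 * kappa on the helix
# r(t) = (cos t, sin t, t), whose curvature kappa is 1/2 everywhere.
t = 0.7
r1 = (-math.sin(t), math.cos(t), 1.0)   # r'  (exact derivatives of the helix)
r2 = (-math.cos(t), -math.sin(t), 0.0)  # r''

def dot(u, v): return sum(x * y for x, y in zip(u, v))
def norm(v):   return math.sqrt(dot(v, v))

speed  = norm(r1)
dspeed = dot(r1, r2) / speed            # |r'|' = (r' . r'') / |r'|
T = tuple(x / speed for x in r1)

lhs = tuple(y - dspeed * x for x, y in zip(T, r2))  # r'' - |r'|' T
assert abs(norm(lhs) - speed**2 * 0.5) < 1e-12      # |r'|^2 * kappa with kappa = 1/2
print("decomposition check passed:", norm(lhs))
```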

Remember that $$\begin{align*}
\sin(\theta_{r', r''}) |r''| &= \left |  r'' - |r'|' T \right | \\
&= \left | |r'|^2 \kappa N \right | \\
&= |r'|^2 \kappa
\end{align*}$$

So we can redefine the curvature as

$$\kappa = \frac{ \sin(\theta_{r' , r''} ) |r''| }{ |r'|^2 }$$

where $\theta_{r',r''} \in [0,\pi]$ is the angle between the two vectors $r'$ and $r''$.

Notice that because $\theta_{r',r''} \in [0,\pi]$,
$$\sin(\theta_{r',r''}) \geq 0$$  We also know from the properties of the dot product and the Pythagorean identity that $$\begin{align*}
\because \cos(\theta_{r',r''}) &= \frac{r'\cdot r''}{|r'||r''|} \\
\because \sin^2(\theta) + \cos^2(\theta) &= 1 \\ \\
\therefore \sin^2(\theta_{r',r''}) &= 1 - \cos^2(\theta_{r',r''}) \\
&= 1 - \left( \frac{ r'\cdot r'' }{ |r'||r''| } \right)^2
\end{align*}$$

Notice that because $\sin(\theta_{r',r''}) \geq 0$, we can solve for $\sin(\theta_{r',r''})$ by taking the square root of both sides. $$\sin(\theta_{r',r''})  = \sqrt{1 - \left( \frac{ r'\cdot r'' }{ |r'||r''| } \right)^2}$$

Finally, we find that

$$\begin{align*}
\kappa &= \frac{ |r''| }{ |r'|^2 } \sqrt{1 - \left( \frac{ r'\cdot r'' }{ |r'||r''| } \right)^2} \\ \\
&= \frac{1}{|r'|^3} \sqrt{ (|r'||r''|)^2 - (r' \cdot r'')^2 } 
\end{align*}$$ QED
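As a final check, the formula can be evaluated on a curve with known curvature. Note that by Lagrange's identity, $(|r'||r''|)^2 - (r' \cdot r'')^2 = |r' \times r''|^2$, so the result agrees with the classical formula $\kappa = \frac{|r' \times r''|}{|r'|^3}$. The sketch below again assumes the helix $r(t) = (\cos t, \sin t, t)$, whose curvature is $\frac{1}{2}$ everywhere.

```python
import math

# Evaluate kappa = sqrt((|r'||r''|)^2 - (r'.r'')^2) / |r'|^3 on the helix
# r(t) = (cos t, sin t, t), whose curvature is 1/2 at every point.
t = 0.7
r1 = (-math.sin(t), math.cos(t), 1.0)   # r'  (exact derivatives of the helix)
r2 = (-math.cos(t), -math.sin(t), 0.0)  # r''

def dot(u, v): return sum(x * y for x, y in zip(u, v))
def norm(v):   return math.sqrt(dot(v, v))

kappa = math.sqrt((norm(r1) * norm(r2))**2 - dot(r1, r2)**2) / norm(r1)**3
assert abs(kappa - 0.5) < 1e-12
print("curvature check passed:", kappa)
```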
