Nested Summation Tricks

Summations are fun, but they can be really tricky. Most often, the interesting properties come from the complexity of the terms defined inside the Sigma. What I became curious about are the properties of the Sigma Summation itself: what interesting properties will Summations have when they are multiplied by each other, or even nested inside one another?

That is what we are going to explore in this article: tricks and properties of Summation Multiplication and Nested Summations!

Before exploring our own properties, let us cover some of the more well-known ones:
$$\sum_{i=n}^m f(i) + \sum_{i=n}^mg(i) = \sum_{i=n}^m(f(i)+g(i))\\
\sum_{i=n}^mcf(i) = c\sum_{i=n}^mf(i)\\
\sum_{i=n}^m1 = m+1-n\\
\sum_{i=n}^mf(i) = \sum_{i=k}^mf(i) + \sum_{i=n}^{k-1}f(i)$$
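These identities are easy to check numerically. Here is a minimal sketch in plain Python, where the functions $f$, $g$ and the bounds are arbitrary choices purely for illustration:

```python
# Numerical spot-check of the four basic summation properties.
# f, g, and the bounds n, m, k, c are arbitrary illustrative choices.
f = lambda i: i * i
g = lambda i: 2 * i + 1
n, m, k, c = 2, 9, 5, 3

def S(h, lo, hi):
    """Sum of h(i) for i = lo..hi inclusive."""
    return sum(h(i) for i in range(lo, hi + 1))

# Linearity: the sum of two series equals the series of the sums
assert S(f, n, m) + S(g, n, m) == S(lambda i: f(i) + g(i), n, m)
# Constants factor out of the Summation
assert S(lambda i: c * f(i), n, m) == c * S(f, n, m)
# Summing 1 just counts the terms
assert S(lambda i: 1, n, m) == m + 1 - n
# Splitting the range at k
assert S(f, n, m) == S(f, k, m) + S(f, n, k - 1)
```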
With these simpler properties out of the way, let's begin with more exciting concepts:

Multiplication $$\sum_{i=n}^mf(i) \times \sum_{j = a}^bg(j)$$
Let's start off simple: what comes after addition?
Multiplication.

The best way to approach multiplication is to visualize it, similar to how a quadratic expansion is drawn geometrically. We can draw a rectangle with each side length representing one of the summations:

Visual Representation of Summation Multiplication

An interesting pattern can be noted from this: take a look at each column.
The first column can be summed as such:
$$f(n)g(a)+f(n)g(a+1)+f(n)g(a+2)+\dots+f(n)g(b-1)+f(n)g(b)$$ Notice the sequential pattern in $g(x)$. Using that pattern we can express this simply as $$f(n)\sum_{j=a}^bg(j)$$
Similarly, the second column is represented as
$$f(n+1)g(a)+f(n+1)g(a+1)+f(n+1)g(a+2)+\dots+f(n+1)g(b-1)+f(n+1)g(b)$$ which can be simplified into $$f(n+1)\sum_{j=a}^b g(j)$$
This pattern holds for every column, and there is one column for each $i$ from $n$ to $m$, that is, one for every term of $\sum_{i=n}^mf(i)$.
This means that the summation can be expressed as
$$ f(n)\sum_{j=a}^bg(j) + f(n+1)\sum_{j=a}^bg(j) + f(n+2)\sum_{j=a}^bg(j) + \dots + f(m-1)\sum_{j=a}^bg(j) + f(m)\sum_{j=a}^bg(j)  $$ which can be collected into a single Summation: $$\begin{align*} \sum_{i=n}^mf(i) \times \sum_{j=a}^bg(j) = & \sum_{i=n}^mf(i)\sum_{j=a}^bg(j)\\
\text{understanding that $f(i)$ is constant within the Summation,}\\
=& \sum_{i=n}^m\sum_{j=a}^bf(i)g(j)\end{align*}$$

By performing the same process onto the rows instead of the columns, we can also find that
$$\sum_{i=n}^mf(i) \times \sum_{j=a}^bg(j) = \sum_{j=a}^b\sum_{i=n}^m f(i)g(j)$$
This gives an interesting insight: multiplying two Summations is equivalent to nesting one Summation inside the other.
$$\sum_{i=n}^mf(i) \times \sum_{j=a}^bg(j) = \sum_{i=n}^m\sum_{j=a}^bf(i)g(j) = \sum_{j=a}^b\sum_{i=n}^mf(i)g(j) \\ \blacksquare$$
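The identity above is easy to confirm numerically. Below is a small sketch (plain Python; $f$, $g$ and the bounds are arbitrary examples) checking that the product of two sums equals the nested double sum in either nesting order:

```python
# Check that a product of two sums equals the nested double sum,
# with either summation on the outside.
# f, g, and the bounds n, m, a, b are arbitrary illustrative choices.
f = lambda i: i + 1
g = lambda j: j * j
n, m, a, b = 1, 6, 3, 8

product = sum(f(i) for i in range(n, m + 1)) * sum(g(j) for j in range(a, b + 1))
nested_ij = sum(f(i) * g(j) for i in range(n, m + 1) for j in range(a, b + 1))
nested_ji = sum(f(i) * g(j) for j in range(a, b + 1) for i in range(n, m + 1))

assert product == nested_ij == nested_ji
```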

Being able to nest Summations within one another, and to separate them back into two Summations, naturally leads to the following question:
What about Nested Summations that cannot be separated into two simple Summations?

Nested Dependent Summations
One example of such a Summation is a Nested Summation whose inner Summation depends on the outer one:
$$\sum_{i=1}^n\sum_{j=1}^if(j)$$ In this case, the Summation cannot simply be broken down into $$\sum_{i=1}^n1\times\sum_{j=1}^if(j)$$ because the variable $i$ must stay within the bounds of the first Summation.

Then how else can we make this Summation simpler?

Let's first start by expanding it term by term:
$$\begin{align*}
\sum_{i=1}^n\sum_{j=1}^if(j) =& \sum_{j=1}^1f(j) + \\
&\sum_{j=1}^2f(j)+\\
&\sum_{j=1}^3f(j)+\\
&\vdots \\
&\sum_{j=1}^{n-1}f(j)+\\
&\sum_{j=1}^nf(j)
\end{align*}$$ which can be further expanded to
$$\begin{align*}
\sum_{i=1}^n\sum_{j=1}^if(j) =&f(1) + \\
&f(1)+f(2)+\\
&f(1)+f(2)+f(3)+\\
&f(1)+f(2)+f(3)+f(4)+\\
&f(1)+f(2)+f(3)+f(4)+f(5)+\\&\vdots \\
&f(1)+f(2)+f(3)+f(4)+\dots+f(n-1)+\\
&f(1)+f(2)+f(3)+f(4)+\dots+f(n-1)+f(n)\\\end{align*}$$
Notice the triangle shape forming. We can recognize another pattern from this arrangement.
We can see that $f(1)$ appears in every row, so $n$ times, while $f(n)$ appears only once, in the last row. From this we can relate the index of a term to how many times it appears in the Summation: the term $f(i)$ appears $n+1-i$ times. This allows us to simplify the Summation as:
$$\sum_{i=1}^n\sum_{j=1}^if(j) = \sum_{i=1}^n(n+1-i)f(i)$$ With this we are able to reduce a Nested Summation to a simple single Summation! In computing terms, we reduced the work from $O(n^2)$ to simply $O(n)$!
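We can sanity-check this collapse numerically. The sketch below (plain Python; $f$ and $n$ are arbitrary illustrative choices) evaluates both sides: the nested form does quadratically many term evaluations, while the collapsed form does only linearly many:

```python
# The nested dependent sum  sum_{i=1..n} sum_{j=1..i} f(j)  collapses to
# the single sum  sum_{i=1..n} (n+1-i) f(i).
# The nested form evaluates O(n^2) terms; the collapsed form only O(n).
# f and n are arbitrary illustrative choices.
f = lambda j: j ** 3
n = 50

nested = sum(sum(f(j) for j in range(1, i + 1)) for i in range(1, n + 1))
collapsed = sum((n + 1 - i) * f(i) for i in range(1, n + 1))

assert nested == collapsed
```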

We can continue to generalize Nested Dependent Summations as follows:

Outer Initial Index
$$\begin{align*} \sum_{i=m}^n\sum_{j=1}^if(j) = & \sum_{i=1}^n\sum_{j=1}^if(j) - \sum_{i=1}^{m-1}\sum_{j=1}^if(j) \\
= & \sum_{i=1}^n (n+1-i)f(i) - \sum_{i=1}^{m-1}(m-i)f(i)
\end{align*}$$
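A quick numerical sanity check of the outer-initial-index formula (plain Python; $f$ and the bounds $m$, $n$ are arbitrary illustrative choices):

```python
# Check of the outer-initial-index formula:
#   sum_{i=m..n} sum_{j=1..i} f(j)
#     = sum_{i=1..n}(n+1-i)f(i) - sum_{i=1..m-1}(m-i)f(i)
# f and the bounds are arbitrary illustrative choices.
f = lambda j: 3 * j + 2
n, m = 11, 5

lhs = sum(sum(f(j) for j in range(1, i + 1)) for i in range(m, n + 1))
rhs = (sum((n + 1 - i) * f(i) for i in range(1, n + 1))
       - sum((m - i) * f(i) for i in range(1, m)))

assert lhs == rhs
```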
Inner Initial Index
$$\begin{align*}\sum_{i=1}^n\sum_{j=1}^if(j) =& \sum_{i=1}^n(\sum_{j=k}^if(j) + \sum_{j=1}^{k-1}f(j))\\
=&\sum_{i=1}^n\sum_{j=k}^if(j) + \sum_{i=1}^n\sum_{j=1}^{k-1}f(j)\\
=&\sum_{i=1}^n\sum_{j=k}^if(j) + \sum_{i=1}^n1\times\sum_{j=1}^{k-1}f(j)\\
=&\sum_{i=1}^n\sum_{j=k}^if(j) + n\times\sum_{j=1}^{k-1}f(j)
\\
\therefore \sum_{i=1}^n\sum_{j=k}^if(j) = & \sum_{i=1}^n\sum_{j=1}^if(j) - n\sum_{j=1}^{k-1}f(j) \\
=& \sum_{i=1}^n (n+1-i)f(i) - n\sum_{j=1}^{k-1}f(j)
\end{align*}$$
Combining these two together leads to:
Generalized Nested Dependent Summation
$$\begin{align*}\sum_{i=m}^n\sum_{j=k}^if(j) = & \sum_{i=m}^n(\sum_{j=1}^if(j) - \sum_{j=1}^{k-1}f(j))\\
= & \sum_{i=m}^n\sum_{j=1}^if(j) - \sum_{i=m}^n\sum_{j=1}^{k-1}f(j)\\
= & \sum_{i=m}^n\sum_{j=1}^if(j) - (n+1-m)\sum_{j=1}^{k-1}f(j)\\
= & \sum_{i=1}^n\sum_{j=1}^if(j) - \sum_{i=1}^{m-1}\sum_{j=1}^if(j) - (n+1-m)\sum_{j=1}^{k-1}f(j)\\
= & \sum_{i=1}^n(n+1-i)f(i) - \sum_{i=1}^{m-1}(m-i)f(i) - (n+1-m)\sum_{j=1}^{k-1}f(j)
\end{align*}$$
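Finally, here is a numerical spot-check of the fully generalized formula (plain Python; $f$ and the bounds $m$, $n$, $k$ are arbitrary illustrative choices):

```python
# Spot-check of the Generalized Nested Dependent Summation:
#   sum_{i=m..n} sum_{j=k..i} f(j)
#     = sum_{i=1..n}(n+1-i)f(i)
#       - sum_{i=1..m-1}(m-i)f(i)
#       - (n+1-m) * sum_{j=1..k-1} f(j)
# f and the bounds are arbitrary illustrative choices.
f = lambda j: 2 * j + 1
n, m, k = 12, 4, 3

lhs = sum(sum(f(j) for j in range(k, i + 1)) for i in range(m, n + 1))
rhs = (sum((n + 1 - i) * f(i) for i in range(1, n + 1))
       - sum((m - i) * f(i) for i in range(1, m))
       - (n + 1 - m) * sum(f(j) for j in range(1, k)))

assert lhs == rhs
```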

This general form is still consistent with our original formula even when $k=1$ and/or $m=1$. This is because $$\sum_{i=n}^{n-1}f(i) = 0$$ for every function $f$ and every real number $n$.
This leads to $$\begin{align*} \sum_{i=1}^n\sum_{j=1}^if(j) = & \sum_{i=1}^n(n+1-i)f(i) -\sum_{i=1}^{1-1}(1-i)f(i) - n\sum_{j=1}^{1-1}f(j) \\
= & \sum_{i=1}^n(n+1-i)f(i) - 0 - n\cdot 0\\
= & \sum_{i=1}^n(n+1-i)f(i)
\end{align*}$$ The reason why $\sum_{i=n}^{n-1}f(i) = 0$ will be discussed further in my Sequence Curve series. The short answer is that if $\sum_{i=n}^mf(i) = \int_{n-1}^mȢf(x) dx$, then $\sum_{i=n}^{n-1}f(i) = \int_{n-1}^{n-1}Ȣf(x)dx = 0$.

Of course, there are countless more questions that we haven't explored in this short article, such as how we can simplify the slightly more complicated Nested Summation $$\sum_{i=n}^m\sum_{j=a}^bf(i)g(j)$$ or even just how $$\sum_{i=n}^mf(i)g(i)$$ relates to $\sum_{i=n}^mf(i)$ and $\sum_{i=n}^mg(i)$ individually. But I hope that starting to ask and think about these topics has at least begun the journey toward answering them.
As I explore further and discover more interesting results about nested Summations, I will post them on this blog!
