
Quick Generation of a Basis for Orthogonal Complement of an Image

The search for a basis for the orthogonal complement of a subspace has often involved pure guesswork or many tedious computations of removing orthogonal projections from a known basis for the entire vector space. Here we propose a faster alternative method of producing a basis for the orthogonal complement of the image of a matrix. 
Let $A$ be an $n \times m$ matrix over $\mathbb{C}$. Such a matrix maps from $\mathbb{C}^m$ to $\mathbb{C}^n$.
$$A:\mathbb{C}^m \rightarrow \mathbb{C}^n$$ (The choice of the field $\mathbb{C}$ allows us to use the standard inner product $<\cdot, \cdot>$ to define orthogonality. This, of course, also works for $\mathbb{R}$.)
Then its image $im\,A$ and the orthogonal complement of the image $im\,A^\perp$ are both subspaces of the codomain $\mathbb{C}^n$ such that
$$\mathbb{C}^n = im\, A \oplus im\, A^\perp$$

A Basis for Image of $A$

Our first task will be to determine a basis for the image. I have already discussed this in a separate lengthy post, but we will summarize all the tools we need again.
The matrix $A$ can be represented by its column vectors:
$$A = \begin{bmatrix} \mid & & \mid \\ v_1 & \dots & v_m \\ \mid & & \mid \end{bmatrix}$$ With this representation, it is easy to see that the image is spanned by these vectors $v_1 \dots v_m$. However, as these vectors are not necessarily linearly independent, they do not yet form a basis for $im\, A$. 
$$im\, A = span(v_1 \dots v_m)$$ We must remove the linearly dependent vectors to form a basis. This reduction of linear dependence is exactly what taking the Reduced Row Echelon Form (RREF) does well: the RREF of a matrix reduces its rows to linearly independent rows with pivots, plus zero rows. To utilize this property of RREF, we turn each column of $A$ into a row by taking the transpose of $A$.
$$RREF(A^\intercal) = RREF(\begin{bmatrix} - & v_1 & -\\ & \vdots & \\ -& v_m & - \end{bmatrix} )= \begin{bmatrix} -& u_1 & -\\ & \vdots & \\ - & u_k & - \\ -& 0 & - \\ & \vdots & \end{bmatrix}  $$ We will obtain $k$ independent vectors $u_i$ and $(m-k)$ zero rows. Each $u_i$ contains a pivot in a distinct column, which guarantees their linear independence. Furthermore, since the RREF is computed using only elementary row operations, we are certain we did not 'lose' any of the $v_i$ vectors, meaning the $u_i$'s still span the same image $im\, A$. 
So $\{u_1 \dots  u_k\}$ forms a basis for the $k$-dimensional $im\, A$:
$$im\, A = span(u_1\dots u_k)$$ 
Let's internalize this with a quick example: Consider this $3\times 4$ matrix:  
$$A = \begin{bmatrix}1 & 1 & i & i \\ 0 & 1 & 0 & i \\ 0 & i & 0 & -1 \end{bmatrix}$$ Though the columns sit in 3D space, it is easy to see that the image is only 2-dimensional, as the third and fourth columns are multiples of the first two. Let's confirm this:
$$
\begin{align*} 
RREF(A^\intercal) &= RREF(\begin{bmatrix}
1 & 1 & i & i \\
0 & 1 & 0 & i \\ 
0 & i & 0 & -1 
\end{bmatrix}^\intercal) \\

&= RREF(\begin{bmatrix} 
1 & 0 & 0 \\ 
1 & 1 & i \\
i & 0 & 0 \\
i & i & -1 \end{bmatrix})   \\

&= \begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & i \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\end{align*} $$ So we can confirm that the image is spanned by merely two vectors:
$$\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ i \end{bmatrix}\}$$
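For readers who want to experiment, here is a minimal sketch of this step, assuming SymPy (which this post does not otherwise rely on): row-reduce $A^\intercal$ and read off its nonzero rows as a basis for $im\,A$.

```python
# A minimal sketch, assuming SymPy: the nonzero rows of RREF(A^T) form a basis of im A.
from sympy import Matrix, I

A = Matrix([[1, 1, I, I],
            [0, 1, 0, I],
            [0, I, 0, -1]])

R, pivot_cols = A.T.rref()                           # RREF of the transposed matrix
basis = [R.row(r) for r in range(len(pivot_cols))]   # the nonzero rows u_1 ... u_k
for u in basis:
    print(u)   # Matrix([[1, 0, 0]]) and Matrix([[0, 1, I]])
```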

A Basis for Orthogonal Complement of Image

Now that we have found a basis for $im\, A$, we must generate a basis for its orthogonal counterpart. 
We can try to interpret orthogonality geometrically: orthogonal vectors are perpendicular, each covering a different 'direction' in the vector space. We can characterize this sense of 'direction' using the pivots. 
A pivoted vector controls the component where its pivot is present. For example, in the standard basis, each vector has a single pivot and zeroes elsewhere, so each standard vector controls a single direction with its pivot. This holds true for any pivoted basis: each vector controls the single direction of its pivot, and the remaining components simply follow as a byproduct. 
Since orthogonal vectors are perpendicular to each of these pivoted vectors, they must point in directions where pivots are not present.  
To better summarize this directionality, let us organize the vectors computed by the RREF into a square upper triangular matrix where all the pivots align with the diagonal. We do so by placing each $u_i$ in the row of its pivot and putting zero rows elsewhere. 
$$\begin{bmatrix} c_{1,1} & c_{1,2} & \dots & c_{1,n} \\ 
0 & c_{2,2} & \dots & c_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & c_{n,n} \end{bmatrix}$$ where $c_{i,i}$ is 1 if the $i$th row contains a pivot, and 0 if it is a zero row. 
This construction gives us two useful properties:   
• If the $i$th column has a pivot, then all other elements in the $i$th column are zero, because of the RREF and the zero rows.
$$c_{i,i} = 1 \Rightarrow c_{j,i} = 0 \text{ for all } 1\leq j \leq n, \, j\neq i$$
• If the $i$th column does not have a pivot, then it must have come from a zero row, so all elements in the $i$th row are zero.
$$c_{i,i} = 0 \Rightarrow c_{i,j} = 0 \text{ for all } 1\leq j \leq n, \, j\neq i$$ These properties will be useful soon enough. 
Consider this example:
$$ A = \begin{bmatrix} 1 & 0 & 0 & 1 \\ i & 0 & 0 & i \\ 0 & 1 & 0 & 1 \\ -1 & i & 0 & i-1 \\ 0 & 0 & 1 & 1 \end{bmatrix} $$ The RREF of its transpose is 
$$RREF(A^\intercal) = \begin{bmatrix} 1 & i & 0 & -1 & 0 \\ 0 & 0 & 1 & i & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0\end{bmatrix}$$ So we see the $im\,A$ is spanned by these three vectors:
$$\begin{bmatrix} 1 \\ i \\ 0 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \\ i \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$ We now need to organize them into the square matrix with each pivot aligned on the diagonal: 
$$\begin{bmatrix} 1 & i & \color{blue}0 & -1 & \color{blue}0 \\  \color{blue} 0 &\color{red} 0 &\color{blue} 0 & \color{red}0 & \color{blue}0 \\ \color{blue}0 & 0 & 1 & i & \color{blue}0 \\ \color{blue}0 & \color{red}0 & \color{blue}0 & \color{red}0 & \color{blue}0 \\ \color{blue}0 & 0 & \color{blue}0 & 0  & 1 \end{bmatrix}$$ Furthermore, notice that the two properties hold on this square matrix: where there is a 1 on the diagonal, the column is all zeroes, and where there is a 0 on the diagonal, the row is all zeroes.
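This bookkeeping step is easy to script. A small sketch, assuming SymPy: place each nonzero RREF row at the row index of its pivot column inside an $n \times n$ zero matrix.

```python
# A sketch, assuming SymPy, of building the n x n upper triangular matrix:
# place each RREF row u_i at the row of its pivot, leaving zero rows elsewhere.
from sympy import Matrix, I, zeros

A = Matrix([[ 1, 0, 0, 1],
            [ I, 0, 0, I],
            [ 0, 1, 0, 1],
            [-1, I, 0, I - 1],
            [ 0, 0, 1, 1]])

R, pivot_cols = A.T.rref()            # pivot_cols = (0, 2, 4) for this example
n = A.rows                            # dimension of the codomain C^n
square = zeros(n, n)
for row_index, pivot_col in enumerate(pivot_cols):
    square[pivot_col, :] = R[row_index, :]
print(square)                         # matches the 5 x 5 matrix shown above
```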

As explained above, we want to 'cover the directions NOT covered by pivots'. We can do so by adding a 'copivot' to each unpivoted direction. To generate a copivoted vector, take the complex conjugate of an unpivoted column, then place $-1$ in the diagonal entry of that column.
Seeing that there are $k$ pivots in the $n\times n$ square matrix, we will be able to generate $m = n-k$ copivoted vectors $w_i$:
$$w_1\dots w_m$$
Let us use the same example as above. We have the square upper triangular matrix
$$\begin{bmatrix} 1 & i & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & i & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0  & 1 \end{bmatrix}$$ We see that we have 2 columns without a pivot:
$$\begin{bmatrix} i \\ {\color{red} 0} \\ 0 \\ 0 \\ 0  \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ i \\ {\color{red} 0} \\ 0  \end{bmatrix}$$ The zeroes on the diagonal are colored red to keep track of which entries lie on the diagonal. 
To generate copivoted vectors from these, we first take their complex conjugates, then place $-1$ on the diagonal zeroes. We get:
$$\begin{bmatrix} -i \\ {\color{red} {-1}} \\ 0 \\ 0 \\ 0  \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ -i \\ {\color{red} {-1}} \\ 0  \end{bmatrix}$$ Notice that, combined with the basis for the image, the 5 vectors cover all 5 directions with their pivots and copivots:
$$\begin{bmatrix}{\color{red} 1} \\ i  \\ 0 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} -i \\ {\color{red} {-1}} \\ 0 \\ 0 \\ 0  \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ {\color{red} 1} \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ -i \\ {\color{red} {-1}} \\ 0  \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ {\color{red} 1} \end{bmatrix}$$
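Putting the two steps together, here is a sketch of the whole construction, assuming SymPy; `copivot_basis` is simply a name chosen here for illustration, not anything from this post.

```python
# A sketch, assuming SymPy, of the full construction. `copivot_basis` is a
# hypothetical helper name, not the author's.
from sympy import Matrix, I, zeros

def copivot_basis(A):
    """Return copivoted vectors spanning the orthogonal complement of im A."""
    R, pivot_cols = A.T.rref()
    n = A.rows
    square = zeros(n, n)
    for row_index, pivot_col in enumerate(pivot_cols):
        square[pivot_col, :] = R[row_index, :]
    basis = []
    for col in range(n):
        if col not in pivot_cols:             # every unpivoted column gives one w_i
            w = square[:, col].conjugate()    # complex-conjugate the column...
            w[col] = -1                       # ...and place -1 on its diagonal entry
            basis.append(w)
    return basis

A = Matrix([[ 1, 0, 0, 1],
            [ I, 0, 0, I],
            [ 0, 1, 0, 1],
            [-1, I, 0, I - 1],
            [ 0, 0, 1, 1]])

for w in copivot_basis(A):
    print(w.T)    # (-I, -1, 0, 0, 0) and (-1, 0, -I, -1, 0), as above
```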
We will now prove that the two copivoted vectors form a basis for the orthogonal complement of the image.

We can show that these $w_i$'s form a basis for the orthogonal complement of the image of $A$:
$$im\,A ^\perp = span(w_1 \dots w_m)$$ We will do so in 3 parts: 
1. Linear Independence of Copivoted Vectors
2. Orthogonality of Copivoted Vectors to the Image of $A$
3. Spanning of Orthogonal Complement by Copivoted Vectors


1. Linear Independence of Copivoted Vectors

The argument for the $w_i$'s linear independence follows almost the same lines as the argument for the pivoted vectors' linear independence.
First notice that each copivot is placed in a different row by definition. This means that from a given set of copivoted vectors, we can always choose the vector with its copivot on the lowest row. 
The proof is done inductively:
Base Case: Single Copivoted Vector.
This is trivial, as a set of a single vector is always linearly independent. 
Inductive Case: $m > 1$ Copivoted Vectors.
Let $w_1 \dots w_m$ be a set of copivoted vectors ordered such that $w_j$ has its copivot on a lower row than $w_i$ iff $i < j$. Assume also, by the induction hypothesis, that $w_1 \dots w_{m-1}$ are linearly independent. 
To show independence, let $c_1 \dots c_m \in \mathbb{C}$ be constants such that
$$c_1 w_1 + \dots + c_{m-1}w_{m-1} + c_m w_m = 0$$ Let $w_m$ have the lowest copivot, on the $j$th row. Notice that, because each $w_i$ comes from a column of an upper triangular matrix, $w_m$ alone has $-1$ on the $j$th row, while all the other vectors have zeroes there. So the $j$th component of the sum is $-c_m$. 
Because the sum must equal the zero vector, the $j$th component of the sum must also equal zero, which is only possible if
$$c_m = 0$$ So the sum further reduces to 
$$c_1 w_1 + \dots + c_{m-1}w_{m-1} + c_m w_m = c_1 w_1 +\dots + c_{m-1}w_{m-1} = 0$$ and by the induction hypothesis $w_1 \dots w_{m-1}$ are linearly independent, meaning
$c_1 = \dots =c_{m-1} = 0$
Thus every coefficient $c_{i} = 0$, proving the linear independence of the $m$ copivoted vectors $w_1 \dots w_m$. 
QED
Let's see this play out for the two copivoted vectors from the example above. Let $c_1, c_2$ be such that
$$\begin{align*}c_1 \begin{bmatrix} -i \\ {\color{red} {-1}} \\ 0 \\ 0 \\ 0  \end{bmatrix} + c_2\begin{bmatrix} -1 \\ 0 \\ -i \\ {\color{red} {-1}} \\ 0  \end{bmatrix}   &= \begin{bmatrix}  -c_1 i - c_2 \\ {\color{red} {-c_1}} \\ -c_2 i \\ {\color{red} {-c_2} } \\ 0 \end{bmatrix} &= \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \end{align*} $$ So $$\therefore c_1 = c_2 = 0$$ proving their independence. 
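As a quick sanity check, assuming SymPy, the two copivoted vectors stacked as columns give a matrix of rank 2, confirming their independence.

```python
# A quick check, assuming SymPy: the two copivoted vectors stacked as columns
# have rank 2, i.e. they are linearly independent.
from sympy import Matrix, I

W = Matrix.hstack(Matrix([-I, -1, 0, 0, 0]),
                  Matrix([-1, 0, -I, -1, 0]))
print(W.rank())   # 2
```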

2. Orthogonality of Copivoted Vectors to the Image of $A$

We have seen that the image is spanned by the basis $v_1 \dots v_k$ (the rows of the square matrix from which the $w_i$'s are generated). So, if every copivoted $w_i$ is orthogonal to each $v_j$, we will know that each $w_i$ lies in the orthogonal complement of $im\,A$. 
$$w_i \in im\, A^\perp,\forall w_i$$ So let us compute the inner product of an arbitrary pivoted $v_j$ and copivoted $w_i$. Let $v_j$ have come from the $\bar{j}$th row of the upper triangular square matrix, and let $w_i$ have been generated from the $\bar{i}$th column. This means we can write out their components using the entries $c_{i,j}$:
$$v_j = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ c_{\bar{j}, \bar{j} + 1} \\ \vdots \\ c_{\bar{j}, n} \end{bmatrix} \, w_i = \begin{bmatrix} \overline{c_{1, \bar{i}}} \\ \vdots \\  \overline{c_{\bar{i} - 1, \bar{i}}} \\ -1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$ First, since the copivoted vector is generated from a non-pivoted column of the square matrix, $\bar{i} \neq \bar{j}$. This leaves us two cases. 
 
Case 1: $\bar{i} < \bar{j}$
In this case, it is easy to see that the inner product is zero: $w_i$ has all zeroes below the $\bar{i}$th row, and $v_j$ has all zeroes above the $\bar{j}$th row, so in every component at least one of the two factors is zero. 
$$\bar{i} < \bar{j} \, \Longrightarrow \,<v_j, w_i> = 0$$ Case 2:  $\bar{i} > \bar{j}$
In this case, let us first expand the inner product
$$
\begin{align*}
<v_j, w_i> &= \sum_{k = 1}^{n} (v_j)_k \overline{(w_i)_k} \\
&= \sum_{k = \bar{j}}^{\bar{i}} (v_j)_k \overline{(w_i)_k}, \,\text{as components outside these bounds are zero} \\
&= 1\cdot c_{\bar{j}, \bar{i}} + \sum_{k = \bar{j} + 1} ^{\bar{i} - 1} c_{\bar{j}, k} {c_{k, \bar{i}}} + c_{\bar{j}, \bar{i}}\cdot (-1) \\
&= \sum_{k = \bar{j} + 1} ^{\bar{i} - 1} c_{\bar{j}, k} {c_{k, \bar{i}}} 
\end{align*}
$$ Now we can utilize the two properties we outlined above: 
$$c_{i,i} = 1 \Rightarrow c_{j,i} = 0 \text{ for all } j\neq i$$ $$c_{i,i} = 0 \Rightarrow c_{i,j} = 0 \text{ for all } j\neq i$$ For each $k$ in the sum, if $c_{k,k} = 1$, then $c_{\bar{j}, k} = 0$; and if $c_{k,k} = 0$, then ${c_{k, \bar{i}}} = 0$. In either case, each term of the summation is zero.  $$\therefore c_{\bar{j}, k} c_{k, \bar{i}} = 0$$ $$\therefore \bar{i} > \bar{j} \, \Longrightarrow \, <v_j, w_i>  = 0$$
So for every possible combination of $v_j$ and $w_i$, we see that $w_i$ is orthogonal to the basis of $im\, A$.
$$\therefore w_i \in im\, A^\perp, \forall w_i$$  QED
Let's see this hold true using the same example as above. 
Take the $w_i = \begin{bmatrix}  -1 \\ 0 \\ -i \\ -1 \\ 0  \end{bmatrix}$. We will see that this is orthogonal to every $v_j \in \{ \begin{bmatrix} 1 \\ i \\ 0 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \\ i \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \}$
$$\begin{align*}
<\begin{bmatrix} {\color{red} 1} \\ i \\  {\color{blue}0} \\ -1 \\ 0 \end{bmatrix},\begin{bmatrix}  -1 \\ {\color{blue} 0} \\ -i \\ {\color{red} {-1}} \\ \color{purple}  0  \end{bmatrix}> &= ({\color{red} 1}*-1) + (i * {\color{blue} 0}) + ({\color{blue}0}*i) + (-1*{\color{red} {-1}}) + \color{purple}0  \\
&= \color{red}{-1} + \color{blue}{0} + \color{red}1 \\
&= 0 \\
<\begin{bmatrix} \color{purple} 0 \\ \color{purple} 0 \\ {\color{red}1} \\ i \\ 0 \end{bmatrix},\begin{bmatrix}  -1 \\ 0 \\ -i \\ {\color{red} {-1}} \\ \color{purple}  0  \end{bmatrix}> &=  \color{purple}0  + ({\color{red} 1}*i) +  (i*{\color{red} {-1}}) + \color{purple}0 \\
&= \color{red}{i} +  \color{red}{-i} \\
&= 0 \\
< \begin{bmatrix}\color{purple}  0 \\\color{purple} 0 \\ \color{purple} 0 \\ \color{purple} 0 \\ \color{red}1 \end{bmatrix},\begin{bmatrix}  -1 \\ 0 \\ -i \\ {\color{red} {-1}} \\ \color{purple} 0  \end{bmatrix}> &= \color{purple}0  + \color{purple}0  \\ &= 0
\end{align*}$$ This also holds for the other copivoted vector $w_i = \begin{bmatrix} -i \\ {\color{red} {-1}} \\ 0 \\ 0 \\ 0 \end{bmatrix}$, which readers can quickly check for themselves.
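These checks are easy to reproduce, assuming SymPy, with the convention $<x,y> = \sum_k x_k \overline{y_k}$ used above.

```python
# A quick reproduction of the inner products above, assuming SymPy and the
# convention <x, y> = sum_k x_k * conj(y_k).
from sympy import Matrix, I

vs = [Matrix([1, I, 0, -1, 0]),
      Matrix([0, 0, 1, I, 0]),
      Matrix([0, 0, 0, 0, 1])]
w = Matrix([-1, 0, -I, -1, 0])

print([(v.T * w.conjugate())[0] for v in vs])   # [0, 0, 0]
```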

3. Spanning of Orthogonal Complement by Copivoted Vectors

This final statement easily follows from the previous two lemmas. We have seen that each $w_1 \dots w_m \in im\,A^\perp$, and we have seen that they are linearly independent. 
Because $\mathbb{C}^n = im\, A \oplus im\, A^\perp$ we know their dimensions are
$$\begin{align*}n &= \dim(im\,A) + \dim(im\, A^\perp) \\ &= k +  \dim(im\, A^\perp) \end{align*}$$ $$\begin{align*} \therefore  \dim(im\, A^\perp)  &= n - k \\ &= m\end{align*}$$ Since we have found $m$ linearly independent vectors $w_1 \dots w_m$ in $m$ dimensional space, this is enough to show that the vectors span the space.
$$\therefore im\, A^\perp = span(w_1\dots w_m)$$ QED

Basis for Orthogonal Complement

Finally, combining all three statements 1, 2, and 3 together, we have proof that $w_1\dots w_m$ form a basis for $im\,A^\perp$.
QED 



Basis for $\mathbb{C}^n$

Now that we have found a basis for the image and for the orthogonal complement of the image, combining these two sets forms a basis for the entire codomain $\mathbb{C}^n$. This holds more generally for any basis of a subspace together with any basis of its orthogonal complement, not only for the pivoted and copivoted vectors. 
Suppose $v_1 \dots v_k$ form a basis for $im\,A$ and $w_1 \dots w_m$ form a basis for $im\,A^\perp$. We first demonstrate the linear independence of the combined set. 
Let $a_j, b_i \in \mathbb{C}$ be constants such that
$$a_1 v_1 + \dots + a_k v_k + b_1 w_1 + \dots + b_m w_m = 0$$ Then, let 
$$v = a_1 v_1 + \dots + a_k v_k \in im\,A$$ $$w = b_1 w_1 + \dots + b_m w_m \in im\,A^\perp$$ Then,
$$\begin{align*} v + w &= 0 \\ \therefore <v + w,w> = <v,w> + <w,w> &= 0 \\ \therefore <w,w> &= 0 \end{align*}$$ since $<v,w> = 0$ by orthogonality. Similarly, we find that $$<v,v> = 0$$ By the property of the inner product, $<v,v> = 0 \Longleftrightarrow v = 0$, so we see that 
$$v = 0$$ $$ w = 0$$ which, by the linear independence of the $v_j$'s and of the $w_i$'s, implies that all $a_j = b_i = 0$, proving the linear independence of the combined set. 
Now that we have found a total of $k + m = n$ linearly independent vectors in the $n$-dimensional space, they clearly form a basis for $\mathbb{C}^n$. 
$$\{v_1\dots v_k, w_1 \dots w_m \} \text{ form a basis for } \mathbb{C}^n$$    QED
Using this, we can see that the 5 vectors
$$ \begin{bmatrix}{\color{red} 1} \\ i  \\ 0 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} -i \\ {\color{red} {-1}} \\ 0 \\ 0 \\ 0  \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ {\color{red} 1} \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ -i \\ {\color{red} {-1}} \\ 0  \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ {\color{red} 1} \end{bmatrix} $$ form a basis for $\mathbb{C}^5$. 
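A quick check, assuming SymPy, that these five vectors really do form a basis: as columns of a $5 \times 5$ matrix they have full rank.

```python
# A quick check, assuming SymPy: the pivoted and copivoted vectors together form
# a 5 x 5 matrix of full rank, hence a basis for C^5.
from sympy import Matrix, I

B = Matrix.hstack(Matrix([ 1,  I, 0, -1, 0]),
                  Matrix([-I, -1, 0,  0, 0]),
                  Matrix([ 0,  0, 1,  I, 0]),
                  Matrix([-1,  0, -I, -1, 0]),
                  Matrix([ 0,  0, 0,  0, 1]))
print(B.rank())   # 5
```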

Notice the generated copivoted basis is not an orthonormal basis. To generate an orthonormal basis (ONB) for each space, it is still simpler to find this basis first, so that we can apply, for example, the Gram-Schmidt procedure to this smaller set of vectors. 


Image, Kernel, and their Orthogonal Complements

Furthermore, with this method of generating bases, we can easily compute a basis for the image, the kernel, and their orthogonal complements all at once. Utilizing the method from my previous post, we can compute a pivoted basis for the kernel by augmenting $A^\intercal$ with the identity matrix; the augmented part of each zero row of the reduced matrix then records the combination of the original rows (the columns of $A$) that yields the zero vector. From these pivoted kernel vectors, we apply the same method used for the image to produce a basis for the orthogonal complement of the kernel. 
Let's see this with a quick example:
Consider the same matrix $A$ as in the example above. 
$$ A = \begin{bmatrix} 1 & 0 & 0 & 1 \\ i & 0 & 0 & i \\ 0 & 1 & 0 & 1 \\ -1 & i & 0 & i-1 \\ 0 & 0 & 1 & 1 \end{bmatrix} $$  To generate a pivoted basis for the image and the kernel, we take the RREF of the transpose of $A$ augmented with the identity matrix:
$$\begin{align*} RREF(aug(A^\intercal, \mathbb{I}_{4})) &= RREF\left[ \begin{matrix} 1 & i & 0 & -1 & 0 \\ 0 & 0 & 1 & i & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & i & 1 & i - 1 & 1 \end{matrix} \right| \left. \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \right] \\ &= \left[ \left .\begin{matrix} 1 & i & 0 & -1 & 0 \\ 0 & 0 & 1 & i & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0   \end{matrix} \, \right |  \begin{matrix} 0 & -1 & -1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & -1 \end{matrix}\right] \end{align*}$$ So we see that kernel is spanned by a single vector
$$\begin{bmatrix} 1 \\ 1 \\ 1 \\ -1 \end{bmatrix}$$ Now that we have a pivoted basis for the kernel, we can apply the same method to find the orthogonal complement of the kernel:
First set up the square matrix:
$$\begin{bmatrix}  1 & 1 & 1 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$ Now for each of the unpivoted columns, generate the following copivoted vectors:
$$\begin{bmatrix} 1 \\ \color{red} {-1} \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ \color{red}  {-1} \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 0 \\ \color{red}{-1}\end{bmatrix}$$ These vectors form a basis for the orthogonal complement of the kernel. 
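The copivot step for this kernel example can be reproduced in a few lines, assuming SymPy.

```python
# A sketch, assuming SymPy, of the copivot step for the kernel example: put the
# pivoted kernel vector (1, 1, 1, -1) in row 0 of a 4 x 4 square matrix, then
# conjugate each unpivoted column and place -1 on its diagonal entry.
from sympy import Matrix, zeros

square = zeros(4, 4)
square[0, :] = Matrix([[1, 1, 1, -1]])   # the pivot sits in column 0

complement = []
for col in range(1, 4):                  # columns without a pivot
    w = square[:, col].conjugate()
    w[col] = -1
    complement.append(w)

for w in complement:
    print(w.T)   # (1, -1, 0, 0), (1, 0, -1, 0), (-1, 0, 0, -1)
```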

Another method of generating the kernel and its orthogonal complement is to utilize the adjoint matrix:
$$im\, A^* = (ker\, A)^\perp$$ $$ker\,A^* = (im\,A)^\perp$$ where the adjoint map is defined as 
$$A^* = \bar{A}^\intercal$$ Now, we can find the image of $A^*$ using RREF just as above:
$$RREF((A^*)^\intercal) = RREF(\bar{A})$$ 
Let's try with the same example:
$$\begin{align*} RREF\overline{\begin{bmatrix} 1 & 0 & 0 & 1 \\ i & 0 & 0 & i \\ 0 & 1 & 0 & 1 \\ -1 & i & 0 & i-1 \\ 0 & 0 & 1 & 1 \end{bmatrix}} &= RREF\begin{bmatrix} 1 & 0 & 0 & 1 \\ -i & 0 & 0 & -i \\ 0 & 1 & 0 & 1 \\ -1 & -i & 0 & -i-1 \\ 0 & 0 & 1 & 1 \end{bmatrix} \\ &= \begin{bmatrix}  1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}   \end{align*}$$ where we see that $im\,A^* = (ker\,A)^\perp$ is spanned by 
$$\begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}$$ Though these vectors may look different from the first basis we found, they indeed span the same space. 
Lastly, we can find a basis for $(im\,A^*)^\perp = ker\,A$ by generating the copivoted vector:
$$\begin{bmatrix} 1 \\ 1 \\ 1 \\ -1 \end{bmatrix}$$ This is indeed the same vector spanning the kernel that we found previously. 
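A short check of the adjoint relations, assuming SymPy; its built-in `nullspace()` is used here to recover $ker\,A$ directly rather than repeating the copivot step.

```python
# A short check of the adjoint method, assuming SymPy. RREF((A*)^T) = RREF(conj(A))
# gives a basis for (ker A)-perp; SymPy's nullspace() recovers ker A itself.
from sympy import Matrix, I

A = Matrix([[ 1, 0, 0, 1],
            [ I, 0, 0, I],
            [ 0, 1, 0, 1],
            [-1, I, 0, I - 1],
            [ 0, 0, 1, 1]])

R, _ = A.conjugate().rref()
print(R)              # nonzero rows (1,0,0,1), (0,1,0,1), (0,0,1,1) span (ker A)-perp
print(A.nullspace())  # a single vector proportional to (1, 1, 1, -1), i.e. ker A
```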

With these methods, we are able to find all four major subspaces of a matrix. 


General Application

The method discussed here can be utilized more generally for discovering a basis for the orthogonal complement of any subspace. For any set of vectors, we produce a matrix $A$ whose image is the span of those vectors. We can do this easily by defining $A$ with the desired vectors as its columns. Then, trivially, its image is spanned by those column vectors, so the orthogonal complement of the image is the orthogonal complement of the original subspace. 
Let us see this in action with two problems:

Subspace

Consider the space $V$ spanned by the vectors $\begin{bmatrix} 3 \\ 2 \\ 1 \\ 0 \\ -1\end{bmatrix}, \begin{bmatrix}1 \\ 0 \\ 1 \\ 0 \\ 1\end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ -1 \\ 0 \\ 1 \end{bmatrix}$. To find $V^\perp$, we first set up the matrix $A$ as
$$A = \begin{bmatrix} 3 & 1 & 0 \\ 2 & 0 & 1 \\ 1 & 1 & -1 \\ 0 & 0 & 0 \\ -1 & 1 & 1 \end{bmatrix}$$ Now we simply perform our short algorithm: 
$$RREF(A^\intercal) = \begin{bmatrix} 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$ $$\begin{bmatrix} 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & \color{red}0 & 0 & 0 \\ 0 & 0 & 0 & \color{red}0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$ where we finally find the basis for $V^\perp$ to be
$$\{ \begin{bmatrix} 1 \\ -1 \\ \color{red}{-1} \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\  \color{red}{-1} \\ 0 \end{bmatrix} \}$$
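A quick check, assuming SymPy, that the two vectors found for $V^\perp$ are orthogonal to each of the vectors spanning $V$.

```python
# A quick check, assuming SymPy: both copivoted vectors are orthogonal to every
# spanning vector of V (all entries are real, so no conjugation is needed).
from sympy import Matrix

spanning = [Matrix([3, 2, 1, 0, -1]),
            Matrix([1, 0, 1, 0, 1]),
            Matrix([0, 1, -1, 0, 1])]
perp = [Matrix([1, -1, -1, 0, 0]),
        Matrix([0, 0, 0, -1, 0])]

print([v.dot(w) for v in spanning for w in perp])   # six zeros
```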

Normal Vector

Now consider another problem of finding the normal vector to a plane in 3D. 
Consider a plane defined by 
$$2x + 3y - 4z = 0$$ First, we can easily anticipate the answer to be $\begin{bmatrix} 2 \\ 3 \\ -4  \end{bmatrix}$ by looking at the coefficients (this, of course, is an easy result from Calculus 3).
We first have to find at least two independent vectors spanning this plane. This is easily done; we can, for example, take the two vectors:
$$\begin{bmatrix}4 \\ 0 \\ 2 \end{bmatrix}, \begin{bmatrix} 0 \\ 4 \\ 3 \end{bmatrix}$$ Now we follow the same steps as in the problem above. 
$$A = \begin{bmatrix} 4 & 0 \\ 0 & 4 \\ 2 & 3\end{bmatrix}$$ $$RREF(A^\intercal) = \begin{bmatrix}  1 & 0 & \frac{1}{2} \\ 0 & 1 & \frac{3}{4} \end{bmatrix}$$ where we can quite easily see that the generated copivoted vector should be
$$\begin{bmatrix} \frac{1}{2} \\ \frac{3}{4} \\ -1 \end{bmatrix}$$ Notice this matches the expected result, but is simply scaled down.
$$4\begin{bmatrix} \frac{1}{2} \\ \frac{3}{4} \\ -1 \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \\ -4  \end{bmatrix}$$
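And a last quick check of this example, assuming SymPy.

```python
# A quick check, assuming SymPy: the copivoted vector is normal to the plane and
# a scalar multiple of the anticipated (2, 3, -4).
from sympy import Matrix, Rational

w = Matrix([Rational(1, 2), Rational(3, 4), -1])
print(w.dot(Matrix([4, 0, 2])), w.dot(Matrix([0, 4, 3])))   # 0 0
print(4 * w)                                                # (2, 3, -4)
```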
