

Inverse Matrices: Inverse of Products and Non-Square Matrices

Matrices are usually first introduced to students in pre-calculus, where only the most basic operations and applications are taught. For example, we learn how to add matrices, multiply them by a scalar, multiply matrices together, and find their Inverses.
We are taught that only square matrices have inverses, but is this really true? Today, we will explore the concept of the Inverse of a Non-Square Matrix.

The Inverse of Matrix Product

First, let us explore how the Inverse of a Product relates to the Inverses of its Factors.
Consider this matrix product:
$$AB = C$$ where $A$, $B$, and $C$ are all matrices of compatible orders. Now let us ask: how do $A^{-1}$ and $B^{-1}$ relate to $C^{-1}$?
Let's define what we mean by the Inverse of $C$. We want the inverse to have the property that $$CC^{-1} = C^{-1}C = \textbf{1}$$ where $\textbf{1}$ is an identity matrix.
First, let's look at the first case:
$$CC^{-1} = \textbf{1}$$ We can substitute the definition of $C$ and find that
$$(AB)C^{-1} = \textbf 1$$ By Associative Property, we know that
$$\therefore A(BC^{-1}) = \textbf 1$$ Since $A$ times $BC^{-1}$ produces $\textbf{1}$, we can conclude that $BC^{-1} = A^{-1}$. We can show this algebraically as:
$$\begin{align*} \require{cancel}
\because A(BC^{-1}) &= \textbf 1 \\
\cancel{A^{-1}A}(BC^{-1}) &= A^{-1}\,\textbf 1 \\
BC^{-1} &= A^{-1}
\end{align*}$$ We continue this process and find that
$$\begin{align*}
\because BC^{-1} &= A^{-1}\\
\require{cancel} \cancel{B^{-1}B}C^{-1} &= B^{-1}A^{-1} \\
\therefore C^{-1} &= B^{-1}A^{-1}
& \blacksquare
\end{align*}$$
We find the exact same result from the other case:
$$\begin{align*}
\because C^{-1}C &= \textbf 1 \\
\therefore C^{-1}(AB) &= \textbf 1 \\
(C^{-1}A)\cancel{BB^{-1}} &= \textbf{1} B^{-1} \\
C^{-1}\cancel{AA^{-1}} &= B^{-1}A^{-1} \\
\therefore C^{-1} &= B^{-1}A^{-1}
&\blacksquare
\end{align*}$$

So we find a wonderful symmetry: the Inverse of a Product is simply the Product of the Inverses of its Factors, in reverse order.
$$AB = C \\
 \Leftrightarrow \\
B^{-1}A^{-1} = C^{-1}$$
Try it for yourselves! Here is one example:
$$\begin{align*}
\begin{bmatrix}
3 & 6\\
7 & 9
\end{bmatrix}^{-1}
& =
\begin{bmatrix}
-\frac{3}{5} & \frac{2}{5}\\
\frac{7}{15} & -\frac{1}{5}
 \end{bmatrix} \\
\begin{bmatrix}
1 & 0 \\
9 & 6
\end{bmatrix} ^{-1}
&= \begin{bmatrix}
1 & 0 \\
-\frac{3}{2} & \frac{1}{6}
\end{bmatrix} \\ \\
\begin{bmatrix}
3 & 6\\
7 & 9
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
9 & 6
\end{bmatrix}
&=
\begin{bmatrix}
57  & 36 \\
88 & 54
 \end{bmatrix}
\\
\begin{bmatrix}
1 & 0 \\
-\frac{3}{2} & \frac{1}{6}
\end{bmatrix}
\begin{bmatrix}
-\frac{3}{5} & \frac{2}{5}\\
\frac{7}{15} & -\frac{1}{5}
 \end{bmatrix}
&=
\begin{bmatrix}
-\frac{3}{5} & \frac{2}{5} \\
\frac{44}{45} & -\frac{19}{30} \\
\end{bmatrix} \\ \\
\begin{bmatrix}
57  & 36 \\
88 & 54
 \end{bmatrix}
\begin{bmatrix}
-\frac{3}{5} & \frac{2}{5} \\
\frac{44}{45} & -\frac{19}{30} \\
\end{bmatrix} & = \begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}
\end{align*}$$
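If you would rather check this identity numerically than by hand, here is a minimal sketch (assuming NumPy; the check itself is not part of the original derivation) reusing the matrices above:

```python
import numpy as np

# The same matrices as the worked example above.
A = np.array([[3.0, 6.0], [7.0, 9.0]])
B = np.array([[1.0, 0.0], [9.0, 6.0]])

C = A @ B
# Inverse of the product vs. product of the inverses in reverse order.
print(np.allclose(np.linalg.inv(C), np.linalg.inv(B) @ np.linalg.inv(A)))  # True
```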
Notice that nowhere in the proof did we require the matrices to be Square. We only used the properties and definitions of Inverses and Matrix Multiplication to derive the Inverse of the Product.
This means that, in theory, this identity should hold even for non-Square Matrices and their Inverses!

The Inverse of Non-Square Matrix

Let's say we have a matrix $A$ of order $n\times m$ and we wish to find $A^{-1}$. Because $A$ is not a square matrix, we cannot compute its inverse directly. Instead, we will build and use a 'Dummy Matrix' $B$ to solve for it.
Let $B$ be a Matrix of order $m \times n$.
This means that for a product
$$AB = C$$ $C$ will be of order $n \times n$, which is a Square Matrix. We know how to solve for $C^{-1}$ by hand or with a calculator, and we also know that
$$C^{-1} = B^{-1}A^{-1}$$
We want to isolate $A^{-1}$, so we multiply both sides by $B$ on the left:
$$\therefore BC^{-1} = \cancel{BB^{-1}}A^{-1}$$ and so we find that
$$\begin{align*}\therefore A^{-1} &= BC^{-1}\\
&= B(AB)^{-1}
\end{align*}$$ where $A^{-1}$ has order of $m \times n$.

This is the result when we define $C = AB$, but we could also have defined $C$ as
$$BA = C$$ where $C$ is of order $m\times m$, which is still a square matrix.
By a similar process, we find that
$$\begin{align*}
A^{-1} &= C^{-1}B \\
&=  (BA)^{-1}B\end{align*}$$ where $A^{-1}$ has order of $m \times n$.

Note the cute "BAB" pattern, which makes both formulas easy to remember.
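As a quick sketch of how these two formulas might look in code (assuming NumPy; the helper names are mine):

```python
import numpy as np

def inverse_via_AB(A, B):
    """Candidate inverse A^{-1} = B (AB)^{-1}, for the C = AB case."""
    return B @ np.linalg.inv(A @ B)

def inverse_via_BA(A, B):
    """Candidate inverse A^{-1} = (BA)^{-1} B, for the C = BA case."""
    return np.linalg.inv(B @ A) @ B

A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])    # 2x3, a 'wide' matrix
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # a 3x2 dummy matrix

A_inv = inverse_via_AB(A, B)              # AB is 2x2 and invertible here
print(np.allclose(A @ A_inv, np.eye(2)))  # True
# inverse_via_BA(A, B) would fail here: B @ A is 3x3 and singular (see below).
```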

Though we have found two definitions for the Inverse, you will find that both methods do not always work.

It turns out that this problem arises when we try to take the Inverse of $C$.
You will discover that $C$ is singular when its order is larger than the shared dimension of its factors. In other words, the Product has an inverse only if its order is smaller than that of its factors.

Take a look at these determinants (a matrix is singular if its determinant is 0):
$$ \begin{align*}
\begin{bmatrix}
a \\
b
\end{bmatrix}
\begin{bmatrix}
c & d
\end{bmatrix} & =
\begin{bmatrix}
ac & ad \\
bc & bd
\end{bmatrix} \\
\begin{vmatrix} ac & ad \\
bc & bd
\end{vmatrix} &= abcd - abcd \\
&= 0
\end{align*} $$ but when we change the order of Multiplication to yield a smaller product, $$\begin{align*}
\begin{bmatrix}
c & d
\end{bmatrix}
\begin{bmatrix}
a \\
b
\end{bmatrix} &=
\begin{bmatrix}
ac + bd
\end{bmatrix} \\
\begin{vmatrix}
ac + bd
\end{vmatrix} &= ac+bd \\
& \neq 0 \text{ (in general)}
\end{align*}$$
and also:
$$\begin{align*}
\begin{bmatrix}
a & b\\
c & d \\
e & f \\
\end{bmatrix}
\begin{bmatrix}
g & h & i\\
j & k & l \\
\end{bmatrix}
&=
\begin{bmatrix}
ag+bj & ah + bk & ai + bl\\
cg+dj & ch + dk & ci + dl \\
eg+fj & eh + fk &ei + fl
\end{bmatrix} \\
\begin{vmatrix}
ag+bj & ah + bk & ai + bl\\
cg+dj & ch + dk & ci + dl \\
eg+fj & eh + fk &ei + fl
\end{vmatrix} & =
(ag+bj)\begin{vmatrix}
ch + dk & ci + dl \\
eh + fk &ei + fl
\end{vmatrix} \\
&-(ah+bk)\begin{vmatrix}
cg+dj & ci + dl \\
eg+fj &ei + fl
\end{vmatrix}\\
&+(ai+bl)\begin{vmatrix}
cg+dj & ch + dk  \\
eg+fj & eh + fk
\end{vmatrix} \\
& \vdots \\
&\text{(omitting the remaining expansion for space; every term cancels)} \\
& = 0 \end{align*}$$ and when we change the order of Multiplication again,
$$\begin{align*}
\begin{bmatrix}
g & h & i\\
j & k & l \\
\end{bmatrix}
\begin{bmatrix}
a & b\\
c & d \\
e & f \\
\end{bmatrix} &=
\begin{bmatrix}
ag+ch+ei & bg+dh+fi \\
aj + ck + el & bj + dk + fl
\end{bmatrix} \\
\begin{vmatrix}
ag+ch+ei & bg+dh+fi \\
aj + ck + el & bj + dk + fl
\end{vmatrix}
&= (ag+ch+ei)(bj + dk + fl) - (bg+dh+fi)(aj + ck + el )\\
&\neq 0 \text{ (in general)}
\end{align*}$$
This reveals an interesting pattern:
$$ n \neq m \Rightarrow \vert AB \vert = 0  \vee \vert BA \vert = 0 $$ or in words, either $AB$ or $BA$ will be singular if $A$ and $B$ are not Square Matrices, and the Singular Product is the one with the 'larger' order.
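Here is a quick numerical check of this pattern (assuming NumPy, with random matrices standing in for the symbolic entries above):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 2))  # n x m with n > m
B = rng.random((2, 3))  # m x n

print(np.linalg.det(A @ B))  # ~0 up to floating-point error: the 3x3 product is singular
print(np.linalg.det(B @ A))  # generically nonzero: the 2x2 product
```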

Let's examine why this may be the case.

Singularity of Matrix Product

If $A$ has order $n\times m$ and $B$ has order $m \times n$, we can write out the Matrix Multiplication like this:
$$ \begin{align*}
AB&=C\\
\begin{bmatrix}
R_1 \\
R_2 \\
R_3 \\
\vdots \\
R_n
\end{bmatrix} \begin{bmatrix}
D_1 & D_2 & D_3 & \dots & D_n
\end{bmatrix}
&= \begin{bmatrix}
R_1 D_1 & R_1 D_2 & \dots & R_1 D_n \\
R_2 D_1 & R_2 D_2 & \dots & R_2 D_n \\
\vdots & \vdots & \ddots & \vdots \\
R_n D_1 & R_n D_2 & \dots & R_n D_n
\end{bmatrix}\\
&= \begin{bmatrix}
P_1 \\
P_2 \\
\vdots \\
P_n
\end{bmatrix}
\end{align*}$$Here, $R_i$ represents a Row Vector of $A$, and $D_j$ represents a Column Vector of $B$.

We know that the Product is singular if one of its Rows is a Linear Combination of the other Rows.
$$ \exists k \in [1,n], P_k = \sum_{i \neq k} x_i P_i \Rightarrow  \det(C) = 0$$ This simply means that one of the Rows equals a sum of the other Rows, each multiplied by some coefficient.
More explicitly, each Entry of $P_k$ is the same Linear Combination of the other entries in its Column:
$$ \exists k \in [1,n], \forall i \in [1,n], R_k D_i = \sum_{j \neq k} x_j R_j D_i $$
Because all the Columns play the same role, we can work with a generic column $D_i$:
$$ \begin{align*}R_k D_i &= \sum_{j \neq k} x_j R_j D_i \\
&= \Big(\sum_{j \neq k} x_j R_j\Big)D_i \end{align*}$$ Notice that $D_i$ appears as a common factor on both sides, so the column equations all reduce to the single row relation
$$ R_k = \sum_{j \neq k} x_j R_j $$ for some coefficients $x_j$.
This means we simply have to show that there exists a Row of matrix $A$ that is a Linear Combination of the other Rows of $A$.
Without loss of generality, let's show that $R_n$ is a Linear Combination of the first $m$ Rows of $A$:
$$x_1R_1 + x_2R_2 + \dots + x_mR_m = R_n$$ Technically, we could have chosen any combination of Rows, but for the sake of argument, we will use this one.
We can represent this as Matrix multiplication:
$$\begin{bmatrix}
x_1 & x_2 & \dots & x_m
\end{bmatrix}\begin{bmatrix}
R_1 \\
R_2 \\
\vdots \\
R_m
\end{bmatrix} = R_n $$
We know that each $R_i$ has order $1 \times m$, which makes $\begin{bmatrix}R_1 \\ \vdots \\R_m \end{bmatrix}$ a Square Matrix of order $m\times m$.
Now from here, there are two possible cases:

Case 1:

$\begin{bmatrix} R_1 \\ \vdots \\ R_m \end{bmatrix} ^{-1}$ exists, and we can calculate the coefficients:
$$\begin{bmatrix} x_1 & x_2 & \dots & x_m \end{bmatrix} = R_n \begin{bmatrix} R_1 \\ R_2 \\ \vdots \\R_m \end{bmatrix} ^{-1} $$ This means that $R_n$ is indeed a Linear Combination of the other $m$ Rows, and therefore the Product $AB = C$ is Singular.

Case 2:

$\begin{bmatrix} R_1 \\ \vdots \\ R_m \end{bmatrix} ^{-1}$ does not exist, and we cannot solve for the coefficients.
This means that $R_n$ is not a Linear Combination of the first $m$ Rows, but it reveals another crucial fact.
If the Inverse of $\begin{bmatrix} R_1 \\ \vdots \\ R_m \end{bmatrix}$ does not exist, then once reduced to Reduced-Row-Echelon Form, the square matrix will have a Zero-Row. In other words, one of the Rows in that Square Matrix is a Linear Combination of the other Rows.
Though $R_n$ itself is not a Linear Combination of the Rows, we have shown that there must exist another Row among $R_1$ to $R_m$ which is a Linear Combination of the others, and therefore the Product $C$ is again Singular.

We see that in both cases, if $n > m$, $A$ has order $n\times m$, and $B$ has order $m\times n$, then
$$\det(AB) = 0$$ or in other words,
$$\text{Product of } AB=C \text{ is Singular } \blacksquare$$
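The linear-combination argument above is, in modern terms, a rank argument, and it can be observed directly; a small sketch (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 2))  # four rows that live in a 2-dimensional row space
B = rng.random((2, 4))

C = A @ B
# rank(C) <= rank(A) <= m = 2 < n = 4, so the 4x4 product cannot be invertible.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(C))  # 2 2
print(abs(np.linalg.det(C)) < 1e-9)                        # True
```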

So we can explain why only one of the two definitions works for a given Non-Square Matrix, and we can justify which one to use based on its order.

For the next parts, we will look at a matrix $A$ of order $n \times m$ where $n < m$.
If $B$ has order $m \times n$, then $BA$, of order $m\times m$, will be singular, so we can only define the Inverse of $A$ as
$$A^{-1} = B(AB)^{-1}$$ Even so, if we do not choose our Dummy Matrix $B$ carefully, $AB$ may still be Singular.

Let's look at an example:
$$
\begin{align*}A &= \begin{bmatrix}
6 & 9 & 21 \\
7 & 18 & 35
\end{bmatrix} \\
&\text{to find its Inverse, let's set a Dummy Matrix} \\
B &= \begin{bmatrix}
8 & 9\\
25 & 12 \\
6 & 3
\end{bmatrix} \\
&\text{You could have chosen a different Dummy Matrix}\\
&\text{and it would have been equally valid.} \\
AB & = \begin{bmatrix}
399 & 225\\
716 & 384
\end{bmatrix} \\
&\text{We can calculate the Inverse of the Square Matrix and find that} \\
\begin{bmatrix}
399 & 225\\
716 & 384
\end{bmatrix} ^{-1} & = \begin{bmatrix}
-\frac{32}{657} & \frac{25}{876} \\
\frac{179}{1971} & -\frac{133}{2628}
\end{bmatrix} \\
&\text{then using the definition we find $A^{-1}$ to be} \\
A^{-1}  &= B(AB)^{-1} \\
&= \begin{bmatrix}
8 & 9\\
25 & 12 \\
6 & 3
\end{bmatrix} \begin{bmatrix}
-\frac{32}{657} & \frac{25}{876} \\
\frac{179}{1971} & -\frac{133}{2628}
\end{bmatrix}  \\
&= \begin{bmatrix}
\frac{281}{657} & -\frac{199}{876} \\
-\frac{28}{219} & \frac{31}{292} \\
-\frac{13}{657} & \frac{17}{876}
\end{bmatrix}
\end{align*}$$
So, we have computed what the theoretical Inverse should be, but does this actually work? Let's try it out:
$$\begin{align*}
AA^{-1} & = \begin{bmatrix}
6 & 9 & 21 \\
7 & 18 & 35
\end{bmatrix}
\begin{bmatrix}
\frac{281}{657} & -\frac{199}{876} \\
-\frac{28}{219} & \frac{31}{292} \\
-\frac{13}{657} & \frac{17}{876}
\end{bmatrix} \\
&= \begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}
\end{align*}$$ It does work perfectly!
But what about the other way around? Does the Inverse commute?
$$\begin{align*}
A^{-1}A & = \begin{bmatrix}
\frac{281}{657} & -\frac{199}{876} \\
-\frac{28}{219} & \frac{31}{292} \\
-\frac{13}{657} & \frac{17}{876}
\end{bmatrix}
\begin{bmatrix}
6 & 9 & 21 \\
7 & 18 & 35
\end{bmatrix} \\
& = \begin{bmatrix}
\frac{285}{292} & -\frac{35}{146} & \frac{301}{292} \\
-\frac{7}{292} & \frac{111}{146} & \frac{301}{292} \\
\frac{5}{292} & \frac{25}{146} &\frac{77}{292}
\end{bmatrix} \\
&\approx \begin{bmatrix}
.976 & -.240 & 1.031 \\
-.024 & .760 & 1.031 \\
.017 & .171 & .264
\end{bmatrix}
\end{align*}$$ This is nowhere close to an Identity Matrix. So we find that the Inverse of a Non-Square Matrix does not necessarily commute.

Again, the order of $A$ and the direction of multiplication have a great impact on the product. Just as with the Dummy Matrix construction, $A^{-1}$ only works in one multiplicative direction (whichever one produces the 'smaller' product).

That is, if $n \leq m$ and $A$ has order $n \times m$, then $A^{-1}$ will have order $m \times n$, and $AA^{-1} = \textbf 1$ but it is not guaranteed that $A^{-1}A = \textbf 1$.
Similarly, if $n \geq m$, then $A^{-1}A = \textbf 1$ but it is not guaranteed that $AA^{-1} = \textbf 1$.
Combining the two: when $n = m$, so that $A$ is a Square Matrix, $AA^{-1} = A^{-1}A = \textbf 1$.
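A sketch of this one-sidedness for the 'tall' case, complementing the 'wide' example worked above (assuming NumPy, with a random dummy matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((4, 2))  # 'tall': n > m, so use A^{-1} = (BA)^{-1} B
B = rng.random((2, 4))

A_inv = np.linalg.inv(B @ A) @ B
print(np.allclose(A_inv @ A, np.eye(2)))  # True:  the 'shrinking' direction
print(np.allclose(A @ A_inv, np.eye(4)))  # False: the 'enlarging' direction
```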

Interestingly, with our current method of generating the Inverse with a Dummy Matrix, the choice of Dummy Matrix has a significant impact on the actual value of the computed Inverse:
$$A= \begin{bmatrix}
7 & 8 & 9 \\
1 & 13 & 2
\end{bmatrix} $$ $$\begin{align*}
B &= \begin{bmatrix}
7 & 8 \\
1 & 7 \\
2 & 3
\end{bmatrix} \\
&\Rightarrow \\
A^{-1} & =  \begin{bmatrix}
7 & 8 \\
1 & 7 \\
2 & 3
\end{bmatrix} (
\begin{bmatrix}
7 & 8 & 9 \\
1 & 13 & 2
\end{bmatrix}
\begin{bmatrix}
7 & 8 \\
1 & 7 \\
2 & 3
\end{bmatrix}
)^{-1} \\
& = \begin{bmatrix}
\frac{181}{1513} & -\frac{373}{4539}\\
-\frac{21}{1513} & \frac{386}{4539} \\
\frac{46}{1513} & -\frac{53}{4539}
\end{bmatrix} \\
&\approx \begin{bmatrix}
.120 & -.082 \\
-.014 & .085 \\
.030 & -.012
\end{bmatrix}
\end{align*}$$ but
$$\begin{align*}
B & = \begin{bmatrix}
78 & 25 \\
9 & 63 \\
9 & 57
\end{bmatrix} \\
&\Rightarrow \\
A^{-1} & =  \begin{bmatrix}
78 & 25 \\
9 & 63 \\
9 & 57
\end{bmatrix} (\begin{bmatrix}
7 & 8 & 9 \\
1 & 13 & 2
\end{bmatrix}
 \begin{bmatrix}
78 & 25 \\
9 & 63 \\
9 & 57
\end{bmatrix})^{-1} \\
&= \begin{bmatrix}
\frac{7711}{46194} & -\frac{8389}{46194} \\
-\frac{533}{46194} & \frac{3701}{46194} \\
-\frac{391}{46194} & \frac{3235}{46194}
\end{bmatrix} \\
& \approx \begin{bmatrix}
.167 & -.182 \\
-.012 & .080 \\
-.008 & .070
\end{bmatrix}
\end{align*}$$
Though different, both of these $A^{-1}$ produce the Identity Matrix in one direction
$$  \begin{bmatrix}
7 & 8 & 9 \\
1 & 13 & 2
\end{bmatrix}\begin{bmatrix}
.120 & -.082 \\
-.014 & .085 \\
.030 & -.012
\end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}  \\
 \begin{bmatrix}
7 & 8 & 9 \\
1 & 13 & 2
\end{bmatrix}\begin{bmatrix}
.167 & -.182 \\
-.012 & .080 \\
-.008 & .070
\end{bmatrix}= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} $$ but not in the other:
$$ \begin{bmatrix}
.120 & -.082 \\
-.014 & .085 \\
.030 & -.012
\end{bmatrix}\begin{bmatrix}
7 & 8 & 9 \\
1 & 13 & 2
\end{bmatrix} = \begin{bmatrix}
.755 & -.111 & .912 \\
-.012 & .994 & .045 \\
.201 & .091 & .250
\end{bmatrix} \\
\begin{bmatrix}
.167 & -.182 \\
-.012 & .080 \\
-.008 & .070
\end{bmatrix}\begin{bmatrix}
7 & 8 & 9 \\
1 & 13 & 2
\end{bmatrix} = \begin{bmatrix}
.987 & -1.025 & 1.139 \\
0 & .949 & .056 \\
.011 & .843 & .064
\end{bmatrix} $$
Seeing how $B$ impacts the product in the 'enlarging' direction seems to suggest that choosing the right Dummy Matrix may give us an Inverse of a Non-Square Matrix that commutes just like a Square Inverse:
$$\exists B, A^{-1} = B(AB)^{-1}, AA^{-1} = \textbf{1}_n, A^{-1}A = \textbf{1}_m ?$$ However, with our current method of choosing the Dummy Matrix, this Commutative Property is unlikely to be observed.
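The dependence on $B$ is easy to confirm numerically; here is a sketch (assuming NumPy) using the two Dummy Matrices from above:

```python
import numpy as np

A  = np.array([[7.0, 8.0, 9.0], [1.0, 13.0, 2.0]])
B1 = np.array([[7.0, 8.0], [1.0, 7.0], [2.0, 3.0]])
B2 = np.array([[78.0, 25.0], [9.0, 63.0], [9.0, 57.0]])

inv1 = B1 @ np.linalg.inv(A @ B1)
inv2 = B2 @ np.linalg.inv(A @ B2)

print(np.allclose(inv1, inv2))           # False: different dummies give different inverses
print(np.allclose(A @ inv1, np.eye(2)))  # True:  yet both are right inverses of A
print(np.allclose(A @ inv2, np.eye(2)))  # True
```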

Consistent Inverses with Transpose

One method to generate consistent Non-Square Inverses is to use the Transpose $A^\intercal$ as the Dummy Matrix.
The Transpose simply takes a matrix and 'flips' it around the main diagonal.
$$
\begin{align*}
\begin{bmatrix}
a&b&c \\
d&e&f \\
g & h & i
\end{bmatrix}^\intercal &=
\begin{bmatrix}
a&d&g\\
b&e&h\\
c&f&i
\end{bmatrix} \\
\begin{bmatrix}
a&b&c\\
d&e&f
\end{bmatrix}^\intercal &=
\begin{bmatrix}
a&d\\
b&e\\
c&f
\end{bmatrix}
\end{align*}$$ A Transpose flips the order from $n\times m$ to $m\times n$, which is exactly what we need for our Dummy Matrix. So we can consistently define the Inverse of a Non-Square Matrix as
$$A^{-1} = A^\intercal (AA^\intercal)^{-1}, n \leq m\\
A^{-1} = (A^\intercal A)^{-1}A^\intercal, n \geq m\\$$
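Here are those two definitions wrapped into one small function (a sketch assuming NumPy; the name transpose_inverse is mine). For a full-rank $A$, this construction happens to agree with NumPy's built-in np.linalg.pinv:

```python
import numpy as np

def transpose_inverse(A):
    """Non-square inverse built with the transpose as the dummy matrix."""
    n, m = A.shape
    if n <= m:
        return A.T @ np.linalg.inv(A @ A.T)  # then A @ A_inv == I_n
    return np.linalg.inv(A.T @ A) @ A.T      # then A_inv @ A == I_m

A = np.array([[7.0, 8.0, 9.0], [1.0, 13.0, 2.0]])
A_inv = transpose_inverse(A)
print(np.allclose(A @ A_inv, np.eye(2)))      # True
print(np.allclose(A_inv, np.linalg.pinv(A)))  # True for this full-rank A
```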
Let's try an example:
$$
\begin{align*}
A &= \begin{bmatrix}
7&8&9 \\
1&13&2
\end{bmatrix} \\
A^\intercal &= \begin{bmatrix}
7&1\\
8&13\\
9&2
\end{bmatrix} \\ \\
AA^\intercal &= \begin{bmatrix}
7&8&9 \\
1&13&2
\end{bmatrix}\begin{bmatrix}
7&1\\
8&13\\
9&2
\end{bmatrix} \\
&= \begin{bmatrix}
194&129\\
129&174
\end{bmatrix} \\ \\
(AA^\intercal)^{-1} &= \begin{bmatrix}
194&129\\
129&174
\end{bmatrix} ^{-1}\\
 &= \begin{bmatrix}
\frac{58}{5705} & -\frac{43}{5705}\\
-\frac{43}{5705} & \frac{194}{17115}
\end{bmatrix} \\ \\
\therefore A^{-1} &= A^\intercal (AA^\intercal)^{-1} \\
&= \begin{bmatrix}
7&1\\
8&13\\
9&2
\end{bmatrix}
\begin{bmatrix}
\frac{58}{5705} & -\frac{43}{5705}\\
-\frac{43}{5705} & \frac{194}{17115}
\end{bmatrix} \\
&= \begin{bmatrix}
\frac{363}{5705} & -\frac{709}{17115}\\
-\frac{19}{1141} & \frac{298}{3423}\\
\frac{436}{5705} & -\frac{773}{17115}
\end{bmatrix} \\
& \approx \begin{bmatrix}
.064 & -.041 \\
-.017 & .087 \\
.076 &-.045
\end{bmatrix}
\end{align*}$$ When we multiply this Inverse back onto our original matrix, it produces the Identity Matrix in one direction
$$\begin{bmatrix}
7&8&9 \\
1&13&2
\end{bmatrix}\begin{bmatrix}
\frac{363}{5705} & -\frac{709}{17115}\\
-\frac{19}{1141} & \frac{298}{3423}\\
\frac{436}{5705} & -\frac{773}{17115}
\end{bmatrix} = \begin{bmatrix}
1 & 0\\
0 & 1
\end{bmatrix}$$ but not necessarily in the other
$$\begin{align*}
\begin{bmatrix}
\frac{363}{5705} & -\frac{709}{17115}\\
-\frac{19}{1141} & \frac{298}{3423}\\
\frac{436}{5705} & -\frac{773}{17115}
\end{bmatrix} \begin{bmatrix}
7&8&9 \\
1&13&2
\end{bmatrix}& = \begin{bmatrix}
\frac{6914}{17115} & -\frac{101}{3423} & \frac{8383}{17115}\\
-\frac{101}{3423} & \frac{3418}{3423} & \frac{83}{3423} \\
\frac{8383}{17115} & \frac{83}{3423} & \frac{10226}{17115}
\end{bmatrix} \\
&\approx \begin{bmatrix}
.404 & -.030 & .490 \\
-.030 & .999 & .024 \\
.490 & .024 & .597
\end{bmatrix}
\end{align*}$$
Interestingly, when the Inverse is generated with the Transpose, the Transpose of this Inverse is equal to the Inverse of the Transpose:
$$ (A^{-1})^\intercal = (A^\intercal)^{-1} $$
Let's see that this is true:
$$\begin{align*}
A &= \begin{bmatrix}
7&8&9 \\
1&13&2
\end{bmatrix} \\
A^\intercal &= \begin{bmatrix}
7&1\\
8&13\\
9&2
\end{bmatrix} \\
A^{-1} &= \begin{bmatrix}
\frac{363}{5705} & -\frac{709}{17115}\\
-\frac{19}{1141} & \frac{298}{3423}\\
\frac{436}{5705} & -\frac{773}{17115}
\end{bmatrix} \\ \\
(A^{-1})^\intercal &= \begin{bmatrix}
\frac{363}{5705} & -\frac{19}{1141} & \frac{436}{5705} \\
-\frac{709}{17115} & \frac{298}{3423} & -\frac{773}{17115}
\end{bmatrix} \\ \\
(A^{-1})^\intercal A^\intercal &= \begin{bmatrix}
1 & 0\\
0& 1
\end{bmatrix} \\ \\
A^\intercal (A^{-1})^\intercal &= \begin{bmatrix}
\frac{6914}{17115} & -\frac{101}{3423} & \frac{8383}{17115}\\
-\frac{101}{3423} & \frac{3418}{3423} & \frac{83}{3423} \\
\frac{8383}{17115} & \frac{83}{3423} & \frac{10226}{17115}
\end{bmatrix} \\
&\approx \begin{bmatrix}
.404 & -.030 & .490 \\
-.030 & .999 & .024 \\
.490 & .024 & .597
\end{bmatrix} \\
& = AA^{-1}
\end{align*}$$ We note that the same rule applies to Transposes as well: the Identity Matrix is only produced in the direction with the smaller order.
We also notice that in the 'enlarging' direction, the product of the Matrices equals the product of their Transposes. This is because these Products are symmetric along the main diagonal, so transposing them changes nothing.
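Both observations are quick to verify numerically; a sketch (assuming NumPy), applying the 'wide' rule to $A$ and the 'tall' rule to $A^\intercal$:

```python
import numpy as np

A = np.array([[7.0, 8.0, 9.0], [1.0, 13.0, 2.0]])

A_inv  = A.T @ np.linalg.inv(A @ A.T)  # 'wide' rule on A     (n <= m)
At_inv = np.linalg.inv(A @ A.T) @ A    # 'tall' rule on A^T   (n >= m)

print(np.allclose(A_inv.T, At_inv))    # True: (A^{-1})^T == (A^T)^{-1}

P = A_inv @ A                          # the 'enlarging' product
print(np.allclose(P, P.T))             # True: symmetric, so transposing changes nothing
```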

To Summarize

We found how the Inverse of a Product relates to the Inverses of its Factors
$$AB=C \Leftrightarrow B^{-1}A^{-1} = C^{-1}$$ and that a Matrix Product is Singular if the order 'enlarges':
$$ n > m, \\
A \text{ has order of } n\times m, \\
B \text{ has order of } m\times n, \\AB = C \Rightarrow \det(C) = 0$$ We also defined the Inverse of a Non-Square Matrix using its Transpose
$$\begin{align*}
A^{-1} &= A^\intercal (AA^\intercal)^{-1}, n \leq m\\
\Rightarrow AA^{-1} &= \textbf 1 _n\\
A^{-1} &= (A^\intercal A)^{-1}A^\intercal, n \geq m \\
\Rightarrow A^{-1}A &= \textbf 1 _m
\end{align*}$$ and with this we find the nice pattern that
$$ (A^{-1})^\intercal = (A^\intercal)^{-1} $$ Though for now, Non-Square Inverses only produce the Identity Matrix when multiplied in one direction, we may yet find a way for them to Commute and produce the Identity Matrix in both directions.
