The search for a basis for the orthogonal complement of a subspace has often involved pure guesswork or many tedious computations removing orthogonal projections from a known basis for the entire vector space. Here we propose an alternative, faster method of producing a basis for the orthogonal complement of the image of a matrix.
Let $A$ be an $n\times m$ matrix over $\mathbb{C}$. Such a matrix maps $\mathbb{C}^m$ to $\mathbb{C}^n$:
$$A:\mathbb{C}^m\to\mathbb{C}^n$$
(The choice of the field $\mathbb{C}$ allows us to use the standard inner product $\langle\cdot,\cdot\rangle$ to define orthogonality. Everything below, of course, also works over $\mathbb{R}$.)
Then its image $\operatorname{im}A$ and the orthogonal complement of the image $(\operatorname{im}A)^\perp$ are both subspaces of the codomain $\mathbb{C}^n$ such that
$$\mathbb{C}^n=\operatorname{im}A\oplus(\operatorname{im}A)^\perp$$
A Basis for Image of A
Our first task will be to determine a basis for the image. I have already discussed this in a separate lengthy post, but we will summarize all the tools we need again. The matrix $A$ can be represented by its column vectors:
$$A=\begin{bmatrix}|& &|\\v_1&\dots&v_m\\|& &|\end{bmatrix}$$
With this representation, it is easy to see that the image is spanned by these vectors $v_1,\dots,v_m$. However, as these vectors are not necessarily linearly independent, they do not yet form a basis for $\operatorname{im}A$.
$$\operatorname{im}A=\operatorname{span}(v_1,\dots,v_m)$$
We must remove the linearly dependent vectors to form a basis. This reduction of linear dependence is exactly what taking the Reduced Row Echelon Form (RREF) does well: taking the RREF of a matrix reduces its rows to independent rows with pivots, plus zero rows. To utilize this property of RREF, we turn each column of $A$ into a row by taking the transpose of $A$.
$$\operatorname{RREF}(A^\top)=\operatorname{RREF}\begin{bmatrix}-&v_1&-\\&\vdots&\\-&v_m&-\end{bmatrix}=\begin{bmatrix}-&u_1&-\\&\vdots&\\-&u_k&-\\-&0&-\\&\vdots&\end{bmatrix}$$
We obtain $k$ independent vectors $u_i$ and $(m-k)$ zero rows. Each $u_i$ contains a pivot in a distinct column, which in some sense guarantees their linear independence. Furthermore, since the RREF is computed using only elementary row operations, we are certain we did not 'lose' any of the $v_i$, meaning the $u_i$ still span the same image $\operatorname{im}A$.
So $\{u_1,\dots,u_k\}$ forms a basis for the $k$-dimensional $\operatorname{im}A$:
$$\operatorname{im}A=\operatorname{span}(u_1,\dots,u_k)$$
Let's internalize this with a quick example. Consider this $3\times 4$ matrix:
$$A=\begin{bmatrix}1&1&i&i\\0&1&0&i\\0&i&0&-1\end{bmatrix}$$
Though it sits in 3D space, it is easy to see that the image is only 2-dimensional, as the third and fourth columns are $i$ times the first and second columns respectively. Let's confirm this:
$$\operatorname{RREF}(A^\top)=\operatorname{RREF}\begin{bmatrix}1&0&0\\1&1&i\\i&0&0\\i&i&-1\end{bmatrix}=\begin{bmatrix}1&0&0\\0&1&i\\0&0&0\\0&0&0\end{bmatrix}$$
So we can confirm that the image is spanned by merely two vectors:
$$\left\{\begin{bmatrix}1\\0\\0\end{bmatrix},\begin{bmatrix}0\\1\\i\end{bmatrix}\right\}$$
A Basis for Orthogonal Complement of Image
Now that we have found a basis for $\operatorname{im}A$, we must generate a basis for its orthogonal counterpart. We can interpret orthogonality geometrically: orthogonal vectors are perpendicular, each covering a different 'direction' in the vector space. We can characterize this sense of 'direction' using the pivots.
A pivoted vector controls the component where its pivot is present. For example, in the standard basis, each vector has a single pivot and zeroes elsewhere; each standard vector controls a single direction with its pivot. The same holds true for any pivoted basis: each vector controls the single direction marked by its pivot, and the remaining components simply follow as a byproduct.
Since orthogonal vectors are perpendicular to each of these pivoted vectors, they must point in directions where pivots are not present.
To better summarize this directionality, let us organize the vectors computed by RREF into a square upper-triangular matrix where all the pivots align with the diagonal. We do so by placing each $u_i$ in the row of its pivot and putting zero rows elsewhere:
$$\begin{bmatrix}c_{1,1}&c_{1,2}&\dots&c_{1,n}\\0&c_{2,2}&\dots&c_{2,n}\\\vdots&\vdots&\ddots&\vdots\\0&0&\dots&c_{n,n}\end{bmatrix}$$
where $c_{i,i}$ is $1$ if there is a pivot and $0$ if it is a zero row.
This construction gives us two useful properties:
• If the $i$th column has a pivot, all other entries in the $i$th column are zero, because of RREF and the zero rows:
$$c_{i,i}=1\implies c_{j,i}=0\ \text{for all}\ j\neq i$$
• If the $i$th column does not have a pivot, then it came from a zero row, so all entries in the $i$th row are zero:
$$c_{i,i}=0\implies c_{i,j}=0\ \text{for all}\ j\neq i$$
These properties will be useful soon enough.
Consider this example:
$$A=\begin{bmatrix}1&0&0&1\\i&0&0&i\\0&1&0&1\\-1&i&0&i-1\\0&0&1&1\end{bmatrix}$$
The RREF of its transpose is
$$\operatorname{RREF}(A^\top)=\begin{bmatrix}1&i&0&-1&0\\0&0&1&i&0\\0&0&0&0&1\\0&0&0&0&0\end{bmatrix}$$
So we see that $\operatorname{im}A$ is spanned by these three vectors:
$$\begin{bmatrix}1\\i\\0\\-1\\0\end{bmatrix},\begin{bmatrix}0\\0\\1\\i\\0\end{bmatrix},\begin{bmatrix}0\\0\\0\\0\\1\end{bmatrix}$$
We now need to organize them into the square matrix with the pivots aligned on the diagonal:
$$\begin{bmatrix}1&i&0&-1&0\\0&0&0&0&0\\0&0&1&i&0\\0&0&0&0&0\\0&0&0&0&1\end{bmatrix}$$
Furthermore, notice that the two properties hold on this square matrix: where there is a $1$ on the diagonal, the rest of the column is zero, and where there is a $0$ on the diagonal, the row is all zero.
As explained above, we want to 'cover the directions NOT covered by pivots'. We can do so by adding a 'copivot' in each unpivoted direction. To generate a copivoted vector, take the complex conjugate of a non-pivoted column, then add $-1$ to the diagonal entry of that column.
Seeing that there are $k$ pivots on the $n\times n$ square matrix, we will be able to generate $m=n-k$ copivoted vectors (we reuse the letter $m$ for this count from here on):
$$w_1,\dots,w_m$$
Let us use the same example as above. We have the square upper-triangular matrix
$$\begin{bmatrix}1&i&0&-1&0\\0&0&0&0&0\\0&0&1&i&0\\0&0&0&0&0\\0&0&0&0&1\end{bmatrix}$$
We see that we have two columns without a pivot:
$$\begin{bmatrix}i\\0\\0\\0\\0\end{bmatrix},\begin{bmatrix}-1\\0\\i\\0\\0\end{bmatrix}$$
(For the first of these columns the second entry lies on the diagonal; for the second, the fourth entry does.)
To generate copivoted vectors from these, we first take their complex conjugates, then add $-1$ on the diagonal zeroes. We get:
$$\begin{bmatrix}-i\\-1\\0\\0\\0\end{bmatrix},\begin{bmatrix}-1\\0\\-i\\-1\\0\end{bmatrix}$$
Notice that, combined with the basis for the image, the five vectors cover all five directions with their pivots and copivots:
$$\begin{bmatrix}1\\i\\0\\-1\\0\end{bmatrix},\begin{bmatrix}-i\\-1\\0\\0\\0\end{bmatrix},\begin{bmatrix}0\\0\\1\\i\\0\end{bmatrix},\begin{bmatrix}-1\\0\\-i\\-1\\0\end{bmatrix},\begin{bmatrix}0\\0\\0\\0\\1\end{bmatrix}$$
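Putting the whole recipe together, here is a minimal sketch of the construction in Python with SymPy (the helper name `orthogonal_complement_basis` is mine, not the post's):

```python
# A sketch of the copivot construction, assuming SymPy.
from sympy import Matrix, I, zeros

def orthogonal_complement_basis(A):
    """Return a list of copivoted column vectors spanning (im A)^perp."""
    n = A.rows
    R, pivots = A.T.rref()
    # Organize the nonzero RREF rows into an n x n upper-triangular
    # matrix S whose pivots sit on the diagonal.
    S = zeros(n, n)
    for row, col in enumerate(pivots):
        S[col, :] = R[row, :]
    # Each unpivoted column yields one copivoted vector: conjugate the
    # column, then put -1 on its diagonal entry.
    ws = []
    for j in range(n):
        if j not in pivots:
            w = S[:, j].conjugate()
            w[j] = -1
            ws.append(w)
    return ws

A = Matrix([
    [1, 0, 0, 1],
    [I, 0, 0, I],
    [0, 1, 0, 1],
    [-1, I, 0, I - 1],
    [0, 0, 1, 1],
])
for w in orthogonal_complement_basis(A):
    print(w.T)  # (-i, -1, 0, 0, 0) and (-1, 0, -i, -1, 0)
```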
We can show that these $w_i$ form a basis for the orthogonal complement of the image of $A$:
$$(\operatorname{im}A)^\perp=\operatorname{span}(w_1,\dots,w_m)$$
We will do so in 3 parts:
1. Linear Independence of Copivoted Vectors
2. Orthogonality of Copivoted Vectors to the Image of A
3. Spanning of Orthogonal Complement by Copivoted Vectors
1. Linear Independence of Copivoted Vectors
The argument for the $w_i$'s linear independence is almost the same as the argument for the pivoted vectors' linear independence. First, notice that each copivot is placed in a different row by definition. This means that from a given set of copivoted vectors, we can always choose the vector with its copivot on the lowest row.
The proof is done inductively.
Base Case: a single copivoted vector.
This is trivial, as a single copivoted vector is nonzero (it has a $-1$ entry), and a set of one nonzero vector is always linearly independent.
Inductive Case: $m>1$ copivoted vectors.
Let $w_1,\dots,w_m$ be a set of copivoted vectors ordered so that $w_j$ has its copivot on a lower row than $w_i$ whenever $i<j$. Assume by the induction hypothesis that $w_1,\dots,w_{m-1}$ are linearly independent.
To show independence, let $c_1,\dots,c_m\in\mathbb{C}$ be constants such that
$$c_1w_1+\dots+c_{m-1}w_{m-1}+c_mw_m=0$$
Let $w_m$ have the lowest copivot, on the $j$th row. Notice that, because each $w_i$ came from an upper-triangular matrix, $w_m$ alone has $-1$ on the $j$th row, and all the other vectors have zeroes there. So the $j$th component of the sum is $-c_m$.
Because the sum must equal the zero vector, the $j$th component of the sum must also equal zero, which is only possible if
$$c_m=0$$
So the sum further reduces to
$$c_1w_1+\dots+c_{m-1}w_{m-1}+c_mw_m=c_1w_1+\dots+c_{m-1}w_{m-1}=0$$
and by the induction hypothesis $w_1,\dots,w_{m-1}$ are linearly independent, meaning
$$c_1=\dots=c_{m-1}=0$$
So every coefficient $c_i=0$, proving the linear independence of the $m$ copivoted vectors $w_1,\dots,w_m$.
QED
Let's see this play out for the two copivoted vectors from the example above. Let $c_1,c_2$ be such that
$$c_1\begin{bmatrix}-i\\-1\\0\\0\\0\end{bmatrix}+c_2\begin{bmatrix}-1\\0\\-i\\-1\\0\end{bmatrix}=\begin{bmatrix}-c_1i-c_2\\-c_1\\-c_2i\\-c_2\\0\end{bmatrix}=\begin{bmatrix}0\\0\\0\\0\\0\end{bmatrix}$$
The second and fourth components force $c_1=c_2=0$, proving their independence.
2. Orthogonality of Copivoted Vectors to the Image of A
We have seen that the image is spanned by the basis $v_1,\dots,v_k$ (the pivoted rows of the square matrix from which the $w_i$ are generated). So, if every copivoted $w_i$ is orthogonal to each $v_j$, we will know that each $w_i$ lies in the orthogonal complement of the image:
$$w_i\in(\operatorname{im}A)^\perp\ \text{for all}\ w_i$$
So let us compute the inner product of an arbitrary pivoted $v_j$ and copivoted $w_i$. Let $v_j$ come from the $\bar{j}$th row of the upper-triangular square matrix, and let $w_i$ be generated from the $\bar{i}$th column. This means we can write out their components using the $c_{i,j}$:
$$v_j=\begin{bmatrix}0\\\vdots\\0\\1\\c_{\bar{j},\bar{j}+1}\\\vdots\\c_{\bar{j},n}\end{bmatrix}\qquad w_i=\begin{bmatrix}\overline{c_{1,\bar{i}}}\\\vdots\\\overline{c_{\bar{i}-1,\bar{i}}}\\-1\\0\\\vdots\\0\end{bmatrix}$$
First, since a copivoted vector is generated from a non-pivoted column of the square matrix, $\bar{i}\neq\bar{j}$. This leaves us two cases.
Case 1: $\bar{i}<\bar{j}$
In this case, it is easy to see that the inner product is zero: $w_i$ has all zeroes below the $\bar{i}$th row, and $v_j$ has all zeroes above the $\bar{j}$th row, so in every component at least one of the two factors is zero.
$$\bar{i}<\bar{j}\implies\langle v_j,w_i\rangle=0$$
Case 2: $\bar{i}>\bar{j}$
In this case, let us first expand the inner product (the components beyond these bounds are zeroes):
$$\langle v_j,w_i\rangle=\sum_{k=1}^{n}(v_j)_k\overline{(w_i)_k}=\sum_{k=\bar{j}}^{\bar{i}}(v_j)_k\overline{(w_i)_k}=1\cdot c_{\bar{j},\bar{i}}+\sum_{k=\bar{j}+1}^{\bar{i}-1}c_{\bar{j},k}c_{k,\bar{i}}+c_{\bar{j},\bar{i}}\cdot(-1)=\sum_{k=\bar{j}+1}^{\bar{i}-1}c_{\bar{j},k}c_{k,\bar{i}}$$
Now we can utilize the two properties we outlined above:
$$c_{i,i}=1\implies c_{j,i}=0\ \text{for}\ j\neq i\qquad\qquad c_{i,i}=0\implies c_{i,j}=0\ \text{for}\ j\neq i$$
For each $k$, if $c_{k,k}=1$ then $c_{\bar{j},k}=0$, and if $c_{k,k}=0$ then $c_{k,\bar{i}}=0$. In either case, every term of the summation vanishes:
$$c_{\bar{j},k}c_{k,\bar{i}}=0\qquad\therefore\ \bar{i}>\bar{j}\implies\langle v_j,w_i\rangle=0$$
So in every possible $v_j,w_i$ combination, we see that $w_i$ is orthogonal to the basis of $\operatorname{im}A$.
$$\therefore w_i\in(\operatorname{im}A)^\perp\ \text{for all}\ w_i$$
QED
Let's see this holding true in the same example as above.
Take $w_2=(-1,0,-i,-1,0)^\top$. We will see that this is orthogonal to every
$$v_j\in\left\{\begin{bmatrix}1\\i\\0\\-1\\0\end{bmatrix},\begin{bmatrix}0\\0\\1\\i\\0\end{bmatrix},\begin{bmatrix}0\\0\\0\\0\\1\end{bmatrix}\right\}$$
$$\left\langle\begin{bmatrix}1\\i\\0\\-1\\0\end{bmatrix},w_2\right\rangle=1\cdot(-1)+i\cdot 0+0\cdot\overline{(-i)}+(-1)\cdot(-1)+0=-1+1=0$$
$$\left\langle\begin{bmatrix}0\\0\\1\\i\\0\end{bmatrix},w_2\right\rangle=0+0+1\cdot\overline{(-i)}+i\cdot(-1)+0=i-i=0$$
$$\left\langle\begin{bmatrix}0\\0\\0\\0\\1\end{bmatrix},w_2\right\rangle=0+0+0+0+1\cdot 0=0$$
This also holds for the other copivoted vector $(-i,-1,0,0,0)^\top$, which readers can quickly check for themselves.
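We can also let the machine check all six inner products at once (a sketch, again assuming SymPy; recall that the standard inner product conjugates the second argument, so $\langle v,w\rangle$ is `(w.H * v)[0]`):

```python
# Verifying the orthogonality claims, assuming SymPy.
from sympy import Matrix, I

vs = [Matrix([1, I, 0, -1, 0]), Matrix([0, 0, 1, I, 0]), Matrix([0, 0, 0, 0, 1])]
ws = [Matrix([-I, -1, 0, 0, 0]), Matrix([-1, 0, -I, -1, 0])]

for v in vs:
    for w in ws:
        # w.H is the conjugate transpose, so (w.H * v)[0] = <v, w>.
        assert (w.H * v)[0] == 0
print("every copivoted w is orthogonal to every pivoted v")
```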
3. Spanning of Orthogonal Complement by Copivoted Vectors
This final statement follows easily from the previous two lemmas. We have seen that each of $w_1,\dots,w_m\in(\operatorname{im}A)^\perp$, and we have seen that they are linearly independent. Because $\mathbb{C}^n=\operatorname{im}A\oplus(\operatorname{im}A)^\perp$, we know the dimensions satisfy
$$n=\dim(\operatorname{im}A)+\dim((\operatorname{im}A)^\perp)=k+\dim((\operatorname{im}A)^\perp)$$
$$\therefore\dim((\operatorname{im}A)^\perp)=n-k=m$$
Since we have found $m$ linearly independent vectors $w_1,\dots,w_m$ in an $m$-dimensional space, this is enough to show that the vectors span the space.
$$\therefore(\operatorname{im}A)^\perp=\operatorname{span}(w_1,\dots,w_m)$$
QED
Basis for Orthogonal Complement
Finally, combining statements 1, 2, and 3, we have proven that $w_1,\dots,w_m$ form a basis for $(\operatorname{im}A)^\perp$. QED
Basis for $\mathbb{C}^n$
Now that we have found a basis for the image and a basis for the orthogonal complement of the image, combining these two sets forms a basis for the entire codomain $\mathbb{C}^n$. This proof holds more generally for any subspace basis together with a basis for its orthogonal complement, not only for the pivoted and copivoted vectors. Suppose $v_1,\dots,v_k$ form a basis for $\operatorname{im}A$ and $w_1,\dots,w_m$ form a basis for $(\operatorname{im}A)^\perp$. We first demonstrate the linear independence of the entire set.
Let $a_j,b_i\in\mathbb{C}$ be constants such that
$$a_1v_1+\dots+a_kv_k+b_1w_1+\dots+b_mw_m=0$$
Then, let
$$v=a_1v_1+\dots+a_kv_k\in\operatorname{im}A\qquad w=b_1w_1+\dots+b_mw_m\in(\operatorname{im}A)^\perp$$
Then
$$v+w=0\implies 0=\langle v+w,w\rangle=\langle v,w\rangle+\langle w,w\rangle=\langle w,w\rangle$$
Similarly, we find that $\langle v,v\rangle=0$. By the properties of the inner product, $\langle v,v\rangle=0\iff v=0$, so we see that
$$v=0\qquad w=0$$
which, by the linear independence of the $v_j$ and the $w_i$, implies $a_j=b_i=0$, proving the linear independence of the whole set.
Now that we have found a total of $k+m=n$ linearly independent vectors in an $n$-dimensional space, they clearly form a basis for $\mathbb{C}^n$.
$$\{v_1,\dots,v_k,w_1,\dots,w_m\}\ \text{form a basis for}\ \mathbb{C}^n$$
QED
Using this, we can see that the five vectors
$$\begin{bmatrix}1\\i\\0\\-1\\0\end{bmatrix},\begin{bmatrix}-i\\-1\\0\\0\\0\end{bmatrix},\begin{bmatrix}0\\0\\1\\i\\0\end{bmatrix},\begin{bmatrix}-1\\0\\-i\\-1\\0\end{bmatrix},\begin{bmatrix}0\\0\\0\\0\\1\end{bmatrix}$$
from our running example form a basis for $\mathbb{C}^5$.
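As a quick sanity check (a sketch, assuming SymPy), stacking the five vectors as columns should give a nonsingular $5\times 5$ matrix:

```python
# Checking that the five vectors form a basis of C^5, assuming SymPy.
from sympy import Matrix, I

B = Matrix([
    [ 1, -I, 0, -1, 0],
    [ I, -1, 0,  0, 0],
    [ 0,  0, 1, -I, 0],
    [-1,  0, I, -1, 0],
    [ 0,  0, 0,  0, 1],
])  # columns: v1, w1, v2, w2, v3
assert B.rank() == 5 and B.det() != 0
```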
Notice the generated copivoted basis is not an orthonormal basis. To generate an ONB for each space, it is still simpler to find this basis first, so that we can apply, for example, the Gram-Schmidt procedure to this smaller set of basis vectors.
Image, Kernel, and their Orthogonal Complements
Furthermore, with this method of generating bases, we can easily compute a basis for the image, the kernel, and their orthogonal complements at once. Utilizing the method from my previous post, we can compute a pivoted basis for the kernel by augmenting $A^\top$ with the identity matrix; the augmented entries of each zero row then keep track of the combinations yielding the zero vector. Using these pivoted vectors, we apply the same method used for the image to produce a basis for the orthogonal complement of the kernel.
Let's see this with a quick example:
Consider the same matrix $A$ as in the example above:
$$A=\begin{bmatrix}1&0&0&1\\i&0&0&i\\0&1&0&1\\-1&i&0&i-1\\0&0&1&1\end{bmatrix}$$
To generate a pivoted basis for the image and the kernel, we take the RREF of the transpose of $A$ augmented with the identity matrix:
$$\operatorname{RREF}(\operatorname{aug}(A^\top,I_4))=\operatorname{RREF}\left[\begin{array}{ccccc|cccc}1&i&0&-1&0&1&0&0&0\\0&0&1&i&0&0&1&0&0\\0&0&0&0&1&0&0&1&0\\1&i&1&i-1&1&0&0&0&1\end{array}\right]=\left[\begin{array}{ccccc|cccc}1&i&0&-1&0&0&-1&-1&1\\0&0&1&i&0&0&1&0&0\\0&0&0&0&1&0&0&1&0\\0&0&0&0&0&1&1&1&-1\end{array}\right]$$
So we see that the kernel is spanned by a single vector:
$$\begin{bmatrix}1\\1\\1\\-1\end{bmatrix}$$
Now that we have a pivoted basis for the kernel, we can apply the same method to find the orthogonal complement of the kernel.
First set up the square matrix:
$$\begin{bmatrix}1&1&1&-1\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix}$$
Now, for each of the unpivoted columns, generate the copivoted vectors:
$$\begin{bmatrix}1\\-1\\0\\0\end{bmatrix},\begin{bmatrix}1\\0\\-1\\0\end{bmatrix},\begin{bmatrix}-1\\0\\0\\-1\end{bmatrix}$$
These vectors form a basis for the orthogonal complement of the kernel.
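Here is a minimal sketch of this augmented-RREF bookkeeping in SymPy; the kernel combinations are read off from the rows whose left block is zero:

```python
# Kernel via RREF of A^T augmented with the identity, assuming SymPy.
from sympy import Matrix, I, eye

A = Matrix([
    [1, 0, 0, 1],
    [I, 0, 0, I],
    [0, 1, 0, 1],
    [-1, I, 0, I - 1],
    [0, 0, 1, 1],
])
n, m = A.rows, A.cols
R, _ = A.T.row_join(eye(m)).rref()

# A zero left block means the augmented entries record a combination
# of the columns of A that vanishes, i.e. a kernel vector.
kernel = [R[r, n:].T for r in range(R.rows) if R[r, :n].is_zero_matrix]
print(kernel)  # a single kernel vector, (1, 1, 1, -1)
```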
Another method of generating the kernel and its orthogonal complement is to utilize the adjoint matrix:
$$\operatorname{im}A^*=(\ker A)^\perp\qquad\ker A^*=(\operatorname{im}A)^\perp$$
where the adjoint map is defined by
$$A^*=\bar{A}^\top$$
Now we can find the image of $A^*$ using RREF, the same as above:
$$\operatorname{RREF}((A^*)^\top)=\operatorname{RREF}(\bar{A})$$
Let's try with the same example:
$$\operatorname{RREF}\left(\overline{\begin{bmatrix}1&0&0&1\\i&0&0&i\\0&1&0&1\\-1&i&0&i-1\\0&0&1&1\end{bmatrix}}\right)=\operatorname{RREF}\begin{bmatrix}1&0&0&1\\-i&0&0&-i\\0&1&0&1\\-1&-i&0&-i-1\\0&0&1&1\end{bmatrix}=\begin{bmatrix}1&0&0&1\\0&1&0&1\\0&0&1&1\\0&0&0&0\\0&0&0&0\end{bmatrix}$$
where we see that $\operatorname{im}A^*=(\ker A)^\perp$ is spanned by
$$\begin{bmatrix}1\\0\\0\\1\end{bmatrix},\begin{bmatrix}0\\1\\0\\1\end{bmatrix},\begin{bmatrix}0\\0\\1\\1\end{bmatrix}$$
Though these vectors may look different from the first basis we found, they indeed span the same space.
Lastly, we can find a basis for $(\operatorname{im}A^*)^\perp=\ker A$ by generating the copivoted vector:
$$\begin{bmatrix}1\\1\\1\\-1\end{bmatrix}$$
This is indeed the same kernel vector as we found previously.
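This route is also easy to run in SymPy (a sketch; `A.conjugate()` is $\bar{A}$, and its RREF rows give a pivoted basis for $\operatorname{im}A^*$):

```python
# im A* = (ker A)^perp via RREF of the conjugate of A, assuming SymPy.
from sympy import Matrix, I

A = Matrix([
    [1, 0, 0, 1],
    [I, 0, 0, I],
    [0, 1, 0, 1],
    [-1, I, 0, I - 1],
    [0, 0, 1, 1],
])
R, pivots = A.conjugate().rref()
for r in range(len(pivots)):
    print(R.row(r))  # [1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]
```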
With these methods, we are able to find all four major subspaces of a matrix.
General Application
The method discussed here can be utilized generally for discovering a basis for the orthogonal complement of any subspace. For any set of vectors, we produce a matrix $A$ such that its image is the span of those vectors. We can do this easily by defining $A$ with the desired vectors as its columns. Then, trivially, its image is spanned by those column vectors, so the orthogonal complement of the image is the orthogonal complement of the original space. Let us see this in action with two problems.
Subspace
Consider the space $V$ spanned by the vectors
$$\begin{bmatrix}3\\2\\1\\0\\-1\end{bmatrix},\begin{bmatrix}1\\0\\1\\0\\1\end{bmatrix},\begin{bmatrix}0\\1\\-1\\0\\1\end{bmatrix}$$
To find $V^\perp$, we first set up the matrix $A$ as
$$A=\begin{bmatrix}3&1&0\\2&0&1\\1&1&-1\\0&0&0\\-1&1&1\end{bmatrix}$$
Now we simply perform our short algorithm:
$$\operatorname{RREF}(A^\top)=\begin{bmatrix}1&0&1&0&0\\0&1&-1&0&0\\0&0&0&0&1\end{bmatrix}\quad\longrightarrow\quad\begin{bmatrix}1&0&1&0&0\\0&1&-1&0&0\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&1\end{bmatrix}$$
where we finally find the basis for $V^\perp$ to be
$$\left\{\begin{bmatrix}1\\-1\\-1\\0\\0\end{bmatrix},\begin{bmatrix}0\\0\\0\\-1\\0\end{bmatrix}\right\}$$
Normal Vector
Now consider another problem, of finding the normal vector to a plane in 3D. Consider the plane defined by
$$2x+3y-4z=0$$
First, we can easily anticipate the answer to be $(2,3,-4)^\top$ by looking at the coefficients (this, of course, is an easy result from calculus 3).
We first have to find at least two independent vectors spanning this plane. This is easily done; for example, take the two vectors
$$\begin{bmatrix}4\\0\\2\end{bmatrix},\begin{bmatrix}0\\4\\3\end{bmatrix}$$
Now we follow the same steps as in the problem above:
$$A=\begin{bmatrix}4&0\\0&4\\2&3\end{bmatrix}\qquad\operatorname{RREF}(A^\top)=\begin{bmatrix}1&0&\tfrac{1}{2}\\0&1&\tfrac{3}{4}\end{bmatrix}$$
where we can quite easily see that the generated copivoted vector should be
$$\begin{bmatrix}\tfrac{1}{2}\\\tfrac{3}{4}\\-1\end{bmatrix}$$
Notice this matches the expected result, simply scaled:
$$4\begin{bmatrix}\tfrac{1}{2}\\\tfrac{3}{4}\\-1\end{bmatrix}=\begin{bmatrix}2\\3\\-4\end{bmatrix}$$
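The same plane, worked through the copivot recipe in SymPy (a sketch mirroring the hand computation above):

```python
# Normal vector of the plane 2x + 3y - 4z = 0, assuming SymPy.
from sympy import Matrix, zeros

A = Matrix([[4, 0], [0, 4], [2, 3]])  # two vectors spanning the plane
n = A.rows
R, pivots = A.T.rref()
S = zeros(n, n)
for row, col in enumerate(pivots):
    S[col, :] = R[row, :]
j = next(j for j in range(n) if j not in pivots)  # the unpivoted column
w = S[:, j].conjugate()
w[j] = -1
print(w.T)        # (1/2, 3/4, -1)
print((4 * w).T)  # (2, 3, -4), the anticipated normal
```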