Message Boards » Study Hall » Linear Algebra
mrfrog

15145 Posts

I'll be doing nothing but this the entire weekend.

If anyone else is studying this, let's rock it in the library sometime. Like I said, I'll pretty much be doing this the entire time, so you know... anytime works.

Otherwise, any resources or recommendations for Campbell?

4/25/2008 5:24:26 PM

ndmetcal
All American
9012 Posts

resources, eh?

4/25/2008 5:46:03 PM

Jrb599
All American
8846 Posts

What level of lin algebra is this?

4/25/2008 5:50:28 PM

mrfrog

15145 Posts

520

sorry, forgot there were similar things.

4/25/2008 5:51:18 PM

mathman
All American
1631 Posts

did you guys discuss the Jordan form and the matrix exponential? Just curious.

4/26/2008 8:34:50 AM

mrfrog

15145 Posts

Jordan - yes
Exponential - no

4/26/2008 12:16:07 PM

mathman
All American
1631 Posts

Here's why it's interesting. Let x be a vector (x1,x2,...,xN)^T; then

dx/dt = Ax

is the normal form of a system of linear differential equations. Any homogeneous system of ordinary linear differential equations with constant coefficients can be put in that form (where A is an N x N constant matrix).

The general solution is just exp(tA)c
where c = (c1,c2,...,cN)^T is a vector of arbitrary constants.

Now that's nice, you might say; after all, exp(tA) looks simple enough. But recall,

exp(tA) = I + tA + (1/2)t^2 A^2 + ...

For most matrices this series does not terminate. So what good is it to say the solution is just exp(tA)? Well, enter the Jordan form or generalized eigenvectors. There are simple formulas to calculate the matrix exponential of a matrix in Jordan form.
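For the curious, here's a minimal numerical sanity check in Python (a sketch using SciPy's expm; the 2x2 matrix A and the constant vector c are arbitrary examples of mine, not from the course) that x(t) = exp(tA)c really satisfies dx/dt = Ax:

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # arbitrary constant 2x2 matrix
c = np.array([1.0, 0.0])       # vector of arbitrary constants

t, dt = 1.0, 1e-6
x = expm(t * A) @ c
dxdt = (expm((t + dt) * A) @ c - x) / dt   # finite-difference approximation of dx/dt

print(np.allclose(dxdt, A @ x, atol=1e-4))  # True: dx/dt = Ax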

So the idea of the Jordan form has deep connections with finding general solutions of linear ODEs. I also see a great similarity with representation theory in physics, where one likewise tries to distill an object into its irreducible pieces; there the physical significance has to do with degenerate energy eigenstates.

Anyway, hope you enjoyed the course, I love this stuff.

4/26/2008 4:21:09 PM

mrfrog

15145 Posts

well alrightey there...

it looks like pretty much every little itty bitty bit of that is a direct analogue of the scalar version. I'm sure it took mathematicians a good number of lifetimes to figure that one out.

Welp, I'll get back to reviewing the Jordan form itself here.

4/26/2008 4:57:11 PM

clalias
All American
1580 Posts

^^Every engineer knows the best way to compute exp(At) is to use the Cayley-Hamilton Theorem.

Every square matrix must satisfy its own characteristic equation.

As a consequence, an analytic function of a matrix may be expressed as a polynomial of degree one less than the dimension of the matrix.

So if, for example, A is 2x2 then you can show that
exp(At)=alpha0*I + alpha1*A

notice you only need to consider the powers of A up to one less than its dimension. All higher powers are linear combinations of those (no new information).

So there you have it. 2 equations in 2 unknowns to find the alphas.
Done.
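A concrete check of that claim, sketched in NumPy (the 2x2 matrix is an arbitrary example of mine): for a 2x2 A the characteristic equation gives A^2 = trace(A)*A - det(A)*I, so A^2 and every higher power collapses onto I and A.

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])     # arbitrary 2x2 example
tr, det = np.trace(A), np.linalg.det(A)

# Cayley-Hamilton: A satisfies its own characteristic equation
print(np.allclose(A @ A, tr * A - det * np.eye(2)))  # True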

[Edited on April 26, 2008 at 5:50 PM. Reason : .]

4/26/2008 5:47:21 PM

mrfrog

15145 Posts

Quote :
"Every engineer"


Oh hells no.

Quote :
"So there you have it. 2 equations in 2 unknowns to find the alphas.
Done."


And wait wait, isn't the point that we don't know the exp(At) 2x2 matrix? What equations would we be solving?

4/26/2008 7:20:52 PM

clalias
All American
1580 Posts

Quote :
"Oh hells no."

Every [good controls] engineer. Is that better for ya? Besides, I wrote that tongue-in-cheek.

I first learned this in Dr. Hall's Controls class in the AE program, and used it in about every controls class I took in grad school at UMD.

I just think it's interesting that I never learned this as a math major at State. The theorem is pretty profound, but it was only mentioned in two questions in the Linear Algebra book by Strang. <I had Dr. Fauntleroy for Lin Alg>

Quote :
"And wait wait, isn't the point that we don't know the exp(At) 2x2 matrix? What equations would we be solving?"

OK let me be more clear

Quote :
"Every square matrix must satisfy its own characteristic equation."

1.
You can use this fact to show that for a matrix A of dimension n, any power A^m (m >= n) is a linear combination of the powers up to A^(n-1) <which is what I said before>

2.
So if you are computing exp(A*t), we can reduce the infinite series to a finite sum; in particular, for dim = 2,
exp(At)=alpha0*I + alpha1*A*t

so substitute each eigenvalue of A into the scalar version of the equation => 2 equations in 2 unknowns

exp(lambda1*t)=alpha0 + alpha1*lambda1*t
exp(lambda2*t)=alpha0 + alpha1*lambda2*t

--> solve for the alphas

then compute alpha0*I+alpha1*A*t = exp(A*t)
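Here's that recipe carried out in SymPy (a sketch; the example matrix is my own choice, with distinct real eigenvalues so the two equations are independent):

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, 1],
               [-2, -3]])              # example matrix, eigenvalues -1 and -2
lam1, lam2 = sorted(A.eigenvals())     # the two distinct eigenvalues

a0, a1 = sp.symbols('alpha0 alpha1')
sol = sp.solve([sp.Eq(sp.exp(lam1*t), a0 + a1*lam1*t),   # 2 equations ...
                sp.Eq(sp.exp(lam2*t), a0 + a1*lam2*t)],  # ... in 2 unknowns
               [a0, a1])

expAt = sol[a0]*sp.eye(2) + sol[a1]*A*t                  # alpha0*I + alpha1*A*t
print((expAt - (A*t).exp()).applyfunc(sp.simplify))      # zero matrix: it matches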


[Edited on April 26, 2008 at 10:26 PM. Reason : .]

4/26/2008 10:06:01 PM

mrfrog

15145 Posts

Well isn't that just peachy that you learned it in controls class?

I, on the other hand, will probably at some point have to explain how I first learned this method from TWW.

4/26/2008 11:37:08 PM

mathman
All American
1631 Posts

Interesting approach. I'm not sure if mine involves more or less calculation. I like to use generalized eigenvectors. It all starts with noticing that

exp(tA) = exp(rt)[I + t(A-rI) + (1/2)t^2 (A-rI)^2 + (1/3!)t^3 (A-rI)^3 + ... ]

for any number r. However, what makes this interesting is that if you act on a generalized eigenvector with eigenvalue r then the expression truncates. Suppose,

1. (A-rI)u1 = 0 : eigenvector
2. (A-rI)^2 u2 = 0 : generalized eigenvector of order 2
3. (A-rI)^3 u3 = 0 : generalized eigenvector of order 3

Then

1. x1 = exp(tA)u1 = exp(rt)[I + t(A-rI) + ...]u1 = exp(rt) I u1 + exp(rt) t (A-rI)u1 = exp(rt) u1

2. x2 = exp(tA)u2 = exp(rt)[I + t(A-rI) + (1/2)t^2 (A-rI)^2]u2
= exp(rt)u2 + t exp(rt)(A-rI)u2 + 0

3. x3 = exp(tA)u3 = exp(rt)[I + t(A-rI) + (1/2)t^2 (A-rI)^2 + (1/3!)t^3 (A-rI)^3]u3
= exp(rt)u3 + t exp(rt)(A-rI)u3 + (1/2)t^2 exp(rt)(A-rI)^2 u3 + 0

Now solution 1. should not surprise anyone who has taken MA 341, while 2. and 3. may or may not be familiar depending on who you took it with. Moreover, usually you can choose a chain of generalized eigenvectors satisfying (A-rI)u3 = u2 and (A-rI)u2 = u1, so the calculations in 2. and 3. are even simpler.

The general solution then looks like x = c1 x1 + c2 x2 + c3 x3 +...

where the +... comes from the other eigenvalues of A. Of course there can be at most n of them so this is a finite calculation.

Up to now I have been assuming that the eigenvalue r was real; if it is complex then we need to take real and imaginary parts of the above to get solutions. In particular, if r were a complex eigenvalue repeated 3 times then we would have gotten 6 linearly independent real solutions.

x1 = Re(exp(rt)u1), x2 = Im(exp(rt)u1), ... , x6 = Im(exp(rt)u3 + t exp(rt)u2 + (1/2)t^2 exp(rt)u1)

where I assumed the "chain" condition for the x6 solution ( (A-rI)u3 = u2 and (A-rI)u2 = u1 ).
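To make the chain construction concrete, here's a small SymPy sketch (my own example: a 2x2 matrix whose eigenvalue r = 3 has algebraic multiplicity 2 but only one honest eigenvector, so a chain u1, u2 is needed):

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[3, 1],
               [0, 3]])
r = 3
u1 = sp.Matrix([1, 0])   # (A - rI)u1 = 0  : eigenvector
u2 = sp.Matrix([0, 1])   # (A - rI)u2 = u1 : generalized eigenvector of order 2

x1 = sp.exp(r*t)*u1                      # series truncates after the first term
x2 = sp.exp(r*t)*u2 + t*sp.exp(r*t)*u1   # chain condition used

for x in (x1, x2):
    # verify dx/dt = Ax for both solutions
    print((sp.diff(x, t) - A*x).applyfunc(sp.simplify))  # zero vectors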

Now clalias, I want to believe, but I cannot see how you get exponentials and sines and cosines as coefficients of the vector solutions in your procedure. Is there some reason that r=0 for you guys? Oh, do alpha0 and alpha1 solve to give functions?

Btw, I also did not learn much of this with Fauntleroy when I took Deqns from him back in the day.

4/27/2008 9:00:37 AM

mathman
All American
1631 Posts

For example, suppose

A =
[ 2, 0 ]
[ 0, 2 ]

Then we can calculate,

exp(tA) =
[ exp(2t) 0 ]
[ 0 exp(2t)]

This is not a polynomial in tA with constant coefficients, though I guess you allow function coefficients,

exp(tA) = exp(2t)*I

So in this example alpha0 = exp(2t) while alpha1 = 0?

I'm always interested in new algorithms. Thanks.

[Edited on April 27, 2008 at 9:05 AM. Reason : .]

4/27/2008 9:05:16 AM

clalias
All American
1580 Posts

Quote :
" but I cannot see how you get exponentials and sines and cosines as coefficients of the vector solutions in your procedure. Is there some reason that r=0 for you guys? Oh, do alpha0 and alpha1 solve to give functions?
"


no, no special reason r=0.

when your eigenvalues are complex you get solutions with sin and cos because e^(jt)=cos(t)+sin(j*t).

^for repeated eigenvalues you look at the derivatives of the analytic function, which gives you the usual t*exp(lambda*t) that you expect when you solve for the alphas. This is the only way to get n independent equations for the n alphas.

the best way to understand this method is to pick a few examples:
1. pure oscillator
2. overdamped
3. underdamped

and work it out (a sketch of two such cases is below)
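Two of those cases sketched in SymPy (my own code, not from the linked handout): the pure oscillator, whose eigenvalues i and -i produce the sines and cosines asked about above, and the repeated-eigenvalue matrix A = 2I from the earlier example, handled with the derivative trick just described.

import sympy as sp

t = sp.symbols('t', real=True)
a0, a1, lam = sp.symbols('alpha0 alpha1 lam')

# Case 1: pure oscillator, A = [[0, 1], [-1, 0]], eigenvalues i and -i.
A1 = sp.Matrix([[0, 1],
                [-1, 0]])
sol1 = sp.solve([sp.Eq(sp.exp(sp.I*t),  a0 + a1*sp.I*t),
                 sp.Eq(sp.exp(-sp.I*t), a0 - a1*sp.I*t)], [a0, a1])
expA1t = (sol1[a0]*sp.eye(2) + sol1[a1]*A1*t).applyfunc(
    lambda e: sp.simplify(sp.expand_complex(e)))
print(expA1t)   # Matrix([[cos(t), sin(t)], [-sin(t), cos(t)]])

# Case 2: A = 2I, eigenvalue 2 repeated. Differentiate the scalar equation
# with respect to lambda to get the second independent equation.
A2 = 2*sp.eye(2)
eq  = sp.Eq(sp.exp(lam*t), a0 + a1*lam*t)
deq = sp.Eq(sp.diff(eq.lhs, lam), sp.diff(eq.rhs, lam))
sol2 = sp.solve([eq.subs(lam, 2), deq.subs(lam, 2)], [a0, a1])
expA2t = (sol2[a0]*sp.eye(2) + sol2[a1]*A2*t).applyfunc(sp.simplify)
print(expA2t)   # Matrix([[exp(2*t), 0], [0, exp(2*t)]]) = exp(2t)*I

Note the alphas do come out as functions of t: alpha0 = cos(t), alpha1 = sin(t)/t for the oscillator, and alpha0 = exp(2t)*(1 - 2t), alpha1 = exp(2t) for A = 2I (they still sum to exp(2t)*I), which answers the question a few posts up.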


OK I found a good short write-up on this method. Enjoy
http://web.mit.edu/2.151/www/Handouts/CayleyHamilton.pdf


[Edited on April 27, 2008 at 2:02 PM. Reason : link]

4/27/2008 2:01:18 PM

clalias
All American
1580 Posts

whoops, of course I meant e^(jt)=cos(t)+j*sin(t).

4/27/2008 6:20:15 PM

Jrb599
All American
8846 Posts

of course

4/27/2008 6:27:32 PM

mathman
All American
1631 Posts

Quote :
"clalias
no, no special reason r=0.

when your eigenvalues are complex you get solutions with sin and cos because e^(jt)=cos(t)+sin(j*t).

^for repeated eigenvalues you look at the derivatives of the analytic function, which gives you the usual t*exp(lambda*t) that you expect when you solve for the alphas. This is the only way to get n independent equations for the n alphas.
"


I looked over the link and I understand the method now, for the most part. See, what tripped me up to begin with is that you said,

Quote :
"As a consequence, an analytic function of a matrix may be expressed as a polynomial of degree one less than the dimension of the matrix."


this is true only if we allow the "polynomial" to have coefficients which are functions of t. That is of course all well and fine; I just assume constant coefficients by default.

The devil is in the details of page 5 of the linked pdf.

I like your method, it is quite interesting.

However, here is the downside I see. You have to do algebra with functions of t to solve for the alphas. This is not too bad for the cases presented in the pdf, but I'd wager it could get pretty ugly in higher-order cases, especially where the geometric multiplicity is less than the algebraic multiplicity... that is, where you have fewer than n eigenvectors for an n x n matrix A.

In contrast, my method (which is not really my method, I am not a dead European dude) involves calculations with numbers, not functions. The unknowns u1, u2, u3, ... are generalized eigenvectors, which are straightforward to calculate for a given matrix. The functions appear in the solution in an orderly, easy way as dictated by the equation

exp(tA) = exp(rt)[I + t(A-rI) + (1/2)t^2 (A-rI)^2 + (1/3!)t^3 (A-rI)^3 + ... ]

Moreover, if you have the Jordan form I believe you can just straight up read off the generalized eigenvectors.

4/27/2008 7:26:14 PM

mrfrog

15145 Posts

doesn't Maple solve linear systems of DEs?

Is there really a reason for any of us to think about this sort of thing anymore?

4/27/2008 8:25:17 PM

clalias
All American
1580 Posts

^^ actually I am quite aware of using the Jordan form to solve this. It is usually the first method we are taught for e^(At). But the main reason we use it is that we can learn more about the flow of the system by finding the generalized eigenvectors etc...

for example, let v be the (right) eigenvector for lambda and w the left eigenvector.

exp(lambda*t)*v is a "mode" of the state equation
w'*X0 describes how this mode is excited by the initial condition X0

and by looking at the modes we can see both the temporal behavior exp(lambda*t) and the direction v.

then by looking at the I/O behavior we can see
w_k'*b_j describes how the jth input drives the kth mode.

and from the output equation y = c*x + d*u,

c_i'*v_k describes how much the kth mode is visible in the ith output.

Essentially we use the Jordan form to convert to a "modal" coordinate space.

Moreover, c_i'*v_k*w_k'*b_j weights the kth mode in the response of output i to an impulse in the jth channel at time t = 0.
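A minimal numerical sketch of that modal picture (my own example, a diagonalizable A with distinct eigenvalues; the v_k are columns of the eigenvector matrix and the w_k' are rows of its inverse, normalized so w_k'*v_k = 1):

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # example state matrix, eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])      # initial condition X0

lams, V = np.linalg.eig(A)     # columns of V: right eigenvectors v_k
W = np.linalg.inv(V)           # rows of W: left eigenvectors w_k'

t = 0.7
# x(t) = sum over modes of exp(lambda_k t) * v_k * (w_k' x0)
x_modal = sum(np.exp(lams[k]*t) * V[:, k] * (W[k, :] @ x0)
              for k in range(len(lams)))
print(np.allclose(x_modal, expm(A*t) @ x0))  # True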

^^true, it is often not the case that nullity(lambda*I - A) equals the multiplicity of the eigenvalue, so we frequently see cases where we need Jordan block forms.

The Cayley-Hamilton method can handle this case just fine too, so if I just need a quick calculation of exp(At), it has always been faster for me.

^ a lot of what I talked about answers your question too. You can lose a lot of intuition if you just use maple/matlab all the time.

But yes, we frequently use maple/matlab to compute the Jordan Block Modal Forms by computing the generalized eigenvectors/eigenvalues. Mainly I use it to do a lot of the low level calculations so I don't just make a stupid typo writing out so many exp(lambda*t) expressions, one negative in the wrong place and your screwed.

[Edited on April 27, 2008 at 10:23 PM. Reason : .]

[Edited on April 27, 2008 at 10:23 PM. Reason : .^]

4/27/2008 10:17:12 PM

skokiaan
All American
26447 Posts

your screwed

4/27/2008 11:03:21 PM

catzor
All American
1749 Posts

About five posts in this totally turned into a dick measuring competition.

4/27/2008 11:55:32 PM

clalias
All American
1580 Posts

^^you're

thanks for the grammar check.

^you're a fucking idiot, and judging by your response I'm guessing your dick is pretty small.

[Edited on April 28, 2008 at 12:10 AM. Reason : .^]

4/28/2008 12:08:53 AM

mathman
All American
1631 Posts

^^^^ could you point me to a good book with this sort of thinking? I never took controls, so all my differential equations knowledge comes mostly from preparing to teach DEqns.

^^^^^wait, you mean I can just do this all on the computer? Dang, I gotta find a new line of work.

[Edited on April 28, 2008 at 12:14 AM. Reason : .]

4/28/2008 12:14:16 AM

virga
All American
2019 Posts

Campbell's final will rape you.

enjoy!!!

4/29/2008 2:26:35 AM

mrfrog

15145 Posts

huzzah!

4/29/2008 3:15:00 PM
