Math 22b Spring 2019
22b Linear Algebra and Vector Analysis
Questions
Week 13:
- Q: Is the Schrödinger equation i f'(t) = L f or i f'(t) = -L f? A: It does not really matter; both signs can be used. Unlike for the heat equation or the wave equation, changing the sign has no real impact. If it should appear, just go with the version which is given. The reason it matters little is that one gets either e^(i n^2 t) or e^(-i n^2 t), which only changes the rotation direction of the wave. In the case of the heat equation, e^(-n^2 t) has a completely different effect than e^(n^2 t), as the latter explodes. Also for the wave equation we need c = sqrt(-λ), and need this to be real. So, of all three, only for the Schrödinger equation does the sign of the eigenvalue not matter much. Actually, negative eigenvalues correspond to anti-particles.
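A minimal numerical sketch (in Python, for illustration only, not course code) of why the sign is harmless for the Schrödinger equation but fatal for the heat equation:

```python
# The Schrödinger factor e^(+-i n^2 t) always has absolute value 1, so the
# sign only changes the rotation direction; the heat factor e^(n^2 t) with
# the wrong sign explodes.
import cmath, math

n, t = 2.0, 5.0
print(round(abs(cmath.exp(1j * n**2 * t)), 12))   # 1.0: rotates, never grows
print(round(abs(cmath.exp(-1j * n**2 * t)), 12))  # 1.0: rotates the other way
print(math.exp(-n**2 * t) < 1)                    # True: correct sign decays
print(math.exp(n**2 * t) > 1e8)                   # True: wrong sign explodes
```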
- Q: What is the most ridiculous statement in a Math 22 setting? A: The Banach-Tarski paradox comes to mind, which is the theorem that one can decompose the standard sphere of radius 1 in R^3 into 5 pieces A, B, C, D, E, then rotate and translate them to assemble two disjoint spheres which both have radius 1. One can also just try to be Kafkaesque: "It might or might not be true that if an asymptotically stable determinant is a non-diagonalizable eigenvector of a PDE, then its QR decomposition row reduces by Fourier to a skew symmetric reflection." It is your turn to make sense of it or appreciate its nonsense.
Week 12:
- Q: If we solve a partial differential equation like f_t = f_xx + sin(t), with initial condition f(0,x) = sin(14x), we first solve the homogeneous problem f_t = f_xx, which gives e^(-196 t) sin(14x), then add a particular solution of f_t = sin(t), which is f(t) = -cos(t). Why is this wrong? A: When finding the particular solution, we have to make sure that it is zero at t=0, so that at t=0 we still have the correct initial condition sin(14x). Take the particular solution f(t) = 1 - cos(t) instead.
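A quick symbolic check (a sketch in Python, assuming sympy is available) that f(t,x) = e^(-196 t) sin(14x) + 1 - cos(t) solves the equation and has the right initial condition:

```python
# Verify f_t = f_xx + sin(t) and f(0,x) = sin(14x) symbolically.
from sympy import symbols, exp, sin, cos, diff, simplify

t, x = symbols('t x')
f = exp(-196*t)*sin(14*x) + 1 - cos(t)

residual = diff(f, t) - diff(f, x, 2) - sin(t)
print(simplify(residual))        # 0: the PDE is satisfied
print(simplify(f.subs(t, 0)))    # sin(14*x): the correct initial condition
```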
Week 11:
- Q: Why is there a factor 1/(2π) in the definition of the dot product for the complex Fourier series and a 1/π in the case of the real Fourier series? A: We try to minimize the number of constants used to write things. With that choice, we have {cos(nx), sin(nx), 2^(-1/2)} as an orthonormal basis in the real case and {exp(i n x)} in the complex case.
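A numerical sanity check (a Python sketch, assuming numpy is available) that the 1/π normalization indeed makes these basis functions orthonormal:

```python
# With the inner product <f,g> = (1/pi) * integral over [-pi,pi] of f g dx,
# check that 2^(-1/2), cos(n x), sin(n x) are orthonormal (rectangle rule,
# which is very accurate for smooth periodic functions).
import numpy as np

x = np.linspace(-np.pi, np.pi, 200000, endpoint=False)
dx = x[1] - x[0]

def dot(fvals, gvals):
    return np.sum(fvals * gvals) * dx / np.pi

const = np.full_like(x, 2**-0.5)
c2 = np.cos(2*x)
s3 = np.sin(3*x)

print(round(dot(const, const), 6))  # 1.0
print(round(dot(c2, c2), 6))        # 1.0
print(round(dot(c2, s3), 6))        # 0.0, the basis functions are orthogonal
```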
Week 10:
- Q: Any references for Fourier theory? A: There are many. One of them is Zygmund's "Trigonometric Series".
There are different conventions for the constant. In Zygmund, for example,
one takes 1/2 as the constant base function, not 2^(-1/2). This is also how
I first learned it: here is a page from an outline I wrote as a student at ETH.
It also contains the Dirichlet proof.
Week 9:
- Q: Why do we write T(f)(x) and not T(f(x))? A: The operator T is a map on functions. We distinguish between functions and function values. If T is the differentiation operator, then T(f) = f' is the derivative function of f. It can be evaluated at a point. We have T(sin) = cos, for example. Now we can evaluate this at a point, like T(sin)(0) = 1. In some sense, we are doing here what higher level programming languages can do. Primitive languages can only build procedures, which produce new data from numerical data. A more advanced programming language ("object oriented" is the word) can also take a function as input and give a function as output. In Mathematica, for example, we can define T[f_]:=f'. Now look what happens if we apply this to a function:
T[f_]:=f'; T[Cos]
This gives the output -Sin[#1] &, which is one of the ways to define the function -sin(x) (more intelligible would be Function[x,-Sin[x]]). So, to get back to the question: it is important here to look at functions as objects! The sin function, for example, is one point in that function space. The zero function, which is constant 0, is another point.
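The same idea can be sketched in Python (an illustration, not course code; since Python has no symbolic derivative built in, a central-difference approximation stands in for differentiation):

```python
# T takes a function and returns a new function: an approximate derivative.
import math

def T(f, h=1e-6):
    # T(f) is itself a function; T(f)(x) approximates f'(x)
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

dsin = T(math.sin)          # a function: the (approximate) derivative of sin
print(round(dsin(0.0), 6))  # 1.0, since sin' = cos and cos(0) = 1
```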
Week 8:
- Q: Do we have to know the Jordan normal form theorem? Why is it useful? A: Yes, it is an important theorem. We have used it to justify the stability theorem, which tells us that the eigenvalues determine whether a point is asymptotically stable or not.
- Q: Can one avoid the Jordan normal form theorem in the stability theorem by using the Wiggle theorem? A: This is an excellent suggestion. How can one forget the Wiggle theorem? Yes, we know that after wiggling the matrix a bit, the eigenvalues still stay within the stability region, so we still have stability. The difficulty is that we have to understand the difference between the orbit of the modified matrix and the orbit of the matrix itself. This could be tricky.
- A nice Lorenz attractor
Week 7:
- Q: How can we compare the orbit of an approximation of a continuous dynamical system by a discrete dynamical system? A: This is an important point in numerical analysis. If we compute an orbit of a differential equation x' = F(x) with the Euler step x(t+h) = x(t) + h F(x(t)), then this discrete dynamical system approximates the real system for small h. There is an error, however, and this error grows in general with time. There are numerical schemes, higher order approximations, which try to compensate as much as possible. The most popular one is the Runge-Kutta scheme.
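A minimal sketch (in Python, for illustration; the step functions are hypothetical helpers, not course code) comparing the Euler step with the classical fourth order Runge-Kutta (RK4) step on x' = F(x) = -x, whose exact solution is x(t) = e^(-t):

```python
import math

def F(x):
    return -x

def euler_step(x, h):
    # x(t+h) = x(t) + h F(x(t))
    return x + h * F(x)

def rk4_step(x, h):
    # classical fourth order Runge-Kutta step
    k1 = F(x)
    k2 = F(x + h/2 * k1)
    k3 = F(x + h/2 * k2)
    k4 = F(x + h * k3)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

h, steps = 0.1, 10            # integrate from t = 0 to t = 1
xe = xr = 1.0                 # initial condition x(0) = 1
for _ in range(steps):
    xe = euler_step(xe, h)
    xr = rk4_step(xr, h)

exact = math.exp(-1.0)
print(abs(xe - exact))        # Euler error, about 2e-2
print(abs(xr - exact))        # RK4 error, several orders of magnitude smaller
```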
Week 6:
- Q: Is there a better proof to verify that a matrix with simple eigenvalues can be diagonalized? A: In class, the following proof was presented to show the linear independence of the eigenvectors v1, ..., vn. Assume we have
a1 v1 + ... + an vn = 0 .
We need to show that all ak are zero. Multiplying this equation from the left with (A-λ1) and using (A-λ1) v1 = 0, we get
a2 (λ2-λ1) v2 + ... + an (λn-λ1) vn = 0 .
If we continue to apply (A-λj) from the left for every j except j=k, we end up with ∏j ≠ k (λk-λj) ak vk = 0. Because the eigenvalues are simple, the product is not zero, so this implies ak = 0. As this is true for all k, the linear independence is shown.
Forrest and Yongquan pointed out after class some other ways to prove this. Here is one: if the vectors were linearly dependent, we could express one of them as a linear combination of the others,
vk = b1 v1 + ... + bn vn ,
where the sum on the right runs over the indices different from k. Applying A to both sides gives on the left hand side
λk vk = λk (b1 v1 + ... + bn vn) .
Setting this equal to the right hand side b1 λ1 v1 + ... + bn λn vn and subtracting gives an equation showing that v1, ..., vk-1, vk+1, ..., vn are linearly dependent. Continuing like this, we eventually reach a dependence relation involving a single eigenvector, which cannot be zero, a contradiction.
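A numerical illustration (a Python sketch, assuming numpy is available) of the consequence: for simple eigenvalues the matrix S of eigenvectors is invertible, and S^(-1) A S is diagonal.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # eigenvalues 2 and 3, both simple

lam, S = np.linalg.eig(A)             # columns of S are eigenvectors
print(abs(np.linalg.det(S)) > 1e-12)  # True: the eigenvectors are independent

D = np.linalg.inv(S) @ A @ S          # should be diag(2, 3) up to rounding
print(np.round(D, 10))
```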
Week 5:
- Q: Where can one find more about the Rising Sea? A: Here is an article by Colin McLarty.
- Q: Where can I find the code for the n-queens problem? A: Go to the code page. We have put it there, as copy-pasting from the PDF won't work. Note that waiting for the solution of the 10-super-queen problem needs some patience. There are better backtracking methods to find solutions, but the code given here is the shortest and just grinds through all the permutations. The advantage of that simple code is that proof verification is easy.
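The brute-force idea can be sketched as follows (in Python, for illustration; the course's own code lives on the code page): a permutation p places a queen in row i, column p[i], so rows and columns are automatically distinct and only the diagonals need checking.

```python
from itertools import permutations

def queens(n):
    # count the placements of n non-attacking queens on an n x n board
    count = 0
    for p in permutations(range(n)):
        if all(abs(p[i] - p[j]) != j - i          # no shared diagonal
               for i in range(n) for j in range(i + 1, n)):
            count += 1
    return count

print(queens(6))  # 4 solutions on a 6 x 6 board
```

Grinding through all n! permutations is slow for large n, but, as noted above, verifying the correctness of such a program is easy.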
Week 4:
- Q: How do we see which method is best for solving a determinant problem? A: One often cannot say. If the matrix is upper triangular or lower triangular, there is nothing to do. If there is a pattern, this can often be seen from the sparse nature of the matrix. Trying row reduction is always a good approach. Sometimes one can immediately see that the matrix is singular, implying that the determinant is zero. Treat it as a game. Sometimes there is a shortcut; in the worst case, you have to grind through a full row reduction or a Laplace expansion.
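Two of these shortcuts in a small sketch (in Python, assuming numpy is available): for a triangular matrix the determinant is the product of the diagonal, and a repeated row makes the matrix singular immediately.

```python
import numpy as np

U = np.array([[2.0, 7.0, 1.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 4.0]])      # upper triangular: nothing to do
print(np.isclose(np.linalg.det(U), 2 * 3 * 4))   # True

S = np.array([[1.0, 2.0],
              [1.0, 2.0]])           # two equal rows: singular
print(np.isclose(np.linalg.det(S), 0.0))         # True, determinant is zero
```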
- Q: I ended up with too many columns in A when doing data fitting. A: This is often due to a non-simplified system of linear equations. Make sure the system has the form A x = b.
Week 3:
- Q: What happens with the QR decomposition if A is not invertible? A: We usually assume A to be invertible. In that case Q has the same shape as A and R is a square matrix. Mathematica, for some strange reason, produces the pair {Q^T, R} and not {Q, R}. Still, it is always true that A = Q R even if A is not invertible. Mathematica does not report back zero columns: if A is an n x n matrix of rank k, then Q will be an n x k matrix and R will be a k x n matrix. But remember, when using Mathematica, that it always reports back Q^T and not Q.
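For comparison, here is a sketch in Python (assuming numpy is available; note that numpy, unlike Mathematica, returns the pair {Q, R} directly):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])              # 3 x 2, full column rank

Q, R = np.linalg.qr(A)                  # reduced mode: Q is 3 x 2, R is 2 x 2
print(Q.shape, R.shape)                 # (3, 2) (2, 2)
print(np.allclose(A, Q @ R))            # True: A = Q R
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: columns of Q are orthonormal
```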
Week 2:
- Q: How can we distinguish between the transformation A and the coordinate change transformation S? A: By notation. We always use S for the coordinate change and A, B for the transformation. Remember: applying S is changing clothes, applying A is performing the dance. The matrix B is how you dance in the new clothes. Remember the basis formula B = S^(-1) A S.
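A small check (a Python sketch, assuming numpy is available; the matrices are chosen only as examples): similar matrices B = S^(-1) A S describe the same dance in different clothes, so quantities like trace and determinant agree.

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # the transformation: rotation by 90 degrees
S = np.array([[2.0, 1.0],
              [1.0, 1.0]])           # an invertible coordinate change

B = np.linalg.inv(S) @ A @ S         # the basis formula

print(np.isclose(np.trace(A), np.trace(B)))            # True
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True
```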
- Q: In which book did Feynman explain the power of examples? A: I still have to find the right place, but I'm pretty sure I read it in "Surely You're Joking, Mr. Feynman!". It could also have been in "What Do You Care What Other People Think?".
Week 1:
- Q: In one of the homework problems, we have the task to solve a matrix equation of the form A X B = X, where A, B are 2 x 2 matrices. How do we solve for X? A: We cannot just isolate X so easily. For now, just write X = [[a,b],[c,d]] and write down the matrix equation. This gives 4 linear equations for the 4 variables a, b, c, d.
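A sketch of this approach in Python (assuming sympy is available; A and B here are example matrices chosen for illustration, not the homework's):

```python
# Write X = [[a,b],[c,d]], expand A X B = X, and solve the resulting
# 4 linear equations for a, b, c, d.
from sympy import Matrix, Rational, symbols, solve

a, b, c, d = symbols('a b c d')
A = Matrix([[2, 0], [0, 3]])
B = Matrix([[Rational(1, 2), 0], [0, Rational(1, 3)]])
X = Matrix([[a, b], [c, d]])

equations = list(A * X * B - X)   # the 4 entries, each set to 0
solution = solve(equations, [a, b, c, d])
print(solution)                   # b and c must vanish; a and d stay free
```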
- Q: Why does the row reduction of [A|I] produce the matrix [I|A^(-1)]? A: Look at the row reduction process in which all but the k'th column of the identity part is deleted. Then we row reduce [A|ek] and end up with [I|xk], where xk solves A xk = ek. But the latter equation means xk = A^(-1) ek. We have learned in the second unit that this means that xk is the k'th column of A^(-1).
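This can be checked with a sketch in Python (assuming sympy is available; the matrix A is an arbitrary invertible example): row reduce [A|I] and read off A^(-1) in the right half.

```python
from sympy import Matrix, eye

A = Matrix([[2, 1], [5, 3]])
augmented = A.row_join(eye(2))        # the block matrix [A | I]
reduced, _ = augmented.rref()         # full row reduction

left, right = reduced[:, :2], reduced[:, 2:]
print(left)                  # Matrix([[1, 0], [0, 1]])
print(right == A.inv())      # True: the right half is A^(-1)
```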
- Q: What is a good book to read besides? A: The book of Otto Bretscher is nice. It covers most closely what we do.
- Q: Can I take this course if I have not taken 22a? A: Check with Oliver. This course also has a cap of 50. I know that not everybody from 22a will continue, so there will be some room for newcomers. This course is more theoretical than 21b. If you are in doubt, it is always good advice to take the lower number. By looking at the Math 22a course material you can get an idea about the structure of the course, as Math 22b is similarly structured: you are expected to work hard and regularly, but you also learn a lot. Mathematics can only be learned by doing it. There are other differences. We have a pretty straightforward lecture course, and while we occasionally do some exercises in class, most of the mathematics is done in the homework and proof seminar parts. If you want an experience closer to high school with lots of in-class exercises, take 21b.