PROBLEM 1

(a) The dot product is 3/2. We have

  16 = 4 * 4 = (v1+v2) . (v1+v2) = v1.v1 + v1.v2 + v2.v1 + v2.v2

with v1.v1 = 2*2 = 4, v2.v2 = 3*3 = 9, and v1.v2 = v2.v1; so 16 = 4 + 9 + 2(v1.v2), which we solve for v1.v2 to get (16-4-9)/2 = 3/2. (The problem can also be done geometrically in various ways, starting from a triangle of sides 2, 3, 4.)

(b) In general a matrix product AB is the table of dot products of rows of A with columns of B. Here the columns of A are v1 and v2, so the rows of the transpose of A are also v1, v2 considered as row vectors, and we know all the dot products from part (a); the product matrix is therefore

  [v1.v1  v1.v2]     [  4   3/2 ]
  [            ]  =  [          ]
  [v2.v1  v2.v2]     [ 3/2   9  ]

(c) Since A and its transpose have the same determinant, their product's determinant is the square of det(A). The product we computed in (b) has determinant 4 * 9 - (3/2) * (3/2) = 135/4. Therefore |det(A)| is the square root of 135/4. (Equivalent forms of this answer, such as sqrt(135)/2 or (3/2)*sqrt(15), are naturally OK too.)

------------

PROBLEM 2

(a) Each of V and W is a plane -- this can be seen either geometrically or by applying rank-nullity to the one-row matrices [0 3 4] and [3 0 -4], whose kernels they are. For V, the standard basis vector e1 is visibly orthogonal to the given vector, so it lies in V, and we readily find a second generator 4*e2 - 3*e3. These are already orthogonal to each other, and e1 already has length 1, so we don't even need a full-fledged Gram-Schmidt computation to get an orthonormal basis: just normalize the second generator to get the o.n. basis e1, (4*e2 - 3*e3)/5. Likewise W has the orthonormal basis e2, (4*e1 + 3*e3)/5.

(b) Once we have an orthonormal basis u1, u2 for a plane U, the projection onto U takes any vector x to (x.u1) u1 + (x.u2) u2. Applying this formula to each of the standard vectors x = e1, e2, e3 yields the columns of the projection matrix.
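These computations are easy to spot-check by machine. The Python sketch below (an illustration, not part of the original solutions; the concrete choice of v1, v2 and the helper names proj_matrix, matvec are mine) verifies the arithmetic of Problem 1 and the projection recipe just described:

```python
from fractions import Fraction as F
import math

# Problem 1: realize |v1| = 2, |v2| = 3, |v1 + v2| = 4 with concrete
# plane vectors (one arbitrary choice, for illustration only).
x = (16 - 4 - 9) / 4.0            # from expanding |v1 + v2|^2 = 16
y = math.sqrt(9 - x * x)
v1, v2 = (2.0, 0.0), (x, y)
assert abs((v1[0] * v2[0] + v1[1] * v2[1]) - 1.5) < 1e-12            # v1.v2 = 3/2
assert abs((v1[0] * v2[1] - v2[0] * v1[1]) - math.sqrt(135) / 2) < 1e-12

# Problem 2(b): projection onto span(u1, u2) for an orthonormal pair;
# column j is proj(e_j), i.e. entry (i, j) is u1[i]*u1[j] + u2[i]*u2[j].
def proj_matrix(u1, u2):
    n = len(u1)
    return [[u1[i] * u1[j] + u2[i] * u2[j] for j in range(n)] for i in range(n)]

def matvec(P, x):
    return [sum(P[i][j] * x[j] for j in range(len(x))) for i in range(len(P))]

P_V = proj_matrix([F(1), 0, 0], [0, F(4, 5), F(-3, 5)])
assert matvec(P_V, [0, 3, 4]) == [0, 0, 0]        # the normal of V maps to 0
assert matvec(P_V, [1, 0, 0]) == [1, 0, 0]        # vectors in V are fixed
```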
For U=V this yields the matrix

  [ 1     0       0    ]
  [ 0   16/25  -12/25  ]
  [ 0  -12/25    9/25  ]

and for U=W we get

  [ 16/25  0  12/25 ]
  [   0    1    0   ]
  [ 12/25  0   9/25 ]

[This is the analysis used to obtain the formula Q*(Q-transpose) of "Fact 5.3.10" in the textbook; you could also use that formula directly if you remember it. The decimal forms 0.36, 0.48, 0.64 of 9/25, 12/25, 16/25 are also acceptable for full credit.]

(c) Since V and W are planes in 3-space, they intersect in a line. Any vector v on this line is mapped to itself by the projections to both V and W. Therefore Tv = v + v = 2v. Thus as long as v is not the zero vector it is an eigenvector with eigenvalue 2.

[Moreover these are the only such vectors: the projection of any vector x onto V has length at most |x|, with equality only when x is in V; likewise for the projection onto W. So by the triangle inequality T(x) has length at most |x| + |x| = 2|x|, with equality if and only if x is in both V and W. But if x is an eigenvector of T with eigenvalue 2 then |T(x)| = |2x| = 2|x|. Therefore the only such vectors lie in both V and W. This further analysis was not required to solve (c), though.]

(d) By (c), we need a vector orthogonal to both of

  [ 0 ]       [ 3 ]
  [ 3 ]  and  [ 0 ]
  [ 4 ]       [-4 ]

so its entries u1, u2, u3 satisfy 3*u2 + 4*u3 = 3*u1 - 4*u3 = 0. We may thus take u3 = 3 and then u1 = 4 and u2 = -4, so u is

  [ 4 ]
  [-4 ]
  [ 3 ]

(A vector u can also be obtained as a cross product of the (0,3,4) and (3,0,-4) vectors; one could of course compute the kernel of T - 2I directly, but that's an invitation to angst and arithmetic error.)

------------

PROBLEM 3

(a) [Let l = lambda.] The characteristic polynomial is det(A - l*I). Here we can expand by minors of the third and fourth rows to reduce the determinant to the product of (l-1)^2 with the 2x2 determinant (0.8 - l)(0.6 - l) - (0.2)(0.4).
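Returning briefly to Problem 2(c)-(d): the claimed eigenvector of T can be checked mechanically. The sketch below (an illustration, not part of the original solutions) builds both projection matrices from the orthonormal bases of part (a) and applies T = (projection onto V) + (projection onto W):

```python
from fractions import Fraction as F

# Orthonormal bases from Problem 2(a):  V: e1, (4 e2 - 3 e3)/5
#                                       W: e2, (4 e1 + 3 e3)/5
def proj_matrix(u1, u2):
    # projection onto span(u1, u2), u1 and u2 orthonormal (Q Q^T)
    n = len(u1)
    return [[u1[i] * u1[j] + u2[i] * u2[j] for j in range(n)] for i in range(n)]

def matvec(P, x):
    return [sum(P[i][j] * x[j] for j in range(len(x))) for i in range(len(P))]

P_V = proj_matrix([F(1), 0, 0], [0, F(4, 5), F(-3, 5)])
P_W = proj_matrix([0, F(1), 0], [F(4, 5), 0, F(3, 5)])
T = [[P_V[i][j] + P_W[i][j] for j in range(3)] for i in range(3)]

u = [4, -4, 3]                      # the vector found in (d)
assert matvec(T, u) == [8, -8, 6]   # T u = 2 u, eigenvalue 2 as claimed
```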
Thus the answer is (l-1)^2 (l^2 - 1.4 l + 0.4).

[Note: that's the characteristic *polynomial*, as opposed to the characteristic *equation* (l-1)^2 (l^2 - 1.4 l + 0.4) = 0; I did not deduct any credit for this terminological error but did note it on the many solutions that contained it.]

[Also note: there is no need to expand the product into l^4 - 3.4 l^3 + 4.2 l^2 - 2.2 l + 0.4, which is also correct but less useful for our purposes and not as simple.]

(b) By the quadratic formula or otherwise, we can factor the quadratic l^2 - 1.4 l + 0.4 as (l-1)(l-0.4). Thus the characteristic polynomial is (l-1)^3 (l-0.4), so the eigenvalues are 1 and 0.4 with algebraic multiplicities 3 and 1 respectively.

(c) The eigenspaces for l=0.4 and l=1 are the kernels of A - (0.4)I and A - I respectively. We compute that the 0.4-eigenspace is the span of e1 - 2*e2, while the 1-eigenspace is the span of e1+e2, e3, e4. [Here e1, e2, e3, e4 are the standard basis vectors; alternative bases of the same spaces were also accepted as correct, though choices such as the curiously popular e1+e2+e3+e4, e1+e2+e3, e1+e2+e4 for the 1-eigenspace make the computation in (e) longer and more error-prone.]

(d) The columns of S are the eigenvectors found in (c), and the diagonal entries of D are the corresponding eigenvalues. For instance, using the above choice of eigenbasis we may take

  [ 1  1  0  0 ]
  [-2  1  0  0 ]
  [ 0  0  1  0 ]
  [ 0  0  0  1 ]

for S, and the diagonal matrix with entries 0.4, 1, 1, 1 for D.

(e) [Let S' be the inverse of S.] Since D = S' A S we have A = S D S' and A^t = S D^t S'. Now D^t is the diagonal matrix with diagonal entries (0.4)^t, 1, 1, 1 in the same order as in (d), and S' can be computed directly (see the comment in (c) about the choice of basis). The limit as t goes to infinity is recovered by taking (0.4)^t to zero, either in the resulting formula for A^t or (simpler) already in D^t.
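All of this can be checked by machine. The sketch below (not part of the original solutions) uses the 4x4 matrix consistent with the eigendata above -- treat that reconstruction as an assumption, since A itself is not reprinted in these solutions:

```python
# A reconstructed from the stated eigenvalues/eigenvectors (an assumption;
# the exam's matrix is not reprinted in these solutions):
A = [[0.8, 0.2, 0.0, 0.0],
     [0.4, 0.6, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(4)) for i in range(4)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# (b): the quadratic factor vanishes at l = 1 and l = 0.4
q = lambda l: l * l - 1.4 * l + 0.4
assert abs(q(1.0)) < 1e-12 and abs(q(0.4)) < 1e-12

# (c): check the stated eigenvectors against their eigenvalues
for v, lam in [([1, -2, 0, 0], 0.4), ([1, 1, 0, 0], 1.0),
               ([0, 0, 1, 0], 1.0), ([0, 0, 0, 1], 1.0)]:
    Av = matvec(A, v)
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(4))

# (e): iterate to approximate A^t for large t; since 0.4^t -> 0,
# the powers approach the limit matrix.
P = A
for _ in range(99):
    P = matmul(P, A)                 # P = A^100
limit = [[2/3, 1/3, 0, 0], [2/3, 1/3, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
assert all(abs(P[i][j] - limit[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```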
Either way we find that the limit of A^t is

  [ 2/3  1/3  0  0 ]
  [ 2/3  1/3  0  0 ]
  [  0    0   1  0 ]
  [  0    0   0  1 ]

Note: many people lost a point or half-point due to arithmetic error(s) en route to the answer. Some of the resulting wrong answers were not even of the form

  [ a  b  0  0 ]
  [ c  d  0  0 ]
  [ 0  0  1  0 ]
  [ 0  0  0  1 ]

It is clear a priori that the limit, if it exists, must be of that form for some nonnegative values of a, b, c, d, because the same is true of each of A, A^2, A^3, ... (as you can see either computationally or by considering what A does to the standard basis). So some of the computational errors could have been detected on these grounds alone.

------------

PROBLEM 4

(a) T. The vector with coordinates x1, x2, x3 is

  x1(e1+e2) + x2(e1+e3) + x3(e2+e3) = (x1+x2)e1 + (x1+x3)e2 + (x2+x3)e3,

and if x1, x2, x3 are all positive then so are x1+x2, x1+x3, x2+x3.

(b) T. For example, because they are diagonalizable with the same eigenvalues 1, 2, 3, just in a different order.

(c) F. For example, A could be

  [ 1  10 ]
  [ 0  -1 ]

which is diagonalizable (distinct eigenvalues) but clearly not orthogonal (the second column vector is much too long).

(d) T. Least-squares solutions minimize the length of the difference Ax - b. If b is in the image then the minimum is zero, attained when Ax = b.

(e) F. For instance, if A is the invertible matrix

  [ 0  1 ]
  [-1  0 ]

(determinant = 1), then the sum of A with its transpose is zero.

(f) T. Call the rows r1, r2, r3. Then det(r1, r2, r3) = det(r1, r2-r1, r3-r1); but r2-r1 and r3-r1 consist of even numbers, so we can write r2-r1 = 2 r2' and r3-r1 = 2 r3' for some rows r2', r3' consisting of integers, and then det(r1, r2-r1, r3-r1) = det(r1, 2 r2', 2 r3') = 4 det(r1, r2', r3'), which is a multiple of 4. (Alternatively, apply Sarrus to det(r1, r2-r1, r3-r1).)

(g) T. Say Av = a*v and Bv = b*v. Then ABv = A(b*v) = b*Av = b*a*v.

(h) T. It's a basic Fact (7.4.3, page 330) that the diagonalizable matrices are precisely the matrices with eigenbases.
(i) F. For instance, let A = B = the 2x2 identity matrix; then tr(AB) = tr(A) = tr(B) = 2, and 2 does not equal 2*2.

(j) T. If the rank were less than 2, then the kernel would have dimension 9 or 10. But the dimension of the kernel is just the geometric multiplicity of the eigenvalue zero, which is no larger than its algebraic multiplicity. This would make the sum of the algebraic multiplicities at least 2 + 9 = 11, which is impossible for a 10x10 matrix. (The same argument shows that in general if an NxN matrix has a nonzero eigenvalue of algebraic multiplicity at least m then its rank is at least m.)
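Several of the counterexamples above can themselves be machine-checked in a few lines; a sketch (an illustration, not part of the original solutions):

```python
# (c): A = [[1, 10], [0, -1]] has distinct eigenvalues 1 and -1
# (hence is diagonalizable) yet is not orthogonal.
def matvec(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1], M[1][0] * x[0] + M[1][1] * x[1]]

A = [[1, 10], [0, -1]]
assert matvec(A, [1, 0]) == [1, 0]       # eigenvector for eigenvalue 1
assert matvec(A, [5, -1]) == [-5, 1]     # eigenvector for eigenvalue -1
assert 10 * 10 + (-1) ** 2 != 1          # second column not unit length

# (e): A = [[0, 1], [-1, 0]] is invertible (det = 1) but A + A^T = 0.
B = [[0, 1], [-1, 0]]
assert B[0][0] * B[1][1] - B[0][1] * B[1][0] == 1
S = [[B[i][j] + B[j][i] for j in range(2)] for i in range(2)]
assert S == [[0, 0], [0, 0]]

# (i): with A = B = the identity, tr(AB) = 2 while tr(A) * tr(B) = 4.
tr = lambda M: M[0][0] + M[1][1]
I2 = [[1, 0], [0, 1]]
assert tr(I2) == 2 and tr(I2) * tr(I2) == 4
```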