Feb.1 More about square designs: BRC theorem, duality, polarity, and Fisher re-proved. First HW due now. Pick up Handout 2 (what you'll need to know about finite fields) and HW2 (due in 9 days -- NB Monday after next is a Univ. holiday; some problems may be postponed if we won't get to the background in time; as usual, collaborate at will but write up separately). Tomorrow: Math Table -- Dmitry Vaintrob on 1+2+3+... = -1/12.

Recall: Fisher's inequality (1.14) says a 2-(v,k,\lam) design with k < v has at least as many blocks as points (b >= v). The Bruck-Ryser-Chowla (BRC) theorem gives further necessary conditions on the parameters of a *square* design (b = v):
i) v even ==> k-\lam is a square [NB *not* "k is square"; that's a typo for "n is square" where CvL define n=k-\lam];
ii) v odd ==> there exist nonzero integers x,y,z such that z^2 = (k-\lam) x^2 + (-1)^((v-1)/2) \lam y^2.
As you can imagine from the statement, (i) is much easier. We'll show only that case; you might base your final project on one or more of the known proofs of (ii), about which we'll make some remarks to give a bit of context to that strange-looking condition (but you still needn't memorize it for an exam).

Proof of (i): Recall that det(xI+yJ) = (x+yn) x^(n-1) [Lemma 1.12]. If xI+yJ = M\T M for a square matrix M then det(xI+yJ) = det M\T det M = det M det M = (det M)^2, and since M is an integer matrix det(M) is an integer. So a necessary condition is that det(xI+yJ) be a square. Now in our case n = v, and x + yn = r-\lam + v\lam = r + (v-1)\lam; we already saw (1.9) that (v-1)\lam = r(k-1), so x + yn = rk, which for a square design is k^2 (recall 1.15b). So det(M) = +-k (k-\lam)^((v-1)/2); the right-hand side is an integer only if v is odd or k-\lam is a square, and det(M) *is* an integer. So if v is even then k-\lam must be a square, as claimed.

As for (ii): the rows of M are vectors in Z^v whose Gram matrix of inner products is the invertible matrix (r-\lam) I + \lam J. So a necessary condition is that such vectors exist in Q^v, which is to say that the quadratic form with Gram matrix (r-\lam) I + \lam J is equivalent over Q to the standard form with Gram matrix I.
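As a numeric sanity check of the determinant computation in the proof of (i) -- my own illustration, not part of the course materials -- here it is for the Fano-plane parameters (v,k,\lam) = (7,3,1), where a square design has r = k:

```python
import numpy as np

# Fano plane parameters: a square 2-(7,3,1) design, so r = k = 3.
v, k, lam = 7, 3, 1
r = k  # square design

# det((r-lam) I + lam J) should equal (x + y n) x^(n-1) with
# x = r-lam, y = lam, n = v [Lemma 1.12], which simplifies to
# k^2 (k-lam)^(v-1) for a square design.
G = (r - lam) * np.eye(v) + lam * np.ones((v, v))
det = round(np.linalg.det(G))
assert det == k**2 * (k - lam)**(v - 1)  # 9 * 2^6 = 576

# So det(M) = +-k (k-lam)^((v-1)/2) = +-3 * 2^3 = +-24, an integer
# because v is odd here; if v were even, k-lam would have to be a square.
print(det)
```

Here v is odd, so condition (i) imposes nothing; the point is only to see Lemma 1.12 and the identity (v-1)\lam = r(k-1) in action.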
A necessary condition is square determinant, and the arithmetic theory of quadratic forms gives the additional condition (ii). The names BRC are not in alphabetical order because Bruck and Ryser proved it in 1949 for the special case \lam=1 of square Steiner systems (= projective planes of order q=k-1), and Chowla generalized in 1950 to arbitrary \lam.

That condition can in turn be checked "locally", modulo powers of the primes dividing k-\lam and \lam. For instance if \lam=1, so (v,k) = (q^2+q+1, q+1), we find that (-1)^((v-1)/2) = (-1)^((q^2+q)/2) is 1 if q is 0 or 3 mod 4, and -1 if q is 1 or 2 mod 4. In the former case the condition is automatically satisfied (let x=0, y=z); in the latter the condition z^2 = q x^2 - y^2 asks that q = (z/x)^2 + (y/x)^2, which by Fermat is equivalent to q being a sum of two integer squares, and can be checked from the prime factorization of q. This condition is necessary but not sufficient: q=10 is a sum of two squares (1+9), yet the projective plane of order 10 has been proved not to exist -- apparently still the only known case where BRC allows parameters that have been proved impossible.

Duality and polarity of a square design (1.18, 1.19 on page 6): If a square design \D is isomorphic with its dual \D\T then we call \D *self-dual*, and any such isomorphism is called a *duality* of \D; explicitly, it consists of bijections \sigma: X --> \B, \tau: \B --> X such that x is in B iff \tau(B) is in \sigma(x). [NB the book uses exponential notation B^\tau, x^\sigma, which switches the order of composition of bijections; also there's a typo in the first display of p.6: each \B (the collection of blocks) should be plain B (a single block of \D).] In terms of the incidence matrix M, \sigma and \tau switch rows and columns; alternatively, they give a permutation of the rows and columns that takes M to M\T. Composing two dualities (\sigma,\tau) and (\sigma',\tau') yields an automorphism (\tau' \sigma, \sigma' \tau) of \D = (X,\B); also, a duality can be composed on either side with an automorphism of \D to yield another duality.
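The "checked locally" remark for \lam=1 can be sketched in a few lines of code (my own illustration; the function name brc_allows is made up). For q = 1 or 2 mod 4 it just tests whether q is a sum of two integer squares:

```python
# BRC check for a hypothetical projective plane of order q (lam = 1):
# if q = 1 or 2 mod 4, BRC requires q to be a sum of two integer squares.
def brc_allows(q):
    if q % 4 in (0, 3):
        return True  # condition automatic: take x = 0, y = z
    s = int(q**0.5)
    return any(a*a + b*b == q for a in range(s + 1) for b in range(s + 1))

# q=6 fails (6 is not a sum of two squares): no projective plane of order 6,
# as Bruck-Ryser already showed.
assert not brc_allows(6)
# q=10 = 1^2 + 3^2 passes BRC, yet the plane of order 10 was proved not to
# exist -- the necessary condition is not sufficient.
assert brc_allows(10)
```

(A real implementation would use the prime factorization of q rather than brute force, via Fermat's two-squares criterion: every prime 3 mod 4 must divide q to an even power.)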
So the dualities of a self-dual design \D extend Aut(\D) to a bigger group containing Aut(\D) with index 2. A duality that's an involution in this group, i.e. for which \tau\sigma is the identity permutation of X (and thus \sigma\tau is the identity permutation of \B), is called a _polarity_. In terms of M, there's a polarity iff we can permute the rows (and columns) to get a symmetric matrix -- this makes the n-th block the image under \sigma of the n-th point. For example, Pi_2 has a polarity (HW2 hint!):

1 1 0 1 0 0 0
1 0 1 0 0 0 1
0 1 0 0 0 1 1
1 0 0 0 1 1 0
0 0 0 1 1 0 1
0 0 1 1 0 1 0
0 1 1 0 1 0 0

Finally, we can re-prove Fisher without linear algebra as a special case of:

Thm. 1.20 (p.6): let B be a block of a 2-(v,k,\lam) design. Then there are at least k(r-1)^2 / ((k-1)(\lam-1) + (r-1)) blocks intersecting B nontrivially [other than B itself]; equality <==> every block other than B meets B in a constant number of points, in which case that constant is 1 + (k-1)(\lam-1)/(r-1). [Note that for a square design, k=r simplifies that constant to \lam as expected. To recover Fisher, show that the lower bound of Thm. 1.20 is at least b-1 using (1.8) bk=vr and (1.9) r(k-1)=(v-1)\lam; see top of page 7.]

Happily it's not the specific formula that we'll use (we won't cover chapter 7) -- you certainly need not memorize that result for the exam! -- but you should know the technique (called the "variance trick" by the textbook's authors): obtain the first few "moments" sum(1), sum(i), sum(i^2) (usually by some kind of double counting) to get at the mean and variance, which must be positive (in other words, apply Cauchy-Schwarz); since we have only a 2-design we can get only the zeroth, first, and second moments, but in other contexts one may be able to calculate and exploit higher moments too.

Proof of Thm. 1.20: For a block B' other than B, let i(B') be the size of the intersection of B with B'.
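To check concretely that the Pi_2 matrix above exhibits a polarity, a quick sketch (my own verification): the matrix is symmetric, so transposition -- i.e. swapping points with blocks -- fixes it, and it really is the incidence matrix of a 2-(7,3,1) design since M M\T = (r-\lam) I + \lam J:

```python
import numpy as np

# The symmetric incidence matrix of Pi_2 (Fano plane) from above.
M = np.array([
    [1,1,0,1,0,0,0],
    [1,0,1,0,0,0,1],
    [0,1,0,0,0,1,1],
    [1,0,0,0,1,1,0],
    [0,0,0,1,1,0,1],
    [0,0,1,1,0,1,0],
    [0,1,1,0,1,0,0],
])
assert (M == M.T).all()              # symmetric: n-th block <-> n-th point is a polarity
G = 2 * np.eye(7) + np.ones((7, 7))  # (r-lam) I + lam J with r = 3, lam = 1
assert (M @ M.T == G).all()          # so M is indeed a 2-(7,3,1) incidence matrix
```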
Let d <= b-1 be the number of B' for which i(B') > 0, i.e. for which the intersection is nonempty. We'll find the sums of 1, i, i^2 over those B' to get the mean and variance. (The book lets n_i be the number of blocks B' with i(B') = i, and finds the sum over i of n_i, i n_i, and (i^2-i) n_i; that's the same as summing 1, i, i^2-i, and thus yields the sums of 1, i, i^2 respectively.)
@ The sum of 1 is of course d.
@ The sum of i is the number of pairs (B',x) with x contained in both B and B'; there are k choices of x, each giving r-1 choices of B', so the sum is k(r-1).
@ Likewise the sum of i^2-i is the number of triples (B',x,y) where x,y are distinct elements of both B and B'; there are k^2-k choices of (x,y), each giving \lam-1 choices of B', so the sum is (k^2-k)(\lam-1). Thus the sum of i^2 is (k^2-k)(\lam-1) + k(r-1) = k ((k-1)(\lam-1) + (r-1)).
Now Cauchy-Schwarz says sum(1) sum(i^2) >= sum(i)^2, with equality iff all i are equal. Therefore d = sum(1) >= sum(i)^2 / sum(i^2) = k(r-1)^2 / ((k-1)(\lam-1) + (r-1)), which is exactly what we claimed; in the case of equality, the constant value of i is sum(i^2) / sum(i) = k ((k-1)(\lam-1) + (r-1)) / (k(r-1)), which is exactly 1 + (k-1)(\lam-1)/(r-1) as claimed. QED