Jan.29  Duality and the incidence matrix of a design; Fisher's theorem

Reminder: first HW due next Monday.  New: informal lecture notes on course
webpage.

[pep talk about duality in projective planes and elsewhere]

Def. 1.10 (p.4)  _Incidence matrix_ of a design (or even a t-structure):
  columns <--> points
  rows    <--> blocks
  entries = 1 if point in block, 0 else.

Example:
        B D E O R U Y
  BUD   1 1 0 0 0 1 0
  BYE   1 0 1 0 0 0 1
  DOE   0 1 1 1 0 0 0
  DRY   0 1 0 0 1 0 1
  ORB   1 0 0 1 1 0 0
  RUE   0 0 1 0 1 1 0
  YOU   0 0 0 1 0 1 1

Warning: this depends on the order chosen for X and \B; also, some authors
use a definition with rows and columns switched, resulting in the transpose
of our matrix.

1.11 (p.4), generalized slightly: If it's a t-(v,k,\lam) design then:
 @ each block has k points
   <--> each row dots with the all-1's column to k
   <--> M * (all-1's column) = k * (all-1's column)
 @ if t>=1: each point is in r = \lam_1 blocks
   <--> each column dots with the all-1's row to r
   <--> (all-1's row) * M = r * (all-1's row)

The fact that rows and columns are interchangeable so far suggests defining
the _dual_ \D\T of a design \D = (X,\B) [see p.5; I'm changing the text's
order of exposition] so that the incidence matrix of \D\T is the transpose
of the incidence matrix of \D (as the notation suggests).  Morally speaking
\D\T is (\B,X).  Since X does not consist of subsets of \B, we interpret
this as
  \D\T = (X\T, \B\T) = (\B, {\beta_x : x \in X}),  where
  \beta_x := {B \in \B : x \in B}.
NB in general this might be only a "structure", not a design.  A structure
\D is a design iff the incidence matrix has no repeated rows, and \D\T is a
design iff there are no repeated columns.  So, for instance, consider
  1100
  1100
  0011
  0011
(\D and \D\T are 1-(4,2,2) structures, both isomorphic with the regular
graph-with-multiplicity @=@ @=@), and
  110000
  001100
  000011
(\D is the 1-(6,2,1) design @-@ @-@ @-@, and \D\T is a 1-(3,1,2) structure;
that's a degenerate example because it's twice the complete design -- can
you find a nondegenerate example?)
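If you like to check such things by machine, here is a quick plain-Python
sanity check of Def. 1.10 and the first parts of (1.11) on the BUD/BYE/...
example above (a sketch; the variable names are mine, not the notes'):

```python
points = "BDEORUY"
blocks = ["BUD", "BYE", "DOE", "DRY", "ORB", "RUE", "YOU"]

# Incidence matrix: rows <--> blocks, columns <--> points (Def. 1.10).
M = [[1 if p in blk else 0 for p in points] for blk in blocks]

# (1.11): constant row sums k = 3 (each block has 3 points)...
assert all(sum(row) == 3 for row in M)

# ...and constant column sums r = 3 (each point lies in 3 blocks).
cols = list(zip(*M))          # the transpose, i.e. the incidence matrix of \D\T
assert all(sum(col) == 3 for col in cols)

# \D is a design (no repeated rows), and so is \D\T (no repeated columns).
assert len(set(map(tuple, M))) == len(M)
assert len(set(cols)) == len(cols)
```

The transpose `cols` is exactly the incidence matrix of the dual structure,
so the last two assertions are the no-repeated-rows/columns criterion above.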
In any case the dual of a 1-structure is a 1-structure, and -- as ought to
be the case with a notion of duality -- (\D\T)\T is canonically isomorphic
with \D.

OK: what if we actually have a 2-design, such as \Pi_2?  Check that for
\Pi_2 the dual \D\T is also a 2-design, indeed with the same parameters.
This isn't always true, but it does suggest something that is.  Let's see:
 @ if also t=2: each pair of distinct points is in \lam blocks
   <--> any two *different* columns have dot product \lam.
The dot product of a column with itself is again r, so ...

1.11 iii)  M\T M = (r-\lam) I + \lam J   (v*v matrices)

where J = the all-1's matrix.  (Likewise if t>=2, replacing \lam by
\lam_2.)  Note that this excludes repeated columns unless r = \lam, which
is (not surprisingly) impossible except in trivial cases, as we soon see.
The first two properties in (1.11) can also be expressed in terms of J
(of the appropriate size in each place; both products are b*v matrices):

1.11 ii)  M J = k J
1.11 i)   J M = r J

That might seem like just bookkeeping, but (iii) lets us actually do
nontrivial things using linear algebra (and then (i) and (ii) will help
too).  We need:

Lemma 1.12 (p.4): for the identity and all-1's matrices I, J of order n
we have
  det(xI + yJ) = (x + ny) x^(n-1).
In particular, xI + yJ has rank n iff x and x+ny are both nonzero.
[NB it's _a priori_ a homogeneous polynomial of degree n; also det(I+yM)
is a polynomial in y of degree rank(M), and since J has rank 1 it had to
be linear in y.]
Pf.: xI + yJ is symmetric, so the spectral theorem applies (orthonormal
eigenvectors, det = product of eigenvalues).  The all-1's vector has
eigenvalue x+ny, and its orthogonal complement has dimension n-1 and
eigenvalue x, etc.  [Yes, Lemma 1.12 can also be obtained directly by row
operations.]

In our case x+ny is certainly nonzero: it's r + (v-1)\lam, and both terms
are positive.  What about x = r-\lam?  Well, we saw that
  (1.9)  r (k-1) = (v-1) \lam
so
  (r-\lam) (k-1) = (v-1) \lam - (k-1) \lam = (v-k) \lam,
so as long as k>1 (remember we must have k>=t for nontriviality) and k<v
we get r-\lam > 0.
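Both (1.11 iii) and Lemma 1.12 can be checked numerically on the
Fano-plane example above (v=7, k=r=3, \lam=1).  A sketch in plain Python
with exact rational arithmetic; the helper `det` and all names are mine:

```python
from fractions import Fraction

points, blocks = "BDEORUY", ["BUD", "BYE", "DOE", "DRY", "ORB", "RUE", "YOU"]
M = [[1 if p in blk else 0 for p in points] for blk in blocks]
v, r, lam = 7, 3, 1

# (1.11 iii): M\T M = (r-lam) I + lam J.  Entry (i,j) is the dot product
# of columns i and j: r on the diagonal, lam off it.
for i in range(v):
    for j in range(v):
        dot = sum(M[a][i] * M[a][j] for a in range(len(blocks)))
        assert dot == (r - lam) * (i == j) + lam

def det(A):
    """Determinant by exact (Fraction-valued) Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in A]
    n, d = len(A), Fraction(1)
    for c in range(n):
        piv = next((i for i in range(c, n) if A[i][c] != 0), None)
        if piv is None:
            return Fraction(0)           # singular: a zero column remains
        if piv != c:
            A[c], A[piv] = A[piv], A[c]  # row swap flips the sign
            d = -d
        d *= A[c][c]
        for i in range(c + 1, n):
            f = A[i][c] / A[c][c]
            A[i] = [A[i][j] - f * A[c][j] for j in range(n)]
    return d

# Lemma 1.12 at x = r-lam, y = lam, n = v: det(xI + yJ) = (x+ny) x^(n-1).
x, y = r - lam, lam
A = [[x * (i == j) + y for j in range(v)] for i in range(v)]
assert det(A) == (x + v * y) * x ** (v - 1)
```

Here x = 2 and x + vy = 9, both nonzero, matching the discussion above:
this M\T M is invertible.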
Hence:

Theorem 1.14 (p.5) [Fisher's inequality]  In a nonempty 2-design with k<v
we have b >= v.
Proof: We have written a nondegenerate v*v matrix as a product of two
matrices each of rank at most b.  QED

What if equality holds?  First of all M is a square matrix, and (1.8)
bk = vr means that b=v iff k=r, so (1.11) means M and J commute.  But then
M commutes with M\T M, since that's a linear combination of I and J, so
M M\T M = M\T M M, and since M is invertible we conclude M M\T = M\T M,
which is already known (1.11 again) to be (r-\lam) I + \lam J.  But this
means that every two *blocks* have \lam points in common!  That in turn
means that \D\T is also a 2-design.  To summarize:

Theorem 1.15 (p.5)  In a 2-design with k<v, the following are equivalent:
 a) b = v;
 b) r = k;
 c) every two distinct blocks meet in exactly \lam points;
 d) every two distinct blocks meet in a constant number of points;
 e) the dual \D\T is also a 2-design.
Proof: a <==> b follows from bk=rv (b/v = r/k).  We've just shown that
these imply c, and c ==> d <==> e are obvious.  Finally for e ==> a: if
\D\T is a 2-design then it satisfies the hypotheses of Fisher's inequality
(notably the condition "block size < number of points", which for \D\T
reads r < b, and that holds because r = bk/v < b when k < v), so Fisher
applied to \D\T gives v >= b, whence b=v.  QED

A 2-design satisfying these conditions is said to be _square_ (because the
incidence matrix is square; again the book warns that the literature also
contains other terminology for this).  You might find problem 2 on the
homework easier now.
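The equality case is exactly what the Fano-plane example exhibits: b = v =
7 and k = r = 3, so it is a square 2-design.  A short check of Theorem
1.15(c) on it (a sketch; names are mine):

```python
points, blocks = "BDEORUY", ["BUD", "BYE", "DOE", "DRY", "ORB", "RUE", "YOU"]
M = [[1 if p in blk else 0 for p in points] for blk in blocks]
b, r, lam = 7, 3, 1

# M M\T = (r-lam) I + lam J: entry (i,j) is the dot product of rows i and
# j, i.e. the number of points blocks i and j share -- r on the diagonal,
# lam off it, mirroring (1.11 iii) for M\T M.
for i in range(b):
    for j in range(b):
        dot = sum(M[i][a] * M[j][a] for a in range(len(points)))
        assert dot == (r - lam) * (i == j) + lam

# The same fact, stated set-theoretically: any two distinct blocks
# (lines of the Fano plane) meet in exactly lam = 1 point.
assert all(len(set(blocks[i]) & set(blocks[j])) == lam
           for i in range(b) for j in range(b) if i != j)
```

So the dual is again a 2-design, as Theorem 1.15(e) predicts; for a
projective plane it even has the same parameters.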