Feb. 24 -- Moore graphs; two more conditions: "absolute" and Krein (pages 41-42)

Recall: the adjacency matrix A of a strongly regular graph with parameters (n,k,\lam,\mu) satisfies A\j = k\j and (p.37, 2.13)

  A^2 = k I + \lam A + \mu (J-I-A).

Thus any eigenvalue \rho associated to an eigenvector u orthogonal to \j is a root of the quadratic

  \rho^2 u = (k + \lam \rho - \mu(\rho+1)) u,

whence these eigenvalues are the distinct real numbers

  r = (\lam - \mu + \sqrt{D}) / 2,   s = (\lam - \mu - \sqrt{D}) / 2,

where D = (\lam-\mu)^2 + 4(k-\mu), with multiplicities

  f, g = (1/2) ( n - 1 \pm ((n-1)(\mu-\lam) - 2k) / \sqrt{D} )

with D as above. Hence

Theorem (p.37, 2.16 = integrality condition): f and g are nonnegative integers.

Example (see exercise 10, page 46): a Moore graph (of girth 5) is strongly regular with parameters (k^2+1, k, 0, 1). So Type I iff k=2 (seen already), and otherwise D = (0-1)^2 + 4(k-1) = 4k-3 is a square, necessarily odd, so 4k-3 = (2m+1)^2 and k = m^2+m+1 for some m >= 1. m=1 and m=2 yield k=3 (Petersen) and k=7 (Hoffman-Singleton) respectively. Any m is allowed by rationality, but integrality limits us to m=1,2,7, so k = 3, 7, or 57. Here the numerator (n-1)(\mu-\lam) - 2k is k^2 - 2k = (m^2+m+1)(m^2+m-1), while \sqrt{D} = 2m+1, so we need

  2m+1 | (m^2+m+1)(m^2+m-1).

We've already seen the technique: write the RHS as a multiple of the LHS plus a remainder, which is a constant that 2m+1 must divide. Trouble is, the remainder here is -15/16 (the value at the root m = -1/2). OK, but if we multiply the RHS by 16 then it's still a multiple of 2m+1, and

  16(m^2+m+1)(m^2+m-1) = (8m^3+12m^2+2m-1)(2m+1) - 15,

so 2m+1 | 15, whence 2m+1 is 1, 3, 5, or 15, so m = [0,] 1, 2, or 7 as claimed.
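The divisibility argument above is easy to confirm numerically; here is a quick sketch (not from the text) that checks the polynomial identity and enumerates the allowed m:

```python
# Verify 16(m^2+m+1)(m^2+m-1) = (8m^3+12m^2+2m-1)(2m+1) - 15,
# then find all small m for which 2m+1 divides (m^2+m+1)(m^2+m-1).

def product(m):
    return (m*m + m + 1) * (m*m + m - 1)

# the identity holds identically in m, so spot-check many integers
for m in range(-50, 51):
    assert 16 * product(m) == (8*m**3 + 12*m**2 + 2*m - 1) * (2*m + 1) - 15

# 2m+1 divides the product exactly when 2m+1 divides 15
good = [m for m in range(1, 1000) if product(m) % (2*m + 1) == 0]
print(good)                         # -> [1, 2, 7]
print([m*m + m + 1 for m in good])  # -> [3, 7, 57]  (the degrees k)
```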
We can check that in each case the division by 16, and later by 2, introduces no denominators -- indeed the division by 16 had to work because 2m+1 is odd -- so the integrality criterion allows any of k = 2, 3, 7, 57; as noted already the first three are known to arise, each uniquely up to isomorphism, and the last is a famous open problem.

======================================================================

Two more conditions. The first arises from the observation that we can project the n standard unit vectors to the r- and s-eigenspaces, obtaining a configuration of vectors of the same length with only two different inner products (other than the squared length), depending on whether the corresponding vertices are adjacent or disjoint. (The text outlines a different construction.) Indeed if we write each e_x as (1/n) \j + u_x + v_x with u_x, v_x in the r- and s-eigenspaces respectively, we have

  1 = <e_x, e_x> = 1/n + <u_x, u_x> + <v_x, v_x>,
  0 = <e_x, A e_x> = k/n + r <u_x, u_x> + s <v_x, v_x>,

which determines <u_x, u_x> and <v_x, v_x>; likewise for distinct x and y

  0 = <e_x, e_y> = 1/n + <u_x, u_y> + <v_x, v_y>,
  a_{x,y} = <e_x, A e_y> = k/n + r <u_x, u_y> + s <v_x, v_y>

(where the entry a_{x,y} of A is 1 or 0 according as x,y are adjacent or disjoint), and this determines <u_x, u_y> and <v_x, v_y> in terms of a_{x,y}.

Theorem (2.23+, p.41) [Delsarte-Goethals-Seidel 1977]: if there is a set S of unit vectors in some inner product space R^f for which <v, v'> (v, v' distinct) takes on only two distinct values then #S <= f(f+3)/2.

Proof: Let b, c be the two values of <v, v'>; necessarily b, c < 1. [The text uses Greek letters \beta, \gamma for our b, c.] Let f_v be the quadratic polynomial function

  f_v(x) = (<x, v> - b) (<x, v> - c) + b c (<x, x> - 1)

on R^f. Thanks to the last term these live in a space of dimension f(f+3)/2 [the constant term vanishes]. But evaluation at the v' in S shows that they are linearly independent: f_v(v') = 0 for v' != v, while f_v(v) = (1-b)(1-c) != 0. Hence their number #S is at most f(f+3)/2 as claimed.

[Similarly, for m distinct inner products there's an upper bound on #S that grows as f^m/m!.
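The 2x2 systems above are easy to solve exactly. As a sanity check (not from the text), take the Petersen parameters (n,k,\lam,\mu) = (10,3,0,1), compute r, s, f, g from the recalled formulas, and solve for <u_x,u_x> and <v_x,v_x>; they come out to f/n and g/n, as they must, since the diagonal entries of the rank-f projection onto the r-eigenspace are all equal with trace f:

```python
from fractions import Fraction as F
from math import isqrt

n, k, lam, mu = 10, 3, 0, 1
D = (lam - mu)**2 + 4*(k - mu)      # = 9, a perfect square here
sqrtD = isqrt(D)
r = F(lam - mu + sqrtD, 2)          # = 1
s = F(lam - mu - sqrtD, 2)          # = -2
f = (n - 1 + F((n - 1)*(mu - lam) - 2*k, sqrtD)) / 2   # = 5
g = (n - 1 - F((n - 1)*(mu - lam) - 2*k, sqrtD)) / 2   # = 4

# solve  1 = 1/n + uu + vv  and  0 = k/n + r*uu + s*vv  for uu, vv
uu = (s*(1 - F(1, n)) + F(k, n)) / (s - r)
vv = (1 - F(1, n)) - uu
print(uu, vv)                       # -> 1/2 2/5, i.e. f/n and g/n

assert uu == F(f, n) and vv == F(g, n)
```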
The best bound of this form requires some finesse, with adjustments analogous to "+ b c (<x, x> - 1)" but more involved, and we won't say more about it here.

Yet another variant: suppose we have a set S of n *pairs* {v,-v} of unit vectors in R^f, with only one pair {c,-c} of inner products other than 1 and -1. Then

  f_v(x) = (<x, v>^2 - c^2) - c^2 (<x, x> - 1)  [ = <x, v>^2 - c^2 <x, x> ]

is an *even* quadratic polynomial without constant term (and thus homogeneous) that vanishes on all of S except {v,-v}, so the number of such pairs is at most the dimension (f^2+f)/2 of the space of homogeneous quadratic polynomials in f variables. Here there are more examples of configurations attaining the bound, starting with the 3 opposite pairs of vertices of a regular hexagon (f=2) or the 6 of a regular icosahedron (f=3). Back to strongly regular graphs...]

Corollary ("absolute bound"): in a strongly regular graph, n is at most min(f(f+3)/2, g(g+3)/2). Proof: apply 2.23 to the normalized vectors u_x/|u_x| in R^f and v_x/|v_x| in R^g.

Example: the pentagon gives an example of equality with f=2 (it really is a regular pentagon or pentagram in R^2 !). The text gives another example where otherwise plausible parameters are excluded. In general equality is rare because f and g are typically both near their average (n-1)/2. I think only two further examples are known: (f,n)=(6,27) [the Schl\"{a}fli graph or its complement, related to the exceptional E6 root system -- coming attraction] and (f,n)=(22,275) [the McLaughlin graph or its complement, related to a sporadic simple group in the Conway/Leech family; that's too exotic for us].

The _Krein condition_ or _Krein bound_ (Thm. 2.26, p.42) exploits: (i) a formula for the projection E_r: e_x --> u_x as a linear combination of J, I, and A, and (ii) a trick involving Kronecker products.
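The three known equality cases of the absolute bound can be checked mechanically. The following sketch (not from the text) computes f, g from (n,k,\lam,\mu) via the recalled formulas; the Schlafli and McLaughlin parameter sets (27,16,10,8) and (275,112,30,56) are the standard ones:

```python
from math import isqrt

def multiplicities(n, k, lam, mu):
    # f, g = (1/2)(n - 1 +- ((n-1)(mu-lam) - 2k)/sqrt(D))
    D = (lam - mu)**2 + 4*(k - mu)
    num = (n - 1)*(mu - lam) - 2*k
    if num == 0:                    # Type I: f = g even when D isn't a square
        return (n - 1)//2, (n - 1)//2
    sqrtD = isqrt(D)
    assert sqrtD*sqrtD == D, "multiplicities would be irrational"
    q, rem = divmod(num, sqrtD)
    assert rem == 0
    return (n - 1 + q)//2, (n - 1 - q)//2

# pentagon, Schlafli graph, McLaughlin graph
for params in [(5, 2, 0, 1), (27, 16, 10, 8), (275, 112, 30, 56)]:
    n = params[0]
    f, g = multiplicities(*params)
    m = min(f, g)
    print(params, (f, g), n == m*(m + 3)//2)   # equality holds in all three
```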
These are individually of interest but I have no special wisdom to impart on the final result, except that it lets us exclude a few more possible parameters, as you're doing for Exercise 3 on page 45 (= prob. 4). For the record the bound says

  (r+1) (k + r + 2rs) <= (k+r) (s+1)^2

and likewise with r and s switched.

(i) Since we have

  J e_x = \j,
  I e_x = (1/n) \j + u_x + v_x,
  A e_x = (k/n) \j + r u_x + s v_x,

and r, s are distinct, we can extract the map E_r: e_x --> u_x as an explicit linear combination of J, I, A, namely

  E_r = [((k-s)/n) J + (sI - A)] / (s-r).

In particular the (v,v') entry again depends only on whether v, v' are the same, adjacent, or disjoint. But this matrix is positive semidefinite: <E_r v, v> >= 0 for all v (with equality iff v is in the span of \j and the s-eigenspace). Now use:

Lemma 2.25 (p.42): if a matrix M = (m_{ij}) is positive semidefinite then so is M o M = (m_{ij}^2). Proof: M o M is symmetric, and the self-adjoint linear transformation associated to it is the restriction to a coordinate subspace of the tensor square of the transformation M, QED.

Now write M o M (with M = E_r) as a linear combination of J, I, A and compute its eigenvalues (it acts on each eigenspace of A by scalar multiplication), obtaining the Krein bound from the condition that these eigenvalues be nonnegative.

Hint / Word to the wise: the decomposition e_x = (1/n) \j + u_x + v_x and the associated inner-product computations have other uses; e.g. try the sum of u_x or v_x over x in a (co-)clique.
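The Krein inequalities are easy to test once r and s are known. As an illustration (the parameter set is my own example, not from the text), (n,k,\lam,\mu) = (28,9,0,4) passes the integrality condition (f,g = 21,6) but fails the second Krein inequality, so no such graph exists:

```python
from math import isqrt

def krein(n, k, lam, mu):
    # returns (first inequality holds, second inequality holds)
    D = (lam - mu)**2 + 4*(k - mu)
    sqrtD = isqrt(D)
    assert sqrtD*sqrtD == D and (lam - mu + sqrtD) % 2 == 0  # Type II case
    r = (lam - mu + sqrtD)//2
    s = (lam - mu - sqrtD)//2
    k1 = (r + 1)*(k + r + 2*r*s) <= (k + r)*(s + 1)**2
    k2 = (s + 1)*(k + s + 2*r*s) <= (k + s)*(r + 1)**2
    return k1, k2

print(krein(28, 9, 0, 4))    # -> (True, False): parameters excluded
print(krein(10, 3, 0, 1))    # Petersen -> (True, True)
```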