Estimation theory is the part of statistics whose goal is to
extract parameters from noise-corrupted observations.
Applications of estimation theory include statistical signal
processing, adaptive filter theory, and adaptive optics, which
allows, for example, image deblurring.

+------------------------------------------------------------
| Parameter estimation problem
+------------------------------------------------------------
The parameter estimation problem is to determine a parameter
vector from a set L of observations. A parameter estimate is a
random vector. The estimation error \epsilon is the difference
between the estimated parameter and the parameter itself. The
mean-squared error is given by the mean squared error matrix
E[\epsilon \epsilon^T], which is a correlation matrix.

+------------------------------------------------------------
| Biased
+------------------------------------------------------------
An estimate is said to be biased if the expected value of the
estimate is different from the actual value of the parameter.

+------------------------------------------------------------
| Asymptotically unbiased
+------------------------------------------------------------
An estimate in statistics is called asymptotically unbiased if
it becomes unbiased in the limit, as the number of data points
goes to infinity.

+------------------------------------------------------------
| Consistent estimate
+------------------------------------------------------------
An estimate in statistics is called consistent if the mean
squared error matrix E[\epsilon \epsilon^T] converges to the
zero matrix in the limit, as the number of data points goes to
infinity.

+------------------------------------------------------------
| Mean squared error matrix
+------------------------------------------------------------
The mean squared error matrix is defined as
E[\epsilon \epsilon^T], where \epsilon is the difference
between the estimated parameter and the parameter itself.

+------------------------------------------------------------
| efficient
+------------------------------------------------------------
An estimator in statistics is called efficient if its
mean-squared error attains the Cramer-Rao bound.

+------------------------------------------------------------
| Cramer-Rao bound
+------------------------------------------------------------
The mean-squared error E[\epsilon \epsilon^T] of any estimate
of a parameter has a lower bound, which is called the
Cramer-Rao bound. For unbiased estimators, the Cramer-Rao
bound gives for each error component \epsilon_i the inequality
E[\epsilon_i^2] \geq (F^{-1})_{ii}, where F is the Fisher
information matrix.

+------------------------------------------------------------
| Fisher information matrix
+------------------------------------------------------------
The Fisher information matrix is defined as the expectation of
the Hessian of -\log(p),
F = E[H(-\log(p))] = E[\nabla \log(p) \, \nabla \log(p)^T],
where p = p(r|\theta) is the conditional probability of the
observations given the parameter.

+------------------------------------------------------------
| maximum likelihood estimate
+------------------------------------------------------------
Maximum likelihood estimation is a technique in statistics for
estimating nonrandom parameters. A maximum likelihood estimate
is a maximizer of the log-likelihood function
\log(p(r,\theta)). The maximum likelihood estimate is an
asymptotically unbiased and consistent estimate. Furthermore,
it is asymptotically distributed as a Gaussian random
variable.
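To make the last few entries concrete, here is a short worked
calculation (a sketch only, assuming a single observation r of
a Gaussian with unknown mean \theta and variance 1, the same
setup as the example that follows). The derivative of the
log-likelihood is

\frac{\partial}{\partial\theta} \log(p(r,\theta)) = r - \theta,

so the Fisher information is

F = E[(r-\theta)^2] = 1,

and the Cramer-Rao bound reads E[\epsilon^2] \geq F^{-1} = 1.
The estimate \hat\theta = r is unbiased with E[\epsilon^2] = 1,
so it attains the bound and is efficient.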
Example. If X is a normally distributed random variable with
unknown mean \theta and variance 1, the likelihood function is
p(r,\theta) = \frac{1}{\sqrt{2\pi}} e^{-(r-\theta)^2/2}
and the log-likelihood function is
\log(p(r,\theta)) = -(r-\theta)^2/2 + C.
The maximum likelihood estimate is \hat\theta = r. For
non-Gaussian random variables, the maximum likelihood estimate
is in general difficult to compute in closed form; a numerical
sketch is given at the end of this file.

This file is part of the Sofia project sponsored by the
Provost's fund for teaching and learning at Harvard University.
There are 9 entries in this file. COUNT: 9
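Since closed-form maximizers such as \hat\theta = r above are
the exception, the following is a minimal numerical sketch of
maximum likelihood estimation. It is an illustration only: the
choice of a Cauchy location parameter, the sample size, and the
use of Python with numpy and scipy.optimize are assumptions,
not part of the glossary above.

import numpy as np
from scipy.optimize import minimize_scalar

# Draw noise-corrupted observations from a Cauchy distribution
# with unknown location parameter theta (scale 1).  For the
# Cauchy location problem the maximum likelihood estimate has
# no closed form.
rng = np.random.default_rng(0)
theta_true = 2.0
r = theta_true + rng.standard_cauchy(size=1000)

def neg_log_likelihood(theta):
    # Negative log-likelihood of the sample, dropping the
    # additive constant n*log(pi), which does not affect the
    # maximizer.
    return np.sum(np.log(1.0 + (r - theta) ** 2))

# The maximum likelihood estimate is the maximizer of the
# log-likelihood, i.e. the minimizer of the negative
# log-likelihood.
result = minimize_scalar(neg_log_likelihood,
                         bounds=(-10.0, 10.0), method="bounded")
print("maximum likelihood estimate:", result.x)

The printed value should be close to theta_true; the same
pattern (write down the log-likelihood and hand its negative to
a numerical optimizer) carries over to other non-Gaussian
models.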