Mathematics and Computer Studies (Conference proceedings)
http://hdl.handle.net/10395/2536

Moment estimation of measurement errors (2011)
http://hdl.handle.net/10395/2539
The slope of the best-fit line obtained by minimizing a function of the squared vertical and horizontal errors is the root of a polynomial of degree four. We use second- and fourth-order moment equations to estimate the ratio of the error variances in the measurement error model, and this estimate is used to introduce two new estimators. A simulation study shows improvement in bias and mean squared error for each of these new estimators over the ordinary least squares estimator.
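The degree-four claim can be sketched numerically. The following is a minimal sketch, not the paper's estimator: it takes one concrete choice of objective, the plain sum of squared vertical and horizontal errors on centered data, g(b) = Σ(yᵢ − bxᵢ)² + Σ(xᵢ − yᵢ/b)². Setting dg/db = 0 and clearing denominators gives Sxx·b⁴ − Sxy·b³ + Sxy·b − Syy = 0, a quartic whose positive real root lies between the vertical-error (OLS) slope and the horizontal-error (inverse regression) slope. All simulation settings are illustrative assumptions.

```python
import numpy as np

# Simulated measurement error model (illustrative values, not from the paper):
# observe x = xi + delta and y = beta*xi + eps, errors in both variables.
rng = np.random.default_rng(0)
n = 500
xi = rng.normal(0.0, 2.0, n)             # unobserved true regressor
beta = 1.5
x = xi + rng.normal(0.0, 1.0, n)         # regressor measured with error
y = beta * xi + rng.normal(0.0, 1.0, n)  # response measured with error

xc, yc = x - x.mean(), y - y.mean()
Sxx, Sxy, Syy = xc @ xc, xc @ yc, yc @ yc

# Minimizing g(b) = sum (yc - b*xc)^2 + sum (xc - yc/b)^2 over b gives the
# stationarity condition  Sxx*b^4 - Sxy*b^3 + Sxy*b - Syy = 0,
# a polynomial of degree four, matching the claim in the abstract.
roots = np.roots([Sxx, -Sxy, 0.0, Sxy, -Syy])
b_hat = [r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0][0]

b_ols = Sxy / Sxx  # vertical errors only: attenuated toward zero
b_inv = Syy / Sxy  # horizontal errors only: inflated
print(b_ols, b_hat, b_inv)
```

For Sxy > 0 the quartic has exactly one positive real root, and it always falls strictly between b_ols and b_inv, so this slope is a compromise between the two one-directional fits.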
Revisiting some design criteria (2015)
http://hdl.handle.net/10395/2538
We address the problem that the A (trace) design criterion is not scale invariant and is often in disagreement with the D (determinant) design criterion. We consider the canonical moment matrix C_M, using the trace of its inverse as the canonical trace (C_A) design criterion and the determinant of its inverse as the canonical determinant (C_D) design criterion. For designs which contain higher-order terms, we note that the determinant of the canonical moment matrix gives a measure of the collinearity between the lower-order terms and the higher-order terms.
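The scale-invariance point can be illustrated with a quick numerical sketch (the two designs and the quadratic model are assumptions for illustration, not taken from the paper). Rescaling the regressor x by s rescales the moment matrix X′X to DMD with D = diag(1, s, s²), so the ratio of determinant criteria between two competing designs is unchanged, while the ratio of trace criteria is not:

```python
import numpy as np

def info_matrix(points, scale=1.0):
    """Moment matrix X'X for the quadratic model with rows (1, x, x^2)."""
    x = scale * np.asarray(points, dtype=float)
    X = np.column_stack([np.ones_like(x), x, x**2])
    return X.T @ X

# Two illustrative candidate designs on [-1, 1] (assumed, not from the paper)
d1 = [-1.0, 0.0, 1.0]
d2 = [-1.0, -0.5, 0.5, 1.0]

ratios = {}
for s in (1.0, 10.0):
    M1, M2 = info_matrix(d1, s), info_matrix(d2, s)
    A1, A2 = np.trace(np.linalg.inv(M1)), np.trace(np.linalg.inv(M2))
    D1, D2 = np.linalg.det(M1), np.linalg.det(M2)
    ratios[s] = (A1 / A2, D1 / D2)
    print(f"scale {s}: A-ratio {A1/A2:.4f}, D-ratio {D1/D2:.4f}")
```

The D-ratio is identical at both scales (det(DMD) = det(D)² det(M) multiplies every design by the same factor), so the D-ranking of designs is scale invariant; the A-ratio changes with the scale, which is exactly the defect the canonical criteria are meant to repair.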
Limitations of the least squares estimators; a teaching perspective (2016)
http://hdl.handle.net/10395/2537
The standard linear regression model can be written as Y = Xβ + ε, with X a full-rank n × p matrix and L(ε) = N(0, σ²Iₙ). The least squares estimator is β̂ = (X′X)⁻¹X′Y, with variance-covariance matrix Cov(β̂) = σ²(X′X)⁻¹, where Var(εᵢ) = σ². The diagonal terms of Cov(β̂) are the variances of the least squares estimators β̂ᵢ, 0 ≤ i ≤ p−1, and the Gauss-Markov theorem states that β̂ is the best linear unbiased estimator. However, the OLS solution requires that (X′X)⁻¹ be accurately computed, and ill-conditioning can lead to very unstable solutions. Tikhonov, A.N. (1943) first introduced the idea of regularisation to solve ill-posed problems by introducing additional information which constrains (bounds) the solutions. Specifically, Hoerl, A.E. (1959) added a constraint term to the least squares problem as follows: minimize ||Y − Xβ||² subject to the constraint ||β||² = r² for fixed r, and dubbed this procedure ridge regression. This paper gives a brief overview of ridge regression and examines the performance of three different types of ridge estimators, namely the ridge estimators of Hoerl, A.E. (1959), the surrogate estimators of Jensen, D.R. and Ramirez, D.E. (2008) and the raise estimators of Garcia, C.B., Garcia, J. and Soto, J. (2011).
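The constrained problem above is equivalent, via a Lagrange multiplier k ≥ 0, to minimizing ||Y − Xβ||² + k||β||², whose closed-form solution is β̂(k) = (X′X + kI)⁻¹X′Y. A minimal sketch of this standard ridge estimator on a deliberately ill-conditioned design (the data, seed, and choice of k are illustrative assumptions, not the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
z = rng.normal(size=n)
# Two nearly collinear columns make X'X badly ill-conditioned
X = np.column_stack([z, z + 1e-4 * rng.normal(size=n)])
beta = np.array([1.0, 1.0])
y = X @ beta + 0.1 * rng.normal(size=n)

def ridge(X, y, k):
    """Ridge estimator (X'X + k*I)^{-1} X'y; k = 0 reduces to OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)    # unstable: near-singular X'X inflates it
b_ridge = ridge(X, y, 0.1)  # shrunk and stable
print(np.linalg.norm(b_ols), np.linalg.norm(b_ridge))
```

In the singular-value decomposition of X, ridge multiplies each OLS coefficient component by dᵢ²/(dᵢ² + k) < 1, so ||β̂(k)|| is strictly smaller than ||β̂_OLS|| for any k > 0; the near-null direction created by the collinear columns is the one most heavily damped.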