Mathematics and Computer Studies (Conference proceedings)
Permanent URI for this collection: https://dspace.mic.ul.ie/handle/10395/2536
Recent Submissions
Symmetric powers of trace forms on symbol algebras (Université D'Artois, 2013). Flatley, Ronan.

Moment estimation of measurement errors (NEDETAS, 2011). O'Driscoll, Diarmuid; Ramirez, Donald E.
The slope of the best-fit line obtained by minimizing a function of the squared vertical and horizontal errors is a root of a polynomial of degree four. We use second-order and fourth-order moment equations to estimate the ratio of the error variances in the measurement error model, and this estimate is used to introduce two new estimators. A simulation study shows an improvement in bias and mean squared error for each of these new estimators over the ordinary least squares estimator. (A numerical sketch of the quartic slope equation follows this list.)

Revisiting some design criteria (Athens Institute for Education and Research, 2015). O'Driscoll, Diarmuid; Ramirez, Donald E.
We address the problem that the A (trace) design criterion is not scale invariant and often disagrees with the D (determinant) design criterion. We consider the canonical moment matrix CM, using the trace of its inverse as the canonical trace (CA) design criterion and the determinant of its inverse as the canonical determinant (CD) design criterion. For designs which contain higher-order terms, we note that the determinant of the canonical moment matrix gives a measure of the collinearity between the lower-order terms and the higher-order terms. (A numerical illustration of the scale-invariance issue follows this list.)

Limitations of the least squares estimators; a teaching perspective (Athens Institute for Education and Research, 2016). O'Driscoll, Diarmuid; Ramirez, Donald E.
The standard linear regression model can be written as Y = Xβ + ε, with X a full-rank n × p matrix and L(ε) = N(0, σ²Iₙ). The least squares estimator is β̂ = (X′X)⁻¹X′Y with variance-covariance matrix Cov(β̂) = σ²(X′X)⁻¹, where Var(εᵢ) = σ². The diagonal terms of Cov(β̂) are the variances of the least squares estimators β̂ᵢ, 0 ≤ i ≤ p − 1, and the Gauss-Markov theorem states that β̂ is the best linear unbiased estimator. However, the OLS solution requires that (X′X)⁻¹ be accurately computed, and ill conditioning can lead to very unstable solutions. Tikhonov, A.N. (1943) first introduced the idea of regularisation to solve ill-posed problems by introducing additional information which constrains (bounds) the solutions. Specifically, Hoerl, A.E. (1959) added a constraint to the least squares problem as follows: minimize ||Y − Xβ||² subject to the constraint ||β||² = r² for fixed r, and dubbed this procedure ridge regression. This paper gives a brief overview of ridge regression and examines the performance of three types of ridge estimators: the ridge estimators of Hoerl, A.E. (1959), the surrogate estimators of Jensen, D.R. and Ramirez, D.E. (2008) and the raise estimators of Garcia, C.B., Garcia, J. and Soto, J. (2011). (A minimal ridge sketch follows this list.)
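Below is a minimal numerical sketch, in the spirit of "Moment estimation of measurement errors", of how minimizing a weighted combination of squared vertical and horizontal errors leads to a quartic in the slope. The convex weight lam, the helper oblique_slope and the simulated data are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def oblique_slope(x, y, lam=0.5):
    """Slope minimizing lam*(vertical SSE) + (1 - lam)*(horizontal SSE).

    With the intercept profiled out, the first-order condition is a
    quartic in the slope b (illustrative weighting only).
    """
    xc, yc = x - x.mean(), y - y.mean()
    sxx, sxy, syy = xc @ xc, xc @ yc, yc @ yc
    # d/db [lam*V(b) + (1-lam)*V(b)/b^2] = 0 with V(b) = syy - 2b*sxy + b^2*sxx
    # reduces to: lam*sxx*b^4 - lam*sxy*b^3 + (1-lam)*sxy*b - (1-lam)*syy = 0
    roots = np.roots([lam * sxx, -lam * sxy, 0.0,
                      (1 - lam) * sxy, -(1 - lam) * syy])
    real = roots.real[np.abs(roots.imag) < 1e-8]
    V = lambda b: syy - 2 * b * sxy + b * b * sxx
    return min(real, key=lambda b: lam * V(b) + (1 - lam) * V(b) / (b * b))

# Simulation in the spirit of the paper's study: errors in both variables.
rng = np.random.default_rng(1)
xi = rng.normal(0.0, 1.0, 500)             # true regressor
x = xi + rng.normal(0.0, 0.5, 500)         # regressor measured with error
y = 2.0 * xi + rng.normal(0.0, 0.5, 500)   # true slope is 2
xc, yc = x - x.mean(), y - y.mean()
print((xc @ yc) / (xc @ xc), oblique_slope(x, y))  # OLS is attenuated; the oblique fit less so
```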

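The scale-invariance problem that motivates the canonical criteria in "Revisiting some design criteria" can be seen with a toy example. The sketch below compares two hypothetical two-point designs for a straight-line fit under a change of units in x: the A (trace) ranking flips with the scale c, while the D (determinant) ranking does not. The canonical CM, CA and CD criteria themselves are not implemented here.

```python
import numpy as np

def info_matrix(points):
    """Moment matrix X'X for the straight-line model y = b0 + b1*x."""
    X = np.column_stack([np.ones(len(points)), points])
    return X.T @ X

def A_crit(M):   # trace of the inverse; smaller is better
    return np.trace(np.linalg.inv(M))

def D_crit(M):   # determinant of the inverse; smaller is better
    return np.linalg.det(np.linalg.inv(M))

d1 = np.array([-1.0, 1.0])   # hypothetical design 1
d2 = np.array([0.0, 3.0])    # hypothetical design 2
for c in (1.0, 0.3):         # c rescales x, e.g. a change of units
    M1, M2 = info_matrix(c * d1), info_matrix(c * d2)
    print(f"c={c}: A: {A_crit(M1):.2f} vs {A_crit(M2):.2f}, "
          f"D: {D_crit(M1):.3f} vs {D_crit(M2):.3f}")
# c=1.0: A prefers design 1, D prefers design 2.
# c=0.3: A now prefers design 2; the D ranking is unchanged.
```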

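As a sketch of the regularisation idea in the last abstract: the constrained problem of minimizing ||Y − Xβ||² subject to ||β||² = r² leads, in its penalized (Lagrangian) form, to the familiar ridge estimator β̂(k) = (X′X + kI)⁻¹X′Y. The near-collinear design below is an illustrative assumption; the surrogate and raise estimators compared in the paper are not shown.

```python
import numpy as np

def ridge(X, y, k):
    """Penalized form of the constrained least squares problem:
    beta(k) = (X'X + k*I)^{-1} X'y, shrinking the OLS solution."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=1e-3, size=100)   # nearly collinear column
X = np.column_stack([x1, x2])                # X'X is ill conditioned
y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.1, size=100)

b_ols = np.linalg.solve(X.T @ X, X.T @ y)    # unstable: coefficients blow up
b_ridge = ridge(X, y, k=0.1)                 # stabilized, near [1, 1]
print(b_ols, b_ridge)
```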