Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review
Construction of regression experiment optimal plan using parallel computing. / Vladimirova, Ludmila; Fatyanova, Irina.
2015 International Conference on "Stability and Control Processes" in Memory of V.I. Zubov, SCP 2015 - Proceedings. Institute of Electrical and Electronics Engineers Inc., 2015. p. 361-363, 7342140.
TY - GEN
T1 - Construction of regression experiment optimal plan using parallel computing
AU - Vladimirova, Ludmila
AU - Fatyanova, Irina
PY - 2015/11/30
Y1 - 2015/11/30
N2 - In this paper, we consider classical second-order linear regression, whose unknown parameters are usually estimated by the method of least squares. The distribution of the error of the parameter vector estimate depends on the choice of the experimental plan. This choice is made so as to minimize the generalized variance of the parameter estimates or, equivalently, to maximize the determinant of the information matrix. To solve this extremal problem, a random search based on the normal distribution is used. The method takes information about the objective function into account through the covariance matrix. The method is iterative: at each iteration the search domain is gradually contracted around the point recognized as most promising at the previous iteration, yielding a self-training method (a method with a 'memory'). The algorithm is simple and can be applied to search domains of large dimension. In addition, it is well suited to parallelization, since the numerical statistical trials can be distributed among processes [1, 2].
AB - In this paper, we consider classical second-order linear regression, whose unknown parameters are usually estimated by the method of least squares. The distribution of the error of the parameter vector estimate depends on the choice of the experimental plan. This choice is made so as to minimize the generalized variance of the parameter estimates or, equivalently, to maximize the determinant of the information matrix. To solve this extremal problem, a random search based on the normal distribution is used. The method takes information about the objective function into account through the covariance matrix. The method is iterative: at each iteration the search domain is gradually contracted around the point recognized as most promising at the previous iteration, yielding a self-training method (a method with a 'memory'). The algorithm is simple and can be applied to search domains of large dimension. In addition, it is well suited to parallelization, since the numerical statistical trials can be distributed among processes [1, 2].
KW - Covariance matrices
KW - Linear programming
KW - Monte Carlo methods
KW - Parallel processing
KW - Physics
KW - Publishing
KW - Search problems
UR - http://www.scopus.com/inward/record.url?scp=84960121716&partnerID=8YFLogxK
U2 - 10.1109/SCP.2015.7342140
DO - 10.1109/SCP.2015.7342140
M3 - Conference contribution
AN - SCOPUS:84960121716
SP - 361
EP - 363
BT - 2015 International Conference on "Stability and Control Processes" in Memory of V.I. Zubov, SCP 2015 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - International Conference on "Stability and Control Processes" in Memory of V.I. Zubov, SCP 2015
Y2 - 5 October 2015 through 9 October 2015
ER -
ID: 11351683
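
The abstract above outlines an adaptive random search: candidate plans are drawn from a normal distribution centred at the currently best plan, the most promising trial becomes the new centre, the search spread is contracted, and the statistical trials are distributed among worker processes. The sketch below is not the authors' code; it only illustrates that scheme for maximizing the information-matrix determinant of a second-order regression in one variable. The number of design points, the design region, the use of a scalar spread instead of a full covariance matrix, and all numerical settings are assumptions made for illustration.

# Minimal sketch (assumed setup, not the authors' implementation):
# adaptive normal random search for a design plan maximizing det(F^T F),
# with trial evaluations distributed across processes.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

N_POINTS = 6          # number of design points (assumed)
DOMAIN = (-1.0, 1.0)  # design region (assumed)

def information_determinant(plan):
    # det of the information matrix F^T F for y = b0 + b1*x + b2*x^2
    x = np.clip(plan, *DOMAIN)
    F = np.column_stack([np.ones_like(x), x, x ** 2])
    return float(np.linalg.det(F.T @ F))

def sample_and_score(args):
    # one batch of normally distributed trial plans around the current centre
    center, sigma, batch, seed = args
    rng = np.random.default_rng(seed)
    trials = rng.normal(center, sigma, size=(batch, center.size))
    scores = [information_determinant(t) for t in trials]
    best = int(np.argmax(scores))
    return scores[best], trials[best]

def optimize(n_iter=30, batch=200, workers=4, contraction=0.8):
    rng = np.random.default_rng(0)
    center = rng.uniform(*DOMAIN, size=N_POINTS)   # initial plan
    sigma = 0.5 * (DOMAIN[1] - DOMAIN[0])          # initial search spread
    best_val = information_determinant(center)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for it in range(n_iter):
            args = [(center, sigma, batch, 1000 * it + w) for w in range(workers)]
            for val, plan in pool.map(sample_and_score, args):
                if val > best_val:                 # remember the most promising point
                    best_val, center = val, np.clip(plan, *DOMAIN)
            sigma *= contraction                   # contract the search domain
    return center, best_val

if __name__ == "__main__":
    plan, det_val = optimize()
    print("design points:", np.sort(plan))
    print("det(F^T F):", det_val)

The isotropic contraction of sigma is a simplification of the covariance-based update mentioned in the abstract; the parallel structure (independent batches of statistical trials per process, merged once per iteration) is what makes the approach easy to distribute.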