Abstract

As is known, the task of constructing a regression function from observed data is of great practical importance. In the case of additive observation errors at points whose coordinates are given without error, we have y_j = f(X_j) + ε_j, where j = 1, ..., N is the observation number, y_j is the observed value, X_j = (x_1j, ..., x_sj) is the point at which the observation took place, and ε_j is the observation error. It is also assumed that Eε_j = 0. The task is to determine the function f, which is usually assumed to be given parametrically, f(X) = f(X, U), where U are unknown parameters. The problem has obvious connections with problems of approximation of functions.

The report discusses one possible approach that uses the idea of machine learning. It is based on the approximation problem for some function f. Let A be a linear operator acting in a linear normed space F. If A∗ is the adjoint operator of A and the functions φ_j and ψ_j of F satisfy the conditions A∗Aφ_j = s_j²φ_j, AA∗ψ_j = s_j²ψ_j, j = 1, ..., r(A), then among all m-dimensional (m ≤ r(A)) operators A_m the operator Ã_m = ∑_{j=1}^{m} s_j(·, φ_j)ψ_j minimizes the norm ||A − A_m||.

If K = I − A is an operator such that Kf = 0, and in this equality we replace A with its approximation Ã_m, then we obtain an approximation to f of the form f̂ = ∑_{j=1}^{m} s_j(f, φ_j)ψ_j.

The idea of "learning" is as follows. Let K(θ) be a parametric family of operators. Using sampled values of f, we find an operator K_0 = K(θ_0), θ_0 = arg min ||K(θ)||, where θ belongs to Θ. Setting A = I − K_0, we find the corresponding functions φ_j and ψ_j and construct an approximation f̂ for an appropriate m.

Real algorithms use spaces of functions defined at discrete points. A well-known particular case of this approach is SVD time series analysis. Various generalizations of this analysis can be built on the same approach; some examples are given in the report.
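As an illustration of the discrete special case mentioned in the abstract (SVD time series analysis), the following minimal Python sketch reconstructs a sampled signal from a rank-m truncation of the SVD of its trajectory matrix, the discrete analogue of f̂ = ∑ s_j(f, φ_j)ψ_j. This is not the authors' algorithm: the function and parameter names (ssa_approximation, window, m) are illustrative assumptions, and NumPy is assumed to be available.

    import numpy as np

    def ssa_approximation(y, window, m):
        """Illustrative rank-m SSA-style approximation of a sampled signal y.

        Builds the trajectory (Hankel) matrix of the series, keeps the m leading
        singular triples -- the discrete analogue of f_hat = sum_j s_j (f, phi_j) psi_j --
        and maps the truncated matrix back to a series by diagonal averaging.
        """
        N = len(y)
        K = N - window + 1
        # Trajectory matrix: column j is the lagged segment y[j : j + window]
        X = np.column_stack([y[j:j + window] for j in range(K)])

        # SVD of the trajectory matrix; the singular vectors play the role of phi_j, psi_j
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_m = (U[:, :m] * s[:m]) @ Vt[:m, :]  # best rank-m approximation (Eckart-Young)

        # Diagonal (Hankel) averaging: map the rank-m matrix back to a series of length N
        approx = np.zeros(N)
        counts = np.zeros(N)
        for j in range(K):
            approx[j:j + window] += X_m[:, j]
            counts[j:j + window] += 1.0
        return approx / counts

    # Usage: noisy samples of a smooth signal, reconstructed with m = 2 components
    t = np.linspace(0.0, 1.0, 200)
    rng = np.random.default_rng(0)
    y = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)
    y_hat = ssa_approximation(y, window=50, m=2)

The choice of window and m here is arbitrary; in the learning formulation of the abstract they would correspond, roughly, to the parameter θ of the operator family K(θ) and the truncation order m selected from the sampled data.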
Original language: English
Title of host publication: 10th International Workshop on Simulation and Statistics
Subtitle of host publication: Workshop booklet
Place of Publication: Salzburg
Publisher: Universitat Salzburg
Pages: 52
Publication status: Published - Sep 2019
Event: 10th International Workshop on Simulation and Statistics - Salzburg
Duration: 2 Sep 2019 - 6 Sep 2019

Conference

Conference: 10th International Workshop on Simulation and Statistics
Country: Austria
City: Salzburg
Period: 2/09/19 - 6/09/19

Scopus subject areas

  • Mathematics (all)

Cite this

Leora, S. N., & Ermakov, S. M. (2019). On Machine Learning In Regression Analysis. In 10th International Workshop on Simulation and Statistics: Workshop booklet (pp. 52). Salzburg: Universitat Salzburg.

URL: https://datascience.sbg.ac.at/SimStatSalzburg2019/SimStat2019BoA.pdf