Standard

Representational MDL Framework for Improving Learning Power of Neural Network Formalisms. / Potapov, A.; Peterson, M.

Artificial Intelligence Applications and Innovations IFIP Advances in Information and Communication Technology. Springer Nature, 2012. pp. 68-77.

Research output: Publications in books, reports, collections, conference proceedings › Article in an anthology › research

Harvard

Potapov, A & Peterson, M 2012, Representational MDL Framework for Improving Learning Power of Neural Network Formalisms. in Artificial Intelligence Applications and Innovations IFIP Advances in Information and Communication Technology. Springer Nature, pp. 68-77.

APA

Potapov, A., & Peterson, M. (2012). Representational MDL Framework for Improving Learning Power of Neural Network Formalisms. In Artificial Intelligence Applications and Innovations IFIP Advances in Information and Communication Technology (pp. 68-77). Springer Nature.

Vancouver

Potapov A, Peterson M. Representational MDL Framework for Improving Learning Power of Neural Network Formalisms. In: Artificial Intelligence Applications and Innovations IFIP Advances in Information and Communication Technology. Springer Nature. 2012. pp. 68-77.

Author

Potapov, A. ; Peterson, M. / Representational MDL Framework for Improving Learning Power of Neural Network Formalisms. Artificial Intelligence Applications and Innovations IFIP Advances in Information and Communication Technology. Springer Nature, 2012. pp. 68-77.

BibTeX

@inbook{3cb3bdb5ba4041dd9def6aa0b43bdd38,
title = "Representational MDL Framework for Improving Learning Power of Neural Network Formalisms",
abstract = "The minimum description length (MDL) principle is one of the well-known solutions to the overlearning problem, specifically for artificial neural networks (ANNs). Its extension, the representational MDL (RMDL) principle, takes into account that models in machine learning are always constructed within some representation. In this paper, the optimization of ANN formalisms as information representations using the RMDL principle is considered. A novel type of ANN is proposed by extending linear recurrent ANNs with nonlinear “synapse to synapse” connections. Most of the elementary functions are representable with these networks (in contrast to classical ANNs), which makes them easily learnable from training datasets according to a developed method of ANN architecture optimization. The methodology for comparing the quality of different representations is illustrated by applying the developed method to time series prediction and robot control.",
author = "A. Potapov and M. Peterson",
year = "2012",
language = "English",
issn = "1868-4238",
pages = "68--77",
booktitle = "Artificial Intelligence Applications and Innovations IFIP Advances in Information and Communication Technology",
publisher = "Springer Nature",
address = "Germany",
}

RIS

TY - CHAP

T1 - Representational MDL Framework for Improving Learning Power of Neural Network Formalisms

AU - Potapov, A.

AU - Peterson, M.

PY - 2012

Y1 - 2012

N2 - The minimum description length (MDL) principle is one of the well-known solutions to the overlearning problem, specifically for artificial neural networks (ANNs). Its extension, the representational MDL (RMDL) principle, takes into account that models in machine learning are always constructed within some representation. In this paper, the optimization of ANN formalisms as information representations using the RMDL principle is considered. A novel type of ANN is proposed by extending linear recurrent ANNs with nonlinear “synapse to synapse” connections. Most of the elementary functions are representable with these networks (in contrast to classical ANNs), which makes them easily learnable from training datasets according to a developed method of ANN architecture optimization. The methodology for comparing the quality of different representations is illustrated by applying the developed method to time series prediction and robot control.

AB - The minimum description length (MDL) principle is one of the well-known solutions to the overlearning problem, specifically for artificial neural networks (ANNs). Its extension, the representational MDL (RMDL) principle, takes into account that models in machine learning are always constructed within some representation. In this paper, the optimization of ANN formalisms as information representations using the RMDL principle is considered. A novel type of ANN is proposed by extending linear recurrent ANNs with nonlinear “synapse to synapse” connections. Most of the elementary functions are representable with these networks (in contrast to classical ANNs), which makes them easily learnable from training datasets according to a developed method of ANN architecture optimization. The methodology for comparing the quality of different representations is illustrated by applying the developed method to time series prediction and robot control.

M3 - Article in an anthology

SN - 1868-4238

SP - 68

EP - 77

BT - Artificial Intelligence Applications and Innovations IFIP Advances in Information and Communication Technology

PB - Springer Nature

ER -

ID: 4622166