Progressive Operational Perceptrons with Memory

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Progressive Operational Perceptrons with Memory. / Thanh Tran, Dat; Kiranyaz, Serkan; Gabbouj, Moncef; Iosifidis, Alexandros.

In: Neurocomputing, Vol. 379, 28.02.2020, p. 172-181.

Harvard

Thanh Tran, D, Kiranyaz, S, Gabbouj, M & Iosifidis, A 2020, 'Progressive Operational Perceptrons with Memory', Neurocomputing, vol. 379, pp. 172-181. https://doi.org/10.1016/j.neucom.2019.10.079

APA

Thanh Tran, D., Kiranyaz, S., Gabbouj, M., & Iosifidis, A. (2020). Progressive Operational Perceptrons with Memory. Neurocomputing, 379, 172-181. https://doi.org/10.1016/j.neucom.2019.10.079

CBE

Thanh Tran D, Kiranyaz S, Gabbouj M, Iosifidis A. 2020. Progressive Operational Perceptrons with Memory. Neurocomputing. 379:172-181. https://doi.org/10.1016/j.neucom.2019.10.079

MLA

Thanh Tran, Dat, et al. "Progressive Operational Perceptrons with Memory." Neurocomputing, vol. 379, 28 Feb. 2020, pp. 172-181, https://doi.org/10.1016/j.neucom.2019.10.079

Vancouver

Thanh Tran D, Kiranyaz S, Gabbouj M, Iosifidis A. Progressive Operational Perceptrons with Memory. Neurocomputing. 2020 Feb 28;379:172-181. https://doi.org/10.1016/j.neucom.2019.10.079

Author

Thanh Tran, Dat ; Kiranyaz, Serkan ; Gabbouj, Moncef ; Iosifidis, Alexandros. / Progressive Operational Perceptrons with Memory. In: Neurocomputing. 2020 ; Vol. 379. pp. 172-181.

Bibtex

@article{10e8260831d94e8298e15791dd8e7be9,
title = "Progressive Operational Perceptrons with Memory",
abstract = "Generalized Operational Perceptron (GOP) was proposed to generalize the linear neuron model used in the traditional Multilayer Perceptron (MLP) by mimicking the synaptic connections of biological neurons showing nonlinear neurochemical behaviours. Previously, Progressive Operational Perceptron (POP) was proposed to train a multilayer network of GOPs which is formed layer-wise in a progressive manner. While achieving superior learning performance over other types of networks, POP has a high computational complexity. In this work, we propose POPfast, an improved variant of POP that significantly reduces the computational complexity of POP, thus accelerating the training time of GOP networks. In addition, we also propose major architectural modifications of POPfast that can augment the progressive learning process of POP by incorporating an information-preserving, linear projection path from the input to the output layer at each progressive step. The proposed extensions can be interpreted as a mechanism that provides direct information extracted from the previously learned layers to the network, hence the term “memory”. This allows the network to learn deeper architectures and better data representations. An extensive set of experiments in human action, object, facial identity and scene recognition problems demonstrates that the proposed algorithms can train GOP networks much faster than POPs while achieving better performance compared to original POPs and other related algorithms.",
keywords = "Generalized operational perceptron, Neural architecture learning, Progressive learning",
author = "{Thanh Tran}, Dat and Serkan Kiranyaz and Moncef Gabbouj and Alexandros Iosifidis",
year = "2020",
month = feb,
day = "28",
doi = "10.1016/j.neucom.2019.10.079",
language = "English",
volume = "379",
pages = "172--181",
journal = "Neurocomputing",
issn = "0925-2312",
publisher = "Elsevier BV",
}

RIS

TY - JOUR

T1 - Progressive Operational Perceptrons with Memory

AU - Thanh Tran, Dat

AU - Kiranyaz, Serkan

AU - Gabbouj, Moncef

AU - Iosifidis, Alexandros

PY - 2020/2/28

Y1 - 2020/2/28

N2 - Generalized Operational Perceptron (GOP) was proposed to generalize the linear neuron model used in the traditional Multilayer Perceptron (MLP) by mimicking the synaptic connections of biological neurons showing nonlinear neurochemical behaviours. Previously, Progressive Operational Perceptron (POP) was proposed to train a multilayer network of GOPs which is formed layer-wise in a progressive manner. While achieving superior learning performance over other types of networks, POP has a high computational complexity. In this work, we propose POPfast, an improved variant of POP that significantly reduces the computational complexity of POP, thus accelerating the training time of GOP networks. In addition, we also propose major architectural modifications of POPfast that can augment the progressive learning process of POP by incorporating an information-preserving, linear projection path from the input to the output layer at each progressive step. The proposed extensions can be interpreted as a mechanism that provides direct information extracted from the previously learned layers to the network, hence the term “memory”. This allows the network to learn deeper architectures and better data representations. An extensive set of experiments in human action, object, facial identity and scene recognition problems demonstrates that the proposed algorithms can train GOP networks much faster than POPs while achieving better performance compared to original POPs and other related algorithms.

AB - Generalized Operational Perceptron (GOP) was proposed to generalize the linear neuron model used in the traditional Multilayer Perceptron (MLP) by mimicking the synaptic connections of biological neurons showing nonlinear neurochemical behaviours. Previously, Progressive Operational Perceptron (POP) was proposed to train a multilayer network of GOPs which is formed layer-wise in a progressive manner. While achieving superior learning performance over other types of networks, POP has a high computational complexity. In this work, we propose POPfast, an improved variant of POP that significantly reduces the computational complexity of POP, thus accelerating the training time of GOP networks. In addition, we also propose major architectural modifications of POPfast that can augment the progressive learning process of POP by incorporating an information-preserving, linear projection path from the input to the output layer at each progressive step. The proposed extensions can be interpreted as a mechanism that provides direct information extracted from the previously learned layers to the network, hence the term “memory”. This allows the network to learn deeper architectures and better data representations. An extensive set of experiments in human action, object, facial identity and scene recognition problems demonstrates that the proposed algorithms can train GOP networks much faster than POPs while achieving better performance compared to original POPs and other related algorithms.

KW - Generalized operational perceptron

KW - Neural architecture learning

KW - Progressive learning

U2 - 10.1016/j.neucom.2019.10.079

DO - 10.1016/j.neucom.2019.10.079

M3 - Journal article

VL - 379

SP - 172

EP - 181

JO - Neurocomputing

JF - Neurocomputing

SN - 0925-2312

ER -
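
Sketch

The abstract describes an algorithmic recipe: grow a network one hidden layer at a time, freezing previously learned layers, while a linear "memory" path carries an information-preserving projection of the input directly to the output layer at each progressive step. The following is a minimal, hypothetical Python/PyTorch sketch of that recipe, not the authors' implementation: plain Linear+ReLU blocks stand in for the paper's GOP operator search, and every name, size, and hyperparameter here is an illustrative assumption.

# Minimal, hypothetical sketch of progressive layer-wise training with a
# linear "memory" path from the input, in the spirit of the abstract above.
# Plain Linear+ReLU blocks stand in for the paper's GOP operator search;
# all names, sizes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

def fit(params, forward, x, y, epochs=100, lr=1e-2):
    # Optimize only `params`; earlier blocks are kept fixed in `forward`.
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(forward(x), y).backward()
        opt.step()

def progressive_fit(x, y, in_dim, n_classes, hidden=32, mem_dim=16, steps=3):
    frozen = []                             # hidden blocks learned so far
    memory = nn.Linear(in_dim, mem_dim)     # information-preserving path
    feat_dim = in_dim
    for _ in range(steps):
        block = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        head = nn.Linear(hidden + mem_dim, n_classes)  # fresh output layer

        def forward(inp, block=block, head=head):
            h = inp
            with torch.no_grad():           # previously learned layers are fixed
                for b in frozen:
                    h = b(h)
            h = block(h)                    # only the newest block is trained
            # Concatenate the linear "memory" of the raw input before the head.
            return head(torch.cat([h, memory(inp)], dim=1))

        fit(list(block.parameters()) + list(head.parameters())
            + list(memory.parameters()), forward, x, y)
        frozen.append(block)                # freeze the new block, grow deeper
        feat_dim = hidden
    return frozen, memory, head

For example, progressive_fit(torch.randn(256, 20), torch.randint(0, 5, (256,)), in_dim=20, n_classes=5) grows a three-block network on random data. Whether the memory path is concatenated or added, retrained at each step, and drawn from the raw input or from earlier hidden layers are design choices the paper investigates; the sketch fixes one arbitrary combination purely for illustration.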