Exploring Flip Flop memories and beyond: training Recurrent Neural Networks with key insights

Cecilia Jarne*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Training neural networks to perform different tasks is relevant across various disciplines. In particular, Recurrent Neural Networks (RNNs) are of great interest in Computational Neuroscience. Open-source Machine Learning frameworks such as TensorFlow and Keras have produced significant changes in the development of technologies that we currently use. This work comprehensively investigates and describes the application of RNNs to temporal processing through a study of a 3-bit Flip Flop memory implementation. We delve into the entire modeling process, encompassing equations, task parametrization, and software development. The obtained networks are meticulously analyzed to elucidate their dynamics, aided by an array of visualization and analysis tools. Moreover, the provided code is versatile enough to facilitate the modeling of diverse tasks and systems. Furthermore, we show how memory states can be efficiently stored at the vertices of a cube in the dimensionally reduced space, supplementing previous results with a distinct approach.
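To make the kind of pipeline described in the abstract concrete, the sketch below parametrizes a 3-bit Flip Flop task, trains a small vanilla RNN with Keras, and projects the hidden-state trajectories onto three principal components, where the eight memory states are expected to cluster near the vertices of a cube. This is a minimal illustrative sketch, not the paper's published code: the pulse statistics, network size, and helper names (e.g. `make_flipflop_batch`) are assumptions made here for demonstration.

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

# 3-bit Flip Flop task (illustrative parametrization): three input channels
# receive sparse +1/-1 pulses; each of the three outputs must hold the sign
# of the last pulse seen on its channel (one bit of memory per channel).
def make_flipflop_batch(n_trials=512, t_steps=200, pulse_prob=0.05, seed=0):
    rng = np.random.default_rng(seed)
    pulses = rng.choice([0.0, 1.0, -1.0],
                        size=(n_trials, t_steps, 3),
                        p=[1 - pulse_prob, pulse_prob / 2, pulse_prob / 2])
    targets = np.zeros_like(pulses)
    state = np.zeros((n_trials, 3))
    for t in range(t_steps):
        flip = pulses[:, t, :] != 0
        state = np.where(flip, pulses[:, t, :], state)  # latch the last nonzero pulse
        targets[:, t, :] = state
    return pulses.astype("float32"), targets.astype("float32")

# Minimal vanilla RNN (functional API so the hidden states can be read out later).
inputs = tf.keras.Input(shape=(None, 3))
hidden = tf.keras.layers.SimpleRNN(100, activation="tanh",
                                   return_sequences=True, name="rnn")(inputs)
outputs = tf.keras.layers.Dense(3)(hidden)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

x, y = make_flipflop_batch()
model.fit(x, y, epochs=20, batch_size=32, verbose=0)

# Project hidden-state trajectories onto the first three principal components;
# with a trained network the trajectories settle near eight cube-like vertices,
# one per 3-bit memory state.
hidden_model = tf.keras.Model(inputs, hidden)
states = hidden_model.predict(x[:8])                 # (trials, time, units)
pcs = PCA(n_components=3).fit_transform(states.reshape(-1, states.shape[-1]))
```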

Original language: English
Article number: 1269190
Journal: Frontiers in Systems Neuroscience
Volume: 18
ISSN: 1662-5137
Publication status: Published - Mar 2024

Keywords

  • computational neuroscience
  • dynamics
  • eigenvalue distribution
  • flip flop
  • recurrent neural networks
