
Deeply felt affect: The emergence of valence in deep active inference

Research output: Contribution to journal › Letter › peer-review

Standard

Deeply felt affect : The emergence of valence in deep active inference. / Hesp, Casper; Smith, Ryan; Parr, Thomas; Allen, Micah; Friston, Karl J.; Ramstead, Maxwell J.D.

In: Neural Computation, Vol. 33, No. 2, 02.2021, p. 398-446.


Harvard

Hesp, C, Smith, R, Parr, T, Allen, M, Friston, KJ & Ramstead, MJD 2021, 'Deeply felt affect: The emergence of valence in deep active inference', Neural Computation, vol. 33, no. 2, pp. 398-446. https://doi.org/10.1162/neco_a_01341

APA

Hesp, C., Smith, R., Parr, T., Allen, M., Friston, K. J., & Ramstead, M. J. D. (2021). Deeply felt affect: The emergence of valence in deep active inference. Neural Computation, 33(2), 398-446. https://doi.org/10.1162/neco_a_01341

CBE

Hesp C, Smith R, Parr T, Allen M, Friston KJ, Ramstead MJD. 2021. Deeply felt affect: The emergence of valence in deep active inference. Neural Computation. 33(2):398-446. https://doi.org/10.1162/neco_a_01341

MLA

Hesp, Casper, et al. "Deeply Felt Affect: The Emergence of Valence in Deep Active Inference." Neural Computation, vol. 33, no. 2, 2021, pp. 398-446. https://doi.org/10.1162/neco_a_01341

Vancouver

Hesp C, Smith R, Parr T, Allen M, Friston KJ, Ramstead MJD. Deeply felt affect: The emergence of valence in deep active inference. Neural Computation. 2021 Feb;33(2):398-446. https://doi.org/10.1162/neco_a_01341

Author

Hesp, Casper ; Smith, Ryan ; Parr, Thomas ; Allen, Micah ; Friston, Karl J. ; Ramstead, Maxwell J.D. / Deeply felt affect : The emergence of valence in deep active inference. In: Neural Computation. 2021 ; Vol. 33, No. 2. pp. 398-446.

BibTeX

@article{303912f7c5c84f33bc65be822547eb08,
title = "Deeply felt affect: The emergence of valence in deep active inference",
abstract = "The positive-negative axis of emotional valence has long been recognized as fundamental to adaptive behavior, but its origin and underlying function have largely eluded formal theorizing and computational modeling. Using deep active inference, a hierarchical inference scheme that rests on inverting a model of how sensory data are generated, we develop a principled Bayesian model of emotional valence. This formulation asserts that agents infer their valence state based on the expected precision of their action model—an internal estimate of overall model fitness (“subjective fitness”). This index of subjective fitness can be estimated within any environment and exploits the domain generality of second-order beliefs (beliefs about beliefs). We show how maintaining internal valence representations allows the ensuing affective agent to optimize confidence in action selection preemptively. Valence representations can in turn be optimized by leveraging the (Bayes-optimal) updating term for subjective fitness, which we label affective charge (AC). AC tracks changes in fitness estimates and lends a sign to otherwise unsigned divergences between predictions and outcomes. We simulate the resulting affective inference by subjecting an in silico affective agent to a T-maze paradigm requiring context learning, followed by context reversal. This formulation of affective inference offers a principled account of the link between affect, (mental) action, and implicit metacognition. It characterizes how a deep biological system can infer its affective state and reduce uncertainty about such inferences through internal action (i.e., top-down modulation of priors that underwrite confidence). Thus, we demonstrate the potential of active inference to provide a formal and computationally tractable account of affect. Our demonstration of the face validity and potential utility of this formulation represents the first step within a larger research program. Next, this model can be leveraged to test the hypothesized role of valence by fitting the model to behavioral and neuronal responses.",
author = "Casper Hesp and Ryan Smith and Thomas Parr and Micah Allen and Friston, {Karl J.} and Ramstead, {Maxwell J.D.}",
note = "Publisher Copyright: {\textcopyright} 2020 Massachusetts Institute of Technology Copyright: Copyright 2021 Elsevier B.V., All rights reserved.",
year = "2021",
month = feb,
doi = "10.1162/neco_a_01341",
language = "English",
volume = "33",
pages = "398--446",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press",
number = "2",

}

RIS

TY - JOUR

T1 - Deeply felt affect

T2 - The emergence of valence in deep active inference

AU - Hesp, Casper

AU - Smith, Ryan

AU - Parr, Thomas

AU - Allen, Micah

AU - Friston, Karl J.

AU - Ramstead, Maxwell J.D.

N1 - Publisher Copyright: © 2020 Massachusetts Institute of Technology Copyright: Copyright 2021 Elsevier B.V., All rights reserved.

PY - 2021/2

Y1 - 2021/2

N2 - The positive-negative axis of emotional valence has long been recognized as fundamental to adaptive behavior, but its origin and underlying function have largely eluded formal theorizing and computational modeling. Using deep active inference, a hierarchical inference scheme that rests on inverting a model of how sensory data are generated, we develop a principled Bayesian model of emotional valence. This formulation asserts that agents infer their valence state based on the expected precision of their action model—an internal estimate of overall model fitness (“subjective fitness”). This index of subjective fitness can be estimated within any environment and exploits the domain generality of second-order beliefs (beliefs about beliefs). We show how maintaining internal valence representations allows the ensuing affective agent to optimize confidence in action selection preemptively. Valence representations can in turn be optimized by leveraging the (Bayes-optimal) updating term for subjective fitness, which we label affective charge (AC). AC tracks changes in fitness estimates and lends a sign to otherwise unsigned divergences between predictions and outcomes. We simulate the resulting affective inference by subjecting an in silico affective agent to a T-maze paradigm requiring context learning, followed by context reversal. This formulation of affective inference offers a principled account of the link between affect, (mental) action, and implicit metacognition. It characterizes how a deep biological system can infer its affective state and reduce uncertainty about such inferences through internal action (i.e., top-down modulation of priors that underwrite confidence). Thus, we demonstrate the potential of active inference to provide a formal and computationally tractable account of affect. Our demonstration of the face validity and potential utility of this formulation represents the first step within a larger research program. Next, this model can be leveraged to test the hypothesized role of valence by fitting the model to behavioral and neuronal responses.

AB - The positive-negative axis of emotional valence has long been recognized as fundamental to adaptive behavior, but its origin and underlying function have largely eluded formal theorizing and computational modeling. Using deep active inference, a hierarchical inference scheme that rests on inverting a model of how sensory data are generated, we develop a principled Bayesian model of emotional valence. This formulation asserts that agents infer their valence state based on the expected precision of their action model—an internal estimate of overall model fitness (“subjective fitness”). This index of subjective fitness can be estimated within any environment and exploits the domain generality of second-order beliefs (beliefs about beliefs). We show how maintaining internal valence representations allows the ensuing affective agent to optimize confidence in action selection preemptively. Valence representations can in turn be optimized by leveraging the (Bayes-optimal) updating term for subjective fitness, which we label affective charge (AC). AC tracks changes in fitness estimates and lends a sign to otherwise unsigned divergences between predictions and outcomes. We simulate the resulting affective inference by subjecting an in silico affective agent to a T-maze paradigm requiring context learning, followed by context reversal. This formulation of affective inference offers a principled account of the link between affect, (mental) action, and implicit metacognition. It characterizes how a deep biological system can infer its affective state and reduce uncertainty about such inferences through internal action (i.e., top-down modulation of priors that underwrite confidence). Thus, we demonstrate the potential of active inference to provide a formal and computationally tractable account of affect. Our demonstration of the face validity and potential utility of this formulation represents the first step within a larger research program. Next, this model can be leveraged to test the hypothesized role of valence by fitting the model to behavioral and neuronal responses.

UR - http://www.scopus.com/inward/record.url?scp=85100530171&partnerID=8YFLogxK

U2 - 10.1162/neco_a_01341

DO - 10.1162/neco_a_01341

M3 - Letter

C2 - 33253028

AN - SCOPUS:85100530171

VL - 33

SP - 398

EP - 446

JO - Neural Computation

JF - Neural Computation

SN - 0899-7667

IS - 2

ER -