Reward foraging task and model-based analysis reveal how fruit flies learn value of available options

Publication: Contribution to journal › Journal article › Research › Peer reviewed

Foraging animals have to evaluate, compare and select food patches in order to increase their fitness. Understanding what drives foraging decisions requires careful manipulation of the value of alternative options while monitoring animals' choices. Value-based decision-making tasks, in combination with formal learning models, have provided both an experimental and a theoretical framework for studying foraging decisions in laboratory settings. While these approaches have been used successfully in the past to understand what drives choices in mammals, very little work has been done on fruit flies, despite the fact that fruit flies have served as a model organism for many complex behavioural paradigms. To fill this gap, we developed a single-animal, trial-based decision-making task in which freely walking flies experienced optogenetic sugar-receptor neuron stimulation. We controlled the value of the available options by manipulating the probabilities of optogenetic stimulation. We show that flies integrate the reward history of chosen options and forget the value of unchosen options. We further discover that flies assign higher values to rewards experienced early in the behavioural session, consistent with formal reinforcement learning models. Finally, we show that probabilistic rewards affect the walking trajectories of flies, suggesting that accumulated value controls the navigation vector of flies in a graded fashion. These findings establish the fruit fly as a model organism for exploring the genetic and circuit basis of reward foraging decisions.
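The learning dynamics described above (a delta-rule update of the chosen option's value, decay of unchosen options, and probabilistic choice) can be illustrated with a minimal reinforcement-learning sketch. This is not the authors' fitted model: the parameter names (`alpha`, `gamma`, `beta`), the exact forgetting rule, and the softmax choice rule are illustrative assumptions based on standard formulations.

```python
import math
import random

def softmax_choice(q, beta, rng):
    """Pick option i with probability proportional to exp(beta * q[i]).

    beta (inverse temperature) controls how deterministically the
    higher-valued option is chosen. `rng` is a random.Random instance.
    """
    weights = [math.exp(beta * v) for v in q]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(q) - 1  # guard against floating-point round-off

def update_values(q, chosen, reward, alpha, gamma):
    """One trial of value learning.

    The chosen option integrates reward history via a delta rule;
    unchosen options are 'forgotten' by decaying toward zero.
    """
    q = list(q)
    for i in range(len(q)):
        if i == chosen:
            q[i] += alpha * (reward - q[i])  # integrate chosen reward
        else:
            q[i] *= (1.0 - gamma)            # forget unchosen value
    return q

# Hypothetical two-option session: rewards delivered with
# probabilities 0.8 vs 0.2 (analogous to probabilistic
# optogenetic stimulation in the task).
rng = random.Random(1)
q = [0.0, 0.0]
for _ in range(500):
    choice = softmax_choice(q, beta=3.0, rng=rng)
    reward = 1.0 if rng.random() < (0.8, 0.2)[choice] else 0.0
    q = update_values(q, choice, reward, alpha=0.1, gamma=0.05)
```

Under these assumed parameters, many trials typically drive the learned values apart so that the richer option is chosen more often, mirroring the abstract's claim that accumulated value biases behaviour in a graded fashion.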

Original language: English
Article number: e0239616
Journal: PLOS ONE
Volume: 15
Issue: 10 October
ISSN: 1932-6203
DOI:
Status: Published - October 2020

