A clarification of the conditions under which Large Language Models could be conscious

Morten Overgaard*, Asger Kirkeby-Hinrup

*Corresponding author for this work

Research output: Contribution to journal › Comment/debate/letter to the editor › Research › peer-review

Abstract

With incredible speed, Large Language Models (LLMs) are reshaping many aspects of society. This has been met with unease by the public, and public discourse is rife with questions about whether LLMs are, or might become, conscious. Because there is widespread disagreement about consciousness among scientists, any concrete answer that could be offered to the public would be contentious. This paper offers the next best thing: a charting of the possibility space for consciousness in LLMs. So, while it is too early to judge whether LLMs could be conscious, our charting of the possibility space may serve as a temporary guide for theorizing about the question.

Original language: English
Article number: 1031
Journal: Humanities and Social Sciences Communications
Volume: 11
ISSN: 2662-9992
Publication status: Published - Dec 2024
