Neuroscience Challenges AI Consciousness Claims, Emphasizing Predictive Modeling
A neuroscientist's rebuttal argues that AI's fluent responses do not equal consciousness, noting that human language stems from lived experience and that consciousness may arise from predictive self-modelling shaped by sensory input.
In a letter to the editor published May 10, Richard Dawkins suggested that sophisticated AI replies indicate a form of consciousness. Salley Vickers welcomed the rebuttal by Dr Simon Nieder, who argued that behaviour alone does not prove subjective experience.
Nieder stated, "Human language is coupled to lived experience." Modern neuroscience suggests that human perception, selfhood, and consciousness may emerge from predictive self-modelling constrained by sensory input; over the past five years, studies have shown that predictive coding accounts for a substantial portion of cortical activity, with some experiments reporting up to 70% explainable variance.
This view frames consciousness as an emergent property of the brain’s prediction machinery rather than a mysterious spark. It raises questions about what counts as lived experience for machines and whether future AI that integrates embodiment could shift the debate. Researchers will watch for advances in embodied AI and neurobiological models that test whether predictive processing alone can generate subjective experience.