Why Scientists Say Large Language Models Are Neither Intelligent Nor Artificial

AI

April 27, 2026 · 2:26 PM

In a recent episode of RedCast, neuroscientist Miguel Nicolelis and science communicator Sérgio Sacani challenged the common label of "artificial intelligence" for large language models (LLMs). They argue that these systems are neither genuinely intelligent nor, in any meaningful sense, artificial. The debate extends beyond semantics, touching on the fundamental differences between human cognition and machine pattern-matching. Nicolelis, known for his work on brain-machine interfaces, emphasizes that LLMs lack consciousness, self-awareness, and the ability to understand context—qualities essential to human intelligence. Sacani adds that calling these models "intelligent" distorts public understanding and sets unrealistic expectations. The discussion calls for more precise terminology to describe what LLMs actually do: statistical prediction based on vast datasets.
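The "statistical prediction based on vast datasets" that the discussion points to can be illustrated, in a deliberately tiny form, with a bigram model: count which word follows which in a corpus, then "predict" by picking the most frequent successor. The corpus and function names below are illustrative, not taken from the episode; real LLMs use neural networks over far longer contexts, but the underlying task — estimating the next token from observed frequencies — is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "vast datasets" LLMs are trained on.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, prob = predict_next("the")
print(word, prob)
```

No understanding is involved at any step: the model stores co-occurrence counts and returns the likeliest continuation, which is the kind of behavior the episode argues should not be labeled "intelligence."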