
Human Brain's Speech Processing Mirrors AI Models, New Study Reveals

ScienceDaily
January 21, 2026
The human brain may work more like AI than anyone expected

AI-Generated Summary

New research reveals the human brain processes speech similarly to advanced AI models. Scientists observed brain activity while participants listened to a podcast, finding a structured, layered sequence of neural steps that mirrors how AI models like GPT-2 process text. This suggests the brain builds meaning over time through a contextual, statistical process, challenging older theories.

The research, published in Nature Communications, was led by Dr. Ariel Goldstein of the Hebrew University, with collaborators Dr. Mariano Schain of Google Research and Prof. Uri Hasson and Eric Ham of Princeton University. Together, the team uncovered an unexpected similarity between how humans make sense of speech and how modern AI models process text. Using electrocorticography recordings from participants who listened to a thirty-minute podcast, the scientists tracked the timing and location of brain activity as language was processed. They found that the brain follows a structured sequence that closely matches the layered design of large language models such as GPT-2 and Llama 2.

How the Brain Builds Meaning Over Time

As we listen to someone speak, the brain does not grasp meaning all at once. Instead, each word passes through a series of neural steps. Goldstein and his colleagues showed that these steps unfold over time in a way that mirrors how AI models handle language. Early layers in AI models focus on basic word features, while deeper layers combine context, tone, and broader meaning. Human brain activity followed the same pattern: early neural signals matched the early stages of AI processing, while later brain responses lined up with the deeper layers of the models. The timing match was especially strong in higher-level language areas such as Broca's area, where responses peaked later when linked to deeper AI layers.

According to Dr. Goldstein, "What surprised us most was how closely the brain's temporal unfolding of meaning matches the sequence of transformations inside large language models. Even though these systems are built very differently, both seem to converge on a similar step-by-step buildup toward understanding."

Why These Findings Matter

The study suggests that artificial intelligence can do more than generate text. It may also help scientists better understand how the human brain creates meaning.
For many years, language was thought to rely mainly on fixed symbols and rigid hierarchies. These results challenge that view and instead point to a more flexible and statistical process in which meaning gradually emerges through context. The researchers also tested traditional linguistic elements such as phonemes and morphemes. These classic features did not explain real-time brain activity as well as the contextual representations produced by AI models. This supports the idea that the brain relies more on flowing context than on strict linguistic building blocks.

A New Resource for Language Neuroscience

To help move the field forward, the team has made the complete set of neural recordings and language features publicly available. This open dataset allows researchers around the world to compare theories of language understanding and to develop computational models that more closely reflect how the human mind works.
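The core idea of the analysis, matching each model layer to the time lag at which it best explains neural activity, can be illustrated with a toy simulation. The sketch below is not the authors' pipeline: the latent signals, the lag grid, and the Gaussian mixing weights are all invented for illustration, standing in for real layer embeddings and ECoG recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, dim, n_layers = 200, 16, 6
lags_ms = np.arange(0, 601, 50)            # candidate lags after word onset
preferred = np.linspace(0, 600, n_layers)  # each layer's "preferred" lag (toy assumption)

# Toy stand-in for layer-wise model embeddings: one independent latent
# signal per layer, plus a little observation noise.
latents = [rng.standard_normal((n_words, dim)) for _ in range(n_layers)]
layer_reps = [z + 0.1 * rng.standard_normal((n_words, dim)) for z in latents]

def neural_response(lag_ms):
    """Simulated neural data: at lag t, activity is a mixture of the
    latents, weighted toward the layer whose preferred lag is closest to t."""
    w = np.exp(-((lag_ms - preferred) ** 2) / (2 * 100.0 ** 2))
    w /= w.sum()
    mix = sum(wi * z for wi, z in zip(w, latents))
    return mix + 0.1 * rng.standard_normal((n_words, dim))

def peak_lag(rep):
    """Lag at which the mean per-dimension correlation with the response peaks."""
    scores = []
    for lag in lags_ms:
        resp = neural_response(lag)
        rs = [np.corrcoef(rep[:, d], resp[:, d])[0, 1] for d in range(dim)]
        scores.append(np.mean(rs))
    return int(lags_ms[int(np.argmax(scores))])

peak_lags = [peak_lag(rep) for rep in layer_reps]
print(peak_lags)  # deeper "layers" should peak at later lags
```

By construction, representations tied to deeper layers correlate best with the simulated response at later lags, which is the qualitative pattern the study reports for real model layers and real cortical recordings.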
