Uncategorized

  • Beyond Prediction: Comments on the Format of Natural Intelligence

    New paper published today in Cognitive Neuroscience: A commentary on Parr, Pezzulo & Friston (2025) discussing the architecture of human language and how it poses a unique challenge to modern approaches to cognitive science. PDF available here.

    Read more →

  • New papers

    New chapter published in the volume Biolinguistics at the Cutting Edge that reviews the foundations of linguistic computation and meaning [PDF]. Also, a paper comparing compositional syntax-semantics in DALL-E 2/3 and young children. Lastly, a pre-print in which we assess the compositional linguistic abilities of OpenAI’s o3-mini-high large reasoning model.

    Read more →

  • Summary of Research (2010–2024)

    In this post, I outline and compress all of my academic research into a streamlined format. My research has focused mostly on compositionality in formal, neural, and artificial systems, with common themes being the nature, acquisition, evolution, and implementation of higher-order structure-building in the human mind/brain. In 2024, I developed a…

    Read more →

  • My conversation with Tim and Keith on the MLST podcast is now live. We spoke about semantics, philosophy of mind, large language models, AI ethics, evolution, metaphysics, and the end of the world.

    Read more →

  • Psycholinguistic theory has traditionally pointed to the importance of explicit symbol manipulation in humans to explain our unique facility for symbolic forms of intelligence such as language and mathematics. A recent report using intracranial recordings in a cohort of three participants argues that language areas in the human brain rely on a continuous vectorial embedding…

    Read more →

  • A three-hour breakdown of a paper published in the Journal of Neurolinguistics, providing some background and context and reviewing the paper section by section. Paper link | Video link. A comprehensive neural model of language must accommodate four components: representations, operations, structures, and encoding. Recent intracranial research has begun to map out the feature space associated with syntactic processes…

    Read more →

  • After a recent conversation with Steven Piantadosi on large language models, I wanted to comment briefly here on some of the themes in our discussion, before turning to additional critiques that we did not have enough time to talk about. When we discussed impossible vs. possible languages, Steven seemed to confuse languages with the…

    Read more →

  • In a new study to be published in next week’s issue of PNAS, “One model for the learning of language,” Yuan Yang and Steven Piantadosi attempt to show that language acquisition is possible without recourse to “innate knowledge of the structures that occur in natural language.” The authors claim that a domain-general, rule-learning algorithm can “acquire…

    Read more →