The rise of LLMs has crushed interest in “the why,” but it’s more important than ever
One downside of LLMs, from a cognitive science and AI point of view, stems from the fact that there were historically two reasons to want to solve the deep enigmas of mind. One was practical: to build thinking machines. The other was scientific: to understand the principles underlying thought itself. A great deal of funding, energy, and public fascination surely came from the first motive. People wanted smart robots. And because that goal seemed to require understanding intelligence, the second motive got pulled along with it.
But now LLMs have arrived, and for many people the first goal feels basically achieved. We have machines that talk, write, reason, code, summarize, advise, and in many settings appear strikingly intelligent. So the old pressure to understand mind at a principled level has weakened. The public sees smart-enough robots and concludes that the mystery is largely solved. Investors and product builders often care less about how intelligence works than about whether the system performs. And so the deeper project — understanding the principles of thought itself — has had the rug partly pulled out from under it. It now risks seeming merely academic.
That irony cuts exactly the wrong way. We should care more, not less, about discovering the true principles of mind now that these systems are here. More and more of our world is beginning to run on the backs of these new minds, which makes it dangerous to remain content with mere performance while lacking real understanding of what they are doing. And unlike human minds, these minds are alien: built differently, trained differently, opaque in different ways, and not naturally interpretable through ordinary human intuitions. Theory is therefore not some optional philosophical luxury, but one of the only ways to understand what these systems are relative to us, where they genuinely overlap with human cognition, and where they do not.
There is another reason the need for principle has grown. Human minds are products of evolution, and however poorly we may understand them, we cannot redesign them from scratch. AI is different. We are actively building its successors. If we continue progressing mainly by scaling and trial and error, we may keep getting more capable systems without understanding which ingredients are essential and which are accidental. But if we can extract real principles of intelligence — representation, abstraction, reasoning, conceptual structure, agency, interpretability — then those principles could guide the next generation of AI in a far deeper way than brute-force iteration. In that sense, the scientific project has not become less relevant because AI works. It has become more urgent because AI works, because we increasingly depend on it, and because for the first time we may be able to build minds in light of theory rather than in ignorance of it.

