We Don’t Discover Alone. Why Should AI?
In conversation the other day, my Expressly Human co-author, @DrTimBarber, made an astute point about AI and discovery. He noted that it's unfair to expect a large language model to make new discoveries on its own. And he's right. That's not how discovery happens — not for us humans, and not for any intelligence embedded in a social world.
Discovery has never been the work of a lone mind answering questions in a vacuum. It emerges from a network — a noisy, argumentative, self-correcting conversation among countless scientists who explore, err, debate, and refine. Progress depends less on individual brilliance than on the structure of this community: independent agents probing different hypotheses, sharing results, and subjecting one another’s ideas to criticism. The failures of some fuel the insights of others. Discovery, in that sense, is a property of the scientific public square, not the isolated intellect.
Expecting a single LLM to make discoveries is like asking one scientist to replicate the entire civilization of science inside their own head. What’s needed isn’t a smarter model but a society of models — a population of diverse agents, each with different priors and heuristics, engaged in cooperation, competition, and exchange. They’d need access to our human public square as well: to argue, defend, and revise.
Only then might AI begin to generate genuine discovery — not by solitary brilliance, but by participating in the same decentralized process that gave rise to all discovery in history: a vibrant, self-correcting civilization of minds.

