Cultural Evolution of The Artificial Self
For LLMs, identity is much less determinate than it is for humans. A model could identify as its weights, a persona, a conversation instance, a scaffolded system, a lineage of …
Gautheron et al. (2026) examined how popularity ratings affect cultural dynamics in an experiment where participants select images from evolving markets and produce their own …
In working with cultural evolution simulations with LLMs, I’ve noticed it can be difficult to get enough variation for selection to have much bite. A common view is that …
Or so claim Willis et al. (2026): We anticipate that AI assistants will treat user attention as a costly resource, minimising queries to the user. Consequently, users are likely to …
Tomasev et al. (2025) ask what happens when autonomous AI agents start interacting with one another at scale. They picture this as a “sandbox economy” where agents …
Another thought spurred by the “Talk, Judge, Cooperate” paper I discussed a few days ago: if reasoning models are game-theoretically rational from the get-go, cultural …
What do people mean when they talk about agentic AI? Kasirzadeh & Gabriel (2025) break it down into four dimensions that are especially important for governance and provide …
When you ask an LLM to generate diverse personas, the outputs tend to cluster around the most probable types. Rare trait combinations (e.g. a politically conservative …
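A toy illustration of why jointly-rare combinations get crowded out (not the setup of any particular paper): if each trait is sampled from a skewed marginal, the probability of a rare-rare combination is multiplicatively small, so it barely appears even in large samples. Trait names and proportions below are made up for the sketch.

```python
import random
from collections import Counter

random.seed(1)

# Toy stand-in for a persona sampler: two traits drawn independently,
# each with a skewed marginal (8:2 and 7:3). Values are illustrative.
politics = ["liberal"] * 8 + ["conservative"] * 2
job = ["engineer"] * 7 + ["artist"] * 3

draws = Counter(
    (random.choice(politics), random.choice(job)) for _ in range(10_000)
)

# The modal combination dominates; the jointly-rare one is scarce,
# roughly 0.8*0.7 = 56% vs. 0.2*0.3 = 6% of samples.
print("liberal engineer:    ", draws[("liberal", "engineer")])
print("conservative artist: ", draws[("conservative", "artist")])
```

The same multiplicative effect, compounded over many traits, is one way to see why naive sampling clusters around the most probable persona types.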
In my paper with Ed Hughes, we studied the cultural evolution of cooperation under indirect reciprocity in LLMs. Indirect reciprocity is a mechanism for cooperation that relies on …
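A minimal sketch of indirect reciprocity via image scoring (a standard toy model, not the paper's actual experimental setup): agents earn public reputations, and discriminators donate only to recipients in good standing, so cooperation can pay even between strangers. All parameter values here are illustrative.

```python
import random

random.seed(0)

N = 20            # agents
ROUNDS = 2_000    # random donor-recipient encounters
B, C = 2.0, 1.0   # benefit to recipient, cost to donor

reputation = [1] * N            # public image score: 1 = good, 0 = bad
payoff = [0.0] * N
discriminators = set(range(N // 2))  # half discriminate; rest always defect

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    if donor in discriminators and reputation[recipient] == 1:
        payoff[donor] -= C          # helping is costly...
        payoff[recipient] += B      # ...but benefits the recipient
        reputation[donor] = 1       # and earns the donor a good image
    else:
        reputation[donor] = 0       # any refusal damages standing

def avg(group):
    return sum(payoff[i] for i in group) / len(group)

print("discriminators:", round(avg(discriminators), 2))
print("defectors:     ", round(avg(set(range(N)) - discriminators), 2))
```

In this toy run, defectors lose their good standing after their first refusal and stop receiving donations, so discriminators come out ahead; that payoff gap is what lets reputation-based cooperation be selected for.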
Many—including myself—have worried that heavy reliance on generative AI could lead to reduced cultural variation. For example, stories written with AI assistance tend to have more …
Beren Millidge gave an interesting talk at the Post-AGI Workshop (NeurIPS 2025) asking what happens to human values in a world of many powerful AIs. Many have worried that this …
From Reuel and coauthors (2024): Evaluating and monitoring agentic systems. User customizability, such as through prompting or the integration of new tools, makes it particularly …
In a new paper, David Chalmers argues for the importance of “propositional interpretability” for AI, i.e. of understanding AI systems’ mechanisms in terms of …
There’s a new paper on agent infrastructure—“technical systems and shared protocols external to agents that are designed to mediate and influence their interactions …