Can Group Selection Protect Against AI-Induced Cultural Collapse?
Many people, myself included, have worried that heavy reliance on generative AI could reduce cultural variation. For example, stories written with AI assistance tend to have more in common with one another than stories written by humans alone (Anderson et al. 2024; Doshi & Hauser 2024). Reduced variation could in turn slow innovation and cumulative cultural evolution. Just as training AIs on AI-generated data can lead to “model collapse” (Shumailov et al. 2023), widespread AI use could produce an analogous “cultural collapse.”
In a new paper, Zhong et al. (2026) model how two types of generative AI use, complement and substitute, affect cultural variation. When generative AI is used as a complement, humans remain in control but rely on the AI for guidance. When it is used as a substitute, humans offload most of the production to the AI, contributing little of their own.
They find that under individual-level selection, AI-substitute users prevail. Even though substitute use reduces variation in ways that are detrimental in the long run, it yields higher short-term payoffs, so substitute users outcompete complement users.
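To make that concrete, here is a minimal sketch of individual-level selection as payoff-biased copying. The payoffs (1.3 for substitute use vs. 1.0 for complement use) and the update rule are my own illustrative assumptions, not the paper's model:

```python
import random

# Toy payoffs, not taken from the paper: substitute use pays more in the
# short term because offloading production to the AI is cheap and fast.
PAYOFF = {"complement": 1.0, "substitute": 1.3}

def evolve(pop, generations=200, seed=0):
    """Individual-level selection as payoff-proportional copying:
    each generation, the whole population resamples strategies in
    proportion to their payoffs."""
    rng = random.Random(seed)
    for _ in range(generations):
        weights = [PAYOFF[s] for s in pop]
        pop = rng.choices(pop, weights=weights, k=len(pop))
    return pop

pop = ["complement"] * 100 + ["substitute"] * 100
final = evolve(pop)
# Substitute users almost always fix despite the long-term cost, because
# nothing in these dynamics rewards preserving variation.
print(final.count("substitute") / len(final))
```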
However, when populations have group structure, AI-complement users benefit their groups by preserving the variation that fuels exploration. Therefore, under cultural group selection, AI-complement users can prevail, though only when group boundaries are fairly strong. To protect innovation, then, the authors argue that we should incentivize AI-complement workflows at the organizational level.
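To see why group boundaries matter, here is one way to bolt group structure onto the same toy model: a multilevel-selection sketch in which groups with more complement users are assumed to win intergroup competitions more often, while a migration rate controls how leaky group boundaries are. Again, every parameter and functional form is my own guess rather than anything from Zhong et al.:

```python
import random

rng = random.Random(42)

# All parameters and functional forms are illustrative guesses,
# not values from the paper.
N_GROUPS, GROUP_SIZE, GENERATIONS = 10, 20, 500
PAYOFF = {"complement": 1.0, "substitute": 1.3}  # short-term individual payoffs
MIGRATION = 0.01  # per-individual mixing rate; low = strong group boundaries

def group_fitness(group):
    # Assumed group-level benefit: complement users preserve the variation
    # that fuels exploration, so groups with more of them do better.
    return 1.0 + group.count("complement") / len(group)

# Start with between-group variation for group selection to act on.
groups = [["complement"] * GROUP_SIZE if i % 2 == 0 else ["substitute"] * GROUP_SIZE
          for i in range(N_GROUPS)]

for _ in range(GENERATIONS):
    # Within-group selection (slow): one payoff-biased Moran step per group,
    # nudging each mixed group toward substitute use.
    for g in groups:
        weights = [PAYOFF[s] for s in g]
        g[rng.randrange(GROUP_SIZE)] = rng.choices(g, weights=weights)[0]

    # Between-group selection: two random groups compete, the winner is
    # drawn in proportion to group fitness, and the loser copies the winner.
    i, j = rng.sample(range(N_GROUPS), 2)
    fi, fj = group_fitness(groups[i]), group_fitness(groups[j])
    winner, loser = (i, j) if rng.random() < fi / (fi + fj) else (j, i)
    groups[loser] = list(groups[winner])

    # Migration: individuals occasionally copy a random member of the whole
    # population, leaking strategies across group boundaries.
    for g in groups:
        for k in range(GROUP_SIZE):
            if rng.random() < MIGRATION:
                g[k] = rng.choice(rng.choice(groups))

share = sum(g.count("complement") for g in groups) / (N_GROUPS * GROUP_SIZE)
# With migration this rare, complement users typically take over; raise
# MIGRATION (e.g. to 0.3) and the substitute advantage tends to win instead.
print(share)
```

The migration knob is doing the work here: when groups barely mix, between-group differences persist long enough for group-level competition to reward variation-preserving groups, which is one way to read the authors' claim that complement users prevail only under fairly strong group boundaries.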
Of course, this is just one model, but I found it an interesting proposal for dealing with the risk of AI-driven cultural homogenization. And as the authors note, faster iteration with more advanced AI might eventually compensate for the loss of variance. Still, this seems worth exploring further.