Does Cultural Evolution Matter for Reasoning Models?

Another thought spurred by the “Talk, Judge, Cooperate” paper I discussed a few days ago: if reasoning models are game-theoretically rational from the get-go, cultural evolution may not affect their behavior much, at least in simple games with a clear equilibrium. In actual deployment, though, the situation will be much messier: agents won’t always know the payoff structure, for instance. So plausibly there is still room for cultural evolution to have a meaningful impact.
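
To make the "clear equilibrium" point concrete, here is a minimal sketch using a one-shot Prisoner's Dilemma with standard textbook payoffs (an assumption on my part; this is not necessarily a game from the paper). When one action is dominant, a rational agent plays it no matter what norms cultural evolution might have transmitted:

```python
# Hypothetical payoff matrix for the row player: PAYOFF[my_action][their_action].
# Standard Prisoner's Dilemma values, chosen for illustration only.
PAYOFF = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

def best_response(their_action):
    """Return the payoff-maximizing action against a fixed opponent action."""
    return max(PAYOFF, key=lambda a: PAYOFF[a][their_action])

# Defection is a dominant strategy: it is the best response to either opponent
# action, so a rational agent's play is pinned down with no room for
# culturally evolved conventions to change it.
print(best_response("cooperate"))  # → defect
print(best_response("defect"))     # → defect
```

The messier deployment case corresponds to the agent not knowing `PAYOFF` at all, which is exactly where learned or transmitted conventions could start to matter.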