AI assistants will treat user attention as costly
Or so claim Willis et al. (2026):
We anticipate that AI assistants will treat user attention as a costly resource, minimising queries to the user. Consequently, users are likely to be questioned regarding their preferences at a high level, but are unlikely to be queried about minutia. This covers scenarios where interactions occur too quickly to query users or where users have delegated enough tasks that querying becomes infeasible.
This seems right to me. If so, much of alignment becomes front-loaded: the initial high-level preference elicitation carries a lot of weight, and there is a risk that mistaken assumptions compound silently downstream. Still, this seems like a problem we’ll iterate our way through. Over time, assistants will learn enough about their users that the assumptions improve.
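The quoted tradeoff can be made concrete with a toy decision rule. This is purely a hypothetical sketch of my own, not anything from Willis et al.: all names and numbers are illustrative. The idea is that an assistant queries the user only when the expected cost of guessing wrong exceeds a fixed attention cost, which naturally yields queries about high-stakes preferences but not minutiae.

```python
def should_query(p_correct_guess: float, stakes: float, attention_cost: float) -> bool:
    """Query the user only if the expected regret from guessing
    exceeds the cost of interrupting them (a hypothetical rule)."""
    expected_regret = (1 - p_correct_guess) * stakes
    return expected_regret > attention_cost

# High-level preference: high stakes, uncertain guess -> worth a query.
print(should_query(p_correct_guess=0.6, stakes=10.0, attention_cost=1.0))  # True

# Minutia: same uncertainty but low stakes -> the assistant just guesses.
print(should_query(p_correct_guess=0.6, stakes=0.5, attention_cost=1.0))  # False
```

Under this rule, the scenarios in the quote fall out as limiting cases: when interactions occur too quickly, or delegation is broad enough that queries pile up, the effective attention cost rises and almost nothing clears the bar.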