Aron Vallinder

Hi! I'm Aron. I'm a philosophy PhD student at the London School of Economics, working primarily in formal epistemology and decision theory. My broader interests include existential risk and effective altruism.

Before coming to the LSE, I took the BPhil at Oxford University. As an undergraduate, I studied philosophy and mathematics at Lund University in Sweden, where I also did research in Bayesian social epistemology with the LU-IQ group. I found this to be lots of fun, especially since it allowed me to approach many traditional questions via the novel route of computer simulation. You may find some of my papers in this area below.

My e-mail address is vallinder at gmail dot com. If you share any of my interests or just want to say hi, please do get in touch!


  • 1. Do Computer Simulations Support the Argument from Disagreement?
    Synthese 190(8):1437-1454, 2013 (w/ Erik J. Olsson)

    According to the Argument from Disagreement (AD), widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by moral facts, either because there are no such facts or because there are such facts but they fail to influence our moral opinions. In an innovative paper, Gustafsson and Peterson (2010) study the argument by means of computer simulation of opinion dynamics, relying on the well-known model of Hegselmann and Krause (2002, 2006). Their simulations indicate that if our moral opinions were influenced at least slightly by moral facts, we would quickly have reached consensus, even if our moral opinions were also affected by additional factors such as false authorities, external political shifts, and random processes. Gustafsson and Peterson conclude that since no such consensus has been reached in real life, the simulation gives us increased reason to take the AD seriously. Our main claim in this paper is that these results are not as robust as Gustafsson and Peterson seem to think they are. If we run similar simulations in the alternative Laputa simulation environment developed by Angere and Olsson (Angere, forthcoming; Olsson, 2011), considerably less support for the AD is forthcoming. [pdf]
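    For readers unfamiliar with the underlying dynamics, here is a minimal Python sketch of Hegselmann-Krause bounded-confidence updating with a small pull toward a "moral fact", in the spirit of Gustafsson and Peterson's setup. The parameter values (and the names `eps`, `alpha`, `truth`) are illustrative assumptions, not the ones used in the papers:

    ```python
    import random

    def hk_step(opinions, eps=0.2, truth=0.8, alpha=0.05):
        """One round of Hegselmann-Krause updating: each agent averages
        the opinions of peers within distance eps of its own, then moves
        a small step (weight alpha) toward the location of the moral fact."""
        new = []
        for x in opinions:
            peers = [y for y in opinions if abs(y - x) <= eps]
            avg = sum(peers) / len(peers)
            new.append((1 - alpha) * avg + alpha * truth)
        return new

    random.seed(0)
    opinions = [random.random() for _ in range(50)]
    for _ in range(100):
        opinions = hk_step(opinions)
    # Even a slight pull toward the truth drives the population to a
    # consensus near it -- the behaviour Gustafsson and Peterson report.
    ```

    The point at issue in the paper is whether this rapid convergence is an artifact of the Hegselmann-Krause framework, since it is considerably weakened in richer models such as Laputa.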

  • 2. Norms of Assertion and Communication in Social Networks
    Synthese 190(13):2557-2571, 2013 (w/ Erik J. Olsson)

    Epistemologists can be divided into two camps: those who think that nothing short of certainty or (subjective) probability 1 can warrant assertion, and those who disagree with this claim. This paper addresses the issue by inquiring into the problem of setting the probability threshold required for assertion in such a way that the social epistemic good is maximized, where the latter is taken to be veritistic value in the sense of Goldman (1999). We provide a Bayesian model of a test case involving a community of inquirers in a social network engaged in group deliberation regarding the truth or falsity of a proposition p. Results obtained by means of computer simulation indicate that the certainty rule is optimal in the limit of inquiry and communication, but that a lower threshold is preferable in less idealized cases. [pdf]
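    The shape of such a model can be sketched as follows: agents inquire privately, assert p (or not-p) only when their credence crosses a threshold, update on one another's assertions by Bayes' rule, and the community is scored by its mean credence in the truth. This is only an illustrative Python toy under assumed parameters (`reliability`, agent and round counts), not the model from the paper:

    ```python
    import random

    def bayes_update(prior, likelihood_true, likelihood_false):
        """Posterior probability of p after a report, by Bayes' rule."""
        num = prior * likelihood_true
        return num / (num + (1 - prior) * likelihood_false)

    def simulate(threshold, n_agents=20, rounds=10, reliability=0.7, seed=1):
        """Agents inquire privately, assert only above the threshold,
        and update on others' assertions. Returns the mean credence in
        the true proposition p -- a crude stand-in for veritistic value."""
        rng = random.Random(seed)
        creds = [0.5] * n_agents  # p is stipulated to be true
        for _ in range(rounds):
            # Private inquiry: correct result with probability `reliability`.
            for i in range(n_agents):
                if rng.random() < reliability:
                    creds[i] = bayes_update(creds[i], reliability, 1 - reliability)
                else:
                    creds[i] = bayes_update(creds[i], 1 - reliability, reliability)
            # Communication: sufficiently confident agents assert to everyone.
            for i in range(n_agents):
                if creds[i] >= threshold:
                    for j in range(n_agents):
                        if j != i:
                            creds[j] = bayes_update(creds[j], reliability, 1 - reliability)
                elif creds[i] <= 1 - threshold:
                    for j in range(n_agents):
                        if j != i:
                            creds[j] = bayes_update(creds[j], 1 - reliability, reliability)
        return sum(creds) / n_agents

    v_low = simulate(threshold=0.9)       # permissive norm of assertion
    v_cert = simulate(threshold=0.999999) # near-certainty norm
    ```

    Comparing the two scores across parameter settings is the kind of experiment the paper runs, with the certainty rule winning only in the idealized limit.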

  • 3. Trust and the Value of Overconfidence: A Bayesian Perspective on Social Network Communication
    Synthese 191(13):1991-2007, 2014 (w/ Erik J. Olsson)

    The paper presents and defends a Bayesian theory of trust in social networks. In the first part of the paper, we provide justifications for the basic assumptions behind the model, and we give reasons for thinking that the model has plausible consequences for certain kinds of communication. In the second part, we investigate the phenomenon of overconfidence. Many psychological studies have found that people think they are more reliable than they actually are. Using a simulation environment developed to make our model computationally tractable, we show that inquirers in our model are indeed sometimes better off, from an epistemic perspective, overestimating the reliability of their own inquiries. We also show, by contrast, that they are rarely better off overestimating the reliability of others. On the basis of these observations we formulate a novel hypothesis about the value of overconfidence. [pdf]

Theses, etc.

  • Solomonoff Induction: A Solution to the Problem of the Priors?, MA thesis

    In this essay, I investigate whether Solomonoff's prior can be used to solve the problem of the priors for Bayesianism. In outline, the idea is to give higher prior probability to hypotheses that are "simpler", where simplicity is given a precise formal definition. I begin with a review of Bayesianism, including a survey of past proposed solutions to the problem of the priors. I then introduce the formal framework of Solomonoff induction and go through some of its properties, before finally turning to some applications. After this, I discuss several potential problems for the framework. Among these are the facts that Solomonoff's prior is incomputable, that it is highly dependent on the choice of universal Turing machine used in its definition, and that it assumes the hypotheses under consideration are computable. I also discuss whether a bias toward simplicity can be justified. I argue that there are two main considerations favoring Solomonoff's prior: (i) it allows us to assign strictly positive probability to every hypothesis in a countably infinite set in a non-arbitrary way, and (ii) it minimizes the number of "retractions" and "errors" in the worst case. [pdf]
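    For reference, one standard formulation of the prior under discussion: relative to a universal prefix Turing machine U, Solomonoff's prior assigns to a finite binary string x the weight

    \[
    M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|},
    \]

    where the sum ranges over all programs p on which U outputs a string beginning with x. Shorter programs carry exponentially more weight, which is the precise sense in which simpler hypotheses receive higher prior probability; the dependence on the choice of U is one of the problems the thesis discusses.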