But Have They Engaged with the Arguments?
philiptrammell.com/blog/46  ·  29 Dec 2019  ·  #46
Some smart people, including some of my friends, believe that advanced AI poses a serious threat to human civilization in the near future, and that AI safety research is therefore one of the most valuable uses, if not the very most valuable use, of philanthropic talent and money. But most smart people, as far as I can judge from their behavior—including some, like Mark Zuckerberg and Robin Hanson, who have expressed their thoughts on this explicitly—do not believe this. (I, for whatever it's worth, am agnostic.) In my experience, when someone points out the existence of smart skeptics like these, believers often respond: “Sure, those people dismiss AI risk. But have they engaged with the arguments?”

If the answer is no, it seems obvious that those who have engaged with the arguments have nothing to learn from these skeptics' judgment. If you aren't worried about rain because you saw a weather report that predicts sun, and I saw that report too but have also seen an updated report that now predicts rain, I should predict rain—not update on your rain skepticism, however smart you may be. Likewise, if Mark Zuckerberg dismisses AI risk because his one exposure to the idea was a Paul Christiano blog post from 2015 with a mistake in it, which a 2016 blog post corrects, then it seems that we who have read both should not update our beliefs at all in light of Zuckerberg's opinion. And when we look at the distribution of opinion among those who have really “engaged with the arguments”, we are left with a substantial majority—maybe everyone but Hanson, depending on how stringent our standards are here!—who do believe that, one way or another, AI development poses a serious existential risk.

But something must be wrong with this inference, since it works for all kinds of mutually contradictory positions. The majority of scholars of every religion are presumably members of that religion. The majority of those who best know the arguments for and against thinking that a given social movement is the world's most important cause, from pro-life-ism to environmentalism to campaign finance reform, are presumably members of that social movement. The majority of people who have seriously engaged with the arguments for flat-earthism are presumably flat-earthers. I don't even know what those arguments are.

What's going wrong, I think, is something like this. People encounter uncommonly-believed propositions now and then, like “AI safety research is the most valuable use of philanthropic money and talent in the world” or “Sikhism is true”, and decide whether or not to investigate them further. If they decide to hear out a first round of arguments but don't find them compelling enough, they drop out of the process. (Let's say that how compelling an argument seems is its “true strength” plus some random, mean-zero error.) If they do find the arguments compelling enough, they consider further investigation worth their time. They then tell the evangelist (or search engine or whatever) why they still object to the claim, and the evangelist (or whatever) brings a second round of arguments in reply. The process repeats.

As should be clear, this process can, after a few iterations, produce a situation in which most of those who have engaged with the arguments for a claim beyond some depth believe in it. But this is just because of the filtering mechanism: the deeper arguments were only ever exposed to people who were already, coincidentally, persuaded by the initial arguments. If people were chosen at random and forced to hear out all the arguments, most would not be persuaded.
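To make the filtering concrete, here is a minimal simulation sketch of the process just described. All of the numbers (population size, number of rounds, the "true strength", the noise scale, the persuasion threshold) are made up for illustration, not anything from the post: each person perceives each round's argument as its true strength plus mean-zero noise, and only hears the next round if the current one persuaded them.

```python
import random

random.seed(0)

N_PEOPLE = 100_000      # hypothetical population size (made up)
N_ROUNDS = 5            # rounds of successively deeper arguments (made up)
TRUE_STRENGTH = 0.4     # each round's "true strength" (made up)
NOISE_SD = 1.0          # scale of the random, mean-zero error (made up)
THRESHOLD = 0.0         # how compelling an argument must seem to keep someone engaged

def seems_compelling():
    """One person's impression of one round: true strength plus mean-zero noise."""
    return TRUE_STRENGTH + random.gauss(0.0, NOISE_SD) > THRESHOLD

heard_final_round = 0   # self-selected: persuaded by every round before the last
engaged_believers = 0   # heard the final round and found it compelling too
forced_believers = 0    # would accept every round if forced to hear them all

for _ in range(N_PEOPLE):
    impressions = [seems_compelling() for _ in range(N_ROUNDS)]
    # Self-selection: you only hear round k+1 if round k persuaded you.
    if all(impressions[:-1]):
        heard_final_round += 1
        engaged_believers += impressions[-1]
    # Forced exposure: everyone hears everything; "believers" accept every round.
    forced_believers += all(impressions)

print(f"Among those who engaged to full depth: "
      f"{engaged_believers / heard_final_round:.0%} believe")
print(f"If everyone were forced to hear every argument: "
      f"{forced_believers / N_PEOPLE:.0%} would believe")
```

With these particular (arbitrary) numbers, roughly two thirds of the people who make it to the final round end up believers, while only around one in eight randomly chosen people would accept the whole case if forced to hear it all: most of the deeply engaged believe, and most people would not.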

Perhaps more disturbingly, if the case for the claim in question is presented as a long fuzzy inference, with each step seeming plausible on its own, individuals will drop out of the process by rejecting the argument at random steps, each of which most observers would accept. Believers will then be in the extremely secure-feeling position of knowing not only that most people who engage with the arguments are believers, but even that, for any particular skeptic, her particular reason for skepticism seems false to almost everyone who knows its counterargument.
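A stylized back-of-the-envelope version of this, again with made-up numbers rather than anything from the post:

```python
# Made-up numbers: a ten-step chain of reasoning in which each step, taken on
# its own, seems right to 90% of observers, with rejections independent across steps.
p_step, n_steps = 0.9, 10

accept_whole_chain = p_step ** n_steps
print(f"Accept every step (believe the conclusion): {accept_whole_chain:.0%}")      # ~35%
print(f"Reject the chain at some step or other:     {1 - accept_whole_chain:.0%}")  # ~65%
print(f"Reject any one particular step:             {1 - p_step:.0%}")              # 10%
```

So most people who hear the whole case reject it somewhere, yet each individual sticking point is one that only about one observer in ten shares: the secure-feeling position described above.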

The upshot here seems to be that when a lot of people disagree with the experts on some issue, one should often give a lot of weight to the popular disagreement, even when one is among the experts and the people's objections sound insane. Epistemic humility can demand more than deference in the face of peer disagreement: it can demand deference in the face of disagreement from one's epistemic inferiors, as long as they're numerous. They haven't engaged with the arguments, but there is information to be extracted from the very fact that they haven't bothered engaging with them.

This seems like a good model to keep in mind in these kinds of situations, but how do you distinguish between a world like the above and one in which the issue actually IS complex and only a few people have followed the necessary inferential steps? It seems like they would look pretty similar from the outside. Maybe run tests like "Show 100 random intelligent people the full argument and provide them with a knowledgeable partner to answer any questions" and try this on a few candidates - AI risk, climate change, the moral case for not eating factory-farmed animals, the EMH argument against stock-picking, flat earth, Singer-esque arguments for donating lots to charity, complex bits of feminist theory, a mathematical proof, Jared Diamond's case for geographic influence on long-term civilizational trajectory, etc. See which ones convince the most initial skeptics?
RavenclawPrefect  ·  30 Dec 2019 5:19 PM
I totally agree. I just mean to say that when we come across a claim which is endorsed by most of the people who have engaged with its arguments, we can't infer that it's probably right, not that we can infer that it's probably wrong. As for how we *can* infer whether a given claim is probably right or wrong... well, I suppose that's sort of the whole problem of epistemology! But yeah, "showing random groups of people all the arguments" seems like it would be a good test, if you can pull it off somehow. This is already basically how we determine legal disputes, of course--but maybe it would be good if there were a common law tradition of being able to conscript people now and then for other important disputes. (Or other RCTs more generally...)
pawtrammell  ·  30 Dec 2019 5:38 PM
This seems right to me. Another way of putting it would be that 'object level' epistemic superiority/inferiority is distorted by a selection effect on sympathy to a particular object level take. This can also be tricky in trying to adjudicate between opposed camps of experts: the Christian community spends a lot more aggregate effort on Christian apologetics than the Atheist community does on the opposite, so after seeing a few Christian/Atheist debates one may have to reweigh one's impression of the 'balance of argument' by this factor. (Per RP) the gold standard would be trial exposure of some epistemically virtuous group to the arguments, but there may be some imperfect observational indicators. One would be if one camp has more epistemic virtue in factors orthogonal to the object level disagreement, although these are tricky to find (e.g. smart people will be more likely to come across more exotic beliefs, thus believers of these will tend to be smarter than sceptics). Another might be differential migration over time: if there's a trend of initially sceptical/believing folks changing their mind, each person is loosely acting as their own control.
Gregory Lewis  ·  2 Jan 2020 12:23 PM
Btw, there is a discussion of this post happening on LessWrong right now: https://www.lesswrong.com/posts/EyS5xzuLzGrTJd9bn/the-epistemology-of-ai-risk#msetJ2Z6WtDus7rsW
evhub  ·  28 Jan 2020 11:13 PM