BARBELITH underground
 



The Simulation Argument

 
 
Glandmaster
12:48 / 23.11.04
ABSTRACT. This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. Source.
 
 
Lurid Archive
13:07 / 23.11.04
We had a discussion in the Lab on this here, but it might get a different treatment in the Headshop.

As I said in the other thread, I don't think you can plausibly use probability in the way he does, but it is still fun to think about.
 
 
Mirror
21:41 / 24.11.04
This sort of argument is only really interesting if you postulate that knowledge of the simulation by the simulated beings could affect the outcome of the simulation. In much the same way, arguments about the existence of God are only useful if you accept the notion that petitionary prayer can result in an objective effect in the phenomenal world.

In both cases, I think that the fascination with these sorts of questions boils down to a desire to directly control systems that are generally beyond human ability to manipulate, and a hope that there is some sort of universal justice or order.

We don't like the arbitrary.
 
 
Tom Coates
07:49 / 25.11.04
My problem with this approach to the world is that it seems to be based on entirely chaotic assumptions about the future. That is to say, we - by definition - have no idea of how a posthuman might think about our time, or how they might wish to contextualise or interact with it. We can make assumptions, but there seems to be no way of determining how likely our assumptions are. This stuff reminds me of Bill Gates' comment that we always overestimate change in the next ten years and underestimate it over the next twenty. If we're talking about people hundreds, thousands, or tens of thousands of years from now, then what chance do we have of realistically modelling what their behaviour might be, or the context in which they're operating?
 
 
Glandmaster
20:41 / 25.11.04
I took Bostrom's argument to be an extension of the philosophical problem of self, as, for example, Descartes' second meditation is examined by Strawson in Self, Mind and Body. Strawson asks us to consider the implications of a 'brain in a vat' to highlight his problem with Descartes' argument that, because of his ability to raise doubts, he must exist. Descartes himself has trouble with this in the first meditation, but he uses a malicious demon as a device that could be feeding him with input.

Descartes lived at a time when you were careful about what you said because of the church; Strawson, if still alive, would be about 65; and Bostrom is at or close to his peak, thus the difference in device.

Sorry, I think I expected you all to guess that, but that's why I posted it in the Headshop (not that I looked for it in the Lab...).
 
 
Glandmaster
20:51 / 25.11.04
I will take the time to check out the other thread in the Lab, as you peeps are obviously of a technical bent, which may lead this non-tech monkey to new ground. But to reinforce the fact that this is a philosophical argument, see below:

My [Bostrom's] reply to Weatherson's paper. I argue that he has misinterpreted the relevant indifference principle and that he has not provided any sound argument against the correct interpretation, nor has he addressed the arguments for this principle that I gave in the original paper. There are also a few words on the difference between the Simulation Argument and traditional brain-in-a-vat arguments, and on so-called epistemological externalism.
 
 
Lurid Archive
18:04 / 28.11.04
I thought I'd write a few more thoughts on this.

OK. Let's grant substrate independence and assumptions about computing power and the possibility of AI and ancestor simulations. But having granted them, and in some sense forgotten them, let's remember that we have agreed to forget them.

So the first part of Bostrom's argument says (more or less) that either lots of ancestor simulations get run, because if you can run one you can run lots, or hardly any get run. Fair enough.

He then goes on to state a "bland indifference principle" that says, assuming lots of these simulations get run, we should believe that we are simulations.

(Actually, Bostrom is only trying to justify a disjunction in the first instance, which I think is more or less ok. But clearly, the fuss is about arguing that we should believe we are sims, given that we are not going to die out.)
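For concreteness, the indifference step can be sketched numerically. The snippet below is a rough illustration, not Bostrom's own notation: it assumes his core fraction, where f_p is the fraction of civilisations that reach a posthuman stage and n_bar is the average number of ancestor-simulations each such civilisation runs, so the proportion of human-type observers who are simulated comes out as f_p·N / (f_p·N + 1). Under the bland indifference principle, that proportion is also your credence that you are a sim.

```python
def sim_fraction(f_p, n_bar):
    """Fraction of human-type observers who are simulated,
    assuming the form f_sim = f_p*N / (f_p*N + 1), where
    f_p is the fraction of civilisations reaching a posthuman
    stage and n_bar the average number of ancestor-simulations
    each such civilisation runs."""
    return (f_p * n_bar) / (f_p * n_bar + 1)

# Even a tiny posthuman fraction swamps the single unsimulated
# history once n_bar is large:
print(sim_fraction(0.001, 10**6))  # close to 1 (1000/1001)
print(sim_fraction(0.001, 0))      # 0.0: no simulations ever run
```

This is why the fuss concentrates on the third disjunct: if simulations are run at all, they are run in numbers that dominate the count of observers.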

Now here is where I think there is a trivial sounding problem that actually puts the argument in the same class as brain in a vat type scenarios. The thing is, the indifference principle is only true if you assume that we have no reliable information about whether we are sims or not, as Bostrom notes.

But in order to argue that we have no knowledge about whether we are sims or not, our experiences must be, to a certain extent, deceptive. However, if our experiences are deceptive, there is no good reason to assume that they are deceptive, or potentially deceptive, about only this one thing.

For instance, we could return to the assumptions on computing power we had at the start and posit some alternative scenarios. Perhaps AI is possible but extremely difficult, both in terms of time and resources, so that very few ancestor simulations could ever be run. What credence should we attach to this possibility? Well, if we accept Bostrom's arguments at the start, not much.

But if we accept that we are likely to be simulations, and that our experiences are manufactured, then how can we suppose that the simulation is a faithful representation of the physics in the "real world"? I think we would be forced to admit that our doubt about being "genuine" would have to be coupled with doubt about the physical constraints on computing power. (You might try to argue about the probable nature of an ancestor simulation, but any basis of plausibility simply begs the question.)

Ironically, I think the argument is only convincing if you believe it isn't valid.

The weak link for me is the claim that our experiences give us no evidence which lets us decide whether we are sims or not. The problem is that we can never have any idea about this sort of question, and the only sensible response to this kind of skepticism (and I think this is where Bostrom is envatting brains) is to act as if it isn't true, even though the possibility exists that it is true and we are being routinely deceived.

Actually, I think Bostrom is a step away from realising this. He argues that as the proportion of sims approaches 1, so should our credence that we are sims approach 1, because at the number 1, we would be sure to be a sim. Except that, at the number 1, we have the situation where nothing came before posthumans except simulations. But this is clearly nonsense. As we approach that number 1, we should be increasingly convinced that Bostrom's argument is invalid.
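The limit point here can be checked numerically, again assuming the fraction takes the form f_p·N / (f_p·N + 1) (my reading of Bostrom's formula, hedged as before): for any nonzero posthuman fraction, the simulated proportion tends towards 1 as the number of simulations grows, but it only equals 1 in the degenerate case where no unsimulated history exists at all.

```python
def sim_fraction(f_p, n_bar):
    # Assumed form of the fraction of simulated observers:
    # f_sim = f_p*N / (f_p*N + 1); see Bostrom's paper for
    # his exact notation.
    return (f_p * n_bar) / (f_p * n_bar + 1)

# The fraction creeps towards 1 but never reaches it for any
# finite number of simulations, so "credence exactly 1" would
# require that nothing unsimulated ever existed:
for n in (10, 10**4, 10**8):
    print(n, sim_fraction(0.01, n))
```

For finite n the output stays strictly below 1, which is the gap the post above is pointing at: the certainty case is exactly the case the argument itself rules out.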

(This is more or less the point Weatherson develops, I think, though he uses a lot more symbolic manipulation.)
 
 
Simulacra
14:08 / 14.02.05
There is no such thing as a simulation. There is only hypersimulation.
 
 
Spaniel
09:18 / 15.02.05
Um, please to explain of what you are talking?
 
  