# Getting commented out might be worse than getting rebooted.

You do know that other people can read your thoughts, right? (What? I can see by what you are thinking that no one told you! Oh dear.)

Did you also know that Dark Matter is really just the sides of the jar that someone keeps our universe in? If you’ve ever worked with certain kinds of computer simulation then you’ll know what I mean. If you create a two-dimensional world for simulated creatures to move around in, there is a problem with the edges. If, for practical reasons, the world you create is a big square matrix of possible spots something can “live” on, then there must be an outer edge, and whatever interactive processes or movements you have your simulated creatures doing won’t work properly at an edge. One way to handle this is to have anything that goes “off” the edge simply reappear on the other side of the matrix (some old-style video games do this), but that creates a whole other set of problems. You could also just delete any of your simulated creatures that get too near the edge, but then you lose longitudinal experience, which is a bummer if that is part of your research (like learning, or aging, or long-term cumulative effects of decision making).
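The wrap-around option is the easy one to sketch. A minimal version, assuming a square grid of hypothetical size `GRID_SIZE` (the name and size are mine, not anything standard):

```python
# Toroidal ("wrap-around") edge handling: a creature that steps off one
# side of the grid reappears on the opposite side, as in old video games.
# GRID_SIZE is a made-up world dimension for illustration.

GRID_SIZE = 100

def wrap(x, y, size=GRID_SIZE):
    """Map any (x, y) coordinate back onto the size-by-size torus."""
    return x % size, y % size
```

So `wrap(-1, 100)` lands the creature at `(99, 0)`: it walked off the left edge and off the bottom, and reappeared on the far side of each. The "whole other set of problems" is that creatures near opposite edges are now neighbors, which breaks any assumption of spatial locality.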

Yet another way to handle edges in simulations is to introduce a force. Normally, you would have calculations that determine the direction and distance that a simulated creature moves in a given iteration. One element of that calculation can be an additional vector, as strong or weak as you like, summed in with everything else. Near the middle of your matrix-world, the vector has a random direction and a magnitude of zero, so when summed into the equation it has no effect. As a simulant approaches the edge, the direction becomes non-random and biased towards the middle of the matrix, but with a low magnitude (or strength). This way, creatures bias their movement away from the edges and many (but not all) possible edge encounters are avoided. But eventually they will blunder towards the edge anyway, so very near the edge of the matrix, you set the vector to point straight back towards the middle of the simulated universe and give it a high value, so no matter what other factors are involved in the calculation, the CRAVE (central reorientation additive vector effect, or whatever you call it) is overwhelming.
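Here is a sketch of such a CRAVE term. Everything about it is invented for illustration (the grid size, the band widths, and the magnitudes 0.5 and 100.0 are arbitrary knobs, not from anyone's actual simulation):

```python
import math
import random

GRID_SIZE = 100                       # hypothetical square-world dimension
CENTER = (GRID_SIZE / 2, GRID_SIZE / 2)
SOFT_EDGE = 0.35 * GRID_SIZE          # distance from center where gentle bias begins
HARD_EDGE = 0.45 * GRID_SIZE          # beyond this, the pull is overwhelming

def crave(x, y):
    """Central reorientation additive vector effect.

    Returns a (dx, dy) vector to sum into a creature's movement
    calculation.  Near the middle it is a zero-magnitude vector in a
    random direction; in the soft-edge band it points weakly toward
    the center; very near the edge it points strongly back toward
    the middle, swamping everything else in the sum.
    """
    dx, dy = CENTER[0] - x, CENTER[1] - y
    dist = math.hypot(dx, dy)
    if dist < SOFT_EDGE:                      # middle: random direction, zero magnitude
        angle = random.uniform(0, 2 * math.pi)
        return 0.0 * math.cos(angle), 0.0 * math.sin(angle)
    ux, uy = dx / dist, dy / dist             # unit vector toward center
    if dist < HARD_EDGE:                      # gentle bias band
        return 0.5 * ux, 0.5 * uy
    return 100.0 * ux, 100.0 * uy             # overwhelming pull
```

A creature at the left wall, `crave(0, 50)`, gets pulled hard toward the middle with `(100.0, 0.0)`; one in the soft band at `(10, 50)` gets only a gentle `(0.5, 0.0)` nudge; anything near the center feels nothing at all.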

To the creatures in your simulation, this would be like Dark Matter.

There is a theory that we are all part of a simulation or a game being run on a computer in some “other” universe. The person who suggested that (can’t find the reference, sorry) also suggested that we not react to this idea too strongly or whoever is running the simulation or video game may think something is wrong and reboot.

What that theory does not include, if I recall correctly, is the idea that the video game or simulation is not about us. It’s about something else. Bacteria. Giraffes. Snow. Whatever. The humans were added along with a bunch of other elements for some reason or another, for reasons we can’t possibly know, but that are not too important. If we are in a Beta version of the simulation, we might well get commented out in the next run, which is a lot worse than just being rebooted.

So be careful. Don’t look. He might be looking. Just keep your head down and act like nothing is wrong…

1. #1 Stew
March 4, 2011

Ha ha ha ha!

[but you knew I was thinking that]

2. #2 Penelope
March 4, 2011

Catcode, version 2:

Define Bugs;
Define mouses;
Define Sky;
Define Water;
# Define Humans;
# Define Dogs;
Define Cats;

3. #3 twpenn52
March 4, 2011

You’re talking about Bostrom’s Simulation Argument.

ABSTRACT: This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

It’s interesting to think about. There’s no reason that this has to be the “real” universe I guess.

March 4, 2011

twpenn, thanks, that’s it.

Mum’s the word.

5. #5 Tree
March 4, 2011

Our limited senses give us only a tiny glimpse of objective reality. We reach (the scientific method is still one of our best tools) but mostly we live in a consensual hallucination.

The edges are there; we just agree not to see them.

(evil grin)

6. #6 Benton Jackson
March 5, 2011

Jack Chalker wrote a few stories along that line. The shortest is the single-book “The Devil Will Drag You Under”. The longest is the Well World series.

7. #7 howard.peirce
March 5, 2011

It gets even shorter than that, Jackson, if you want it.

8. #8 Andrew G.
March 5, 2011

This argument is played with (not as a major theme) in Banks’ The Algebraist, in which the official state religion of the Mercatoria has as its overt objective the disruption of any such simulation by getting everyone to believe in its existence.

March 5, 2011

I’d like to write science fiction sometime but there are two problems. 1) I’m told that you can’t have the same theme twice. So, I could never write a story about a tentacled ancient alien behemoth thingie that hibernates under the south Pacific but taps into the vast human subconsciousness even though it (the beast) existed before humans existed. Etc. And 2) I assume that one must be much more read up in a genre than I am to write in it.

And I don’t see how number 2 does not cause number 1 to happen, yet without number 2, it would be impossible to systematically avoid number 1.

Of course, I question the validity of assertion 1. There is almost nothing in Harry Potter that is original, yet it is brilliant and successful fiction. Perhaps writing for a juvenile audience relaxes the second requirement.

10. #10 Timberwoof
March 6, 2011

I fool about on Second Life from time to time. (Okay, just about every day to unwind from work.) It is a simulated 3D space with different kinds of matter (phantom, nonphysical, physical, avatars, particles) and a very strange flow of time. I happen to know that interaction between objects is mediated by scripts or by the physics engine … but how would an inhabitant of Second Life know that?

Second Life has a Turing Complete programming language that has no arrays or data structures, just lists. The Wikipedia article on Turing Completeness says that anything that can implement a Turing Machine is one … since the universe can implement a Turing Machine, it is reasonable to say it is one.
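The lists-only point is easy to demonstrate. A toy sketch (in Python rather than LSL, and with a trivial machine I made up for the purpose) of a Turing machine built from nothing but lists: the tape is a list, and the transition table is a list of lists.

```python
# A Turing machine using only lists: the tape is a list of symbols,
# and each rule is a list [state, read, write, move, next_state].
# State -1 means halt.  The machine below is a trivial example invented
# here: it writes 1s rightward until it finds a 1, then stops.

def run(tape, rules, state=0, head=0, max_steps=100):
    """Run the machine for at most max_steps, returning the final tape."""
    for _ in range(max_steps):
        if state == -1:                  # halt state reached
            break
        while head >= len(tape):         # grow the tape like an unbounded one
            tape.append(0)
        symbol = tape[head]
        for s, r, w, mv, nxt in rules:   # find the matching rule
            if s == state and r == symbol:
                tape[head] = w
                head = max(0, head + mv)
                state = nxt
                break
    return tape

rules = [[0, 0, 1, 1, 0],    # state 0, read 0: write 1, move right, stay in 0
         [0, 1, 1, 0, -1]]   # state 0, read 1: halt
```

Running it on the tape `[0, 0, 0, 1]` fills in the zeros and halts, leaving `[1, 1, 1, 1]`. Whether simulating the universe inside the universe would "crash the sim" is left as an exercise.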

Implementing a Second Life simulator in LSL would probably crash the sim. What would happen if you tried to simulate the universe in the universe? Would it crash?

On science fiction: Good science fiction has to be good science (with the expected willing suspension of disbelief for certain common tropes) and good fiction. That last bit means it has to be abut people—or, at any rate, characters the reader can identify with and care about. So go ahead and write about your tentacled behemoth … but write about how it would affect the sanity of a fundamentalist preacher from Nebraska, and how certain investigators would inevitably stumble upon some very weird facts.

March 6, 2011

Timberwoof: Actually, on a separate but related matter, I was thinking how cool it would be to simply rewrite Lovecraft’s Cthulhu stories, removing (and totally redefining) the deeply offensive racist trope. His problem is not like Twain’s “N-word” but rather the total denigration of a huge portion of humanity. It could be done so differently…. It is very tempting. The stories are not that long. It’s just a text file sitting there on my hard drive screaming out for anthropological reinterpretation. It is calling me. Calling. Ever calling.

12. #12 Keith Harwood
March 6, 2011

Many years ago I wrote a posting to sci.physics in this regard. IIRC it included the following observations.

Finite speed of light: A programming trick to avoid overflow in calculating velocities.

Uncertainty principle: Position and momentum held in the one finite data structure. The more precision required for one, the less available for the other.

Black holes: Underflow not properly handled; a bug.

Hawking radiation: Rounding errors when numbers become denormalised near a black hole.

Clairvoyance: Restored from checkpoint earlier in the simulation; some global variables not properly cleared and contain information from later.

There was more besides, but this is all I can remember.