Fractals, fractals everywhere.
In 2003, Swedish philosopher Nick Bostrom published a paper called
Are You Living in a Computer Simulation? He wasn't the first to have that thought. Notably, sci-fi author Philip K. Dick riffed on the theme of people who are—both knowingly and unknowingly—living in simulations, and did it a decade before Bostrom was born. You could make a good argument that Dick rang just about every change on this theme, many of which don't exactly take an optimistic view of how we handle a choice between facing everyday life and escaping to candy-colored fun land (how is your collection of Perky Pat accessories?).
Even Dick was a newcomer. If you step away from computers, people have been having thoughts about the unsatisfactory realness of reality for a long time. Just look over at that cave wall. Is that Plato's shadow?
Anyway, Bostrom neatly condensed the current thinking about "the simulation argument."
We are almost certainly living in a computer simulation.
Wait. What? Almost certainly? Are you sure? Well, no, but almost.
Just take your simulated hand, move your simulated mouse, and click on that other simulated hand to read more.
The arguments that we're not living in a simulation fall into three broad categories. 1) This feels real. 2) Computers aren't that good. 3) Who would want to simulate me when I spend all my time trying to forget my cruddy life in the first place? Let's tackle them in that order.
Yes! It does feel real, doesn't it? Sometimes not so much, when you're all cuddled up and the bed is just so warm and it's cold outside and gee mom couldn't you just give me a few more minutes? Sometimes much more so. I've had a kidney stone. It felt real.
Still, there's something about this real that we all kind of accept without thinking about it. That is, none of it's real, and we know it. By that I mean that we know that nothing we experience, day in or day out, is actually experienced at our fingertips, or at our naughty bits, or even in our who-the-hell-thought-pain-sensors-in-there-was-a-good-idea kidneys. Sure, there may be something out there, but our reality begins and ends with signals generated in about three pounds of pinky-gray greasy stuff encased in bones. Everything. Everything. Everything. Is absolutely, certainly, without a doubt a simulation at its most basic level, one we generate for ourselves based on programming that's wired (for want of a better term) into our skulls. We even generate ourselves while we're at it. I'd say get your mind around that one, but you can't, because ... well, your mind is stuck inside the concept.
We can never know how accurate or complete this simulation of reality is, and with rare exceptions we have a very hard time determining how comparable our own simulation is with that generated by the next three-pound calculating lump (Assignment: give me 200 words on why your perception of blue is exactly like mine. Go!).
The simulation we build for ourselves is assumed to be based on the inputs of the various sensors attached to the skull-blob we call home, but ... what if it's not? You already know that what you're experiencing is only the results of electrons and chemical cascades tripping around about 100 billion neurons. You're not reading these words on a screen, you're interpreting signals from a visual processor attached to a photo-chemical light sensor. Or at least, you think you are. The input source could be simulated. The interpretation of the input could be simulated. The whole brain they're attached to could be simulated.
Only computers aren't good enough to do that! Are they? No. No, they're not. Or ... well. Hmm. After all, when you go to the movies these days, the whole screen is rendered by a little processor, specialized to that task, and about the size of your thumbnail. That's not covering your whole field of vision, of course, or showing the world with all the detail that your eyes can distinguish. To do that it would take, oh, maybe $10 worth of silicon. Get back to me next year. It'll be cheaper.
That's recorded imagery, of course, "pre-rendered" in computer speak. It's a lot easier to show a realistic world than it is to create it on the fly in good ol' real time. Generating a convincing visual world with full interaction, where you can zig left instead of following a camera to the right, is a much tougher task. Computationally intensive. Enough so that what's possible for the chips in your phone, game machine, or desktop is kind of ... unrealistic. You can look at a frame of Call of Duty 4 and see right away that it's not real. There's just not enough detail, and the way light scatters and the way the textures behave is subtly off. It's real-ish, but not quite right.
Wolfenstein 3D (1992), Battlefield Vietnam (2004), Call of Duty 4 (2007)
The best computers, and the best-written simulations, can push the boundaries, but only in limited ways. I have a developer's version of Oculus Rift, one of several options in a new wave of virtual reality gear set to drop on the public next year, attached to a couple thousand bucks' worth of processors and graphics cards. But even with a good computer and the latest in unfashionable tech eyewear, the world in which you're immersed is still several steps away from being a convincing substitute for the everyday. Even with the simulation limited to a small room with a few objects, you can see the problems if you look carefully.
Still, the fact that you have to look carefully to see the wrongness of the image is itself a huge improvement that's happened in an extremely short time. Take a look at the triptych of game images above. Those are images from games that were regarded as cutting edge in their time. (I apologize that they're all games concerned with shooting people. I didn't make them. In fact, with the exception of the one on the left, I never even played them. Shooting people seems to feature heavily in the category of things we find amusing, which is a fact that makes me cheer for the idea that we're only a simulation. Anyway ....)
Part of this is the inevitable march of improving silicon. In 1991, the best chip you could grab for your PC zipped along at 50 MHz, with the smallest components being about 1,000 nanometers across. Go shopping today, and for about the same cash you can grab a processor that hums at close to 4 GHz, built from parts that are a teensy 14 nanometers. It's a rate of improvement that, despite many warnings of limits ahead, shows no sign of actually slowing. Advancements in much of the hardware specific to generating visuals and sound have come along just as quickly. Give it another 10 years, and people will be wondering why anyone ever thought that generating a realistic environment was such a big deal.
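The arithmetic behind that march is easy to check. Here's a back-of-envelope sketch, using the round figures quoted above (50 MHz in 1991, roughly 4 GHz now):

```python
import math

# Back-of-envelope: how fast have consumer CPU clocks improved?
# Figures from the text: ~50 MHz in 1991, ~4 GHz around 2015.
start_mhz, end_mhz = 50, 4000
years = 2015 - 1991

speedup = end_mhz / start_mhz    # 80x faster
doublings = math.log2(speedup)   # ~6.3 doublings in 24 years
print(f"{speedup:.0f}x faster, one doubling every {years / doublings:.1f} years")
# → 80x faster, one doubling every 3.8 years
```

A doubling roughly every two to four years is the familiar Moore's-law-ish cadence, and it compounds fast.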
...since the maximum human sensory bandwidth is ~10^8 bits per second, simulating all sensory events incurs a negligible cost compared to simulating the cortical activity. We can therefore use the processing power required to simulate the central nervous system as an estimate of the total computational cost of simulating a human mind.
— Nick Bostrom
What Bostrom is saying here is that it's actually quite easy to simulate everything you experience. It's the simulation of you, that little cogitating piece at the middle, that's the tougher problem. Only it's less tough all the time, both because of all that improving hardware and because of another important factor: computer programmers are tricksy bastards.
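Taking Bostrom's figure at face value, a quick bit of arithmetic shows just how small that sensory stream really is (the 80-year lifespan is my assumption, not his):

```python
# Rough arithmetic on Bostrom's figure: if the senses deliver ~10^8 bits
# per second, how much raw input is that over an (assumed) 80-year life?
bits_per_second = 10**8
seconds_per_year = 60 * 60 * 24 * 365
lifetime_bits = bits_per_second * seconds_per_year * 80

petabytes = lifetime_bits / 8 / 10**15
print(f"~{petabytes:.0f} petabytes of raw sensory input in a lifetime")
# → ~32 petabytes
```

A few dozen petabytes is data-center pocket change, even today. Simulating the brain doing the experiencing is the expensive part.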
Even as processor power has grown exponentially, programmers have also been racing forward in their ability to cheat. To make something look much more realistic than it is. Mostly, it's a matter of realizing that the more distant or obscured something is from the viewer, the less it matters and the more you can simplify how it's represented.
Take a simple example. You're standing on one side of a building and see a car go behind it on the other side. The car is completely obscured by the building. Maybe you can still hear it. Maybe you can even see a few bits of reflection in the windows of another building. To you, being more than a year old and having a pretty good handle on this whole object persistence thing, the car is still there, only hidden. For a sneaky computer bastard, trying to save every cycle and byte available, the situation is a bit different. Your average SCB looks at this situation and says, "Hey, there's no point rendering every follicle on the driver's head. This car is too far away to make out that kind of detail. Instead, I'll substitute a very simple model, using only enough data to give the viewer the impression that the car continues." That kind of substitution happens at every level. It's not just simpler images, but a looser interpretation of physical responses, simplified models for the AI of characters, the simple removal of things unseen.
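The SCB's substitution trick can be sketched as a crude level-of-detail picker. Everything here (the distance thresholds, the model names) is made up for illustration; no real engine is this simple, but the shape of the idea is right:

```python
# Hypothetical level-of-detail (LOD) picker: the farther an object is from
# the viewer, the cruder the model the renderer substitutes for it.
def pick_model(distance_m: float) -> str:
    if distance_m < 20:
        return "full_detail"      # every follicle on the driver's head
    elif distance_m < 100:
        return "simplified_mesh"  # shape and color, no interior
    elif distance_m < 500:
        return "billboard"        # a flat sprite that always faces the camera
    else:
        return "culled"           # not rendered at all

print(pick_model(5), pick_model(300), pick_model(2000))
# → full_detail billboard culled
```

The viewer never notices the swap, because the swap happens exactly where the viewer can't see the difference.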
Ever sat in a traffic jam on a long stretch of highway and looked at multiple lanes of cars extending off to the horizon? It's kind of frightening to think that all the occupants of all those other cars are at the center of their own lives and their own stories, in which you feature not at all. It's even more frightening to think ... maybe that's not the case. Maybe there's only enough "code" in the distant cars to keep them looking more or less like cars. (Of course, you'd have to occasionally have those cars throw on their brakes for no damn reason at all, or suddenly decide they needed to be in the right lane for an exit 20 feet away. Otherwise it wouldn't be realistic.) You have no evidence that the streets aren't rolling up behind you. You don't know about history either. Everything up to now. Or now. Or right NOW. Could be merely coded in. That's the nature of realistic simulations. They're hate-able that way.
But let's be generous. Let's assume that the people simulating this world for you, and simulating you along with the world, aren't just out to aggravate your most persistent paranoia by making you the only fully realized person in a world of low-resolution sketches. Let's say all those cars are full of first-rate people-sims, all rushing home to their fully aware and self-realized sim-families. The sim-stars don't go out when you're not looking. The sim-ocean depths really are full of sim-creatures, and sim-Wales is still filled with sim-people spraying Ws and Ys at each other.
Bostrom puts a computer that can generate that level of reality, one which features not only every electron of reality but all the little quirks and tics of our personalities, at a not-too-distant point in the future. But he's probably wrong. It's quite likely that we're a lot closer to generating a convincing reality than even Bostrom believes.
Technicolor beasties and foliage on a plant from No Man's Sky
Later this year, a very small gaming company will release a game called No Man's Sky. This game simulates exploration of a galaxy containing 18,446,744,073,709,551,616 planets. For those keeping score, that's about 315,000 percent bigger than the actual galaxy in which we live. Every one of the game's planets will be unique, with its own mountains and canyons and other features. Some will have oceans and clouds. Some will have lifeforms, each one of them also unique. If you visit one of those worlds, you'll be able to map its continents and shores, and explore its caverns and ocean trenches. You can even carve your initials in the crust. Fly a few zillion miles across this simulated universe, put it in reverse and return. Everything is still there.
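That planet count isn't arbitrary. It's exactly 2^64, the number of distinct values a 64-bit integer can hold: one planet per possible seed, as a quick check shows.

```python
# 18,446,744,073,709,551,616 planets is exactly the range of a 64-bit number.
planet_count = 18_446_744_073_709_551_616
assert planet_count == 2**64
print(f"2^64 = {2**64:,}")
# → 2^64 = 18,446,744,073,709,551,616
```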
To know the placement of every pebble and blade of grass in a galaxy many times larger than our own seems like quite an accomplishment, especially for a program built by half a dozen guys and delivered in a download smaller than the average movie on Netflix. It's yet another Sneaky Computer Bastard trick.
In this case, the trick is called procedural generation. Rather than store every single atom of their universe as a pre-rendered image, the team behind No Man's Sky does the math to calculate where everything should be. That includes where the sea level should be, how tall the local mountains and trees should be, how an animal that looks like this one moves, what color that rock is, how many blades of grass there are in this field, and how they react to a strong wind. It's not all described in detail. It's all described in laws. Like physical laws. More math.
Which is sort of the point. The universe—our universe, as well as the one in No Man's Sky—operates on a big heap of math. We don't know all the math, but you can be sure, it's math. Just as everything in our universe was implicit in the tiny proto-universe dot that existed in the nanosecond after the Big Bang, everything in the NMS giga-universe is defined by a set of mathematical rules. If you know the math, you don't have to build a thousand thousand different wood textures and carefully apply them to each surface the way some non-procedural programmer might do. You don’t have to hand place every tree. The math predicts how the tree will grow and in what environment. The color of the leaves. The season of the fruit. The math predicts what the wood looks like when it's split. How it will take a stain. How it will hold up as a table or chair.
Know the math, and the universe simply follows.
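Here's a toy version of that idea, making no assumptions about No Man's Sky's actual algorithm: one fixed seed plus a hash function turns any coordinates into terrain on demand, so nothing is ever stored, yet every return visit finds the same world.

```python
import hashlib

WORLD_SEED = 42  # one number stands in for the whole universe (illustrative)

def terrain_height(x: int, y: int) -> int:
    """Deterministic 'height' at (x, y): derived from a hash, never stored."""
    key = f"{WORLD_SEED}:{x}:{y}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:2], "big") % 1000  # 0..999 meters, say

# Fly a few zillion miles away and come back: the mountain is still there.
first_visit = terrain_height(123_456, -789)
return_visit = terrain_height(123_456, -789)
assert first_visit == return_visit
```

Real procedural engines use smoother noise functions than a cryptographic hash, but the principle is identical: the world is a function of its coordinates, not a warehouse of saved pebbles.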
The universe of No Man's Sky, colorful as it is, is very simplistic in many ways. There are limits to the fineness of the detail that make it a lot "coarser" than our universe. But it shows that designing a whole universe doesn’t require Slartibartfast and company slaving away to put every notch in the glaciers. Other procedural programs will follow, simulating things at a finer detail. Silicon will continue to improve. Tricksy programmers will continue to get tricksier.
We're not a century away from being able to simulate a universe as detailed as our own. We may not be a decade.
Yeah. Gulp.
But what about that last part of the "uh-uh, this is too real" argument? What about the question of who in holy &$@k would want to simulate this place? Bostrom has a suggestion on that point. He thinks it's us. Or rather, a kind of post-us. Some form of future humanity with a (let's not say morbid) interest in re-imagining their own past, not just once, but many times.
“One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious… Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race.”
—Nick Bostrom
Bostrom is quite convinced that “ancestor simulation” is the most likely source of the pixels of our lives. As it happens, I think that Bostrom may be wrong on this point. To see why, let's go visit the guy who invented the Air Crib.
(Want to yell at me properly for wasting your time? You have no choice but to simulate your presence next week where your algorithm can encode the final installment in this simulated essay.)
Look how many planets were on our message to the stars. Suck on that, Tyson!
Author's Note—This is a piece about the universe, aliens, computers, bastards and why there may not be, but probably are, alien computer bastards. It's long, long enough that this is only part two of three. Part two? Did you already miss part one? Probably. Plus, part three isn't until next week, and that's the one with all the answers. So... yeah. Right now you're justifiably irritated and saying to yourself "why didn't he put this at the top so I'd realize all this and skip reading this mess?"
Well, it's because 1) nobody reads things that start with "part two" and 2) Nobody, but nobody reads things that start with "part two of three", and 3) I have something in common with those alien computer bastards (hint: it's not being alien. Or computers).