In January GQ ran an article by Dan Halpern entitled "Are You Ready for the Singularity?" It chronicled the author's misadventures at the Singularity Summit, held annually in New York City. For the uninitiated, the Singularity (the technological kind, not the physics kind) is the "technological creation of smarter-than-human intelligence". This could come about in many ways, including a super-advanced Artificial Intelligence (AI) along the lines of the one Haley Joel Osment played in the movie AI: Artificial Intelligence.
The Fall of Man?
The Singularity website offers some more suggestions: "... direct brain-computer interfaces, biological augmentation of the brain, genetic engineering, ultra-high resolution scans of the brain followed by computer emulation". Certainly intense stuff and, in theory, completely plausible.
That is, if you know what consciousness is (we don't know). And understand what exactly it means to learn (we aren't sure). And can feel (we can do this, but outside of Mr. Osment up there, robots cannot - yet). The reason for the Singularity Summit is a mixture of (unfounded) fears and extremely high hopes. As Mr. Halpern points out, there "are very smart men and women who have spent a lot of time doing a very creditable job at finding the right answers to whatever difficult questions they have pursued, and they have a lot of data that says they're right about this, no matter how strange it sounds."
But are they right? Really? Really really? No, seriously, I want you to be sure. OK... if you say so.
I think the main problem with believing in the Singularity as an event that spells the end of mankind is hubris. People are actually cocky enough to believe they can create an intelligence greater than their own. Part of the fantasy, in some sick, masochistic, pour-more-hot-wax-on-my-nipples-you-dirty-dirty-scientist-because-I-forgot-our-safe-word way, is the idea that we humans, who have already conquered nature (thanks, evolution), can create something so much better than us that it can destroy us and take our place at the top of the food chain - not that they'd need food.
Robots Hate Pancakes... It's A Trick!
But I digress. Let's talk about how plausible all of this is, shall we? Because of Moore's Law, which states that the number of transistors on a chip doubles roughly every 18 months, people expect computers to have as much or more processing power than the human brain as soon as 2030. At this point, it is assumed, machines (for lack of a better word) will become smart enough to self-replicate and improve on themselves. This will eventually lead to said machines becoming conscious and realizing that the world would be a much better place without those pesky humans running around causing global warming and deforestation! At least, I hope that's their motivation for genocide and not something like... I don't know... money? Would robots like money?
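If you want to see where that 2030 figure comes from, here's a rough back-of-the-envelope sketch of the extrapolation. The starting numbers (a billion-transistor chip as of roughly when this post was written) are my own illustrative assumptions, not anything the Summit folks published - the point is just how fast doubling every 18 months compounds.

```python
# Back-of-the-envelope Moore's Law extrapolation.
# The 2009 baseline and 1e9 transistor count are illustrative assumptions.

DOUBLING_PERIOD_YEARS = 1.5  # "doubles about every 18 months"

def doublings(start_year, end_year, period=DOUBLING_PERIOD_YEARS):
    """Number of doublings between two years."""
    return (end_year - start_year) / period

transistors_now = 1e9                       # assumed chip of the day
n = doublings(2009, 2030)                   # ~14 doublings
transistors_2030 = transistors_now * 2 ** n

print(f"{n:.0f} doublings -> roughly {transistors_2030:.1e} transistors by 2030")
```

Run it and you get about 16,000 times more transistors by 2030. Whether raw transistor count has anything to do with "smarter than the human brain" is, of course, the whole argument.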
Here's the problem, at least at the outset: machines are more than just hardware. They're software too, and usually pretty crappy software at that (I'm looking at you, Windows Vista. I'm looking at you hard...). Jaron Lanier summed it up perfectly in his "One Half of a Manifesto" back in 2000: "For computers to design their own successors, someone has to write the initial software. Humans have given no evidence of this ability." Along the same lines, he writes, "As processors become faster and memory becomes cheaper, software becomes correspondingly slower and more bloated, using up all available resources." One more quote, for the road: "... processing power isn't the only thing that scales impressively; so do the problems that processors have to solve." Really, the whole essay should just be read. It's quite the argument, and it covers a lot more territory than the three quotes I chose to reproduce.
"Now hol' on there kiddo," I imagine you're saying right now, "Imma not good at the whole arithmetic thing, but by my figerin', that was 9 years ago that article you gon and mention there's been written. Somethin' musta gotten better since then." To this I would respond, "My good clearly southern based on your accent sir, I only need to point to the aforementioned Windows Vista, or maybe the dreaded RROD."
The Savior of Man?
So there's that. There is also the fact that consciousness is hard to define, let alone understand at a fundamental enough level to recreate it in a lab through code or processing power. The neuroscientist Gerald Edelman defines it this way: "It is a process, and it involves awareness." Wow, way to be specific there, Mr. Edelman. Based on that, I can safely say I don't know many people who can be considered conscious. Hey-oh! Digressing from that awesome burn, that definition doesn't seem to be too far off from Merriam-Webster's, which I won't repeat here due to laziness.
So if we can't even really, accurately define the term, how will we know when machines have it? I suppose since we know we have it, and we know that animals seem to have it, we'll be able to tell through careful observation. If machines can think and plan, then they'll be attributed consciousness. This is only fair. But what type of consciousness? Edelman, in the interview with Discover Magazine linked above, briefly mentions two types: Primary Consciousness and Self-Consciousness. The biggest difference between man and beast is man's self-consciousness. I mean, how often do you see a cat checking itself out in the mirror before a date?
He's Ready for his Scuba Date!
Besides not even recognizing their own reflections, animals aren't aware of their own impending dooms (especially from robot apocalypses). So how will we know which consciousness our machine overlords have? Maybe some sort of symbol test? "The human brain is capable of symbolic reference, not just syntax," says Mr. Edelman. While a machine might be able to learn, and it might be able to remember, will it attach meaning to things that have none on their own? Will this lead to art? Religion? Anything like that? Doubtful, as any software a machine would run would be based on algorithms, and algorithms are pretty straightforward.
Besides that, if a machine is mass-produced with the same components and is given, or achieves, or however-you-want-to-look-at-it, consciousness, will it share a consciousness with its brethren? Actually, on a simple level, that's a very stupid question. Twins, for instance, are born from the same components but don't share a consciousness. Edelman says, "Every single brain is absolutely individual, both in its development and in the way it encounters the world." Although these twin machines may not differ in their development, they will no doubt become different through different experiences, just like any animal or person. But people and animals have something machines don't (and probably won't ever have): organic materials.
Which brings me to my final, and biggest, reason I don't believe the robot uprising will actually happen. Any real, organic brain is more than just electrical signals running through neurons. We aren't just ones and zeroes the way computers are. The neural code is far more complicated than that, so much so that we aren't even close to understanding it. The way the brain communicates within itself, and with the rest of the body, has as much to do with hormones and chemicals as it does with electrical signals.
For example, the hypothalamus is a tiny part of your brain that releases hormones controlling things in your body such as growth hormones (for puberty) and sex hormones (for women being bitches once a month). Let's focus on the sex hormones for a second. Besides the obvious joke already made about how they play with women's emotions, how many guys in the room have had their thinking all clouded up by some sex hormones released at a bad time? Considering I'm the only guy in the room as I write this, I'm going to raise my hand and say, "Amen, brother!"
Sometimes You Just Have to Remind Yourself
And how about the amygdala, which handles not only negative emotions like fear but also positive ones such as sexual responsiveness? (Which shows that parts of the brain work together: my amygdala sees a bangin' hot chick and tells my hypothalamus, which then releases the appropriate hormones to the, uh, less reputable parts of my body.) Emotions are mostly a hormonal response to a stimulus, and the amygdala sits in the middle of that loop. Will computers have that? It's been said that if you experience a strong stimulus, or something emotionally scarring, your brain will scar from it. New neural connections are formed on a daily basis. Unlike a motherboard, an organic brain is not a static thing. It's constantly evolving, constantly remaking itself in response to stimuli.
Now does that mean the Singularity in its most basic form won't happen? No, I do believe we'll eventually make machines that are smarter than us in terms of processing power. Hell, that's an inevitability. It is also completely possible that once technology advances that far, some evil, evil men will want to use that power for evil, evil purposes. And, I mean, cyborgs are a possibility. However, cyborgs would still be human on some level, and with that humanity would come some sense of morality. There is a huge debate over what constitutes a human, and while I won't get into that here, I think it's deeper than just our hardware. We like to see the trees in the forest, but not the forest itself. Our bodies, our consciousness, form a whole. To use a sports metaphor only 250,000 people at best will understand, consciousness in people is a lot like the Buffalo Sabres: there aren't any great individual players, but the team as a whole is great.
In conclusion, suck it, HAL 9000.