Imagine watching TV without a screen, or communicating with friends without a phone or Facebook. Would you have an implant to have virtual sex with anyone you wanted, or to be stronger or smarter? What’s the status of the science? When do humans become obsolete?
It’s not a matter of if, but rather when it’s going to happen. We already know how to clone entire organisms — for instance, our team has cloned herds of cows and even the first human embryos and endangered species (Science 294, 1893, 2001), we’ve reversed aging at the cellular level (Science 288, 665, 2000), and we’ve made progress growing replacement tissues for every organ system of the body, including the heart and kidney (Nature Biotechnology 20, 689, 2002). However, there’s one organ that’s a far greater challenge: the brain.
I remember a journey I took with my dog Shepp. I’d wandered miles when, from the trees, came the sound of a train. Clatter-clatter-rap-rap! To Shepp, still a puppy and only a few days out of the pound, an extraterrestrial might have looked not unlike the steel caterpillar that rounded the corner, thunder billowing out of its nostrils. It seemed so alive. Shepp let out a yelp. You can scarcely imagine his expression as it rushed toward us, rattling the earth. “It’s not alive,” I said, more to myself than to Shepp. How could I convey that it was only a lump of metal, quite unconscious, a machine with sliding bars and wheels hauling TV sets into the city? A loud whoosh and it vanished into the trees.
When the vibrations ceased, Shepp crawled out from the bushes. For myself, I stood there for some minutes, picturing the metal caterpillar moving beneath the tree-tops. As a biologist I could easily list the differences between a machine and a living organism. Yet the anatomy of a train is not unlike that of the human body. There are moving parts, and within its huge round body, a carburetor that takes in air and fuel, and wires sending electrical impulses to the spark plugs.
It seems natural that someday we’ll make machines that’ll think and act like people. Already, there are scientists at MIT who say the interactions between our neurons can be duplicated with silicon chips. As a boy I worked in the laboratory of Stephen Kuffler — the pre-eminent neurophysiologist and founder of Harvard’s Neurobiology department — watching scientists probe the neurons of caterpillars. Kuffler was the brilliant author of From Neuron to Brain, the textbook I used later as a medical student. In fact, so intrigued was I by the sensory-motor system that I returned to Harvard to work with psychologist B.F. Skinner. However, I’ve since come to believe that the questions can’t all be solved by a science of behavior. What is consciousness? Why does it exist? There’s a kind of blasphemy in asking these questions, a personal betrayal to the memory of that gentle, yet proud old man who took me into his confidence so many years ago. Perhaps it was the train, that insensate machine rolling down the tracks.
“The tools of neuroscience,” cautioned David Chalmers, “cannot provide a full account of conscious experience, although they have much to offer.” The mystery is plain. Neuroscientists have developed theories that help explain how information — such as the shape and smell of a flower — is merged in the brain into a coherent whole. But they’re theories of structure and function. They tell us nothing about how these functions are accompanied by a conscious experience. Yet the difficulty in understanding consciousness lies precisely here, in understanding how a subjective experience emerges from a physical process at all. Even Nobel physicist Steven Weinberg concedes that there’s a problem with consciousness, and that its existence doesn’t seem to be derivable from physical laws.
Physicists believe the “Theory of Everything” is just around the corner, and yet I’m struck that consciousness is still a mystery. We assume the mind is totally controlled by physical laws, but there’s every reason to think that the observer who opens Schrödinger’s box has a capacity greater than that of other physical objects. The difference lies not in the gray matter of the brain, but in the way we perceive the world. How are we able to see things when the brain is locked inside a sealed vault of bone? Information in the brain isn’t woven together automatically any more than it is inside a computer. Time and space are the manifold that gives the world its order. We instinctively know they’re not things, objects you can feel and smell. There’s a peculiar intangibility about them. According to biocentrism, they’re merely the mental software that, like in a CD player, converts information into 3D.
And this brings me back to the train hauling TVs into the city. I suspect that in some years there might even be a robot in the conductor’s seat, blowing the whistle that warns pedestrians to get off the track. In the 1950s, neurophysiologist W. Grey Walter built a device that reacted to its environment. This primitive robot had a photoelectric cell for an eye, a sensing device to detect objects, and motors that allowed it to maneuver. Since then robots have been developed using advanced technology that allows them to “see,” “speak,” and perform tasks with greater precision and flexibility. Eventually we may even be able to build a machine that can reproduce and evolve.
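Walter’s robot needed no brain at all: a photocell and a touch sensor fed directly into two motors. A minimal sketch of that kind of sense-act loop, with thresholds and motor commands that are purely illustrative, not taken from Walter’s actual circuit:

```python
def step(light_level, obstacle_detected):
    """One control cycle of a Walter-style reactive robot.

    Takes a light reading (0.0-1.0) and a bump-sensor flag, and returns
    a (left_motor, right_motor) speed pair. All numbers are invented
    for illustration; the point is that behavior emerges from a handful
    of fixed stimulus-response rules, with no internal model of the world.
    """
    if obstacle_detected:
        return (-1.0, -0.5)   # back up and turn to escape
    if light_level > 0.8:
        return (0.2, 0.5)     # too bright: veer away from the source
    if light_level > 0.2:
        return (0.5, 0.5)     # moderate light: drive straight toward it
    return (0.5, 0.2)         # darkness: wander in a slow arc, scanning

print(step(0.5, False))  # moderate light, clear path: drives straight
```

Even this toy loop produces lifelike-seeming behavior, seeking light, fleeing glare, escaping obstacles, which is exactly why Walter’s machines so unsettled onlookers.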
“Can we help but wonder,” asked Isaac Asimov, “whether computers and robots may not eventually replace any human ability? Whether they may not replace human beings by rendering them obsolete? Whether artificial intelligence, of our own creation, is not fated to be our replacement as dominant entities on the planet?” These are the questions that I pondered along the railroad tracks that day, and that trouble me when I see cyborgs on TV.
However, for an object — a machine, a computer — there’s no other principle but physics, and the chemistry of the atoms that compose it. Unlike us, they can’t have a unitary sense experience, or consciousness, for this must occur before the mind constructs a spatio-temporal reality. Eventually science will understand the brain’s algorithms well enough to create ‘thinking’ machines and enhancements to ourselves (both biological and artificial) that we can’t even fathom. And after more than 200,000 years of evolution, Homo sapiens, as a distinct species, may go extinct, not by a meteor or nuclear weapons, but by our desire to achieve perfection.
Robert Lanza has published extensively in leading scientific journals. His book ‘Biocentrism’ lays out the scientific argument for his theory of everything.
Link to article in the Huffington Post