Can Machines Really Think?
UNLIMITED DATA | BY JAMES KULICH | 5 MIN READ
A few years ago, when autonomous car technology was just emerging, I heard a speaker at a Gartner Symposium describe his first ride in a self-driving car. The first 15 minutes were sheer terror as the vehicle made its way on its own down a busy California highway.
He recalled the next 15 minutes as really interesting, as experts riding in the car with him explained how things worked. For example, the car’s sensors not only eliminated a driver’s blind spots but also anticipated the blind spots of other drivers and directed the car to avoid them.
The last 15 minutes were totally boring as the car followed every speed limit and lane change rule to a “T” and steadfastly refused to become emotional even when cut off by someone in another car. (The speaker’s actual comment was that the car had no middle finger.)
Is this car a thinking machine? I think the answer is, in some ways, yes. Without additional direction, the car uses real-time input from its larger environment to generate new information, which, in turn, guides its actions. The car appears to be “thinking” more clearly about good driving than many of us do at times!
Turing’s Vision of the Future
The question of machine intelligence is not new. In 1950, Alan Turing, famous for his central role in cracking the code behind Nazi war communications, published a paper titled “Computing Machinery and Intelligence” in which he asked: “Can machines think?”
Turing offered a way of answering the question via a well-known parlor game of the time, in which an interrogator tries to guess the gender of two hidden players based on their responses to the questions he or she poses. Turing proposed replacing one of the two players with a computer that must answer the interrogator’s questions, and then observing whether the outcome of the game changes.
While Turing knew that this was impossible given the state of 1950 technology, he believed a computer could do the job at some point in the future, and he systematically challenged the commonly offered reasons why it could not. Turing covered theological objections, fears of what a thinking machine might do, mathematical considerations, notions of consciousness, and even an 1842 critique by Lady Lovelace of Charles Babbage’s concept of the Analytical Engine. Ultimately, Turing concluded that the best answer to the question of thinking machines was not yet, but certainly not never.
Turing would probably be amazed by today’s capabilities in machine learning. In an article on the use of artificial intelligence in the financial markets, Alasdair Macleod defines machine learning as “the ability of a computer to learn and improve from experience without further programming.”
We all know examples of this in action. A recent news story described how artificial intelligence-based fall detection on an Apple Watch saved the lives of hikers who had fallen from a cliff. The device called 911 without prompting because it had detected a fall and received no response from the wearer.
The Terminology of Today
Under the hood, you will see algorithms like neural networks at work. A simple neural network replicates, in a loose way, the function of a few neurons in a human brain. Inputs from several digital sources are weighted and combined. A binary output is generated based on whether or not the combined signal reaches a threshold defined by the artificial neuron. These outputs can then drive some form of action. Think sensors and motors.
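The combine-and-threshold idea above can be sketched in a few lines of Python. Everything here (the sensor readings, weights, and threshold) is an illustrative assumption, not from the article:

```python
def neuron(inputs, weights, threshold):
    """A single artificial neuron: fire (return 1) when the
    weighted sum of the inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Illustrative example: a toy "obstacle detector" combining
# three normalized sensor readings into one binary decision.
sensor_readings = [0.9, 0.2, 0.7]
sensor_weights = [0.5, 0.3, 0.4]
print(neuron(sensor_readings, sensor_weights, threshold=0.5))  # -> 1
```

The binary output could then trigger an action, such as braking, which is the sensors-and-motors loop described above.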
Deep learning refers to a neural network with many interconnected layers. Deep learning algorithms can do astonishing things, like recognize and classify visual images with high accuracy. One price you pay, though, is you cannot see how a deep learning network is “thinking.” It’s a black box.
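The “many interconnected layers” idea can be sketched by stacking the neuron computation, here with NumPy and random weights purely for illustration. After a few layers of learned weights, the intermediate values become hard to interpret, which is the “black box” problem:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    # Each layer applies its weights, then a simple
    # nonlinearity (ReLU) before feeding the next layer.
    return np.maximum(0, weights @ x)

x = rng.random(4)  # illustrative input, e.g. pixel intensities
hidden_layers = [rng.random((4, 4)) for _ in range(5)]  # five layers
for w in hidden_layers:
    x = layer(x, w)
print(x.shape)  # -> (4,)
```

Real deep networks have millions of learned weights rather than random ones, but the structure is the same: the output of each layer is the input to the next.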
Deep learning is no panacea. When improperly tuned, it makes mistakes, and as the world for which a particular application was built changes, its algorithms can become stale.
Reinforcement learning goes a step further, building its own representation of the data it sees. A good example is playing a complex game. A reinforcement learning algorithm watches two master players compete and determines, for itself, the patterns of a winning game. There is no explicit programming.
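A minimal sketch of the reinforcement learning idea is tabular Q-learning on a toy task: an agent in a five-cell corridor learns, purely from reward and with no explicit strategy programmed in, that moving right reaches the goal. The environment and parameters here are illustrative assumptions, not from the article:

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left / move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # value estimates per state-action
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly follow current best guess; occasionally explore at random.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Update the value estimate from experience alone.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

# The learned policy at every non-goal state is "move right".
print(all(Q[s][1] > Q[s][0] for s in range(GOAL)))  # -> True
```

The pattern of a winning “game” emerges from trial, error, and reward, not from any rule the programmer wrote about which direction is best.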
Where Humans and AI Diverge
So, where does this leave us? Macleod makes the distinction in his article between intelligent behavior and the appearance of intelligent behavior. Today’s AI certainly imitates intelligent human behavior like never before. It will only get better.
But AI can never be the same as human intelligence. Our biology is inextricably interwoven with our thinking in ways that are simply impossible for machines to attain. Indeed, our brain cells refresh themselves in ways no silicon chip can. A machine always tries to make a good decision. Sometimes we simply choose to be capricious. That is undeniably human!
Turing’s question about the potential for machine intelligence is interesting. I think, though, that the better question is the extent to which we can combine our human strengths with the best that powerful AI can offer. This is a sure path to creating opportunities for better lives and better societies, even as we wrestle with all of the challenges this approach entails.
This is the philosophy that guides our approach to data science at Elmhurst College.