This article was originally published at Psychology Today and posted to the Reddit philosophy board, where it generated over 1000 comments.
Some of today’s top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that are likely to arise as a result of machines with motives. Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us—or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper. In fact, Dr. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”
Indeed, there is little doubt that future A.I. will be capable of doing significant damage. For example, it is conceivable that robots could be programmed to function as tremendously dangerous autonomous weapons unlike any seen before. Additionally, it is easy to imagine an unconstrained software application that spreads throughout the Internet, severely mucking up our most efficient and relied-upon medium for global exchange.
But these scenarios are categorically different from ones in which machines decide to turn on us, defeat us, make us their slaves, or exterminate us. In this regard, we are unquestionably safe. On a sadder note, we are just as unlikely to someday have robots that decide to befriend us or show us love without being specifically prompted by instructions to do so.
This is because such intentional behavior from an A.I. would undoubtedly require a mind, as intentionality can only arise when something possesses its own beliefs, desires, and motivations.