If Time’s Person of the Year for 2006 could be “you,” then, with all due respect to Angela Merkel, the Person of the Year for 2015 should have been “it.”
The collective power of faux artificial intelligence had a big year in 2015, and while true AI won’t exist until one of these thinking machines becomes self-aware, once they do, they’ll be quite impressed with their achievements.
After all, 2015 was the year they learned how to drive. Google was one of many companies testing the idea of letting the operator ride shotgun while the car handles navigation on its own. It didn’t come without a few hiccups, of course. Within its first year on the road, Google’s car managed to get into a little fender bender, and allegedly cut off another autonomous auto. We’ve all done it.
The AI collective also managed to land a pretty great job in a Hitachi factory. Its first annual review is probably going on as we speak, but we expect the system, whose job was to increase efficiency, performed well and didn’t make any of its officemates uncomfortable with loud phone conversations or awkward water cooler stories.
And you made friends with AI in 2015, if you hadn’t already. Personal assistant Cortana from Microsoft became available on more platforms. Amazon released the Echo, a smart home hub whose assistant, Alexa, competes with Cortana and does dozens of things you ask it to, and probably does them faster and better than your roommate/spouse ever does. We’re still waiting on it to learn to take out the trash.
The point is that artificial intelligence did more than crunch algorithms this year, and while we’ve heard about supercomputers and quantum computing for years, this is the first time that any of that lightning-fast, thinking-out-an-answer tech started sharing the roads, the roofs, and the responsibilities with you and me. And people are split over whether that was a good thing.
Part of the question is the unknown nature of things taking over for us. When a driverless car hits another car, who is at fault? Volvo was the first to make a statement, saying it would pony up when its cars were doing the driving.
But there are thornier decisions those cars have to make, like whom to endanger in a no-win crash scenario.
This is all, of course, a symptom of our bigger distrust in machines. We’ve seen enough “Terminator” movies to know what could go wrong. The so-called “singularity” terrifies smart people and dumb people alike, and makes a huge profit for moviemakers.
Marvel and Disney spent this year raking in big dollars by playing on that fear, as the Avengers finally battled one of their greatest foes, Ultron. “Avengers: Age of Ultron” had all kinds of box office success, and while we’ll chalk that up in part to the Marvel machine’s track record, nothing got the philosophers going like the chance to debate the plausibility of artificial intelligence’s first thoughts being about how to eradicate mankind.
Maybe we’re giving too much control to machines in calculated attempts to make our lives easier. It’s a liability to structure a population around eliminating certain everyday tasks from the collective experience, the same way young people today (myself included) have trouble reading paper maps, or seem more competent at digital conversation than at what is commonly referred to as IRL.
But these aren’t new fears. Asimov wrote “I, Robot” more than half a century ago. Back then you needed to tune your TV with a dial. Now you can do it with a little remote, or your voice, or a mouse click.
Maybe the control we’ve handed over this year marks the start of the imminent age of the singularity. Maybe it’s just another period in a long evolutionary ascent toward what happens in “WALL-E,” and that seems like it’d be okay. Those robots are far more compassionate and selfless than the ones in “Terminator.”
Maybe the bigger picture is that when true artificial intelligence arrives, we’ll have to treat it like what it’s meant to be: another person—a brilliant one, sure—who could be good or evil.