The Search for the God Algorithm

So far no one has discovered the Master Algorithm that can reap and sort all knowledge, but not for want of trying. A new book details the hunt for the modern philosopher’s stone.

10.21.15 5:00 AM ET

Tech executives are not generally known for their modest ambitions. Google’s Sergey Brin, for instance, has expressed this vision for his company: “We want Google to be the third half of your brain.” Ray Kurzweil, a futurist and inventor who is a director of engineering at Google, anticipates everything from personal flying vehicles to the uploading of human minds to computers within a few decades.

It’s tempting to dismiss such claims as overheated hype meant to project an aura of brilliant innovation rather than to predict the future accurately. Zooming out from Google to technological predictions in general, a whole graveyard of unrealized projects and dreams comes into focus.

Harvard’s Steven Pinker is an eloquent skeptic not only of a technological singularity—a scenario in which computer intelligence eclipses our own—but also of a broader genus of wishful thinking: “The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived.”

Then again, self-driving cars, real-time language translation software, and speech recognition programs are only some of the many technologies that probably appeared outlandish a generation ago and have now been essentially realized. Because examples exist in both directions, any projected technological advance can be framed as either a soon-to-be-perfected marvel or a soon-to-be-forgotten fantasy.

This is part of what makes computer science professor Pedro Domingos’s new book, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, so interesting. It’s an impressive and wide-ranging work that covers everything from the history of machine learning to the latest technical advances in the field. He’s equally comfortable discussing the philosophy of David Hume and the intricacies of Markov chains and Bayesian statistics. But the book is not simply an overview; it’s also an argument for the following hypothesis:

All knowledge—past, present, and future—can be derived from data by a single, universal learning algorithm.

Domingos is not talking about creating “revolutionary” and “disruptive” new apps for efficiently ordering pizza or rapidly locating purveyors of craft beer. If his master algorithm is discovered, the hyperbolic vocabulary of tech-industry cheerleading would actually become justified. He predicts that this algorithm would (a) cure cancer, (b) eliminate all jobs, freeing everyone to enjoy a life of leisure and making employment just another vestige of humanity’s primitive past, and (c) invent everything that can be invented.

Whether this is attainable is an open question. Domingos clearly thinks it is both possible and imminent, but he’s refreshingly undogmatic in his belief. He admits that the Master Algorithm may belong in the same chimerical category as the philosopher’s stone and the perpetual motion machine, inventions often dreamed of but never realized. Yet even if the Master Algorithm itself is not found, the quest to discover it would be worthwhile as an intellectual exercise—teaching machines to learn requires scientists to be very explicit about how learning works—and would yield many valuable practical implementations.

Domingos devotes a great deal of space and ingenuity to explaining the intricacies of the five major intellectual “tribes” of machine learning: the Symbolists, the Connectionists, the Evolutionaries, the Bayesians, and the Analogizers. Each school of thought has a “master algorithm” of its own, but the ultimate Master Algorithm would combine elements of all five approaches, thus eliminating the drawbacks of each.

Symbolists program machines to learn by using a process called inverse deduction and encoding ideas from formal logic. This approach creates algorithms that are good at reasoning about mathematical universals, but less effective when it comes to probabilistic thinking. Bayesian algorithms, by contrast, are good at modeling uncertainty and making probabilistic inferences. Connectionists essentially attempt to reverse-engineer the brain, creating neural networks with connections of variable strength that change as a result of feedback loops.
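The Bayesian tribe’s style of probabilistic inference can be made concrete with a toy spam filter. The sketch below applies Bayes’ rule to show how observing a single word shifts a belief; all the probabilities are invented for illustration, not drawn from the book:

```python
# Minimal Bayes-rule update: how seeing the word "free" shifts
# our belief that an email is spam. All numbers are illustrative.

prior_spam = 0.5            # P(spam) before seeing any evidence
p_word_given_spam = 0.8     # P("free" appears | spam)
p_word_given_ham = 0.1      # P("free" appears | not spam)

# Bayes' rule: P(spam | "free") =
#   P("free" | spam) * P(spam) / P("free")
evidence = (p_word_given_spam * prior_spam
            + p_word_given_ham * (1 - prior_spam))
posterior_spam = p_word_given_spam * prior_spam / evidence

print(round(posterior_spam, 3))  # belief rises from 0.5 to ~0.889
```

A real Bayesian learner chains thousands of such updates together, which is exactly what makes the approach good at modeling uncertainty.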

Evolutionaries see natural selection as the master algorithm and use genetic programming to mate and evolve computer programs that become increasingly “fit” for a given task, such as determining whether an email is spam. Analogizers recognize similarities between types of objects and are useful for everything from face recognition (think of how Facebook “recognizes” the friends you tag in a photo) to book recommendations.
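The Evolutionaries’ recipe of selection, mating, and mutation can be sketched in a few lines. This hypothetical example evolves a bit string toward a target pattern standing in for a task like spam detection; the population size, mutation rate, and fitness function are all invented for illustration:

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # the "task" an individual should solve

def fitness(ind):
    # How many positions match the target: higher is fitter.
    return sum(a == b for a, b in zip(ind, TARGET))

def crossover(mom, dad):
    # "Mate" two parents by splicing them at a random point.
    cut = random.randrange(1, len(mom))
    return mom[:cut] + dad[cut:]

def mutate(ind, rate=0.05):
    # Occasionally flip a bit, introducing variation.
    return [1 - bit if random.random() < rate else bit for bit in ind]

# Start from a random population and evolve it for a few generations.
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]               # selection: keep the fittest half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]

best = max(pop, key=fitness)
```

Genetic programming applies the same loop to entire programs rather than bit strings, but the selection-crossover-mutation cycle is identical.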

Domingos uses strategies and features from all five approaches to design his own candidate for the Master Algorithm. He calls the program Alchemy to remind himself and others that machine learning is still closer to alchemy than chemistry on a spectrum of scientific progress. Alchemy has already learned more than 1 million patterns by extracting facts from the Web. These patterns are semantic networks of linked concepts, such as planets, stars, Earth, and sun. It discovered the concept of a “planet” on its own, and learned that planets orbit stars and that the Earth orbits the Sun.
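The semantic networks Alchemy extracts can be pictured as a graph of concepts joined by labeled links. This toy sketch encodes the facts mentioned above as subject-relation-object triples (the representation is a simplification invented here, not Alchemy’s actual data structure):

```python
# A toy semantic network: concepts as nodes, relations as labeled edges.
# These triples mirror the facts described in the text: Earth is a
# planet, the Sun is a star, and the Earth orbits the Sun.

facts = [
    ("Earth", "is_a", "planet"),
    ("Sun", "is_a", "star"),
    ("Earth", "orbits", "Sun"),
]

def related(concept, relation):
    # Everything the concept is linked to by the given relation.
    return [obj for subj, rel, obj in facts
            if subj == concept and rel == relation]

print(related("Earth", "orbits"))  # ['Sun']
```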



Alchemy is not yet the omnipotent program that will cure cancer, but Domingos establishes a lucid conceptual roadmap for how to design such a machine. One passage describes how a Master Algorithm could print out a customized drug to kill any particular cancer based on an overarching and constantly evolving model of living cells, patient histories, and experimental data from the biomedical literature. Extraordinary as this seems, he makes it sound less like science fiction than a glimpse into the nature of medical care in the near future.

Occasionally he overestimates the accessibility of the subject to non-experts. Sentences like this are not exactly transparent: “The unified learner we’ve arrived at uses MLNs as the representation, posterior probability as the evaluation function, and genetic search coupled with gradient ascent as the optimizer.” But given the technical complexity of the material, most of the book is remarkably clear and comprehensible.
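For readers unfamiliar with the “gradient ascent” that sentence mentions, the idea is simply to climb a function by repeatedly stepping in the direction its gradient points. A one-dimensional sketch, using an invented toy function rather than anything from the book:

```python
# Gradient ascent in one dimension: step uphill along the gradient
# until we reach a maximum. Here f(x) = -(x - 3)**2, whose gradient
# is -2*(x - 3), so the single peak sits at x = 3.

def grad(x):
    return -2 * (x - 3)

x = 0.0
learning_rate = 0.1
for _ in range(100):
    x += learning_rate * grad(x)  # move in the uphill direction

print(round(x, 4))  # converges to 3.0
```

Alchemy, as the quoted sentence says, couples this kind of local hill-climbing with genetic search to optimize its model.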

Domingos doesn’t follow Stephen Hawking and other scientists down the rabbit hole of envisioning scenarios in which sufficiently advanced computers acquire autonomous desires and opt to enslave humanity. His reason for not worrying about this possibility is a truism: “Unlike humans, computers don’t have a will of their own … Even an infinitely powerful computer would still be only an extension of our will and nothing to fear.” This is less comforting than it sounds. It is not inherently implausible to think that consciousness might be either a necessary condition or an inevitable byproduct of a degree of complexity sufficient to exhibit human-level intelligence.

He concedes that certain dangers exist, but he thinks these stem from human psychology rather than the malevolence of machines. “Any sufficiently advanced AI is indistinguishable from God,” he writes. This places control in the hands of the priesthood of scientists who are programming this aspiring deity and deciding to what ends its powers are used. Domingos’s book is a rare chance to glimpse the inner workings of this priesthood as they seek to create something greater than themselves.