Machine Takeover

Facebook’s Robot Philosopher King

Yann LeCun, the NYU professor Facebook just hired to run its new artificial intelligence unit, promises to do something the social media giant has never done before.

Status update: Artificial friendships can be a whole lot more intelligent.

Last week, Facebook created a stir in the artificial intelligence and deep learning world by hiring Yann LeCun, a professor at New York University’s Courant Institute of Mathematical Sciences and its Center for Neural Science, to head up its newly minted artificial intelligence unit. The move, which accelerates an arms race among tech companies in the nascent field of deep learning, comes as Facebook settles into its status as an “old” and profitable publicly traded tech company. And LeCun is thinking big. His goal, he said during an interview at his office, is to “essentially make it the world’s most prominent lab in AI research.”

Affable and far more conversationally dexterous than one might expect from a star of the recent Conference on Neural Information Processing Systems, LeCun can go from rattling off stories of breakthroughs in his field to waxing poetic about how philosophers have shaped his work. His office, which features a poster titled “Highlights of the Jazz Story in USA” as well as one of John Coltrane, reflects his love of the genre. There is also the requisite whiteboard full of Beautiful Mind-esque symbols and numbers.

Thanks to Watson, IBM’s Jeopardy-winning computer, audiences are generally familiar with artificial intelligence. LeCun is an expert in deep learning, a still relatively small branch of the field concerned with designing computer systems that can be trained to process information much the way a human brain does. This system of multiple layers (thus the “deep”) of algorithms is intended to work like a neural network to solve complex tasks like recognizing handwriting or images. As LeCun puts it, deep learning is essentially, “How do you come up with learning algorithms that can train an entire learning system from raw input to systems to ultimate output?” Think of how a plane is modeled after a bird. It uses the same principles of aerodynamics, but it won’t have feathers, need to eat, or claw somebody’s eyes out.
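The “layers” idea can be sketched in a few lines of code. This is a toy illustration only, with made-up sizes and random weights, not anything from LeCun’s actual systems: each layer transforms its input, and stacking layers carries raw input all the way to an output.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One layer: a linear transform followed by a nonlinearity (ReLU).
    return np.maximum(0.0, x @ w + b)

def forward(x, params):
    # Pass the raw input through each layer in turn.
    for w, b in params:
        x = layer(x, w, b)
    return x

# Two stacked layers: 4 raw inputs -> 8 hidden units -> 2 outputs.
params = [
    (rng.standard_normal((4, 8)), np.zeros(8)),
    (rng.standard_normal((8, 2)), np.zeros(2)),
]

x = rng.standard_normal((1, 4))  # one example of "raw input"
y = forward(x, params)           # the network's output
print(y.shape)                   # (1, 2)
```

Training, in this picture, means adjusting the weights in every layer at once so the final output improves, rather than hand-programming each step.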

Born in 1960, just outside of Paris, LeCun has been drawn to computers, robotics, and artificial intelligence since a young age.

“I grew up in the ’60s, and there were a lot of things about, you know, space travel and robots, so I was always fascinated by this,” he said. While others might have had their nose in The Hardy Boys, LeCun was picking up books on evolution, astronomy, and rocket science. Then, at age nine, he saw 2001: A Space Odyssey. Filled with intelligent computers and space travel, the film left its mark. “It’s difficult to say this looking back, but it probably changed my life,” LeCun admits. “It’s very strange to say this, it’s just a movie, but it had a big influence.” In his last two years of high school, LeCun bought his first computer and taught himself programming.

Although he now teaches at a mathematical institute, LeCun admits he was not a great math student. After high school, he went to Ecole Supérieure d'Ingénieurs en Electrotechnique et Electronique (ESIEE) in Paris for an electrical engineering degree. While there, LeCun was allowed to dabble in some of his offbeat interests, including philosophy. “There was one book that really had a big influence on me, which was the proceedings of the debate between Noam Chomsky, the famous linguist, and the Swiss psychologist Jean Piaget,” he said. (For the record, he deems Chomsky “all wrong” on AI.)

One of the intellectuals arguing on Piaget’s side was MIT’s Seymour Papert, who co-wrote Perceptrons with Marvin Minsky. A watershed in exploring artificial neural networks for computer learning, Perceptrons also laid out what it saw as the limits of such attempts. The end result was that by the time LeCun reached graduate school, this branch of AI barely existed. LeCun, however, was hooked. By the early 1980s, he became convinced that the future of AI was a machine “that could learn, could be taught, instead of programmed.”

And so while his degree was in circuit design, LeCun pursued research in machine learning, and published a paper on the topic that caught the attention of one of the leading minds in the field, Geoffrey Hinton, a University of Toronto computer scientist who now works at Google. LeCun also pioneered an early adaptation of the back-propagation algorithm (a way of training artificial neural networks), and caught the eye of Bell Labs.

While at Bell Labs, LeCun worked on the idea of training a convolutional neural network (which simulates neuron processes found in biology) with a back-propagation algorithm for image-recognition models. That, in turn, led to LeCun’s big success at Bell Labs, which was a model that could read handwriting on checks.
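The convolutional idea behind that check-reading work can be illustrated with a minimal sketch. The key trick is that one small filter slides across the whole image, so the same few weights detect a local pattern (an edge, a pen stroke) wherever it appears. The filter values and image below are illustrative, not taken from any real model:

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over every position of the image and record
    # how strongly the local patch matches the kernel's pattern.
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((5, 5))
image[:, 2] = 1.0               # a vertical "pen stroke"
edge = np.array([[-1.0, 1.0]])  # a tiny filter that fires on vertical edges

response = conv2d(image, edge)
print(response.shape)           # (5, 4)
```

In a full convolutional network, many such filters are learned from data via back-propagation and stacked in layers, rather than designed by hand as here.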

Fate, however, had other plans. The very day in 1996 that LeCun and his team celebrated the rollout of machines featuring their work, AT&T announced it was spinning off Bell Labs as well as National Cash Register (which made ATMs). AT&T also sold another one of his projects, DjVu, an image compression technology which he thinks could have competed with PDF.

LeCun continued on at AT&T until 2002. After a brief stint at NEC Research Institute, he joined NYU in 2003.

Of course, the part of his bio that currently has tongues wagging is just unfolding.

Last week, LeCun posted to his Facebook page that he was taking the position as director of Facebook’s new artificial intelligence lab, and that the tech giant would also be teaming up with NYU’s Center for Data Science for research projects in data science, machine learning, and AI.

“This is going to be a research lab. It’s not something that Facebook has done so far,” said LeCun. As a growing brand trying to turn profits, Facebook in the past has had to focus on short-term technologies and products, rather than on five- or ten-year research projects aimed at technological breakthroughs. Now, as it looks further ahead, the creation of a research arm makes sense, particularly in the field of artificial intelligence. Why? “Things like machine learning and AI, and image recognition, speech recognition are really at the core of many services of all web companies but particularly Facebook,” said LeCun.

Three months ago, Facebook hired a former student of LeCun’s, Marc-Aurelio Ranzato, away from Google Brain to begin to apply deep learning to its products. “Marc-Aurelio, in the space of a very short time got really good results on some problems that Facebook was interested in,” and so Mark Zuckerberg, “was kind of really impressed by how this worked, and decided maybe it was time to create a research lab, and maybe it should be focused on AI,” said LeCun.

So now, with one of the godfathers of deep learning on board, what is next?

As most Facebook users well know, the social media giant has recently introduced pretty effective facial recognition software. Observers have tended to view LeCun’s hiring as a way to help the social network develop new ways to organize photos by content, or to conduct more sophisticated sentiment analysis that could enhance advertising effectiveness. LeCun and his team would be seen as furthering the company’s stated goal of cleaning up News Feed. The popular section of Facebook, which is essentially a newspaper compiled by friends for friends, has left a lot to be desired. Its populist tendencies have led to headlines like the recent “Facebook’s News Feed Suffers From the Banality of the Crowd,” in Businessweek.

Facebook, on the other hand, reportedly wants the News Feed to be, well, classier. And a planned rollout of the more personalized News Feed has reinforced that desire because users seem to be filling up the feed with clicky posts from BuzzFeed and Upworthy, rather than the New Yorker and Washington Post. And so one pressing problem that LeCun and his team would likely need to address is how to develop a News Feed that recognizes “better” content.

And LeCun was upfront about those various goals. He puts “ad placement [and] messaging” in the short term, and for the medium term, “things that concern content analysis of images and video and audio.” That might let Facebook finally permit GIFs, if it could figure out which were “worthy” of filling the News Feed. And then for the longer term, “things that relate to better understanding the users.” For instance, he explains, “if you know the users’ interests and aspirations and goals in life and things like this, you might be able to do a better job at picking out the right news to display.” Oh, and for those of us with international friends, a better translation service.

Facebook, as any consumer advocate well knows, has loads of data on human behavior. But it has not unlocked the “perfect” way to serve its most basic function—sharing information between people. And so the hope is that deep learning and artificial intelligence work can improve Facebook’s functionality and make it more relevant to users.

LeCun is not wearing rose-colored glasses when it comes to privacy concerns and all that data. “It raises some important questions,” he admits, but also points out that with artificial intelligence the question becomes less what you want individual people to have access to in terms of your information, and more “whether you want to allow machines to have access to that information.”

In Silicon Valley—whether it is driverless cars, drones delivering pizza, or artificial intelligence—there’s a great deal of optimism about the capacity of machines to replace humans. But LeCun, who will continue to work at NYU, downplays the capacity of robots to become our new overlords.

As he put it: “Any neural nets we can simulate now are incredibly small compared to any kind of animal brain you can imagine.”