A dog by any other name

In the Future, We’ll Love Our Robot Pets, But Will They Love Us Back?

A robo-dog might be the most efficient, cleanest, most responsible pet of the future. But will it completely replace our current good boys?

Humans are obsessed with robots. Leonardo da Vinci designed a mechanical knight in the late 15th century, and the Jetsons were served by Rosie the robot maid. Today’s pop culture robots are indistinguishable from living, breathing humans (some examples: Blade Runner, Westworld, Ex Machina, and Black Mirror).

We’re obsessed with the pursuit of replicating or replacing ourselves. But strangely, the same obsession hasn’t really been applied to pets.

aibo (stylized in all lowercase letters, as opposed to its all-caps predecessor AIBO) might change that. Sony’s iconic robotic dog was originally introduced in the early 2000s. The company timed AIBO’s release with two research papers, written by a group of computer scientists, that delved into how A.I. could simulate animal intelligence. They were among the only papers on the subject at the time, which is to say our understanding here is pretty scant. The papers detailed how the company used studies of animal behavior (ethology) to program the bots: one described how the team essentially broke basic animal behavior down into a series of modules that the robo-pup could simulate, like whining for attention, and the other described how the team modeled AIBO’s complex emotional system to match predictable, relatable dog behavior that humans could form a connection to.

AIBO wasn’t the only robo-dog of the early aughts—the far less expensive Poo-Chi toys were wildly popular in the same time frame, and the instinct to raise a robotic creature fed the popularity of digital critters from Neopets to Pokémon.

Despite their introduction in the early 2000s, tangible robotic pets have remained a novelty—until now. Sony discontinued production of AIBO in 2006, but on November 1, the company announced that it would be reviving the robotic dog. The new aibo, available exclusively in Japan in January, will be packed with A.I., including software that allows it to “learn” in a rudimentary fashion by repeating behavior that gets positive feedback from its owners, according to the New York Times. aibo’s novelty is that it’s a device that actually needs your input: it’s specifically made to be interacted with, played with, and talked to, unlike other now-ubiquitous connected devices.
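The “learning” the Times describes can be pictured as simple reward-weighted behavior selection. The sketch below is our own illustration of that idea, not Sony’s actual software; the behavior names and the update rule are assumptions.

```python
import random

class RoboPet:
    """Toy sketch of feedback-driven behavior selection."""

    def __init__(self, behaviors):
        # Every behavior starts with an equal preference weight.
        self.weights = {b: 1.0 for b in behaviors}

    def act(self, rng=random):
        # Pick a behavior with probability proportional to its weight,
        # so well-received behaviors are repeated more often over time.
        behaviors = list(self.weights)
        return rng.choices(behaviors, weights=[self.weights[b] for b in behaviors])[0]

    def feedback(self, behavior, reward):
        # Positive feedback (petting, praise) raises a behavior's weight;
        # negative feedback lowers it, but never below a small floor.
        self.weights[behavior] = max(0.1, self.weights[behavior] + reward)

pet = RoboPet(["bark", "sit", "wag"])
pet.feedback("wag", +1.0)   # owner pets the dog after it wags
pet.feedback("bark", -0.5)  # owner scolds the barking
# "wag" is now the behavior most likely to be chosen by act().
```

After a few rounds of feedback, the toy pet’s repertoire visibly shifts toward whatever its owner rewards, which is roughly the dynamic the new aibo is said to exhibit.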

That need for human care frankly scares the crap out of experts like Sherry Turkle, a psychologist at MIT who has written extensively about human beings' interactions with “sociable” computers. The danger in forming a bond with a robot or nurturing it like a living creature, Turkle said, is in assuming that the bond goes both ways.

“When a computer or robot seems to ask for our help we treat it as though it cares about us,” Turkle told The Daily Beast via email. “We are vulnerable here. We are vulnerable to feeling that objects that have no care for us do have care for us.”

Turkle said that “synthetic pets” still wouldn’t be capable of feeling emotion. Our living, breathing pets today do, albeit in slightly different ways (a 2017 study, for example, found that dogs have strong brain responses to the smell of familiar humans and to emotional cues in verbal speech, a testament to the two species’ 30,000-year bond). When people turn to a synthetic pet, which has “no capacity for a relationship with us,” for the emotional gratification we typically reserve for something that can “love” us back, Turkle said, it puts “fake emotion” into our lives. “Developmentally, I can see only harm,” she said.

But as A.I. advances, it may get harder and harder to tell the difference between “real” and “synthetic.” Turkle’s opinion is that A.I. will always remain artificial, and any emotions it presents are simulated. In humanoid A.I., of course, we wrestle with this definition: If a simulation of consciousness, emotion, and humanity becomes indistinguishable from the real thing, who’s to say it’s not real?

A.I. researchers have proposed a number of well-defined processes or tests for determining whether or not a robot is conscious. One of the oldest and most rudimentary is the Turing test, a procedure designed to figure out whether an A.I. can simulate consciousness and intelligence well enough to fool a human being into thinking it’s one of them.

But there isn’t any such “Turing Test” for pets. In fact, we still aren’t sure what makes an animal “conscious” or not; performing the same tests on computers is even more difficult. Dr. Manuel Blum, a professor of Computer Science at Carnegie Mellon University who originally studied under Marvin Minsky, one of the godfathers of A.I., told The Daily Beast that he’s still trying to formulate a good set of qualifications that would test for “consciousness” in a machine.

In animals, Blum explained, researchers can perform a very rudimentary test to determine whether or not a creature is self-aware. In the “mirror test,” an unsuspecting animal is marked with some sort of paint on a part of their body they cannot see, like their forehead. The animal is then shown a mirror. If they see their reflection, with the paint on their forehead, and attempt to wipe it off, they pass the test: they can recognize themselves in the mirror, and connect that the paint they see in the reflection is on them in real life. (Dogs, interestingly, don’t pass the test. Elephants and some other animals do.)

But Blum said trying to apply a similar test of consciousness to A.I. quickly falls apart. It’s very easy to code a program to pass the mirror test; consciousness has to require more than that, like some form of inner thought process that can choose actions beyond knee-jerk reactions to stimuli. Still, he said, we’re probably approaching the time when these conversations become necessary.
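Blum’s point about how easily software passes the mirror test can be made concrete. This toy sketch (our own hypothetical illustration, not Blum’s code) “passes” by mechanically comparing what it sees in the mirror against a stored self-model, with no inner experience involved:

```python
# The program's static picture of its own body: every part is unmarked.
SELF_MODEL = {"forehead": "clean", "back": "clean"}

def mirror_test(reflection):
    """Return the body parts the program would try to wipe clean.

    A part gets 'wiped' whenever the mirror image disagrees with the
    self-model. This is a knee-jerk rule, not self-awareness.
    """
    return [part for part, seen in reflection.items()
            if SELF_MODEL.get(part) != seen]

# An experimenter paints the forehead; the rule fires and "wipes" it.
marks = mirror_test({"forehead": "painted", "back": "clean"})
print(marks)  # ['forehead']
```

The program behaves exactly like a self-aware animal in the experiment, which is why Blum argues consciousness must require something beyond this kind of stimulus-response matching.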

“I’m very optimistic about what computers can do,” Blum said in an interview. “I’m very optimistic about A.I.” This barrier—when simulated intelligence becomes nigh-indistinguishable from the real thing, either a dog or a human—is close. “I think that these machines are very close to achieving it.”

Blum is optimistic, and seems to regard the coming singularity—when a computer can simulate your pet, or your fellow man—with curiosity. For Turkle, it’s more of an existential threat. “The simulation of thinking,” she said, in reference to a Turing test, “may be enough for us to be content to take it as thinking. But the simulation of feeling is not feeling, the simulation of love never love.”

A robotic dog may be able to simulate love. It may even be able to simulate waking you up at 5 a.m., whining for food that it does not need. But ultimately, it’s up to us to decide if that makes it real or not.