On one of our last walks with my dad, when he was very sick, my brother Dan lamented that we would miss having him as a sounding board for advice. “You generally know what I would say,” my dad responded. And we thought, actually, that was often true.
Recently I began to imagine my father, after his death, recreated as an AI. Reading a piece about a woman who “rebuilt” a dead friend using artificial intelligence (and yes, watching the Black Mirror episode that tells a similar story), I found myself circling the question again: What is the essential element of humanness? Or, put differently, how do we create AI that is not just smart but intelligent?
News just broke that Reid Hoffman (the LinkedIn founder) and Omidyar Network (the nonprofit of eBay founder Pierre Omidyar) are committing $10 million each to fund “academic research and development aimed at keeping artificial intelligence systems ethical and preventing AI that may harm society.” Joi Ito, director of MIT’s Media Lab, explained in the press release: “One of the most critical challenges is how do we make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society.”
While we tend to think of AI as transformative and cutting edge, the perception doesn’t match reality. As I recently wrote for The Daily Beast, our robots can exhibit sexism because they are programmed by people, and people are by nature flawed. Most programmers come from a small subset of the population (often young, white or Asian, male, and based in Silicon Valley or a few other cities), and they code human myopia into their artificial intelligence products.
We should care about this because we want to build the best technology, and because we are welcoming technology into our lives in increasingly intimate ways. This intimacy is exciting. But we are missing out on potentially transformational technology that would be better oriented toward achieving “intelligence” in a broader sense.
In response to my recent article, Dr. Julia Mossbridge, project lead for Developing LOVing INtelligent General AIs (the LOVING AIs project), reached out to me. Mossbridge is also director of the Innovation Lab at the Institute of Noetic Sciences, visiting scholar in psychology at Northwestern University, and science director for Focus at Will Labs.
“I wonder,” she wrote, “if the more time male AI theorists & developers spend with their kids and the women they know, the better their AI ideas become.”
Most AI is being developed in service of corporate missions that may or may not aim at social good, or even at optimizing utility. But the LOVING AIs project seeks “to create AI systems that have profound general intelligence as well as a radically positive attitude toward life, humanity and themselves.” Wired recently postulated that because today’s tech products are built on neural networks rather than hand-coded programs, physicists rather than coders will be the architects of the next generation of tech. Perhaps the time is ripe to question the line between humans and robots in the context of love.
Below is an excerpt of an interview with Mossbridge about how we can use love as an “evolutionary hack” to get better AI.
Me: What made you want to create the LOVING AIs project?
Dr. Mossbridge: I have a different view of science from many scientists—or maybe I am just unusually vocal about it. I think scientific progress is fastest when we consider both inner and outer influences on the discipline. Outer influences are the ones most often considered—they include access to funding, collaboration, information, and equipment. However, inner influences are, in my experience, often ignored. They include access to emotional wellbeing, intuition, and healthy friendships with other scientists.
Another critical and ignored internal influence that supports scientific progress is the experience of unconditional love. Doing science is difficult in many ways, and experiencing unconditional love (or agape, as the Greeks called it) is a huge benefit to any scientist. It’s a huge benefit to any person.
AI development, like most other scientific ventures, often ignores internal influences. The exception is the AI developers who are working on AGIs (artificial general intelligences); many of them are starting to see the value in coding internal states and influences into their AIs. I want AGI and AI developers to understand the power of unconditional love, and I want AGI and AI users to experience it.
While I have some programming experience, my background is in cognitive neuroscience and experimental psychology, so I partnered with Ben Goertzel and Eddie Monroe, who are already working on a beneficial open-source AGI project called OpenCog, which will work with robots from Hanson Robotics. Together we applied for and received seed funding for the first year of what we hope will be a growing effort to create LOVING AIs.
Me: You suggested that I think of love as an evolutionary hack. What do you mean by that?
Dr. Mossbridge: I think unconditional love, and love in general, is misunderstood as “only” an emotion. It is also an internal resource that is energizing and valuable for progress in science, technology, engineering, and math.
The more I work with the team to think about how to program the experience of unconditional love and how to convey that experience to others so that they feel it as well, the more I believe that love can be thought of as a very sophisticated and efficient evolutionary hack.
What I mean is that in a world in which you can’t predict what the variety of human cultures will produce, you need a way to support the survival of the species that is not dependent on any of the particulars. Hard-wiring some kind of “if… then” algorithm works up to some limit of complexity, but evolution can’t have written rules for particulars it never saw: drones, cellphones, and cars have existed for almost no evolutionary time.
Clearly, we have some kind of code like, “if it’s going to hurt another human, then don’t do it, unless necessary.” But that’s not love. What does love buy us on top of that? I think it buys us deep trust and actual help in very difficult times, times that can’t be predicted specifically. The experience of unconditional love is like having a sensor for unpredictable times. Those feelings become activated, we get energized by them, and they give us access to problem-solving abilities that we didn’t know we had. We become passionate as a result of them, and we create actions that support the survival of the species. It seems to me that all of this is a hack that would help create better and more enriched AIs and AGIs by solving problems more creatively. It’s not just about making people feel good.
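[A quick aside from me: to make that brittleness concrete, here is a minimal, purely illustrative Python sketch, my own and not anything from the LOVING AIs codebase. A hard-wired rule list blocks only the harms its author anticipated, and stays silently blind to novel ones, which is the complexity limit Mossbridge describes.]

    # Hypothetical illustration: hard-wired "if... then" ethics.
    KNOWN_HARMS = {"strike a person", "withhold medicine"}

    def rule_based_ok(action: str) -> bool:
        """Forbid only the harms the programmer thought to list."""
        return action not in KNOWN_HARMS

    assert not rule_based_ok("strike a person")       # anticipated harm: correctly blocked
    print(rule_based_ok("fly a drone into a crowd"))  # True; the rule set never named this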
Me: What would you say to those who might view your “love as an evolutionary hack” as something that is inherently feminine, or not “hard science” enough?
Dr. Mossbridge: It’s not scientifically reasonable to say that something is inherently “feminine” or “not hard science”; both are non-scientific judgments. Any empirical question can be tested, and can be tested using scientific methods, including whether unconditional love will boost productivity and creativity.
It’s scientifically reasonable to dispute whether an approach that includes awareness and examination of unconditional love will be useful in AI or AGI development and theory. The proof is in the pudding: we’ll see if this approach helps us solve other AI and/or AGI problems, so that’s an empirical question. I have reasons to believe it will, because I watch people work, and when they experience unconditional love, their productivity and creativity grow remarkably. But I could be wrong. Anyone saying something isn’t hard science, though, is being imprecise; they are not being scientific enough to explain clearly what they find untestable here.
Me: How do you articulate the value-add of increasing the diversity of developers in AI?
Dr. Mossbridge: My sense is that the more people with diverse genders, ethnicities, races, and socioeconomic backgrounds we have in AI, not only will the environment improve and become more inclusive in general, but we will also be able to solve problems in ways that the current crowd may not be thinking about. In other words, if we want good, speedy AGIs, we can get them by improving the diversity of the AI programming environment, because diversity in experience provides diversity in brain power. Because of the way our experiences shape our brains, the greater the diversity in life experience, the better the answers we will get to tough problems in AI.
For instance, the insight mentioned in the recent New York Times Magazine article about Google’s AI program, that narrow AI can’t throw a ball or pick out an image of a cat, led to a whole field (AGI development), and that insight can likely be traced to either a mother being involved in AI or a male theorist spending time with his kids.
Me: Ultimately, if our AI is truly “intelligent,” what will be the essential element of humanness? Will it be the tendency to make mistakes and break our own rules?
Dr. Mossbridge: Good question. I don’t worry about that too much, because I have a kid. In other words, I spent nine months creating an intelligent creature who is not myself. I don’t feel afraid he will replace me; in fact, that’s his function. When you have a child, something happens to your ego and your concern about being replaced: it’s no longer a problem. I’m here for a short time, and my uniqueness is who I am.
I don’t know if that uniqueness is strictly human. I used to study killer whale communication, and it’s clear they have a lot of intelligence and access to inner resources. Do we get jealous of them? I think that when you grasp that you are unique and you are not permanent, you get to relax and not worry about humans being special or better than or replaceable. We’re all impermanent, and we’re all unique. It’s a good place to be.
*The views expressed herein are the personal views of the author and do not necessarily represent the views of the FCC or the U.S. Government, for whom she works.