The dystopian television show Black Mirror has begun to feel less like fantasy as Elon Musk’s brain-implant startup, Neuralink, gears up for human trials. Soon the coin-shaped devices could allow patients to operate computers using only their thoughts.
But despite the relentless momentum behind the world’s richest man, scientists are worried about the company’s oversight, the potential impact on trial participants, and whether society has meaningfully grappled with the stakes of fusing Big Tech with human brains.
“I don't think there is sufficient public discourse on what the big picture implications of this kind of technology becoming available [are],” said Dr. Karola Kreitmair, assistant professor of medical history and bioethics at the University of Wisconsin-Madison.
“I worry that there's this uncomfortable marriage between a company that is for-profit… and these medical interventions that hopefully are there to help people,” she added.
The five-year-old startup’s initial aim is to help alleviate certain disabilities, like enabling paralyzed people to control their computers and mobile devices through brain activity. Musk has signaled far larger ambitions down the road, however. He previously outlined his vision to help humans achieve “symbiosis” with artificial intelligence to avoid being “left behind” by machines.
Experts worry about every step of Neuralink’s trajectory—starting with the trials themselves.
“These are very niche products—if we’re really only talking about developing them for paralyzed individuals—the market is small, the devices are expensive,” said Dr. L. Syd Johnson, an associate professor in the Center for Bioethics and Humanities at SUNY Upstate Medical University.
“If the ultimate goal is to use the acquired brain data for other devices, or use these devices for other things—say, to drive cars, to drive Teslas—then there might be a much, much bigger market,” she said. “But then all those human research subjects—people with genuine needs—are being exploited and used in risky research for someone else’s commercial gain.”
In interviews with The Daily Beast, a number of scientists and academics expressed cautious hope that Neuralink will responsibly deliver a new therapy for patients, though each also outlined significant moral quandaries that Musk and company have yet to fully address.
Say, for instance, a clinical trial participant changes their mind and wants out of the study, or develops undesirable complications. “What I’ve seen in the field is we’re really good at implanting [the devices],” said Dr. Laura Cabrera, who researches neuroethics at Penn State. “But if something goes wrong, we really don't have the technology to explant them” and remove them safely without inflicting damage to the brain.
There are also concerns about “the rigor of the scrutiny” from the board that will oversee Neuralink’s trials, said Dr. Kreitmair, noting that some institutional review boards “have a track record of being maybe a little mired in conflicts of interest.” She hoped that the high-profile nature of Neuralink’s work will ensure that they have “a lot of their T’s crossed.”
The academics detailed additional unanswered questions: What happens if Neuralink goes bankrupt after patients already have devices in their brains? Who gets to control users’ brain activity data? What happens to that data if the company is sold, particularly to a foreign entity? How long will the implantable devices last, and will Neuralink cover upgrades for the study participants whether or not the trials succeed?
Dr. Johnson, of SUNY Upstate, questioned whether the startup’s scientific capabilities justify its hype. “If Neuralink is claiming that they’ll be able to use their device therapeutically to help disabled persons, they’re overpromising because they’re a long way from being able to do that.”
Neuralink did not respond to a request for comment as of publication time.
Should the trials go well, experts said they will still be uneasy—perhaps doubly so.
“This technology has the potential to be life-changing for people who are paralyzed,” said Dr. Kreitmair, from the University of Wisconsin-Madison. But if the implants are successful in that use case, “then there will be [an] appetite for consumer uses of this technology,” like reading email using only one’s mind, or operating an autonomous vehicle. “And that raises such a slew of ethical concerns.”
One scientist, Dr. James Giordano, of Georgetown University, argued that commercial brain implants could create a “medical tourism market” as people around the world compete for access to the technology, which would introduce risks of poor oversight and quality control.
Other experts noted the dangers of hacked implants or computer chip viruses that could leave patients unstable—or worse.
“Our brain is our last bastion of freedom, our last place for privacy,” said Dr. Nita Farahany, a scholar on emerging technologies at the Duke University School of Law who, like some of her peers, found elements of Neuralink’s progress “exciting.”
The more distant commercial applications of its technology, however, carry “risk of misuse by corporations, by governments, by so-called bad actors,” Farahany added. “So when you have a company like Neuralink, [which] is talking about going into human trials, I think it really should put the world on alert that the time to be developing robust concepts like cognitive liberty is now.”
These concerns aren’t just science fiction anymore. Neuralink is one of multiple startups racing to perfect brain-implant technology, with competitors like Synchron and Neurable making huge strides as well.
Musk’s firm is notable, though, for “insufficient engagement with ethical issues”—even if it deserves credit in other ways—said Veljko Dubljević, who researches the ethics of neurotechnology and artificial intelligence at North Carolina State University.
If the Tesla CEO can pull off his vision and outfit large swaths of humanity with Neuralink tech, there will likely be issues of equity over who gets to turn their brain into a supercomputer.
But that assumes he isn’t overselling.
“With these companies and owners of companies, they’re kind of [showmen],” said Dr. Cabrera, of Penn State. “They’ll make these hyperbolic claims, and I think that's dangerous, because I think people sometimes believe it blindly.”
Musk’s history of controversies in particular, she added, “makes us worry about his other claims. So I'm always cautious about what he says.”