This is actually happening. Elon Musk has formed a company – Neuralink – to bring telepathy (more prosaically, "brain-machine interfaces") to the masses. And do it in less than 10 years.
Should we take Musk seriously?
Tech journalism is good at pumping up excitement, or derision, but it doesn't provide much guidance beyond the glitz. Techno-optimists, who get the most press, adore Musk for fearlessly doubling down just as technology is at the cusp of transformation, and succeeding time and time again.
Yet medicine is different. In building a battery, or a solar panel, or a space ship, doing the same thing in the same way a second time will yield the same results. Biology is rarely so kind. Scientists understand how the knee works in intimate detail, yet the medical device industry still has trouble designing a reliable knee replacement. We have no idea how the brain works; to ask Musk to build a transformative brain-machine interface in just a few years seems hopelessly naïve.
Which side is right?
Having walked through the logic myself, I started as a Neuralink skeptic, but now I think that Musk's bet, while emphatically risky, is not as crazy as it sounds. I'll argue below that each side is making good assumptions about technology development, based on solid experience, that turn out to be wrong when applied in this new context.
But strangely, my concordance with Musk hinges less on anticipating radical improvements in technology, and more on anticipating radical changes to society. It’s demand, not supply, that will determine Musk’s success.
Musk is not just betting that his technology will work. He is betting that the regulatory and cultural environment around medical devices will change drastically in the next decade. So drastically that things that are considered immoral if not illegal now will instead be commonplace and desirable.
Will Neuralink surprise the experts? To answer this question, let's start with a story of someone else who made a bet that biologists would never believe could pay off, and see what that teaches us about upending conventional wisdom.
A DNA "tile", which self-assembles into nanometer-scale grids. From Strong M, "Protein Nanomachines," PLoS Biol 2(3): e73 (2004). Image: Wikipedia
An argument from analogy: The DNA computer
Professor Leonard Adleman, a computer scientist famous for helping create the RSA encryption algorithm, was out of his element when he stepped into a molecular biology laboratory in 1993. He was exploring experimental science to gain a better understanding of modern biology, so that he could communicate effectively with mainstream AIDS researchers about his ideas. But one night in bed, while reading a genetics textbook, he had a startling realization: If DNA served as Nature's carrier of information, could we, through clever design, harness it for computation?
Adleman did not sleep that night, and in the coming weeks he thought deeply about what tools molecular biologists had to offer, and what useful problems he might solve with them. And when he settled on a problem – the Hamiltonian path problem, which asks for a route that visits each of several destinations exactly once – it took him a mere seven days in the lab to execute the experiment. Adleman started with nothing more than a clever idea, and within a week he had taught soup to do math.
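To see why the problem is a natural fit for brute-force chemistry, it helps to state it in code. The sketch below is a conventional-computer caricature, not Adleman's DNA encoding: it simply tries every ordering of the vertices, which is essentially what his test tube did in parallel.

```python
from itertools import permutations

def hamiltonian_path(vertices, edges):
    """Brute-force search for a path that visits every vertex exactly once.

    Treats the graph as undirected; returns one valid ordering, or None.
    """
    edge_set = set(edges) | {(b, a) for a, b in edges}
    for order in permutations(vertices):
        if all((a, b) in edge_set for a, b in zip(order, order[1:])):
            return list(order)
    return None

# A small toy graph (Adleman's actual instance had seven vertices).
path = hamiltonian_path([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (0, 2)])
print(path)  # → [0, 1, 2, 3]
```

The factorial blow-up in `permutations` is precisely why the problem is considered hard, and why computing it with a spoonful of DNA – billions of candidate paths assembling at once – was such a provocative demonstration.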
Biologists found the concept to be an interesting parlor trick, but little more. The computation was absurdly slow, so the problems it could conquer were "trivial" in the mathematical sense, and there was no immediate application to jump on. But more to the point, Adleman's work felt like a parlor trick, as real biology experiments did not take a week. A biology graduate student was lucky to make a scientific contribution in less than six years. Something felt amiss.
You see, almost nothing works easily in biology. Proteins are impure. Enzymes denature. Frog eggs contract viruses and die. You think that you understand a biological mechanism until you try to test it a second time, and get wildly different results. A biochemist friend of mine once drew me a graph to explain to me how understanding worked in her field, and it looked something like this:
How biological research progresses
But DNA is not like other biological materials. DNA has evolved to be extraordinarily reproducible, following precise, predictable rules – if DNA did not behave with such precision, it could not be the carrier of genetic information. In that way, DNA indeed shares much in common with computer code, and the instinct of the computer scientist turned out in this narrow case to be far more valuable than the instinct of a biologist.
So when I critically assess the chances of Neuralink, I don't want to ask whether medical experts are "too conservative". That question is devoid of anything actionable. Instead, we should ask whether medical experts are making the same sort of errors in thinking about brain-computer interfaces as biologists did with DNA computing.
Is there something special about brain-machine interfaces that makes them much easier to create than normal medical devices?
Could brain machine interfaces be easier than we expect?
Neuralink’s brain machine interface does not appear to actually exist yet.
That's the only conclusion I can come to after skimming the "announcement" of the technology, which arrived via unapologetic Musk fanboy Tim Urban and his blog Wait But Why. Urban's exclusive certainly was breathless, if you can hold your breath for 36,000 words interleaved with copious drawings of cavemen, monkeys and frogs. It lacked, however, any details about what Musk's brain-machine interface actually was.
But his lack of a technical platform doesn’t mean that Musk is wrong to want to invest in the field. In fact, just a few weeks ago the startup Synchron closed a $10 million investment round to build something very similar.
And while Synchron and its founding scientist Prof. Thomas Oxley of the University of Melbourne don’t have Musk’s Twitter following, they have actually published data. So let’s consider Synchron for a moment, and see what their published work can teach us about what Musk wants to do.
The Stentrode. Image: DARPA
The picture above is a 'Stentrode', an FDA-approved brain stent that has been modified to carry small platinum electrodes (circles). Oxley's team inserted these stents into the brains of five sheep, not by drilling holes in the sheep's heads, but by snaking the devices through their blood vessels, lodging them in the veins that serve the brain. The stents thus could get physically close to brain tissue without breaching the skull. And they remained in the sheep for 190 days, quietly collecting data on brain function without degrading in performance.
This brain-machine interface, funded by DARPA, is more than just science fiction. And it has real medical applications: There is a condition called Locked-In Syndrome, where a patient is alive and aware but cannot move due to near-complete paralysis. Many more conditions exist where there is an impairment between executive command and the body's ability to respond, from cerebral palsy to ALS to stroke.
Any communication of thoughts starts with muscles; solve any muscle control problem, and you potentially have a broader platform for ‘telepathy’. No one would think it odd for a startup to develop a remedy for paralysis, but the implications of such a treatment are far broader than the implications of any other medical device. And there are two ways in which solving brain communication is very different from solving any other medical problem.
The first is that for brain-machine interfaces, the target of treatment – the executive function – is not actually diseased. Patients may have a failure at the lower brain or the spinal cord, but their thinking is still intact. This matters because there are many more ways that tissue can be diseased than healthy. Take out the variability inherent to disease, and the problem of treatment becomes easier to solve. A brain machine interface doesn’t fix biology; it goes around it.
The second critical difference is that the brain has evolved to be plastic – to respond to feedback. Most tissue will simply react to a stimulus with whatever intrinsic capabilities it has. The brain can learn and adapt in surprising ways, whether learning a new language or, in the case of Helen Keller and Anne Sullivan, an entirely new mode of communication.
And the brain-machine interface can learn too. The interface can be trained – it can wait for strong signals from a user, and then assign that thought pattern to an activity that may be completely unrelated. No one cares if, in order to walk, you have to imagine petting a cat – not when the alternative is paralysis. A patient will train the interface, which in turn will train the patient, reinforcing the neural connections that lead to success. The process has more in common with learning to type than getting a hip replacement.
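This two-sided training loop is easy to caricature in code. The sketch below is purely illustrative – the signal model and every number in it are invented, and a real decoder would be far more sophisticated – but it shows the key property: the interface does not need to know in advance what a "walk" thought looks like, only to adjust toward whatever feedback it receives.

```python
import random

random.seed(0)  # make the toy run reproducible

def read_signal(intent):
    """Stand-in for an electrode reading: noisy, but stronger when the
    user produces the trained thought pattern. Numbers are invented."""
    base = 0.8 if intent else 0.2
    return base + random.gauss(0, 0.1)

# The interface's half of the loop: it starts with a badly chosen decision
# threshold and nudges it whenever it decodes the user's intent wrongly.
threshold = 0.9
for trial in range(200):
    intent = random.random() < 0.5       # did the user try to "walk"?
    signal = read_signal(intent)
    decoded = signal > threshold
    if decoded != intent:                # feedback from the user's side
        threshold += 0.05 if decoded else -0.05

print(round(threshold, 2))               # settles near the signal midpoint
```

The patient's half of the loop is the mirror image: with practice, the "intent" signal itself grows cleaner and more consistent, which is why the analogy to learning to type is apt.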
Weirdly, with brain-machine interfaces, the squishiness and complexity of the brain is not a defect, but a feature.
This model lends credence to the idea that a brain-machine interface should be thought of very differently from normal medicine. In a conventional medical device the patient must adapt to the implant. With a brain-machine interface, the implant can adapt to the patient.
Following this logic, it is not nuts to believe that NeuraLink will be easier to build than medical device professionals fear. But there are still plenty of reasons to wonder if Musk is being exceptionally naïve.
The FDA rates 71% of new medical devices ("PMAs") as providing inadequate data to demonstrate safety and efficacy on first submission. Go back and try again. Image: Rock Health
The case for the skeptics
Medical devices are hard to build because life is slow, precious, and staggeringly confusing.
There is a sense among some software startup investors – Peter Thiel comes to mind – that government is slowing medical innovation by applying an unnecessary regulatory burden. If medical technology were regulated like software – think Yelp for drugs – we would get software-speed results.
But that ignores the biggest challenge in medical devices and pharmaceuticals: There is no way to run an accelerated test. If you are examining whether your drug causes birth defects, you can't make women gestate faster. If you want to know if a drug increases the risk of cancer, you can't make subjects age faster. You have to wait.
Programmers would feel a lot less smug about the superiority of their methods if each revision of code took a year to compile.
Actually, the problem is worse than this, because in biology in general, and in the brain in particular, we have no idea how the system is wired. Contrary to the beliefs of creationists, biology uses no particular design. To wit: I am one of the 20-30% of the US population that suffers from the "photic sneeze reflex" – when I exit my dimly lit house for the bright sunlight of the outdoors, I sneeze. No one has a clue why this reflex exists, and that should give anyone who is playing with the brain pause. We have no idea what problems might occur, which makes them extraordinarily difficult and expensive to find.
Let's return to our Stentrode-modified sheep for a moment. The Nature Biotechnology article describing the Stentrode's successes also notes "artifacts resulting from chewing muscle activity" – the device could not get a good signal while the sheep chewed (something they do almost all the time). How many activities do humans engage in that might also confound the signals, or trigger an adverse reaction? It would be horrible to receive an implanted brain-machine interface for controlling paralyzed legs, only to discover after several months of use that the system activates during particularly vigorous sex, or when straining due to constipation. If that sounds absurd, consider the case of the antidepressant that caused a small percentage of female users to orgasm when they yawned.
Imagine everything that could possibly go wrong, and multiply it by 10, and that becomes a starting point for the number of tests that might be required before a company can claim their device suitable for general use.
Oh, and did I mention that in the Stentrode testing, one of the five sheep "developed unexpected generalized whole-body convulsions 16 h following implantation"? Even with a nominally safe technology – and the Stentrode is built on FDA-approved stents – the risk of adverse effects is real. If something goes wrong with just a small percentage of the population, what do you do?
Medical device professionals are objecting vociferously to Musk's announcement for very good reasons: There is no way to speed up testing, no way to model biology on a computer, people's lives are on the line, and a seemingly infinite number of things could go wrong. This is why the average new medical device requires $94 million of development, and why medical device startups that promise fast times to market are also famous for failing to deliver. Something always misbehaves. Murphy's Law is not with you.
Elon Musk, currently worth $15B, can afford to make a nine figure bet, and his bold timeline is primarily intended to motivate his team. And if Musk fails to meet his goals, he will at least not suffer the indignity of explaining his miscalculation to funders. He’ll own it.
Perhaps Musk will get a brain machine interface to work in an initial test faster than we expect. But there is no evidence that Musk’s audacity can fundamentally bend the reality that biology is both slow and pernicious. Which means that Musk is either delusional, or he is serious about bringing a product to market, despite these risks.
And that second option may be more realistic than you’d think.
The greater the risk your cancer poses, the greater the chance you will choose chemotherapy, despite its side effects. From BMJ Open.
What if Musk doesn’t think that normal rules about risk will hold?
Simply put, in today’s regulatory structure, it is impossible for Musk to meet his goals.
Musk has put forward a four year timeline to reach his first market, and an 8-10 year time frame for delivering a brain machine interface to the masses. This is, by any normal analysis, sheer folly.
It is folly first because it dramatically underestimates the scope of work needed to adequately lower risk. Medical devices have to be tested on live humans, without harm. If a device is rushed to market and an “adverse event” occurs, regulators will bring the entire program to a screeching halt. And once a device gets approved it is only slowly changed – the regulatory burden is too high to allow frequent upgrades to be economic. It is for good reasons that both medical device companies and the regulatory agency play a very conservative game, and try to get things right the first time.
Worse for Musk, FDA policy is guided by the centuries-old medical dictum Primum non nocere, “First, do no harm.” Being merely human is not, in itself, a treatable medical condition, and the FDA’s current stance is that it cannot approve treatment in such cases. A device released or promoted without the FDA stamp of approval is at far greater exposure to litigation. And while an implantable device could be repurposed for “off-label” use, it is unpredictable how regulators would react. In fact, those who have thought deeply about brain-machine interfaces suggest they be regulated more tightly than conventional medical devices, precisely because their use is driven by choice and not need.
For Musk to succeed, he must either get around today's laws and regulatory structure, or change them entirely.
Offering the carrot: Who wants to challenge me to a trivia contest?
If Neuralink can be made to work – and I want to be clear, that is still a big "if" – the first recipient who retains speech would become gifted with Watson-like skills at Jeopardy.
The on-stage demo would be very impressive, and it would drive demand. And that demand would step up pressure on the world's regulatory bodies to reconsider bans on brain-machine interfaces for "non-medical" use.
This presents an opportunity where Musk shines. Musk has skillfully played states against each other in the US to drive subsidies for SpaceX and Tesla facilities, and he could offer the world’s regulators a similar deal: Change your laws, and your country will be the first to receive brain-machine interfaces. Musk’s ethical and legal challenges diminish substantially if he can shop for his own regulations, the way the rest of us shop for price. And if he can get support of a mainstream regulatory body – an established brand, if you will – medical tourists and their spending will follow.
But of course a lower regulatory burden means less testing, which inherently means greater risks. In fact, a recent study found that medical devices approved first in the EU – where the regulatory burden is lower than in the US – were three times more likely to have safety alerts or recalls. Will people with a healthy brain and body really be willing to shoulder such a risk? Are regulatory agencies really willing to encourage this?
Perhaps they will, if in ten years a healthy brain is considered unsatisfactory.
What risks would you take to maintain your job, and sense of self-worth? From Watson For President.
The stick: The artificial intelligence threat to humanity
A future of brain-machine interfaces may conjure up an Ayn Randian utopia, where gains accrue to those willing to work hard and shoulder risk. Or it may elicit every supervillain ever cast in a Marvel film, greedy for power and willing to do whatever it takes to capture it.
But what if the driver is not desire, but fear?
It is clear from his many statements on the subject that Musk believes that artificial intelligence is coming, and that it represents a danger to humanity. And as self-driving vehicles render our entire transportation infrastructure obsolete, and white collar jobs fall to the prowess of algorithms, the general public may begin to share in his alarm.
I wish I were overstating this, but within Musk's 8-10 year timeframe for Neuralink to succeed, we may come to see a brain-machine upgrade not as a frivolous pursuit, but as central to our self-worth. How will the rules of society change if we are faced with a choice: Give up the pretense of work, or give up being solely human?
Today’s FDA exists because of the avarice of hawkers of patent medicine and adulterated foods a century ago. Regulation is designed to protect us from the risks of unthinking action, whether from greed or desperation. But what if the greater risks are from inaction? What value is health, if we are left with misery?
If the threat of AI proves real – a threat that should at least be taken seriously – even the cautious among us may be willing to let volunteers try the technology, and de-risk it for the rest. Early adopters of Musk's incompletely tested brain interface technology would perhaps be seen similarly to the great explorers of history: Much of Columbus's crew would eventually die of scurvy, but was that not their choice to make? Would we stop people from choosing to colonize space, simply because it is risky to their lives?
Why is a biological frontier so different from a physical one?
It is odd, and more than a little uncomfortable, to write that NeuraLink looks like a bet that only pays off if mankind feels threatened by an existential crisis. Yet the truth is that we do not understand our own intelligence or expertise or consciousness, and we have no real conception of how fast AI will grow to challenge us in these fields.
So how does one even begin to consider a question that is simultaneously both impenetrable and dire?
Back in the 1600s, the French philosopher Blaise Pascal found himself in a similar quandary when he recognized that all humans wager with their souls that God exists, or does not. He wrote:
“God is, or He is not.” But to which side shall we incline? Reason can decide nothing here…
Let us weigh the gain and the loss in wagering that God is. Let us estimate these two chances. If you gain, you gain all; if you lose, you lose nothing. Wager, then, without hesitation that He is.
Pascal argued that the prudent man would wager belief in an unknowable God, for if he is wrong and God does not exist, no harm will have come. But if God does exist, he will be eternally rewarded for his belief, and damned for his doubt.
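In modern decision-theoretic terms (a standard reconstruction, not Pascal's own notation), the wager is a claim about expected value: for any nonzero probability $p$ that God exists,

```latex
\mathbb{E}[\text{believe}] = p \cdot \infty + (1-p)\,c
\;>\;
\mathbb{E}[\text{doubt}] = p \cdot (-\infty) + (1-p)\,c'
```

where $c$ and $c'$ are the finite worldly costs and comforts of each choice. An infinite payoff swamps any finite stake – which, as we'll see, is exactly the structure of a bet placed against an existential threat.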
Those who bet on the future of humanity share a lot in common with Pascal. I once interviewed the leaders of Alcor, whose customers have each paid tens of thousands of dollars to have their heads frozen after death, so that they may be revived in an indeterminate future when the appropriate technology exists. Alcor offered their own wager, very similar to Pascal’s: the prudent non-believer chooses Eternity not in heaven, but in liquid nitrogen. If you desire eternal life then a tithe must be paid, to the God of your choice.
Musk is on record with his belief that AI is coming. That the timetable is roughly 10-20 years. And that its arrival will inexorably alter human history. Those who judge Musk’s investment in Neuralink by today’s market and regulation are missing the point. Musk is not trying to develop new physics. He is developing new metaphysics.
Musk's audacious investment in Neuralink makes sense only if we imagine that soon the world will be very different from what we experience today. Musk's bet may be wrong, but he is not making the errors his critics imply. He may instead be betting what for him is a small cost, for an eternal reward.
This is an entirely different way of looking at technology investment. And despite its long odds, it’s not a wager I’m ready to dismiss.