Book Review: Safe Enough?

Hi everyone! If you are subscribed to me at perspicacity.xyz: I have also moved over to Substack, because the admin is easier. It’s been a long, long time since I have posted, and I am hoping eventually to migrate over there permanently. But for now I will post to both places.

I will be rebooting the blog with this book review about nuclear risk, and why the public debate on nuclear safety has lasted so long without resolution. It was first posted to the Astral Codex Ten blog as a finalist in its book review contest. (I didn’t win; I’m fine with that.) I want to publicly thank the blog’s writer, Scott Alexander, for broadcasting the incredibly diverse set of finalists (including mine) to his audience, and to thank Scott’s readers for their often-thoughtful comments.

I’ll follow up with a few more essays on risk in general, something I have been thinking a lot about both personally and professionally.

And with that prologue complete, on to the post.


The date is June 9, 1985. The place is the Davis-Besse nuclear plant near Toledo, Ohio. 

It is just after 1:35 am, and the plant has a small malfunction: “As the assistant supervisor entered the control room, he saw that one of the main feedwater pumps had tripped offline.” But instead of stabilizing, one safety system after another failed to engage. 

Over the next twenty minutes there were twelve separate equipment malfunctions, including several common-mode failures, and operator errors… [The] steam-driven main and auxiliary feedwater systems tripped offline and could not be restarted from the control room…. the reactor coolant started to heat up. The reactor operators had their hands full as the primary system temperature rose four degrees per minute and pressure soon exceeded 2400 psi. 

The Davis-Besse reactor was a near-twin of the reactors at the Three Mile Island nuclear plant. If you, like me, grew up in the 1970s and 1980s, you might have heard of Three Mile Island, as in 1979 it made some news. The next failure at Davis-Besse exactly followed that script: “the pilot-operated relief valve cycled open and closed three times to relieve primary coolant pressure before it stuck open, just like at Three Mile Island.”

Two plant operators would have to improvise and run down several levels to the locked basement auxiliary feedwater rooms to reset the trip valves and start the pumps. Despite a “no running” policy, a pair of men bounded down the stairs. One operator was fleeter than the other, and the lagging operator threw him the key ring as he sprinted ahead. Once they removed the padlock, they had to descend a ladder, remove more chained locks, reset the pump’s trip valve, and reposition other valves. 

The quotes are from the book “Safe Enough? A History of Nuclear Power and Accident Risk”, by Thomas Wellock. In his day job, Wellock is the official historian of the U.S. Nuclear Regulatory Commission (NRC), an organization whose official responsibilities include screaming ‘Yes!’ to anyone who broaches this question. A coarsely cynical reader might thus expect Wellock to sidestep damning details of nuclear risk at the behest of his employer. This cynicism does a disservice to Wellock’s ambition.

Nuclear energy was quite popular in the early 1970s, with support in the US in the range of 70-80%. That changed after Three Mile Island, when support plummeted below 40%. But then, weirdly, in the 1990s support stabilized. Despite Davis-Besse, despite Chernobyl and Fukushima, US support for nuclear has stayed roughly in the band of 40-60% in the three decades since. Nuclear energy is perhaps unique as a technology, in that no amount of experience seems to change society’s comfort with it. The topic is forever radioactive.

Wellock sets out to tell this history: how the US public went from nuclear-lovers in the 1960s to suspicious in the early 1970s, hostile in the 80s, and ambivalent today. Wellock does not try to hoodwink us with happy talk – he makes clear what the stakes are in nuclear energy: in the case of Davis-Besse there was not simply a power plant at risk, but the potential to release radiation across America’s industrial heartland. Wellock tracks regulatory victories for the nuclear industry, expensive defeats at the hands of activists, and, always, political posturing over its future.

Yet “Safe Enough?” is less of a history of events than a biography of an idea, the birth of “Probabilistic Risk Assessment” as the guiding principle for understanding and mitigating risks in complex systems. The heroes of Wellock’s book are not nuclear plant night shift assistant supervisors, or the Nuclear Regulatory Commission training and assessment specialists, though they each make important cameos. The city of Toledo, Ohio is not safeguarded by watchful superheroes. It is protected by a methodology. 

There is a school of thought that sees nuclear risk assessment as a synonym for runaway civil service: nuclear regulation is a monster that serves only itself, justifying increases in its budget by enforcing ever more draconian requirements, in defiance of reason.

Wellock’s history offers an instructive counterpoint. As becomes evident from story after story of nuclear ‘events’ like Davis-Besse, the sprawling bureaucracy of the NRC was the only rational response to the mathematics of risk itself. 


The first fifteen years of operation of commercial nuclear power were relatively benign, at least in the sense that there were no major accidents. 

In Wellock’s telling, by the 1970s the essential dullness of nuclear energy was causing the industry a problem. If nuclear plants were to malfunction at some measurable rate, the industry could use that data to anticipate its next failure. But if the plants don’t fail, then it becomes very difficult to have a conversation about what the true failure rate is likely to be. Are the plants likely to fail once a decade? Once a century? Once a millennium? In the absence of shared data, scientists, industry, and the public were all free to believe what they wanted. 
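There is a standard statistical illustration of how little a spotless record proves – the ‘rule of three’, which the book does not invoke but which makes the dilemma concrete: if zero failures are observed over N reactor-years, the 95% confidence upper bound on the annual failure rate is still roughly 3/N. A minimal sketch:

```python
# 'Rule of three': with zero failures observed in N trials, the 95%
# confidence upper bound on the per-trial failure rate is ~3/N,
# because (1 - 3/N)^N is approximately e^-3, or about 5%.

for reactor_years in (100, 1_000, 10_000):
    upper_bound = 3 / reactor_years
    print(f"{reactor_years:>6} failure-free reactor-years -> "
          f"rate could still be as high as ~{upper_bound:.4f}/year")
```

A hundred quiet reactor-years rules out very little: the true rate could be as high as once every 33 years, or as low as never. Silence settles nothing.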

At its birth, the nuclear industry focused on imagining big risks, striving to prevent something called a Design Basis Accident. This was the kind of accident that made a manager feel important, protecting civilization from meltdowns with steel plates as thick as a bicep and concrete walls as wide as a Cadillac. Experienced engineers would concoct the worst event they could reasonably imagine, and if the nuclear design could contain it, well, it should be able to handle just about anything that life could throw at it. 

By the early 1970s, after the civil rights movement and Vietnam and with Watergate in full swing, the public was becoming jaded with Big People waving away concerns with Big Promises on the basis of little more than self-proclaimed expertise. And thanks to the Freedom of Information Act, the public was close to accessing the details of what industry leaders actually knew. Big People recognized that this would not be an entirely good look. 

So the Atomic Energy Commission did what industry and government always do in times of crisis: It formed a commission. It proposed to unveil to the public a better risk assessment tool, not so much for use by industry (since nuclear power was, to them, obviously safe), but as a particularly intense form of content marketing:

The AEC tried to reassure the public by answering what had been so far an unanswerable technical question: What is the probability of a major reactor accident? It was a tall order. How could engineers quantify the probability of an accident that had never happened in a technology as complex as nuclear power?

The leader of this effort to reinvent nuclear risk assessment was MIT engineering professor Norman Rasmussen, who was tasked with developing quantitative risk measures in terms easily understood by the public. Rasmussen recommended a radically more sophisticated approach: a new technique called Probabilistic Risk Assessment.

The solution proposed by Rasmussen was to calculate the probabilities for chains of safety-component failures and other factors necessary to produce a disaster. The task was mind-boggling. A nuclear power plant’s approximately twenty thousand safety components have a Rube Goldberg quality. Like dominoes, numerous pumps, valves, and switches must operate in the required sequence to simply pump cooling water or shut down the plant. There were innumerable unlikely combinations of failures that could cause an accident… the potential for error was vast, as was the uncertainty that the final estimate could capture all important paths to failure.  

In private, the fix was in, just as a cynic would expect. AEC Commissioner James Ramey was leery of an academic exercise he could not easily control, stating in 1973 “If it just shows one human life [lost], I’m against [publishing] it.” But despite the public relations risk of a negative result, the project went forward. 

Rasmussen delivered. In January of 1974, after 60 person-years of effort, the Chair of the AEC reported to Congress that the odds of a significant meltdown were less than one in a million. Congress and the public could rest assured that nuclear energy was far safer than comparable electrical generation methods such as coal or hydroelectric dams. The risks were astonishingly small, akin to getting hit by a meteor falling from the sky. Commissioner Ramey had nothing to worry about. The academics showed that nuclear energy was plenty safe enough.

Probabilistic Risk Assessment grew to become the dominant language for analyzing nuclear risk, and launched a set of practices that changed the culture of the industry forever. Yet in 1974, nearly all of US nuclear generating capacity was less than 5 years old. Unsurprisingly, this first implementation of Probabilistic Risk Assessment was too simplistic. 

The real world began to undermine Rasmussen’s rosiest, most headline-grabbing predictions almost immediately. 

[Figure: a log-log chart from the Rasmussen Report. The x-axis is fatalities (10 to 100,000); the y-axis is annual frequency (1/10 down to 1/10,000,000). Curves are shown for earthquakes, meteors, and 100 nuclear power plants.]

A plot from the Rasmussen Report estimating the likelihood of deaths from nuclear power as orders of magnitude less probable than dying from common natural disasters, closer to being killed by a meteor. There have been no known meteor deaths since this curve was published in 1974, though there is historical evidence that this is not impossible!


Let me put Wellock and Rasmussen aside for a moment, and try out a metaphor. The process of Probabilistic Risk Assessment is akin to asking a retailer to answer the question “What would happen if we let a flaming cat loose into your furniture store?” 

If the retailer took the notion seriously, she might systematically examine each piece of furniture and engineer placement to minimize possible damage. She might search everyone entering the building for cats, and train the staff in emergency cat herding protocols. Perhaps every once in a while she would hold a drill, where a non-flaming cat was covered with ink and let loose in the store, so the furniture store staff could see what path it took, and how many minutes were required to fish it out from under the beds.

“This seems silly – I mean, what are the odds that someone would ignite a cat?”, you ask. Well, here is the story of the Browns Ferry Nuclear Plant fire of March 1975, which occurred slightly more than a year after the Rasmussen Report was released, as later conveyed by the anti-nuclear group Friends of the Earth.

Just below the plant’s control room, two electricians were trying to seal air leaks in the cable spreading room, where the electrical cables that control the two reactors are separated and routed through different tunnels to the reactor buildings. They were using strips of spongy foam rubber to seal the leaks. They were also using candles to determine whether or not the leaks had been successfully plugged — by observing how the flame was affected by escaping air.  

One of the electricians put the candle too close to the foam rubber, and it burst into flame.

The fire, of course, began to spread out of control. Among the problems encountered during the thirty minutes between ignition and plant shutdown: 

  1. The engineers spent 15 minutes trying to put the fire out themselves, rather than sound the alarm per protocol; 
  2. When the engineers decided to call in the alarm, no one could remember the correct telephone number; 
  3. Electricians had covered the CO2 fire suppression triggers with metal plates, blocking access; and 
  4. Despite the fact that “control board indicating lights were randomly glowing brightly, dimming, and going out; numerous alarms occurring; and smoke coming from beneath panel 9-3, which is the control panel for the emergency core cooling system (ECCS)”, operators tried the equivalent of unplugging the control panel and rebooting it to see if that fixed things. For ten minutes.

This was exactly the sort of Rube Goldberg cascade predicted by Rasmussen’s team. Applied to nuclear power plants, the mathematics of Probabilistic Risk Assessment ultimately showed that ‘nuclear events’ were much more likely to occur than previously believed. But accidents also started small, and with proper planning there were ample opportunities to interrupt the cascade. The computer model of the MIT engineers seemed, in principle, to be an excellent fit to reality. 

As a reminder, there are over 20,000 parts in a utility-scale plant. The path to nuclear safety was, to the early nuclear bureaucracy, quite simple: Analyze, inspect, and model the relationship of every single one of them.


“Safe Enough?” was not written as a defense of the Nuclear Regulatory Commission’s regimented style. Yet as an outsider reading about the math for the first time, it became clear to me that once the NRC chose to implement Probabilistic Risk Assessment, an intrusive bureaucracy became its destiny.

A cursory look at the math explains why. Our flaming cat needs only one path past our defenses for the fire to spread. Our flaming cat will test them all. This means that it does no good to be right, on average, about individual safety measures. Being overly optimistic about any single one of the paths to disaster is functionally equivalent to being wrong about all of them. 
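To see why, consider a minimal sketch in Python (my illustration, not the book’s; every probability below is invented):

```python
# A system fails if ANY of its independent failure paths gets through.
# P(disaster) = 1 - product over all paths of (1 - p_i).
# Every probability below is invented, purely for illustration.

def p_disaster(path_probs):
    """Chance that at least one failure path gets past the defenses."""
    survive = 1.0
    for p in path_probs:
        survive *= (1.0 - p)
    return 1.0 - survive

# 1,000 paths, each believed to be a one-in-a-million annual risk:
honest = [1e-6] * 1000
print(p_disaster(honest))       # ~0.001

# The same plant, but we were optimistic about one single path
# that is really one-in-a-thousand:
optimistic = [1e-6] * 999 + [1e-3]
print(p_disaster(optimistic))   # ~0.002 -- one bad estimate doubles the total risk
```

One mis-estimated path contributes as much risk as the other 999 combined; the only defense is to be right about every one of them.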

Asteroid deaths are rather easy to estimate – calculate the number of rocks falling from space, the density of people on the ground, and the average blast zone. Three parameters, done. It’s straightforward to explain.
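A back-of-envelope version might look like the sketch below (all three parameter values are placeholders of my own, purely to show the shape of the calculation):

```python
# Expected meteor deaths per year: rate x lethal area x population density.
# All three numbers are illustrative assumptions, not measured values.

impacts_per_year = 0.1    # assumed: dangerous rocks reaching the ground per year
lethal_area_km2  = 10.0   # assumed: average lethal footprint per impact
people_per_km2   = 15.0   # assumed: average density over the Earth's surface

expected_deaths_per_year = impacts_per_year * lethal_area_km2 * people_per_km2
print(expected_deaths_per_year)  # 15.0 -- one multiplication, no feedback loops
```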

The public is wary of nuclear risk, in part, because it is not at all like this. Neutrons don’t simply bounce around like billiard balls. A neutron colliding with an unstable nucleus generates more neutrons.[1] There are feedback loops. The public may struggle to articulate to an expert exactly what is bothering them (“I think you have failed to account for exponential growth mechanisms, and thereby truncated the uncertainty values in your estimates,” says Morgan Jacobs, nurse). But lay people are familiar with cats, and familiar with flames. The public possesses a rough intuition about Probabilistic Risk Assessment math that, while technically wanting, nonetheless captures its two critical aspects: (1) Small accidents will be much more likely than big accidents, but the big ones dominate the danger. And (2) the odds of an accident cascading out of control are probably higher than we expect. 

I was eight when the Three Mile Island nuclear plant had its ‘loss of containment’ event. I was 15 when the Chernobyl nuclear power plant exploded. I understood these events in the same terms an adolescent understands anything: The adults are lying to us. To a teenager, there was no point in entertaining a defense of the industry. The entire enterprise dripped with poison.

Risk modelers did not get hung up on stories of heroes and villains. Risk modelers could see the specifics. To the engineers at the NRC, each component in the nuclear power plant was a singular object in their computer model, topologically linked to all the others through a set of gloriously tunable and testable parameters. The problem was not that society relied too much on the volatile and impenetrable math of Probabilistic Risk Assessment. It was that we did not take it seriously enough.

In a world where industry and activists fought to a standstill, Probabilistic Risk Assessment provided the only credible guiding light. Rasmussen and team first began to compile and model relevant data in the early 1970s. Over the decades the industry’s database grew, and the NRC developed an opinion on every valve, every pipe, the position of every flashing light in a plant. This angered the utilities, who could not move a button on a control panel without reams of test data and its associated paperwork. This angered activists when the refinement of models predicted safety margins could be relaxed. 

But Probabilistic Risk Assessment has no emotions. Probabilistic Risk Assessment estimated, validated, learned. Probabilistic Risk Assessment would form the barrier protecting us from catastrophe. 

Was this hubris?

Wellock’s book is big on stories yet short on raw data. But a dive into the academic literature shows that, as the industry implemented the teachings of Probabilistic Risk Assessment after Three Mile Island and (especially) Chernobyl, the rate of nuclear ‘events’ dropped by more than a factor of four.

[Figure: estimated nuclear ‘events’ per reactor per year, with axis ticks at 0.003, 0.007, and 0.015.]

A model of nuclear ‘events’, which are largely small failures that might require plant closure to replace equipment or redesign subsystems. Data points are dots connected by black lines, while the dashed lines above and below them represent uncertainty bars. The first vertical dotted line represents the failure of Three Mile Island in 1979. The second vertical dotted line represents Chernobyl in 1986; the third, Fukushima in 2011.

A combination of small failures could trigger a cascade to disaster. According to Probabilistic Risk Assessment, the rational approach is to sweat the small stuff. All of it.

Wellock’s book is at its strongest as an obsessively detailed chronicle of the transformation of nuclear plants into panopticons, with the NRC examining every detail of every part, systems diagram, user interface, and personnel training regimen. Risk was brainstormed, quantified, modeled.

Probabilistic Risk Assessment allowed regulators to break an unimaginable problem into parts that were easily visualized, communicated, and controlled. And in story after story, Wellock shows that it actually worked.

So did Probabilistic Risk Assessment deliver, and make nuclear power “Safe Enough?”


On March 16th, 2011, Japanese Prime Minister Kan Naoto learned that he would not have to evacuate Tokyo.[2]

This was five days after the Tōhoku earthquake, a slippage between tectonic plates so powerful that it moved Honshu, the main island of Japan, 2.4 meters to the east. The earthquake generated a tsunami 14 meters high, rolling over the coastline and submerging the protective sea walls of the Fukushima Daiichi nuclear plant. The water cut the plant’s electrical connection to the mainland and drowned its backup generators. Pumps responsible for passing 70 tons of water an hour to cool the reactors failed. Temperatures inside began to rise.

The next day, March 12th, reactor Unit 1 melted down. Water began to react with the zirconium metal cladding the fuel rods, forming hydrogen. Pressure increased, pushing the hot gas through microscopic cracks in the vessel walls until it encountered oxygen outside. The resulting explosion spewed radioactive contamination throughout the building, and into the surrounding air.

On March 14th reactor Unit 3 exploded. On March 15th reactor Unit 4 exploded as well. 

Unit 4 was the worry. It housed over 1500 spent fuel rods in open, water-filled pools at the top of the building, with no concrete structure surrounding them. With no active recirculating pumps, the water in these pools would heat and evaporate. When the pools dried out – or, if the building collapsed, when the water spilled out – the plant would become too radioactive to approach. Nuclear reactions would proceed uncontrolled. Radioactive cesium would be released directly into the air, and carried by winds into the surrounding population centers, possibly including Tokyo.

Before March 16th, the water levels in the pools were unknown.

In the worst case scenario, not made public until long after the disaster, the entire Tokyo Metropolitan Area – 35 million people – would have to be moved to temporary shelter. The very existence of the nation of Japan was at stake. And no one – not TEPCO, the utility that owned Fukushima, not the Prime Minister, not the Japanese military – could do anything but hope.

On March 16th, a military helicopter visually confirmed the rods were still submerged. Water, stored in the reactor above, had fortuitously cascaded downwards to refill the pool after Unit 4 exploded. The holding pool’s temperature was near boiling, but the fuel rods were safe. The unthinkable remained unthought.

Wellock is sympathetic to the notion that a full-throated embrace of Probabilistic Risk Assessment in Japan could have prevented Fukushima as well. The NRC had learned, from a second near-disaster at Toledo’s Davis-Besse plant in 2001,[3] that corrosion in the culture of an organization could be just as dangerous as corrosion in materials. This was certainly true of TEPCO – the utility had considered, but rejected, sea walls high enough to keep out even a tsunami this large. The plant managers at the time opted to avoid publicly visible upgrades, ironically because they feared that new safety measures would send the unwanted message that nuclear power was untrustworthy.

In hindsight it is clear that TEPCO performed poorly. It is less certain that it is realistic for nuclear operators and agencies to achieve perfect performance, in all countries, at all times. The Fukushima meltdown did not start with the accumulation of minor crises that Probabilistic Risk Assessment predicted would dominate failure. The Tōhoku earthquake was larger than was thought possible on the Honshu fault. Japan’s vulnerability to nuclear catastrophe may have grown through a slow accumulation of poor decisions, but the physical process was kicked off by a single, devastating event.

As a hero’s journey, Wellock’s history of Probabilistic Risk Assessment ends with disappointment. Our main character is forged in battle, its power spreads to dominate the kingdom, and then it is… drowned by a tsunami?

This is it? No triumph of nuclear safety? No happily ever after?


There is a temptation to record accidents like Fukushima as aberrations. 

The most important lesson of Probabilistic Risk Assessment, at least as applied to nuclear power, is that outliers like Fukushima are not simply one-off events that can be explained away as special circumstances. Outliers are, in many ways, the only events that matter. 

This is where “Safe Enough?” is weakest. Wellock faithfully reports on what people said about the math, but never allows the math to speak for itself. While engineering is the main character in this play, it exists like a Buddhist monk, in the perpetual present. It acts, or it is acted upon. It has no backstory, and it foretells no future.

To flesh out the character’s motivations, we have to place history aside, and focus like NRC’s engineers on the equations themselves. 

Accidents that play out sequentially over time usually are best modeled as a cascade.[4] Left uncorrected, cascades grow exponentially in scale, one grain of rice falling down a pile to dislodge two, which fall further to dislodge four, then eight, then sixteen, until the entire pile collapses. If we were to run experiments on rice piles as our model of cascades, we’d find there is no ‘average’ collapse, a fact that is true both mathematically and metaphorically. Most events are small, insignificant. Then, without warning, a single occurrence dwarfs anything else experienced, with the number of fallen rice grains capped only by the size of the pile itself.
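Here is a minimal simulation of that claim (my construction, not the book’s): a critical branching cascade in which each falling grain dislodges, on average, exactly one more.

```python
import numpy as np

rng = np.random.default_rng(0)

def cascade_size(branch_mean=1.0, cap=100_000):
    """One avalanche: each falling grain dislodges a Poisson-distributed
    number of new grains. branch_mean ~= 1.0 is the critical regime;
    cap plays the role of the finite size of the pile."""
    active, total = 1, 1
    while active and total < cap:
        active = rng.poisson(branch_mean * active)  # grains dislodged this step
        total += active
    return min(total, cap)

sizes = sorted(cascade_size() for _ in range(10_000))
print("median avalanche:", sizes[len(sizes) // 2])       # a handful of grains
print("99th percentile: ", sizes[int(len(sizes) * 0.99)])
print("largest:         ", sizes[-1])                     # thousands of times the median
```

The median run is a couple of grains; the largest is limited only by the cap. The sample mean never settles down, because each new ‘largest ever’ event rewrites it – which is exactly why averaging past accidents tells you so little about the next one.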

Earthquakes are cascades: The 1960 Valdivia earthquake off the coast of Chile was not simply big, it released a quarter of the combined energy of every earthquake ever recorded. Forest fires are cascades: The Camp Fire in California in 2018 destroyed as many structures as the next seven largest California fires combined.

Nuclear events are really two cascades in one. The first cascade is a loss of mechanical control, with damage largely limited to the physical plant itself. Left to continue, these failures trigger a second cascade, ‘loss of containment’, the release of radiation to the broader world. Scientists have christened these sorts of linked cascades with the name ‘dragon kings’, befitting their immense power. Fukushima and Chernobyl were not simply the most extreme nuclear events on record, they were hundreds of times more costly than the next largest examples. It is not simply difficult to estimate the exact size of a particular nuclear event. It is difficult to estimate its order of magnitude.

We’d like to take comfort in the facts we have measured: Even counting Chernobyl and Fukushima, the economic and physical damages attributed to nuclear accidents have proven historically small. Nuclear advocates correctly point out that the solar and wind industries have caused more deaths than nuclear. (Exposure to radiation creates a probability of death; a tumble from sixteen stories creates certainty.) The total cost to clean up Chernobyl and Fukushima may exceed a trillion dollars, but even this ‘tax’ would add only a penny or two per kWh across all the energy the industry has generated in its history. The health and environmental damage from coal is easily ten times this.
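The arithmetic behind that penny-or-two figure is worth making explicit (the cumulative generation number below is my own rough, order-of-magnitude assumption, not a figure from the book):

```python
# Spreading ~$1 trillion of cleanup costs over everything commercial
# nuclear power has ever generated.
# Assumption (mine): cumulative world nuclear output ~100,000 TWh to date.

cleanup_cost_usd = 1e12               # ~$1 trillion for Chernobyl + Fukushima
cumulative_kwh   = 100_000 * 1e9      # 100,000 TWh expressed in kWh

print(f"${cleanup_cost_usd / cumulative_kwh:.3f} per kWh")  # $0.010 -- about a penny
```

Run the same division with a $10 trillion disaster and the ‘tax’ jumps to ten cents per kWh – no longer a rounding error.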

Still. The advocates who intone solemnly on the importance of analyzing nuclear energy in terms of dispassionate numbers, as above, use the wrong models. To estimate the potential impact of cascades, we cannot simply average what has been. Our models have to consider the total damage possible – the number of rice grains in our pile, the energy of the atoms in our nuclear fuel. 

In 2011 Japan experienced an immense amount of bad luck, punctuated by a single bit of good: the spent fuel rods of Fukushima’s Unit 4 stayed immersed. The avalanche of disaster stopped. Tokyo was spared.

Is it right to ignore the cost of the evacuation of Tokyo, merely because an unplanned flow of water saved us? What if we assume the maximum cost of a nuclear event is not the $1 trillion for the Fukushima we lived, but $10 trillion for the Fukushima we escaped?[5] Is nuclear still safe enough then? Five decades of development of Probabilistic Risk Assessment have answered innumerable small questions about nuclear energy, but have failed to address the one question we care most about.

In the end, “Safe enough?” is simply not a proper question to ask of a cascade. There is no conspiracy of industry or activists manipulating the public and hiding the truth. If Wellock’s readers leave the book unsatisfied, that is not entirely the fault of the writer. It’s the nature of the math. 

To take Probabilistic Risk Assessment seriously requires that we think beyond intuition and experience, and place our faith in an intricate web of calculations and simulations. That we celebrate meticulousness over freedom and invention. That we recognize that while our vigilance will protect us from some catastrophes, it will never shield us entirely.

In 2019 three executives of TEPCO – their chairman and the two heads of their nuclear division – were found not guilty in criminal court of responsibility for the Fukushima disaster that occurred on their watch. In a victory for the nuclear industry, the presiding judge, Kenichi Nagafuchi, wrote without irony, “It would be impossible to operate a nuclear plant if operators are obliged to predict every possibility about a tsunami and take necessary measures.”

Despite all the benefits of Probabilistic Risk Assessment, the judge’s words were not wrong. “Safe enough” remains forever the illusion we live with, until the moment we don’t.



[1] Fun fact: There are about 40,000 generations of neutrons every second. This is something the public is dimly aware of because, well, weapons. The known speed of these feedback loops is probably a source of some of the public’s hesitation around nuclear energy – one of the public’s Bayesian priors, if you like to frame it in those terms. But it’s not a topic I’m going to dig into here.


[2] For this summary I want to cite three sources that I found particularly useful. First, “Fukushima in review: A complex disaster, a disastrous response,” published in the Bulletin of the Atomic Scientists. Second, “The Fukushima Nuclear Disaster and the DPJ: Leadership, Structures, and Information Challenges During the Crisis,” published in Japanese Political Economy. Third, the official report of the Fukushima Nuclear Accident Independent Investigation Commission. Wikipedia’s summary is of course excellent as well, but these more academic sources provide a rich supply of stories, and further understanding of the social and political context in Japan at the time of the event.

[3] There was another near-accident at Davis-Besse in 1977, recounted by the shift supervisor here. The story is another absolutely textbook example of how Probabilistic Risk Assessment would have diagnosed a problem that earlier methods missed. Had this incident been taken seriously, disaster at Three Mile Island would have been averted.

[4] Scientists will get into knock-down, drag-out fights over whether a given data set fits a mathematical form called a ‘power law’, or matches better to a ‘log-normal distribution’, where outliers are large but not as dominant. As a point of reference, events that build up one event after the next are commonly power laws, but the physics can be subtle: it turns out an avalanche of snow isn’t a great fit to a power law, while one of rice grains is. As a practical matter, we should focus on the degree to which the worst-case event outstrips the rest of the distribution. Generally speaking, nuclear meltdowns should be expected to (and do) mathematically best fit dragon kings; once an event exceeds a certain damage threshold, it undergoes a “phase change” to a new and much more significant damage mechanism. In the case of nuclear power, the potential maximum cost shifts from “things capped by the budget of a nuclear plant” to “things capped by the budget of a regional economy”.

[5] In 2007, the Institute for Radiation Protection and Nuclear Safety (IRSN) assessed disaster scenarios at the Dampierre power plant in Loiret, near Paris. The worst-case assessment came out at $5.8 trillion, triple the GDP of France itself. The Tokyo metro area has a GDP roughly twice that of the Paris metro area, so a $10 trillion estimate is not nuts, though truthfully the people of Japan might simply decide to live with the fallout rather than pay that figure.
