Fukushima’s Earthquakes Show That Risk Is Inevitable

By accepting risk and planning for failure, communities are more likely to survive catastrophes.


Updated at 3:50 p.m. ET on March 16, 2022

Two hundred feet up in the foothills that surround Aneyoshi, a tiny coastal village in Japan, warnings are engraved into the rocks. Most of the messages come from 19th-century survivors of large tsunamis that terrorized people along the coast. “High dwellings are the peace and harmony of our descendants,” one inscription declares. “Remember the calamity of the great tsunamis. Do not build any homes below this point.”

But more recent residents of coastal Japan did build below that point. Homes at first, but eventually nuclear facilities, which were sited where they could be cooled by nearby ocean waters. On March 11, 2011, a massive undersea earthquake occurred east of the Oshika Peninsula. The quake, which lasted six minutes and remains the fourth-most powerful ever recorded in the world, triggered tsunami waves that reached up to 130 feet above sea level. The rushing water ultimately led to a meltdown at the Fukushima Daiichi nuclear facility, where a loss of power shut off the cooling system and hydrogen gas began to accumulate. As surprised workers tried to cool the facility manually—using water from fire trucks—the built-up gas expelled radioactive material into the atmosphere and groundwater. Part of Fukushima prefecture is still uninhabitable.


After the nuclear disasters at Three Mile Island in 1979 and Chernobyl in 1986, investigators blamed a combination of mechanical failure and operator error. The conventional narrative of Fukushima Daiichi’s demise has been somewhat more forgiving of the people running the plant that day. The earthquake and tsunami could not be blamed on them. The most fundamental error, news stories intimated, was putting a nuclear reactor in a location so obviously vulnerable to natural disaster. Previous generations had literally carved warnings in stone: Never again. Because TEPCO, the plant’s operator, had ignored them and built in a risky spot, tragedy was all but inevitable.

This made a tidy story, except for one thing: The Onagawa nuclear-power plant also sits below the rock warnings, but it withstood the earthquake and tsunami. Onagawa was about 30 miles closer to the epicenter of the earthquake than the Daiichi facility was. It experienced the strongest ground shaking of any of the nuclear facilities in the area—or indeed of any nuclear facility in recorded history. Operated by the Tohoku Electric Power Company, the Onagawa reactor did not melt down. It suffered no serious damage. The differing fates of Fukushima Daiichi and Onagawa cannot be explained by the movement of the Earth, because neither plant heeded the century-old warnings.

Never again is a common refrain after traumatic disasters, but it’s also a hard promise to keep. Memories fade over time. But more important, societies change, and so do their risk calculations. Accepting risk is not itself a form of negligence; Japan needed domestic energy sources to power its economy in the decades after World War II, and nuclear-power plants near the coast became essential to the country’s growth. The problem at Fukushima—unlike at Onagawa—was that its designers and managers did not acknowledge or make provision for the risk they had undertaken, so the plant was unprepared when disaster struck.

This was a dangerous omission. Events that threaten human life and safety do not strike at random, nor are they particularly rare. Indeed, an earthquake off the coast of Fukushima today left millions of people in Japan without power—and put residents on alert once again for potential tsunamis. All modern societies face environmental hazards; rely on complex, and in some cases dangerous, technologies; and link up to global trade and transportation networks that move pathogens as well as people.

Onagawa’s strength was simple: The nuclear plant’s operators understood that failure was possible, perhaps even inevitable, so they committed themselves to failing safely. Like Fukushima Daiichi, Onagawa was located near the coast. But its designers had studied past tsunamis and built at an elevation several meters higher than that of its ill-fated counterpart. Long before the earthquake, Tohoku Electric had also required extensive emergency training and scenario planning, including for a massive tsunami, so Onagawa’s employees were ready to shut the plant down. Departing from the hierarchical norms of Japanese corporate culture, headquarters had delegated authority to the plant managers to react in the moment. Simply put, Onagawa’s employees had their act together. Fukushima Daiichi’s owner had done far less to create a safety culture, and during the meltdown plant staff needed leadership’s approval for every crisis decision. That doesn’t work in real time.

For most of my career, I have studied disasters, managed government responses to them, and advised elected leaders and business executives about how to plan for them. I have come to think that the very word disaster wrongly excuses us from the obligation to plan for failure. The word’s original meaning, from Middle French and Old Italian, comes from the prefix dis-, signifying a negative force, and astro, for star—implying that disruptive events occur only because the stars were aligned against us, not because of anything we did or didn’t do. The word catastrophe, derived from a Greek word meaning “sudden turn,” has a similar connotation: It’s just bad luck, befalling a passive population with no capacity to manage destruction that nobody could have foreseen.

But we should not be surprised by natural catastrophes, viral variants, sneak attacks, or other tragedies. The devil never sleeps, I argue in my forthcoming book. The good news is that the means of minimizing the harm from sudden shocks are always the same: making sure the risk is communicated and widely understood; preparing individuals to respond to a range of scenarios; building redundancies into safety systems, so that no single one becomes the last line of defense; testing those systems; challenging the fallacy that a near miss implies immunity from a future calamity; and making adjustments after past mistakes.

People in my field describe the event precipitating a crisis—an earthquake, a hurricane, the emergence of a new virus—as the “boom,” and we divide time and human activity into two phases: “left of boom” and “right of boom.” The former includes the steps we take to prevent the boom from happening in the first place; the latter is what we do in the moments after, and then in the weeks and months following, to minimize the harm. But we would be better prepared if we stopped viewing disasters as surprise moments in time. A society that studiously prepares to fail safer—that prepares for what will happen right of boom—is stronger than one that focuses most of its efforts on avoiding failure itself.

Fukushima Daiichi’s operator, I should note, didn’t do enough of either. After the 1945 bombings of Hiroshima and Nagasaki, many in Japan were deeply apprehensive about using nuclear energy to address the country’s needs. TEPCO, Japan’s leading nuclear-power operator, managed to convince itself that it had essentially eliminated any risk. According to Akihisa Shiozaki, a lawyer who organized an independent investigation of the Fukushima disaster, the country was sold a myth: “the absolute safeness” of nuclear power. A later government report blamed an industry mindset that ignored the possibility that even in a nuclear disaster, preparedness could go far in “limiting the consequential damage.” Instead, local opposition had to be managed as more nuclear reactors were built, which in many cases meant not talking about the potential for a worst-case scenario.

Not planning for the right of boom makes sudden shocks of all kinds—including the extreme weather events exacerbated by climate change—far more deadly than they might otherwise be. In 2017, Hurricane Maria struck Puerto Rico. To this day, we do not know how many people died there. One reason death estimates varied so widely is that most of the deaths were not a direct result of the hurricane itself but downstream consequences of power outages. The losses cascaded: Without electricity, people lost access to water, food, and medicine, leaving them vulnerable. Damage to power infrastructure is a predictable outcome of a hurricane. A faster restoration of the grid—in the absence of preparations that would have made it more resistant to failure in the first place—would have saved many lives.

This logic does not apply only to natural disasters. Before the wars in Afghanistan and Iraq, the military had not adapted its battlefield-medicine rules for an age of urban warfare and improvised explosive devices. Existing rules prioritized carrying injured soldiers away from enemy lines to a medic tent or back to base. But those enemy lines were not always clearly defined in America’s post-9/11 conflicts, and many of the wounded bled to death before they could receive proper care. The Pentagon made massive investments in armor design and in mine-resistant, ambush-protected light tactical vehicles to protect U.S. troops from harm. But those improvements were not enough. Right-of-boom protocols needed to evolve too, so the military began training field soldiers, most of whom were not medics, to use tourniquets right away to save the lives of team members wounded in IED attacks. These efforts—which have since spawned a civilian public-awareness campaign called “Stop the Bleed”—were a way to minimize harm even after a life-threatening event had already occurred.

Planning to fail safely is different from trying to eliminate all risk—which is usually impossible. Two years into a global pandemic, for example, even stringent lockdowns are unlikely to prevent all transmission of the coronavirus. When the CDC recently decided to use rates of death and hospitalization, rather than overall infection, as its primary metrics for the severity of the problem, the agency implicitly chose to minimize the negative consequences of the virus rather than try to suppress it altogether. Furthermore, attempts to banish one kind of risk may make others worse. After the Fukushima disaster, Germany began phasing out nuclear power altogether—a decision that has left it more dependent on fossil fuels from Russia.

Deliberately accepting some risks, and then being prepared when disaster strikes, will serve human societies better than pretending we can achieve perfect safety. We cannot prevent an earthquake or a tsunami, but we have control over how much death and destruction it causes.

What happened at Fukushima wasn’t just bad fortune; Onagawa didn’t get lucky. Most people outside Japan have never heard of Onagawa because it was ready to fail under the same conditions that proved cataclysmic at Fukushima. And in disaster management, anonymity—not fame—is a sign of success.


This article is adapted from Kayyem’s forthcoming book, The Devil Never Sleeps: Learning to Live in an Age of Disasters.


Juliette Kayyem is a contributing writer at The Atlantic, the faculty chair of the homeland-security program at Harvard’s Kennedy School of Government, and the author of The Devil Never Sleeps: Learning to Live in an Age of Disasters.