Storm Signalling
When early warnings are seen, reported, and discussed—but never allowed to steer.
Fukushima Daiichi sits on Japan's northeast coast — six reactors on reclaimed land between the Pacific and the towns of Ōkuma and Futaba. When construction began in 1967, TEPCO cut the natural bluff from 35 metres down to 10 and built a 5.7-metre seawall. Behind that wall: enough radioactive material to render a prefecture uninhabitable.
In June 2009, geologist Yukinobu Okamura sat on a government panel reviewing the country's nuclear plants. He had a question for TEPCO. Coastal geologists had confirmed that in 869 AD, a massive earthquake had sent a tsunami kilometres inland across the Sendai plain — the same coastline. The Jogan earthquake was not ancient legend. It was data. Okamura asked why it wasn't in Fukushima Daiichi's safety guidelines. The TEPCO official pointed to a more recent, smaller earthquake. Okamura pressed. The official acknowledged it was "a historic earthquake" and moved on. Okamura later said he regretted not pushing harder.
On March 11, 2011, a 15-metre tsunami overtopped the plant. Three reactors melted down. Over 150,000 people were evacuated from a contamination zone that still exists today. In the aftermath, Okamura learned what had been behind the official's composure. In 2008 — a year before that panel — TEPCO's own engineers had modelled a Jogan-scale event and estimated run-up of 15.7 metres. The number that would have answered his question already existed inside the organisation he was questioning.
Seventy miles up the coast, the Onagawa plant, closer to the epicentre, survived. Its chief engineer had read the same history, and built the wall to match.
What is it?
Early warnings travel through every layer of the organisation — seen, reported, discussed — yet never translate into course correction. Front-line teams sense change early: operational anomalies, customer behaviour shifting, technical debt accumulating, regulatory unease, market signals that don't add up. The warnings are there. They are often articulated clearly and reported through proper channels. But as they move upward through layers of management, they are translated at each handoff. Uncertainty becomes nuance. Urgency becomes context. By the time concern reaches people with authority to act, it reads like balanced reporting — noted, filed, absorbed into the background.
Why does it matter?
Because it produces a specific outcome: organisations that are surprised by events they had already been told about. Engineers flag architectural risk months before the outage. Sales teams report a shift in buyer behaviour two quarters before the pipeline collapses. A compliance officer raises a concern that lands in a risk register and never leaves it. After the failure, the evidence is always there. The post-mortem finds a trail of signals leading directly to the disaster — and a matching trail of acknowledgements that led to nothing.
It feels like diligence. Risks are discussed. Concerns are logged. Escalation pathways exist and are used — and none of it changes anything. From inside, the system looks like it is working — because the signals are moving. What nobody notices is that they are moving without force. The organisation processes warnings the way a river processes a stone: it flows around them and continues in the same direction.
What causes it?
The hierarchy tax on urgency. Every layer a signal passes through extracts a cost. The person who sees the problem feels it. Their manager contextualises it. Their manager's manager weighs it against other priorities. By the time it reaches someone who can redirect resources or reverse a commitment, it has become one concern among many — stripped of the situational detail and emotional weight that made it a warning in the first place.
The economics of raising alarms. Flagging risk is expensive for the person who does it. It creates discomfort, slows momentum, threatens commitments, and introduces personal exposure. If the warning proves wrong, the cost is borne by the person who raised it. If it proves right, the credit rarely reaches them. Over time, teams learn which signals travel safely upward — and which are better left unspoken. The system selects for reassurance.
The professional instinct to manage ambiguity. Experts hedge. Managers contextualise. Executives want certainty — which early warnings can't provide. In highly professional environments, ambiguity — the very quality that makes early signals valuable — is precisely what gets filtered out. A signal survives escalation only if it can be stated with a confidence that early warnings, by definition, cannot carry.
The commitment trap. By the time a warning reaches someone with authority to act, the organisation is often already committed: budgets approved, contracts signed, reputations invested, strategies announced. The signal arrived intact. Acting on it has simply become more disruptive than ignoring it.
How to recognise it:
Concerns are consistently "noted" but rarely change a decision.
Escalation produces reassurance instead of investigation.
Warnings get reframed as "risks to monitor" — and monitoring never triggers action.
Leadership discovers problems only once they become public or irreversible.
The post-mortem reveals that multiple people saw it coming, and said so.
What you can do:
Protect the signal's force across layers. Build escalation where the original warning — in its own words, with its own uncertainty — reaches decision-makers unedited.
Make it safe to be wrong early. If raising a concern carries career risk, you will only hear warnings people are willing to stake their reputation on — which means you will only hear them too late.
Separate the signal from the decision. People suppress warnings when they believe the warning obligates action. Create space where signals can be heard without immediately triggering a commitment.
Watch for the language of absorption. "We're aware of it." "It's on the risk register." "We're keeping an eye on it." Ask what changed as a result. If nothing changed, the signal didn't steer.
Not yet ready to talk? Look over our captains' shoulders. One insight per week you can actually use tomorrow.
No jargon, no hype, 100% bullshit-free advice.
Decisions being made without the knowledge that already exists inside your organisation? Let's build a better decision-making process.