Over the past decade, the language of complexity has become increasingly common in occupational safety and health. Terms like emergence, drift, non-linearity, and work as done now appear in conference keynotes, consultancy offerings, and professional debates with striking regularity.
In principle, this isn’t a problem. Few experienced practitioners would argue that modern organisations are simple, or that human behaviour can always be reduced to simple cause-and-effect chains.
My concern is more specific. It isn’t about whether work is complex. It’s about whether complexity theory, as currently promoted in safety, demonstrably prevents harm before the event. That distinction matters.

Explanation versus prevention
Much of what is presented under the banner of complexity is good at explaining accidents after they’ve happened. It offers rich narratives, systemic interpretations, and sophisticated accounts of how failures unfold over time. Explanation has value. But in safety, explanation is never the primary test. The primary test is prevention.
When someone is injured, killed, or made ill at work, the question isn’t whether we can describe the system in elegant terms afterwards. The question is whether our methods, decisions, and controls reduced the likelihood of that harm occurring in the first place.
In safety-critical domains, theories must earn their place by improving outcomes – not by expanding vocabularies.
The displacement problem
One of my central concerns is that the language of complexity can displace rather than enhance practical safety work.
Once a problem is labelled ‘complex’, there is a tendency for specifics to dissolve. Clear failures become ‘emergent properties’. Poor supervision becomes ‘systemic pressure’. Missing controls become ‘non-linear interactions’. Accountability becomes diffuse. Responsibility spreads thinly across the system until it all but disappears.
This is not an argument for simplistic blame or crude root cause thinking. It is an argument for clarity.

Real harm is rarely caused by abstraction. It’s caused by very ordinary things done badly or not at all: unguarded machinery, inadequate isolation, poorly designed work processes, uncontrolled change, normalised deviation, and decisions taken without understanding their consequences.
None of these problems require complexity theory to be identified or addressed.
The circularity problem
Complexity theory, as applied in safety, routinely harvests data from human behaviour while simultaneously asserting that human behaviour is unreliable, biased, adaptive, and opaque. It then treats the outputs of that very behaviour as privileged insight. That is a circularity problem, not a sophistication.
If humans are as fallible, context-bound, and post-hoc rationalising as the theorists insist, then data derived from interviews, workshops, retrospectives, and narratives inherits those same weaknesses. It does not magically transcend them because it is placed inside a complexity framework.
Safety is not an interpretive exercise. It is a protective one. Understanding how a system behaves doesn’t, by itself, make that system safer. Observation, description, and sense-making only have value in safety when they lead to earlier intervention, stronger controls, or better decisions before harm occurs.
A theory that helps us explain failure after the event but cannot reliably influence design, supervision, or control before the event may be intellectually interesting, but it is operationally incomplete.

‘Safety is not an intellectual exercise to keep us in work. It is a matter of life and death. It is the sum of our contributions to safety management that determines whether the people we work with live or die.’
Sir Brian Appleton, a technical advisor to the Piper Alpha inquiry
A pattern in debate
I’ve spent time in recent months engaging publicly with advocates of complexity-based approaches in safety. The exchanges have been instructive not for what they revealed about complexity theory, but for what they revealed about how it is defended. A consistent pattern emerges. When asked directly what complexity theory has prevented, responses tend to follow a predictable sequence:
First, redefinition. The question is reframed. What counts as ‘complexity theory’ shifts. Simulation tools, design optimisation, or organisational learning are offered as examples, none of which are distinctively complexity-based, and none of which demonstrate pre-event prevention.
Second, abstraction. Rather than naming a prevented incident, the response moves to systems language: emergence, non-linearity, adaptive capacity. The vocabulary expands; the specificity contracts.
Third, credentialism. References to publications, frameworks, and academic affiliations appear. The argument shifts from ‘here is what it prevented’ to ‘here is why you should trust my authority’.
Fourth, and finally, personalisation. When none of the above succeeds, the questioner is told they ‘lack curiosity’, ‘don’t understand the science’, or are ‘trapped in linear thinking’. The question isn’t answered; the questioner is dismissed.
The question remains unchanged. The evasion tells its own story.
Curiosity as deflection
A word on ‘curiosity’, since it appears frequently in these exchanges. When someone runs out of answers, telling the questioner to ‘be curious’ is rarely a genuine call for exploration. It is usually a soft-edged way of saying: ‘Why haven’t you shifted to my view?’
Curiosity is not the same as credulity. Asking whether a theory has prevented harm is not ignorance; it is due diligence. Demanding evidence is not closed-mindedness; it is professional responsibility.
In my own engagements, I have discussed FRAM, AI-led drug discovery, systems failures at Grenfell and Macondo, and the epistemological claims of leading complexity thinkers. That is curiosity. What I haven’t done is concede abstraction in place of clarity, and that is not a failure of openness. That is discipline.
Complexity has become the language of those who observe systems – rather than those who make them work.
Phil Douglas
Where I stand
I am not opposed to learning, nor to insight from other disciplines. But in occupational safety and health, ideas must serve prevention, not replace it.
I have spent thirty-five years preventing real harm, in real industries, with real people. I have managed men in a deep coal mine. I have seen what works. I have seen what fails. And I have seen what gets people killed. Until complexity theory can show, clearly and repeatedly, how it prevents harm before the event, it should be treated as an interpretive lens not a substitute for competent safety management.
In safety, outcomes matter more than elegance. Lives depend on that distinction.
And when I ask complexity advocates what they have prevented, they tell me I lack curiosity. That isn’t an answer. That’s evasion.