If you read almost any healthcare journal these days, you will find the concept of complexity cropping up more and more. The study of complex adaptive systems, also known as complexity science, is burgeoning, along with examples of its relevance to health care. There are dozens of different accounts of complexity on offer, and some of these are themselves formidably complex, so it is easy to find them off-putting. In this article, I want to propose that the fundamentals of complexity are in fact extremely simple. Indeed, I would like to suggest that complicated descriptions of complexity may fail to capture its most important qualities, and that simple ones, especially those that use metaphor and appeal to intuition, may be better ways of doing so.
Two common sayings probably sum up all of complexity theory more concisely than any other formulation. One is the expression ‘the law of unintended consequences.’ This imaginary law encapsulates what everyone already knows about complexity even without realising it. Our everyday experience is that anything we attempt to do, either at work or in our daily lives, can result in consequences we never foresaw. The reasons for this are legion, but commonly they include an incomplete prior assessment of the circumstances, the contrary wishes and actions of other individuals, random accidents, or a change in the prevailing context. These are all typical features of complex systems, from teams and hospitals to families and societies, and the processes that take place in them.
This negative expression also has a corollary in another common statement: 'the whole is more than the sum of its parts.' While this is impossible in purely arithmetical terms, it points towards the other, more creative aspect of complexity. When everything goes well – for example, when the initial assessment of a problem is thorough, everyone's wishes and actions are taken into consideration, accidents are successfully avoided, and changes in the surrounding context are recognised when they occur – there is a higher chance that the outcome of any attempt to bring about improvement will be better, more productive, and more surprising, than anyone expected. One might argue that anyone who has an understanding of these two sayings in their bones, and generally acts on them, has a better practical understanding of complexity than many people who are confidently able to frame such processes in technical terms like linear thinking versus feedback loops, or reductionism versus emergence, but who seem to lack an intuitive understanding of the idea.
If you want to underpin any instinctive understanding of complexity with some basic knowledge of the field, it is worth knowing that it originated in the 1950s with an Austrian biologist called Ludwig von Bertalanffy,1 although he called it something different (‘general system theory’). You will impress people if you cite him, because his name has largely been forgotten as a result of the explosion of models related to other fields, especially those addressing complexity in human organisations like businesses and public services.2 3 Complexity thinking has now been applied to subjects as varied as computation and artificial intelligence, economics and neuroscience. A series of classic papers in the BMJ by Trish Greenhalgh and others introduced doctors to the field.4–7 Since then, there have been countless books and articles on complexity and healthcare. Two reviews are particularly worth reading. One is by the Health Foundation,8 and the other a ‘white paper’ by Braithwaite and a team at the Australian Institute of Health Innovation.9 Recently, Greenhalgh and Papoutsi have also offered an updated and accessible account of how to apply complexity in the context of health services research.10
What all these works try to do is to address the question: in a world where prediction can never be certain, are there nevertheless some general rules that can reduce uncertainty, so that our actions stand a better chance of achieving their intended results? In essence, these models are attempts to find a reasonable mid-point between the naivety of conventional 'straight-line' thinking on the one hand ('If I do X, then it will inevitably result in Y') and fatalism on the other ('If something can go wrong, it will.') The remedies proposed by these authors are all remarkably similar. They generally entail relatively small and experimental interventions, with involvement of all stakeholders, followed by close monitoring of consequences and a willingness to respond to these iteratively and flexibly. Although there is never any certainty in these matters, small changes can result in massive effects, while large-scale programmes may have little or no impact.
Selected factors to promote complexity thinking (adapted from Braithwaite et al9)
Resist the temptation to focus on an isolated problem. Instead, look for interconnections within the system.
Remember that you can’t actually see very far ahead. Things happen when you least expect them.
Look for patterns in the behaviour of a system, not just at events.
Be careful when attributing cause and effect. It’s rarely that simple.
Generate new ideas beyond your own resources. Ask people with a different perspective for an opinion, including from outside your group.
Keep in mind that the system is dynamic, and it doesn't necessarily respond to intended change as predicted.
If you have sufficient resources, draw up a model of the system surrounding a problem.
Use any tools at your disposal including role plays and simulation.
What is striking about these suggestions and similar ones is that they make sense at every level of organisational activity. For example, in healthcare they apply equally to policy development, education, research, management or the direct clinical care of individual patients. This kind of multiple mirroring, where parts of a system all respond to the same kind of approach, and each level reflects the others, is also a defining characteristic of complexity. If there is one useful piece of advice missing from this and every other list of how to 'do' complexity, it is perhaps advice not to follow any of these rules or sets of rules too closely. If any complex system is true to itself, it will foil any attempt to understand or master it entirely. Like riding a bike, if you think too hard about complexity, you may fall over.
Light bulb moments
My own experience as an educator is that a genuinely rich understanding of complexity, as opposed to a merely intellectual one, often occurs as a result of a 'light bulb' moment. This might be expressed with an exclamation like 'I've just realised how everything in the world is connected to everything else!' It might even arise from a sudden realisation that a personal, religious or spiritual view of life is entirely compatible with a scientific one. Examples of complexity that sometimes induce such revelations include Darwin's theory of evolution, when students suddenly grasp how innumerable genetic variations over vast periods of time have interacted with the changing environment, along with members of the same species and of different ones, to culminate in the astounding world of biodiversity we see around us.
Another well-known interactive system that can sometimes help people grasp complexity in an instant is the Krebs cycle, where very large numbers of molecules interact with each other to generate cellular energy. If you imagine this as a dynamic system, taking place in three dimensions in every cell, with each of these embedded in turn within tissues, organs, the whole organism and surrounding habitat, you can begin to apprehend quite how complex any system can be. You can then start to conceptualise the way human groups behave as exactly such an endless dance of mutually responsive interactions, in which everyone including yourself plays a part.