Systems Innovation 2019 will be a conference on the topics of Complexity Thinking & Systems Change, taking place in Barcelona at the end of March 2019. This is an open forum for organizations and individuals applying complexity and systems thinking in various areas of the economy, society, technology, or the environment towards enabling systems innovation and change. It is for anyone who feels that complexity or systems thinking is central to what they do and wishes to engage with peer organizations and individuals in open discussion and ideas exchange.
The event will bring together 10 to 20 organizations and some 100 individual participants for an active weekend of presentations, panel discussions, and brainstorming sessions on applying the ideas of complexity and systems thinking. The forum will be an opportunity to meet and exchange ideas and perspectives with others in an open space, to foster collaboration and awareness across the community, to hear from our speakers, and to brainstorm on specific issues of interest to participants.
Basic Info
What
This will be a weekend conference of presentations, workshops, and open discussion sessions.
Who
This is for those interested in applying complexity thinking to tackling real-world issues and enabling systems change.
Where
The event will take place at Spaces co-working, located in the @22 innovation district in the center of Barcelona.
When
Two full-day events set for the weekend of the 30th–31st of March.
Activities
Bring your questions, imagination, and enthusiasm for a topic of your interest, because this will not be a passive weekend break but a user-generated event. We are mindful that flying people around costs the planet, as well as time, energy, and money, so we expect everyone to be actively engaged in creating an event that has real value and outcomes that move the discussion on systems innovation forward. We have designed the conference to facilitate active engagement from all parties.
This piece (PDF: http://jackmartinleith.com/documents/creating-greatness-in-the-realm-beyond-systems-thinking.pdf) builds on the author's personal biography (a useful reminder of what co-curator here, David Ing, says: there's not so much 'systems thinking' as there are constellations of thinking, learning, reading, personality, and influence around people). I draw particular attention to this because I usually give pretty short shrift to 'post-systemic' stuff, since it usually assumes that 'systems thinkers' are working in deterministic/mechanistic terms (and those who claim this often do so themselves). And occasionally it points to the way 'systems in the mind' can mislead us about 'systems in the world'.
However, this piece gives a nice perspective on ‘post-systemic thinking’ which opens up some interesting possibilities.
Design is a deep topic. One could say it’s the deepest. It’s about making decisions that affect choices in the world. When you design a chair, what you’ve really done is make a set of choices about how people using it will sit and what their experience will be. People sitting on chairs have choices too. They can defy your expectations by turning the chair on its side and sitting on it that way. That makes them designers as well. Design is a wonderful interactive game.
Below this game, there is a deeper game. It’s the game of principles that help us create stable processes that run in the world, and it’s rather sublime.
One of my favorite examples of this is Postel’s Law. Sometimes it is called ‘The Robustness Principle’, but I think that Jon Postel deserves to have his name attached – it was a tremendous insight. Postel was an instrumental figure in the early days of the internet. Aside from his Law he is probably best known for the DNS Root Authority Test – a courageous act that was equally sublime.
History and biography aside, Postel’s law was a simple observation that he made about networking. In an early internet RFC he said “an implementation should be conservative in its sending behavior, and liberal in its receiving behavior.” This has been paraphrased over the years as “Be liberal in what you accept, and conservative in what you send” and for people who are mathematically inclined: “be contravariant in your inputs and covariant in your outputs.”
What does all of this mean?
Well, the thing that Postel was trying to do was articulate a principle that would help the internet grow. If every node were unforgiving with respect to input errors, the internet would have been brittle. Having a little bit of tolerance when you receive inputs, being able to carry on in the face of minor formatting errors, makes the overall system more robust.
The thing that I don’t think many people appreciate about Postel’s Law is its universality. In software, it’s not just about creating components for a network. You see Postel applied in command line utilities in Unix. Many of them are tolerant at input and very regular at output. This allows us to string them together with reasonable assurance that the composition will work. You can also see it at work in the HTML rendering engines of browsers. Browsers have historically been remarkably tolerant of ill-formed HTML and that allowed the web to grow tremendously fast.
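To make the idea concrete, here is a minimal sketch in Python. The header names and helper functions are my own illustration, not from any particular protocol: a parser that is liberal about messy variations of a "Key: value" line, paired with an emitter that is conservative, always producing one canonical spelling.

```python
def parse_header(line):
    """Liberal in what we accept: tolerate extra whitespace, mixed case,
    and an optional trailing semicolon in a 'Key: value' header line."""
    key, _, value = line.partition(":")
    return key.strip().lower(), value.strip().rstrip(";").strip()

def emit_header(key, value):
    """Conservative in what we send: one canonical spelling, always."""
    return f"{key.lower()}: {value}"

# Messy inputs from three different senders all normalize the same way.
for raw in ["Content-Type:text/html",
            "  content-type :  text/html ;",
            "CONTENT-TYPE: text/html"]:
    key, value = parse_header(raw)
    print(emit_header(key, value))  # each prints: content-type: text/html
```

Because every component's output is canonical, chaining such components composes cleanly, which is exactly the property the Unix pipeline examples rely on.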
We can see Postel in the physical world too. Every time you see a PVC pipe with a flanged end, you’re seeing something that serves as a decent visual metaphor for Postel’s Law. Those pipes fit well together because one end is more accepting. In fact, it’s almost as if they have a magnetic attraction toward attachment. They are certainly built for it. It may seem weird to use this sort of physical analogy – pipes have nothing to do with errors – but still there is this notion of accepting more on one end than the other. It aids composition.
Here’s another example that shows the dynamics of Postel. Imagine being in a room full of language translators. Each translator can either understand one language and speak another, or understand multiple languages and speak one. It’s likely easier to get from an arbitrary language A to an arbitrary language B using a chain of translators of the latter type than the former.
The galvanizing force behind Postel is the narrowing that happens between the input and the output. In physical terms it is a lessening in size but in information terms we can see it as an analog to lessening of bad variation. The world of information is chaotic. Errors creep in, but if every component takes the opportunity to make things a little simpler or cleaner, it acts as a counter-balance to those errors. In the aggregate, systems become more robust.
Depending upon how much abstraction and analogy you care for, this article is a bit of a wild ride. I’ve drawn a comparison between an information systems principle and the physical world. Let’s go further and extend it to social systems.
Societies often have the notion of ‘good character.’ We can attempt all sorts of definitions but at its core, isn’t good character just having tolerance for the foibles of others and being a person people can count on? Accepting wider variation at input and producing less variation at output? In systems terms that puts more work on the people who have that quality – they have to have enough control to avoid ‘going off’ on people when others ‘go off’ on them, but they get the benefit of being someone people want to connect with. I argue that those same dynamics occur in physical systems and software systems that have the Postel property.
Postel’s Law is not without controversy. If you search you’ll find a lot of criticism. The primary argument is that when components accept loose specifications they invite bad practice. It’s almost like the concept of ‘moral hazard’ in economics. The classic example of this is, again, web browsers. The number of odd cases browsers dealt with in the growth years of the web was staggering, and you could get just about anything that was HTML-like to render. As a result, browser rendering code became some of the most complex, special-case-ridden code on the planet. But, in exchange, the web grew swiftly. Over time, browsers have started to tighten the reins. That sort of thing can happen slowly after a growth period. But it doesn’t always. The question is whether accelerated growth is worth it. In manufacturing, we’ve achieved incredible efficiencies by reducing tolerances. That is clearly a different strategy, but not one that seems able to achieve the same kind of infectious growth as Postel’s.
So far I’ve been talking about Postel’s Law as if it were a conscious strategy, but it works even if it isn’t. I don’t think browser developers were thinking about input tolerance through the lens of Postel, although they knew that it was a competitive advantage.
I suspect that Postel structuring is prevalent because it is stable under selection. Think about it in these terms: if you have N components with the Postel property and one without, meaning that it is very strict in its inputs, when people are given a choice, they’ll be likely to reject the stricter input component and select a more tolerant one. It’s just less of a headache. To me this seems like an obvious point, but no doubt someone could run simulations to learn more.
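A simulation of that selection pressure needn't be elaborate. Here is a toy sketch; the noise rate, trial count, and random seed are arbitrary assumptions of mine, chosen only to make the point visible.

```python
import random

random.seed(0)  # reproducible toy run

def success_rate(noise_rate, tolerant, n=10_000):
    """Fraction of inputs a component handles successfully.
    A tolerant component accepts inputs even with minor formatting
    noise; a strict one rejects any noisy input outright."""
    ok = sum(1 for _ in range(n)
             if tolerant or random.random() >= noise_rate)
    return ok / n

strict = success_rate(0.2, tolerant=False)
tolerant = success_rate(0.2, tolerant=True)

# Users picking whichever component is less of a headache will
# converge on the tolerant one.
assert tolerant > strict
```

Under any nonzero noise rate the tolerant component wins every comparison, which is the "stable under selection" intuition in miniature.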
In any case, I think that Postel’s Law is rich. It’s like many other systems principles – once you internalize it you can use it in design. It becomes another tool in the toolbox.
The Indianapolis Systems Thinking Forum is a unique opportunity to come together to learn and share experiences using systems thinking in districts, schools, classrooms, businesses, communities, and more. Participants will have the opportunity to grow their systems thinking capacity and make valuable connections to many other systems in the region with the shared goal of fostering an environment that promotes success for young people and adults.
Forum Information
Dates:
One-Day Workshop Option, March 29
Two-Day Option (workshop and collaborative session), March 29 & 30
Lunch will be provided both days.
Venue: Butler University, Indianapolis, Indiana
Itinerary:
March 29: Fundamentals of Systems Thinking Workshop
March 30: Systems Thinking: Reflection and Collaborative Planning
Would you sell your soul on eBay? Right now, of course, you can’t. But in some quarters it is taken for granted that within a generation, human beings—including you, if you can hang on for another 30 years or so—will have an alternative to death: being a ghost in a machine. You’ll be able to upload your mind—your thoughts, memories, and personality—to a computer. And once you’ve reduced your consciousness to patterns of electrons, others will be able to copy it, edit it, sell it, or pirate it. It might be bundled with other electronic minds. And, of course, it could be deleted.
That’s quite a scenario, considering that at the moment, nobody really knows exactly what consciousness is. Pressed for a pithy definition, we might call it the ineffable and enigmatic inner life of the mind. But that hardly captures the whirl of thought and sensation that blossoms when you see a loved one after a long absence, hear an exquisite violin solo, or relish an incredible meal. Some of the most brilliant minds in human history have pondered consciousness, and after a few thousand years we still can’t say for sure if it is an intangible phenomenon or maybe even a kind of substance different from matter. We know it arises in the brain, but we don’t know how or where in the brain. We don’t even know if it requires specialized brain cells (or neurons) or some sort of special circuit arrangement of them.
Nevertheless, some in the singularity crowd are confident that we are within a few decades of building a computer, a simulacrum, that can experience the color red, savor the smell of a rose, feel pain and pleasure, and fall in love. It might be a robot with a “body.” Or it might just be software—a huge, ever-changing cloud of bits that inhabit an immensely complicated and elaborately constructed virtual domain.
We are among the few neuroscientists who have devoted a substantial part of their careers to studying consciousness. Our work has given us a unique perspective on what is arguably the most momentous issue in all of technology: whether consciousness will ever be artificially created.
We think it will—eventually. But perhaps not in the way that the most popular scenarios have envisioned it.
Photo: Edge City/Universal/The Kobal Collection. A Better Turing Test: Shown this frame from the cult classic Repo Man [top], a conscious machine should be able to home in on the key elements [bottom]—a man with a gun, another man with raised arms, bottles on shelves—and conclude that it depicts a liquor-store robbery.
Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality. That’s good news, because it means there’s no reason why consciousness can’t be reproduced in a machine—in theory, anyway.
In humans and animals, we know that the specific content of any conscious experience—the deep blue of an alpine sky, say, or the fragrance of jasmine redolent in the night air—is furnished by parts of the cerebral cortex, the outer layer of gray matter associated with thought, action, and other higher brain functions. If a sector of the cortex is destroyed by stroke or some other calamity, the person will no longer be conscious of whatever aspect of the world that part of the brain represents. For instance, a person whose visual cortex is partially damaged may be unable to recognize faces, even though he can still see eyes, mouths, ears, and other discrete facial features. Consciousness can be lost entirely if injuries permanently damage most of the cerebral cortex, as seen in patients like Terri Schiavo, who suffered from persistent vegetative state. Lesions of the cortical white matter, containing the fibers through which parts of the brain communicate, also cause unconsciousness. And small lesions deep within the brain along the midline of the thalamus and the midbrain can inactivate the cerebral cortex and indirectly lead to a coma—and a lack of consciousness.
To be conscious also requires the cortex and thalamus—the corticothalamic system—to be constantly suffused in a bath of substances known as neuromodulators, which aid or inhibit the transmission of nerve impulses. Finally, whatever the mechanisms necessary for consciousness, we know they must exist in both cortical hemispheres independently.
Much of what goes on in the brain has nothing to do with being conscious, however. Widespread damage to the cerebellum, the small structure at the base of the brain, has no effect on consciousness, despite the fact that more neurons reside there than in any other part of the brain. Neural activity obviously plays some essential role in consciousness but in itself is not enough to sustain a conscious state. We know that at the beginning of a deep sleep, consciousness fades, even though the neurons in the corticothalamic system continue to fire at a level of activity similar to that of quiet wakefulness.
Data from clinical studies and from basic research laboratories, made possible by the use of sophisticated instruments that detect and record neuronal activity, have given us a complex if still rudimentary understanding of the myriad processes that give rise to consciousness. We are still a very long way from being able to use this knowledge to build a conscious machine. Yet we can already take the first step in that long journey: we can list some aspects of consciousness that are not strictly necessary for building such an artifact.
Remarkably, consciousness does not seem to require many of the things we associate most deeply with being human: emotions, memory, self-reflection, language, sensing the world, and acting in it. Let’s start with sensory input and motor output: being conscious requires neither. We humans are generally aware of what goes on around us and occasionally of what goes on within our own bodies. It’s only natural to infer that consciousness is linked to our interaction with the world and with ourselves.
Yet when we dream, for instance, we are virtually disconnected from the environment—we acknowledge almost nothing of what happens around us, and our muscles are largely paralyzed. Nevertheless, we are conscious, sometimes vividly and grippingly so. This mental activity is reflected in electrical recordings of the dreaming brain showing that the corticothalamic system, intimately involved with sensory perception, continues to function more or less as it does in wakefulness.
Neurological evidence points to the same conclusion. People who have lost their eyesight can both imagine and dream in images, provided they had sight earlier in their lives. Patients with locked-in syndrome, which renders them almost completely paralyzed, are just as conscious as healthy subjects. Following a debilitating stroke, the French editor Jean-Dominique Bauby dictated his memoir, The Diving Bell and the Butterfly, by blinking his left eye. Stephen Hawking is a world-renowned physicist, best-selling author, and occasional guest star on “The Simpsons,” despite being immobilized from a degenerative neurological disorder.
So although being conscious depends on brain activity, it does not require any interaction with the environment. Whether the development of consciousness requires such interactions in early childhood, though, is a different matter.
How about emotions? Does a conscious being need to feel and display them? No: being conscious does not require emotion. People who’ve suffered damage to the frontal area of the brain, for instance, may exhibit a flat, emotionless affect; they are as dispassionate about their own predicament as they are about the problems of people around them. But even though their behavior is impaired and their judgment may be unsound, they still experience the sights and sounds of the world much the way normal people do.
Primal emotions like anger, fear, surprise, and joy are useful and perhaps even essential for the survival of a conscious organism. Likewise, a conscious machine might rely on emotions to make choices and deal with the complexities of the world. But it could be just a cold, calculating engine—and yet still be conscious.
Psychologists argue that consciousness requires selective attention—that is, the ability to focus on a given object, thought, or activity. Some have even argued that consciousness is selective attention. After all, when you pay attention to something, you become conscious of that thing and its properties; when your attention shifts, the object fades from consciousness.
Nevertheless, recent evidence favors the idea that a person can consciously perceive an event or object without paying attention to it. When you’re focused on a riveting movie, your surroundings aren’t reduced to a tunnel. You may not hear the phone ringing or your spouse calling your name, but you remain aware of certain aspects of the world around you. And here’s a surprise: the converse is also true. People can attend to events or objects—that is, their brains can preferentially process them—without consciously perceiving them. This fact suggests that being conscious does not require attention.
One experiment that supported this conclusion found that, as strange as it sounds, people could pay attention to an object that they never “saw.” Test subjects were shown static images of male and female nudes in one eye and rapidly flashing colored squares in the other eye. The flashing color rendered the nudes invisible—the subjects couldn’t even say where the nudes were in the image. Yet the psychologists showed that subjects nevertheless registered the unseen image if it was of the opposite sex.
What of memory? Most of us vividly remember our first kiss, our first car, or the images of the crumbling Twin Towers on 9/11. This kind of episodic memory would seem to be an integral part of consciousness. But the clinic tells us otherwise: being conscious does not require either explicit or working memory.
In 1953, an epileptic man known to the public only as H.M. had most of his hippocampus and neighboring regions on both sides of the brain surgically removed as an experimental treatment for his condition. From that day on, he couldn’t acquire any new long-term memories—not of the nurses and doctors who treated him, his room at the hospital, or any unfamiliar well-wishers who dropped by. He could recall only events that happened before his surgery. Such impairments, though, didn’t turn H.M. into a zombie. He is still alive today, and even if he can’t remember events from one day to the next, he is without doubt conscious.
The same holds true for the sort of working memory you need to perform any number of daily activities—to dial a phone number you just looked up or measure out the correct amount of crushed thyme given in the cookbook you just consulted. This memory is called dynamic because it lasts only as long as neuronal circuits remain active. But as with long-term memory, you don’t need it to be conscious.
Self-reflection is another human trait that seems deeply linked to consciousness. To assess consciousness, psychologists and other scientists often rely on verbal reports from their subjects. They ask questions like “What did you see?” To answer, a subject conjures up an image by “looking inside” and recalling whatever it was that was just viewed. So it is only natural to suggest that consciousness arises through your ability to reflect on your perception.
As it turns out, though, being conscious does not require self-reflection. When we become absorbed in some intense perceptual task—such as playing a fast-paced video game, swerving on a motorcycle through moving traffic, or running along a mountain trail—we are vividly conscious of the external world, without any need for reflection or introspection.
Neuroimaging studies suggest that we can be vividly conscious even when the front of the cerebral cortex, involved in judgment and self-representation, is relatively inactive. Patients with widespread injury to the front of the brain demonstrate serious deficits in their cognitive, executive, emotional, and planning abilities. But they appear to have nearly intact perceptual abilities.
Finally, being conscious does not require language. We humans affirm our consciousness through speech, describing and discussing our experiences with one another. So it’s natural to think that speech and consciousness are inextricably linked. They’re not. There are many patients who lose the ability to understand or use words and yet remain conscious. And infants, monkeys, dogs, and mice cannot speak, but they are conscious and can report their experiences in other ways.
So what about a machine? We’re going to assume that a machine does not require anything to be conscious that a naturally evolved organism—you or me, for example—doesn’t require. If that’s the case, then, to be conscious a machine does not need to engage with its environment, nor does it need long-term memory or working memory; it does not require attention, self-reflection, language, or emotion. Those things may help the machine survive in the real world. But to simply have subjective experience—being pleased at the sight of wispy white clouds scurrying across a perfectly blue sky—those traits are probably not necessary.
So what is necessary? What are the essential properties of consciousness, those without which there is no experience whatsoever?
We think the answer to that question has to do with the amount of integrated information that an organism, or a machine, can generate. Let’s say you are facing a blank screen that is alternately on or off, and you have been instructed to say “light” when the screen turns on and “dark” when it turns off. Next to you, a photodiode—one of the very simplest of machines—is set up to beep when the screen emits light and to stay silent when the screen is dark. The first problem that consciousness poses boils down to this: both you and the photodiode can differentiate between the screen being on or off, but while you can see light or dark, the photodiode does not consciously “see” anything. It merely responds to photons.
The key difference between you and the photodiode has to do with how much information is generated when the differentiation between light and dark is made. Information is classically defined as the reduction of uncertainty that occurs when one among many possible outcomes is chosen. So when the screen turns dark, the photodiode enters one of its two possible states; here, a state corresponds to one bit of information. But when you see the screen turn dark, you enter one out of a huge number of states: seeing a dark screen means you aren’t seeing a blue, red, or green screen, the Statue of Liberty, a picture of your child’s piano recital, or any of the other uncountable things that you have ever seen or could ever see. To you, “dark” means not just the opposite of light but also, and simultaneously, something different from colors, shapes, sounds, smells, or any mixture of the above.
So when you look at the dark screen, you rule out not just “light” but countless other possibilities. You don’t think of the stupefying number of possibilities, of course, but their mere existence corresponds to a huge amount of information.
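In information-theoretic terms this is the standard textbook formula (not one given explicitly here): choosing one outcome among N equally likely alternatives yields log2(N) bits. The repertoire sizes below are my own illustrative numbers.

```python
import math

def bits(n_states):
    """Information gained by selecting one of n equally likely states:
    the reduction of uncertainty, measured in bits."""
    return math.log2(n_states)

print(bits(2))      # the photodiode's light/dark choice: 1.0 bit
print(bits(2**20))  # a repertoire of about a million states: 20.0 bits
```

The photodiode's differentiation is worth one bit; yours, drawing on everything "dark" is not, is worth vastly more.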
Conscious experience consists of more than just differentiating among many states, however. Consider an idealized 1-megapixel digital camera. Even if each photodiode in the imager were just binary, the number of different patterns that imager could record is 2^1,000,000. Indeed, the camera could easily enter a different state for every frame from every movie that was or could ever be produced. It’s a staggering amount of information. Yet the camera is obviously not conscious. Why not?
We think that the difference between you and the camera has to do with integrated information. The camera can indeed be in any one of an absurdly large number of different states. However, the 1-megapixel sensor chip isn’t a single integrated system but rather a collection of one million individual, completely independent photodiodes, each with a repertoire of two states. And a million photodiodes are collectively no smarter than one photodiode.
By contrast, the repertoire of states available to you cannot be subdivided. You know this from experience: when you consciously see a certain image, you experience that image as an integrated whole. No matter how hard you try, you cannot divvy it up into smaller thumbprint images, and you cannot experience its colors independently of the shapes, or the left half of your field of view independently of the right half. Underlying this unity is a multitude of causal interactions among the relevant parts of your brain. And unlike chopping up the photodiodes in a camera sensor, disconnecting the elements of your brain that feed into consciousness would have profoundly detrimental effects.
To be conscious, then, you need to be a single integrated entity with a large repertoire of states. Let’s take this one step further: your level of consciousness has to do with how much integrated information you can generate. That’s why you have a higher level of consciousness than a tree frog or a supercomputer.
It is possible to work out a theoretical framework for gauging how effective different neural architectures would be at generating integrated information and therefore attaining a conscious state. This framework, the integrated information theory of consciousness, or IIT, is grounded in the mathematics of information and complexity theory and provides a specific measure of the amount of integrated information generated by any system comprising interacting parts. We call that measure Φ and express it in bits. The larger the value of Φ, the larger the entity’s conscious repertoire. (For students of information theory, Φ is an intrinsic property of the system, and so it is different from the Shannon information that can be sent through a channel.)
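IIT's actual Φ calculation is involved, but the intuition that integration means "the whole carries more information than its parts taken separately" can be sketched with a much cruder quantity, total correlation. This is my simplification for illustration, not the theory's real measure: it is zero for the camera-like case of independent units and positive when units constrain one another.

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy in bits of a distribution (outcome -> probability)."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """Sum of marginal entropies minus joint entropy. Zero when the
    units are independent; positive when they constrain each other.
    (A crude stand-in for the intuition behind Phi, not IIT's measure.)"""
    n = len(next(iter(joint)))          # number of units per outcome tuple
    marginals = []
    for i in range(n):
        m = Counter()
        for outcome, p in joint.items():
            m[outcome[i]] += p
        marginals.append(dict(m))
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two independent "photodiodes": jointly uniform over all four patterns.
independent = {(0, 0): .25, (0, 1): .25, (1, 0): .25, (1, 1): .25}
# Two coupled units that always agree: the whole exceeds the parts.
coupled = {(0, 0): .5, (1, 1): .5}

print(total_correlation(independent))  # 0.0
print(total_correlation(coupled))      # 1.0
```

A million disconnected photodiodes score zero on any such measure no matter how vast their combined state space, which is the article's point about the camera.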
IIT suggests a way of assessing consciousness in a machine—a Turing Test for consciousness, if you will. Other attempts at gauging machine consciousness, or at least intelligence, have fallen short. Carrying on an engaging conversation in natural language or playing strategy games were at various times thought to be uniquely human attributes. Any machine that had those capabilities would also have a human intellect, researchers once thought. But subsequent events proved them wrong—computer programs such as the chatterbot ALICE and the chess-playing supercomputer Deep Blue, which famously bested Garry Kasparov in 1997, demonstrated that machines can display human-level performance in narrow tasks. Yet none of those inventions displayed evidence of consciousness.
Scientists have also proposed that displaying emotion, self-recognition, or purposeful behavior are suitable criteria for machine consciousness. However, as we mentioned earlier, there are people who are clearly conscious but do not exhibit those traits.
What, then, would be a better test for machine consciousness? According to IIT, consciousness implies the availability of a large repertoire of states belonging to a single integrated system. To be useful, those internal states should also be highly informative about the world.
One test would be to ask the machine to describe a scene in a way that efficiently differentiates the scene’s key features from the immense range of other possible scenes. Humans are fantastically good at this: presented with a photo, a painting, or a frame from a movie, a normal adult can describe what’s going on, no matter how bizarre or novel the image is.
Consider the following response to a particular image: “It’s a robbery—there’s a man holding a gun and pointing it at another man, maybe a store clerk.” Asked to elaborate, the person could go on to say that it’s probably in a liquor store, given the bottles on the shelves, and that it may be in the United States, given the English-language newspaper and signs. Note that the exercise here is not to spot as many details as one can but to discriminate the scene, as a whole, from countless others.
So this is how we can test for machine consciousness: show it a picture and ask it for a concise description [see photos, “A Better Turing Test”]. The machine should be able to extract the gist of the image (it’s a liquor store) and what’s happening (it’s a robbery). The machine should also be able to describe which objects are in the picture and which are not (where’s the getaway car?), as well as the spatial relationships among the objects (the robber is holding a gun) and the causal relationships (the other man is holding up his hands because the bad guy is pointing a gun at him).
The machine would have to do as well as any of us to be considered as conscious as we humans are—so that a human judge could not tell the difference—and not only for the robbery scene but for any and all other scenes presented to it.
No machine or program comes close to pulling off such a feat today. In fact, image understanding remains one of the great unsolved problems of artificial intelligence. Machine-vision algorithms do a reasonable job of recognizing ZIP codes on envelopes or signatures on checks and of picking out pedestrians in street scenes. But deviate slightly from these well-constrained tasks and the algorithms fail utterly.
Very soon, computer scientists will no doubt create a program that can automatically label thousands of common objects in an image—a person, a building, a gun. But that software will still be far from conscious. Unless the program is explicitly written to conclude that the combination of man, gun, building, and terrified customer implies “robbery,” the program won’t realize that something dangerous is going on. And even if it were so written, it might sound a false alarm if a 5-year-old boy walked into view holding a toy pistol. A sufficiently conscious machine would not make such a mistake.
What is the best way to build a conscious machine? Two complementary strategies come to mind: either copying the mammalian brain or evolving a machine. Research groups worldwide are already pursuing both strategies, though not necessarily with the explicit goal of creating machine consciousness.
Though both of us work with detailed biophysical computer simulations of the cortex, we are not optimistic that modeling the brain will provide the insights needed to construct a conscious machine in the next few decades. Consider this sobering lesson: the roundworm Caenorhabditis elegans is a tiny creature whose brain has 302 nerve cells. Back in 1986, scientists used electron microscopy to painstakingly map its roughly 6000 chemical synapses and its complete wiring diagram. Yet more than two decades later, there is still no working model of how this minimal nervous system functions.
Now scale that up to a human brain with its 100 billion or so neurons and a couple hundred trillion synapses. Tracing all those synapses one by one is close to impossible, and it is not even clear whether it would be particularly useful, because the brain is astoundingly plastic, and the connection strengths of synapses are in constant flux. Simulating such a gigantic neural network model in the hope of seeing consciousness emerge, with millions of parameters whose values are only vaguely known, will not happen in the foreseeable future.
A more plausible alternative is to start with a suitably abstracted mammal-like architecture and evolve it into a conscious entity. Sony’s robotic dog, Aibo, and its humanoid, Qrio, were rudimentary attempts; they operated under a large number of fixed but flexible rules. Those rules yielded some impressive, lifelike behavior—chasing balls, dancing, climbing stairs—but such robots have no chance of passing our consciousness test.
So let’s try another tack. At MIT, computational neuroscientist Tomaso Poggio has shown that vision systems based on hierarchical, multilayered maps of neuronlike elements perform admirably at learning to categorize real-world images. In fact, they rival the performance of state-of-the-art machine-vision systems. Yet such systems are still very brittle. Move the test setup from cloudy New England to the brighter skies of Southern California and the system’s performance suffers. To begin to approach human behavior, such systems must become vastly more robust; likewise, the range of what they can recognize must increase considerably to encompass essentially all possible scenes.
Contemplating how to build such a machine will inevitably shed light on scientists’ understanding of our own consciousness. And just as we ourselves have evolved to experience and appreciate the infinite richness of the world, so too will we evolve constructs that share with us and other sentient animals the most ineffable, the most subjective of all features of life: consciousness itself.
About the Authors
CHRISTOF KOCH is a professor of cognitive and behavioral biology at Caltech.
GIULIO TONONI is a professor of psychiatry at the University of Wisconsin, Madison. In “Can Machines Be Conscious?,” the two neuroscientists discuss how to assess synthetic consciousness. Koch became interested in the physical basis of consciousness while suffering from a toothache. Why should the movement of certain ions across neuronal membranes in the brain give rise to pain? he wondered. Or, for that matter, to pleasure or the feeling of seeing the color blue? Contemplating such questions determined his research program for the next 20 years.
The Association for the Scientific Study of Consciousness, of which Christof Koch is executive director and Giulio Tononi is president-elect, publishes the journal Psyche and holds an annual conference. This year the group will meet in Taipei from 19 to 22 June. See the ASSC Web site for more information.
For details on the neurobiology of consciousness, see The Quest for Consciousness by Christof Koch (Roberts, 2004), with a foreword by Francis Crick.
The Chinese room argument is a thought experiment of John Searle (1980a) and associated (1984) derivation. It is one of the best known and widely credited counters to claims of artificial intelligence (AI)—that is, to claims that computers do or at least can (someday might) think. According to Searle’s original presentation, the argument is based on two key claims: brains cause minds and syntax doesn’t suffice for semantics. Its target is what Searle dubs “strong AI.” According to strong AI, Searle says, “the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (1980a, p. 417). Searle contrasts strong AI with “weak AI.” According to weak AI, computers just simulate thought, their seeming understanding isn’t real understanding (just as-if), their seeming calculation is only as-if calculation, etc. Nevertheless, computer simulation is useful for studying the mind (as for studying the weather and other things).
The map–territory relation describes the relationship between an object and a representation of that object, as in the relation between a geographical territory and a map of it. Polish-American scientist and philosopher Alfred Korzybski remarked that “the map is not the territory” and that “the word is not the thing”, encapsulating his view that an abstraction derived from something, or a reaction to it, is not the thing itself. Korzybski held that many people do confuse maps with territories, that is, confuse models of reality with reality itself. The relationship has also been expressed in other terms, such as Alan Watts’s “The menu is not the meal.”
First published at Heart of the Art, 20th January 2019
This blog is co-authored by John Atkinson and David Nabarro.
David is the strategic director for 4SD. He has previously worked for several years in senior roles within the UN system. These included coordinating the international response to the West Africa Ebola outbreak 2014-15, the UN’s response to volatile food prices and the Movement for Scaling-Up Nutrition. In October 2018 he was joint winner of the World Food Prize.
John is a founding director at PKP. He has designed, instigated and led whole systems change approaches at the global, national and local level for Governments and Cities as well as for multi-national corporations.
In our work together we have explored what systems leadership means, what working with living systems really looks like and how that plays out for real when you have a central role within loosely-organized human systems that are trying to address complex issues.
Where we begin
There are many models for how systems are supposed to work. Each has at its core a philosophy, sometimes explicitly understood and described, often less so. Most define a route towards an answer, via a prescribed methodology, resulting in seemingly inevitable success. Our experiences have been more varied.
The reality is that leadership through large, complex and politically contested issues can be very tough on the people involved. It challenges our perception as to what is for the best, and how best to achieve it. And it challenges how we can find connection with all those who need to be involved.
We find that as systems leaders focus on complex global challenges they cannot just rely on neat and ordered, often mechanical, approaches to problem-solving like Gantt charts, root cause analysis and logical frameworks. They need an altogether different set of characteristics, some of which are not easily learnt in the seminar room.
There are some important basic competences that should not be neglected:
An ability to encourage groups of people with similar core values to come together around a shared purpose.
To nurture co-creation of the future by wide groups of stakeholders.
To convene design-focused workshops full of diverse participants and to make records of decisions made, incorporating them in business plans.
All of these things really matter. We organize sessions that help develop the competence to do this: we participate in them, enjoy them and sense that colleagues benefit from them greatly.
However, it is our belief and experience that the essence of systems leadership goes beyond acquiring this basic competence. It calls for qualities of thought and action that are unique to effective systems leadership. It involves being able to feel what might be possible, how quickly and with whom. It also means living with the pain, discord and conflict that are inherent in getting divergent groups to work effectively together. The emotional core can be dark at times. Systems leaders must be confident that they can preserve their ability to lead in a difficult environment, through being resilient and functioning effectively in messy situations. It also means being capable of helping colleagues find the way along a path ahead, a path that is rarely clear and often needs creating as we go.
As new connections between groups and individuals form, new patterns arise and from that something novel becomes possible. At the same time something is collapsing. The existing ways of finding coherence are challenged. The fulfilling relationships and certainty as to who we are and where we fit are beginning to crumble. Job roles, departments, even whole organisations may play no part in the new future. This is invariably a political space. It can be deeply painful. People will contest the things that need to happen and in doing so they overtly or covertly fight for the continuation of the present. At such moments, all the doubt, insecurity and anger can be focused on you.
Three areas in which being confident helps
To be effective in this realm means being confident in working with the politics of living systems, dealing with uncertainty, and coping with adversity.
Politics: The politics of living systems are important whenever complex problems are being addressed, whether on a local or global scale. Decisions must be made about who gets what. The stakes are high, and different options can seem equally unpalatable. There are constant contests about who will win, and who might lose out; who will do what and when; who will pay, and how. So much depends on where the power to make things happen lies, and how that power is used. The real source of power is not always obvious.
Systems leaders must be confident when working with those who seek to accumulate and then use power, and they must be comfortable operating within this deeply political realm. This applies to the ‘big P’ Politics of local, national and international governance as well as the small ‘p’ of power relations within and between organizations.
Systems leaders appreciate the need to understand how power is being gained and the influences over its use within all manner of political processes. They take account of the multiplicity of power plays under way, with constant competition over scarce resources and much appearing to depend on the outcome of seemingly minor decisions. At the same time, there is a conundrum. It sometimes seems that politics are undermining efforts to get vital tasks done. This all means that an ability to work with the politics of living systems is an essential, but sometimes frustrating, aspect of a systems leader’s professional journey. The political processes are neither good nor bad, quite simply they are a part of the job. You have to be confident about working with them.
Uncertainty: Systems leaders must be comfortable with, and manage, uncertainty at all times. It is a given that there is uncertainty in the environment. If the future direction was clear and agreed you would have no work to do. What we are referring to is your own uncertainty; how you manage yourself. Here are two of the doubts we have felt in ourselves.
First, “how do I know whether my contribution is meaningful?” The uncertainty experienced by a system undergoing change can challenge both the systems leader’s existing sense of coherence as well as her or his ability to maintain it.
When systems are undergoing change, those involved start to doubt their relationships and question who they are and where they fit in. Some familiar things seem to crumble and this causes a fear of collapse.
This leads to anxiety and pain with many people holding on to the past, fighting to continue the present and disagreeing that change is needed. When we find ourselves in these moments, what is our escape valve? For both of us, John and David, being able to cope with uncertainty starts with knowing who we are, warts and all, and finding comfort with that.
Second, “how do I know whether I am being successful?” Most systems leaders sense that there is real progress when the systems themselves begin to grow strongly. Fresh and intriguing connections are made and new patterns start to arise. More effective ways to get things done are emerging. But things will not necessarily be better for everyone, at least in the short term.
That is why the systems leadership role means thinking through what happens to the less desirable parts of any system as well as those which we seek to enhance. None of us can make the difficult parts just vanish. Collapse and emergence walk hand-in-hand. The role of the systems leader is to accompany both, helping different actors work out what to resolve for themselves and what they need to resolve together. Then we devise ways in which it might be helpful to work with people, enabling them to develop the means for resolving the challenges they face.
Adversity: Systems leaders encourage connections between living systems in ways that enable them to make better collective sense of what is going on. Helping to make connections among people with a whole raft of pain and hurt is far from pleasant. In these circumstances, getting better connected can be personally challenging and some people will become hostile towards you. The systems leader can become the personification of a new and unwanted direction and is likely to be on the receiving end of hurt and anger. It is important to remember that whatever is conveyed in gestures, words or feelings is likely to be an expression of something deeper. The leader has to be resilient in the face of adversity and must try to avoid taking personal responsibility for the difficulties within systems.
In summary, when our egos cry out for recognition and reward it is a danger signal that should be heeded. If you want too much for yourself in any outcome (a new role, enhanced reputation or influence), you will invariably fail. Both pain and success are not, and cannot be, about you. If you are prepared to sink without trace in the final outcome, almost perversely you become more influential.
Three areas in which it helps to be capable
We have also seen that there are three key capabilities that are of the essence when systems leadership is applied to complex global challenges: being able to scope, evolve, and strop, jointly with those with whom we work.
Scoping: It is sometimes said that the pursuit of ambitious targets is the key to making things happen, but strong allegiance to targeting can have unintended consequences. It leads to wholesale shifts in organizational priorities or operations, and a focus on what is delivered, rather than what is experienced by those for whom services are provided. The negotiations involved in agreeing common milestones can take precious resources away from creativity and innovation and can even be used to block progress.
What is important is to maintain the sense of a meaningful direction that appeals to many, scoping on behalf of all. The direction and destination do not need to be described precisely or entirely. But they do need to draw on different elements that reflect the interests of the various groups of stakeholders. This lets people see how what they want can be achieved if they decide to play along (or at least appear to).
Both of us have experienced working with small groups that rapidly grew and grew by making sure that, while their purpose was clear, the means for getting there was necessarily vague, as well as being open and available. This attracted others who cared about the work to join the effort. This form of constructive ambiguity is a valuable attribute in systems leadership.
Evolving: In our experience there are important ways in which complex organizations influence what their associates do. One way is to use tightly defined purpose statements, project plans and outcome measures. This can stifle the kind of creativity that enables organizations to grow through adaptation. It is prioritizing the relentless pursuit of a pre-determined strategy, together with its milestones, over the gradual build-up of a strong momentum.
We have found that leaders appreciate that letting detailed plans and outcome measures become apparent as the work progresses allows for more effective ways to tackle complex issues.
It seems to us that being comfortable with this kind of progressive evolution is at the heart of systems leadership. Though it is welcome to many of those with whom we work, some will still struggle to find ways for combining progressive evolution with operational control and accountability.
Stropping: We appreciate that systems leaders will always prefer to work with whole systems – to “get all the system in the room”. What we find in practice is that this strategy is often realized later rather than earlier in the process. We’ve noticed how at first a small group senses that something disturbs the status quo and seems to coalesce. Slowly others who think like them are drawn into their conversations.
And we are also aware that if things shift significantly, it is absolutely necessary to ensure that the right groups are engaged, especially those with much to lose in any change. They often have quieter voices and less capacity than the groups who tend to be already at the table.
This is an example of stropping, pursuing strategy through opportunity. It is letting the strategy find its own place and pace through the opportunities that appear in the journey of change.
Five qualities – of thought and action – that are also helpful
As we encourage colleagues to develop their capabilities as systems leaders we see how their abilities are influenced by their experiences, presence and personalities. The way in which they do that in any given setting really does depend on who they are. From our perspective, David, as a qualified medical practitioner, can take positions in some groups that John simply cannot.
At the same time, we have come to appreciate the several different qualities of thought and action that help systems leaders as they navigate complexity and ambiguity. As before, we recognize that the ways in which these qualities are applied will be highly contextual and highly personal. We are interested to know how others use them as they lead efforts for systems change. We share five of them now:
1 Hold competing perspectives simultaneously
The nature of living systems is that they look different to everyone depending on where we sit in them. People can therefore hold competing views that contradict each other, and both can still be true. In the systems leadership role we need to be able to hold multiple competing perspectives simultaneously and give up striving for an objective truth. It isn’t there to be found.
2 See the whole system differently to its separate parts
We don’t arrive at a truth as to how the system works by studying its separate elements. So we shouldn’t do it. There are characteristics of any living system that are a function of that system as a whole and not found in any of its parts. We must focus on how the elements do and don’t relate, and what happens when they act together.
3 Feel into the pace, rhythm and readiness
It doesn’t matter what external timescale or plan is in place, a living system will move at a pace driven by its internal relationships and its relationship with its environment. As systems leaders we need to become adept at feeling into the pace of change that can be handled, the rhythm that underpins that pace and when things are ready, or not ready to move. If it’s not ready, we don’t try to move it. When it is ready to move fast, we don’t slow it down.
4 See the system in relationship to its environment
Living human systems evolve in symbiosis with their environment. An internal focus on the workings of the system tells us only a part of the story. The new stuff is invariably occurring around the points where the system and its environment are in closest contact. We must go and take a look there and reflect on what we’re seeing.
5 Meet people right where they really are
The way people show up is the way they show up. We can’t force them into a different place. We can’t make them move faster than they are prepared to go. So we see them, hear them and engage with them right there, not from where we feel they should be going. Then we’ll find the potential that exists, however great that is or otherwise.
One step at a time with the direction in mind
We constantly remind ourselves that systems leadership is both art and science. It is the artist and scientist in each of us that determines how we respond to what we uncover through our practice of systems leadership.
We hold the key to being good at being ourselves. As each situation unfolds we interpret what we find through our own experiences and emotions. Many of us find that being honest about our real motives, as well as our reactions to what is happening around us, helps us feel our way into the next step. And that, for us, is the beating heart of this art.
We take each situation one step at a time, always enhancing the quality of our thoughts and actions. We become confident within the politics, uncertainty and adversity. We scope, evolve and strop, always together, always trying to remain aware of the direction in which our steps might be leading.
We can’t tell you how to be you. We hope that by sharing our experience of this work it points you at how you might be you a little better. And how, in evolving your capacity to draw on all that you have within, you might make your world a little better too.
Preliminary Steps Toward a Universal Economic Dynamics for Monetary and Fiscal Policy
Cite as:
Yaneer Bar-Yam, Jean Langlois-Meurinne, Mari Kawakatsu, Rodolfo Garcia, Preliminary steps toward a universal economic dynamics for monetary and fiscal policy, arXiv:1710.06285 (October 10, 2017; Updated December 29, 2017).
We consider the relationship between economic activity and intervention, including monetary and fiscal policy, using a universal monetary and response dynamics framework. Central bank policies are designed for economic growth without excess inflation. However, unemployment, investment, consumption, and inflation are interlinked. Understanding dynamics is crucial to assessing the effects of policy, especially in the aftermath of the recent financial crisis. Here we lay out a program of research into monetary and economic dynamics and preliminary steps toward its execution. We use general principles of response theory to derive specific implications for policy. We find that the current approach, which considers the overall supply of money to the economy, is insufficient to effectively regulate economic growth. While it can achieve some degree of control, optimizing growth also requires a fiscal policy balancing monetary injection between two dominant loop flows: the consumption and wages loop, and the investment and returns loop. The balance arises from a composite of government tax, entitlement, and subsidy policies, corporate policies, and monetary policy. We further show that empirical evidence is consistent with a transition in 1980 between two regimes—from an oversupply of the consumption and wages loop to an oversupply of the investment and returns loop. The imbalance is manifest in savings and borrowing by consumers and investors, and in inflation. The latter followed an increasing trend until 1980, and a decreasing one since then, resulting in a zero interest rate largely unrelated to the financial crisis. Three recessions and the financial crisis are part of this dynamic. Optimizing growth now requires shifting the balance. Our analysis supports advocates of greater income and/or government support for the poor, who use a larger fraction of income for consumption. This promotes investment due to the growth in expenditures.
Otherwise, investment has limited opportunities to gain returns above inflation so capital remains uninvested, and does not contribute to the growth of economic activity.
Press Release
Wealth redistribution, not tax cuts, key to economic growth
CAMBRIDGE (October 17, 2017) — President Trump’s new tax plan will follow the familiar script of reducing taxes for the rich in the name of job creation. Not only will these trickle-down policies not work—they’ll make the problem worse. A new report by a team of complexity scientists demonstrates an alternative: increase wages to create more investment opportunities for the wealthy, thus creating new jobs and a stronger economy.
In the ten years since the financial crisis, despite massive economic interventions and zero interest rates, unemployment rates have only now returned to pre-crisis levels. Poverty and debt continue to be widespread, and economic growth struggles to reach 3 percent.
The new complexity science analysis describes the flows of money through the economy, not just the overall activity. It shows that there are two cycles of activity that have to be balanced against each other. The first is that workers earn salaries and consume goods and services. The second is that the wealthy invest in production and receive returns on their investment. The two loops have to be in the right balance in order for growth to happen. If there is more money in the worker loop, there aren’t enough products for them to purchase. If there is more money in the investment loop, consumers don’t have enough money to buy products so investment doesn’t happen.
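The two-loop balance described above can be caricatured in a toy discrete-time model. This is a sketch under loud assumptions: the `step` update rule, the `min` bottleneck, and every parameter value are illustrative inventions, not the paper's actual equations.

```python
# Toy sketch of the two-loop idea: each period, economic activity is capped
# by whichever loop (worker/consumption or investor/production) holds less
# money, so two economies with the same total money can grow very differently.

def step(worker, investor, growth=0.05):
    """One period: output is limited by the scarcer loop, and both loops
    then receive an equal share of the resulting activity."""
    activity = min(worker, investor)
    return worker + growth * activity, investor + growth * activity

balanced = (100.0, 100.0)   # same total money in both economies: 200 units
skewed = (180.0, 20.0)      # but here one loop starves the other

for _ in range(10):
    balanced = step(*balanced)
    skewed = step(*skewed)

# The balanced economy ends with strictly more total flow than the skewed one.
assert sum(balanced) > sum(skewed)
```

In caricature, this mirrors the point that money piled into one loop cannot raise activity when the other loop lacks the purchasing power (or the products) to match it.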
The paper shows that before 1980 there was too much money in the worker/consumer loop. That money was chasing too few products, giving rise to dangerously increasing inflation. After 1980, likely because of the Reaganomics tax changes, the balance tilted the other way. There was too much money in the investor loop and the result was a series of recessions. The Federal Reserve repeatedly intervened by lowering interest rates to compensate workers’ low wages with increased borrowing, in order to increase consumption.
The research shows that the way the government is regulating the economy is like driving a car with only the accelerator and without using the steering wheel. Steering means keeping the balance between the two loops in the right proportion. While Federal Reserve interventions have helped overcome the recessions, today we are up against the guard rail and need to rebalance the economy by shifting money back to the labor/consumer loop.
Since 1980 consumers have accumulated trillions of dollars of debt, and the wealthy have accumulated trillions of dollars of savings that is not invested because there is nothing to invest in that will give returns. This is the result of government policy reducing taxes for the wealthy in the name of increasing economic activity. No matter how much money investors have, these so-called “job creators” do not create jobs when consumers don’t have money to buy products. Increased economic activity requires both investment and purchase power to pay for the things the investment will produce.
The research shows that Reaganomics had the right idea at the time, but there is need today for a new, bold policy change in the opposite direction. The economy will grow if the flow is shifted toward workers/consumers and away from wealthy investors. The work cautions, however, that this has to be done in the right amount. Reaganomics moved things too far toward the wealthy, so shifting the flow in the other direction has to be done in the right measure.
The results suggest that current approaches to correcting economic problems by reducing government spending (austerity), while decreasing taxes for the wealthy to promote investment, are misguided. They may have been good policies in 1980 but they are long outdated today. It turns out that economic inequality is not just a social justice problem, but actually an economic problem. Fixing economic inequality will have dramatic benefits for economic growth.
FIG 1: Schematic model of monetary flow representing the wages and consumption loop and capital and return loop (red). Transfers from or to banks (savings and loans) and government (taxes, transfers, subsidies and other economic activities) are also indicated (black).
FIG 10: Plot of consumption versus investment between 1960 and 2015. The straight lines represent the dynamics of the economy if the ratio of consumption to investment were fixed. Recessions occurred in years marked by red dots.
FIG 3: Economic flows in the US from 1960 to 2015 according to categories of Fig. 1.
FIG 4: Fraction of economic activity for economic flows. The dominant flows are those of the primary loops in Fig. 1.
FIG 5: Wages divided by wages plus returns (similar to sw in Goodwin’s model), reflecting the percentage of economic activity in the wages and consumption loop compared to the total in the two dominant economic loops. A transition between different behaviors is apparent in 1980. Fits are exponential (blue) and sinusoidal (red) curves. Using the expression in Eq. 22 from 1960 to 1985 we have λ = 0.12/yr, z0 = 70.8%, z1 = −0.64%, t0 = 1960, and Eq. 23 from 1986 to 2005 we have z0 = 59.8%, z1 = 1.65%, k = −0.69/yr, φ0 = −6.0, and t0 = 1986, with p < 10⁻¹⁵ and p < 0.0001, respectively.
FIG 6: Estimates of borrowing and total savings (or debt) for Labor and Capital. A transition from capital borrowing to labor borrowing and capital savings in 1980 is evident. A. Labor borrowing obtained by subtracting wages and government benefits from consumption and taxes. B. Capital borrowing obtained by subtracting returns and government interest payments from investment and taxes. C. Labor total savings and D. Capital total savings, each obtained by aggregating borrowing since 1960.
FIG 7: Examples of economic development diagrams that indicate the state of economic activity in terms of the wages/consumption (vertical axis) and investment/returns (horizontal axis) loops. A. Shows stable flows that progress according to different policies that infuse the same proportion of money into each of the loops. This would be the case if all proportions were functional. B. Shows the case where, as one of the loops becomes larger than the other, economic activity is compromised and flows deviate in a way that eventually reduces economic activity, consistent with the expectation that each loop is necessary for the other. C. Shows what happens to B when policies are shifted by adding additional flows to the investor loop. Scales are arbitrary.
FIG 11: Same data shown in Fig. 10 but the region of the data between the 1960 and 2007 lines is expanded to the entire first quadrant by setting the 2007 vector direction as the x-axis (by subtracting it from all data) and, similarly, the 1960 straight line as the y-axis. Recessions occurred in years marked by red dots.
FIG 12: Interest rate (blue), inflation rate (red), and real interest rate (green) showing the two regimes of behavior prior to and after 1980, consistent with investment-limited and consumption-limited regimes. This suggests that the current zero interest rate is not due to the financial crisis but rather to the limiting behavior associated with the consumption-limited regime that started in 1980.
You are working on systems change. At times it feels isolating and overwhelming, and you’re not clear what move to make next.
You could really do with a place for thinking, reflection and support.
The Systems Sanctuary was designed with you in mind.
We host virtual, peer-mentoring programs for systems leaders like you to:
Share your unique challenges
Learn new tools and techniques and
Build an international network of new friends who can support you
We prioritize participation from people who are working from lived experience of unjust and unhealthy systems, and we welcome applications from people all over the world.
Early Bird price till 18 January 2019
Final deadline for applications 25 January 2019
Find out more below…
“Warm and human facilitation, good structure, sense of openness. The huge diversity of work people are undertaking was stimulating and inspiring”
Peer support for systems leaders focused on climate change
For those working on climate systems change from intersecting angles, including renewable energy, food production, climate justice, gender equity, and finance, who are at least two years into their work.
Peer support for women of systems change amid life transition
Open to women who have been working, or are beginning to work, in systems change. We encourage women and women-identifying participants of all ages to apply.
Peer support for systems leaders at least two years into their work
Previous applicants have worked on topics ranging from economic development, food security, immigration, aging, and criminal justice to cross-cutting issues like racism, gender equity, health, and education.
Peer support for systems leaders at least 2 years in, living in Australia or New Zealand
Many of our applicants come from Aus/NZ, so we’ve created an In the Thick of It Cohort designed specifically to build the ecosystem of systems changers in the region. Find out more
What do you know about systems leadership?
Read our new publication, highlighting key themes on the challenge of systems leadership from our first Cohort of In the Thick of It, 2018.
Any application of formal rationality to the real world is relative to an ontology, which cannot be derived formally. “Paradigm shift” means a large ontological reorganization. Broader understanding of this remodeling is needed now. https://t.co/ouuKTiRxHj
In 2010 I was introduced to the Berkana Institutes’s Two Loop model, and I come back to it again and again. As I’ve moved across different projects and jobs, it’s still the best way I’ve found to place myself in the system and what kind of role I’m playing. At Government Digital Service and the Co-op I was working in the dominant system trying to do the transition work. At Tech for Good Global, our whole purpose was centred around illuminating the pioneers and trying to build community so that the field grew in coherence. And a lot of the Point People’s work has been about connecting, building and nourishing networks across both systems.
It’s worth watching their short video that I’ve linked to above but I’ve also tried to sketch it out below, as I understand it.
The Berkana Two Loops Model: it’s intentional that the two loops never touch, as they are two entirely different paradigms.
In essence it shows a dominant system that is dying, and an emergent system that has the potential to become the system of influence. As the dominant system reaches its peak, new pioneers emerge (1), recognising that the dominant system (however impossible and far away that might seem) is beginning to decline.
The emergent system
It’s important that this new, emergent system is named, that the pioneers, the people and organisations building alternatives, are connected together (2), and that the work they are doing is illuminated.
Through this illumination and nurturing they form communities of practice (3) and grow more coherent as a field. As they do, more people and organisations join.
Illumination is also necessary to show a path for transition from the dying system to the alternative, emergent system. I also marked here those people that create an alternative system but remain on the edges, disconnected from the main influence of the system (4). These are the people that take themselves off to build new communities, living in alternative ways, but turn their back on any responsibility for anyone else.
The dominant system — but a system in decline
Of course a lot of what goes on in the dominant system is trying to crush the alternatives that are appearing in the emergent system.
It helps when there are people in the dominant system who work to protect and enable those alternatives as they emerge, whether through funding, new policies, different kinds of commissioning etc — holding the space for pioneers to do their work.
There are people that help keep the dominant system stable as it dies — this is important because there is still a lot that is dependent on that system.
Others work to help people and organisations transition from the existing, dominant system — helping make tangible how to do things in a new way and showing them what is happening in the emergent system. I always picture these people as doing hand-holding work — walking alongside organisations to cross the “transition bridge.” Some make it, others don’t.
But it’s the last role that I’m particularly interested in at the moment. The Hospice Worker role. As the dominant system starts to decline, they provide care and compassion for those that are dying and alleviate the pain.
The need to close things down, dismantle them, end things, is a natural part of change, but I don’t think we do it very well. I don’t think there is a well designed practice around it. And that’s the start of a new enquiry for me — The Farewell Fund — introduced in my next post.