PsyArXiv Preprints | The sense of should: A biologically-based model of social pressure – Theriault, Young, Barrett (2019)

 

Source: PsyArXiv Preprints | The sense of should: A biologically-based model of social pressure

 

The sense of should: A biologically-based model of social pressure

Created on: January 09, 2019
Last edited: September 17, 2019
File: Sense_of_Should_preprint.pdf (Version 6)

Abstract

What is social pressure, and how could it be adaptive to conform to others’ expectations? Existing accounts highlight the importance of reputation and social sanctions. Yet, conformist behavior is multiply determined: sometimes, a person desires social regard, but at other times she feels obligated to behave a certain way, regardless of any reputational benefit—i.e., she feels a sense of should. We develop a formal model of this sense of should, beginning from a minimal set of biological premises: that the brain is predictive, that prediction error has a metabolic cost, and that metabolic costs are prospectively avoided. It follows that unpredictable environments impose metabolic costs, and in social environments these costs can be reduced by conforming to others’ expectations. We elaborate on a sense of should’s benefits and subjective experience, its likely developmental trajectory, and its relation to embodied mental inference. From this individualistic metabolic strategy, the emergent dynamics unify social phenomena ranging from status quo biases, to communication and motivated cognition. We offer new solutions to long-studied problems (e.g., altruistic behavior), and show how compliance with arbitrary social practices is compelled without explicit sanctions. Social pressure may provide a foundation in individuals on which societies can be built.


Preprint DOI

10.31234/osf.io/x5rbs

License

CC-By Attribution 4.0 International 

The Information Theory of Individuality – Krakauer, Bertschinger, Olbrich, Ay, Flack (2014)

Source: [1412.2447] The Information Theory of Individuality

 

The Information Theory of Individuality

We consider biological individuality in terms of information-theoretic and graphical principles. Our purpose is to extract, through an algorithmic decomposition, system-environment boundaries supporting individuality. We infer or detect evolved individuals rather than assume that they exist. Given a set of consistent measurements over time, we discover a coarse-grained or quantized description of a system, inducing partitions (which can be nested). Legitimate individual partitions will propagate information from the past into the future, whereas spurious aggregations will not. Individuals are therefore defined in terms of ongoing, bounded information-processing units rather than lists of static features or conventional replication-based definitions, which tend to fail in the case of cultural change. One virtue of this approach is that it could expand the scope of what we consider adaptive or biological phenomena, particularly in the microscopic and macroscopic regimes of molecular and social phenomena.
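The past-to-future criterion in this abstract can be made concrete with mutual information. The sketch below is an illustration of the idea, not the paper's actual decomposition algorithm (function names and the scoring choice are our assumptions): a candidate coarse-grained description is scored by how much its present state predicts its next state, so a "legitimate" partition carries predictive information forward while a shuffled or constant aggregation carries none.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(X; Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def predictive_information(series):
    """I(S_t ; S_{t+1}): how much a coarse-grained state says about its own future."""
    return mutual_information(list(zip(series, series[1:])))

# A deterministic alternating process is almost perfectly self-predictive
# (close to 1 bit), while a constant aggregation propagates nothing.
ordered = [0, 1] * 500
print(predictive_information(ordered))   # close to 1.0 bit
print(predictive_information([7] * 100)) # 0.0
```

Comparing this score across candidate partitions of the same raw measurements is one simple way to operationalize "individuals as bounded information-processing units."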

Subjects: Populations and Evolution (q-bio.PE)
Cite as: arXiv:1412.2447 [q-bio.PE]
(or arXiv:1412.2447v1 [q-bio.PE] for this version)

Ecology and Society: The dynamics of purposeful change: a model – Silverman and Hill (2018)

Silverman, H., and G. M. Hill. 2018. The dynamics of purposeful change: a model. Ecology and Society 23(3):4. https://doi.org/10.5751/ES-10243-230304

Source: Ecology and Society: The dynamics of purposeful change: a model


Synthesis

The dynamics of purposeful change: a model

¹Pacific Northwest College of Art, ²University of Portland

ABSTRACT

In order to describe and depict the dynamics of purposeful change, we reexamine the concept of social-ecological systems (SES) and propose a linked but not integrated SES model. Adapting core resilience tools (stability landscape and panarchy), we construct a general model and then use a framework of key concepts (identity, logics, affiliations, affordances) to analyze the dynamics depicted therein. We illustrate this model’s use in two cases: a retrospective analysis of food-systems work amidst contending social regimes and an interpretive reading of published narratives describing individual-to-ecological stability and change. We discuss this model’s applicability in situations involving divergent perspectives, micro-meso-macro social dynamics, social regime identity, and the distinct dynamics of social and ecological systems. This examination illustrates the power and flexibility of these core resilience tools.

Key words: bricolage; institutional logics; path dependence; reflexivity; social attractors; system archetypes

INTRODUCTION

Efforts to describe and depict the dynamics of purposeful change, from the individual level to the ecological, encounter numerous challenges. By definition, such efforts must bridge across or be fragmented by academic disciplines. Conceptual tools (e.g., models, methods, metaphors) developed in one context may not apply to another.

A strength of resilience scholarship is its shared set of tools for conducting interdisciplinary examinations. However, and as we will illustrate, the dynamical systems modeling developed by resilience scholars for the study of ecosystems does not directly translate to the social domain (Anderies and Norberg 2008, Byrne and Callaghan 2014). In order to explore these complexities, we use core resilience tools, the stability landscape and panarchy, to construct a model of individual-to-ecological dynamics. Both these tools reflect a systems approach (Folke 2006), and we likewise follow a systems approach in adapting these tools.

This model development leads us to reexamine the concept of social-ecological systems (SES) (Folke 2016). We describe the SES as a tool for conceptualizing interrelationships across social and ecological system domains. This statement is not intended to question the reality of intertwined social and ecological phenomena. Indeed, humans are embedded in and dependent upon the natural world. While emphasizing the reality of such phenomena, we concurrently emphasize the conceptual nature of tools, such as SES models, with which one might investigate and understand such phenomena (Becker 2012). With this dual emphasis, we underscore the potential for multiple SES approaches.

To distinguish and discuss how social-ecological interrelationships might be conceptualized, we draw a distinction between integrated and linked SES approaches. We describe an “integrated” or “unit-of-analysis” approach as typified by the combined representation of social and ecological dynamics in a single stability landscape (Sendzimir et al. 2007, Westley et al. 2011, Rockström 2014, Allen et al. 2016). In contrast, we describe “linked” or “linked-but-not-integrated” SES approaches as emphasizing social-ecological interactions while also “explicitly distinguishing” between the dynamics of social and ecological system domains (Manuel-Navarrete 2015).

This paper’s outline is as follows. In the Theoretical Background section, we use two core resilience tools, the stability landscape and panarchy, to construct a linked-but-not-integrated SES model. In the Methods section, we describe our approach to developing and illustrating this model’s use as an analytical tool. We develop this tool by analyzing its depiction of individual-to-ecological dynamics, and we illustrate its use in two case studies. Lastly, we discuss this model’s practical applications and conclude by revisiting our initial propositions.

Questions about SES integration are not new. Holling (2001) and Westley et al. (2002) sought to distinguish ways in which human capabilities differ from those of other species. Walker et al. (2006) expressed cautions about “a common framework of system dynamics” before proposing its adoption. Since then, scholars have challenged integrated treatments of social and ecological dynamics (Hatt 2013, Brown 2016). What are the implications of integrating or linking depictions of social-ecological dynamics in a general model? This question animates our investigation.

Linked SES models can have significant practical applications. The focus of resilience scholarship on transformability (Folke et al. 2010, Smith and Stirling 2010, Pelling et al. 2012, Olsson et al. 2014) points to the value of granular resolution on social dynamics. We discuss this model’s applicability in four types of situations, involving divergent perspectives, micro-meso-macro social dynamics, social regime identity, and the distinct dynamics of social and ecological systems.

Continues in source: Ecology and Society: The dynamics of purposeful change: a model

The Limits of Science | National Affairs, Dworkin (2019)

via Thea Snow

Source: The Limits of Science | National Affairs

Ronald W. Dworkin

In the modern world, science has become the ultimate guide for describing reality. It’s easy to see the appeal. Science has a beautiful clarity and economy; its laws are straightforward and unchanging. It reveals the workings of the world around us with such calmness and exactness, and with such an appearance of impartiality, that we feel satisfied with its answers and seek nothing more.

Newtonian mechanics represents the nearest approach to this ideal of science ever achieved. Given the masses, positions, and motions of objects, their future positions and motions can be calculated with extraordinary precision. Sir Isaac Newton’s method was a revolution. Before Newton, science was conducted in an altogether different way; investigators speculated rather than experimented. It was Newton who stripped objects of all but their most basic attributes — mass and density — and timed their fall, drawing conclusions from what he observed rather than from what he imagined. By reducing objects to a few measurable characteristics, he was able to discover the universal laws that governed the behavior of all objects.

An analogous revolution occurred in political thought around the same period. While ancient philosophers tried to define virtue, Thomas Hobbes, whose lifetime spanned Newton’s early years, took the opposite approach. Stripping people of all but their most basic (and base) attributes — selfishness and vanity — he claimed to explain mankind’s mechanics, as it were, and the structure of civilization. His rules of the social contract explained how the basic machine of society works, just as Newton’s laws of motion explained how the machine of the universe works.

The scientific revolution has now entered a second phase. It has moved beyond the hard sciences and Hobbesian philosophy and become the unifying principle of many activities in daily life. Through the relatively new disciplines of psychology, neuroscience, human science, and social science, it has inserted itself into how people think and behave at the individual level, affecting everything from interpersonal relationships to psychological health to education. The scientific revolution permeates our lives, shaping our sense of reality and truth. But sometimes it does so in ways that result in sheer absurdity. This is because of flaws within the scientific method itself — in other words, at the scientific revolution’s core. These flaws rarely show up in hard science, but they grow more obvious, and more problematic, as humanity takes the place of inanimate objects as the method’s primary target.

To better understand what has happened, it will help to take a brief look back at the scientific revolution’s first phase.

NEWTON’S METHOD

In 1666, Isaac Newton was 23 years old and living in the English countryside when, according to legend, an apple fell while he was sitting under a tree. Lost in meditation, he wondered why an apple always falls to the ground and never sideways or upward. His reflections eventually led to his discovery of the law of gravity.

But the falling apple was only a fortuitous trigger. Newton’s mind was on the sun, the moon, the stars, and the five planets visible to the naked eye in his time. Nicolaus Copernicus had already shown that the planets orbited the sun, but he assumed they did so in circles. Later, Johannes Kepler had demonstrated that the orbits of the planets follow an elliptical pattern; he was even able to give an exact timetable of planetary motions. But there were no mechanical laws to account for these events. This was the problem Newton was wrestling with when the apple fell. His laws of motion sought to explain the biggest things in the universe — like the planets — and not the little things in people’s lives, like apples.
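The mechanical law Newton supplied can be stated in a single line (standard textbook form, not quoted from the essay): every two masses attract with a force

```latex
F = G\,\frac{m_1 m_2}{r^2}
```

Combined with his second law, $F = ma$, this one inverse-square rule yields Kepler’s elliptical orbits and timetable as consequences, and covers the falling apple in the same stroke.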

Newton’s scientific method is the basis of almost all scientific inquiry today (and, as we shall see, most non-scientific inquiry too). Most children learn in school that the scientific method involves forming a hypothesis, testing it through experimentation, and then analyzing the results to verify or disprove the hypothesis. All this seems clear and benign, but problems lurk just below the surface. The method requires some assumptions that inherently limit how true the results can be.

First, the scientific method is one of intentional ignorance. To understand complex phenomena, the method demands that investigators focus on certain chosen details, isolate them, and leave out all the rest. Thus, willfully or unconsciously, investigators artificially limit themselves and reach conclusions by looking at only a small portion of the facts.

Second, in isolating such details, and supposing such isolation to be accurate, investigators suppose what is false. Because investigators do not work with all the facts, their conclusions about complex phenomena are also false. At best, a conclusion may apply under the narrowest conditions.

And third, the scientific method encourages investigators to transcend individual details that can be seen or felt, and to substitute generalizations that are convenient for thought but nothing more than phantoms. Investigators credit these phantoms with real existence.

The limits of this method are quickly understood in practice. I encounter the pitfalls every day as a physician. Take, for example, the simple act of measuring a patient’s temperature. The scientific method tries to produce something exact and independent of human sensation. This is why doctors use a thermometer instead of just asking a patient if he feels warm or chilled. Science thinks it possible not only to feel, but to measure, how hot a body is; it assumes an absolute standard of hotness and coldness exists outside of ourselves. But what is temperature? Scientists have tried to define the term, calling it an “emergent property” of molecular motion, yet that phrase is too abstract to convey much information. Although temperature itself can be described in exact form (a number), the concept of temperature lacks clear meaning.

Thermometers do provide doctors with valuable information; there is a practical correlation between the number and the patient’s state of health. But that number is not the same as the truth of the generality that is supposed to underlie the number. In fact, as a physician, I don’t even need to use the word “temperature” when using a thermometer. I can just correlate the number on the thermometer with the patient’s status and start treatment.

The scientific method encourages doctors to find some exact and invariable understanding of temperature (by using a thermometer) while disregarding the human and the personal (asking whether the patient feels hot or cold). But doctors also recognize that discounting a patient’s symptoms makes no sense. When a patient says he feels warm or cold, his statement — vague as it is — has some meaning. Although imperfect, it is actually more real and certainly more relevant than the vague concept of temperature as an “emergent property.” This is why doctors use the scientific method only sometimes.

In medicine, the scientific method generates useful abstract concepts by studying thousands of bodies shorn of their attributes except for the handful being studied; those concepts are then applied to individual bodies in the form of diagnostic categories and treatments. The process works — sometimes — because the human body obviously has certain characteristics universal to all of us.

But the human mind is far more singular. Today’s scientific revolutionaries — social scientists, human scientists, and psychologists — have embraced the scientific method wholeheartedly, but they have forgotten that generalizations and universal concepts have far less value when working with non-material subjects such as the mind.

ZOOMING IN

To understand the error in the “second phase” of the scientific revolution, imagine a man, in trying to understand an object, moving away from that object rather than toward it. Instead of handling the object and examining it on all sides, he pushes it into the distance so that all details of color and unevenness of surface disappear, and only the object’s outline remains on the horizon. Because the object is now so smooth and uniform, the man thinks he has a clear understanding of it. This would be a delusion, of course, yet this is what professionals pushing the second phase of the scientific revolution argue: place people at a distance; siphon away all but a few of their individual attributes; create general, smooth, and uniform concepts from people’s minds; and we will better understand them.

The reason this makes no sense is also the reason why the scientific revolution launched by Newton began in astronomy, rather than in medicine, psychology, or human science: The scientific method works best when applied to an area we know little about. We sit on a tiny speck in the universe and make extremely limited observations about stars, planets, and galaxies. We can talk about them only in the simplest terms. Where there may be curves or parabolas, we see only a tiny fraction of the path from one angle and so call it a line. Because astronomical facts are evident on such an enormous scale that we see only a small portion of them, our ignorance lets us believe we have found the ideal science — only rarely does anything arise to challenge it. The star at a distance really appears to be smooth and uniform, even though it’s not. Conveniently for astronomers, their ignorance is not a matter of choice.

For similar reasons, physics and chemistry are the next most perfect sciences. Their scales are so tiny that we can’t see most of the details, only general effects here and there. For example, when chemists mix substances, the result is sometimes a new color or a precipitate. The precise movements of all the molecules in the mixture are unknown to chemists, just as the precise movements of all the stars in the universe are unknown to astronomers, yet certain observable changes do occur. Chemists single them out as the main phenomena, when in fact a lot more is involved. By focusing on just a few facts and dismissing countless others, chemists are able to arrange them in some order, generalize about them, and convey the sense of an ideal science the way astronomers do. It is no coincidence that the most perfect equations in chemistry involve gases, which are often invisible and the least amenable to detailed description.

The closer we get to our subject and the more we know, however, the more the scientific method breaks down. An astronomer can feel comfortable calling a faraway star’s path a line, even though it may curve out there at the edge of the universe; he can assume the scientific method has revealed the truth, and it will likely never be disproven. But as a doctor, I can’t focus on a few facts to the exclusion of others, for life is the level on which I work. In the operating room, I see people react differently to anesthesia all the time; I see lines become curves. I see a patient’s facial expression convey more than a supposedly objective measurement. I see the chaos of a dappled skin pattern convey more accurate information than what the scientific method has built out of carefully isolated details.

And though there is a great deal of variety in how human bodies react, it is nothing compared to the variety and unpredictability of human behavior. This is the level on which social scientists, human scientists, and psychologists work, and, unlike faraway stars, human life is something that we know a lot about. For every one observation made about stars, poets and philosophers have made millions about people’s habits, behaviors, and feelings. All people, expertly trained and uneducated alike, are intimately familiar with life. This is why the scientific method works so poorly on the level of life. Compared to astronomy, we see so much more. We know so many more details, and therefore we can watch the scientific method go wrong.

Even the most perfect concepts in the hard sciences are unreal. For example, Newton’s concept of absolute motion yielded a mathematical formula for planetary movement under certain conditions. Yet that mathematical formula does not deal with actual facts; it deals with a mental supposition, that there is one body moving alone in absolute space — a case that has never occurred and can never occur. The path it predicts is unreal. True, the formula is very accurate, and it lets scientists predict celestial events, but previous formulas also foretold astronomical events with some accuracy. Newton’s formula based on absolute motion is a more convenient fiction than prior formulas, but no less a fiction.

In chemistry, the perfect gas is a gas that achieves a fixed and stable condition in which its molecules cease to interact with one another. But such conditions never actually arise. At most, equations for the perfect gas apply exactly to a real gas at one theoretical point, when the state of an ever-changing gas corresponds exactly to that of a perfect gas. Then again, they apply only if the gas reaches that theoretical point, and it can’t — which means the gas equations are more metaphysics than physics.
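For reference, the “perfect gas” relation the author alludes to is the textbook ideal gas law, which presumes non-interacting point molecules:

```latex
PV = nRT
```

Real gases need interaction corrections, for example the van der Waals form $(P + a n^2/V^2)(V - nb) = nRT$, whose constants $a$ and $b$ restore precisely the molecular attraction and finite size that the ideal law assumes away.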

If a perfect state is impossible to achieve with inanimate objects, it is infinitely more impossible to achieve with human beings. Our minds are in constant flux. The psychologist’s concepts, the sociologist’s categories, the economist’s equations, and the cognitive neuroscientist’s principles are all flawed. They depend on a stable state, and yet no one position in life can be maintained in the midst of life’s constant motion and innumerable changes. Even if perfect stability could occur, it would do so for only an instant.

The difference between Newton and today’s scientific revolutionaries is that the former could conceal the scientific method’s defects while the latter cannot. Newton isolated certain celestial phenomena and created an unreal situation through his concept of absolute motion. From there, he derived an equation to estimate planetary motion. That equation works quite well because we test it under conditions that replicate the state of ignorance that Newton created when he limited the conditions of his experiment. He isolated facts and got away with it. Today’s revolutionaries, on the other hand, sometimes exhibit a misplaced confidence in the scientific method, believing that they can isolate human variables and apply their concepts in unreal situations. In their case, reality always hits back.

CONSEQUENCES OF THE UNREAL

Newton had warned others not to take the method too far. “To explain all nature is too difficult a task for any one man or any one age. ‘Tis much better to do a little with certainty,” he wrote.

But his advice was forgotten, as the allure of science and its authority proved irresistible in many disciplines not well-suited for it. When Freudian psychoanalysis dominated the field of psychology in the first half of the 20th century, science was not foremost in psychologists’ minds. But in the 1940s and ’50s, clinical psychologists began to embrace the scientific model of mental illness. Suddenly, future candidates for their profession were required to earn Ph.D.s. Psychologists began to wear white coats like physicians. Their articles began to read like studies in the Journal of Physics.

This same shift — abandoning the speculative or philosophical study of various aspects of human life in favor of a more scientific and quantitative approach — occurred in other disciplines around the same time. In the 19th century, before it was influenced by the scientific method, economics was called “political economy” and was under the control of philosophers such as John Stuart Mill and Karl Marx. To be an economist, one had to have a worldview; no special mathematical skill was generally required. But by the middle of the 20th century, economics had become highly mathematical. Political science has become a discipline of equations too. And though the notion of “public policy” as a discipline has existed for centuries, degree programs focusing on quantitative analysis did not really begin until the 1930s. Using the scientific method to mine “big data” has since become a defining activity for public-policy professionals. As for the field of “human science,” it didn’t even exist until the late 1960s. Neuroscience is more logically connected to the scientific method, but “cognitive neuroscience,” which ties the brain to human behavior, only came into being in 1976.

Practitioners of various disciplines in the 20th century knew the facts of life were vast and unmanageable and variable from person to person, but rather than satisfying themselves with groping for life’s answers through the veil of that reality, as previous generations had done, they used the scientific method to wander outward, seeking something definite and universal in abstraction. Rather than accept life’s complexities, they created concepts devoid of the imperfect human element. Rather than use generalizations merely to organize their thoughts, they credited their abstract concepts with a positive and authoritative existence, as an actual representation of facts. Although each person knows himself to be a unity, they carved up people into categories, subcategories, and disciplines, thinking that through fragmentation they would find some deeper meaning about life. In the end, all they did was to travel to the edge of sense and nonsense. This is the crisis in science that we find ourselves facing today.

We see this often in public policy, whose practitioners in government use the scientific method to devise rules for us to live by — including rules that sometimes violate common sense. Often some “study” lies behind the rule, a study using the scientific method and written in the scientific style, and therefore taken to represent unquestionable truth. An example of this is the famous 2007 campus sex study that led to the illiberal tribunals that now try many young college men without due process. The authors abstracted from hundreds of singular lives and relationships to create more generalized categories of sexual harassment, which were then merged into the more universal category of “sexual assault.” Using this vast new universal category, suddenly one in five college women had been sexually assaulted, though under any commonsense definition, the number was much lower. This so-called “rape crisis” led to the Department of Education’s issuing methods for prosecution that some have likened to the medieval Star Chamber.

We see similar patterns in psychology. There are more than a million caregivers working in the U.S. mental-health system (a 100-fold increase since 1940), all using some variation of the scientific method. Although much of what these caregivers provide is common sense, they insist on using the scientific forms to vindicate it. For example, it is common sense that a sad person might benefit from friendly conversation, but when the scientific method proves it, the point takes on the aura of scientific knowledge and therefore authority.

To see the problem in action, consider the true story of a middle-schooler whose mother wanted to arrange for her son to get more time to take his math tests. The school administrator told the mother to take him to a psychologist to secure a “formal accommodation.” The psychologist found no deficit in the boy’s processing speed, working memory, or fluency, so to give him the formal accommodation, the psychologist had to document a disability. So he diagnosed the boy with “depression/anxiety disorder” using sketchy criteria and prescribed psychotherapy, which was required to receive the accommodation. If the mother refused the psychotherapy, her son would not get the accommodation. The mother refused, and the son continued to receive average grades in school.

This mother and her son were victims of the scientific method. Psychologists bundle together certain human attributes and give them names — for example, “anxiety” and “depression.” Real people have thousands and thousands of attributes, but psychologists limit the number of attributes so they can form general concepts. Psychologists then create a concept out of these concepts — for example, a “disorder” — thereby broadening the generalization while sacrificing even more depth. The process is not unreasonable; well-defined categories make information easier to comprehend and easier to apply to new cases. And it is a basic principle of the scientific method that the fewer the variables, the cleaner the result. But in their quest for universal concepts, psychologists risk abandoning the real article and taking up with a shadow — that is, taking a concept drawn from a handful of human attributes, declaring it as a “disorder,” and applying it to a living person with thousands of attributes.

In another true example, a father insisted that his daughter practice the piano at an early age because neuroscience studies had shown that giving young children intensive music lessons enlarged the brain’s corpus callosum. Expert musicians reportedly have larger corpus callosums than non-musicians do, and the father wanted his daughter to have every possible advantage in this regard.

The daughter, who hated practicing, was a victim of the scientific method. Neuroscientists strip off all attributes they think they see in one musician and not in another until they find an attribute, namely the brain, which is common to all musicians. One relevant quality inside the brain is called the corpus callosum, and since it is known only by size, the corpus callosum and size become correlative attributes that neuroscientists find useful to class musicians by, not because they represent the various musicians particularly well, but because they are found in all musicians. It is like classifying people by their shoes — not because shoes are a valuable method of classification, but simply because everyone wears shoes of one kind or another. Having thought away all the other qualities of musicians except the two correlatives of corpus callosum and size, neuroscientists try to explain good musicianship by these two “thinks” that are left. They credit these “thinks” (corpus callosum and size) with an independent existence and proceed to derive an understanding of good musicianship from them.

Or consider a third example of our obsession with science doing more harm than good. On February 2, 1989, a new regulation published in the Federal Register required that mental-health caregivers working with elderly people have a college degree in the human behavioral sciences. My mother, lacking such a degree, lost her job as a social worker at a convalescent hospital after 20 years. Her “common touch” approach to the melancholy and disappointments of old age was deemed inferior to the categories of thought drawn from scientific abstractions and used by credentialed caregivers. The patients at the hospital rebelled; they despised the new, credentialed, young social worker; they sensed they were being considered solely from a clinical point of view. The scientific method had so denuded the young professional’s language of the personal, the vital, and the singular that they felt insulted. They wanted my mother back, but the law prevented it.

Our society’s obsession with a certain idea of science has resulted in the popular prejudice that the scientific method is the mark of a thinking person, and that those who question its conclusions are “anti-science” or “deniers” of science. At the very least, people feel inclined to give the findings of neuroscience, psychology, social science, and human science the benefit of the doubt as these disciplines gain more influence over our lives.

But in the process, we neglect their limits. As humanity increasingly looks to the scientific method to understand itself, it will inevitably be disappointed by the results. The methods that work on celestial objects — bodies too distant to be knowable — can never produce truly satisfying results at the intimate level of the human. There is simply too much rich complexity to isolate our variables, or to make statements or formulas or theories that can apply to all of us.

BACK TO HUMANITY

As a child, I loved the art of Dr. Frank Netter, the famous medical illustrator. The bodies of the people he drew matched perfectly with bodies I had already seen, but the colors did not. The violet blue of venous blood and the flaming red of a swollen abscess were unlike anything in my reality. Aside from the colors, organs themselves became transformed by Dr. Netter’s brush. The stark gray matter of the brain came alive, breathing intelligence. When red muscles were drawn taut, it was as though the body’s structure, firmly planted on the page, was resisting a wrenching and oppressive force.

Once I attended medical school, I realized there was none of this in real life. Neither muscle nor blood was beset by a stirring tremor. No colors came ablaze. At the very distance from which Dr. Netter had painted sick patients, I now stood, seeing nothing of what he had seen and wondering how he had.

I assumed I had been deceived by that special intensity that forms all youthful aspirations. Now I was a man of science. To get with the program, I purposely used the most unimaginative, the most scientific words possible when discussing my patients. Rather than say “red,” I said “discolored”; “big” became “enlarged.” I tried to excise from my vocabulary any words that seemed not just artistic but even remotely human — words such as heavy, light, hot, cold, vitality, right, and wrong.

This is one way to understand the purpose of the scientific method: Avoid judgments and feelings whenever possible and rely more on measurements, numbers, and physical dynamics. All sciences make the same effort. Biology is denuded of notions of vitality and forces of personality to become a question of cellular affinities, chemical reactions, and laws of osmosis. Physics is a question of atoms and sub-atomic particles. In psychology, social science, and human science, the scientific method prods professionals to go outside of humanity in search of something exact in itself, something that it can call substantial, and then return with abstract concepts created out of generalities to apply to clients.

During my fifth week in a hospital ward, I had an experience that challenged this view. One morning on my rounds, I talked with an elderly woman whose hair was out of place. She was mostly quiet, though she did say, “I just don’t feel well.” I dismissed the episode, but when my attending doctor heard about it, she roared into action. The patient was quickly evaluated and transferred to radiology for a ventilation-perfusion scan, which showed that she had suffered a blood clot in her lung. Later, I asked the attending doctor how she knew the patient had been in such trouble. She smiled and said that most elderly women, even the sickest, remain attentive to their hair. Only when they are in extremis do they ignore it. The second hint was the patient’s complaint. When old people complain that “this hurts” or “that hurts,” it’s usually not an emergency, she said. Something in them probably does hurt; getting old means every body part hurts at one point or another. It is when elderly patients are imprecise and say “I just don’t feel well” and are unable to blame any specific body part that a serious problem looms.

Life itself — which can only be known and experienced through real interactions with human beings — had taught me more than any controlled experiment could. I learned my lesson that day about the value of using my senses. A doctor must look at and listen to patients, even to the small stuff, and not limit his thinking to general categories.

The scientific method has enormous value in medicine, but it had made me distrust my senses. At first glance, this point seems counterintuitive. The whole purpose of Newton’s scientific revolution was to observe rather than to speculate, to use the senses to discover facts and reach objective conclusions rather than to ponder ideas in some medieval study. But the speculators who preceded Newton and who had tried to answer fundamental questions of the universe by merely thinking deeply about them committed another crime in the eyes of purveyors of the scientific method: They inevitably mixed their feelings in with their conjectures.

By relying on his senses and looking outward, Newton avoided the trap of pure guesswork. In the process he created a purely intellectual representation of the universe. His discovery was (and remains) a tremendous practical success. But inherent in the scientific method is the desire to clean all feeling out of fact. In the process, the real is emptied of meaning until it becomes pure generalization. The senses themselves cease to be valued.

In the hard sciences, this defect is worth it. But as the scientific method creeps into the human realm, the desire to be objective, to empty facts of feeling, demands the abandonment of the senses as well as the emotional and spiritual realms of the human person. Trying to create a purely intellectual representation of the cosmos is one thing; trying to create a purely intellectual representation of human beings is quite another.

Reflecting once again on Dr. Netter’s medical illustrations can reveal the limits of the scientific method and a path forward for its practitioners in the 21st century. His pictures were not completely accurate. They could not be. Yet it is also wrong to say that he simply thought up or invented his images of the human body. He saw the human being from his point of view. That was the whole point. His eyes absorbed and dispersed the rays of light at an angle special to him. Different impressions reached his nerve centers in their quest for synthesis, for fusion. And he managed to rouse the torpid mass of human flesh, wrest a feverish excitement from axons transmitting signals or blood pulsating through arteries, and communicate to viewers like me the emotion engulfing him.

With his senses, he studied bodies and their parts, yet behind his senses was a unity — a single individual with physical, intellectual, emotional, and spiritual facets, as complex as a universe. In that single unity, fact could not be divorced from feeling; to understand humanity, the two could not be separated. This was Dr. Netter’s insight. In his art he tried to understand another human being in the same way he tried to understand himself. It was not the scientific method. It was life.

The scientific revolution has been an enormous success; it has improved our health and prosperity while helping us to better understand the natural world. But in their zeal to apply the scientific method to the complexity of humanity itself, scientific revolutionaries sometimes push too far. The time has come to pause the second phase of the scientific revolution — to recover a more humble and skeptical approach to what the scientific method can achieve, to unite the emotional, spiritual, and intellectual dimensions of life, and to find our way back to our humanity.

Ronald W. Dworkin is a physician and political scientist. His work can be accessed at RonaldWDworkin.com.

Source: The Limits of Science | National Affairs

Twenty years of network science, Vespignani (2018)

Source: Twenty years of network science

Twenty years of network science

The idea that everyone in the world is connected to everyone else by just six degrees of separation was explained by the ‘small-world’ network model 20 years ago. What seemed to be a niche finding turned out to have huge consequences.

In 1998, Watts and Strogatz [1] introduced the ‘small-world’ model of networks, which describes the clustering and short separations of nodes found in many real-life networks. I still vividly remember the discussion I had with fellow statistical physicists at the time: the model was seen as sort of interesting, but seemed to be merely an exotic departure from the regular, lattice-like network structures we were used to. But the more the paper was assimilated by scientists from different fields, the more it became clear that it had deep implications for our understanding of dynamic behaviour and phase transitions in real-world phenomena ranging from contagion processes to information diffusion. It soon became apparent that the paper had ushered in a new era of research that would lead to the establishment of network science as a multidisciplinary field.

Before Watts and Strogatz published their paper, the archetypical network-generation algorithms were based on construction processes such as those described by the Erdős–Rényi model [2]. These processes are characterized by a lack of knowledge of the principles that guide the creation of connections (edges) between nodes in networks, and make the simple assumption that pairs of nodes can be connected at random with a given connection probability. Such a process generates random networks, in which the average path length between any two nodes in the network — measured as the smallest number of edges needed to connect the nodes — scales as the logarithm of the total number of nodes. In other words, randomness is sufficient to explain the small-world phenomenon popularized as ‘six degrees of separation’ [3,4]: the idea that everyone in the world is connected to everyone else through a chain of, at most, six mutual acquaintances.
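The logarithmic scaling is easy to check directly. The sketch below (pure Python; the graph size, connection probability, and function names are our own choices for illustration, not anything from the paper) links each pair of nodes independently with probability p and measures the typical separation by breadth-first search:

```python
import random
from collections import deque

def erdos_renyi(n, p, seed=1):
    """Link each of the n*(n-1)/2 node pairs independently with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def average_path_length(adj):
    """Mean shortest-path length over all reachable pairs (BFS from each node)."""
    total = pairs = 0
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# 400 nodes with average degree ~8: theory predicts typical separations
# near log(400)/log(8), i.e. around 3 -- tiny compared with the network size.
g = erdos_renyi(400, 8 / 399)
print(average_path_length(g))
```

Doubling the number of nodes adds only a constant to the typical separation, which is the essence of the six-degrees result.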

However, random construction fell short of capturing the local cliquishness of nodes observed in real-world networks. Cliquishness is measured quantitatively by the clustering coefficient of a node, which is defined as the ratio of the number of links between a node’s neighbours and the maximum number of such links. In real-world networks, node clustering is clearly exemplified by the axiom ‘the friends of my friends are my friends’: the probability of three people being friends with each other in a social network, for example, is generally much higher than would be predicted by a model network constructed using the simple, stochastic process.
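The clustering coefficient defined here can be computed in a few lines. A minimal sketch (the node labels and function name are illustrative):

```python
def clustering_coefficient(adj, v):
    """Links present among v's neighbours, divided by the maximum possible."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return links / (k * (k - 1) / 2)

# Node 0 has three neighbours {1, 2, 3}; only the pair (1, 2) is linked,
# so 1 of the 3 possible neighbour-neighbour links exists.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(clustering_coefficient(adj, 0))  # 1/3
```

A coefficient of 1 means the node's neighbourhood is a complete clique ("all my friends know each other"); 0 means none of its friends are acquainted.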

To overcome the dichotomy between randomness and cliquishness, Watts and Strogatz proposed a model whose starting point is a regular network that has a large clustering coefficient. Stochasticity is then introduced by allowing links to be rewired at random between nodes, with a fixed probability of rewiring (p) for all links. By tuning p, the model effectively interpolates between a regular lattice (p → 0) and a completely random network (p → 1).

At very small p values, the resulting network is a regular lattice and therefore has a high clustering coefficient. However, even at small p, short cuts appear between distant nodes in the lattice, dramatically reducing the average shortest path length (Fig. 1). Watts and Strogatz showed that, depending on the number of nodes [5], it is possible to find networks that have a large clustering coefficient and short average distances between nodes for a broad range of values, thus reconciling the small-world phenomenon with network cliquishness.
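That interpolation can be reproduced roughly in code. This sketch assumes the standard construction (a ring lattice of n nodes, each joined to its k nearest neighbours, with each edge rewired with probability p); the parameter values and function names are our own:

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=1):
    """Ring lattice (each node joined to its k nearest neighbours), then each
    edge rewired to a random new endpoint with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k // 2 + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
    for u in range(n):
        for v in list(adj[u]):
            if v > u and rng.random() < p:
                w = rng.randrange(n)
                if w != u and w not in adj[u]:  # skip self-loops and duplicates
                    adj[u].discard(v); adj[v].discard(u)
                    adj[u].add(w); adj[w].add(u)
    return adj

def average_path_length(adj):
    """Mean shortest-path length over reachable pairs, via BFS."""
    total = pairs = 0
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# A 5% chance of rewiring is enough to cut typical separations severalfold,
# while most local neighbourhoods (and hence clustering) survive intact.
lattice = watts_strogatz(300, 6, 0.0)
small_world = watts_strogatz(300, 6, 0.05)
print(average_path_length(lattice), average_path_length(small_world))
```

The comparison makes the paper's central point concrete: a handful of random shortcuts is enough to collapse path lengths long before the clustered local structure is destroyed.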

Figure 1 | The small-world network model. In 1998, Watts and Strogatz [1] described a model that helps to explain the structures of networks in the real world. a, They started with a regular network, depicted here as nodes connected in a triangular lattice in which each node is connected to six other nodes. b, They then allowed links between nodes to be rewired at random, with a fixed probability of rewiring for all links. As the probability increases, an increasing number of short cuts (red lines) connect distant nodes in the network. This generates the small-world effect: all nodes in the network can be connected by passing along a small number of links between nodes, but neighbouring nodes are connected to one another, forming clustered cliques.

Watts and Strogatz’s model was initially regarded simply as the explanation for six degrees of separation. But possibly its most important impact was to pave the way for studies of the effect of network structure on a wide range of dynamic phenomena. Another paper was also pivotal: in 1999, Barabási and Albert proposed the ‘preferential-attachment’ network model [6], which highlighted that the probability distribution describing the number of connections that form between nodes in real-world networks is often characterized by ‘heavy-tailed’ distributions, instead of the Poisson distribution predicted by random networks. The broad spectrum of emergent behaviour and phase transitions encapsulated in networks that have clustered connectedness (as in Watts and Strogatz’s model) and heterogeneous connectedness (as in the preferential-attachment model) attracted the attention of scientists from many fields.
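The heavy tail shows up even in a small simulation. The sketch below grows a network by degree-proportional attachment (a simplified Barabási–Albert construction; the sizes and names are our own):

```python
import random

def preferential_attachment(n, m, seed=1):
    """Grow a network node by node; each newcomer attaches m edges to
    existing nodes chosen in proportion to their current degree."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(m + 1)}
    targets = []  # one entry per edge endpoint, so sampling it is degree-weighted
    for u in range(m + 1):          # seed the growth with a clique of m + 1 nodes
        for v in range(u + 1, m + 1):
            adj[u].add(v); adj[v].add(u)
            targets += [u, v]
    for new in range(m + 1, n):
        adj[new] = set()
        chosen = set()
        while len(chosen) < m:      # m distinct degree-proportional picks
            chosen.add(rng.choice(targets))
        for t in chosen:
            adj[new].add(t); adj[t].add(new)
            targets += [new, t]
    return adj

net = preferential_attachment(2000, 3)
degrees = [len(nbrs) for nbrs in net.values()]
# The average degree is ~6, but the largest hubs reach degrees many times
# that -- a heavy tail that a Poisson (random-graph) distribution would
# make vanishingly rare.
print(sum(degrees) / len(degrees), max(degrees))
```

"Rich get richer" attachment is what produces the hubs: early, well-connected nodes keep attracting a disproportionate share of new links.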

A string of discoveries followed, highlighting how the complex structure of such networks underpins real-world systems, with implications for network robustness, the spreading of epidemics, information flow and the synchronization of collective behaviour across networks [7,8]. For example, the small-world connectivity pattern proved to be the key to understanding the structure of the World Wide Web [9] and how anatomical and functional areas of the brain communicate with each other [10]. Other structural properties of networks came under the microscope soon after [11–13], such as modularity and the concept of structural motifs, all of which helped scientists to characterize and understand the architecture of living and artificial systems, from subcellular networks to ecosystems and the Internet.

The current generation of network research cross-fertilizes areas that benefit from unprecedented computing power, big data sets and new computational modelling techniques, and thus provides a bridge between the dynamics of individual nodes and the emergent properties of macroscopic networks. But the immediacy and the simplicity of the small-world and preferential-attachment models still underpin our understanding of network topology. Indeed, the relevance of these models to different areas of science laid the foundation of the multidisciplinary field now known as network science.

Integrating knowledge and methodologies from fields as disparate as the social sciences, physics, biology, computer science and applied mathematics was not easy. It took several years to find common ground, agree on definitions and reconcile and appreciate the different approaches that each field had adopted to study networks. This is still a work in progress, presenting all the difficulties and traps inherent in interdisciplinary work. However, in the past 20 years a vibrant network-science community has emerged, with its own prestigious journals, research institutes and conferences attended by thousands of scientists.

By the 20th anniversary of the paper, more than 18,000 papers had cited the model, which is now considered to be one of the benchmark network topologies. Watts and Strogatz closed their paper by saying: “We hope that our work will stimulate further studies of small-world networks.” Perhaps no statement has ever been more prophetic.

Nature 558, 528-529 (2018)

doi: 10.1038/d41586-018-05444-y

 

Continues in source: Twenty years of network science

Interesting twitter discussion on OODA loop and complexity – from @commandodev

 

 

Scientific uncertainty, complex systems, and the design of common-pool institutions – James Wilson (2002)

 

Source: (PDF) Scientific uncertainty, complex systems, and the design of common-pool institutions

Scientific Uncertainty, Complex Systems, and the Design of Common-Pool Institutions
James Wilson

This paper addresses the question of how we cope with scientific uncertainty in exploited, complex natural systems such as marine fisheries. Ocean ecosystems are complex and have been very difficult to manage, as evidenced by the collapses of many large-scale fisheries (Boreman et al. 1999; Ludwig et al. 1993; National Research Council 1999). A large part of the problem arises from scientific uncertainty and our understanding of the nature of that uncertainty. The difficulty of the scientific problem in a complex, quickly changing, and highly adaptive environment such as the ocean should not be underestimated. It has created pervasive uncertainty that has been magnified by the strategic behavior of the various human interests who play in the game of fisheries management.

This paper argues that uncertainty in complex systems creates a more difficult conservation problem than necessary because (1) we have built into our governing institutions a very particular and inappropriate scientific conception of the ocean that assumes much more control over natural processes than we might hope to have (i.e., we assume we are dealing with an analog of simple physical systems), and (2) the individual incentives that result from this fiction, even in the best circumstances, are not aligned with social goals of sustainability. As a result, I believe we have slowed significantly the process of learning about the ocean, defined scientific uncertainty and precautionary acts in a way that may turn out to be highly risky, and created dysfunctional management institutions. This chapter suggests we are more likely to find ways to align individual incentives with ecosystem sustainability if we begin to view these systems as complex adaptive systems.

This perspective alters especially our sense of the extent and kind of control we might exercise in these systems and, as a result, has strong implications for the kinds of individual rights and collective governance structures that might work.
Available from: https://www.researchgate.net/publication/313201072_Scientific_uncertainty_complex_systems_and_the_design_of_common-pool_institutions [accessed Dec 31, 2019].

Continues in source: (PDF) Scientific uncertainty, complex systems, and the design of common-pool institutions

 

Transition Design Seminar CMU – Syllabus and Course Schedule for the Transition Design 2 Seminar

 

Source: Transition Design Seminar CMU – Syllabus and Course Schedule for the Transition Design 2 Seminar

About Transition Design

Transition Design acknowledges that we are living in ‘transitional times,’ takes as its central premise the need for societal transition (systems-level change) to more sustainable futures, and argues that design and designers have a key role to play in these transitions. This kind of design is connected to long horizons of time and compelling visions of sustainable futures and must be based upon new knowledge and skill sets.

In the past, there have been many attempts to leverage design as an agent for positive social change, but few of these have articulated how to undertake, lead and catalyze such change. Nor have they identified or incorporated the areas of knowledge and investigation required to do so. Transition Design is complementary to, and borrows from, myriad other design approaches (such as design for service and social innovation), but is distinct in several ways and is therefore generating a corresponding body of new knowledge and skill sets that can deepen and enhance design within more traditional and mainstream contexts.

The idea of and need for transition is central to a variety of current discourses concerned with how change manifests and how it can be initiated and directed (in ecosystems, organizations, communities/societies, economies and even individuals). These approaches inspired the term ‘Transition Design’, a new area of design focus that is informed by knowledge outside design such as science, philosophy, psychology, social science, anthropology and the humanities in order to gain a deeper understanding of how to design for change/transition in complex systems. Transition Design:

  • Brings together two global memes: 1) the recognition that whole societies and their infrastructures must transition toward more sustainable states; 2) that these transitions will require systems-level change and a deep understanding of systems dynamics.
  • Uses living systems theory as an approach both to understanding wicked problems and to designing solutions that address them.
  • Develops design solutions that protect and restore both social and natural ecosystems through the creation of mutually beneficial relationships between people, the things they make and do, and the natural environment.
  • Sees everyday life and lifestyles as the most important and fundamental context for design.
  • Emphasizes the need to resolve conflictual stakeholder relations, while leveraging areas of agreement/alignment.
  • Emphasizes the value of developing compelling visions of long-term, sustainable futures: stakeholders are able to transcend their differences in the present by focusing on a future they can all agree upon.
  • Designs solutions for short, medium and long horizons of time, at all levels of scale of everyday life (the household, the neighborhood, the city, the region).
  • Looks for emergent possibilities within problem contexts and amplifies grass-roots efforts and solutions that are already underway.
  • Links existing solutions together so that they can function as steps in a larger transition vision.
  • Distinguishes between ‘wants’ or ‘desires’ and genuine needs and bases solutions upon maximizing the satisfiers for the widest possible range of needs.
  • Sees the designer’s own mindset and posture as an essential component of transition designing.
  • Calls for the reintegration and re-contextualization of diverse transdisciplinary knowledge.

The Transition Design Framework

We use a heuristic model to characterize four different but interrelated and mutually influencing areas of Transition Design. These areas are 1) Vision; 2) Theories of Change; 3) Mindset & Posture; 4) New Ways of Designing.

Improvisation Blog: About Aboutness and Relations: Thoughts on #TheDigitalCondition

 

Source: Improvisation Blog: About Aboutness and Relations: Thoughts on #TheDigitalCondition

Tuesday, 29 October 2019

About Aboutness and Relations: Thoughts on #TheDigitalCondition

As part of the Cambridge Culture, Politics and Global Justice group on the Digital Condition, I made a video response which sought to bring a cybernetic perspective to Margaret Archer’s views on the “Practical domain” as pivotal in the relations between nature and the social. I remember challenging Archer on this many years ago when she gave a talk in London about her work on reflexivity and I suggested that Maturana and Varela’s concept of “structural coupling” provided a clearer explanation of what she was trying to articulate in terms of the relations between people, practices and things. She brushed the point aside at the time, although more recently I heard her talk more approvingly of autopoietic theory, so I’d be interested to know what she thinks now. This is my video:

One of the things about making a video like this is that it is a very different kind of thing from  Archer’s paper that we were all reading…

Continues in source: Improvisation Blog: About Aboutness and Relations: Thoughts on #TheDigitalCondition

 

Social Dynamics A curated collection of works by Jay W. Forrester

 

 

http://collections.systemdynamics.org/jwf/social-dynamics/


Social Dynamics
A curated collection of works
by Jay W. Forrester

“Loonshots” and phase transitions are the key to innovation, physicist argues | Ars Technica

via Arthur Battram

 

Source: “Loonshots” and phase transitions are the key to innovation, physicist argues | Ars Technica

 

“Loonshots” and phase transitions are the key to innovation, physicist argues

Ars chats with physicist and biotech guru Safi Bahcall about his book Loonshots.

Vannevar Bush seated at his desk, circa 1940-1944. During President Franklin Roosevelt's administration, Bush built a national science policy based on a new structure for innovating quickly and effectively.

Few people these days are familiar with the name Vannevar Bush, an engineer who played a significant role in fostering the development of key technologies that helped the Allied Forces win World War II. He also spearheaded a highly influential federal report, Science: The Endless Frontier. Presented to President Franklin Roosevelt in 1945, the report famously argued for federal funding of basic research in science, calling it “the pacemaker of technological progress.” It shaped national science policy in the US for decades, and helped usher in an unprecedented explosion of economy-boosting scientific and technological innovation. (On the downside, Bush took a very dim view of the humanities—including science history—and social sciences.)

Physicist Safi Bahcall first learned about Vannevar Bush when he joined the President’s Council of Advisors on Science and Technology in 2011, charged with producing a version of that 1945 report for the 21st century. The experience dovetailed nicely with his longstanding interest in the arc of human thought over the course of history, and his background as both a physicist and a biotech entrepreneur. (Bahcall comes by his physics bona fides naturally: his father is the late John Bahcall, best known for helping to solve the solar neutrino problem.) The result: an intriguing new theory about fostering innovation, based on the physics of phase transitions, that led to his first popular science book: Loonshots: How To Nurture the Crazy Ideas that Win Wars, Cure Diseases, and Transform Industries.

“I think business people are really tired of the thousands of more or less identical business books produced every year, saying more or less the same stuff,” Bahcall told Ars about his fresh approach to the topic. “And most economists have never seen the inside of a real company, so their models have no connection to reality. I happen to be in the middle of a very weird Venn Diagram of someone with condensed matter physics experience, someone with business experience, someone who likes to tell stories, and likes to think about history.”

According to Bahcall, the most significant breakthroughs come from what he calls “loonshots,” as opposed to “franchises”: ideas that seem a bit crazy and are hence often dismissed outright, with anyone championing them labeled unhinged. There are two types. An S-type loonshot introduces a novel strategy or business model that no one believes can ever make money. When Sam Walton founded Walmart in 1962, for instance, he did so in a small town far away from major cities, bucking conventional thinking about the best locations for major retail. Walmart is now the largest corporation in the world by revenue, per the Fortune Global 500 list.

Sam Walton's original Walton's Five and Dime Store in Bentonville, Arkansas, now serves as The Walmart Museum

A P-type loonshot introduces a new product or technology that nobody believes will work. Business leaders once thought the telephone was little more than a toy, and foolishly passed on investing in what would become the Bell Telephone Company. Similarly, physicist Robert H. Goddard‘s design for a liquid-fuel rocket in the 1920s was dismissed by academic and military experts at the time. Decades later, his invention helped usher in the era of spaceflight.

Understanding the science of phase transitions can nurture loonshots faster and better, according to Bahcall, so groups can achieve a harmonious balance between radical innovation (loonshots) and “operational excellence” (stable franchises). Instead of trying to change corporate culture, he maintains that small changes in structure can help transform group behavior, much like making tiny structural changes in a material can change its phase (water freezes into ice, or boils away as vapor). That was the secret to Vannevar Bush’s success: US military culture was resistant to taking risks on radical new ideas, so instead of trying to change the culture, he changed the structure, creating a separate research branch (eventually leading to the establishment of  DARPA), where those radical “high risk, high gain” ideas could find a home.

Bahcall’s theory rests on three fundamental concepts familiar to any condensed matter physicist: phase separation, dynamic equilibrium, and critical mass. No two phases can co-exist in an organization—say, being good at loonshots (e.g., original independent films) versus excelling at franchises (e.g., the Marvel Cinematic Universe)—unless they are poised right at the critical edge of a phase transition. “At the cusp of a phase transition, blocks of ice co-exist with pockets of liquid,” he writes. The phases break apart but stay connected, cycling back and forth to maintain a state of dynamic equilibrium—teetering on the edge of chaos. Ars sat down with Bahcall to learn a bit more about his intriguing new theory.

Ars Technica: Most people associate the critical threshold of a phase transition with Malcolm Gladwell’s 2000 bestseller The Tipping Point. How does your book differ from Gladwell’s take, almost 20 years later?

Bahcall: The Tipping Point is simply a qualitative discussion of the concept that the spread of ideas is governed by a phase transition. That’s well known in the literature, and [Gladwell’s book] weaves popular stories around the concept that the spread of ideas is like the spread of a virus. Gladwell pioneered the idea of translating academic science through compelling personal stories for a popular audience. That field, which was N=1 when he did it, is now N=5,000 people imitating him.

Robert H. Goddard, bundled against the cold weather of March 16, 1926, holds the launching frame of his most notable invention — the first liquid-fueled rocket, an example of a "P-type" loonshot.
NASA/Public domain

But there’s no underlying new theory there. Loonshots is written by a scientist, and it’s based on an underlying original theory that hasn’t existed in the world of economics. No one has ever suggested the concept of an organization having a phase transition based on underlying incentives. People have been working on this problem literally for 200 years, since Adam Smith first asked, “Well, how might incentives affect behavior in organizations?” There’s a straightforward underlying academic paper I could write, which is essentially appendix B of the book. Here’s the model. Here’s why it’s reasonable. Here’s why I’m making these approximations, and here’s how you analyze this model. And here’s what you extract from it.

Ars: Let’s talk a moment about “disruptive innovation” versus “loonshots,” because you draw a distinction between these two concepts.

Bahcall: So many people are sick of hearing about disruptive innovation. The flaw with that is that it’s a hindsight problem. Disruptive innovation is all about the effects of something on a market. If you’re talking about a new idea, the market might be two years, five years, ten years, or 20 years away. The even bigger flaw is that if it’s a very early stage idea, any experienced entrepreneur knows you have no idea where it’s going to be not only a year from now, but even next week. It could morph into something totally different. So you talk about disruptive innovation to analyze history. Otherwise, we should rip that word out of the dictionary.

A loonshot is about testing an embedded belief, those things that you are sure—as a manager, or a business leader running any kind of group—are absolutely true about your world, your market, your products. But what if you’re wrong? Do you want to hear about that new idea about as much as a bullet coming at your head, or do you want to nurture loonshots to challenge your beliefs?

Ars: How does the concept of phase transitions in physics translate into a viable model for human organizations? 

Bahcall: The underlying idea is that there are phases of human organization driven by the underlying interactions. So whenever you organize people into a group, the only precondition you need is that there is a mission for that group and a reward system, meaning an incentive system tied to that mission.

It can seem to a general audience to be a little crazy. How could you possibly apply physics to people? But it really is no different than economics. A market is just people interacting with incentives. The buyers want to get the lowest price, and the sellers want to get the highest price. Those are the interaction rules. The laws of supply and demand, or the “invisible hand” of the market, emerge simply from those rules. I’m just taking economics and applying it to a different system: an organization.

“The underlying idea is that there are phases of human organization driven by the underlying interactions.”

If you’re interacting within an organization, it’s not buyers and sellers, it’s employees and managers. And instead of buying and selling goods, wanting higher or lower prices, they want to maximize their incentives, their reward systems. It’s really the intersection of three things: organizational behavior, economics, and physics. Physics is only brought in as a way of thinking about collective behavior in a language that economists don’t usually use. Ultimately this is a subdiscipline of economics, organizational economics, which studies the influence of incentives inside an organization.

The tools, or techniques, of phase transitions are a simple set of shortcuts to extract useful insights from a system. You have people in this construct of an organization. You make a simple model. The goal is to make a model that keeps it simple, but not simplistic. You want to capture enough of the underlying interaction so that you can probe the features of the system you’re interested in, but not so much that the problem becomes intractable. So that’s what I did. I found a not very complicated model of an organization and the incentives inside of an organization that was tractable.

Ars: Can you explain how organizational structure changes and a phase transition might occur as a company grows in size?

Bahcall: Whenever you organize people into a group you instantly create two forces on any individual member of that group. One is their stake in the outcome of the project that they’re working on. And the other is the perks of rank within the hierarchy. So if you’re a two-person group, let’s say each person has a 50 percent stake in the outcome. Whether you call A the captain and B the co-captain is irrelevant. If the project works, everybody is happy, and if it fails, they’re depressed and unemployed. With four people, you’re now at a 25 percent stake. You’re probably going to have a team captain and three team members, but it still doesn’t matter very much.

But when you have a hundred people, your stake becomes, let’s say, 1 percent. You can’t have one person with 99 people reporting to them. That just doesn’t work. So you have one CEO, five VPs, 25 SVPs, and the rest are the associates or worker bees. Now, if your stake is 1 percent, what’s your reward for getting promoted? It’s probably more than 1 percent. All of a sudden we’ve had a shift. Somewhere between four and a hundred people, there is a shift in balance between these two forces.

That’s the phase transition. That’s the qualitative aspect. You can write down what that looks like more mathematically, with realistic incentives. You get cash: that’s how much your base salary goes up in the hierarchy. And you get equity: that’s your stake. You write those two terms down, and then you see where the break-even point is, where the derivative is zero. That gives you the equivalent of the critical point. It also tells you what controls that critical size. Those are the dials that you can adjust. By cranking up that critical size, you’re effectively making a more innovative group.
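The crossover Bahcall describes can be sketched numerically. This is a toy model under illustrative assumptions (the payoff, equity, and salary-step figures below are made up, and the full treatment in the book’s appendix includes more realistic terms): each member of an N-person group holds an equal stake in the project payoff, while promotion offers a fixed salary step, and the break-even point is where the two incentives are equal.

```python
# Toy sketch of the incentive crossover: project stake vs. perks of rank.
# All numbers are illustrative assumptions, not the book's actual model.

def project_stake(n, payoff=1_000_000, equity=1.0):
    """One member's expected stake in the project outcome, split N ways."""
    return payoff * equity / n

def promotion_reward(salary_step=50_000):
    """Value of moving up one rung in the hierarchy (assumed constant)."""
    return salary_step

def critical_size(payoff=1_000_000, equity=1.0, salary_step=50_000):
    """Break-even group size N*, where payoff * equity / N == salary_step."""
    return payoff * equity / salary_step

if __name__ == "__main__":
    print(f"critical size N* = {critical_size():.0f}")
    for n in (2, 4, 20, 100):
        stake = project_stake(n)
        side = "project stake" if stake > promotion_reward() else "perks of rank"
        print(f"N={n:>3}: stake={stake:>9,.0f}  dominant incentive: {side}")
```

With these assumed numbers the balance tips at 20 people: below that, each member’s stake in the outcome dominates; above it, the reward for getting promoted does. Raising the payoff or equity (or shrinking the salary step) pushes the critical size up, which is the “cranking up the dials” move described above.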

Safi Bahcall applies the tools and techniques of phase transitions to making science and technological breakthroughs.
St Martin’s Press

Ars: You also talk about how having a “system mindset,” versus an “outcome mindset,” can help an organization maintain the balance between radical innovation and operational excellence.

Bahcall: I took that idea from one of [chess grandmaster] Garry Kasparov‘s books. The process that helped him achieve world champion status is that, when he lost a game, he wouldn’t just analyze why a particular move was a bad one. That’s an outcome strategy: why did my outcome not achieve what I wanted? A more interesting level is one step up, where you look at the process behind the decision. Why did you make that decision? What set of rules were you following? That has leverage far beyond that one move. It could apply to hundreds or thousands of games in the future.

Now let’s translate that to teams, groups, or companies. A team launches a product. The product flops in the marketplace. Some teams will say, “All right, let’s sit down and figure out what happened here,” post mortem. “Well, this product didn’t have this feature. Our competitor clearly had that feature. It was superior. So let’s make sure next time we launch a product we look at these features and we don’t launch until it’s at least as good as our competitor.” That’s the lazy, lower level, outcome mindset.

The more sophisticated meta level is, how did we as a group arrive at the decision? If you use that as an opportunity to analyze your decision-making system inside a company, you can gain far more leverage. If you want to maintain this delicate balance between loonshots and franchises, you want to understand the process by which you make those decisions. Key to the success of that system is maintaining life at the edge—maintaining balance between these two groups. To maintain life on the edge, on the cusp of the phase transition, you need to be constantly probing your system.

JENNIFER OUELLETTE

Jennifer Ouellette is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Los Angeles.

Enactive Management – de la Cerda (2007), with Humphreys and Saavedra (2018), and Ramírez-Vizcaya and Froese (2019)

I think this is part of the puzzle 🙂

 

Source: [PDF] Self Management : an Innovative Tool for Enactive Human Design | Semantic Scholar

Published 2007

Self Management : an Innovative Tool for Enactive Human Design

O. García de la Cerda

The purpose of this paper is to present an innovative and creative approach to problem solution in decision making, based on the understanding of decision makers as human beings, and decision making processes as human networks, in an organizational context. This approach basically consists of the development of a powerful ontological tool for the observation, self-observation, design and innovation of human beings from passive observers towards enactive observers, who have to make decisions and solve problem situations through the interactions in which they participate. This tool, named CLEHES© (Body – Language – Emotion – History – Eros – Silence), allows one to develop not only all the human potential inside us, but also to bring all organizational resources, such as information technology and communications, into the decision makers’ bodies, to invent and re-invent new human practices that can create value for our organizations. Several applications of this tool have taken place in different domains and organizational contexts with notable results, which have been the focus of continuous research projects and managerial education settings.

 

Source: (PDF) Enactive management: A nurturing technology enabling fresh decision making to cope with conflict situations

Enactive management: A nurturing technology enabling fresh decision making to cope with conflict situations

Abstract
The focus of this paper is observation, self-observation, and enactive management of organizational conflict situations, whereby a community, an organization, or a human being has the possibility of recognizing their resources, generating changes in their practices if they so desire, and making fresh decisions, in the sense that different ontological dimensions are involved. We show how considering Body – Language – Emotions – History – Eros – Silence can configure a nurturing technology called CLEHES. This tool has been applied to diverse people, groups, communities, and organizations that need and wish to develop their own skills to inquire into conflict resolution practices, in order to learn as a human decision support system. Conflict situations are understood as interactions, a breakdown in-between CLEHES from the individual or social standpoints. This tool allows observing the boundaries of conflict situations and building an observer system with the ability to manage, solve, or attenuate the situation, enabling fresh decision-making attending to the context in which the organization moves. This learning process happens in a constructed place called an Enactive Laboratory, where strategies are developed to cope with the domains and context in the perceived individual and human activity systems. We present a case study focusing on a Learning Family Mediators System.

 

Source: Frontiers | The Enactive Approach to Habits: New Concepts for the Cognitive Science of Bad Habits and Addiction | Psychology

Front. Psychol., 26 February 2019 | https://doi.org/10.3389/fpsyg.2019.00301

The Enactive Approach to Habits: New Concepts for the Cognitive Science of Bad Habits and Addiction

  • 1Philosophy of Science Graduate Program, National Autonomous University of Mexico (UNAM), Mexico City, Mexico
  • 2Institute for Philosophical Research (IIF), National Autonomous University of Mexico (UNAM), Mexico City, Mexico
  • 3Institute for Applied Mathematics and Systems Research (IIMAS), National Autonomous University of Mexico (UNAM), Mexico City, Mexico
  • 4Center for the Sciences of Complexity (C3), UNAM, Mexico City, Mexico

Habits are the topic of a venerable history of research that extends back to antiquity, yet they were originally disregarded by the cognitive sciences. They started to become the focus of interdisciplinary research in the 1990s, but since then there has been a stalemate between those who approach habits as a kind of bodily automatism or as a kind of mindful action. This implicit mind-body dualism is ready to be overcome with the rise of interest in embodied, embedded, extended, and enactive (4E) cognition. We review the enactive approach and highlight how it moves beyond the traditional stalemate by integrating both autonomy and sense-making into its theory of agency. It defines a habit as an adaptive, precarious, and self-sustaining network of neural, bodily, and interactive processes that generate dynamical sensorimotor patterns. Habits constitute a central source of normativity for the agent. We identify a potential shortcoming of this enactive account with respect to bad habits, since self-maintenance of a habit would always be intrinsically good. Nevertheless, this is only a problem if, following the mainstream perspective on habits, we treat habits as isolated modules. The enactive approach replaces this atomism with a view of habits as constituting an interdependent whole on whose overall viability the individual habits depend. Accordingly, we propose to define a bad habit as one whose expression, while positive for itself, significantly impairs a person’s well-being by overruling the expression of other situationally relevant habits. We conclude by considering implications of this concept of bad habit for psychological and psychiatric research, particularly with respect to addiction research.

 

A curriculum for meta-rationality (What they don’t teach you at STEM school) | Meaningness – and some summary posts on David Chapman’s ideas

Other links:

To quote from https://meaningness.com/fluidity-preview/comments:

[I distinguish] three sources of “nebulosity”: linguistic ambiguity, epistemological uncertainty, and ontological indefiniteness. The first two are “problems in the map” and the third is “problems in the territory.”

Generally, it seems rationalism tries to deal with the map problems, and ignores the territory problems. (The “Guide to Words” is about linguistic ambiguity; and Bayes/decision theory/etc. are about epistemological uncertainty.)

“Fluidity” or “meta-rationality” is about territory “problems.” That is, the world is inherently fluid/mushy/vague, independent of any being’s beliefs about it.

I don’t know of any discussion by rationalists of ontological indefiniteness. The unstated background assumption seems to be that the world is perfectly well-behaved: facts are definitely true or false. It is just stuff in our brains (language and beliefs) that are imperfect.

Ontological remodelling – it’s not just that ‘the map is not the territory’, it’s that the territory is inherently fluid/mushy/vague: https://meaningness.com/eggplant/remodeling

How do we know? https://meaningness.com/metablog/meta-systematic-judgement

Nebulosity and pattern: https://meaningness.com/monism-dualism-recursion

Monism and dualism are opposites. But because each is obviously wrong, each turns into the other when cornered. A devious trick!

Monism is the stance that fixates sameness and connections, and denies differences and boundaries. Dualism is just the other way around: it denies sameness and connections, and fixates differences and boundaries.

Both these confused stances sometimes show themselves to be obviously wrong. The complete stance of participation recognizes that samenesses and differences, boundaries and connections, are all real, but also always somewhat nebulous: ambiguous and fluid. This is obviously accurate, but usually less convenient. Monism and dualism are simpler, and deliver particular emotional payoffs—some of the time.

not eternalism or nihilism but meaningness, not monism or dualism but participation, not causality or chaos but flow: https://meaningness.com/all-dimensions-schematic-overview

Pattern: https://meaningness.com/pattern

Nebulosity: https://meaningness.com/nebulosity

Terminology: emptiness and form, nebulosity and pattern: https://meaningness.com/terminology/emptiness-form-nebulosity-pattern

Pattern and nebulosity on the Deconstructing Yourself podcast: https://meaningness.com/metablog/deconstructing-yourself-6

Pattern and Nebulosity, with David Chapman

 

The syllabus for a curriculum teaching meta-rational skills: how to evaluate, combine, modify, discover, and create effective systems.

Source: What they don’t teach you at STEM school | Meaningness

This post sketches a hypothetical curriculum for developing these meta-systematic capabilities. It’s preliminary; perhaps even premature. There is no existing presentation of this subject that I know of, which makes it more difficult than it should be. My understanding of the topic draws on a dozen academic disciplines, each written in its own unnecessarily obscure code. Both my understanding, and the pedagogical structure I’m proposing, are tentative and incomplete.

Partly this presentation hopes to inspire some readers to pursue meta-systematicity; partly it is a plan for a large project that I hope to pursue myself; partly I hope you will give feedback, make suggestions, or contribute ideas to the project too!

Goal and audience

The overall goal is to take you from systematic rationality to meta-rationality as quickly and painlessly as possible. The curriculum should re-present insights I’ve found in many semi-relevant fields, as clearly and simply as possible, in STEM-friendly terms, in a structured, sequential format.

Learning meta-systematic skills shouldn’t be so hard, and meta-systematic understanding is particularly valuable in STEM. It is inherently somewhat conceptually difficult; but probably not as difficult as, say, senior-year undergraduate physics. However, it does have cognitive prerequisites.

This curriculum is for people who have mastered systematic rationality, specifically in a STEM framework. For the most part, you have to have a thorough understanding of how to work within systems before it’s feasible to step up and out of them, to manipulate them from above. There are other routes to mastering systematic rationality—through experience as a manager in a bureaucratic organization, for instance—but this curriculum will assume a STEM background.

The minimum requirement might be an undergraduate STEM degree; but research experience at the graduate level may be needed. You have to have seen how many different systems work, and—more importantly—how they fail. At the undergraduate level, you are mainly shielded from the failures, and systems get presented as though they were Absolute Truth. Or, at least, they are taught as though Absolute Truth lurks somewhere in the vicinity, obscured only by complex details. Recognizing that there is no Absolute Truth anywhere is a small downpayment on the price of entry to meta-systematicity.

That may already have set off warning bells. Woomeisters and postmodernists say things like that—and if you think they are horribly wrong, I agree!

This curriculum is about how to do STEM better. It is not about taking you out of a STEM worldview into some alternative. Everything here is on top of that view. It addresses limitations in the way STEM is typically taught and practiced, but does not contradict any of its content. There is no woo involved—including no STEM-flavored woo, such as neurobabble or quantum or Gödel woo.

In fact, a critical step is letting go of some of STEM’s own woo—quasi-religious beliefs about the ability of rationality to deliver certainty, understanding, and control. For that letting-go, the meta-systematic mode demands that one develop an additional cognitive style. Routine STEM is easy for those who are precise and rigid of mind, and so find promises of certainty, understanding, and control particularly comforting. Meta-systematicity requires openness, flexibility, daring, and uncommonly realistic common sense—as well as technical precision.

I’ll begin with some preliminary definitions, and provide a brief overview of the curriculum. Then most of the page goes through the syllabus, organized into ten modules, in more detail. That is still just a summary, which may be difficult to make sense of on its own. I’ve included in it links to resources that provide more explanation; some of my own web pages, and articles and books by others. At this stage in the project, even these leave many holes, which I hope to fill gradually. Many of the books are seriously difficult reading; the hypothetical curriculum would extract and explain clearly their relevant points.

Some loose definitions

By system, I mean, roughly, a collection of related concepts and rules that can be printed in a book of less than 10kg and followed consciously. A rational system is one that is “good” in some way. There are many different conceptions of what makes a system rational. Logical consistency is one; decision-theoretic criteria can form another. The details don’t matter here, because we are going to take rationality for granted.

Meta-systematic cognition is reasoning about, and acting on, systems from outside them, without using a system to do so. (Reasoning about systems using another system is systematic, and meta, but not “meta-systematic” in this sense.1) Meta-rationality, then, is “good” meta-systematic cognition. Mostly I use the terms interchangeably.

One field I draw on is the empirical psychology of adult development, as investigated by Robert Kegan particularly. This framework describes systematic rationality as stage 4 in the developmental path. Stage 5 is meta-systematic. However, as far as I know, no one from this discipline has applied the stage theory to STEM competence specifically. Empirical study of cognitive development in graduate-level STEM students would be helpful,2 but in the absence of that I’m working from a combination of first principles, bits of theory taken from many apparently-unrelated disciplines, anecdata, and personal experience.

According to this framework, there is also a stage 4.5, in which you lose the quasi-religious belief in systems, but haven’t yet developed the meta-systematic understanding that can replace blind faith. Stage 4.5 leaves you vulnerable to nihilism, including ontological despair (nothing seems true), epistemological anxiety (nothing seems knowable), and existential depression (nothing seems meaningful). It’s common to get stuck at 4.5, which is awful.

Continues in source: What they don’t teach you at STEM school | Meaningness

Supervenience – Wikipedia, the free encyclopedia | Model Report: Systems Thinking, Modeling and Simulation News

Source: Supervenience – Wikipedia, the free encyclopedia | Model Report: Systems Thinking, Modeling and Simulation News

W. Ulrich’s Home Page: A Mini-Primer of Critical Systems Heuristics

Key readings – https://wulrich.com/readings.html

Another overview – https://www.betterevaluation.org/en/plan/approach/critical_system_heuristics

A brief introduction by Werner Ulrich (pdf) http://projects.kmi.open.ac.uk/ecosensus/publications/ulrich_csh_intro.pdf

Other overviews:

  • https://i2s.anu.edu.au/resources/critical-systems-heuristics

Source: CSH | W. Ulrich | Ulrich’s Home Page: A Mini-Primer of Critical Systems Heuristics

 

Abstract

“Critical Systems Heuristics,” also just called “Critical Heuristics” or “CSH,” is a framework for reflective practice based on practical philosophy and systems thinking. The basic idea of CSH is to support boundary critique – a systematic effort of handling boundary judgments critically. Boundary judgments determine which empirical observations and value considerations count as relevant and which others are left out or are considered less important. Because they condition both “facts” and “values,” boundary judgments play an essential role when it comes to assessing the meaning and merits of a claim. Their systematic discussion can help bridge differences of perspectives across disciplines and between experts and non-experts. They also lend themselves to a specific critical employment, called emancipatory boundary critique, against claims that do not uncover their underlying boundary assumptions. CSH can thus serve as a tool for coproducing knowledge as well as for critical and emancipatory purposes on the part of people concerned by, but not necessarily involved in, the definition of relevant facts and values.

Critical systems heuristics (Ulrich 1983) represents the first systematic attempt at providing both a philosophical foundation and a practical framework for critical systems thinking. The Greek verb heurisk-ein means to find or to discover; heuristics is the art (or practice) of discovery. In management science and other applied disciplines, heuristic procedures serve to identify and explore relevant problem aspects, questions, or solution strategies, in distinction to deductive (algorithmic) procedures, which serve to solve problems that are logically and mathematically well defined. Professional practice cannot do without heuristics, as it usually starts from ‘soft’ (ill-defined, qualitative) issues such as what is the problem to be solved and what kind of change would represent an improvement.

A critical approach is required since there is no single right way to decide such issues; answers will depend on personal interests and views, value assumptions, and so on. A critical approach does not yield any single right answers either; but it can support processes of reflection and debate about alternative assumptions. Sound professional practice is critical practice.

CSH aims to support reflective professional practice through a critical employment of the systems idea. The methodological core idea is that all problem definitions, proposals for improvement, and evaluations of outcomes depend on prior judgments about the relevant whole system to be looked at. Improvement, for instance, is an eminently systemic concept, for unless it is defined with reference to the entire relevant system, suboptimization will occur. CSH calls these underpinning judgments boundary judgments, as they define the boundaries of the reference system (the situation or context considered relevant) to which a proposition refers and for which it is valid.

Accordingly, the methodological core idea of CSH is to support systematic processes of boundary critique. To this end, CSH offers (among other concepts) a table of boundary categories (Figure 1) that translates into a checklist of twelve critical boundary questions (Ulrich 1987, 1996, 2000). These can be used:

  1. To identify boundary judgments systematically;
  2. To analyze alternative reference systems for defining a problem or assessing a solution proposal; and
  3. To challenge in a compelling way any claims to knowledge, rationality, or ‘improvement’ that rely on hidden boundary judgments or take them for granted.
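The table of boundary categories behind the twelve questions can be sketched as a small data structure. The question wordings below are paraphrased approximations of Ulrich’s published framework (the table itself is not reproduced in this excerpt), so treat the labels as illustrative rather than canonical; the structure — four sources of influence, each probed by three questions, each askable in both “is” and “ought” modes — is the part the primer describes.

```python
# Sketch of Ulrich's CSH boundary categories as a checklist structure.
# Four sources of influence, three questions each; wordings are
# paraphrased approximations, not Ulrich's exact published text.

BOUNDARY_CATEGORIES = {
    "motivation": ["Who is/ought to be the client (beneficiary)?",
                   "What is/ought to be the purpose?",
                   "What is/ought to be the measure of improvement?"],
    "control":    ["Who is/ought to be the decision-maker?",
                   "What resources are/ought to be under their control?",
                   "What conditions are/ought to be outside that control?"],
    "knowledge":  ["Who is/ought to be the expert?",
                   "What expertise is/ought to be relevant?",
                   "What is/ought to be the guarantor of success?"],
    "legitimacy": ["Who is/ought to be the witness for those affected?",
                   "How are those affected given voice (emancipation)?",
                   "What worldview is/ought to underlie the design?"],
}

def checklist():
    """Flatten the table into the twelve critical boundary questions."""
    return [q for questions in BOUNDARY_CATEGORIES.values() for q in questions]

if __name__ == "__main__":
    for i, q in enumerate(checklist(), 1):
        print(f"{i:2}. {q}")
```

Walking the flattened list supports the first two uses above (surfacing and comparing boundary judgments); the third, emancipatory use comes from contrasting the “is” and “ought” answers for each question.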

The first two applications are basic for dealing with multiple perspectives in basically cooperative settings. They can help people understand why in respect to one and the same situation, their considerations of “fact” and “value” differ. They can thus help to bridge such differences or at least, to promote mutual understanding and cooperation in handling them. The third application, by contrast, leads to an emancipatory employment of systems thinking called emancipatory boundary critique. It offers both those involved in and those affected by professional practice an opportunity to develop a new kind of critical competence, a competence that will not depend on any special theoretical knowledge or expertise with respect to the problem or situation in question that would reach beyond what is available to ordinary citizens.

In short, CSH can be defined as a critical methodology for identifying and debating boundary judgements.

Continues in source: CSH | W. Ulrich | Ulrich’s Home Page: A Mini-Primer of Critical Systems Heuristics