Why Model? Joshua M Epstein (2008)

The article cited by Abeba Birhane

Source: Why Model?

 


Joshua M. Epstein (2008)

Why Model?

Journal of Artificial Societies and Social Simulation vol. 11, no. 4 12
<http://jasss.soc.surrey.ac.uk/11/4/12.html>


Received: 15-Oct-2008    Accepted: 19-Oct-2008    Published: 31-Oct-2008



* Abstract

This lecture treats some enduring misconceptions about modeling. One of these is that the goal is always prediction. The lecture distinguishes between explanation and prediction as modeling goals, and offers sixteen reasons other than prediction to build a model. It also challenges the common assumption that scientific theories arise from and ‘summarize’ data, when often, theories precede and guide data collection; without theory, in other words, it is not clear what data to collect. Among other things, it also argues that the modeling enterprise enforces habits of mind essential to freedom. It is based on the author’s 2008 Bastille Day keynote address to the Second World Congress on Social Simulation, George Mason University, and earlier addresses at the Institute of Medicine, the University of Michigan, and the Santa Fe Institute.

1.1
The modeling enterprise extends as far back as Archimedes; and so does its misunderstanding. I have been invited to share my thoughts on some enduring misconceptions about modeling. I hope that by doing so, I will give heart to aspiring modelers, and give pause to misguided critics.

Why Model?

1.2
The first question that arises frequently—sometimes innocently and sometimes not—is simply, “Why model?” Imagining a rhetorical (non-innocent) inquisitor, my favorite retort is, “You are a modeler.” Anyone who ventures a projection, or imagines how a social dynamic—an epidemic, war, or migration—would unfold is running some model.
1.3
But typically, it is an implicit model in which the assumptions are hidden, their internal consistency is untested, their logical consequences are unknown, and their relation to data is unknown. But, when you close your eyes and imagine an epidemic spreading, or any other social dynamic, you are running some model or other. It is just an implicit model that you haven’t written down (see Epstein 2007).
1.4
This being the case, I am always amused when these same people challenge me with the question, “Can you validate your model?” The appropriate retort, of course, is, “Can you validate yours?” At least I can write mine down so that it can, in principle, be calibrated to data, if that is what you mean by “validate,” a term I assiduously avoid (good Popperian that I am).
1.5
The choice, then, is not whether to build models; it’s whether to build explicit ones. In explicit models, assumptions are laid out in detail, so we can study exactly what they entail. On these assumptions, this sort of thing happens. When you alter the assumptions, that is what happens. By writing explicit models, you let others replicate your results.
1.6
You can in fact calibrate to historical cases if there are data, and can test against current data to the extent that it exists. And, importantly, you can incorporate the best domain (e.g., biomedical, ethnographic) expertise in a rigorous way. Indeed, models can be the focal points of teams involving experts from many disciplines.
1.7
Another advantage of explicit models is the feasibility of sensitivity analysis. One can sweep a huge range of parameters over a vast range of possible scenarios to identify the most salient uncertainties, regions of robustness, and important thresholds. I don’t see how to do that with an implicit mental model. It is important to note that in the policy sphere (if not in particle physics) models do not obviate the need for judgment. However, by revealing tradeoffs, uncertainties, and sensitivities, models can discipline the dialogue about options and make unavoidable judgments more considered.

Can You Predict?

1.8
No sooner are these points granted than the next question inevitably arises: “But can you predict?” For some reason, the moment you posit a model, prediction—as in a crystal ball that can tell the future—is reflexively presumed to be your goal. Of course, prediction might be a goal, and it might well be feasible, particularly if one admits statistical prediction in which stationary distributions (of wealth or epidemic sizes, for instance) are the regularities of interest. I’m sure that before Newton, people would have said “the orbits of the planets will never be predicted.” I don’t see how macroscopic prediction—pace Heisenberg—can be definitively and eternally precluded.

Sixteen Reasons Other Than Prediction to Build Models

1.9
But, more to the point, I can quickly think of 16 reasons other than prediction (at least in this bald sense) to build a model. In the space afforded, I cannot discuss all of these, and some have been treated en passant above. But, off the top of my head, and in no particular order, such modeling goals include:
  1. Explain (very distinct from predict)
  2. Guide data collection
  3. Illuminate core dynamics
  4. Suggest dynamical analogies
  5. Discover new questions
  6. Promote a scientific habit of mind
  7. Bound (bracket) outcomes to plausible ranges
  8. Illuminate core uncertainties
  9. Offer crisis options in near-real time
  10. Demonstrate tradeoffs / suggest efficiencies
  11. Challenge the robustness of prevailing theory through perturbations
  12. Expose prevailing wisdom as incompatible with available data
  13. Train practitioners
  14. Discipline the policy dialogue
  15. Educate the general public
  16. Reveal the apparently simple (complex) to be complex (simple)

Explanation Does Not Imply Prediction

1.10
One crucial distinction is between explain and predict. Plate tectonics surely explains earthquakes, but does not permit us to predict the time and place of their occurrence. Electrostatics explains lightning, but we cannot predict when or where the next bolt will strike. In all but certain (regrettably consequential) quarters, evolution is accepted as explaining speciation, but we cannot even predict next year’s flu strain. In the social sciences, I have tried to articulate and to demonstrate an approach I call generative explanation, in which macroscopic explananda—large scale regularities such as wealth distributions, spatial settlement patterns, or epidemic dynamics—emerge in populations of heterogeneous software individuals (agents) interacting locally under plausible behavioral rules (Epstein 2006; Ball 2007). For example, the computational reconstruction of an ancient civilization (the Anasazi) has been accomplished by this agent-based approach (Axtell et al. 2002; Diamond 2002). I consider this model to be explanatory, but I would not insist that it is predictive on that account. This work was data-driven. But I don’t think that is necessary.

To Guide Data Collection

1.11
On this point, many non-modelers, and indeed many modelers, harbor a naïve inductivism that might be paraphrased as follows: ‘Science proceeds from observation, and then models are constructed to ‘account for’ the data.’ The social science rendition—with which I am most familiar—would be that one first collects lots of data and then runs regressions on it. This can be very productive, but it is not the rule in science, where theory often precedes data collection. Maxwell’s electromagnetic theory is a prime example. From his equations the existence of radio waves was deduced. Only then were they sought … and found! General relativity predicted the deflection of light by gravity, which was only later confirmed by experiment. Without models, in other words, it is not always clear what data to collect!

Illuminate Core Dynamics: All the Best Models are Wrong

1.12
Simple models can be invaluable without being “right,” in an engineering sense. Indeed, by such lights, all the best models are wrong. But they are fruitfully wrong. They are illuminating abstractions. I think it was Picasso who said, “Art is a lie that helps us see the truth.” So it is with many simple beautiful models: the Lotka-Volterra ecosystem model, Hooke’s Law, or the Kermack-McKendrick epidemic equations. They continue to form the conceptual foundations of their respective fields. They are universally taught: mature practitioners, knowing full well the models’ approximate nature, nonetheless entrust to them the formation of the student’s most basic intuitions (see Epstein 1997). And this because they capture qualitative behaviors of overarching interest, such as predator-prey cycles, or the nonlinear threshold nature of epidemics and the notion of herd immunity. Again, the issue isn’t idealization—all models are idealizations. The issue is whether the model offers a fertile idealization. As George Box famously put it, “All models are wrong, but some are useful.”
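The threshold behavior the Kermack-McKendrick equations capture is easy to see by running them directly. A minimal sketch follows; the transmission and recovery rates here are illustrative assumptions chosen to display the threshold, not parameters of any real disease:

```python
# Minimal Kermack-McKendrick SIR model, integrated with simple Euler steps.
# beta (transmission rate) and gamma (recovery rate) below are illustrative
# assumptions, not fitted values.

def final_susceptible(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=20000):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I and
    return the susceptible fraction left after the outbreak dies out."""
    s, i = s0, i0
    for _ in range(steps):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
    return s

# The nonlinear threshold: an epidemic takes off only when
# R0 = beta / gamma exceeds 1 (with s0 near 1).
big_outbreak = 1 - final_susceptible(beta=0.5, gamma=0.1)  # R0 = 5
no_outbreak  = 1 - final_susceptible(beta=0.5, gamma=1.0)  # R0 = 0.5
```

With R0 = 5 nearly the whole population is eventually infected; with R0 = 0.5 the outbreak fizzles despite identical transmission. That qualitative switch is exactly the herd-immunity intuition the simple model is trusted to instill.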

Suggest Analogies

1.13
It is a startling and wonderful fact that a huge variety of seemingly unrelated processes have formally identical models (i.e., they can all be seen as interpretations of the same underlying formalism). For example, electrostatic attraction under Coulomb’s Law and gravitational attraction under Newton’s Law have the same algebraic form. The physical diversity of diffusive processes satisfying the “heat” equation or of oscillatory processes satisfying the “wave” equation is virtually boundless. In his economics Nobel Lecture, Samuelson writes that, “if you look at the monopolistic firm as an example of a maximum system, you can connect up its structural relations with those that prevail for an entropy-maximizing thermodynamic system…absolute temperature and entropy have to each other the same conjugate or dual relation that the wage rate has to labor or the land rent has to acres of land.” One diagram, in his words, does “double duty, depicting the economic relationships as well as the thermodynamic ones.” (Samuelson 1972; see also Epstein 1997). In developing the Anasazi model noted earlier, my colleagues and I made a “computational analogy” between the well-known Sugarscape model (Epstein and Axtell 1996) and the actual MaizeScape on which the ancient Anasazi lived.
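The Coulomb/Newton identity can be exhibited in a few lines: one inverse-square function serves as both laws, with only the constant and the interpretation of the inputs changing. (The Earth-Moon and hydrogen-atom figures below are standard textbook values, used purely for illustration.)

```python
# One formalism, two interpretations: F = k * a * b / r**2 is both
# Newton's law of gravitation and Coulomb's law of electrostatics.

def inverse_square(k, a, b, r):
    """Generic inverse-square attraction between two 'charges' a and b."""
    return k * a * b / r**2

G = 6.674e-11   # gravitational constant, N m^2 / kg^2
K_E = 8.988e9   # Coulomb constant, N m^2 / C^2

# Gravitational force between Earth and Moon (masses in kg, distance in m):
f_gravity = inverse_square(G, 5.972e24, 7.348e22, 3.844e8)

# Electrostatic force between an electron and a proton at the Bohr radius
# (charges in C, distance in m):
f_coulomb = inverse_square(K_E, 1.602e-19, 1.602e-19, 5.29e-11)
```

Same function, wildly different physical domains: the analogy is not decorative but structural, which is what lets theory developed for one interpretation transfer to the other.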
1.14
I am suggesting that analogies are more than beautiful testaments to the unifying power of models: they are headlights in dark unexplored territory. For instance, there is a powerful theory of infectious diseases. Do revolutions, or religions, or the adoption of innovations unfold like epidemics? Is it useful to think of these processes as formal analogues? If so, then a powerful pre-existing theory can be brought to bear on the unexplored field, perhaps leading to rapid advance.

Raise New Questions

1.15
Models can surprise us, make us curious, and lead to new questions. This is what I hate about exams. They only show that you can answer somebody else’s question, when the most important thing is: Can you ask a new question? It’s the new questions (e.g., Hilbert’s Problems) that produce huge advances, and models can help us discover them.

From Ignorant Militance to Militant Ignorance

1.16
To me, however, the most important contribution of the modeling enterprise—as distinct from any particular model, or modeling technique—is that it enforces a scientific habit of mind, which I would characterize as one of militant ignorance—an iron commitment to “I don’t know.” That is, all scientific knowledge is uncertain, contingent, subject to revision, and falsifiable in principle. (This, of course, does not mean readily falsified. It means that one can in principle specify observations that, if made, would falsify it). One does not base beliefs on authority, but ultimately on evidence. This, of course, is a very dangerous idea. It levels the playing field, and permits the lowliest peasant to challenge the most exalted ruler—obviously an intolerable risk.
1.17
This is why science, as a mode of inquiry, is fundamentally antithetical to all monolithic intellectual systems. In a beautiful essay, Feynman (1999) talks about the hard-won “freedom to doubt.” It was born of a long and brutal struggle, and is essential to a functioning democracy. Intellectuals have a solemn duty to doubt, and to teach doubt. Education, in its truest sense, is not about “a saleable skill set.” It’s about freedom, from inherited prejudice and argument by authority. This is the deepest contribution of the modeling enterprise. It enforces habits of mind essential to freedom.

*  Acknowledgements

I thank Ross A. Hammond for insightful comments and acknowledge funding support from the National Institutes of Health MIDAS Project [GM-03-008] and the 2008 NIH Director’s Pioneer Award [1DP1OD003874-01].

*  References

AXTELL, RL, JM Epstein, JS Dean, GJ Gumerman, AC Swedlund, J Harberger, S Chakravarty, R Hammond, J Parker and M Parker (2002). “Population Growth and Collapse in a Multi-Agent Model of the Kayenta Anasazi in Long House Valley”. Proceedings of the National Academy of Sciences, Colloquium 99(3): 7275-79.

BALL, Philip (2007). “Social Science Goes Virtual”. Nature, Vol 448, 9 August.

DIAMOND, Jared M. (2002). “Life with the Artificial Anasazi”. Nature 419: 567-69.

EPSTEIN, Joshua M. and Robert Axtell (1996). Growing Artificial Societies: Social Science from the Bottom Up. MIT Press.

EPSTEIN, Joshua M. (1997). Nonlinear Dynamics, Mathematical Biology, and Social Science. Addison-Wesley Publishing Company, Inc.

EPSTEIN, Joshua M. (2006). Generative Social Science: Studies in Agent-Based Computational Modeling. Princeton University Press.

EPSTEIN, Joshua M. (2007). “Remarks on the Role of Modeling in Infectious Disease Mitigation and Containment”. In Stanley M. Lemon, et al, Editors, Ethical and Legal Considerations in Mitigating Pandemic Disease: Workshop Summary. Forum on Microbial Threats, Institute of Medicine of the National Academies. National Academies Press.

FEYNMAN, Richard P. (1999) “The Value of Science.” In Feynman, R. P. The Pleasure of Finding Things Out. Perseus Publishing.

SAMUELSON, Paul A. (1972). “Maximum Principles in Analytical Economics”. In The Collected Scientific Papers of Paul A. Samuelson, edited by Robert Merton, Vol III, 8-9. Nobel Memorial Lecture, Dec. 11, 1970. MIT Press.

Why model? Abeba Birhane

Abeba Birhane

I came across this little paper on the Introduction to Dynamical Systems and Chaos online course from Santa Fe. It was provided as a supplementary reading in the ‘Modelling’ section. The paper lays out some of the most enduring misconceptions about building models.

“The modeling enterprise extends as far back as Archimedes; and so does its misunderstanding.”

So, why model? What are models? And who are modellers?

Prior to reading this paper, my short answers to these questions would have been in accordance with the widely held misconceptions that:

We model to explain and/or predict. Models are formal representations (often mathematical) of phenomena or processes. And a modeller is someone who builds these explicit formal mathematical models. However, Epstein explains:

“Anyone who ventures a projection, or imagines how a social dynamic—an epidemic, war, or migration—would unfold is running some model.”

I like the idea that we all run some…


Why being smart is not enough — the social skills and structures of tackling complexity – Mieke van der Bijl

Source: Why being smart is not enough — the social skills and structures of tackling complexity

 

Why being smart is not enough — the social skills and structures of tackling complexity

Imagine you are working for an organisation and you are invited to contribute to solving a certain complex problem that needs to be addressed, a problem that is outside the full control of your organisation. For example, you might be working in a university and be asked to think about how to improve the wellbeing of students. Or you are working for a tech business and asked to contribute to making a certain industry more sustainable. You are invited to a meeting to discuss this problem. When you enter the room you first notice that there are no tables and the chairs are positioned in a circle. The person receiving you introduces themselves as ‘the facilitator’. No one seems to be chairing the meeting, there is no meeting agenda and there does not seem to be anyone there who is taking minutes. You do see a massive piece of butchers’ paper stuck on the wall. Then you notice there are a lot of people there you have not met before. You start chatting to the person next to you and within a minute you realise that you totally disagree with the person and the way they think about the problem. But before you can start defending your point of view the person says: ‘your opinion is totally different than mine, how interesting!’ [1].

Sounds exciting? Something you do all the time? Or does it sound like a waste of time? Even though this is a fictitious situation, there are many organisations around the world that are adopting ways of working together that are similar to this. These organisations are aiming to address complex societal problems such as climate change, housing affordability, crime, and chronic health issues, and are applying new ways of thinking about these problems, because traditional problem-solving techniques have not been effective. At the same time, they are applying and experimenting with new ways of collaborating and working together. As an academic working in a faculty of transdisciplinary innovation I am very interested in these ways of working. Earlier this year I had the opportunity to conduct a small study on this topic as part of a 6-month ‘professional experience program’ generously offered by my current employer, the University of Technology Sydney. In this blog post I will share my main insights.

To explore this topic I spent one month each at two organisations that have expertise in addressing complex challenges: the MaRS Solutions Lab (Toronto), and the Oslo School of Architecture and Design (Oslo). And I conducted interviews with six experts in systems & complexity thinking and/or in facilitation of complex collaborations.

So what did I learn?

There is one thing that became very clear to me, and that is that thinking about how we work together and how we behave as individuals is at least as important as how we think about innovation in complex problem situations. Or, as Cheryl Dahle (Fliplabs, Carnegie Mellon University) put it:

“The insights, the research, the smarts, the intellectual heft part of this world gets you to a certain place and everything after that is about relationships”.

A second thing that I learned is that these ways of working together have characteristics that are very similar to those of the complex systems that these collaborations aim to influence, namely that:

  • These ways of working are systemic, acknowledging that the whole is more than the sum of its parts
  • There is a pattern or structure to these ways of working, but at the same time, they are unpredictable, and leave space for ‘emergence’
  • They have many parts that are invisible
  • And they are fundamentally different from the dominant current ways of working in organisations that are focused on efficiency, optimisation and exploitation.

To explain that, let me first take a step back and explain briefly what a complex system is and why it is relevant.

continues in source

 

Warm Data to Better Meet the Complex Risks of This Era

DECEMBER 7, 2018 ~ NORA BATESON
By Nora Bateson 2018

*This is a small piece that was written for the document presented for the General Assembly 2019 Global Risk Assessment.


The problems the world is facing now, including ecological damage, natural disasters, poverty, species loss, political upheaval, refugee trauma, and even health epidemics, can all be described as complex, that is, they are born of circumstances that are multi-causal and non-linear. This complexity vexes the traditional problem-solving model of separating the problems into singularly defined parts and solving for the symptoms. The very nature of complexity undermines the familiar mandate to define goals and strategies to achieve pre-envisioned, single sector solutions. None of the issues above can be understood as stand alone issues. These issues are wrapped in contextual interdependencies that require an entirely different approach in assessment, and action.

Warm Data is a Complementary form of Information.

A majority of current scientific research…


INCOSE Human Systems Integration 2019 Conference September 11-13, 2019 Biarritz, France

 

Source: HSI2019 when / where

Join us for the INCOSE Human Systems Integration 2019 Conference

Visit Biarritz

September 11 to 13, 2019
Le Bellevue Conference Center, Biarritz, France

The Model Thinker: What You Need to Know to Make Data Work for You: Scott E. Page

Complexity Digest

From the stock market to genomics laboratories, census figures to marketing email blasts, we are awash with data. But as anyone who has ever opened up a spreadsheet packed with seemingly infinite lines of data knows, numbers aren’t enough: we need to know how to make those numbers talk. In The Model Thinker, social scientist Scott E. Page shows us the mathematical, statistical, and computational models–from linear regression to random walks and far beyond–that can turn anyone into a genius. At the core of the book is Page’s “many-model paradigm,” which shows the reader how to apply multiple models to organize the data, leading to wiser choices, more accurate predictions, and more robust designs. The Model Thinker provides a toolkit for business people, students, scientists, pollsters, and bloggers to make them better, clearer thinkers, able to leverage data and information to their advantage.

Source: www.amazon.com


Improvisation Blog: Why has my blogging slowed down? (Some thoughts on holograms, music and machine learning) – Mark Johnson

Source: Improvisation Blog: Why has my blogging slowed down? (Some thoughts on holograms, music and machine learning)

 

Monday, 26 November 2018

Why has my blogging slowed down? (Some thoughts on holograms, music and machine learning)

I realise that I haven’t blogged very much recently. Partly it’s because I’ve been very busy and a bit exhausted. But also I think it’s because I’ve got so much in my head at the moment, I don’t know how to get anything out.

What I have been doing is a bit of cybernetic evangelism. It’s been great to take people to the Stafford Beer archive and watch their heads explode! It’s quite a predictable thing…

But I’m also thinking about Beer’s holism, and the way that his approach unfolds from nothing. It’s this “unfolding from nothing” which really fascinates me. It’s rather like my friend Peter Rowlands’s work on physics which expands from the idea of nothing (or his ‘nilpotent’ formulae). Is everything really nothing? What’s real about nothing?

I have some good friends visiting me at the end of the week who also know about this stuff. We’re going to do a session in the Beer archive. I’m hoping some colleagues from Liverpool also come along.

It’s weird how things start to fit together. I suppose my biggest interest in cybernetics at the moment is coherence: how do things fit together? The mechanistic/stochastic approach of cybernetics doesn’t address this question well. But David Bohm’s idea of the hologram (which Beer was at least aware of because he had a book on it), does.

All the things that fascinate me most, such as music, or conversation, or narrative, or biological form, are all coherent. Are they all holographic?

This question about holograms – particularly in music – has taken on a new dimension for me. I am also working with machine learning tools for a big medical project. We have a problem: how to adjust the judgement of a trained network without screwing up all the other judgements of the network. It’s not really how machine learning is meant to work.

We have a brute-force fix for the problem, but it’s rather unsatisfactory (and probably not reliable). It would be much better to understand what is happening inside a convolutional neural network (CNN).

It turns out that current thinking is that it is a kind of hologram. CNNs encode differences and orders of things in a fractal structure. That means that the nature of the problem we want to solve is “where to change the fractal/hologram so that a specific change in its ordering may be made”.

This is rather like asking “where to change the fractal holographic image so that the hologram changes shape in a particular way”. The nature of the challenge can be seen by examining a holographic plate:

What it encodes is the interference pattern of light. That means actually that it encodes distance and time (time because interference involves frequency). Now if we can understand how the interference pattern arises, we might be able to understand where to manipulate it.

Music, I suspect, has a holographic structure. I’m writing a paper at the moment where this structure is considered using the fractal images of anticipatory systems which were developed by Daniel Dubois. His images look like this:

In music’s hologram, I think what is encoded is the interference between different redundancies. Like light, this interference means that the hologram encodes time and difference. Because we can play with music in a very practical way, maybe there is a way in which insight gained from music can help with the practical problem of machine learning.

Let’s see.

In the meantime, I’ve felt the need to shake up my metasystem…

Complexity Across the Disciplines: Course Resources from Loren Demerath

From the excellent Human Current podcast at the NECSI conference: http://www.human-current.com/episode-109-the-social-pursuit-of-order

 

These course resources from Loren Demerath offer loads of links and content from a course rather modestly entitled

Soc 395: “The Emergence of Order: the Universe, Life, Consciousness, and Society”

Source: Complexity Across the Disciplines: Course Resources

Jeanne Hamming (English, Centenary)

Loren Demerath (Sociology, Centenary)

Dante Suarez (Economics, Trinity)

Mark Goadrich (Computer Sci., Hendrix)

Steve Desjardins (Chemistry, Washington & Lee)

Scott Davis (Philosophy, Richmond)

In 2016 the Associated Colleges of the South funded a team of faculty to create an interdisciplinary course on complexity. This page archives the resources for that course, which are free for educational use.

Resources for teaching complexity: professor interviews & discussions, PowerPoints & notes, models & illustrations, readings, videos, and podcasts.

Visual Agility: Why We Model – Ruth Malan on LinkedIn

https://www.linkedin.com/pulse/visual-agility-why-we-model-ruth-malan/ (with illustrations)

Visual Agility: Why We Model
Published on June 23, 2018
Ruth Malan
Architecture Consultant at Bredemeyer Consulting

Context for this Discussion
Design of complex systems is hard — wickedly hard! It takes all the cognitive assist we can muster. Trade-offs must be made because there is interaction — not just interaction among components to create a capability, but interaction among properties. And interaction between the system and its users and containing systems(-of-systems). And more! These systems are evolving — the more agile, the more we try to take this co-evolution, this learning across boundaries, this symmathesy, into account.

This is responsive design, with an emphasis on responsive, and on design. Design in the classic Herbert Simon sense of design to make the system more the way we want it to be, more the way it ought to be. And responsive not just in the user interaction sense, but responsive to need, to changing understanding of need, and changing needs and contexts of use and operation.

That, after all, is what we mean by agility — sensing change and responding adaptively. Responding to emerging or re-envisaged need, to opportunity or threat, and adapting. Adapting as the context shifts, and as we see opportunities to inventively combine and improve capabilities, innovating into the adjacent possible.

When we think of design in this way, as not just a learning process, but a co-learning process, it’s clear that we want to learn in the cheapest medium that will produce learning that helps us resolve design direction, and the design decisions.

So let’s revisit modeling, and why we should bring it back to the agile design table. I’m focusing here on architectural design (significant design decisions shaping how the system is built), but much of the discussion applies also to design of what the system is (what we’d usually call requirements). [Our orientation to architecture is that there needs to be interaction/co-learning not just across what the system is and its containing systems (or how it is used), but interaction between the design of what the system is and how it is built (to affect desired outcomes; what it is made of; interactions and emergent properties; etc.). But that is another story, for another day.]

We Model: To Observe
The point that I want to draw out here is that of sketching to observe and attend more closely. And sketching not only to see structure, but to explore and discover or uncover relationships (that may be obscured in or even by the code), and to investigate system behaviors. Placing an emphasis on understanding mechanisms by considering which parts work in concert to achieve some capability (function and properties), and how they do so. Adjusting our point of view, seeing from other perspectives. Seeing to understand, looking for surprises, for contradictions that unseat our assumptions. Building our theory of design, and this design, and the relationships between contextual demands and forces and the design and its outcomes.

We Model: To Think
Designing is thinking. Hard! Reasoning, relating, making trade-offs. Architectural design is thinking across the system — that’s a lot to hold in mind. Visual thinking expands our mental capacity, and sketching these images and models expands our processing power still more. The mind’s eye is resourceful, but it can’t readily see what it’s missing. Sketching creates external memory, so we can hold more information and see relationships, make connections, spot inconsistencies and gaps, and draw inferences about causalities and consequences, highlighting what we need to exercise or test further.

Sure, code externalizes thought too. And code (and TDD) is an effective design medium. Still, models help us think through aspects like overall structure, or particular design challenges, even before we have (all the) code. We play out ideas in our imagination and on paper/whiteboard/screen, to define, refine, re-imagine, and redefine the dominant architectural challenges and figure out architectural strategies to address them, identifying where to do more focused experiments in code. This may sound like a lot, but architectural decisions are make or break, structurally or strategically significant, and key to design integrity. They warrant closer attention. Judgment factors.

Drawing views of our system helps us notice the relationships between structure and dynamics, and to reason about the relationships that give rise to, boost, or damp properties. We pan across views, or out to wider frames, taking in more of the system and what it does, where, how, and why (again, because we must make trade-offs, we need to weigh value and import, risk, consequences, and side effects).

We Model: To Think Together
We draw diagrams or model (some aspect of the system) to think, alone, and to create a shared thoughtspace where we can think together (and across time) about the form and shape and flow of things, considering how-it-works both before we have code and when the very muchness of the code obfuscates and it is all too much to hold in our head, yet we need to think, explore, reason about interactions, cross-cutting concerns, how things work together, and such. [That long sentence reifies how soon too much becomes cognitively intractable.]

Now we have more minds actively engaged in coming up with alternatives, testing and challenging and improving the models. But also more shared understanding. At least, we’re closer to a reliably shared mental model of the system, with architectural views and maps to guide further design work, and redesign (we’re actively learning, if we’re innovating).

We Model: To Test
Models help us try out or test our ideas — in an exploratory way when they are just sketches, and thought experiments, where we “animate” the models in mind and in conversation. Just sketches, so less is invested. Less ego. Less time.

We sketch-prototype alternatives to try them out in the cheapest medium that fits what we’re trying to understand and improve. We seek to probe, to learn, to verify the efficacy of the design elements we’re considering, under multiple simultaneous demands. We acknowledge we can misperceive and deceive ourselves, and hold our work to scrutiny, seeing it from different perspectives, from different vantage points but also with different demands in mind. We consider and reconsider our design for fit to context, and to purpose. We evolve the design. We factor and refactor; we reify and elaborate. We test and evolve. We make trade-offs and judgment calls. We bring in others with fresh perspective to help us find flaws. We simulate. We figure out what to probe further, what to build and instrument. We bring what we can to bear, to best enable our judgment, given the concerns we’re dealing with.

We humans are amazing; we invented and built all the tech we extend our capabilities with! And fallible; many failures, often costly, got us here, and we’re still learning of and from unanticipated side-effects and consequences. Software is a highly cognitive substance with which to build systems on which people and organizations depend. So. We design-test our way, with different media to support, enhance, stress, and reveal flaws in our thinking. Yes, in code. But not only in code.

In the Cheapest Medium that Fits the Moment
Along the way — early, and then more as fits the moment — we’re “mob modeling” or “model storming,” “out loud,” “in pairs” or in groups. And all that agile stuff. Just in the cheapest medium for the moment, because we need to explore options quick and dirty. Hack them — with sketches/models, with mock-ups, with just-enough prototypes. Not just upfront, but whenever we have some exploring to do that we can do more cheaply than running experiments by building out the ideas in code. We do that too. Of course. But! We have the option to use diagrams and models to see what we can’t, or what is hard to see and reason about, with (just) code. Enough early. And enough along the way so that we course correct when we need to. So that we anticipate enough. So that we direct our attention to what is important to delve into and probe/test further. So that we put in infrastructure in good time. And avoid getting coupled to assumptions that cement the project into expectations honed for too narrow a customer need or group. And suchness.

In Sum, Then
When we, as a field, for the most part turned away from BDUF (Big Design Up Front) toward Agile methods, we tended, unfortunately, to turn away from architecture visualization and modeling. We’ve argued here that sketching and modeling are indeed a way to be agile – to learn with the cheapest method that will uncover key issues and alternatives, and give us a better handle on our design approach. Not to do this as Big Modeling Upfront, but to have and apply our modeling skills early, to understand and shape strategic opportunity, and to set design direction for the system. And to return to modeling whenever that is the cheapest way to explore a design idea, just enough, just in time, to get a better sense of which design bets are worth investing more in, by building the ideas out and deploying at least to some in our user base, to test more robustly.

“Building software is an expensive way to learn*” — Alistair Cockburn
Image Credit: Sketch by @Stuartliveart and Words by Alistair Cockburn (@TotherAlistair). Used with permission.

  • Footnote: The more complete version includes: “How are you going to probe the world, don’t just think through the problem?” along with “Building software is an expensive way to learn.” It comes from Alistair Cockburn’s “Heart of Agile” workshop, at the point where he is addressing probing and learning about the world the problem serves (like doing surveys).

In this post, I have borrowed “Building software is an expensive way to learn” into the architecture space to remind us to learn as cheaply as fits the design problem of the moment. This post uses “probe” in its dual sense: (i) to explore (in this case, with our mind aided by sketches/diagrams/modeling tools) to understand, discover, and investigate, and (ii) to instrument, to examine and learn. It addresses “how will we probe the world [of the technical design]?” with allusions to thought experiments, prototypes, and building out design elements to test them more robustly, but it would be useful to elaborate further in a post that focuses there. And then there are all the posts that could address the broader design context (design of what the system is and is becoming, going beyond design of the system internals). But next, I think I’ll address heuristics, again focused on architectural design.

Why “Many-Model Thinkers” Make Better Decisions

Source: Why “Many-Model Thinkers” Make Better Decisions

NOVEMBER 19, 2018
“To be wise you must arrange your experiences on a lattice of models.” — Charlie Munger

Organizations are awash in data — from geocoded transactional data to real-time website traffic to semantic quantifications of corporate annual reports. All these data and data sources only add value if put to use. And that typically means that the data is incorporated into a model. By a model, I mean a formal mathematical representation that can be applied to or calibrated to fit data.

Some organizations use models without knowing it. For example, a yield curve, which compares bonds with the same risk profile but different maturity dates, can be considered a model. A hiring rubric is also a kind of model. When you write down the features that make a job candidate worth hiring, you’re creating a model that takes data about the candidate and turns it into a recommendation about whether or not to hire that person. Other organizations develop sophisticated models. Some of those models are structural and meant to capture reality. Other models mine data using tools from machine learning and artificial intelligence.
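The hiring rubric mentioned above really is a model in the formal sense. A minimal sketch, where the features, weights, and threshold are purely hypothetical, to illustrate that writing a rubric down turns candidate data into a recommendation:

```python
# A hiring rubric written down as a tiny model: it takes data about a
# candidate and returns a recommendation. The features, weights, and
# threshold here are hypothetical, purely to illustrate the point.

def hire_recommendation(candidate):
    weights = {"experience_years": 2.0, "interview_score": 3.0, "referral": 5.0}
    score = sum(weights[k] * candidate.get(k, 0) for k in weights)
    return "hire" if score >= 20 else "no hire"

print(hire_recommendation({"experience_years": 4, "interview_score": 4, "referral": 1}))  # hire (score 25)
print(hire_recommendation({"experience_years": 1, "interview_score": 2, "referral": 0}))  # no hire (score 8)
```

Once the rubric is explicit like this, it can be tested, calibrated, and compared, which is exactly what distinguishes it from an implicit model in someone's head.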

The most sophisticated organizations — from Alphabet to Berkshire Hathaway to the CIA — all use models. In fact, they do something even better: they use many models in combination.

Without models, making sense of data is hard. Data helps describe reality, albeit imperfectly. On its own, though, data can’t recommend one decision over another. If you notice that your best-performing teams are also your most diverse, that may be interesting. But to turn that data point into insight, you need to plug it into some model of the world — for instance, you may hypothesize that having a greater variety of perspectives on a team leads to better decision-making. Your hypothesis represents a model of the world.

Though single models can perform well, ensembles of models work even better. That is why the best thinkers, the most accurate predictors, and the most effective design teams use ensembles of models. They are what I call many-model thinkers.

In this article, I explain why many models are better than one and also describe three rules for how to construct your own powerful ensemble of models: spread attention broadly, boost predictions, and seek conflict.

The case for models

First, some background on models. A model formally represents some domain or process, often using variables and mathematical formulas. (In practice, many people construct more informal models in their head, or in writing, but formalizing your models is often a helpful way of clarifying them and making them more useful.) For example, Point Nine Capital uses a linear model to sort potential startup opportunities based on variables representing the quality of the team and the technology. Leading universities, such as Princeton and Michigan, apply probabilistic models that represent applicants by grade point average, test scores, and other variables to determine their likelihood of graduating. Universities also use models to help students adopt successful behaviors. Those models use variables like changes in test scores over a semester. Disney used an agent-based model to design parks and attractions. That model created a computer rendition of the park complete with visitors and simulated their activity so that Disney could see how different decisions might affect how the park functioned. The Congressional Budget Office uses an economic model that includes income, unemployment, and health statistics to estimate the costs of changes to health care laws.

In these cases, the models organize the firehose of data. These models all help leaders explain phenomena and communicate information. They also impose logical coherence, and in doing so, aid in strategic decision making and forecasting. It should come as no surprise that models are more accurate as predictors than most people. In head-to-head competitions between people who use models and people who don’t, the former win, and typically do so by large margins.

Models win because they possess capabilities that humans lack. Models can embed and leverage more data. Models can be tested, calibrated, and compared. And models do not commit logical errors. Models do not suffer from cognitive biases. (They can, however, introduce or replicate human biases; that is one of the reasons for combining multiple models.)

Combining multiple models

While applying one model is good, using many models — an ensemble — is even better, particularly in complex problem domains. Here’s why: models simplify. So, no matter how much data a model embeds, it will always miss some relevant variable or leave out some interaction. Therefore, any model will be wrong.

With an ensemble of models, you can make up for the gaps in any one of the models. Constructing the best ensemble of models requires thought and effort. As it turns out, the most accurate ensembles of models do not consist of the highest performing individual models. You should not, therefore, run a horse race among candidate models and choose the four top finishers. Instead, you want to combine diverse models.
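A toy numerical sketch (all predictions invented) of why diverse models combine well: each model errs on different cases, in different directions, so their average beats either one alone.

```python
# Two "diverse" models whose errors partly cancel: averaging their
# predictions yields a lower error than the best individual model.
# All numbers are hypothetical, purely to illustrate the point.

def mean_abs_error(preds, truth):
    return sum(abs(p - t) for p, t in zip(preds, truth)) / len(truth)

truth   = [10, 20, 30, 40]
model_a = [12, 20, 30, 42]   # overshoots cases 1 and 4
model_b = [8, 22, 28, 38]    # errs on every case, mostly undershooting

ensemble = [(a + b) / 2 for a, b in zip(model_a, model_b)]

print(mean_abs_error(model_a, truth))    # 1.0
print(mean_abs_error(model_b, truth))    # 2.0
print(mean_abs_error(ensemble, truth))   # 0.5 -- better than either alone
```

Note that the weaker model still helps: what matters is that its errors are not correlated with the first model's, which is why a horse race among candidates is the wrong way to build the ensemble.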

For decades, Wall Street firms have used models to evaluate investment risk. Risk takes many forms. In addition to risk from financial market fluctuations, there exist risks from geopolitics, climatic events, and social movements, such as Occupy Wall Street, not to mention risks from cyber threats and other forms of terrorism. A standard risk model based on stock price correlations will not embed all of these dimensions. Hence, leading investment banks use ensembles of models to assess risks.

But what should that ensemble look like? Which models does one include, and which does one leave out?

The first guideline for building an ensemble is to look for models that focus attention on different parts of a problem or on different processes. By that I mean, your second model should include different variables. As mentioned above, models leave stuff out. Standard financial market models leave out fine-grained institutional details of how trades are executed. They abstract away from the ecology of beliefs and trading rules that generate price sequences. Therefore, a good second model would include those features.

The mathematician Doyne Farmer advocates agent-based models as a good second model. An agent-based model consists of rule based “agents” that represent people and organizations. The model is then run on a computer. In the case of financial risk, agent-based models can be designed to include much of that micro-level detail. An agent-based model of a housing market can represent each household, assigning it an income and a mortgage or rental payment. It can also include behavioral rules that describe conditions when the home’s owners will refinance and when they will declare bankruptcy. Those behavioral rules may be difficult to get right, and as a result, the agent-based model may not be that accurate — at least at first. But, Farmer and others would argue that over time, the models could become very accurate.

We care less about whether agent-based models would outperform other standard models than whether agent-based models will read signals missed by standard models. And they will. Standard models work on aggregates, such as Case-Shiller indices, which measure changes in the prices of houses. If the Case-Shiller index rises faster than income, a housing bubble may be likely. As useful as the index is, it is blind to distributional changes that hold means constant. If income increases go only to the top 1% while housing prices rise across the board, the index would be no different than if income increases were broad based. Agent-based models would not be blind to the distributional changes. They would notice that people earning $40,000 must hold $600,000 mortgages. The agent-based model is not necessarily better. Its value comes from focusing attention where the standard model does not.
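The distributional blindness described above is easy to see in miniature. In the hypothetical sketch below (all incomes and mortgages invented), two scenarios share the same aggregate mortgage-to-income ratio, so an index-style model treats them identically, while an agent-level check flags the $40,000 household holding a $600,000 mortgage.

```python
# Same aggregate numbers, different risk: an aggregate ratio vs. an
# agent-level check. All household figures are hypothetical.

def aggregate_ratio(incomes, mortgages):
    # A Case-Shiller-style aggregate sees only the totals.
    return sum(mortgages) / sum(incomes)

def at_risk_households(incomes, mortgages, limit=6.0):
    # An agent-level view checks each household's own leverage.
    return sum(1 for i, m in zip(incomes, mortgages) if m / i > limit)

# Scenario A: leverage is spread evenly.
incomes_a   = [100_000, 100_000, 100_000]
mortgages_a = [400_000, 400_000, 400_000]

# Scenario B: same totals, but one household holds a $600k mortgage
# on a $40k income.
incomes_b   = [40_000, 120_000, 140_000]
mortgages_b = [600_000, 300_000, 300_000]

print(aggregate_ratio(incomes_a, mortgages_a))     # 4.0
print(aggregate_ratio(incomes_b, mortgages_b))     # 4.0 -- identical
print(at_risk_households(incomes_a, mortgages_a))  # 0
print(at_risk_households(incomes_b, mortgages_b))  # 1 -- the hidden risk
```

The aggregate model is not wrong; it simply cannot see the question the agent-level model answers.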

The second guideline borrows the concept of boosting, a technique from machine learning. Ensemble classification algorithms, such as random forests, consist of a collection of simple decision trees. A decision tree classifying potential venture capital investments might say “if the market is large, invest.” Random forests are a technique to combine multiple decision trees. And boosting improves the power of these algorithms by using data to search for new trees in a novel way. Rather than look for trees that predict with high accuracy in isolation, boosting looks for trees that perform well when the forest of current trees does not. In other words, look for a model that attacks the weaknesses of your current model.
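As a rough from-scratch illustration of the boosting idea (not any particular library's algorithm), the toy sketch below fits one-feature decision "stumps" to residuals: each new stump is chosen to do well exactly where the current ensemble still errs. The data and thresholds are invented.

```python
# Toy residual boosting with one-feature stumps: every round fits the
# next stump to what the current ensemble still gets wrong.

def fit_stump(xs, targets):
    # Find the threshold split minimizing squared error against targets.
    best = None
    for t in xs:
        left = [v for x, v in zip(xs, targets) if x <= t]
        right = [v for x, v in zip(xs, targets) if x > t]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        err = sum((v - (lm if x <= t else rm)) ** 2 for x, v in zip(xs, targets))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def boost(xs, ys, rounds=5):
    models, preds = [], [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)   # attack current weaknesses
        models.append(stump)
        preds = [p + stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(m(x) for m in models)

xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, 2, 2, 8, 9]
boosted = boost(xs, ys)
```

Each round, the stump that helps most is the one that performs well where the current forest does not, which is the point of the passage above.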

Here’s one example. As mentioned, many venture capitalists use weighted attribute models to sift through the thousands of pitches that land at their doors. Common attributes include the team, the size of the market, the technological application, and timing. A VC firm might score each of these dimensions on a scale from 1 to 5 and then assign an aggregate score as follows:

Score = 10*Team + 8*Market size + 7*Technology + 4*Timing
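That weighted sum translates directly into code. A minimal sketch: the weights (10, 8, 7, 4) are from the example above, while the sample pitch is hypothetical.

```python
# The weighted-attribute score from the article, directly as code.

def vc_score(team, market_size, technology, timing):
    # Each attribute is rated on a 1-to-5 scale, per the article.
    return 10 * team + 8 * market_size + 7 * technology + 4 * timing

print(vc_score(team=5, market_size=4, technology=3, timing=2))  # 50 + 32 + 21 + 8 = 111
```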

This might be the best model the VC can construct. The second best model might use similar variables and similar weights. If so, it will suffer from the same flaws as the first model. That means that combining it with the first model will probably not lead to substantially better decisions.

A boosting approach would take data from all past decisions and see where the first model failed. For instance, it may be that investment opportunities with scores of 5 out of 5 on team, market size, and technology do not pan out as expected. This could be because those markets are crowded. Each of the three attributes — team, market size, and workable technology — predicts well in isolation, but if someone has all three, it may be likely that others do as well, and that a herd of horses tramples the hoped-for unicorn. The first model therefore would predict poorly in these cases. The idea of boosting is to go searching for models that do best specifically when your other models fail.

To give a second example, several firms I have visited have hired computer scientists to apply techniques from artificial intelligence to identify past hiring mistakes. This is boosting in its purest form. Rather than try to use AI to simply beat their current hiring model, they use AI to build a second model that complements their current hiring model. They look for where their current model fails and build new models to complement it.

In that way, boosting and attention share something in common: they both look to combine complementary models. But attention looks at what goes into the model — the types of variables it considers — whereas boosting focuses on what comes out — the cases where the first model struggles.

Boosting works best if you have lots of historical data on how your primary model performs. Sometimes, we don’t. In those cases, seek conflict. That is, look for models that disagree. When a team of people confronts a complex decision, it expects — in fact it wants — some disagreement. Unanimity would be a sign of group think. That’s true of models as well.

The only way the ensemble can improve on a single model is if the models differ. To borrow a quote from Richard Levins, the “truth lies at the intersection of independent lies.” It does not lie at the intersection of correlated lies. Put differently, just as you would not surround yourself with “yes men,” do not surround yourself with “yes models.”

Suppose that you run a pharmaceutical company and that you use a linear model to project sales of recently patented drugs. To build an ensemble, you might also construct a systems dynamics model as well as a contagion model. Say that the contagion model results in similar long-term sales but a slower initial uptake, but that the systems dynamics model leads to a much different forecast. If so, it creates an opportunity for strategic thinking. Why do the models differ? What can we learn from that, and how do we intervene?
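A minimal sketch of such a disagreement, with invented parameters: a linear model and a contagion-style (logistic) model that roughly agree on long-run sales while differing sharply on early uptake.

```python
import math

# Two hypothetical sales forecasts for the same drug: linear growth
# vs. logistic (contagion-style) adoption with slow early uptake.

def linear_sales(month, rate=100.0):
    return rate * month

def contagion_sales(month, market=2400.0, midpoint=12.0, steepness=0.5):
    # Logistic adoption: slow start, rapid spread, then saturation.
    return market / (1 + math.exp(-steepness * (month - midpoint)))

for month in (3, 12, 24):
    print(month, linear_sales(month), round(contagion_sales(month)))
```

Where the two forecasts diverge, the early months in this sketch, is exactly where the article suggests asking why the models differ and what to do about it.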

In sum, models, like humans, make mistakes because they fail to pay attention to relevant variables or interactions. Many-model thinking overcomes the failures of attention of any one model. It will make you wise.


Scott E. Page is the Leonid Hurwicz Collegiate Professor of Complex Systems, Political Science, and Economics at the University of Michigan and an external faculty member of the Santa Fe Institute. He is the author of The Model Thinker (Basic Books, 2018).

Ivo Velitchkov on Twitter: “The nature of social organization of production: From firms to complex dynamics – Terra – – Systems Research and Behavioral Science – Wiley Online Library https://t.co/NgIXZ3XFdb… https://t.co/xNiO3B6qix”


Reinvigorating Quality of Working Life Research | Grote + Guest | 2017 | Human Relations

Since the 1981 publication on #QualityOfWorkingLife by #EricTrist, perhaps it is time for a revisit. #GudelaGrote (ETH Zurich) and #DavidGuest (King’s College) wrote:

We will make and substantiate five claims in this essay:

  • (1) the initial QWL movement of the 1960 and 1970s offers an early model for evidence-based policy-making and managerial practice resulting from interdisciplinary social science research that provides useful lessons for contemporary practice;
  • (2) contemporary developments in work and in society more broadly justify a renewed focus on QWL;
  • (3) recent research relevant to QWL has been conducted with increasingly narrow disciplinary foci and overly optimistic assumptions regarding the compatibility of individual and organizational interests, which has limited its policy impact. Researchers need to address the challenge of competing perspectives in this regard;
  • (4) a revised list of QWL criteria and an associated analytic framework, that take into consideration both relevant developments in society and advances in research can serve as a basis for a renewed QWL research agenda;
  • (5) QWL researchers need to (re)learn how to create policy impact by working to an interdisciplinary, stakeholder-focused and intervention-oriented research agenda.

This kind of QWL research agenda should benefit evidence-based policy-making and interventions in organizations, but also academic research itself by rebalancing its rigour and relevance.

We will conclude with some remarks on where we hope a discussion provoked by this essay might lead us as a scientific community concerned with improving QWL [Grote and Guest (2017), pp. 150-151, editorial paragraphing added].

Although the researchers see the scientific approach returning to an interdisciplinary footing, they view the political and economic environment in 2017 as unfavourable towards QWL.

Table 1. Changing frames for quality of working life (QWL) research.

| | Original QWL movement | QWL research from the 90s to today | Proposed future QWL research |
|---|---|---|---|
| Orientation towards practice | Normative; evidence-based intervention | Creating an evidence base for practice | Normative; creating an evidence base for practice and evidence-based interventions |
| Research focus | Relevance | Rigour | Relevance and rigour |
| Scientific approach | Interdisciplinary | Disciplinary | Interdisciplinary |
| Level of analysis | Meso to macro | Micro to meso | Multi-level |
| Promoted employment relations | Collective agreements | Individual agreements | Combining collective and individual focus |
| Political and economic environment | Favourable towards QWL | Unfavourable towards QWL | Unfavourable towards QWL |
| Social impetus | Emphasis on collective emancipation as a route to societal prosperity | Individual proactivity for personal emancipation | Emphasis on individual and collective paths to emancipation |

Employment relations no longer rest on collective agreements alone, but on combining the individual and the collective.

After revising a list of quality of working life criteria (adapted from Walton (1973) and Walton (1974)), the researchers propose a framework.

In Figure 1, we outline an integrative framework that incorporates all criteria in the classification.

Figure 1. An integrated framework for future quality of working life research.


At its heart (level 1) is the individual worker and their job, reflected in Individual proactivity and the Development of human capacities, implying a focus on job content, decision-latitude and employee development.

In the first band around this core (level 2), reflecting the organizational context of work, we locate organizational HRM policy-related criteria including Adequate and fair compensation, Safe and healthy working environment, and Social integration.

The outer band (level 3) covers issues related to the world outside work including Consideration of the total life space, Social relevance and Flexible working, although the latter potentially cuts across all three levels.

The boundaries between the different levels of analysis are likely to vary in strength and there is inevitably some overlap. Specifically, Growth and security is placed at the boundary of level 1 and 2 and Constitutionalism, that is the protection and promotion of employees’ rights and mechanisms for representation, sits between levels 2 and 3.

Outside the sphere of QWL we locate national and international institutional and legislative arrangements and the wider economic and financial systems that facilitate, prescribe and also inhibit QWL activities [Grote and Guest (2017), pp. 156-157, editorial paragraphing added].

The researchers then propose four research approaches that will have impact.

Reference

Grote, Gudela, and David Guest. 2017. “The Case for Reinvigorating Quality of Working Life Research.” Human Relations 70 (2): 149–67. https://doi.org/10.1177/0018726716654746.

Trist, Eric L. 1981. The Evolution of Socio-Technical Systems: A Conceptual Framework and Action Research Program. Occasional Paper 2. Toronto, Canada: Ontario Quality of Working Life Centre.

#quality, #quality-of-working-life, #trist

Systems thinking for evaluation design

From Sjon van ‘t Hof’s always brilliant blog – and Bob Williams is always someone to look out for.

Transcript, summary, and concept map
During March 25-26, 2015, Bob Williams, the main author of Wicked Solutions and several other books on systems thinking, gave a workshop “Wicked Solutions: A Systems Approach to Complex Problems” on the use of the systems approach in evaluation design at the Research Institute for Humanity and Nature (RIHN) in Kyoto, Japan. RIHN is a development organization that conducts practical transdisciplinary studies of development problems and their solution. Some of the researchers participating in the workshop were involved in peatland management research on Bali and Sulawesi, Indonesia. Peatland is a major part of coastal wetland geomorphology around the world (4 million km2). The catch (22) is that peat grows naturally, but the process is reversed when the land is drained for agriculture. Peat covers half of the Netherlands, where the peatlands – which are often already below sea level – subside to ever lower levels, while sea levels are rising. A small part of the 2-day workshop was taped on video, edited, and posted on Youtube in two 30-minute parts youtu.be/lFcWhGE7moQ and youtu.be/5RRHpXl2hrw. They are not about peat but about the principles of systemic design. The transcription and slides have been combined in a single pdf. This post contains a concept map and summary. Have fun.


Practice of Change – Doctoral Dissertation for review until March 2019 – Joichi Ito

Source: The Practice of Change

How I survived being interested in everything

Doctoral dissertation, academic year 2018

Keio University

Graduate School of Media & Governance

Joichi Ito

Abstract

Over the last century civilization has systematically supported a market-based approach to developing technical, financial, social and legal tools that focus on efficiency, growth and productivity. In this manner we have achieved considerable progress on some of the most pressing humanitarian challenges, such as eradicating infectious diseases and making life easier and more convenient. However, we have often put our tools and methods to use with little regard to their systemic or long-term effects, and have thereby created a set of new, interconnected, and more complex problems. Our new problems require new approaches: new understanding, solution design and intervention. Yet we continue to try to solve these new problems with the same tools that caused them.

Therefore in my dissertation I ask:

How can we understand and effectively intervene in interconnected complex adaptive systems?

 In particular, my thesis presents through theory and practice the following contributions to addressing these problems:

 

  1. A post-Internet framework for understanding and intervening in complex adaptive systems. Drawing on systems dynamics, evolutionary dynamics and theory of change based on causal networks, I describe a way to understand and suggest ways to intervene in complex systems. I argue that an anti-disciplinary approach and paradigm shifts are required to achieve the outcomes we desire.
  2. Learnings from the creation and management of post-Internet organizations that can be applied to designing and deploying interventions. I propose an architecture of layers of interoperability to unbundle complex, inflexible, and monolithic systems and increase competition, cooperation, generativity, and flexibility. I argue that the Internet is the best example of this architecture and that the Internet has provided an opportunity to deploy this architecture in other domains. I demonstrate how the Internet has made the world more complex but, through lowering the cost of communication and collaboration, has enabled new forms of organization and production. This has changed the nature of our interventions.
  3. How and why we must change the values of society from one based on the measurement of financial value to flourishing and robustness. The paradigm determines what we measure and generates the values and the goals of a system. Measuring value financially has created a competitive market-based system that has provided many societal benefits but has produced complex problems not solvable through competitive market-based solutions. In order to address these challenges, we must shift the paradigm across our systems to focus on a more complex measure of flourishing and robustness. In order to transcend our current economic paradigm, the transformation will require a movement that includes arts and culture to transform strongly held beliefs. I propose a framework of values based on the pursuit of flourishing and a method for transforming ourselves.

 

Reflecting on my work experience, I examine my successes and failures in the form of learnings and insights. I discuss what questions are outstanding and conclude with a call to action with a theory of change; we need to bring about a fundamental normative shift in society through communities, away from the pursuit of growth for growth’s sake and towards a sustainable sensibility of flourishing that can draw on both historical examples and the sensibilities of some modern indigenous cultures, as well as new values emerging from theoretical and practical progress in science.

“Question authority and think for yourself.”

— Timothy Leary


Scheming with Timothy Leary in 1995

Keywords: Cybernetics, Systems Dynamics, Philosophy of Science, Internet, Cryptocurrency

This work is open for community review until March 4, 2019. After this date, readers are encouraged to continue commenting on and engaging with the text, but their comments will not be taken into consideration for the revision of the work. A revised version of Practice of Change will be published by the MIT Press in the fall of 2019.

Chapters

1. Introduction

by Joichi Ito

Nov 19, 2018

Description of thesis and overview of the structure and argument of the dissertation.

2. Requiring Change

by Joichi Ito

Nov 19, 2018

Descriptions of several systems that require interventions as a result of the increasing complexity of their environments.

3. Theory of Change

by Joichi Ito

Nov 19, 2018

A theory of change developed through the work of others and my own experience.

4. Practice of Change

by Joichi Ito

Nov 19, 2018

Describing through my own experience how to test and deploy the ideas.

5. Agents of Change

by Joichi Ito

Nov 19, 2018

Personal reflections and thoughts on how we might behave as individuals and institutions.

6. Conclusion

by Joichi Ito

Nov 19, 2018

Summary of the dissertation and exploration of future work.


D J Stewart: Origins of Cybernetics

Update: site now dead but links at:
https://web.archive.org/web/20171128041809/http://www.hfr.org.uk:80/cybernetics-pages/origins.htm
https://web.archive.org/web/20171128045734/http://www.hfr.org.uk/cybernetics-pages/index.htm

Origins of Cybernetics

Source: D J Stewart: Origins of Cybernetics

 

About Cybernetics

This Internet version © Copyright D J Stewart 2000. All rights reserved.
Original version © Copyright D J Stewart 1959.

An essay on the Origins of Cybernetics

D J STEWART

Contents

Norbert Wiener meets Arturo Rosenblueth
Prediction and voluntary control
The birth certificate of cybernetics
Classification of behaviour
Purpose equated with feedback
Application to cardiac muscle
Rosenblueth gives the first talk on cybernetics
The new conference group forms
Cybernetics is given its name
The group holds regular meetings
The theory of information
Automatic computing machinery

 

The human factors research website

http://www.hfr.org.uk/index.htm

also has:

 

D J Stewart

Pages about Cybernetics
Books on Cybernetics
Pages about Ternality Theory
Pages about Ternary Analysis
References and abstracts of papers on Ternality Theory and Ternary Analysis
Pages about Understanding of Science
References and abstracts of papers on Understanding of Science
About D J Stewart
Links to other relevant websites