The Model Thinker: What You Need to Know to Make Data Work for You: Scott E. Page

Complexity Digest

From the stock market to genomics laboratories, census figures to marketing email blasts, we are awash with data. But as anyone who has ever opened up a spreadsheet packed with seemingly infinite lines of data knows, numbers aren’t enough: we need to know how to make those numbers talk. In The Model Thinker, social scientist Scott E. Page shows us the mathematical, statistical, and computational models – from linear regression to random walks and far beyond – that can turn anyone into a genius. At the core of the book is Page’s “many-model paradigm,” which shows the reader how to apply multiple models to organize the data, leading to wiser choices, more accurate predictions, and more robust designs. The Model Thinker provides a toolkit for business people, students, scientists, pollsters, and bloggers to make them better, clearer thinkers, able to leverage data and information to their advantage.

Source: www.amazon.com


Improvisation Blog: Why has my blogging slowed down? (Some thoughts on holograms, music and machine learning) – Mark Johnson

Source: Improvisation Blog: Why has my blogging slowed down? (Some thoughts on holograms, music and machine learning)

 

Monday, 26 November 2018

Why has my blogging slowed down? (Some thoughts on holograms, music and machine learning)

I realise that I haven’t blogged very much recently. Partly it’s because I’ve been very busy and a bit exhausted. But also I think it’s because I’ve got so much in my head at the moment, I don’t know how to get anything out.

What I have been doing is a bit of cybernetic evangelism. It’s been great to take people to the Stafford Beer archive and watch their heads explode! It’s quite a predictable thing…

But I’m also thinking about Beer’s holism, and the way that his approach unfolds from nothing. It’s this “unfolding from nothing” which really fascinates me. It’s rather like my friend Peter Rowlands’s work on physics which expands from the idea of nothing (or his ‘nilpotent’ formulae). Is everything really nothing? What’s real about nothing?

I have some good friends visiting me at the end of the week who also know about this stuff. We’re going to do a session in the Beer archive. I’m hoping some colleagues from Liverpool also come along.

It’s weird how things start to fit together. I suppose my biggest interest in cybernetics at the moment is coherence: how do things fit together? The mechanistic/stochastic approach of cybernetics doesn’t address this question well. But David Bohm’s idea of the hologram (which Beer was at least aware of because he had a book on it), does.

All the things that fascinate me most, such as music, or conversation, or narrative, or biological form, are all coherent. Are they all holographic?

This question about holograms – particularly in music – has taken on a new dimension for me. I am also working with machine learning tools for a big medical project. We have a problem: how to adjust the judgement of a trained network without screwing up all the other judgements of the network. It’s not really how machine learning is meant to work.

We have a brute-force fix for the problem, but it’s rather unsatisfactory (and probably not reliable). It would be much better to have a deeper understanding of what is happening in a convolutional neural network (CNN).

It turns out that current thinking is that it is a kind of hologram. CNNs encode differences and orders of things in a fractal structure. That means that the nature of the problem we want to solve is “where to change the fractal/hologram so that a specific change in its ordering may be made”.

This is rather like asking “where to change the fractal holographic image so that the hologram changes shape in a particular way”. The nature of the challenge can be seen by examining a holographic plate:

What it encodes is the interference pattern of light. That means it actually encodes distance and time (time, because interference involves frequency). Now, if we can understand how the interference pattern arises, we might be able to understand where to manipulate it.
Music, I suspect, has a holographic structure. I’m writing a paper at the moment in which this structure is considered using the fractal images of anticipatory systems developed by Daniel Dubois.
In music’s hologram, I think what is encoded is the interference between different redundancies. Like light, this interference means that the hologram encodes time and difference. Because we can play with music in a very practical way, maybe there is a way in which insight gained from music can help with the practical problem of machine learning.
Let’s see.
In the meantime, I’ve felt the need to shake up my metasystem…

Complexity Across the Disciplines: Course Resources from Loren Demerath

From the excellent Human Current podcast at the NECSI conference: http://www.human-current.com/episode-109-the-social-pursuit-of-order

 

These resources from Loren Demerath offer loads of links and content from a course rather modestly entitled

Soc 395: “The Emergence of Order: the Universe, Life, Consciousness, and Society”

Source: Complexity Across the Disciplines: Course Resources

The faculty team:

  • Jeanne Hamming, English, Centenary
  • Loren Demerath, Sociology, Centenary
  • Dante Suarez, Economics, Trinity
  • Mark Goadrich, Computer Sci., Hendrix
  • Steve Desjardins, Chemistry, Washington & Lee
  • Scott Davis, Philosophy, Richmond

In 2016 the Associated Colleges of the South funded a team of faculty to create an interdisciplinary course on complexity. This page archives the resources for that course, which are free for educational use.

Resources for teaching complexity:

  • Professor Interviews & Discussions
  • Powerpoints & Notes
  • Models & Illustrations
  • Readings
  • Videos
  • Podcasts

Visual Agility: Why We Model – Ruth Malan on LinkedIn

https://www.linkedin.com/pulse/visual-agility-why-we-model-ruth-malan/ (with illustrations)

Visual Agility: Why We Model
Published on June 23, 2018
Ruth Malan
Architecture Consultant at Bredemeyer Consulting

Context for this Discussion
Design of complex systems is hard — wickedly hard! It takes all the cognitive assist we can muster. Trade-offs must be made because there is interaction — not just interaction among components to create a capability, but interaction among properties. And interaction between the system and its users and containing systems(-of-systems). And more! These systems are evolving — the more agile, the more we try to take this co-evolution, this learning across boundaries, this symmathesy, into account.

This is responsive design, with an emphasis on responsive, and on design. Design in the classic Herbert Simon sense of design to make the system more the way we want it to be, more the way it ought to be. And responsive not just in the user interaction sense, but responsive to need, to changing understanding of need, and changing needs and contexts of use and operation.

That, after all, is what we mean by agility — sensing change and responding adaptively. Responding to emerging or re-envisaged need, to opportunity or threat, and adapting. Adapting as the context shifts, and as we see opportunities to inventively combine and improve capabilities, innovating into the adjacent possible.

When we think of design in this way, as not just a learning process, but a co-learning process, it’s clear that we want to learn in the cheapest medium that will produce learning that helps us resolve design direction, and the design decisions.

So let’s revisit modeling, and why we should bring it back to the agile design table. I’m focusing here on architectural design (significant design decisions shaping how the system is built), but much of the discussion applies also to design of what the system is (what we’d usually call requirements). [Our orientation to architecture is that there needs to be interaction/co-learning not just across what the system is and its containing systems (or how it is used), but interaction between the design of what the system is and how it is built (to affect desired outcomes; what it is made of; interactions and emergent properties; etc.). But that is another story, for another day.]

We Model: To Observe
The point that I want to draw out here, is that of sketching to observe and attend more closely. And sketching not only to see structure, but to explore and discover or uncover relationships (that may be obscured in or even by the code), and to investigate system behaviors. Placing an emphasis on understanding mechanisms by considering which parts work in concert to achieve some capability (function and properties), and how they do so. Adjusting our point of view, seeing from other perspectives. Seeing to understand, looking for surprises, for contradictions that unseat our assumptions. Building our theory of design, and this design, and the relationships between contextual demands and forces and the design and its outcomes.

We Model: To Think
Designing is thinking. Hard! Reasoning, relating, making trade-offs. Architectural design is thinking across the system — that’s a lot to hold in mind. Visual thinking expands our mental capacity, and sketching these images and models expands our processing power still more. The mind’s eye is resourceful, but it can’t readily see what it’s missing. Sketching creates external memory, so we can hold more information and see relationships, make connections, spot inconsistencies and gaps, and draw inferences about causalities and consequences, highlighting what we need to exercise or test further.

Sure, code externalizes thought too. And code (and TDD) is an effective design medium. Still, models help us think through aspects like overall structure, or particular design challenges, even before we have (all the) code. We play out ideas in our imagination and on paper/whiteboard/screen, to define, refine, re-imagine, and redefine the dominant architectural challenges and figure out architectural strategies to address them, identifying where to do more focused experiments in code. This may sound like a lot, but architectural decisions are make or break, structurally or strategically significant, and key to design integrity. They warrant closer attention. Judgment factors.

Drawing views of our system, helps us notice the relationships between structure and dynamics, to reason about relationships that give rise to and boost or damp properties. We pan across views, or out to wider frames, taking in more of the system and what it does where and how and why (again, because we must make tradeoffs we need to weigh value/import and risk/consequences and side effects).

We Model: To Think Together
We draw diagrams or model (some aspect of the system) to think, alone, and to create a shared thoughtspace where we can think together (and across time) about the form and shape and flow of things, considering how-it-works both before we have code and when the very muchness of the code obfuscates and it is all too much to hold in our head, yet we need to think, explore, reason about interactions, cross-cutting concerns, how things work together, and such. [That long sentence reifies how soon too much becomes cognitively intractable.]

Now we have more minds actively engaged in coming up with alternatives, testing and challenging and improving the models. But also more shared understanding. At least, we’re closer to a more reliably shared mental model of the system, with architectural views and maps to guide further design, and redesign (we’re actively learning, if we’re innovating), work.

We Model: To Test
Models help us try out or test our ideas — in an exploratory way when they are just sketches, and thought experiments, where we “animate” the models in mind and in conversation. Just sketches, so less is invested. Less ego. Less time.

We sketch-prototype alternatives to try them out in the cheapest medium that fits what we’re trying to understand and improve. We seek to probe, to learn, to verify the efficacy of the design elements we’re considering, under multiple simultaneous demands. We acknowledge we can misperceive and deceive ourselves, and hold our work to scrutiny, seeing it from different perspectives, from different vantage points but also with different demands in mind. We consider and reconsider our design for fit to context, and to purpose. We evolve the design. We factor and refactor; we reify and elaborate. We test and evolve. We make trade-offs and judgment calls. We bring in others with fresh perspective to help us find flaws. We simulate. We figure out what to probe further, what to build and instrument. We bring what we can to bear, to best enable our judgment, given the concerns we’re dealing with.

We humans are amazing; we invented and built all the tech we extend our capabilities with! And fallible; many failures, often costly, got us here, and we’re still learning of and from unanticipated side-effects and consequences. Software is a highly cognitive substance with which to build systems on which people and organizations depend. So. We design-test our way, with different media and mediums to support, enhance, stress and reveal flaws in our thinking. Yes in code. But not only in code.

In the Cheapest Medium that Fits the Moment
Along the way — early, and then more as fits the moment — we’re “mob modeling” or “model storming,” “out loud,” “in pairs” or in groups. And all that agile stuff. Just in the cheapest medium for the moment, because we need to explore options quick and dirty. Hack them — with sketches/models, with mock-ups, with just-enough prototypes. Not just upfront, but whenever we have some exploring to do, that we can do more cheaply than running experiments by building out the ideas in code. We do that too. Of course. But! We have the option to use diagrams and models to see what we can’t, or what is hard to see and reason about, with (just) code. Enough early. And enough along the way so that we course correct when we need to. So that we anticipate enough. So that we direct our attention to what is important to delve into and probe/test further. So that we put in infrastructure in good time. And avoid getting coupled to assumptions that cement the project into expectations that are honed for too narrow a customer need/group. And suchness.

In Sum, Then
When we, as a field, for the most part turned away from BDUF (big design upfront) toward Agile methods, we tended, unfortunately, to turn away from architecture visualization and modeling. We’ve argued here that sketching and modeling is indeed a way to be agile – to learn with the cheapest method that will uncover key issues, alternatives, and give us a better handle on our design approach. Not to do this as Big Modeling Upfront, but to have and apply our modeling skills early, to understand and shape strategic opportunity, and to set design direction for the system. And to return to modeling whenever that is the cheapest way to explore a design idea, just enough, just in time, to get a better sense of design bets worth investing more in, by building the ideas out and deploying at least to some in our user base, to test more robustly.

“Building software is an expensive way to learn*” — Alistair Cockburn
Image Credit: Sketch by @Stuartliveart and Words by Alistair Cockburn (@TotherAlistair). Used with permission.

  • Footnote: The more complete version includes: “How are you going to probe the world, don’t just think through the problem?” along with “Building software is an expensive way to learn.” It comes from Alistair Cockburn’s “Heart of Agile” workshop, at the point where he is addressing probing and learning about the world the problem serves (like doing surveys).

In this post, I have borrowed “Building software is an expensive way to learn” into the architecture space to remind us to learn as cheaply as fits the design problem of the moment. This post uses “probe” in its dual sense — (i) to explore (in this case, with our mind aided by sketches/diagrams/modeling tools) to understand/discover/investigate and (ii) to instrument to examine/learn. It addresses “how will we probe the world [of the technical design]?” with allusions to thought experiments, prototypes and building out design elements to more robustly test them, etc., but it would be useful to elaborate further in a post that focuses there. And then there’s all the posts that could address the more broad design context (design of what the system is and is becoming, going beyond design of the system internals). But next, I think I’ll address heuristics, again focused on architectural design.

Why “Many-Model Thinkers” Make Better Decisions

Source: Why “Many-Model Thinkers” Make Better Decisions

 

Why “Many-Model Thinkers” Make Better Decisions

NOVEMBER 19, 201
“To be wise you must arrange your experiences on a lattice of models.”  Charlie Munger

Organizations are awash in data — from geocoded transactional data to real-time website traffic to semantic quantifications of corporate annual reports. All these data and data sources only add value if put to use. And that typically means that the data is incorporated into a model. By a model, I mean a formal mathematical representation that can be applied to or calibrated to fit data.

Some organizations use models without knowing it. For example, a yield curve, which compares bonds with the same risk profile but different maturity dates, can be considered a model. A hiring rubric is also a kind of model. When you write down the features that make a job candidate worth hiring, you’re creating a model that takes data about the candidate and turns it into a recommendation about whether or not to hire that person. Other organizations develop sophisticated models. Some of those models are structural and meant to capture reality. Other models mine data using tools from machine learning and artificial intelligence.

The most sophisticated organizations — from Alphabet to Berkshire Hathaway to the CIA — all use models. In fact, they do something even better: they use many models in combination.

Without models, making sense of data is hard. Data helps describe reality, albeit imperfectly. On its own, though, data can’t recommend one decision over another. If you notice that your best-performing teams are also your most diverse, that may be interesting. But to turn that data point into insight, you need to plug it into some model of the world — for instance, you may hypothesize that having a greater variety of perspectives on a team leads to better decision-making. Your hypothesis represents a model of the world.

Though single models can perform well, ensembles of models work even better. That is why the best thinkers, the most accurate predictors, and the most effective design teams use ensembles of models. They are what I call many-model thinkers.

In this article, I explain why many models are better than one and also describe three rules for how to construct your own powerful ensemble of models: spread attention broadly, boost predictions, and seek conflict.

The case for models

First, some background on models. A model formally represents some domain or process, often using variables and mathematical formulas. (In practice, many people construct more informal models in their head, or in writing, but formalizing your models is often a helpful way of clarifying them and making them more useful.) For example, Point Nine Capital uses a linear model to sort potential startup opportunities based on variables representing the quality of the team and the technology. Leading universities, such as Princeton and Michigan, apply probabilistic models that represent applicants by grade point average, test scores, and other variables to determine their likelihood of graduating. Universities also use models to help students adopt successful behaviors. Those models use variables like changes in test scores over a semester. Disney used an agent-based model to design parks and attractions. That model created a computer rendition of the park complete with visitors and simulated their activity so that Disney could see how different decisions might affect how the park functioned. The Congressional Budget Office uses an economic model that includes income, unemployment, and health statistics to estimate the costs of changes to health care laws.

In these cases, the models organize the firehose of data. These models all help leaders explain phenomena and communicate information. They also impose logical coherence, and in doing so, aid in strategic decision making and forecasting. It should come as no surprise that models are more accurate as predictors than most people. In head-to-head competitions between people who use models and people who don’t, the former win, and typically do so by large margins.

Models win because they possess capabilities that humans lack. Models can embed and leverage more data. Models can be tested, calibrated, and compared. And models do not commit logical errors. Models do not suffer from cognitive biases. (They can, however, introduce or replicate human biases; that is one of the reasons for combining multiple models.)

Combining multiple models

While applying one model is good, using many models — an ensemble — is even better, particularly in complex problem domains. Here’s why: models simplify. So, no matter how much data a model embeds, it will always miss some relevant variable or leave out some interaction. Therefore, any model will be wrong.

With an ensemble of models, you can make up for the gaps in any one of the models. Constructing the best ensemble of models requires thought and effort. As it turns out, the most accurate ensembles of models do not consist of the highest performing individual models. You should not, therefore, run a horse race among candidate models and choose the four top finishers. Instead, you want to combine diverse models.
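The arithmetic behind this point can be seen in a toy sketch (all numbers below are invented for illustration, not drawn from the article): two individually weaker models whose errors point in opposite directions can average out to beat the single best model.

```python
# Toy illustration: combining diverse models can beat the single best one.
# All numbers are invented for illustration.

true_vals = [3.0, 5.0, 7.0, 9.0]

# Three candidate models, each systematically biased in a different way.
model_a = [y + 1.0 for y in true_vals]   # always overshoots
model_b = [y - 1.0 for y in true_vals]   # always undershoots
model_c = [y + 0.5 for y in true_vals]   # the best individual model

def mae(preds, truth):
    """Mean absolute error."""
    return sum(abs(p - t) for p, t in zip(preds, truth)) / len(truth)

# The two "worse" models make opposite errors, so averaging cancels them out.
ensemble = [(a + b) / 2 for a, b in zip(model_a, model_b)]

print(mae(model_c, true_vals))   # 0.5 -- best single model
print(mae(ensemble, true_vals))  # 0.0 -- ensemble of two weaker, diverse models
```

A horse race would pick model_c and discard the two models that, combined, are exactly right.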

For decades, Wall Street firms have used models to evaluate investment risk. Risk takes many forms. In addition to risk from financial market fluctuations, there exist risks from geopolitics, climatic events, and social movements, such as Occupy Wall Street, not to mention risks from cyber threats and other forms of terrorism. A standard risk model based on stock price correlations will not embed all of these dimensions. Hence, leading investment banks use ensembles of models to assess risks.

But, what should that ensemble look like?  Which models does one include, and which does one leave out?

The first guideline for building an ensemble is to look for models that focus attention on different parts of a problem or on different processes. By that I mean, your second model should include different variables. As mentioned above, models leave stuff out. Standard financial market models leave out fine-grained institutional details of how trades are executed. They abstract away from the ecology of beliefs and trading rules that generate price sequences. Therefore, a good second model would include those features.

The mathematician Doyne Farmer advocates agent-based models as a good second model. An agent-based model consists of rule based “agents” that represent people and organizations. The model is then run on a computer. In the case of financial risk, agent-based models can be designed to include much of that micro-level detail. An agent-based model of a housing market can represent each household, assigning it an income and a mortgage or rental payment. It can also include behavioral rules that describe conditions when the home’s owners will refinance and when they will declare bankruptcy. Those behavioral rules may be difficult to get right, and as a result, the agent-based model may not be that accurate — at least at first. But, Farmer and others would argue that over time, the models could become very accurate.
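A minimal sketch of what such an agent-based model looks like in code (the households, incomes, payment proxy, and stress rule below are all invented for illustration, not Farmer's actual models):

```python
import random

random.seed(0)  # make the toy run reproducible

# Each "agent" is a household with an annual income and a mortgage balance.
# All distributions and thresholds here are invented purely for illustration.
households = [
    {"income": random.uniform(30_000, 150_000),
     "mortgage": random.uniform(100_000, 700_000)}
    for _ in range(1_000)
]

def step(households, rate_rise=0.0):
    """One tick of the model: apply a behavioural rule to every agent."""
    defaults = 0
    for h in households:
        annual_payment = h["mortgage"] * (0.05 + rate_rise)  # crude payment proxy
        if annual_payment > 0.5 * h["income"]:               # stress rule
            defaults += 1
    return defaults

# An aggregate index would miss this: the same mean income can hide
# many stretched households that are visible at the agent level.
print(step(households))                   # defaults at baseline rates
print(step(households, rate_rise=0.02))   # defaults after a rate shock
```

The behavioural rule is crude, as the article warns, but the micro-level detail (who holds which mortgage on which income) is exactly what the aggregate model throws away.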

We care less about whether agent-based models would outperform other standard models than whether agent-based models will read signals missed by standard models. And they will. Standard models work on aggregates, such as Case-Shiller indices, which measure changes in prices of houses. If the Case-Shiller index rises faster than income, a housing bubble may be likely. As useful as the index is, it is blind to distributional changes that hold means constant. If income increases go only to the top 1% while housing prices rise across the board, the index would be no different than if income increases were broad based. Agent-based models would not be blind to the distributional changes. They would notice that people earning $40,000 must hold $600,000 mortgages. The agent-based model is not necessarily better. Its value comes from focusing attention where the standard model does not.

The second guideline borrows the concept of boosting, a technique from machine learning. Ensemble classification algorithms, such as random forests, consist of a collection of simple decision trees. A decision tree classifying potential venture capital investments might say “if the market is large, invest.” Random forests are a technique to combine multiple decision trees. And boosting improves the power of these algorithms by using data to search for new trees in a novel way. Rather than look for trees that predict with high accuracy in isolation, boosting looks for trees that perform well when the forest of current trees does not. In other words, look for a model that attacks the weaknesses of your current model.
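The core move of boosting can be sketched in a few lines: score candidate rules only on the cases the current model gets wrong. The investment "rules" and outcomes below are hypothetical, invented to illustrate the idea rather than any real classifier.

```python
# Toy boosting: pick the next rule by how well it fixes the current model's
# mistakes, not by its stand-alone accuracy. Data and rules are invented.

# Each case: (big_market, strong_team) -> did the investment succeed?
cases = [
    ((1, 1), 0),  # big market AND strong team: crowded, failed
    ((1, 0), 1),
    ((0, 1), 1),
    ((0, 0), 0),
]

rule_market = lambda x: x[0]               # "invest if the market is large"
rule_team   = lambda x: x[1]               # "invest if the team is strong"
rule_contra = lambda x: 1 - (x[0] & x[1])  # "beware when everything looks perfect"

def errors(rule, subset):
    """Cases in `subset` that `rule` misclassifies."""
    return [c for c in subset if rule(c[0]) != c[1]]

first = rule_market
hard_cases = errors(first, cases)  # the cases where the first model fails

# Boosting scores each candidate second rule only on the hard cases:
for name, rule in [("team", rule_team), ("contrarian", rule_contra)]:
    print(name, len(errors(rule, hard_cases)), "errors on the hard cases")
```

Here the contrarian rule wins the boosting comparison because it cleans up exactly the cases the first rule fumbles, which is the point: complement the model you have, don't duplicate it.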

Here’s one example. As mentioned, many venture capitalists use weighted attribute models to sift through the thousands of pitches that land at their doors. Common attributes include the team, the size of the market, the technological application, and timing. A VC firm might score each of these dimensions on a scale from 1 to 5 and then assign an aggregate score as follows:

Score = 10*Team + 8*Market size + 7*Technology + 4*Timing
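As a function, with hypothetical candidate ratings (the weights come from the formula above; the pitch scores are made up):

```python
# The weighted-attribute score above, as a function.
def vc_score(team, market_size, technology, timing):
    """Each attribute is rated 1-5; weights reflect assumed importance."""
    return 10 * team + 8 * market_size + 7 * technology + 4 * timing

# Two hypothetical pitches, scored on the same rubric.
pitches = {
    "pitch_a": vc_score(team=5, market_size=4, technology=3, timing=2),
    "pitch_b": vc_score(team=3, market_size=5, technology=5, timing=4),
}
print(pitches)  # {'pitch_a': 111, 'pitch_b': 121}
```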

This might be the best model the VC can construct. The second best model might use similar variables and similar weights. If so, it will suffer from the same flaws as the first model. That means that combining it with the first model will probably not lead to substantially better decisions.

A boosting approach would take data from all past decisions and see where the first model failed. For instance, it may be that investment opportunities with scores of 5 out of 5 on team, market size, and technology do not pan out as expected. This could be because those markets are crowded. Each of the three attributes — team, market size, and workable technology — predicts well in isolation, but if someone has all three, it may be likely that others do as well and that a herd of horses tramples the hoped-for unicorn. The first model therefore would predict poorly in these cases. The idea of boosting is to go searching for models that do best specifically when your other models fail.

To give a second example, several firms I have visited have hired computer scientists to apply techniques from artificial intelligence to identify past hiring mistakes. This is boosting in its purest form. Rather than try to use AI to simply beat their current hiring model, they use AI to build a second model that complements their current hiring model. They look for where their current model fails and build new models to complement it.

In that way, boosting and attention share something in common: they both look to combine complementary models. But attention looks at what goes into the model — the types of variables it considers — whereas boosting focuses on what comes out — the cases where the first model struggles.

Boosting works best if you have lots of historical data on how your primary model performs. Sometimes, we don’t. In those cases, seek conflict. That is, look for models that disagree. When a team of people confronts a complex decision, it expects — in fact it wants — some disagreement. Unanimity would be a sign of group think. That’s true of models as well.

The only way the ensemble can improve on a single model is if the models differ. To borrow a quote from Richard Levins, the “truth lies at the intersection of independent lies.” It does not lie at the intersection of correlated lies. Put differently, just as you would not surround yourself with “yes men,” do not surround yourself with “yes models.”

Suppose that you run a pharmaceutical company and that you use a linear model to project sales of recently patented drugs. To build an ensemble, you might also construct a systems dynamics model as well as a contagion model. Say that the contagion model results in similar long-term sales but a slower initial uptake, but that the systems dynamics model leads to a much different forecast. If so, it creates an opportunity for strategic thinking. Why do the models differ? What can we learn from that, and how do we intervene?
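A toy version of that comparison: a straight-line uptake forecast against a simple logistic contagion model. The market size, adoption rate, and ramp length below are all invented, and real forecasting models would be far richer; the sketch only shows where the two models agree (the ceiling) and disagree (early uptake).

```python
# Two toy forecasts for the same drug. All parameters are invented.
MARKET = 100_000  # eventual patients; both models agree on the ceiling

def linear_sales(t, ramp_years=5):
    """Straight-line uptake to the full market."""
    return min(MARKET, MARKET * t / ramp_years)

def contagion_sales(t, p=0.6, start=1_000):
    """Logistic word-of-mouth adoption: slow start, fast middle, same ceiling."""
    adopters = start
    for _ in range(t):
        adopters += p * adopters * (1 - adopters / MARKET)
    return adopters

for year in [1, 2, 15]:
    print(year, round(linear_sales(year)), round(contagion_sales(year)))
```

Early on, the contagion forecast lags the linear one badly; over the long run, both converge on the same market. The disagreement in the early years is precisely the strategic question worth investigating.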

In sum, models, like humans, make mistakes because they fail to pay attention to relevant variables or interactions. Many-model thinking overcomes the failures of attention of any one model. It will make you wise.


Scott E. Page is the Leonid Hurwicz Collegiate Professor of Complex Systems, Political Science, and Economics at the University of Michigan and an external faculty member of the Santa Fe Institute. He is the author of The Model Thinker (Basic Books, 2018).

Ivo Velitchkov on Twitter: “The nature of social organization of production: From firms to complex dynamics – Terra – Systems Research and Behavioral Science – Wiley Online Library https://buff.ly/2KyiCUU” 4:05 pm – 24 Nov 2018

Systems thinking for evaluation design

From Sjon van ‘t Hof’s always brilliant blog – and Bob Williams is always someone to look out for.

Transcript, summary, and concept map
During March 25-26, 2015, Bob Williams, the main author of Wicked Solutions and several other books on systems thinking, gave a workshop “Wicked Solutions: A Systems Approach to Complex Problems” on the use of the systems approach in evaluation design at the Research Institute for Humanity and Nature (RIHN) in Kyoto, Japan. RIHN is a development organization that conducts practical transdisciplinary studies of development problems and their solution. Some of the researchers participating in the workshop were involved in peatland management research on Bali and Sulawesi, Indonesia. Peatland is a major part of coastal wetland geomorphology around the world (4 million km2). The catch (22) is that peat grows naturally, but the process is reversed when the land is drained for agriculture. Peat covers half of the Netherlands, where the peatlands – which are often already below sea level – subside to ever lower levels, while sea levels are rising. A small part of the 2-day workshop was taped on video, edited, and posted on Youtube in two 30-minute parts youtu.be/lFcWhGE7moQ and youtu.be/5RRHpXl2hrw. They are not about peat but about the principles of systemic design. The transcription and slides have been combined in a single pdf. This post contains a concept map and summary. Have fun.

CSL4D

Transcript, summary, and concept map

During March 25-26, 2015, Bob Williams, the main author of Wicked Solutions and several other books on systems thinking, gave a workshop “Wicked Solutions: A Systems Approach to Complex Problems” on the use of the systems approach in evaluation design at the Research Institute for Humanity and Nature (RIHN) in Kyoto, Japan. RIHN is a development organization that conducts practical transdisciplinary studies of development problems and their solution. Some of the researchers participating in the workshop were involved in peatland management research on Bali and Sulawesi, Indonesia. Peatland is a major part of coastal wetland geomorphology around the world (4 million km2). The catch (22) is that peat grows naturally, but the process is reversed when the land is drained for agriculture. Peat covers half of the Netherlands, where the peatlands – which are often already below sea level –…


Practice of Change – Doctoral Dissertation for review until March 2019 – Joichi Ito

Source: The Practice of Change

How I survived being interested in everything

Doctoral dissertation, academic year 2018

Keio University

Graduate School of Media & Governance

Joichi Ito

Abstract

Over the last century civilization has systematically supported a market-based approach to developing technical, financial, social and legal tools that focus on efficiency, growth and productivity. In this manner we have achieved considerable progress on some of the most pressing humanitarian challenges, such as eradicating infectious diseases and making life easier and more convenient. However, we have often put our tools and methods to use with little regard to their systemic or long-term effects, and have thereby created a set of new, interconnected, and more complex problems. Our new problems require new approaches: new understanding, solution design and intervention. Yet we continue to try to solve these new problems with the same tools that caused them.

Therefore in my dissertation I ask:

How can we understand and effectively intervene in interconnected complex adaptive systems?

 In particular, my thesis presents through theory and practice the following contributions to addressing these problems:

 

  1. A post-Internet framework for understanding and intervening in complex adaptive systems. Drawing on systems dynamics, evolutionary dynamics and theory of change based on causal networks, I describe a way to understand and suggest ways to intervene in complex systems. I argue that an anti-disciplinary approach and paradigm shifts are required to achieve the outcomes we desire.
  2. Learnings from the creation and management of post-Internet organizations that can be applied to designing and deploying interventions. I propose an architecture of layers of interoperability to unbundle complex, inflexible, and monolithic systems and increase competition, cooperation, generativity, and flexibility. I argue that the Internet is the best example of this architecture and that the Internet has provided an opportunity to deploy this architecture in other domains. I demonstrate how the Internet has made the world more complex but, through lowering the cost of communication and collaboration, has enabled new forms of organization and production. This has changed the nature of our interventions.
  3. How and why we must change the values of society from one based on the measurement of financial value to flourishing and robustness. The paradigm determines what we measure and generates the values and the goals of a system. Measuring value financially has created a competitive market-based system that has provided many societal benefits but has produced complex problems not solvable through competitive market-based solutions. In order to address these challenges, we must shift the paradigm across our systems to focus on a more complex measure of flourishing and robustness. In order to transcend our current economic paradigm, the transformation will require a movement that includes arts and culture to transform strongly held beliefs. I propose a framework of values based on the pursuit of flourishing and a method for transforming ourselves.

 

Reflecting on my work experience, I examine my successes and failures in the form of learnings and insights. I discuss what questions are outstanding and conclude with a call to action with a theory of change; we need to bring about a fundamental normative shift in society through communities, away from the pursuit of growth for growth’s sake and towards a sustainable sensibility of flourishing that can draw on both historical examples and the sensibilities of some modern indigenous cultures, as well as new values emerging from theoretical and practical progress in science.

“Question authority and think for yourself.”

— Timothy Leary


Scheming with Timothy Leary in 1995

Keywords: Cybernetics, Systems Dynamics, Philosophy of Science, Internet, Cryptocurrency

This work is open for community review until March 4, 2019. After this date, readers are encouraged to continue commenting on and engaging with the text, but their comments will not be taken into consideration for the revision of the work. A revised version of Practice of Change will be published by the MIT Press in the fall of 2019.

Chapters

1. Introduction

by Joichi Ito

Nov 19, 2018

Description of thesis and overview of the structure and argument of the dissertation.

2. Requiring Change

by Joichi Ito

Nov 19, 2018

Descriptions of several systems that require interventions as a result of the increasing complexity of their environments.

3. Theory of Change

by Joichi Ito

Nov 19, 2018

A theory of change developed through the work of others and my own experience.

4. Practice of Change

by Joichi Ito

Nov 19, 2018

Describing through my own experience how to test and deploy the ideas.

5. Agents of Change

by Joichi Ito

Nov 19, 2018

Personal reflections and thoughts on how we might behave as individuals and institutions.

6. Conclusion

by Joichi Ito

Nov 19, 2018

Summary of the dissertation and exploration of future work.

Feature & Community Newsletter

Practice of Change

D J Stewart: Origins of Cybernetics

Update: site now dead but links at:
https://web.archive.org/web/20171128041809/http://www.hfr.org.uk:80/cybernetics-pages/origins.htm
https://web.archive.org/web/20171128045734/http://www.hfr.org.uk/cybernetics-pages/index.htm

Origins of Cybernetics

Source: D J Stewart: Origins of Cybernetics

 

About Cybernetics

This Internet version © Copyright D J Stewart 2000. All rights reserved.
Original version © Copyright D J Stewart 1959.

An essay on the Origins of Cybernetics

D J STEWART

Contents

Norbert Wiener meets Arturo Rosenblueth
Prediction and voluntary control
The birth certificate of cybernetics
Classification of behaviour
Purpose equated with feedback
Application to cardiac muscle
Rosenblueth gives the first talk on cybernetics
The new conference group forms
Cybernetics is given its name
The group holds regular meetings
The theory of information
Automatic computing machinery

 

The human factors research website

http://www.hfr.org.uk/index.htm

also has:

 

D J Stewart

  • Pages about Cybernetics
  • Books on Cybernetics
  • Pages about Ternality Theory
  • Pages about Ternary Analysis
  • References and abstracts of papers on Ternality Theory and Ternary Analysis
  • Pages about Understanding of Science
  • References and abstracts of papers on Understanding of Science
  • About D J Stewart
  • Links to other relevant websites

 

 

Platforms, an emerging appreciation / the impact of platforms – Coevolving Innovations (David Ing)

When the term “platform” is used, today, what does that mean?

Source: Platforms, an emerging appreciation – Coevolving Innovations

 

Platforms have had only a short history, so much of the research is still definitional, with proposed frameworks.

Source: The impacts of platforms – Coevolving Innovations

 

A philosophy of “becoming with” as “becoming alongside” – Coevolving Innovations (David Ing)

 

Source: A philosophy of “becoming with” as “becoming alongside” – Coevolving Innovations

 

 

In foundational research, I went through a philosophical shift from “being” (in the sense of Hubert Dreyfus’ reading of Heidegger) towards “becoming”  — as I was writing a finalization of Open Innovation Learning in Chapter 9.  As I reflect more, my view of systems as living can be expressed as “becoming with“, and more precisely “becoming alongside“.

This is influenced not so much directly from philosophy, but from the ecological anthropology of Tim Ingold, as indicated in “Anthropology Beyond Humanity” in 2013.

I conclude with just two proposals.

First, every animate being is fundamentally a going on in the world. Or more to the point, to be animate — to be alive — is to become. And as Haraway (2008: 244) stresses, ‘becoming is always becoming with—in a contact zone where the outcome, where who is in the world, is at stake’.

Thus whether we are speaking of human or other animals, they are at any moment what they have become, and what they have become depends on whom they are with. If the Saami have reindeer on the brain, it is because they have grown up with them, just as the reindeer, for their part, have grown up with the sounds and smells of the camp.  [….]

My preference […] would be to think of animate beings in the grammatical form of the verb. Thus ‘to human’ is a verb, as is ‘to baboon’ and ‘to reindeer’. Wherever and whenever we encounter them, humans are humaning, baboons are babooning, reindeer reindeering. Humans, baboons and reindeer do not exist, but humaning, babooning and reindeering occur — they are ways of carrying on (Ingold 2011: 174–175).

Secondly, my ‘anthropology beyond the human’ would be just that: it would be anthropology, not ethnography, and it would be beyond the human, not multispecies.

We have already seen that a relational approach to human and animal becoming refutes the logic of the multispecies. But it also tells us that in our inquiries we join with, and learn from, the human and animal becomings (Ingold 2013a: 6–9) alongside which we carry on our own lives.  [….]

Thus in anthropology we do not make studies of people, or indeed of animals. We study with them (Ingold 2013b: 2–4). The aim of such study is not to seek a retrospective account, looking back on what has come to pass. It is rather to move forward, in real time, along with the multiple and heterogeneous becomings with which we share our world, in an active and ongoing exploration of the possibilities that our common life can open up. And just as in life, becoming continually overtakes being, so in scholarship the scope of anthropology must forever exceed the threshold of humanity.  [Ingold 2013-05, pp. 20-21, editorial paragraphing added]

  • Haraway, D. 2008. When Species Meet. Minneapolis, MN: University of Minnesota Press.
  • Ingold, T. 2011. Being Alive: Essays on Movement, Knowledge and Description. Abingdon: Routledge.
  • Ingold, T. 2013a. Prospect. In T. Ingold and G. Pálsson (eds), Biosocial Becomings: Integrating Social
    and Biological Anthropology. Cambridge: Cambridge University Press.
  • Ingold, T. 2013b. Making: Anthropology, Archaeology, Art and Architecture. Abingdon: Routledge.

Thinking of relations between beings as verbs, rather than beings as nouns, gives more a feeling of time, if not motion.

This publication was officially presented as the Edward Westermarck Memorial Lecture at the Finnish Anthropological Society in May 2013.  A less formal reading of the paper was recorded at Macquarie University in October 2013.

Becoming-with doesn’t derive as cleanly from the metaphysics of being and becoming that extends back to the ancient Greeks.  It relates alongside ecological anthropology, which can be placed alongside a more general context of ecological epistemology, for which a citable definition in philosophy is relatively recent.

Ecological epistemology (EE) demarcates an area of convergence between contemporary theories whose common core is the recognition of the agency of natural processes, objects, and materials. EE encompasses the knowledge emerging from the assumption of symmetry between things and thought, human and nonhuman beings, and historical and natural processes. The claim of a symmetrical ontology developed in the framework of the new philosophy of materialism has demanded intense work in order to overcome philosophical constructivism that takes knowledge as a mental construct, regardless of its material base. The idealist perspective in this approach takes knowledge as a representation of reality, which is processed through the logical operation of abstraction and detachment from its empirical object. The assumption of symmetry leads to a knowledge no longer “about” but “with” the other human and nonhuman beings. From this perspective, EE avoids diluting culture into nature or assimilating nature into culture but seeks to merge the human and natural histories considering all, nonhumans and humans, coresidents, and “co-citizens” of the same world. [Carvalho, 2016]

Ecological epistemology relates alongside ecological anthropology, that relates alongside the ecological psychology that introduced a theory of affordances.  Here’s footnote 310, from Open Innovation Learning section 9.2, that places Ingold alongside J.J. Gibson, alongside Gregory Bateson and an Ecology of Mind.

Ecological anthropology, as practiced by Tim Ingold, builds on the ecological psychology of J.J. Gibson.

Gibson wanted to know how people come to perceive the environment around them. The majority of psychologists, at least at the time when Gibson was writing, assumed that they did so by constructing representations of the world inside their heads….. The mind, then, was conceived as a kind of data-processing device, akin to a digital computer, and the problem for the psychologist was to figure out how it worked. But Gibson’s approach was quite different. It was to throw out the idea, that has been with us since the time of Descartes, of the mind as a distinct organ that is capable of operating upon the bodily data of sense. Perception, Gibson argued, is not the achievement of a mind in a body, but of the organism as a whole in its environment, and is tantamount to the organism’s own exploratory movement through the world. If mind is anywhere, then, it is not ‘inside the head’ rather than ‘out there’ in the world. To the contrary, it is immanent in the network of sensory pathways that are set up by virtue of the perceiver’s immersion in his or her environment. Reading Gibson, I was reminded of the teaching of that notorious maverick of anthropology, Gregory Bateson. The mind, Bateson had always insisted, is not limited by the skin (Bateson 1973: 429) (Ingold, 2000b, pp. 2–3).

  • Bateson, Gregory. 1972. “Form, Substance, and Difference.” In Steps to an Ecology of Mind, 1987 reprint, 454–71. Northvale, NJ: Jason Aronson.

These are background contexts for a paradigm of co-responsive movement, in Open Innovation Learning section 9.2.

Co-responsive movement is a joining with, in an ongoing sympathy of living things going along together. Joining with is an “interpenetration of lifelines in the mesh of social life … in a world where things are continually coming into being through processes of growth and movement” in a generative form when contrary forces of tension and friction are pulled tightly into a knot. This is in contrast with “joining up” as assemblies that can “be as readily decomposed as composed”. “Untying the knot … is not a disarticulation or decomposition. It does not break things into pieces. It is rather a casting off, whence lines once bound together go their separate ways”.320

320 Joining up can more formally be called interstitial differentiation. Joining with is exterior articulation, as in agencement traced to Gilles Deleuze and Felix Guattari, assemblage used by Manuel DeLanda, or compositionism advanced by Bruno Latour (Ingold, 2017, pp. 13–15).

The fine distinction between “becoming-with” and “becoming-alongside” shows up in a reference to Ingold (2017) in footnote 322 of Open Innovation Learning section 9.2.  While “with” is not exclusively restricted to beings and/or things at a single point in time, “alongside” better suggests parallel sequentiality of those beings with a passage of time.

Co-responding “is the process by which beings or things literally answer to one another over time, for example in the exchange of letters or words in conversation, or of gifts, or indeed in holding hands”321. Members co-responding with each other carry on alongside one another over time, answering contrapuntally.322 A theory of co-responding was foreshadowed in John Dewey’s social view of communication, meaning “the attainment of a certain ‘like-mindedness’, enabling those with different experiences of life, both young and old, to carry on together”.323 This sense of communication is “not about the exchange of information, as communication is often understood today; it is rather about forging a concordance”.

321I prefer the more active labels of co-responsive and co-responding, for which Ingold builds a theory of human correspondence. “I propose the term correspondence to connote their affiliation. Social life, then, is not the articulation but the correspondence of its constituents. [….] The sense in which I do intend the term differs from this precisely as filiation differs from alliance. It is not transverse, cutting across the duration of social life, but longitudinal, going along with it” (Ingold, 2017, p. 14).

322 Whereas articulation associates with “and“, co-responding associates with “with“. “The distinction between the kinds of work done here with these little words ‘and’ and ‘with’ is all-important. The logic of the conjunction is articulatory; that of the preposition differential. The limbs and muscles of the body, the stones and timbers of the cathedral, the voices of choral polyphony or the members of the family: these are not added to but carry on alongside one another. Limbs move, stones settle, timbers bind, voices harmonize, and family members get along through the balance of friction and tension in their affects. They are not ‘and . . . and . . . and’ but ‘with . . . with . . . with’, not additive but contrapuntal. In answering – or responding – to one another, they co-respond” (Ingold, 2017, p. 14).

323 Dewey saw life as coproduced with others, socially. “Since no living being can perpetuate itself indefinitely, or in isolation, every particular life is tasked with bringing other lives into being and with sustaining them for however long it takes for the latter, in turn, to engender further life. The continuity of the life process is therefore not individual but social” (Ingold, 2017, p. 14).

[Open Innovation Learning] can be seen as opening up communications, sharing artifacts in common and learning in a larger community.324 This takes up “an approach that understood how time, movement, and growth were together generative of the forms of living things rather than merely ancillary to their expression”.325

324 Ingold’s proposal of a theory of human correspondence is cited as concordant with pragmatic philosophy and theory of education. “Dewey was particularly struck by the affinity between the words ‘communication’, ‘community’, and ‘common’. This, he insisted, is not just an accident of etymology. It rather points to a fundamental condition for the possibility of social life. ‘Men live in a community’, he wrote, ‘in virtue of the things which they have in common; and communication is the way in which they come to possess things in common’ (Dewey 1966: 4)” (Ingold, 2017, p. 14).

325 Tim Ingold cites Henri Bergson’s Creative Evolution (1911) as turning point in his research.
“The year was 1983, and I was in the throes of writing a book on the idea of evolution, and on how it had figured in theories of biology, history, and anthropology from the nineteenth century to the present. [….] It turned into a Bergson-inspired critique of the entire legacy of Darwinian historicism in the human sciences” (Ingold, 2014, p. 157).

Little words make a difference.  My philosophy focused on being; then becoming; then becoming-with; and has refined to becoming-alongside.  These are rather fine distinctions.  Scholarly writing drives precision.

References

Carvalho, Isabel. 2016. “Ecological Epistemology (EE).” In Encyclopedia of Latin American Religions, edited by Henri Gooren, 1–3. Springer. https://doi.org/10.1007/978-3-319-08956-0_19-1

Ingold, Tim. 2000b. “General Introduction.” In The Perception of the Environment: Essays on Livelihood, Dwelling and Skill, 1–7. Routledge.

Ingold, Timothy. 2013-05. “Anthropology beyond Humanity.” Suomen Antropologi: Journal of the Finnish Anthropological Society 38 (3): 5–23.

Ingold, Tim. 2013-10. Anthropology beyond Humanity. Web Video. Sydney, Australia: Macquarie University. https://www.youtube.com/watch?v=kqMCytCAqUQ

Ingold, Tim. 2014. “A Life in Books.” Journal of the Royal Anthropological Institute 20 (1): 157–159. https://doi.org/10.1111/1467-9655.12088.

Ingold, Tim. 2017. “On Human Correspondence.” Journal of the Royal Anthropological Institute 23 (1): 9–27. https://doi.org/10.1111/1467-9655.12541.

   October 17th, 2018

 Posted In: philosophy


Complexity leadership theory – Emergence: Complexity and Organization – Lichtenstein et al., 2006

Source: Complexity leadership theory – Emergence: Complexity and Organization

Complexity leadership theory

An interactive perspective on leading in complex adaptive systems


Abstract

Traditional, hierarchical views of leadership are less and less useful given the complexities of our modern world. Leadership theory must transition to new perspectives that account for the complex adaptive needs of organizations. In this paper, we propose that leadership (as opposed to leaders) can be seen as a complex dynamic process that emerges in the interactive “spaces between” people and ideas. That is, leadership is a dynamic that transcends the capabilities of individuals alone; it is the product of interaction, tension, and exchange rules governing changes in perceptions and understanding. We label this a dynamic of adaptive leadership, and we show how this dynamic provides important insights about the nature of leadership and its outcomes in organizational fields. We define a leadership event as a perceived segment of action whose meaning is created by the interactions of actors involved in producing it, and we present a set of innovative methods for capturing and analyzing these contextually driven processes. We provide theoretical and practical implications of these ideas for organizational behavior and organization and management theory.

 

 

 

Also

The leadership of emergence: A complex systems leadership theory of emergence at successive organizational levels (August 2009)
Benyamin B. Lichtenstein, University of Massachusetts, Boston, b.lichtenstein@umb.edu
Donde Ashmos Plowman, University of Nebraska-Lincoln, dplowman2@unl.edu
pdf – https://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1065&context=managementfacpub

 

Published in The Leadership Quarterly 20:4 (August 2009), pp. 617–630; doi: 10.1016/j.leaqua.2009.04.006
Copyright © 2009 Elsevier Inc. Used by permission. Published online May 29, 2009.

The leadership of emergence: A complex systems leadership theory of emergence at successive organizational levels

Benyamin B. Lichtenstein, Department of Management/Marketing, University of Massachusetts, Boston, 100 Morrissey Blvd. M-5/214, Boston, MA 02215-3393, USA (corresponding author: tel 617 287-7887, email B.Lichtenstein@umb.edu)
Donde Ashmos Plowman, Department of Management, The University of Tennessee, 414 Stokely Management Center, Knoxville, Tennessee 37996-0545, USA

Abstract

Complexity science reframes leadership by focusing on the dynamic interactions between all individuals, explaining how those interactions can, under certain conditions, produce emergent outcomes. We develop a Leadership of Emergence using this approach, through an analysis of three empirical studies which document emergence in distinct contexts. Each of these studies identifies the same four “conditions” for emergence: the presence of a Dis-equilibrium state, Amplifying actions, Recombination/“Self-organization”, and Stabilizing feedback. From these studies we also show how these conditions can be generated through nine specific behaviors which leaders can enact, including: Disrupt existing patterns through embracing uncertainty and creating controversy, Encourage novelty by allowing experiments and supporting collective action, Provide sensemaking and sensegiving through the artful use of language and symbols, and Stabilize the system by integrating local constraints. Finally, we suggest ways for advancing a meso-model of leadership, and show how our findings can improve complexity science applications in management.

Keywords: complexity, self-organization, non-linear interactions, case study research, leadership behaviors

Complex Adaptive Systems: Emergence and Self-Organization. Tutorial presented at HICSS-42, Big Island, HI, January 5, 2009 (PDF)

A massive presentation on the topic

pdf: https://www3.nd.edu/~gmadey/Activities/CAS-Briefing.pdf

 

Also at:

Complex Adaptive Systems: Emergence and Self-Organization. Tutorial presented at HICSS-42, Big Island, HI, January 5, 2009, by Stephen H. Kaisler, D.Sc., and Gregory Madey, Ph.D.

Source: Complex Adaptive Systems: Emergence and Self-Organization. Tutorial Presented at HICSS-42 Big Island, HI January 5, PDF

Patterns | Public Sphere Project

Massive resource for patterns and ‘the public sphere’.

 

Source: Patterns | Public Sphere Project

 

ABOUT THE PUBLIC SPHERE PROJECT

Without a thriving public sphere the people’s ability to manage public affairs equitably and effectively is impossible. Although new digital networked technologies are only part of this picture, they obviously represent a major source of opportunities — as well as challenges — for those interested in the public sphere.

The Public Sphere Project (PSP) is an initiative that is intended to help promote more effective and equitable public spheres all over the world. With this site we hope to ultimately support a community of researchers and activists and provide a broad framework for a variety of interrelated activities and goals.

Our activities will focus on the following objectives:

  • Advancing our understanding of opportunities and challenges of “public spheres” for democracy, education, social justice, economic development, and environmentalism;
  • Developing and acting on strategies for creating and strengthening equitable and effective public spheres;
  • Legitimizing and calling attention to these concerns;
  • Building and supporting communities and networks of activists, researchers, and citizens;
  • Convening forums (both face-to-face and virtual) for sharing information, concerns, and ideas;
  • Developing and disseminating useful, high-quality information for citizens, activists, students, policy-makers, and researchers;
  • Evaluating and consulting with existing projects, systems, applications, and organizations all over the world;
  • Developing and evaluating relevant new interfaces, applications, systems, and organizations;
  • Helping to provide forums for marginalized and submerged voices and issues;
  • Helping to build collaborative and deliberative systems;
  • Promoting fruitful interaction between the powerless and the powerful — and people in between; and
  • Engaging and encouraging individuals, NGOs, governments, and businesses.

Of the admittedly ambitious objectives listed above, we are currently putting our attention in the projects listed below.

Civic Intelligence
We are exploring and promoting the concept of civic intelligence as part of our research and action agenda. Civic intelligence is the type of collective intelligence that is focused on the effective, sustainable, and just resolution of collective problems. Civic intelligence is what makes society “smart”; it’s a form of distributed and principled innovation that is concerned with the well-being of the planet and the life on it. It is our contention that the concept of “civic intelligence” is a good candidate for a central paradigm and we’re inviting individuals and communities to help determine the usefulness of that paradigm in terms of research and action.

To further that work we have developed an online survey that will help us understand civic intelligence and help promote a research agenda but, perhaps more importantly, will help provide inspiration for activists who are working for positive social change in their communities.

Liberating Voices
The Liberating Voices pattern language project is a multi-year project to develop pattern languages for social change. We’re working together to develop one or more “pattern languages” which can help people think about, design, develop, manage and use information and communication systems that more fully meet human needs now — and in the future.

Our “pattern language” is a holistic collection of “patterns” that can be used together to address an information or communication problem. Each “pattern” in this pattern language, when complete, will represent an important insight that will help contribute to a communication revolution. The concepts of patterns and pattern languages were developed by architect Christopher Alexander and his colleagues to present collections of findings and insights that were intended to be used together to develop living spaces that were life-affirming and beautiful.
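The structure described above — named patterns that link to one another to form a language — can be sketched in code. This is a minimal illustrative sketch, not the Liberating Voices schema: the field names (context, problem, solution, links) are my assumptions based on the conventional Alexander-style pattern format, and the sample entry only echoes the “Civic Intelligence” pattern named elsewhere on the site.

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    """An Alexander-style pattern: a named, reusable insight (fields are illustrative)."""
    name: str
    context: str      # the situation in which the pattern applies
    problem: str      # the recurring tension the pattern addresses
    solution: str     # the core insight, stated as guidance
    links: list[str] = field(default_factory=list)  # names of related patterns

def language(patterns: list[Pattern]) -> dict[str, Pattern]:
    """A pattern language is a linked collection: index the patterns by name."""
    return {p.name: p for p in patterns}

# Hypothetical sample entry; the linked pattern name is also an assumption.
civic = Pattern(
    name="Civic Intelligence",
    context="Communities facing shared problems",
    problem="Collective problems outstrip any individual's understanding",
    solution="Cultivate distributed, principled problem-solving capacity",
    links=["Public Agenda"],
)
lang = language([civic])
print(lang["Civic Intelligence"].links)  # patterns refer to one another by name
```

The point of the sketch is the `links` field: patterns are meant to be used together, so the language is the graph of patterns plus their cross-references, not just a flat catalogue.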

In late 2008, MIT Press published Liberating Voices: A Pattern Language for Communication Revolution. The book contains the first version of the Liberating Voices pattern language including several “context” chapters and 136 patterns, developed by 85 authors. These patterns are also available on this site, some in slightly different form, along with several hundred others, many still in draft form.

We are currently developing a set of cards based on the patterns in the book. We’ve been using these in a variety of in-person workshops that can be adapted to an online environment as well.

There are other important developmental efforts in work. The first is allowing people to comment on the patterns. These comments will include questions and suggestions about using the patterns, examples of patterns in use, and relevant references. Also, because each situation is different we are developing new online workspaces so that people and groups can develop their own pattern languages composed of patterns that already exist — possibly annotated — and new ones that are not to be found in the currently existing set. A final thought, though basically a gleam in our eyes is a general software effort to support all of the patterns. For example, an online platform based on the Public Domain Characters by John Thomas would enable people to upload their character descriptions onto the site which other people could freely use in their work.

e-Liberate
Since its inception the Internet has been touted as a medium with revolutionary potential for democratic communication. Although other media have not lived up to their democratic potential, it’s too early to dismiss the Internet as being just a tool for the powerful. Certainly civil society has been extraordinarily creative thus far in using the Internet for positive social change!

Although a very large number of communication venues exist in cyberspace, one critical function — deliberation — seems to have been largely ignored. The need for computer support for online deliberation can be shown by the fact that many online discussions fail to reach a satisfactory resolution. Motivated by a desire to help make online discussions more productive — particularly among civil society groups who are striving to create more “civic intelligence” in our society — Douglas Schuler proposed in his 1996 book New Community Networks that Roberts Rules of Order could be used as a basis for online deliberation. One of the most important criteria was that although every attendee would have opportunities to make his or her ideas heard, the minority could not prevent the majority from making decisions.

Work on an online version began in 1999, when The Evergreen State College developed the first prototype of an online version of Roberts Rules of Order. After years of intermittent development and several iterations, there are now two basic versions: (1) e-Liberate, an online deliberative tool using Roberts Rules of Order; and (2) openDCN, a broad, evolving civic network framework that incorporates much of e-Liberate. OpenDCN includes a modularized version of Roberts Rules in which certain motions can be turned on or off; it was developed for the Milan Civic Network within the openDCN e-participation environment.
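The two design ideas described above — motions that can be switched on or off per meeting, and majority decisions that the minority cannot block — can be sketched as code. This is my own minimal illustration under stated assumptions, not e-Liberate's or openDCN's actual design; all names here (`Meeting`, `move`, `decide`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Meeting:
    """A toy modularized-Roberts-Rules meeting (illustrative, not e-Liberate's API)."""
    enabled_motions: set[str]                            # motion types switched on
    votes: dict[str, list[bool]] = field(default_factory=dict)

    def move(self, motion_type: str, text: str) -> str:
        """Accept a motion only if its type is enabled for this meeting."""
        if motion_type not in self.enabled_motions:
            raise ValueError(f"motion type {motion_type!r} is switched off")
        self.votes[text] = []
        return text

    def vote(self, motion: str, in_favour: bool) -> None:
        """Record one attendee's ballot on an accepted motion."""
        self.votes[motion].append(in_favour)

    def decide(self, motion: str) -> bool:
        """Majority rule: the minority cannot prevent the majority from deciding."""
        ballots = self.votes[motion]
        return sum(ballots) > len(ballots) / 2

m = Meeting(enabled_motions={"main", "amend"})
motion = m.move("main", "Adopt the draft agenda")
for ballot in (True, True, False):
    m.vote(motion, ballot)
print(m.decide(motion))  # 2 of 3 in favour: the motion carries
```

Attempting `m.move("table", ...)` would raise, since that motion type is switched off — which is all "modularized Roberts Rules" means in this sketch.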

We are now beginning to work with groups who are interested in trying the system to support actual meetings. We believe that face-to-face meetings are still very important, but appropriate use of e-Liberate can help organizations with limited resources. Our hope is that non-profit groups will use e-Liberate to save time and money on travel and spend the resources they save on other activities that promote their core objectives. We are enthusiastic about the system, but we are well aware that, as it stands, it may have problems that need fixing. For that reason we plan to host a small number of meetings over the next few months and gather feedback from attendees. Please let us know if your organization would like to try our openDCN system.

We believe that the next few years will be critical; the Obama administration in the U.S. is raising a variety of interesting possibilities, and we hope to be able to demonstrate our work to them.

Miscellaneous
Finally, there are a few other areas in which we are working. The first involves the 1996 book New Community Networks by Douglas Schuler, which is now outdated in many areas, although many of its points are still just as valid as they were over a decade ago. We are in the process of making the text of the book available as a wiki so that people can update it as necessary. Secondly, we are developing a repository for relevant papers and other useful points of reference. Thirdly, we are still interested in participating in events: working with CPSR we developed the "Directions and Implications of Advanced Computing" conference, which started in 1987, and in the summer of 2010 the Public Sphere Project is co-sponsoring a conference on Online Deliberation in Leeds, UK.

The Public Sphere Project is a 501(c)(3) non-profit organization incorporated in the United States. Donations to the Public Sphere Project are tax-deductible.

Tiny sample of content:

Title / Author / Pattern Text
Civic Intelligence
Douglas Schuler

Civic Intelligence describes how well groups of people address civic ends through civic means. It asks the critical question: Is society smart enough to meet the challenges it faces? Civic intelligence requires learning and teaching. It also requires meta-cognition — thinking about and actually improving how we think and work together.
The Commons
David Bollier

The human genome, seeds, and groundwater should belong to everybody — not corporations. The public library, community garden, farmer’s market, and land trust are familiar and highly effective local Commons. The emerging commons sector provides benefits that corporations can’t provide, such as healthy ecosystems, economic security, stronger communities, and a participatory culture.
The Good Life
Gary Chapman

People who hope for a better world feel the need for a shared vision of The Good Life. The environmental crises of the planet require a broad vision of a good life that harmonizes human aspirations and natural limits. A framework for the modern good life should be based on some form of humanism with room for a spiritual dimension that does not seek domin…

Niklas Luhmann and Organization Studies (2006) – 70-page sample

Click to access 9788763003049.pdf


Niklas Luhmann and Organization Studies


Edited by David Seidl and Kai Helge Becker

Contents included in sample:
Acknowledgements
Introduction: Niklas Luhmann and Organization Studies (David Seidl and Kai Helge Becker)
PART I: THE THEORY OF AUTOPOIETIC SOCIAL SYSTEMS
1. The Basic Concepts of Luhmann’s Theory of Social Systems (David Seidl)
2. The Concept of Autopoiesis (Niklas Luhmann)
3. The Autopoiesis of Social Systems (Niklas Luhmann)


ResearchGate (contents page only): Niklas Luhmann and Organization Studies (PDF)