She Ji: The Journal of Design, Economics, and Innovation | Vol 5, Issue 2, Pages 75-162 (Summer 2019) | ScienceDirect.com

Ben Sweeting calls my attention to this via the CYBCOM mailing list:

This issue of the design journal She Ji will be of interest to this list. The issue comes out of the Relating Systems Thinking and Design conference series. The next conference is in Chicago in the autumn: http://www.rsd8.org

 

Source: She Ji: The Journal of Design, Economics, and Innovation | Vol 5, Issue 2, Pages 75-162 (Summer 2019) | ScienceDirect.com

 


Volume 5, Issue 2

Pages 75-162 (Summer 2019)

Editorial

  1. Pages 75-84
  2. Systems Thinking and Design Thinking: The Search for Principles in the World We Are Making (Pages 85-104)
  3. Flourishing: Designing a Brave New World (Pages 105-116)
  4. Design beyond Design (Pages 117-127)
  5. Co-shaping the Future in Quadruple Helix Innovation Systems: Uncovering Public Preferences toward Participatory Research and Innovation (Pages 128-146)
  6. Cognitive Point of View in Recursive Design (Pages 147-162)

 

Source: She Ji: The Journal of Design, Economics, and Innovation | Vol 5, Issue 2, Pages 75-162 (Summer 2019) | ScienceDirect.com

VSON Manages Network Complexity – Ling Ling Sun

The “Viable system of networking” (VSON) utilizes cybernetics to manage networking complexity. Similar to the viable system model (VSM) of organizations, VSON can be used as a conceptual and functional tool to design a viable networking system.

 

err… I don’t know… maybe?

Looks the same as the VSM, maybe?

 

Source: VSON Manages Network Complexity – TvTechnology

 

VSON Manages Network Complexity

Introducing the “Viable System Of Networking”

The “Viable system of networking” (VSON) utilizes cybernetics to manage networking complexity. Similar to the viable system model (VSM) of organizations, VSON can be used as a conceptual and functional tool to design a viable networking system.

COEXIST AND CO-EVOLVE

Networks don’t exist in a vacuum—in order to be viable, they need to coexist and co-evolve with their environment. VSON consists of four basic units (Fig. 1):

  • Environment
  • Operation
  • Management
  • Boundary
[Fig. 1: VSON – its basic units, recursive structure, and six components]

The Operation and Management of a network interact with the Environment. A viable network must be secure enough to defend its Boundary against a hostile Environment. Separating Operation and Management in networks reduces the Environment's complexity at the Management level.

In the VSON, the complexity of the Environment is always larger than the complexity of the network, and the complexity of the network is always larger than the complexity of Management. Given these relationships, the complexity of the Environment can be managed using three strategies: policies, variety engineering, and recursion.

Policies specify goals; network complexity is therefore reduced to the relevant complexity defined by those goals, and the Environment is reduced to the Relevant Environment, which is much less complex than the Environment as a whole.

Variety engineering manages complexity by attenuating unwanted varieties and amplifying requisite varieties. Placing a firewall at the Boundary to block security threats is an example of attenuation. An example of variety amplification is to increase services and capacities in Operation. Maintaining Control at the Management level to keep Operation in line with the goals is an example of mixed attenuation and amplification.

Recursion utilizes fractal structure to manage complexity of large networks. A fractal structure is self-similar across different scales: viable systems are made up of viable systems (Fig. 1). Each viable system manages its share of the total complexity that VSON has to manage. At the same time, the services of these viable systems must be controlled in order to contribute to the goals of VSON as a whole.

Similar to but not the same as VSM, VSON has six components (Fig. 1):

  • Operation
  • Orchestration
  • Control
  • Monitoring
  • Intelligence
  • Policy

VSON uses Operation, Orchestration, Control, and Monitoring to realize its goals, and Control, Monitoring, Intelligence, and Policy to adapt them. Each of the services in Operation is itself a viable system, with its own goals, its own Relevant Environment, and its own Operation and Management. These services share some common resources and may contribute to the same goals; Orchestration is therefore needed to ensure that the services don't conflict with one another.

Operation and Orchestration are necessary but not sufficient for VSON, because each service can still pursue its own goals without contributing to the system as a whole; therefore, Control and Monitoring are needed. Through Control, the goals of VSON are translated into goals for the services in Operation, and through Monitoring, the quality, efficiency, security, and reliability of those services are guaranteed. Operation, Orchestration, Control, and Monitoring form the cohesive whole needed to realize VSON's goals, but they are not enough to make VSON viable. Intelligence analyzes information both from outside (the Future Environment) and from inside (Monitoring feedback, via Control) to find patterns, predict trends, and recommend new ways of adapting to Policy. By defining goals and coordinating the interaction between Control and Intelligence, Policy determines the identity of VSON and its ability to adapt. These six components, together with the communications among them, are necessary and sufficient for VSON.
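As a purely illustrative sketch of this recursive structure (mine, not from the article; the class and field names are assumptions), each service in Operation can itself be modelled as a viable node carrying the six components, with Control translating goals downward at every level:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VsonNode:
    name: str
    policy: List[str] = field(default_factory=list)        # goals and identity
    intelligence: List[str] = field(default_factory=list)  # outside-and-future view
    control: List[str] = field(default_factory=list)       # translates goals into service goals
    monitoring: List[str] = field(default_factory=list)    # quality/security/reliability feedback
    orchestration: List[str] = field(default_factory=list) # resolves conflicts over shared resources
    operation: List["VsonNode"] = field(default_factory=list)  # recursion: each service is itself viable

    def propagate_goals(self) -> None:
        """Control: translate this node's goals into goals for each service in Operation."""
        for service in self.operation:
            service.policy = [f"{self.name}:{goal}" for goal in self.policy]
            service.propagate_goals()

# Hypothetical example: a broadcast network whose two services are themselves viable systems
network = VsonNode("station-network", policy=["keep contribution links on air"])
network.operation = [VsonNode("studio-lan"), VsonNode("transmission-wan")]
network.propagate_goals()
```

The recursive call is the point: the same structure repeats at every scale, and each level only handles its own share of the total variety.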

One example of VSON is software-defined networking (SDN), where Policy, Control, and Operation are similar to the Application, Control, and Data layers in SDN. However, the management and hardware layer supporting VSON can be anything from SDN to legacy networking, or a combination of both.

In fact, networking in VSON is so broad that it also covers, for example, social networks and knowledge networks. Embedded in the recursive structure of VSON, but absent in SDN, are the concepts of “Divide and Conquer” and subsidiarity. The Divide and Conquer strategy recursively breaks a problem down into two or more sub-problems to manage the complexity of scaling up. The principle of subsidiarity resolves problems as close as possible to where they occur, and passes decisions elsewhere only when it really needs to. Because it lacks Intelligence, SDN is adaptable only through programming.

Ling Ling Sun is the CTO for Nebraska Public TV. Many thanks to Tom Butts for help in editing this article.

Professor Daniel Bonevac on the OODA loop

 

Source: Not John Boyd – Slightly East of New

 

Not John Boyd

But a good video, nonetheless.

Here’s Prof. Daniel Bonevac giving an introductory lecture on the OODA loop:

Professor Bonevac is a member of, and was formerly chair of, the Philosophy Department at the University of Texas. I don’t know when this lecture was given, but the video was posted in April of this year. One of the interesting things about it is that Professor Bonevac is teaching a class on Organizational Ethics.

In Boyd’s scheme of things, strategy is a game of interaction and isolation on physical, mental, and moral levels.  The moral level of conflict is the most powerful: Morally isolate yourself, and you both run out of allies and de-motivate your own organization. That is, you give up and lose, no matter what weapons or how many forces remain on your side.

Moral strength rests on perceived adherence to an ethical code. Boyd claims in Strategic Game that:

Morally adversaries isolate themselves when they visibly improve their well being to the detriment of others (i.e. their allies, the uncommitted, etc.) by violating codes of conduct or behavior patterns that they profess to uphold or others expect them to uphold. Chart 47.

This may strike some readers as too materialistic (Boyd was writing at the height of the Cold War, when the Soviet Union and various communist-inspired national liberation movements were the big threat), but it works just as well if the leaders of any ideological movement are seen as improving their positions by betraying their beliefs.

I think you’ll find Prof. Bonevac’s lecture to be most interesting and highly entertaining.  He does subscribe to the OODA “loop” as a circular process, somewhat in contrast to my own interpretation (for which, see “Boyd’s Real OODA Loop” on the Articles page). But then, I’ve heard Boyd describe it in much the same way as Professor Bonevac, and if he were just going to repeat my views, what would be the point?

[The video cuts off after 45 minutes, so it’s entirely possible that he addresses my concerns later on. If anybody has a link to the rest of the lecture, please include it in the comments.

Tip of the hat to reader Max Moore, who brought this to my attention with a comment on the Contact page.]

Anna Randle on Twitter: “What short videos would you recommend as an intro to systems thinking? (We’ve got the wolves change rivers one already!) Grateful for any recommendations – YouTube has plenty, but many are a bit… weird :)”

See thread – and add in your suggestions at the link!

Frontiers | The Fallacy of Univariate Solutions to Complex Systems Problems | Neuroscience

 

Source: Frontiers | The Fallacy of Univariate Solutions to Complex Systems Problems | Neuroscience

PERSPECTIVE ARTICLE

Front. Neurosci., 08 June 2016 | https://doi.org/10.3389/fnins.2016.00267

The Fallacy of Univariate Solutions to Complex Systems Problems

  • 1Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, USA
  • 2Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
  • 3Department of Pediatrics, Washington University School of Medicine, St. Louis, MO, USA
  • 4Department of Radiology, Washington University School of Medicine, St. Louis, MO, USA
  • 5Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA

Complex biological systems, by definition, are composed of multiple components that interact non-linearly. The human brain constitutes, arguably, the most complex biological system known. Yet most investigation of the brain and its function is carried out using assumptions appropriate for simple systems—univariate design and linear statistical approaches. This heuristic must change before we can hope to discover and test interventions to improve the lives of individuals with complex disorders of brain development and function. Indeed, a movement away from simplistic models of biological systems will benefit essentially all domains of biology and medicine. The present brief essay lays the foundation for this argument.

Introduction

Non-invasive neuroimaging has invigorated a deep and abiding interest in understanding the human brain, the most complex biological system, in health and disease. This burgeoning research focus has impelled technological innovation in neuroimaging and application of a growing number of mathematical/computational approaches to analysis, which help visualize the complexity of the brain in greater depth than previously possible. From our current vantage point we are compelled to ask whether our capabilities have outstripped the paradigms we use for scientific research, and whether our conceptual and analytical frameworks have become a barrier to understanding complex systems.

A deep understanding of complex biological systems requires conceptual and analytical strategies that respect that complexity. Yet, there continues to be a dominating focus in experimental design and analysis on univariate, linear, and narrowly defined relationships. These approaches, including multivariate linear regression (which is an elaboration on the univariate linear framework), are gratifying because they are conceptually simple and align neatly with the traditional scientific method, in which emphasis is placed on a single isolatable dependent variable. However, the univariate/linear approach will necessarily fail when tasked with providing the basis for deep explanations for complex biological systems.
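A toy illustration of that claim (my own sketch, not from the essay; the variables and numbers are arbitrary): when an outcome is driven purely by the interaction of two variables, each univariate linear fit explains almost nothing, while a model that respects the joint structure explains nearly all of it.

```python
import numpy as np

# Illustrative only: outcome y depends on the *interaction* of x1 and x2.
rng = np.random.default_rng(0)
n = 1000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = x1 * x2 + 0.1 * rng.standard_normal(n)   # a nonlinear, interactive "system"

def r_squared(predictor, outcome):
    """Variance explained by a univariate linear fit of outcome on predictor."""
    slope, intercept = np.polyfit(predictor, outcome, 1)
    residual = outcome - (slope * predictor + intercept)
    return 1 - residual.var() / outcome.var()

print(r_squared(x1, y))        # ~0: x1 alone appears to do nothing
print(r_squared(x2, y))        # ~0: x2 alone appears to do nothing
print(r_squared(x1 * x2, y))   # ~1: the interaction carries essentially all the effect
```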

This essay highlights the need to recognize the fallacy of the univariate conceptual framework with respect to complex systems and to embrace complexity so as to align the problem to be solved with the approach taken. We contend that there are some effective ways to study complex systems through care in study design and sample ascertainment, deep phenotyping, and statistical approaches. However, the shift to individual-level analysis, the basis for personalized medicine, will require both methodological advances and a readiness for investigators and reviewers to eschew biologically implausible reductionist models of complex biology.

Continues in source: Frontiers | The Fallacy of Univariate Solutions to Complex Systems Problems | Neuroscience

Do Brains Operate at a Tipping Point? New Clues and Complications | Quanta Magazine

 

Source: Do Brains Operate at a Tipping Point? New Clues and Complications | Quanta Magazine

 

ABSTRACTIONS BLOG

Do Brains Operate at a Tipping Point? New Clues and Complications

New experimental results simultaneously advance and challenge the theory that the brain’s network of neurons balances on the knife-edge between two phases.


A tray of rat cortex slice cultures in John Beggs’ lab at Indiana University. Electrodes record cascades of activity known as neuronal avalanches.

Eric Rudd/Indiana University

A team of Brazilian physicists analyzing the brains of rats and other animals has found the strongest evidence yet that the brain balances at the brink between two modes of operation, in a precarious yet versatile state known as criticality. At the same time, the findings challenge some of the original assumptions of this controversial “critical brain” hypothesis.

Understanding how the huge networks of neurons that comprise our thinking organs process information about the world is a daunting mystery for neuroscientists. One part of that broad puzzle is how a single physical structure can be primed to deal with life’s myriad demands. “If the brain is completely disordered, it cannot process information,” explained Mauro Copelli, a physicist at the Federal University of Pernambuco in Brazil and a coauthor of the new research. “If it’s too ordered, it’s too rigid to cope with the variability of the environment.”

In the 1990s, the physicist Per Bak hypothesized that the brain derives its bag of tricks from criticality. The concept originates in the world of statistical mechanics, where it describes a system of many parts teetering between stability and mayhem. Consider a snowy slope in winter. Early-season snow slides are small, while blizzards late in the season may set off avalanches. Somewhere between these phases of order and catastrophe lies a particular snowpack where anything goes: The next disturbance could set off a trickle, an avalanche or something in between. These events don’t happen with equal likelihood; rather, small cascades occur exponentially more often than larger cascades, which occur exponentially more often than those larger still, and so on. But at the “critical point,” as physicists call the configuration, the sizes and frequencies of events have a simple exponential relationship. Bak argued that tuning to just such a sweet spot would make the brain a capable and flexible information processor.

The idea has had its ups and downs. The first empirical evidence for it came from rat brain slices in 2003. John Beggs, a biophysicist at Indiana University, found that chain reactions of firing neurons, termed “neuronal avalanches,” came in particular arrays of sizes characteristic of criticality. Namely, any size was possible, but — like a snowy slope at its critical point — the frequency of avalanches exponentially depended on their size. Beggs argued that this “power law” relationship meant the brain slice was critical, triggering a flood of follow-up research. Critics, however, eventually showed that the claim was premature, as power laws show up in random systems, too — such as the word frequencies produced by a monkey at a typewriter.

Proponents faced two other conundrums: The so-called critical exponent defining a power law — the number indicating, for example, how many smaller avalanches occur relative to the larger ones — varied by setup, belying the notion of a universal mechanism behind the brain’s responses. Moreover, experimentalists found stronger signs of criticality in synchronized neural waves, which occur most often during deep sleep, than in the more scattershot firing patterns of alert animal brains. This difference puzzled researchers, who didn’t predict a relationship between criticality and synchronicity.

To address these challenges, Copelli and his collaborators drugged rats using a particular anesthetic that lets brains swing between the extremes of synchronization, sometimes firing in a synchronized manner typical of sleep and other times resembling the random static of awake brains. Recording the swells of neural activity in the primary visual cortex with dozens of metal probes, the group found that the sizes and durations, and the relationships between the sizes and durations, of neuronal avalanches all fit power law distributions with varying critical exponents — similar to Beggs’ 2003 findings in brain slices of dead rats.

Going further, though, they showed that when neurons fired with a certain moderate level of synchronicity, these three exponents fit together according to a simple equation. This relationship among the exponents satisfied a more stringent test for criticality suggested by critics in 2017. The brains of the anesthetized rats spent most of their time close to this state, seemingly hovering near the dividing line between two phases.
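For readers who want the flavour of such a test, here is a rough sketch (my own simplification; the specific scaling relation and the clean continuous power-law fits are assumptions on my part, not quoted from the article): estimate the avalanche-size exponent τ and the duration exponent α, estimate how mean avalanche size grows with duration, and check whether (α − 1)/(τ − 1) matches that growth exponent.

```python
import numpy as np

def powerlaw_exponent(samples, xmin=1.0):
    """Maximum-likelihood exponent for a continuous power law P(x) ~ x**(-a), x >= xmin."""
    x = np.asarray(samples, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

def criticality_check(sizes, durations):
    """Crackling-noise-style consistency check (assumed form of the stricter test).
    sizes[i] and durations[i] describe the i-th avalanche."""
    tau = powerlaw_exponent(sizes)        # avalanche-size exponent
    alpha = powerlaw_exponent(durations)  # avalanche-duration exponent
    sizes = np.asarray(sizes, dtype=float)
    durations = np.asarray(durations, dtype=float)
    uniq = np.unique(durations)
    mean_size = np.array([sizes[durations == d].mean() for d in uniq])
    gamma = np.polyfit(np.log(uniq), np.log(mean_size), 1)[0]   # <S>(T) ~ T**gamma
    predicted = (alpha - 1.0) / (tau - 1.0)
    # at criticality, predicted and gamma should agree (within error)
    return tau, alpha, gamma, predicted
```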

“It’s a smoking gun; you can’t escape it anymore,” said Beggs, who was not involved in the research. “It’s very hard to say that this is random.”

When the team looked in detail at where the critical point fell, however, they found that the rat brains weren’t balanced between phases of low and high neuronal activity, as predicted by the original critical brain hypothesis; rather, the critical point separated a phase in which neurons fired synchronously and a phase characterized by largely incoherent firing of neurons. This distinction may explain the hit-or-miss nature of past criticality searches. “The fact that we have reconciled the data from earlier research really points to something more general,” said Pedro Carelli, Copelli’s colleague and a coauthor of the research, which appeared in Physical Review Letters in late May.

But an anesthetized brain is not natural, so the scientists repeated their analysis on public data describing neural activity in free-roaming mice. They again found evidence that the animals’ brains sometimes experienced criticality satisfying the new gold standard from 2017. However, unlike with the anesthetized rats, neurons in the mice brains spent most of their time firing asynchronously — away from the alleged critical point of semi-synchronicity.

Copelli and Carelli acknowledge that this observation poses a challenge to the notion that the brain prefers to be in the vicinity of the critical point. But they also stress that without running the awake-animal experiment themselves (which is prohibitively expensive), they can’t conclusively interpret the mouse data. Poor sleep during the experiment, for instance, could have biased the animals’ brains away from criticality, Copelli said.

They and their colleagues also analyzed public data on monkeys and turtles. Although the data sets were too limited to confirm criticality with the full three-exponent relationship, the team calculated the ratio between two different power-law exponents indicating the distributions of avalanche sizes and durations. This ratio — which represents how quickly avalanches spread out — was always the same, regardless of species and whether the animal was under anesthesia. “To a physicist, this suggests some kind of universal mechanism,” Copelli said.

Alain Destexhe of the National Center for Scientific Research (CNRS) in France, the critic who proposed the equation relating the three exponents as a test of criticality, called the universality of the results “astonishing,” but said he isn’t sure if it means what critical brain proponents say. He points out that because avalanches in alert brains scale similarly to those in brains under deep anesthesia — when they have no sensory input — criticality may have nothing to do with how the brain processes information, and could be due to some other aspect of brain dynamics.

Next, the Brazilian team hopes to study how the rats’ synchronous and asynchronous brain phases relate to behavior, a puzzle compounded by the fact that synchronized bursts are common during sleep, but also occur in awake brains.

Other research has linked sleep with restoring a destabilized brain to its critical point, and Beggs thinks further studies may someday establish deeper connections between mental health and the physics of the brain. But first, Copelli says, the criticality field needs to address more basic questions. “The current theory can’t explain the result,” he said, meaning his and his colleagues’ new findings, “so it opens the race again for models.”

 

Source: Do Brains Operate at a Tipping Point? New Clues and Complications | Quanta Magazine

Frontiers | Breakdown of Modularity in Complex Networks | Physiology – Sergi Valverde

 

Source: Frontiers | Breakdown of Modularity in Complex Networks | Physiology

 

ORIGINAL RESEARCH ARTICLE

Front. Physiol., 13 July 2017 | https://doi.org/10.3389/fphys.2017.00497

Breakdown of Modularity in Complex Networks

  • 1ICREA-Complex Systems Lab, Universitat Pompeu Fabra, Barcelona, Spain
  • 2Institute of Evolutionary Biology (UPF-CSIC), Barcelona, Spain
  • 3European Centre for Living Technology, University Ca’ Foscari, Venezia, Italy

The presence of modular organization is a common property of a wide range of complex systems, from cellular or brain networks to technological graphs. Modularity allows some degree of segregation between different parts of the network and has been suggested to be a prerequisite for the evolvability of biological systems. In technology, modularity defines a clear division of tasks and it is an explicit design target. However, many natural and artificial systems experience a breakdown in their modular pattern of connections, which has been associated with failures in hub nodes or the activation of global stress responses. In spite of its importance, no general theory of the breakdown of modularity and its implications has been advanced yet. Here we propose a new, simple model of network landscape where it is possible to exhaustively characterize the breakdown of modularity in a well-defined way. Specifically, by considering the space of minimal Boolean feed-forward networks implementing the 256 Boolean functions with 3 inputs, we were able to relate functional characteristics with the breakdown of modularity. We found that evolution cannot reach maximally modular networks under the presence of functional and cost constraints, implying the breakdown of modularity is an adaptive feature.
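For readers wondering where the 256 comes from: a 3-input Boolean function assigns one output bit to each of the 2³ = 8 input patterns, so there are 2⁸ = 256 such functions. A small sketch (mine, not the authors' code) enumerating them by their truth tables:

```python
from itertools import product

def truth_table(function_id: int):
    """Truth table of 3-input Boolean function number function_id (0..255)."""
    rows = []
    for i, (a, b, c) in enumerate(product([0, 1], repeat=3)):
        output = (function_id >> i) & 1   # the i-th bit of the label is the output
        rows.append(((a, b, c), output))
    return rows

all_functions = [truth_table(n) for n in range(256)]
assert len(all_functions) == 256   # the whole function space considered in the paper
```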

Continues in source: Frontiers | Breakdown of Modularity in Complex Networks | Physiology

 

Hiding behind chaos and error in the double pendulum | Theory, Evolution, and Games Group – Artem Kaznatcheev

Aha! Here’s a good scientific argument why identification of ‘non-deterministic systems’ is dependent on perspective, level of analysis and (importantly) good old-fashioned accuracy – ‘indeterminacy’ is not an act of fiat at all levels in every sense based on certain systems conditions. I am pleased to see this in so concise a form.

 

as the author says:

“Of course, this doesn’t mean that there aren’t interesting discussions to be had on chaos and prediction. I’ve written before on computer science on prediction and the edge of chaos and on how stochasticity, chaos and computations can limit prediction. But we shouldn’t use chaos or complexity to stop ourselves from asking questions or making predictions. Instead, we should use apparent complexity as motivation to find the limits of more linear theories. And whatever system we work with, we should look for overarching global principles like the conservation of energy that we can use to abstract over the chaotic microdynamics.”

 

Source: Hiding behind chaos and error in the double pendulum | Theory, Evolution, and Games Group

 

Hiding behind chaos and error in the double pendulum

If you want a visual intuition for just how unpredictable chaotic dynamics can be then the go-to toy model is the double pendulum. There are lots of great simulations (and some physical implementations) of the double pendulum online. Recently, /u/abraxasknister posted such a simulation on the /r/physics subreddit and quickly attracted a lot of attention.

In their simulation, /u/abraxasknister has a fixed center (black dot) that the first mass (red dot) is attached to (by an invisible rigid massless bar). The second mass (blue dot) is then attached to the first mass (also by an invisible rigid massless bar). They then release these two masses from rest at some initial height and watch what happens.

The resulting dynamics are at right.

It is certainly unpredictable and complicated. Chaotic? Most importantly, it is obviously wrong.

But because the double pendulum is a famous chaotic system, some people did not want to acknowledge that there is an obvious mistake. They wanted to hide behind chaos: they claimed that for a complex system, we cannot possibly have intuitions about how the system should behave.

In this post, I want to discuss the error of hiding behind chaos, and how the distinction between microdynamics and global properties lets us catch /u/abraxasknister’s mistake.

A number of people on Reddit noticed the error with /u/abraxasknister’s simulation right away. But the interesting part for me was how other people then jumped in to argue that the correctors could not possibly know what they were talking about.

For example, /u/Rickietee10 wrote:

It’s based on [chaos] theory. … Saying it doesn’t [look] right isn’t even something you can say, because it’s completely random.

Or /u/chiweweman’s dismissing a correct diagnosis of the mistake with:

That’s possible, but also double pendulums involve chaos theory. It’s likely this is just a frictionless simulation.

These detractors were trying to hide behind complexity. They thought that unpredictable microdynamics meant that nothing about the system is knowable. Of course, they were wrong.

But their error is an interesting one. This seems like an unfortunately common misuse of chaos in some corners of complexology. We say that some system (say the economy) is complex. Thus it is unknowable. Thus, people offering linear theories (say economists) cannot possibly know what they are talking about. They cannot possibly be right.

Have you encountered variants of this argument, dear reader?

This kind of argument is wrong. And in the case of the double pendulum, /u/GreatBigBagOfNope responded best:

You can’t just slap the word chaos on something and expect the conservation of energy to no longer apply

So let us use the conservation of energy to explain why the simulation is wrong.

From the initial conditions, we can get an estimate of the system’s energy. This is particularly easy in this case since the masses start at rest at some height — thus all energy is potential energy. From this — due to the time-invariance of the Hamiltonian specifying the double pendulum — we know by Noether’s theorem that this initial energy will be conserved. In this particular case, this means that we cannot ever have both of the masses above their initial positions at the same time. If that happened, then the potential energy of that configuration alone would be strictly higher than the total initial energy. Since we see both of the masses simultaneously above their initial position in the gif, we can conclude that there is an error in /u/abraxasknister’s simulation.
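Here is a minimal sketch of that debugging invariant (my own illustration, not /u/abraxasknister’s code; the masses, rod lengths, and variable names are assumptions): compute the total energy along a simulated trajectory of a rigid-rod double pendulum released from rest, and flag the first sample where the energy exceeds its initial value.

```python
import numpy as np

g = 9.81
m1, m2 = 1.0, 1.0      # masses of the red and blue bobs (assumed)
L1, L2 = 1.0, 1.0      # rod lengths (assumed)

def energy(th1, th2, w1, w2):
    """Total energy E = T + U, with angles measured from the downward vertical
    and the fixed pivot taken as the zero of potential energy."""
    # kinetic energy of the two bobs
    T = 0.5 * m1 * (L1 * w1) ** 2 \
        + 0.5 * m2 * ((L1 * w1) ** 2 + (L2 * w2) ** 2
                      + 2 * L1 * L2 * w1 * w2 * np.cos(th1 - th2))
    # potential energy from the heights of the bobs relative to the pivot
    y1 = -L1 * np.cos(th1)
    y2 = y1 - L2 * np.cos(th2)
    U = m1 * g * y1 + m2 * g * y2
    return T + U

def check_conservation(trajectory, tol=1e-3):
    """trajectory: list of (th1, th2, w1, w2) samples, starting from rest.
    Returns the index of the first sample whose energy exceeds the initial
    energy by more than tol -- exactly the violation visible in the buggy gif."""
    th1, th2, w1, w2 = trajectory[0]
    E0 = energy(th1, th2, w1, w2)        # all potential, since released from rest
    for i, (th1, th2, w1, w2) in enumerate(trajectory):
        if energy(th1, th2, w1, w2) > E0 + tol:
            return i                     # energy appeared out of nowhere: a bug
    return None                          # no violation found
```

Had the simulation's output been run through a check like this, the typo would have surfaced immediately.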

I enjoy this kind of use of global abstract argument to reason without knowing the details of microdynamics. For me, this is the heart of theoretical computer science.

Based on this violation of energy conservation, many theories were discussed for what the error in the simulation might have been. And the possibility of energy-pumping from finite step size was a particularly exciting candidate. A ‘bug’ (that can be a ‘feature’) that I’ll discuss another day in the context of replicator dynamics.

The actual main mistake turned out to be much less exciting: a typo in the code. A psy instead of phi in one equation.

I’m sure that all of us that have coded simulations can relate. If only we always had something as nice as the conservation of energy to help us debug.

Of course, this doesn’t mean that there aren’t interesting discussions to be had on chaos and prediction. I’ve written before on computer science on prediction and the edge of chaos and on how stochasticity, chaos and computations can limit prediction. But we shouldn’t use chaos or complexity to stop ourselves from asking questions or making predictions. Instead, we should use apparent complexity as motivation to find the limits of more linear theories. And whatever system we work with, we should look for overarching global principles like the conservation of energy that we can use to abstract over the chaotic microdynamics.

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

Source: Hiding behind chaos and error in the double pendulum | Theory, Evolution, and Games Group

 

Dialectical Systems Learning

CSL4D

During the past month or so I have been ruminating over my post of May 10, 2019, which was about my latest effort to come to a better understanding of the workings of a systems approach described in a workbook that I am co-writer of. Wicked Solutions, as it is called, uses three operable systems concepts to explain systems thinking in a nutshell and encourages learners to apply them directly on a ‘wicked’ problem of their own so as to gain a direct, hands-on experience of their usefulness. The three concepts are: inter-relationships, perspectives, and boundaries. Last week I had a discussion with two members of staff of Australia’s Southern Cross University, Ken Doust and Andrew Swan, who has used Wicked Solutions in one of his courses. They had several critical observations that set me thinking. One idea was to focus the dialectical systems approach of Wicked…

View original post 1,033 more words

Brief history of the cybernetic paradigm – Javier Livas on YouTube

Javier Livas Cantu, “Brief History of the Cybernetic Paradigm” (YouTube, published 12 Jun 2019, 4:06): “From the Bible to the Newtonian Paradigm, knowledge has finally come to depend on the CYBERNETIC PARADIGM. Most science today is done using models.”

Uberty

[OMG. Proceed with caution, there could be pirated material here. But what a treasure trove!]

 

Source: Uberty

Uberty is a resource and research hub indexing documents and media related to New Rationalism, Accelerationism, and other developing theory and …

But it does not contribute to the uberty of reasoning which far more calls for solicitous care —C.S. Peirce

Brain of the Firm – full pdf

Click to access beer.pdf

 

Tagged ‘systems’


 

 

Connecting with Source, Self, System – deep immersion

The Nature of Business (Giles Hutchins)

Connecting with Source, Self, System

5th September, 2019

with Giles Hutchins and Katherine Long

Katherine Long and Giles Hutchins are hosting a unique nature-immersion retreat, an opportunity for profound reflection which will renew, re-energise and regenerate – a day that will support you to deeply reconnect with your sense of purpose, the wider systems you are a part of, and the future you seek to co-create.

Together with other change practitioners (leaders, coaches, organization design and development practitioners, activists) we will explore questions such as:

  • What shifts in ourselves, our work, and in our professional communities are needed to respond to the scale of threat this planet faces?
  • What can we learn from living-systems about resilience, adaptability, collective intelligence and change?
  • How do we support healing and wholeness in a fragmented world?
  • What can we do to resource ourselves and each other to stay aligned to ‘deep purpose’ whilst…

View original post 126 more words

Gordon Pask’s Adaptive Teaching Machines

 

Source: Gordon Pask’s Adaptive Teaching Machines

The earliest teaching machines – those built by B. F. Skinner and Sidney Pressey, for example – were not adaptive. They did promise “personalization” of sorts by allowing students to move at their own pace through the lessons, but that path was quite rigidly scripted. The machines only responded to right or wrong, allowing students to proceed to the next question if they got the previous question right. And the point, particularly of machines designed around Skinner’s theory of “operant conditioning,” was for the student to get it right, that is to maximize the positive reinforcement. As Paul Saettler writes in his 1968 book, A History of Instructional Technology, “Effective Skinnerian programming requires instructional sequences so simple that the learner hardly ever makes an error. If the learner makes too many errors – more than 5 to 10 percent – the program is considered in need of revision.” These machines could not diagnose why a student got an answer wrong or right; again, according to behaviorist theory, the machines were designed so to make sure students got it right.

Despite initial excitement of learning with a new technology like one of Skinner’s teaching machines, many students found these devices to be quite boring. “The biggest problem with programmed instruction was simply that kids hated it,” writes Bob Johnstone in Never Mind the Laptops. “In fact, it drove them nuts – especially the brighter ones. The rigidity of the seemingly endless, tiny-steps, one-word-answer format bored clever students to tears. They soon found ingenious ways of circumventing the programs and even, in some cases, of sabotaging the machines. A well-placed wad of chewing gum could throw a whole terminal out of whack.”

Adaptive Teaching Machines

Best known for Conversation Theory, the British cybernetician Gordon Pask designed a different sort of teaching machine – an adaptive teaching machine – patenting it in 1956. This patent provides the basis for the self-adaptive keyboard instructor (SAKI), which the theorist Stafford Beer described as “possibly the first truly cybernetic device (in the full sense) to rise above the status of a ‘toy’ and reach the market as a useful machine.”

The SAKI was designed to train people to use a Hollerith key punch, a manual device used to punch holes in cards used in turn for data processing. There was at the time quite a significant demand for keypunch operators – mostly women – as this was, until the 1970s, a common method for data entry.

Image credits: Gordon Pask, “SAKI: Twenty-five years of adaptive training into the microprocessor era”

Like many teaching machines (then and now), SAKI purported to function like a human tutor. But unlike earlier teaching machines, the adaptive component of Pask’s devices offers more than just an assessment of right or wrong: it identifies and measures a student’s answers – accuracy, response time – and adjusts the next question accordingly. That is, the difficulty of the questions is not pre-programmed or pre-ordained.
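A minimal sketch of that adaptive idea (my own illustration, not Pask’s actual mechanism; all names and thresholds are invented): each drill item tracks the learner’s accuracy and response time and adjusts its own pacing, and the machine picks the item the learner is currently worst at rather than following a fixed script.

```python
from dataclasses import dataclass

@dataclass
class ItemState:
    interval: float = 3.0   # seconds before the machine shows the cue for this item
    errors: int = 0
    attempts: int = 0

def update(item: ItemState, correct: bool, response_time: float) -> None:
    """Adjust one item's pacing from the learner's accuracy and response time."""
    item.attempts += 1
    if correct and response_time < 0.6 * item.interval:
        # fast and accurate on this item: shorten the interval (make it harder)
        item.interval = max(0.5, item.interval * 0.8)
    elif not correct:
        item.errors += 1
        # struggling on this item: lengthen the interval (more help, more time)
        item.interval = min(5.0, item.interval * 1.25)

def next_item(items: dict) -> str:
    """Pick the key the learner is currently worst at, not the next one in a script."""
    return max(items, key=lambda k: (items[k].errors / max(1, items[k].attempts),
                                     items[k].interval))

# Hypothetical usage for a few keypunch keys
keys = {"A": ItemState(), "7": ItemState(), "%": ItemState()}
update(keys["7"], correct=False, response_time=4.2)
print(next_item(keys))   # -> "7"
```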

Continues in source…

Teaching Machines: An American Story (And the Case for Gordon Pask)

 

Source: Teaching Machines: An American Story (And the Case for Gordon Pask)

6 min read

One of the criticisms I get about my work is that it is too focused on education technology in the US. I typically hear this every December, when I publish my year-end review of the field. Although I recognize that Americans are prone to self-centeredness, I don’t purposefully overlook the rest of the world’s experiences out of any sense of nationalism. Rather, I believe that education technology is imagined, developed, and implemented in a particular context. And that context is shaped by a country’s school systems, educational policies, and larger social, economic, and political forces.

(I often say: if you want to write an annual ten-part series about how your country has experienced education technology, please do.)

As I’ve written previously, many histories of education technology have been written as though this context is irrelevant. They spend little time talking about what was happening in education (as an institution, for example). As such, new technologies seem to appear out of nowhere – a creation of a genius inventor, rather than a reflection of some larger cultural forces.

Teaching Machines will be limited in its scope to a particular time period in a particular country – that is, to the mid–1920s thru the late 1960s in the US. I want to be able to contextualize the work of Sidney Pressey, B. F. Skinner, Norman Crowder, and others by addressing how their machines coincided with developments in educational psychology and standardized testing; how they were responses to changes in student demographics and to the launch of Sputnik; how these machines reflected a twentieth-century fascination with gadgetry and automation; how they were part of a much larger push by businesses to sell curriculum products to schools; how they underscored that most American of values, individualism, with their proponents calling for instruction to become more “individualized.”

Education technology is not solely an American story. But the one I’m writing will be.

There is (I think) one possible exception to the American setting and American cast of characters, and that’s the British cybernetician Gordon Pask.

Continues in source…

Dancing with systems, uncertainty & positive emergence – Daniel Christian Wahl

 

Source: Dancing with systems, uncertainty & positive emergence

 

Age of Awareness
[Image: a whirling Dervish]

Dancing with systems & designing for positive emergence

This webinar was hosted by Deeanna Burleson, Ph.D. in the ‘Topics in the field of systems science’ series intended as a contribution to the International Society for Systems Science (here is a link to the original announcement of the webinar).

The webinar (video link below) starts with a 45 minute presentation offering reflections on nearly 20 years of experience in trying to apply whole systems thinking to the field of design for sustainability and more recently regenerative development practice.

I explore the limitations of a quantity-focussed science and the need for a new ‘science of qualities’. I explore how the work and thinking of Donella Meadows evolved from ‘leverage points 1.0’ to ‘leverage points 2.0’ and on to ‘Dancing with systems’. Nora Bateson’s ‘warm data’ approach is mentioned as a related example of this relationships- and qualities-focussed work with systems from the inside. Katia Laszlo’s framing from ‘systems thinking to systems being’ is also briefly addressed.

I go on to explore some of the most common mistakes made in intervening in complex adaptive systems and highlight the need for embracing the fundamental unpredictability of such systems as we move from being detached observers to being engaged participants of such systems. In this context I also speak to the power that lies in asking the right questions and asking questions rather than offering a list of principles to follow.

To ground these theoretical considerations and paradigmatic shifts in a practical example, I offer a brief summary of my work for Gaia Education on the ‘SDG Flashcards’, the ‘SDG Project Canvas’, the ‘SDG Training of Multipliers’ and the ‘Multipliers Handbook’ as an example of ‘designing for positive emergence’ and taking Bucky Fuller’s advice that “to change the way people think, don’t tell them what to think, give them a tool the use of which will change the way they think”. The flashcards by Gaia Education and UNESCO have been used successfully on 5 continents and translated into 6 languages.

… after the presentation there is another 45-minute conversation or Q&A session that addresses a wide range of topics related to ‘Designing Regenerative Culture’ … enjoy and share!
