Why I Am Not A Technocrat | RadicalxChange

 

Source: Why I Am Not A Technocrat | RadicalxChange

Why I Am Not A Technocrat

In the months leading up to the RadicalxChange conference in March, I wrote a series of critiques of prominent contemporary ideologies (capitalism, statism and nationalism) as well as an attempt to sketch the positive beliefs of the RxC movement. Since that time, however, it has become apparent that I omitted a critical contemporary ideology, perhaps the one with which RxC is most likely to be confused by outsiders (and to which most RxC participants previously subscribed): technocracy. I myself was socialized into a highly technocratic culture. In this blog post I try to fill this lacuna.

By technocracy, I mean the view that most of governance and policy should be left to some type of “experts”, distinguished by meritocratically evaluated training in formal methods used to “optimize” social outcomes. Many technocrats are at least open to a degree of ultimate popular sovereignty over government, but believe that such democratic checks should operate at quite a high level, evaluating government performance on “final outcomes” rather than the means of achieving them. They thus believe the intelligibility and evaluability of technocratic designs by the broader public are of little value. Within these broad outlines, technocracy comes in many flavors. A couple of notable and less democratic versions are the forms adopted by the Chinese Communist Party and by the “neoreactionary” movement, with its celebration of Lee Kuan Yew’s Singapore.

Yet perhaps the most prominent version, especially in democratic countries, is a belief in a technocracy based on a mixture of analytic philosophy, economic theory, computational power and high-volume statistical analysis, often using experimentation. This form of technocracy is widely held among the academic and high-technology elites, who are among the most powerful groups in the world today. I focus on this tendency because I assume it will be the form of technocracy most familiar and attractive to my readers, and because the neoreactionary and Chinese Communist technocracies have much in common with it, both conceptually and in their intellectual history. More extreme versions of this view, likely to be popular among my readers, are common in the “rationalist” community and in projects adjoining it, such as effective altruism, mechanism design, artificial intelligence alignment and, to a lesser extent, humane design. I will critique each of these tendencies in detail as archetypes of technocracy.

Such rationalist projects are generally “outcome oriented” and utilitarian, and have great faith in formal and quantitative methods of analysis and measurement. Their standard operating procedure is to take abstract goals related to human welfare, derive from these a series of more easily measurable target metrics (ranging from gross domestic product to specific village-level health outcomes) and use optimization tools and empirical analysis drawn from economics, computer science and statistics to maximize those metrics. This process is imagined as taking place overwhelmingly outside the public eye and is viewed as technical in nature. The public is invited to judge final outcomes only, and to offer input into the process only through formalisms such as “likes”, bets, votes, etc. Constraints on this process based on democratic legitimacy or explicability, “common sense” restrictions on what should or shouldn’t be optimized, unstructured or verbal input into the process by those lacking formal training, and so on are all viewed as harmful noise at best and as destructive meddling by ill-informed politics at worst.
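
To make this standard operating procedure concrete, here is a minimal, hypothetical sketch in Python. None of it is drawn from the essay: the “policy” variable, the welfare function and the proxy metric are invented for illustration. A generic optimizer is pointed at an easily measured proxy rather than at the underlying goal, so whatever the proxy does not measure receives no weight, anticipating the failure mode discussed below.

    # A minimal, hypothetical sketch of the pipeline described above (not from the essay).
    # The welfare function, proxy metric and "policy" variable are all invented for illustration.
    import math
    import random

    random.seed(0)

    def true_welfare(measured_share):
        # Abstract goal: welfare depends on both the measured activity and the
        # unmeasured activities that share the same fixed budget.
        unmeasured_share = 1.0 - measured_share
        return math.sqrt(measured_share) + math.sqrt(unmeasured_share)

    def proxy_metric(measured_share):
        # Easily measurable target metric (e.g. GDP, or a village-level health
        # indicator): it only sees the measured activity.
        return math.sqrt(measured_share)

    def optimize(metric, steps=10_000):
        # Generic optimization loop: hill-climb on whatever metric is supplied.
        best = random.random()
        for _ in range(steps):
            candidate = min(1.0, max(0.0, best + random.uniform(-0.05, 0.05)))
            if metric(candidate) > metric(best):
                best = candidate
        return best

    policy = optimize(proxy_metric)
    print("share of budget on the measured activity: %.2f" % policy)   # ~1.00
    print("proxy score:  %.2f" % proxy_metric(policy))                 # ~1.00
    print("true welfare: %.2f" % true_welfare(policy))                 # ~1.00, versus ~1.41 at the balanced optimum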

The fundamental problem with technocracy on which I will focus (as it is most easily understood within the technocratic worldview) is that formal systems of knowledge creation always have their limits and biases. They always leave out important considerations that are only discovered later and that often turn out to have a systematic relationship to the limited cultural and social experience of the groups developing them. They are thus subject to a wide range of failure modes that can be interpreted as reflecting a mixture of corruption and incompetence on the part of the technocratic elite. Only systems that leave a wide range of latitude for broader social input can avoid these failure modes. Yet allowing such social input requires simplification, distillation, collaboration and a relative reduction in the social status and monetary rewards allocated to technocrats compared to the rest of the population, thereby running directly against the technocratic ideology. While technical knowledge, appropriately communicated and distilled, has potentially great benefits in opening social imagination, it can only achieve this potential if it understands itself as part of a broader democratic conversation.

My argument proceeds in six parts:

  1. Formal social systems intended to serve broad populations always have blind spots and biases that cannot be anticipated in advance by their designers.
  2. Historically, these blind spots have often led to disastrous outcomes when left unchecked by external input. If this input is deferred to the outcome stage, disasters must occur before the system is reconsidered, rather than biases being caught during the process.
  3. Failures of technocracy in managing economic and computational systems today bear significant responsibility for widespread feelings of illegitimacy that threaten respect for the best-grounded science that technocrats believe is most important for the public to trust.
  4. Technical insights and designs are best able to avoid this problem when, whatever their analytic provenance, they can be conveyed in a simple and clear way to the public, allowing them to be critiqued, recombined, and deployed by a variety of members of the public outside the technical class.
  5. Technical experts therefore have a critical role precisely if they can make their technical insights part of a social and democratic conversation that stretches well beyond the role for democratic participation imagined by technocrats. Ensuring this role cannot be separated from the work of design.
  6. Technocracy divorced from the need for public communication and accountability is thus a dangerous ideology that distracts technical experts from the valuable role they can play by tempting them to assume undue, independent power and influence.

 

Continues in source: Why I Am Not A Technocrat | RadicalxChange

Excellent tweet stream for #RSD8 conference

https://twitter.com/hashtag/rsd8?src=hashtag_click

SystemViz Project by Elanica

THE PROJECT

SystemViz is a research project exploring how visuals can enhance systems thinking, especially as it relates to interdisciplinary, collaborative design. Findings are expressed as visual codexes and other applied tools. Phase One of the project is an exploration of the visual notation techniques used to express systems across disciplines. Phase Two is an exploration of the theoretical literature of various disciplines (natural sciences, social sciences, design disciplines, and managerial disciplines) to distil the basic elements and dynamics. These elements and dynamics are then abstracted into generic types and displayed with an illustrative icon. This is called a Visual Vocabulary, which is being released under a Creative Commons Free Culture license for all to use and modify. Watch this space in the coming days for kits to download. In the meantime, look at the first codex poster, which can be downloaded here

Source: SystemViz Project by Elanica

 

 

How Complex Systems Fail

Source: How Complex Systems Fail

 

How Complex Systems Fail

(Being a Short Treatise on the Nature of Failure; How Failure is Evaluated; How Failure is Attributed to Proximate Cause; and the Resulting New Understanding of Patient Safety)

Richard I. Cook, MD
Cognitive Technologies Laboratory
University of Chicago
  1. Complex systems are intrinsically hazardous systems.

    All of the interesting systems (e.g. transportation, healthcare, power generation) are inherently and unavoidably hazardous by their own nature. The frequency of hazard exposure can sometimes be changed but the processes involved in the system are themselves intrinsically and irreducibly hazardous. It is the presence of these hazards that drives the creation of defenses against hazard that characterize these systems.

  2. Complex systems are heavily and successfully defended against failure

    The high consequences of failure lead over time to the construction of multiple layers of defense against failure. These defenses include obvious technical components (e.g. backup systems, ‘safety’ features of equipment) and human components (e.g. training, knowledge) but also a variety of organizational, institutional, and regulatory defenses (e.g. policies and procedures, certification, work rules, team training). The effect of these measures is to provide a series of shields that normally divert operations away from accidents.

  3. Catastrophe requires multiple failures – single point failures are not enough.

    The array of defenses works. System operations are generally successful. Overt catastrophic failure occurs when small, apparently innocuous failures join to create opportunity for a systemic accident. Each of these small failures is necessary to cause catastrophe but only the combination is sufficient to permit failure. Put another way, there are many more failure opportunities than overt system accidents (a toy simulation sketched after this list illustrates this arithmetic). Most initial failure trajectories are blocked by designed system safety components. Trajectories that reach the operational level are mostly blocked, usually by practitioners.

  4. Complex systems contain changing mixtures of failures latent within them.

    The complexity of these systems makes it impossible for them to run without multiple flaws being present. Because these are individually insufficient to cause failure they are regarded as minor factors during operations. Eradication of all latent failures is limited primarily by economic cost but also because it is difficult before the fact to see how such failures might contribute to an accident. The failures change constantly because of changing technology, work organization, and efforts to eradicate failures.

  5. Complex systems run in degraded mode.

    A corollary to the preceding point is that complex systems run as broken systems. The system continues to function because it contains so many redundancies and because people can make it function, despite the presence of many flaws. After-accident reviews nearly always note that the system has a history of prior ‘proto-accidents’ that nearly generated catastrophe. Arguments that these degraded conditions should have been recognized before the overt accident are usually predicated on naïve notions of system performance. System operations are dynamic, with components (organizational, human, technical) failing and being replaced continuously.

  6. Catastrophe is always just around the corner.

    Complex systems possess potential for catastrophic failure. Human practitioners are nearly always in close physical and temporal proximity to these potential failures – disaster can occur at any time and in nearly any place. The potential for catastrophic outcome is a hallmark of complex systems. It is impossible to eliminate the potential for such catastrophic failure; the potential for such failure is always present by the system’s own nature.

  7. Post-accident attribution to a ‘root cause’ is fundamentally wrong.

    Because overt failure requires multiple faults, there is no isolated ‘cause’ of an accident. There are multiple contributors to accidents. Each of these is necessarily insufficient in itself to create an accident. Only jointly are these causes sufficient to create an accident. Indeed, it is the linking of these causes together that creates the circumstances required for the accident. Thus, no isolation of the ‘root cause’ of an accident is possible. The evaluations based on such reasoning as ‘root cause’ do not reflect a technical understanding of the nature of failure but rather the social, cultural need to blame specific, localized forces or events for outcomes. 1

    1 Anthropological field research provides the clearest demonstration of the social construction of the notion of ‘cause’ (cf. Goldman L (1993), The Culture of Coincidence: Accident and Absolute Liability in Huli, New York: Clarendon Press; and also Tasca L (1990), The Social Construction of Human Error, unpublished doctoral dissertation, Department of Sociology, State University of New York at Stony Brook).

  8. Hindsight biases post-accident assessments of human performance.

    Knowledge of the outcome makes it seem that events leading to the outcome should have appeared more salient to practitioners at the time than was actually the case. This means that ex post facto accident analysis of human performance is inaccurate. The outcome knowledge poisons the ability of after-accident observers to recreate the view that practitioners had of those same factors before the accident. It seems that practitioners “should have known” that the factors would “inevitably” lead to an accident. 2 Hindsight bias remains the primary obstacle to accident investigation, especially when expert human performance is involved.

    2 This is not a feature of medical judgements or technical ones, but rather of all human cognition about past events and their causes.

  9. Human operators have dual roles: as producers & as defenders against failure.

    The system practitioners operate the system in order to produce its desired product and also work to forestall accidents. This dynamic quality of system operation, the balancing of demands for production against the possibility of incipient failure is unavoidable. Outsiders rarely acknowledge the duality of this role. In non-accident filled times, the production role is emphasized. After accidents, the defense against failure role is emphasized. At either time, the outsider’s view misapprehends the operator’s constant, simultaneous engagement with both roles.

  10. All practitioner actions are gambles.

    After accidents, the overt failure often appears to have been inevitable and the practitioner’s actions appear to be blunders or deliberate, willful disregard of certain impending failure. But all practitioner actions are actually gambles, that is, acts that take place in the face of uncertain outcomes. The degree of uncertainty may change from moment to moment. That practitioner actions are gambles appears clear after accidents; in general, post hoc analysis regards these gambles as poor ones. But the converse, that successful outcomes are also the result of gambles, is not widely appreciated.

  11. Actions at the sharp end resolve all ambiguity.

    Organizations are ambiguous, often intentionally, about the relationship between production targets, efficient use of resources, economy and costs of operations, and acceptable risks of low and high consequence accidents. All ambiguity is resolved by actions of practitioners at the sharp end of the system. After an accident, practitioner actions may be regarded as ‘errors’ or ‘violations’ but these evaluations are heavily biased by hindsight and ignore the other driving forces, especially production pressure.

  12. Human practitioners are the adaptable element of complex systems.

    Practitioners and first line management actively adapt the system to maximize production and minimize accidents. These adaptations often occur on a moment by moment basis. Some of these adaptations include: (1) Restructuring the system in order to reduce exposure of vulnerable parts to failure. (2) Concentrating critical resources in areas of expected high demand. (3) Providing pathways for retreat or recovery from expected and unexpected faults. (4) Establishing means for early detection of changed system performance in order to allow graceful cutbacks in production or other means of increasing resiliency.

  13. Human expertise in complex systems is constantly changing

    Complex systems require substantial human expertise in their operation and management. This expertise changes in character as technology changes but it also changes because of the need to replace experts who leave. In every case, training and refinement of skill and expertise is one part of the function of the system itself. At any moment, therefore, a given complex system will contain practitioners and trainees with varying degrees of expertise. Critical issues related to expertise arise from (1) the need to use scarce expertise as a resource for the most difficult or demanding production needs and (2) the need to develop expertise for future use.

  14. Change introduces new forms of failure.

    The low rate of overt accidents in reliable systems may encourage changes, especially the use of new technology, to decrease the number of low consequence but high frequency failures. These changes may actually create opportunities for new, low frequency but high consequence failures. When new technologies are used to eliminate well-understood system failures or to gain high precision performance they often introduce new pathways to large scale, catastrophic failures. Not uncommonly, these new, rare catastrophes have even greater impact than those eliminated by the new technology. These new forms of failure are difficult to see before the fact; attention is paid mostly to the putative beneficial characteristics of the changes. Because these new, high consequence accidents occur at a low rate, multiple system changes may occur before an accident, making it hard to see the contribution of technology to the failure.

  15. Views of ‘cause’ limit the effectiveness of defenses against future events.

    Post-accident remedies for “human error” are usually predicated on obstructing activities that can “cause” accidents. These end-of-the-chain measures do little to reduce the likelihood of further accidents. In fact, the likelihood of an identical accident is already extraordinarily low because the pattern of latent failures changes constantly. Instead of increasing safety, post-accident remedies usually increase the coupling and complexity of the system. This increases the potential number of latent failures and also makes the detection and blocking of accident trajectories more difficult.

  16. Safety is a characteristic of systems and not of their components

    Safety is an emergent property of systems; it does not reside in a person, device or department of an organization or system. Safety cannot be purchased or manufactured; it is not a feature that is separate from the other components of the system. This means that safety cannot be manipulated like a feedstock or raw material. The state of safety in any system is always dynamic; continuous systemic change ensures that hazard and its management are constantly changing.

  17. People continuously create safety.

    Failure free operations are the result of activities of people who work to keep the system within the boundaries of tolerable performance. These activities are, for the most part, part of normal operations and superficially straightforward. But because system operations are never trouble free, human practitioner adaptations to changing conditions actually create safety from moment to moment. These adaptations often amount to just the selection of a well-rehearsed routine from a store of available responses; sometimes, however, the adaptations are novel combinations or de novo creations of new approaches.

  18. Failure free operations require experience with failure.

    Recognizing hazard and successfully manipulating system operations to remain inside the tolerable performance boundaries requires intimate contact with failure. More robust system performance is likely to arise in systems where operators can discern the “edge of the envelope”. This is where system performance begins to deteriorate, becomes difficult to predict, or cannot be readily recovered. In intrinsically hazardous systems, operators are expected to encounter and appreciate hazards in ways that lead to overall performance that is desirable. Improved safety depends on providing operators with calibrated views of the hazards. It also depends on providing calibration about how their actions move system performance towards or away from the edge of the envelope.
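
The following toy Monte Carlo sketch is not part of Cook’s treatise; it is an illustrative addition in Python, with invented probabilities, showing the arithmetic behind points 2-5 above: partial breaches of independent defensive layers are common, while overt accidents, which require every layer to fail on the same trajectory, are rare.

    # Toy Monte Carlo illustration of points 2-5 (not from Cook's treatise).
    # The number of layers and the breach probability are invented for illustration.
    import random

    random.seed(1)

    N_LAYERS = 4                 # independent layers of defense (technical, human, organizational, regulatory)
    P_HOLE = 0.05                # chance that any given layer fails to block a given trajectory
    N_TRAJECTORIES = 1_000_000   # candidate failure trajectories entering the system

    accidents = 0
    near_misses = 0              # trajectories that breach at least one layer but are still blocked

    for _ in range(N_TRAJECTORIES):
        breached = sum(random.random() < P_HOLE for _ in range(N_LAYERS))
        if breached == N_LAYERS:
            accidents += 1
        elif breached > 0:
            near_misses += 1

    print("trajectories:", N_TRAJECTORIES)
    print("near misses (some layers breached, still blocked):", near_misses)
    print("overt accidents (all layers breached):", accidents)
    # Expected accident rate is P_HOLE ** N_LAYERS = 0.05**4 = 6.25e-6, i.e. roughly
    # 6 accidents per million trajectories, while around 185,000 trajectories breach
    # at least one layer and are still blocked by the remaining defenses.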

Other Materials

Original Copyright © 1998, 1999, 2000 by R.I.Cook, MD, for CtL

Exciting job! Global Director of Systems Change Programmes | Forum for the Future

 

Source: Global Director of Systems Change Programmes | Forum for the Future

Location: London, New York, Mumbai or Singapore

Salary: Sector competitive

Benefits: attractive and flexible

We at Forum for the Future have a rare and exciting job opportunity for a recognized senior leader in the international sustainability field with a passion for creating positive change.

About us

Forum for the Future have partnered with businesses, governments and civil society for over 20 years, to accelerate the shift towards a sustainable future and to tackle some of the world’s most complex environmental and societal challenges.

With offices in London, New York, Singapore and Mumbai, Forum has been at the forefront of creating an international community of pioneers and change makers including partners and funders such as M&S, Unilever, the C&A Foundation and the Rockefeller Foundation.

About the role

This unique opportunity will see you leading the development and implementation of significant global programmes to catalyse change on key global challenges. Reporting to the Chief Executive, you will be a key member of the Senior Management Team, and will drive the strategy and change processes that are at the heart of the organisation’s work. You will be responsible for the development and testing of “Challenge Labs”, the new way in which we are focusing our programmes in order to enhance their impact by incorporating systemic design principles, and adopting experimental and collaborative approaches to solving complex problems. At the moment we are building and testing three Challenge Labs: Sustainable Nutrition, Sustainable Value Chains and Livelihoods, and 1.5°C. You will also guide our transformational strategies with leading organisations.

Forum is at an exciting time in its evolution – building off its extensive experience in working with large global organisations to drive wholesale systems change. As a dynamic, international leader with a deep commitment to creating a better future, you will play a pivotal role in creating this change.

To be successful in this position you will need to have:

  • Ability to think strategically at a business-wide and international scale
  • Significant experience in developing programmes using system change tools and approaches to drive sustainability, whether in a corporate, consultancy or similar non-profit setting
  • A deep understanding of sustainable development, evidenced by impact in at least one of the key Challenge Lab areas
  • A strong track record in managing projects effectively to deliver outcomes, on time and on budget
  • Excellent external communication skills: speaking, writing and engagement via social media
  • A good understanding of developing programmes which feature mechanisms for market transformation and business model innovation is desirable, as is an understanding of sustainable finance instruments

This is an incredibly exciting and rare opportunity to be part of one of the most influential organisations in sustainability, leading on developing Forum’s most impactful programmes which deliver true and tangible impact on a global scale.

We have partnered with Acre, a market-leader in sustainability recruitment, in our search for our new Global Director of Systems Change Programmes. To learn more about the role and to register your interest, please apply through the Acre website.

Source: Global Director of Systems Change Programmes | Forum for the Future

 

Seeing the system • Meaning Guide

 

Source: Seeing the system • Meaning Guide

Seeing the system

There’s something missing.

The newly released UK apprenticeship standard for Systems Thinking locates systems practice “in arenas where complex problems exist that cannot be addressed by any one organisation or person, but which require cross-boundary collaboration within and between organisations.” This is great. It neatly identifies why the language of systems is coming back into vogue: As the world becomes more complex, we’re waking up to the fact that problems can’t be simply pinned down to one person, one team, one organisation, one population.

But I want to look at this quotation more closely, because it highlights what for me is a big gap in the systems world. In particular, I want to pull out the two concepts of “complex problems” and “cross-boundary collaboration”. Firstly, let’s just pause to appreciate what a wonderful thing it is to be able to read these two phrases in the same sentence in a government-backed standard!

So how do systems practitioners actually create “cross-boundary collaboration” to address “complex problems”? How does it actually work in practice? Well, one of the things a systems intervention will invariably involve is some form of collaborative modelling. I’m using modelling here in a very generic sense; even if nothing is written down, and the intervention simply amounts to a series of conversations across organisational boundaries, this will still have the effect of shaping the mental models of those involved in the conversation.

But that’s not most people’s experience of systems modelling…

Continues in source: Seeing the system • Meaning Guide

Syntegration/Team Syntegrity

This is a slightly random overview, but it draws on some authoritative sources. The trademark notices arise because Stafford Beer sold his intellectual property to Fredmund Malik (or to the Malik Management Centre / Malik consulting), which therefore owns trademarks on specific terms like this one, including Malik VSM (R). From a review of the material, it seems that some people respect this, some are licensed, some run the same or similar processes under different names, and some simply ignore it; however, it is clear that the sale of the intellectual property and knowledge base was entirely legitimate.

 

Overview from Metaphorum: Syntegration – metaphorum

Stafford Beer’s Syntegration as a Renascence of the Ancient Greek Agora in Present-day Organizations, Gunter Nittbaur (Malik Management Zentrum St. Gallen, Switzerland)

 

 

pdf: Team Syntegrity Background by Allenna D. Leonard, PhD: https://library.uniteddiversity.coop/Effective_Organising/Team_Syntegrity/Team%20Syntegrity%20Background.pdf

pdf: The coherent architecture of team syntegration from small to mega forms – Leonard, A., Truss, J., and Cullen, C.: http://www.sympoetic.net/Conversations/structured_files/Truss%20%26%20Leonard%20Team%20Synteg.pdf

 

Malik SuperSyntegration(R) (MSS(R)) https://www.malik-management.com/malik-solutions/malik-tools-and-methods/malik-supersyntegration-mss/

 

Flash video overview of the process at http://www.syntegration.com/

 

pdf: From workshop to Syntegration(R): the genetic code of effective communication, Martin Pfiffner 2004

http://www.syntegration.com/_file/09_From+workshop+to+syntegration.pdf

 

 

 
