Systems Leaders: We See You – Rachel Sinha

link.medium.com/vBbPf7DUkR

INITIATING CONVERSATIONS THAT COUNT Practical know‐how for prevention: self in the system By Monica Bensberg

[please excuse my google drive link, Monica sent this to me for distribution here so I’m sharing it the quickest and easiest way I know. Looks great]

https://drive.google.com/file/d/1Cc91vJOYh6eJMvfRP4GQBffpjR3m5hqD/view?usp=sharing

Intro:
This discussion paper is for health promotion practitioners who are trying to resolve wicked problems, such as obesity, suicide, family violence and other persistent societal challenges.

This discussion paper aims to encourage you to initiate conversations that count. They are the meaningful conversations that help change complex systems for the better.

The BIG IDEA and call to action here is that you, from your place inside the prevention system, consciously seek opportunities to use persuasive actions to instigate lasting health-promoting solutions. You need to intentionally put yourself (or others) in front of the right person, in the right place, at the right time with the right message to challenge the status quo. This is about the contribution that you can make by leading conversations that shape everyday thinking and actions (Abercrombie, Boswell et al. 2018).

CONTENTS
1. Networking human systems Page 4
1.1. Self in the system
1.2. Conversations mobilise action
2. Influencers’ practical know-how Page 6
2.1. Influencers make things happen
2.2. Influence grows
2.3. Types of influence
2.3.1. Personal awareness and direct influence
2.3.2. Peripheral awareness and indirect influence
2.3.3. Contextual awareness and situational influence
2.4. Top influencing tips
3. Other things to consider Page 14
3.1. Practising in partnership
3.2. Capacity-building
4. Embracing your powers of persuasion Page 15
5. Sources of inspiration Page 15
5.1. Systems concepts
5.2. More great ideas

Systems Innovation Toolkit – Complexity Labs

The Systems Innovation Toolkit is a process and set of tools for enabling systems-level change within complex organizations. It is designed to help organizations of all kinds transform how they both think and operate, bringing together key ideas from complexity thinking in an accessible way and applying them to systems-level transformation. The toolkit is intended as an aid for systems innovators tackling wicked problems: there are many different methods and ideas out there, but they remain fragments, so we combine them into something coherent and actionable; we built this toolkit because no such integrated resource currently exists. It is created for designers and consultants to use with their clients, in workshops and in organizational change work. It should be seen not as a formula but as an aid in application: systems innovation is a fundamentally practical activity, specific to each system and organization, so the framework presented here is only a guide that needs adapting to the given context.
  • Publish Date: 24-10-2018

  • Length: 68 pages

  • Category: Systems Innovation

Designing stuff, revisited

CSL4D

In August 2014, I made a post (here) about the presentation on design thinking by Harold G. Nelson at the Human-Computer Interaction Seminar on People, Computers, and Design at Stanford on April 16, 2010. Since then I have grown convinced that design and systems thinking are more intricately linked than I thought back then (see also my post of 4 October 2018). Hence the need to revisit Harold Nelson’s work on design thinking. The problem is to find a suitable scaffold to construe my understanding of Nelson’s design thinking in terms of Churchman’s systems thinking. In Nelson’s major book on design, The Design Way, Churchman is referred to 5 times and his systems approach 8 times, which seems relatively little compared with its underlying influence. Nelson’s presentation at Stanford is an introduction to his book, which is an introduction to the fundamental understanding of design…

View original post 1,272 more words

a quick apology – lots of posts today and some of them are from the 60s, 80s…

I was getting through a backlog and some explanations, so there’s a lot here today – apologies for the deluge. I also took the decision not to identify the year in the title, which I do now rather regret! Many are old but classic or core subjects.

Partly, I was looking for validation that von Foerster was under the kitchen table when Wittgenstein and the Vienna set were talking, by the way 🙂

Interview with Heinz von Foerster

interview

Heinz von Foerster

Stefano Franchi, Güven Güzeldere, and Eric Minch

 


stanford humanities review: The primary goal of this special issue of SHR is to promote a multidisciplinary dialogue on Artificial Intelligence and the humanities. We think you are most qualified to facilitate such a dialogue since you have trotted along many disciplinary paths in your career, ranging from mathematics and physics to biophysics and hematology, to pioneering work on cybernetics, to philosophy, and even family therapy. One could even say that “transdisciplinarity” has been your expertise. . . .

heinz von foerster: I don’t know where my expertise is; my expertise is no disciplines. I would recommend to drop disciplinarity wherever one can. Disciplines are an outgrowth of academia. In academia you appoint somebody and then in order to give him a name he must be a historian, a physicist, a chemist, a biologist, a biophysicist; he has to have a name. Here is a human being: Joe Smith — he suddenly has a label around the neck: biophysicist. Now he has to live up to that label and push away everything that is not biophysics; otherwise people will doubt that he is a biophysicist. If he’s talking to somebody about astronomy, they will say “I don’t know, you are not talking about your area of competence, you’re talking about astronomy, and there is the department of astronomy, those are the people over there,” and things of that sort. Disciplines are an aftereffect of the institutional situation.

My personal history has been different. If somebody asks me “Heinz, how is it that although you studied physics, mathematics, and all that, you are always with artists and musicians, etc.?” I think it is because I grew up in Vienna, at a fascinating time of Viennese history. I was born in 1911, looking back over almost the whole twentieth century, with just eleven percent missing at the beginning and six percent missing at the end. So I had the pleasure of traveling through the twentieth century from its early stages. At that time — in the late nineteenth century — Vienna was an extraordinary place. It had a remarkable medical faculty, fascinating philosophers, it had great art (a new artistic revolution was taking place in Vienna under the name “Jugendstil”, or Art Nouveau, as it was then known in all of Europe); fascinating painters, an explosion of artistic activity, music, Mahler, dance, etc. In all fields there was a breakaway from the classic and from the standards of the nineteenth century perspective. I had the luck to be born into a family which was participating in all that activity. As a little boy I was already associated with them: I was sitting under the piano and listening while the grownups were talking with each other. It was always fascinating to listen to what they were saying. My mother’s and my father’s house was a very open house, but the really open house was the house of my maternal grandmother. She happened to be one of the early and leading feminists who published the first women’s journal in all Europe — Documents of Women. Politicians, writers, journalists, theater people, etc. were in my grandmother’s house. We, as kids, were of course always looking and listening; we were immersed in a world that had no specifics, no disciplines. I mean, everybody was connected and arguing about what art, politics, philosophy, should be. Growing up in such a world brings you into a state of affairs where you have difficulties looking at single disciplines. I see practically every sentence we are speaking as already like a millipede connected with five hundred — oh but a millipede, so a thousand — other notions. And to talk about a single notion has a built-in sterility which does not allow you to make the semantic connections to all those other concepts.

Continues in source: Interview with Heinz von Foerster

Heinz von Foerster and the second-order cybernetics – Emergence: Complexity and Organization

Heinz von Foerster and the second-order cybernetics


Introduction

This issue of E:CO is reprinting a classic paper by Heinz von Foerster, one of the key players in the formation and development of cybernetics. Von Foerster was an Austrian/American engineer, scientist, science expositor, philosopher, and cultural commentator (rather than listing each reference, which would render this document unwieldy, details of the history of cybernetics were taken from Heims [1], Kline [2], Dupuy [3], the Cybernetics entry at Wikipedia [4], and the short biography of von Foerster [5]). After WWII, von Foerster worked with other World War II technocrats from both the Allied and the Axis Powers who aimed at continuing and expanding the unprecedented scientific, mathematical, and engineering advances that had been brought on by the war effort. Cybernetics grew up in this atmosphere, out of the working and personal relationships, professional associations, and research and theoretical collaborations that had been established during the war.

Central to the issues which cybernetics took up were control/guidance systems, such as those found in ballistic missiles, with built-in goal-seeking or purposive dynamics, considered “teleological” designs despite the fact that strict Darwinian evolution had banned this notion from science altogether. The purpose of guided weaponry, of course, was hitting a target, largely accomplished through the quite clever understanding and use of the new concept of feedback, sometimes called “back afferentiation” in the modeling of regulatory and self-regulatory processes. Control or guidance was achieved via a comparison of the pre-set with actual conditions and the consequent activation of “negative feedback” processes to close the difference between pre-set and actual. Thus, present measurements of velocity, distance, and other related variables are fed back (in a kind of causal loop of information) into a guidance system in order to correct by a readjustment of the guiding mechanisms.
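To make the feedback idea concrete, here is a minimal sketch of a negative-feedback (proportional) controller that repeatedly compares a measured value against a pre-set target and corrects in proportion to the error. The setpoint, gain and update rule are illustrative assumptions, not details taken from the guidance systems described above.

```python
# Minimal negative-feedback (proportional control) sketch.
# Illustrative only: the setpoint, gain and simple update rule are assumptions,
# not taken from von Foerster or the guidance systems discussed in the text.

def negative_feedback(setpoint, measured, gain=0.5):
    """Return a corrective adjustment proportional to the error."""
    error = setpoint - measured          # compare pre-set with actual
    return gain * error                  # negative feedback closes the gap

position = 0.0   # current (measured) state of the system
target = 10.0    # pre-set condition the controller tries to reach

for step in range(20):
    correction = negative_feedback(target, position)
    position += correction               # fed back into the "guidance" mechanism
    print(f"step {step:2d}: position = {position:.3f}")
```

Run repeatedly, the correction shrinks as the measured value approaches the pre-set condition, which is exactly the error-closing loop described above.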

The attainment of effective purposes on the part of such machines required a great deal of focus, resources, technical skill, and sheer imposing intellectual power, which was made available via military and industrial research as well as university/academic participation. The awesome force of the two atomic bombs detonated on Japan to end the war demonstrated just how total this war was. Tragically, this horrifying force was not to depart at the end of the war, but at least there was a plethora of peaceful ramifications of the same research, which cybernetics was about to exploit in positive directions.

The actual cybernetics movement began in the US among a core group of scientists, mathematicians, engineers, and social scientists who formed and attended a series of Macy Conferences. The founders and participants were certainly the cream of the crop of Western intellect who had come together to fashion the most powerful warrior determination in world history; among them were luminaries no less than John von Neumann, Norbert Wiener, Claude Shannon, Benoit Mandelbrot, Humberto Maturana, Warren McCulloch, Walter Pitts, W. Grey Walter, W. Ross Ashby, Stafford Beer, Gregory Bateson, Arturo Rosenblueth, von Foerster, Ernst von Glasersfeld (whose life and work was uncannily similar to that of von Foerster, in my opinion at least), and Gordon Pask; even Alan Turing showed up at times. Von Foerster’s own contribution covered many of the major themes of cybernetics in general, from its formation in the aftermath of World War II until his death in 2002, although, as we’ll see below, he was mainly involved with what has been called “second-order” cybernetics. Specific areas of interest within cybernetics on the part of von Foerster were:

  1. advances in control theory coming out of WWII ballistics and guided missile research (the “self-guided” missiles giving rise to the term cybernetics for “self-steering”); explanations in terms of negative and positive feedback loops;

  2. the advent of and advances in modern electronic computers (covering such breakthroughs as computer storage and programming languages, machine languages and learning, artificial intelligence, and so forth);

  3. mass communication, network technologies, and information theory;

  4. the possibility of self-organizing systems characterized by the building-up of order/structure in the face of what was supposed to be only the Second Law’s increase of entropy or the degradation of order.

Just a brief glance at this list reveals the presence of themes that have also been taken up in complexity theory. Although E:CO is well known as a journal devoted to complexity science as such, this issue’s classic paper by von Foerster was authored not by a complexity researcher/theorist but rather by a well-known and well-respected cybernetician. There is nothing untoward about this, since cybernetics and complexity science have had a close long-term relationship, with a great deal of overlapping, crossing over, and interaction among complexity researchers and cybernetics researchers, even to the point where it can be quite difficult to discern differences between the fields. This closeness, of course, is also demonstrated by the overlapping of the personnel involved (it is common for one and the same person to credibly claim fealty to both traditions).

I think that it is too early in the game to evaluate accurately exactly how cybernetics and complexity science are related. From my perspective, complexity science can be considered as having emerged from conceptual ground prepared by cybernetics even though, to an important extent, the two disciplines have remained separate endeavors. This can be seen, for example, in how certain themes have remained ensconced on one side or the other. A case in point is the epistemological perspective of radical constructivism (see below), which has served as a chief defining characteristic of the epistemological stance of so-called “second-order” cybernetics (see below) but does not have anywhere near that same status in complexity theory. Furthermore, the complexity sciences tend to be more associated with academically inclined mathematical/scientific/philosophical/cultural research pursuits, whereas cybernetics began and continues to be more pragmatic in its wide-scope social and cultural concerns. For instance, take the idea of self-organization, the topic of von Foerster’s classic paper. This is a phenomenon that has occupied pretty much the same interest on the part of both cybernetics and complexity theory, so that any difference in attention between the two endeavors amounts mostly to a difference that does not make a difference, to adapt a phrase from the cybernetician Gregory Bateson.

Von Foerster: Biographical remarks

Heinz von Foerster was born in Vienna on November 13, 1911 into a family of engineers, architects, and artists, a family well respected and comfortable in means. One can appreciate from the kinds of careers taken on by family members, i.e., a focus on design principles, that von Foerster would follow suit in being interested in design, which indeed he appears to have pursued. Moreover, his family and home environments were steeped in artistic, aesthetic, and creative endeavors in general and more specifically in the Viennese avant-garde, crucial factors in encouraging von Foerster’s general outlook. Noteworthy in this context is that much family time was spent in the artistic circles surrounding the great expressionist painter Oskar Kokoschka, and von Foerster even played with Ludwig Wittgenstein as a child and attended the famous Vienna Circle, where Wittgenstein could be found at times.

In the early nineteen-thirties, von Foerster enrolled at the Viennese Technische Hochschule to study “technical physics” while also attending lectures at the Vienna Circle. Von Foerster recalled that one of his favorite topics of discussion at the Vienna Circle was Wittgenstein’s Tractatus. Around the same time, von Foerster deepened his lifelong interest in the foundations of mathematics, a subject that was “hot” in Vienna and whose logical basis became a template for computational logic and thus figured in cybernetics. With all this familiarity with Wittgenstein, I find it rather strange that von Foerster did not have occasion to apply Wittgensteinian insight to some of the more extreme-sounding, implausible epistemological claims associated with “radical constructivism” (more on this below).

Getting back to the war for a moment, there have been murmurings concerning von Foerster’s wartime record in Berlin. For instance, the claim has been made that one of von Foerster’s grandfathers was not of pure Aryan stock but rather a “Mischling”, i.e., of mixed blood, meaning part Jewish, and that, as a result, von Foerster was in a sense hiding from the Nazis when he worked in Berlin. This is certainly a case of hiding in plain sight, since I can hardly imagine a Jew (however fractionated his DNA as a Jew really was) wanting to spend any time at all in Berlin. Moreover, if there was even the slightest hint that von Foerster was thereby a “Mischling” himself, how could he have possibly landed a technical, even war-related job in Berlin (e.g., with Siemens or GEMA, both of which did military work)? There are also questions about his war record itself, which appears to have disappeared, or been mislaid, or is still classified. All of this is troubling not because anyone seriously thinks von Foerster was a brownshirt, but rather because of the aura of “hush hush” around the whole thing. I direct the reader’s attention to a review I wrote for E:CO several years back of a book about the general systems founder Ludwig von Bertalanffy. Dear old Ludwig von, it turns out, was in fact a card-carrying member of the National Socialist Party although, of course, he made it a point to insist he was not a true believer, only an opportunist Nazi. Why would anyone think that was any better, genocide via opportunism or genocide via true belief? Becoming a representative of evil because it helps your career? Traveling in Germany in the late sixties, I was very surprised to hear how many Germans told me they were actually in Switzerland during the war or were Swiss citizens (yet they lacked Swiss German accents!). I guess they meant they were climbing the hierarchy in the Swiss regions of Bergen-Belsen or Dachau.

After the war, von Foerster returned to Austria, where he worked for the large telephone and electrical engineering company there, and also became known as an effective science journalist and writer for the broadcasting company of the US occupation forces. At the same time, he worked on research involving quantum physics and memory capacity, notice of which reached as far as the academies in the US, where it would eventually help his career.

Von Foerster and his family then managed to emigrate to the US as part of the great exodus of German-speaking technocrats being gobbled up by the US and the Soviets. Through a combination of luck, skill at both administration and scientific method/theory, and his often remarked-upon personableness, von Foerster built up a social network of quite high-level researchers and thought leaders. At this time von Foerster was encouraged to pursue his research, refine his systemic approach, and try his hand at promulgating “popular” understandings of new scientific developments, as he had done in Austria and Germany. For instance, the preeminent researcher and cybernetician Warren McCulloch asked von Foerster to deliver a paper at one of the early Macy Conferences on the quantum physics of memory. As a result of his successes at such forums, von Foerster was asked to become the general secretary of the Macy Conferences, an ideal position for a smart, ambitious and young (after all, he was only 35 years old in 1946) engineer/scientist. It was also a time of relative prosperity, as the technology spawned during the war led not only to important scientific findings but to an incredible profusion of technology-driven culture changes, particularly in regard to computerization.

During the nineteen-fifties von Foerster continued his work in electrical engineering and physics, but also began to switch direction towards investigating and promoting the themes of homeostasis, self-organizing systems, system-environment relationships, bionics, biological and machine communication, and others. He became head of the Biological Computer Lab, founded in 1957 at the University of Illinois in Urbana. The juxtaposition of these three terms, “Biological”, “Computer”, “Lab”, says a lot about the “spirit of the times”.

During his tenure at the Biological Computer Lab, von Foerster focused on interdisciplinary endeavors and educational innovations, all within a systems-conceived learning environment. He became a professor emeritus in 1976, then moved to Pescadero, California where he became a key figure in the emerging synthesis of cybernetics, the counter-culture, the human potential movement, and new systemic frameworks for psychotherapy. It was also during this time that the so-called “radical constructivist” approach to epistemology in second-order cybernetics became solidified. Von Foerster died in 2002.

Von Foerster and second-order cybernetics

Heinz von Foerster was one of the principal architects of what became known as “second-order” cybernetics, whose ideational kernel had been gestating since the beginnings of cybernetics until it emerged during the nineteen-seventies. Key to this phase was the conceptualization of observer/observed systems as involving a kind of circular causality linking agents and observers as integral and responsible components of the system whose purpose, in an important sense, the observer stipulates. In fact, circular causality was not new in cybernetics; the title of the first Macy meeting in 1946 affords insight into this direction: “Feedback Mechanisms and Circular Causal Systems in Biological and Social Systems,” a theme which remained in effect for the duration of the conferences until the last one in 1953 [3]. As pointed out by the polymath Gregory Bateson [6], who, as a contributor to many of these Macy Conferences, helped introduce systems ideas into the social sciences, a cybernetic system was one in which causal influence was traceable around a feedback circuit all the way back to any arbitrarily chosen starting point. Furthermore, this kind of circularity implied that any event at any position in the causal circuit of feedback would affect all other events at different positions, thus implying a self-referential structure, since the causal influence was bent back on itself (we will come back to this crucially important theme later on).

Francisco Varela, who with his mentor and colleague Humberto Maturana had come up with the notion of autopoietic systems (see below) around the same time, stated that von Foerster had formulated, with his new systemic category of “second order”, “a framework for the understanding of cognition. This framework is not so much a fully completed edifice, but rather a clearly shaped space, where the major building lines are established and its access clearly indicated” [2]. Von Foerster went even further by underscoring the self-referential structure of circular causality, an idea that eventually broke forth as the basis of radical constructivism.

The importance given to circular causality and its self-referential core was attributed to very influential research on the frog’s brain, written up in 1959 by the cyberneticians Lettvin, Maturana, McCulloch, and Pitts [7]; notice that this is the same Maturana who had co-authored with Varela the idea of autopoiesis. The conclusion of this paper was that the frog’s eye, instead of transmitting to the brain some more or less precise information stemming from the distribution of light on the receptor, “speaks” to the brain in a manner already organized and interpreted. An analogy was offered (versions of which are often found in radical constructivist circles): someone is observing cloud formations and reports his sightings to a weather station. This person’s reports won’t be couched in terms of light stimuli distributions but rather in categories coming from everyday impressions of the weather, which are already understandable by those at the weather station. That is, the receivers of this report must already possess sufficient extant knowledge, and it must be organized in such a way that the reporter’s descriptions make enough sense that they can be acted upon. Similarly, because the purpose of a frog’s vision is to get food and avoid predators, the information coming from the light receptors has to be arranged, organized, and prepared for correct interpretation and resulting action.

The crucial point is that the latter preparations must already be present to construct (hence “constructivism”) the meaning of the incoming data. In a sense we can conceive the constructional preparatory apparatus like Kant’s transcendental categories of the understanding (e.g., temporality, spatiality, causality), that is, “cognitive schemas” already present and making sense of experience. Second-order cyberneticians contend that without these constructional “devices” we can’t experience anything at all, and that in experience all we ever encounter are the innate subjective schemas with which we construct the world. But notice that although the conclusion of this research resembles Kant’s transcendental idealism, Kant never went to the extreme that radical constructivists go.

This early research on the frog’s brain was just the start of the radicalizing of von Foerster’s second-order epistemology. He was also strongly influenced along the way by several logical/mathematical interpretations of strong self-reference put forward in the respective conceptualizations of the German and Swedish logicians/philosophers Gotthard Günther [8] and Lars Löfgren [9], but most powerfully, for von Foerster, by the notion of autopoiesis put forward by Maturana and Varela (1980).

Günther, for example, championed his quasi-self-referential idea of “contexturality” as emblematic of living systems. What exactly “contexturality” refers to is not at all clear to me, an obscurity not helped by Günther’s overly idiosyncratic language: “It is an object; but it is also something utterly and inconceivably different from an object. There is no way to describe it as a contextural unit of thingness.” Perhaps one way to describe it might be not to include in an explanation of something that something itself. Petitio principii galore! Beware the lexicons of novel neologisms.

Löfgren [9] approached his explication of self-reference more from assumptions in mathematical logic. Indeed, he had demonstrated that his axiom of complete self-reference is independent of set theory and logic, in a way related to Gödel’s theorems and Paul Cohen’s proof of the independence of the Continuum Hypothesis. For his exposition of the logic of self-reference, Löfgren appealed to his analyses of the role of self-reference in Gödel’s coding (i.e., Gödel numbering) as used in the incompleteness theorems.
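For readers unfamiliar with Gödel numbering, here is a minimal sketch of the basic trick: a sequence of symbols is encoded as a single integer via prime factorization, so that statements about formulas can be re-expressed as statements about numbers. This is purely illustrative and is not Löfgren’s or Gödel’s actual construction.

```python
# Minimal Gödel-numbering sketch (illustrative only): a sequence of symbol
# codes is encoded as 2**c1 * 3**c2 * 5**c3 * ..., one prime per position.
# Unique prime factorization guarantees the encoding can be decoded again.

def primes(n):
    """Return the first n primes (simple trial division)."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(symbol_codes):
    """Encode a list of positive integer symbol codes as one integer."""
    g = 1
    for p, c in zip(primes(len(symbol_codes)), symbol_codes):
        g *= p ** c
    return g

def decode(g):
    """Recover the symbol codes by reading off the prime exponents."""
    codes = []
    for p in primes(64):        # more primes than we will need here
        if g == 1:
            break
        exponent = 0
        while g % p == 0:
            g //= p
            exponent += 1
        codes.append(exponent)
    return codes

codes = [3, 1, 4, 1]            # stand-ins for the symbols of a formula
g = godel_number(codes)         # 2**3 * 3**1 * 5**4 * 7**1 = 105000
print(g, decode(g))             # 105000 [3, 1, 4, 1]
```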

The most influential rendition of self-reference inspiring von Foerster as he was conceptualizing second-order cybernetics was Maturana’s and Varela’s theory of autopoiesis (1980). Varela [10,11,12], in particular, had turned to the work of the English mathematician Spencer-Brown, whose novel approach to Boolean algebra incorporated self-referentiality as a fundamental element [13]. Emphasizing Spencer-Brown’s self-referential notion of re-entry, Varela’s “calculus of self-reference” (his term) placed self-referentiality on the same logically primordial level as true and false. This move effectually pushed self-reference down into the core of nature, thereby making Varela’s approach into a kind of pan-self-referential experiencism.

Varied critiques of Varela’s strict notion of self-reference have been offered over the years [14]. Indeed, a troublesome side effect of the manner in which Varela’s formalism has its primary elements fold back on themselves (as in fixed point theorems) so stringently is a curtailment of the possibility for change, motion, evolution, morphogenesis, and so forth in such systems. On a purely formal level, Kaehr (a student of Günther, cited in Reichel [15]) has pointed to what he takes to be a plague of infinities in Varela’s formalism, one that renders Varela’s self-referential systems non-operationalizable. Moreover, Schatten [16] has argued that this kind of self-reference can only be applicable to single-celled or micro organisms and is thereby not pertinent to more complex systems.

The claims of radical constructivism, the governing epistemological stance found in von Foerster’s work and second-order cybernetics in general, are said to emanate from the self-referential nature of observing/observer systems. It is not difficult to see radical constructivism as a radicalization of the self-enclosedness of self-referential systems. However, a careful philosophical reading of radical constructivism reveals a host of philosophy 101 mistakes, perhaps the most egregious being:

  • We know the world through our experience. Therefore the only thing we can know is experience

It seems to me at least, that this proposition is as invalid as:

  • We see the world through our eyes. Therefore the only thing we see are our eyes

As I remarked above, it is ironic that despite his close acquaintance with Wittgenstein, von Foerster (or for that matter von Glasersfeld) does not seem to have incorporated an iota of Wittgenstein’s philosophical method. Any good Wittgensteinian on the block could debunk radical constructivism in a short time for its numerous unsound inferences. I guess I can sort of sympathize with why Austrians might want to glom onto a philosophical creed like radical constructivism; for it means that whatever horrors were experienced (or worse, helped perpetrate) are just in our heads.

The respected contemporary cybernetician Stuart Umpleby [2] has called attention to the fact that von Foerster was such an adamant holder of second-order cybernetics principles that he and his close colleagues excluded from their associations anyone “who wasn’t a second-order cybernetician.” Umpleby has strongly attacked such a “brutal, intolerant” outlook, calling both von Foerster and the closely related track of Maturana “closed and ungenerative”.

Von Foerster’s take on self-organization

The classic paper reprinted in this issue of E:CO comes from a talk on self-organization given by von Foerster in 1959. Self-organization has been a major interest in cybernetics since its early days. Von Foerster’s approach is not easy, containing as it does multifaceted explanatory “folds”. But right from the start he confronts one of the more problematic issues accompanying the reception of the idea of self-organization: the building-up of order or organization characterizing self-organization would seem to violate the increase of entropy and associated degradation of order according to the Second Law of Thermodynamics, at least in the Boltzmannian understanding of the law. In fact, this issue is one that all proponents of the idea of self-organization have needed to face, since the Second Law is nearly unanimously accepted.

One tack by von Foerster was to play around with how to conceive self-organization in relation to the “nearness/distance” of a system to the environments it is embedded in. The inspiration here is explicitly that of Schrödinger in his What is Life?, a work prescient in many ways, e.g., “predicting” the need for something like DNA. As von Foerster put it when discussing the need for the self-organizing system to take in the energy needed for its internal activity of building order: the system “eats” energy and order from its environment. That is, the environment has a certain amount of given structure. The “closer” the system is to its environment, in the sense of interacting with “local” access to the environment, the more plausible it seems that order/structure can be introjected by the system; this flow of energized order is what runs the processes of self-organization. Again, this point is an echo of Schrödinger, and von Foerster winds up limiting what should truly be thought of as a self-organizing process to those held within close environmental conditions. If the distance is too great, entropic tendencies will swamp local ordering. Thus there is plenty in this classic paper for both proponents and critics to munch over.

Von Foerster also appealed to Shannon’s famous definition of information along the lines of redundancy (and surprise), arguing that self-organizing systems need to show an increase of order over time. The “negentropy” (Schrödinger’s term for the order/structure the system takes in from the environment) needs to be amplified, and this leads to an embrace of the “order through noise” thesis of Prigogine and his school of self-organizing physical systems.
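To see the quantities involved, here is a small sketch of the Shannon measures at play, using the standard definitions of entropy H, maximum entropy H_max, and redundancy R = 1 - H/H_max (the redundancy reading of “order”, under which self-organization shows up as R increasing over time). The symbol sequences below are made-up examples, not data from the paper.

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy H = -sum p_i * log2(p_i) of a sequence of symbols."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def redundancy(symbols, alphabet_size):
    """Redundancy R = 1 - H / H_max, with H_max = log2(alphabet size).

    R rises towards 1 as the sequence becomes more ordered (less surprising)
    and falls towards 0 as it approaches maximum-entropy "noise".
    """
    h_max = math.log2(alphabet_size)
    return 1 - entropy(symbols) / h_max

# Two made-up states of a four-symbol system: a disordered one and a more
# ordered one. An increase in R over time is the signature of self-organization
# on the redundancy reading of "order".
disordered = list("abcdabcdabcdabcd")   # all four symbols equally likely
ordered    = list("aaaaaaaaaaaabcda")   # heavily biased towards 'a'

print(f"R(disordered) = {redundancy(disordered, 4):.3f}")  # 0.000
print(f"R(ordered)    = {redundancy(ordered, 4):.3f}")     # ~0.50, i.e. more order
```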

We see, therefore, von Foerster going through the various ways that order and self-organization might arise in the face of entropic decay. He offers varied thought experiments along the way to illustrate the outlines of various “mechanisms” that might be responsible for this building-up of order, some of which are hard to follow, or unconvincing, or ad hoc hand-waving.

References

  • Heims, S. (1991). The Cybernetics Group, ISBN 9780262082006.
  • Kline, R. (2015). The Cybernetics Moment: Or Why We Call Our Age the Information Age, ISBN 9781421416717.
  • Dupuy, J.P. (2000). The Mechanization of the Mind, Princeton University Press, ASIN B011MEN932.
  • Cybernetics: Wikipedia
  • Müller, A. (nd): “Heinz von Foerster: A short biography.”
  • Bateson, G. (1985). Steps to an Ecology of the Mind, ISBN 9780226039053.
  • Lettvin, J.Y., Maturana, H.R., McCulloch, W.S. and Pitts, W.H. (1959). “What the frog’s eye tells the frog’s brain,” Proceedings of the Institute of Radio Engineers, ISSN 0731-5996, 47: 1940-1951.
  • Günther, G. (1991). “A new approach to the logical theory of living systems.”
  • Löfgren, L. (1988). “Towards system: From computation to the phenomenon of language,” in M.E. Carvallo (ed.), Nature, Cognition and Systems I : Current System-Scientific Research on Natural and Cognitive Science, ISBN 9027727406, pp. 129-152.
  • Varela, F. (1974). “A calculus for self-reference,” International Journal of General Systems, ISSN 0308-1079, 2: 5-24.
  • Varela, F. (1979). Principles of Biological Autonomy, ISBN 9780135009505.
  • Varela, F. (1979). “The extended calculus of indications interpreted as a three-valued logic,” Notre Dame Journal of Formal Logic, ISSN 0029-4527, 20: 141-146.
  • Robertson, R. (1999), “Some-thing from no-thing: G. Spencer-Brown’s Laws of Form,” Cybernetics and Human Knowing, ISSN 0907-0877, 6(4): 43-55.
  • Goldstein, J. (2003). “The construction of emergence order, or how to resist the temptation of hylozoism,” Nonlinear Dynamics, Psychology, and Life Sciences, ISSN 1090-0578, 7(4): 295-314.
  • Reichel, A. (2011). “Snakes all the way down: Varela’s calculus for self-reference and the praxis of paradise,” Systems Research and Behavioral Science, ISSN 1099-1743, 28: 646-662.
  • Schatten, M. (2008). “A critical review of autopoietic theory and its applications to living, social, organizational and information systems,” Journal of General Social Issues, ISSN 1330-0288, 19(4,5): 837-852.

Source: Heinz von Foerster and the second-order cybernetics – Emergence: Complexity and Organization

Cybernetics in a Post-Structuralist Landscape – Simon Biggs

Cybernetics in a Post-Structuralist Landscape


by Simon Biggs, 1987
Abstract

An essay into the historical/philosophical relationship between Cybernetics (as developed through the work of Norbert Wiener and Alan Turing) and Structuralism (in the broad sense, including the work of Wittgenstein and Chomsky), and the later development or position of Cybernetics in a Post-Structuralist context – that which is often described as Post-Industrial or Post-Modern (as found in the work of Jean-Francois Lyotard and Jean Baudrillard). The essay explores the work of a number of artists of both historical and contemporary interest (including Iannis Xenakis, Alvin Lucier, Richard Teitelbaum and Felix Hess) relative to the general theme.

Introduction

Cybernetics was developed in part through the work of Norbert Wiener and Alan Turing from the 1930’s through to the 1950’s. Wiener’s ideas focused on the characteristics of control and communication in a variety of systems, including machines, animals, humans and social groups. He employed, in part, James Watt’s model of the automatic steam governor (basically a pressure-operated valve release mechanism) as a metaphor to describe similar processes of automatic self-referential control operating in more sophisticated automated machines, biological systems, the mind and social structures.

Turing postulated the idea of a machine that was re-programmable, where functional parameters could be programmed into the system, a machine that could be any machine and control any machine. Turing further developed this concept to include the notion of a machine that could program itself, in effect creating a recursive, self-regulating and self-referential system.

Wiener’s concept of self-regulating systems and Turing’s idea of the self-programmable machine were further developed during the 1940’s and 1950’s by scientists, engineers and theorists who initially wished to model the nature of intelligence, both to come to a better understanding of it and in order to develop an “artificial intelligence”. Much of this work was carried out in the USA, where successive governments funded related research, recognising its potential military and industrial applications. Major institutes specialising in this research thus emerged during the 1970’s and 1980’s.

The term Cybernetics came to stand for much of the work occurring in this field, particularly that of artificial intelligence (A.I.) research, although activity also carried over into sociology, psychology and eventually a significant proportion of the technological development of this period that has led to many of our daily technological objects. A.I. drew not only on Wiener and Turing but to an extent on work carried out in aspects of linguistics and psychology, in particular the work of Wittgenstein and Chomsky, which had been developed contemporaneously.

A.I. researchers recognised quite early in their work that central to the processes of intelligence and control were those of communication and language. Intelligence was seen to be the exercise of control, both internally and externally, through language, language being seen in turn as both the content and the medium through which control was realised.

Most, but notably not all, A.I. research was carried out through and applied to computer systems. The computer functioned as the concrete realisation of the Turing Machine, and by the 1950’s it had far outstripped its conceptualisers’ vision. Research in this area continued throughout the 1960’s – again, chiefly in the USA, but also in the USSR and Europe. During this period a number of artists seriously turned their attention to what they saw as intriguing ideas and processes with potentially far-reaching implications. Of course, as with all new ideas, the field also attracted many who were more interested in the effects side of this research, and thus their work was largely effect only, without substance or inspiration (only proving that art cannot be produced to formula).

One of the first artists to turn their attention to A.I. and Cybernetics was Iannis Xenakis. For Xenakis music was simply organised sound, and his fascination centred on the possibilities of this organisation and its relation to aesthetic experience. He developed a number of automated or semi-automatic compositional systems derived from work in Cybernetics, thus foregrounding the systems themselves, including “game”-based compositional techniques (based on Game Theory, a branch of mathematics concerned with the logical structure of problem solving) and stochastic methodologies (techniques arriving at statistically derived random structures). Xenakis’s work was in marked contrast to the cosmetically similar “random” music of John Cage, which originated out of ideas associated with the neo-Dada of Fluxus and was heavily influenced by Eastern philosophy as well as traditional African music and Jazz. Xenakis based his work on mathematics and seemed motivated by an almost classical, purist desire to find the “perfect” musical composition, placing his work within the well-defined tradition of European Avant-garde classical music.
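As an aside, here is a tiny sketch of the stochastic idea: notes are drawn from a probability distribution rather than chosen one by one, so the statistical shape of a passage is composed even though the individual events are random. The pitch set and weights are invented for illustration and have nothing to do with Xenakis’s actual materials or procedures.

```python
import random

# Hypothetical pitch set and probability weights (illustrative only;
# not Xenakis's actual materials or methods).
pitches = ["C4", "D4", "E4", "G4", "A4", "C5"]
weights = [0.30, 0.20, 0.15, 0.15, 0.10, 0.10]

def stochastic_phrase(length, seed=None):
    """Draw a phrase whose statistical profile, not its note order, is fixed."""
    rng = random.Random(seed)
    return rng.choices(pitches, weights=weights, k=length)

print(stochastic_phrase(16, seed=1))   # one statistically shaped phrase
print(stochastic_phrase(16, seed=2))   # a different surface, same distribution
```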

During the 1960’s a number of other artists began to work in this area, often with sound or what came to be known as “sound-sculpture”. Given that the ideas they were working with were process-based, this called for time-based media. At that time media such as video, robotics and various electro-mechanical systems were either largely unavailable or prohibitively expensive. However, the cheap electronic musical device, or synthesiser, had become popularly available, along with various peripherals that were, in effect, specialised small computers. Although some interesting work did occur outside the sound media – for example, the work of Edward Ihnatowicz and Robert Breer (robotics), James Seawright (interactive sculpture) and Vera Molnar (visual arts) – for the economic reasons alluded to above the larger part of this activity was in the sonic arts. Amongst the artists working in this field at that time were David Tudor (often collaborating with John Cage), Robert Whitman, Steve Reich, David Rosenboom, Alvin Lucier and Richard Teitelbaum.

Cybernetics and Structuralism

The bringing together of Cybernetic and linguistic theory (particularly that of Wittgenstein and Chomsky) was, at the time, a powerful mixture. The late Empiricism of a high-Industrial culture (although already post-Industrial tendencies were evident), typified by Popperian rationalism, was well served and expressed by both the apparent logic implied in each discipline and their almost dream-like potential. Some of the wilder claims by some of A.I.’s adherents included suggestions that the “meaning of life” was to be found in such totalising research. As we all know, this was eventually determined by Monty Python.

Chomsky and Wittgenstein’s ideas regarding language were certainly different, but both agreed that the codes that constitute a linguistic net are specific and that they are logical in their development. In the first belief they held distinct positions from their Structuralist contemporaries (particularly as found in the work emerging from the Prague School, following on from that of De Saussure), but in the latter they were mostly in general agreement. Regardless of the specificity of language – the fixed or unfixed relationship between the signifier and the signified – the various approaches at the time all centred on the logical, or genealogical, internal relationships of language systems. As such, the idea was that languages evolve according to set rules and that therefore the relationships between terms could be described logically and historically. In some ways this approach can be seen as a hang-over from Nineteenth Century Comparative Linguistics, which was largely occupied with describing the genealogy of languages and the classification of terms.

For the Cyberneticist this form of linguistic theory, and that of structuralism proper, was just what was needed to flesh out their ideas, remembering that they recognised the primacy of language in all systems of control and thus, by extension, intelligence.

To this end a great deal of A.I. research focused on language, and particularly on understanding genealogical rules for the development of signifying systems. The initial objective was to develop a programmable formulation of parameters that would allow the computing system a degree of autonomy in the development of its own processes of signification. The value of this approach was seen to lie not only in the modelling of the structures of language (and thus systems of control and behaviour) but also in the processes of learning (at the time considered in the field to be a “gestalt” process of linguistic development related to external stimuli). A.I. also drew on emerging discoveries in genetics, finding in the DNA model another metaphor to reinforce the A.I. model. The Darwinian implications were not lost on A.I. researchers either.
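To give a concrete flavour of the kind of rule-governed generation this research pursued, here is a minimal sketch of a rewrite-rule (generative) grammar producing sentences from a handful of rules. The grammar is invented for illustration and is far simpler than anything the A.I. research of the period actually built.

```python
import random

# A toy rewrite-rule grammar (illustrative only): non-terminal symbols expand
# according to fixed rules until only terminal words remain.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["machine"], ["observer"], ["system"]],
    "V":  [["controls"], ["observes"], ["models"]],
}

def generate(symbol="S", rng=random):
    """Expand a symbol by recursively applying randomly chosen rewrite rules."""
    if symbol not in GRAMMAR:                # terminal word: emit as-is
        return [symbol]
    expansion = rng.choice(GRAMMAR[symbol])  # pick one rewrite rule
    words = []
    for part in expansion:
        words.extend(generate(part, rng))
    return words

for _ in range(3):
    print(" ".join(generate()))   # e.g. "the system observes the machine"
```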

Cybernetics and Post-Structuralism

An attempt to describe Post-Structuralist thought as a coherent tendency in ideology and philosophy would be misplaced, given the characteristic plurality of a period notable for divergent praxis. During the 1960’s Structuralism did not so much decay as fragment, producing an ephemera of ideological shrapnel that has since come to form the irregular topography of an epoch often referred to as Post-Industrial or Post-Modern. This is a period concerned not so much with redefinition as de-definition, a stripping or deconstructing of meaning.

As such the high Modernist framework within which Cybernetics was constructed has fractured. The crisis in Modernism that occurred in the 1960’s and 1970’s saw a collapse in interest amongst philosophers and artists in the paradigms which Cybernetics largely depended upon for ideological support. The reasons for this abandonment of what had been a dynamic tendency are complex, but central to them was the general loss of faith in the Modernist paradigms that constituted orthodoxy from the late Nineteenth Century until the mid-Twentieth.

However, although Cybernetics was rejected by many thinkers and practitioners, and especially by many on the Left, the possibilities suggested by it continued to be pursued in the “hard” sciences. This has since directly led to the development of, amongst other things, the Personal Computer and the Cruise Missile. Both of these developments grew directly out of A.I. research, the PC deriving from work on high-level software and User-Interface research and the Cruise from work in remote sensing, artificial vision and servo-control systems.

Given these, and other developments, it would seem that Cybernetics as a practice is still very much with us today. In point of fact, it has probably had a more profound and lasting effect in the 1980’s than in the previous thirty years. The danger here is that those who perhaps should have continued to address the implications of A.I. (that is to say, those with a different outlook, who tended to be attracted to Post-Structuralism) have seemed to ignore it, at their own, and others’, peril.

Notably a few thinkers have sustained the development of ideas in relation to the possibilities and implications of Cybernetics, but within the general field of Post-Structuralist thought. Amongst them are Jean-Francois Lyotard and Jean Baudrillard. Lyotard has developed the notion of “l’immateriaux”, which is intended to address a culture that has dematerialised and come to be constituted as a culture of immaterial signals. Lyotard sees this process as intrinsic to the rapid evolution of information technologies – computers, satellites, holography, etc.

Related to the ideas of Lyotard are also those of Baudrillard, who has argued for the concept of “sign value”. Baudrillard regards this as a recently evolved meta-discourse in our economic and cultural milieu, a discourse central to the contemporary production of meaning and value which recontextualises the Marxist concepts of “Use Value” and “Exchange Value”. The idea of “Sign Value” seeks to postulate an ethic and aesthetic of production and consumption based on a system’s components’ capacity to signify in relation to the system’s totality. As such, the intent of production becomes not so much to make things we need or want to use, or that can support an exchange-based economy, but rather to produce things that will function as signifiers relative to their producers and/or consumers.

Like Lyotard, Baudrillard sees this development as directly related to technological change and the shift in the nature of communication within our culture. Baudrillard contextualises this line of thought as a form of political-economy which, like Marx before him, is seen as inclusive of social value systems in general.

Both Lyotard and Baudrillard can be seen to have drawn to some degree on the work of Michel Foucault. A clear connection can be discerned between the ideas of Lyotard and Baudrillard and Foucault’s concept of the panopticon. The panopticon, inspired by Bentham’s ideas on control and discipline in Nineteenth Century prisons, functions for Foucault as a metaphor for how we, as individuals and as a social body, implement automatic control systems. This idea can be seen as closely related to those of Wiener in particular, although Foucault’s approach and style is far removed from the former’s. However, like Wiener, Foucault sees the panopticon metaphor as central to much of our recent technological development, and it is this possibility that both Lyotard and Baudrillard have drawn on and thereby, in a fashion, continued a critique of a cybernated culture.

Similar to Lyotard’s and Baudrillard’s development are the practices of a number of artists. Amongst them are Alvin Lucier, Richard Teitelbaum and Felix Hess. Both Lucier and Teitelbaum were active at the close of the 1960’s and can be seen to have developed, in relation to and from the prevalent structural and Cybernetic approaches of the time, in directions more cognisant of recent post-Modern thought. Hess became active as an artist during the 1970’s and 80’s and therefore this development is less apparent in his work… although his background in the physical sciences heavily informs a practice predicated upon ideas arising from the same traditions. All three artists work with sound, fascinated by the processes involved in communication and control, and yet each has taken a distinct path in their practice.

Lucier has done much in the past twenty years in opening up our definitions of what we might call music, not only in new approaches to composition but also in performance. In doing this he has often worked with technology and, like the other artists discussed here, focused on problematising the artist’s role in relation to the audience. Lucier’s “Music on a Long Thin Wire” forces the viewer to confront both their own presence and their immediate environment. The piece itself consists of a long thin mono-filament passed through a magnetic “pick-up” at each end, which function to amplify the wire’s interaction with its immediate environment. Stimuli can include air temperature, pressure, humidity, light and movement (including human), and the slightest change in the environment can profoundly affect its state and thus the character of the emergent soundscape. The implications of this work are numerous, as it functions to deconstruct the relationships between artist, viewer, artwork and environment and render them in a fluid state.

Teitelbaum has for some years been exploring musical collaboration with automata, not dissimilar to Conlon Nancarrow’s use of the Pianola, but in a far more complex and spontaneous manner. A Teitelbaum concert might consist of a musician sitting at a piano upon which they improvise around a theme. The player is surrounded by other similar pianos, all of which are being played by electro-mechanical systems under computer control. The computer is programmed to interact with the human musician, just as a Jazz ensemble will feature the spontaneous interaction between human musicians, with the computer and musician in a constant improvisatory exchange that leads to the computer changing its responsive parameters throughout the performance.

For Teitelbaum these computer collaborators are akin to Shelley’s Frankenstein, or the older proto-Frankensteinian myth of the Golem. Indeed, the composer has even composed an opera on the Golem theme. In this can be seen the artist’s obsession with the process of creation and reproduction and humanity’s desire to reproduce itself, not only sexually but, if possible, through its artefacts and disciplines such as alchemy and A.I. The psycho-analytic aspects of this process – the artefact as a mirror of ourselves – describe a field that addresses itself directly to these issues, as also developed by Lyotard, dealing with technology as the central expression of the post-Modern; the computer as another expression of the Mirror Phase.

Felix Hess has been working on communication patterns amongst animals, particularly frogs, for many years, and this has led him to release some of his field recordings of frogs, from far-flung sites around the world, in the form of “found” musical works. Hess does not claim these recordings are music as such – his initial interest was scientific (he is a physicist by profession) – or for that matter anything more than field recordings of frogs; however, they have been received as music. This disparity between intention and consumption has more than an echo of Baudrillard’s thoughts on the simulacra.

As a scientist Hess began to model the emergent communication patterns he discovered through the recordings, initially on paper but later in real space. He took this to its logical conclusion by constructing a network of small electronic devices, each of which could both hear and emit sound. Each box, a frog-like unit, was programmed so that it could discern between a sound in a certain “good” frequency range (sounds like those it made itself) and sounds that were hostile (deeper sounds, actually closer to that of the human voice). Good sounds would encourage the unit to emit sounds whilst negative sounds would inhibit it. The result is a complex, although in principle very simple, ecology of sound that constantly swirls around the space where it is installed, as various units trigger off other units in particular locales of the space and human visitors knowingly or unknowingly interact with the work.
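A rough sketch of how that excite/inhibit rule might be simulated, purely to illustrate the logic described above; the frequency bands, unit count and update values are all invented, not Hess’s actual design.

```python
import random

# Toy simulation of frog-like sound units (illustrative assumptions only):
# each unit raises its urge to call when it hears "good" (high) frequencies
# and suppresses it when it hears "hostile" (low) frequencies.
GOOD_BAND = (2000.0, 4000.0)   # Hz range treated as frog-like (assumed)
HOSTILE_BELOW = 500.0          # Hz below which a sound counts as hostile (assumed)

class FrogUnit:
    def __init__(self):
        self.urge = 0.5                               # propensity to emit a call

    def hear(self, freq_hz):
        if GOOD_BAND[0] <= freq_hz <= GOOD_BAND[1]:
            self.urge = min(1.0, self.urge + 0.2)     # encouraged by good sounds
        elif freq_hz < HOSTILE_BELOW:
            self.urge = max(0.0, self.urge - 0.4)     # inhibited by hostile sounds

    def maybe_call(self, rng):
        """Emit a call (a frequency in the good band) with probability = urge."""
        if rng.random() < self.urge:
            return rng.uniform(*GOOD_BAND)
        return None

rng = random.Random(0)
units = [FrogUnit() for _ in range(8)]
sounds = [3000.0]                                     # seed the space with one call
for step in range(5):
    new_sounds = []
    for unit in units:
        for freq in sounds:                           # every unit hears every sound
            unit.hear(freq)
        call = unit.maybe_call(rng)
        if call is not None:
            new_sounds.append(call)
    print(f"step {step}: {len(new_sounds)} units called")
    sounds = new_sounds or [100.0]                    # silence modelled as a low rumble (assumed)
```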

This work has been shown in galleries as a sound installation and performed at concert-like events in theatres. The position of the artist is doubly problematised here, as Hess denies that what he is making is art and that he is an artist, presenting himself more as a dispassionate observer modelling found behaviour, and at the same time because of the manner in which the work ignores or defers many of the basic characteristics that we associate with the exhibition or performance.

This querying of the authorial role takes on Derridean proportions as the creator is negated by both the manifestation of the work and their attitude towards it and their audience. This relationship is further complicated by the “frogs” responding to aural activity in the installation or performance space, with the “artist” having perhaps a finer appreciation of why things are as they are but in the end having a role relative to the final work little different to that of any member of the audience.

It is perhaps the plurality of intent and interpretation that these artists have introduced into their work that separates them from the Cybernetic art of the 1960’s, as they engage not only with the technology and its underlying principles in areas such as A.I. but also with contemporary work in psycho-analytical theory, philosophy and the general field of post-Structuralist discourse. Each of these artists questions our relationship with our artefacts, our productions, in a post-Industrial context that reflects not only Lyotard’s l’immateriaux or Baudrillard’s “Sign Value” but also the broader and more specific social implications of technology. A fascination with language and control, with communication and power, is central to their work, contextualised within a broader framework of post-Structuralist thought, whether consciously or unconsciously, and is thus more responsive to a deconstructing world.

1. Coined by Norbert Wiener in the 1940s in his book "Cybernetics: Or Control and Communication in the Animal and the Machine" (1948).

2. A number of texts have covered this field, including Jonathan Benthall's "Art, Science and Technology", Jasia Reichardt's "Cybernetics and Art" and works by other authors such as Douglas Davis and Gene Youngblood. Annual updates on work in the field can be found in the Ars Electronica (Linz, Austria) catalogues, published since 1981.

3. Xenakis, Greek by origin but resident in Paris for much of his professional life, worked primarily in musical composition but also collaborated on architectural projects (most notably with Le Corbusier for the Brussels Expo), poetry and the visual arts. His book, titled "Music and Mathematics", is a dense and comprehensive profile of his ideas and methods.

4. DNA can be regarded as a self-regulatory system capable of modifying its internal language-like structure in response to both internal and external stimuli. Darwin’s concept of natural selection and random genetic mutation was interpreted as another example of Cybernetic processes functioning at the level of biology and population.

5. Writers such as Richard Gregory (UK) and John Haugeland (USA) are still producing influential work in the philosophy of A.I. Haugeland's "Mind Design", although a well-balanced collection of essays on A.I., documenting both its successes and failures, still functions within, and is an apologia for, the elemental Modernist paradigms that underlie Cybernetics. Generally speaking, Cybernetics has become the orthodoxy in the computer sciences, as evidenced by the role of institutions such as MIT and Stanford University and the pre-eminent positions of ideologists such as Marvin Minsky.

6. In books such as "The Postmodern Condition" and "Driftworks" Lyotard has addressed the implications of a technological culture and how this impacts upon how we see ourselves and value our knowledge. Lyotard also curated "Les Immatériaux", a major exhibition at the Georges Pompidou Centre, Paris, which sought to develop his ideas into the objects of his discourse – high art, design, technology and consumer items – with the objective of illustrating his central concept of a dematerialising culture constituted in its signals and messages rather than its things.

7. Jean Baudrillard's "For a Critique of the Political Economy of the Sign". His "The Mirror of Production" laid the general groundwork for this essential critique of Marxism and popular culture.

8. Michel Foucault, "The Order of Things: An Archaeology of the Human Sciences" and "Discipline and Punish".

9. Mark Poster's "Foucault, Marxism and History" is an in-depth study of these aspects of Foucault and, although no direct reference is made to Wiener or Cybernetics, it is difficult not to draw connections.

copyright Simon Biggs 1986

Source: Cybernetics in a Post-Structuralist Landscape

SECOND ORDER CYBERNETICS – Ranulph Glanville

CybernEthics Research, UK and Royal Melbourne Institute of Technology
University, Australia

Contents
1. Introduction: What Second Order Cybernetics is, and What it Offers
2. Background—the Logical Basis for Second Order Cybernetics
3. Second Order Cybernetics—Historical Overview
4. Theory of Second Order Cybernetics
5. Praxis of Second Order Cybernetics
6. A Note on Second Order Cybernetics and Constructivism
7. Cybernetics, Second Order Cybernetics, and the Future
Acknowledgements
Related Chapters
Glossary
Bibliography
Biographical Sketch

Summary
Second order Cybernetics (also known as the Cybernetics of Cybernetics, and the New Cybernetics) was developed between 1968 and 1975 in recognition of the power and consequences of cybernetic examinations of circularity. It is Cybernetics, when Cybernetics is subjected to the critique and the understandings of Cybernetics.
It is the Cybernetics in which the role of the observer is appreciated and acknowledged rather than disguised, as had become traditional in western science: and it is thus the Cybernetics that considers observing, rather than observed, systems.
In this article, the rationale from and through the application of which second order Cybernetics was developed is explored, together with the contributions of the main precursors and protagonists. This is developed from an examination of the nature of feedback and the Black Box—both seen as circular systems, where the circularity is taken seriously. The necessary presence of the observer doing the observing is established. The primacy of, for example, conversation over coding as a means of communication is argued—one example of circularity and interactivity in second order cybernetic systems. Thus second order Cybernetics, understood as proposing an epistemology and (through autopoietic systems) an ontogenesis, is seen as connected to the philosophical position of Constructivism.
Examples are given of the application of second order Cybernetics concepts in practice in studies of, and applications in, communication, society, learning and cognition, math and computation, management, and design. It is asserted that the relationship between theory and practice is not essentially one of application: rather they strengthen each other by building on each other in a circularity of their own: the presentation of one before the other results from the process of explanation rather than a necessary, structural dependency.
Finally, the future of second order Cybernetics (and of Cybernetics in general) is considered. The possibility of escalation from second to third and further orders is considered, as is the notion that second order Cybernetics is, effectively, a conscience for Cybernetics. And the popular use of "cyber-" as a prefix is discussed.

Continues in source (pdf): http://www.pangaro.com/glanville/Glanville-SECOND_ORDER_CYBERNETICS.pdf

 

 

Desirable Ethics: Ranulph Glanville

[Can’t find free online – anyone?]

Desirable Ethics

This column is concerned with the promotion of cybernetics through an examination of the ethical consequences (implications) of certain cybernetic devices. This journal is the ideal place to try to do this. Not only does it recognise that less usual arguments can have special value, but also it has a history of concern for the ethical. Indeed, in the very first issue, Heinz von Foerster presented his thesis on Ethics and Second Order Cybernetics (von Foerster, 1992) and Ole Thyssen developed his interpretation of von Foerster's notions in his Ethics as Second Order Morality (Thyssen, 1992).

Document Type: Research Article

Affiliations: CybernEthics Research, Southsea, UK. Email: ranulph@glanville.co.uk

Publication date: 01 January 2004

 

Source: Desirable Ethics: Ingenta Connect

On Seeing A’s and Seeing As – Douglas R. Hofstadter

SEHR, volume 4, issue 2: Constructions of the Mind
Updated July 22, 1995


on seeing A’s and seeing As

Douglas R. Hofstadter


Because it began life essentially as a branch of the theory of computation, and because the latter began life essentially as a branch of logic, the discipline of artificial intelligence (AI) has very deep historical roots in logic. The English logician George Boole, in the 1850s, was among the first to formulate the idea–in his famous book The Laws of Thought–that thinking itself follows clear patterns, even laws, and that these laws could be mathematized. For this reason, I like to refer to this law-bound vision of the activities of the human mind as the “Boolean Dream.”1

Put more concretely, the Boolean Dream amounts to seeing thinking as the manipulation of propositions, under the constraint that the rules should always lead from true statements to other true statements. Note that this vision of thought places full sentences at center stage. A tacit assumption is thus that the components of sentences–individual words, or the concepts lying beneath them–are not deeply problematical aspects of intelligence, but rather that the mystery of thought is how these small, elemental, “trivial” items work together in large, complex (and perforce nontrivial) structures.

To make this more concrete, let me take a few examples from mathematics, a domain that AI researchers typically focused on in the early days. Concepts like “5” or “prime number” or “definite integral” would be thought of as trivial or quasi-trivial, in the sense that they are mere definitions. They would be seen as posing no challenge to a computer model of mathematical thinking–the cognitive activity of doing mathematical research. By contrast, dealing with propositions such as “Every even number greater than 2 is the sum of two prime numbers,” establishing the truth or falsity of which requires work–indeed, an unpredictable amount of work–would be seen as a deep challenge. Determining the truth or falsity of such propositions, by means of formal proof in the framework of an axiomatic system, would be the task facing a mathematical intelligence. Of course a successful proof, consisting of many lines, perhaps many pages, of text would be seen as a very complex cognitive structure, the fruit of an intelligent machine or mind.

Another domain that appealed greatly to many of the early movers of AI was chess. Once again, the primitive concepts of chess, such as “bishop,” “diagonal move,” “fork,” “castling,” and so forth were all seen as similar to mathematical definitions–essential to the game, of course, but posing little or no mental challenge. In chess, what was felt to matter was the development of grand strategies involving arbitrarily complex combinations of these definitional notions. Thus developing long and intricate series of moves, or playing entire games, was seen as the important goal.

As might be expected, many of the early AI researchers also enjoyed mathematical or logical puzzles that involved searching through clearly defined spaces for subtle sequences or combinations of actions, such as coin-weighing problems (given a balance, find the one fake coin among a set of twelve in just three weighings), the missionaries-and-cannibals puzzle (get three missionaries and three cannibals across a river in the minimum number of boat trips, under the constraint that there are never more cannibals than missionaries either on the boat, which can carry only three people, or on either side of the river), cryptarithmetic puzzles (find an arithmetically valid replacement for each letter by some digit in the equation “SEND+MORE=MONEY”), the Fifteen puzzle (return the fifteen sliding blocks in a four-by-four array having one movable hole to their original order), or even Rubik’s Cube. All of these involve manipulation of hard-edged components, and the goal is to find complex sequences of actions that have certain hard-edged properties. By “hard-edged,” I mean that there is no ambiguity about anything in such puzzles. There is no question about whether an individual is or is not a cannibal; there is no doubt about the location of a sliding block; and so forth. Nothing is blurry or vague.
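
These puzzles are "hard-edged" in a way that makes them fully mechanisable. As an illustration (a minimal brute-force sketch, not any historical AI program; the function names are mine), the cryptarithmetic puzzle above can be settled by exhaustive search over digit assignments:

```python
from itertools import permutations

LETTERS = "SENDMORY"   # the eight distinct letters in SEND + MORE = MONEY

def word_value(word, assignment):
    """Read a word as a decimal number under a letter-to-digit assignment."""
    n = 0
    for ch in word:
        n = n * 10 + assignment[ch]
    return n

def solve_send_more_money():
    for digits in permutations(range(10), len(LETTERS)):
        assignment = dict(zip(LETTERS, digits))
        if assignment["S"] == 0 or assignment["M"] == 0:
            continue   # leading digits may not be zero
        if (word_value("SEND", assignment) + word_value("MORE", assignment)
                == word_value("MONEY", assignment)):
            return assignment
    return None

if __name__ == "__main__":
    print(solve_send_more_money())
    # {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}
```

Nothing in such a search is blurry or vague, which is precisely the property the following paragraphs call into question.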

These kinds of early preconceptions about the nature of the challenge of modeling intelligence on a machine gave a certain clear momentum to the entire discipline of AI–indeed, deeply influenced the course of research done all over the world for decades. Nowadays, however, the tide is slowly turning. Although some work in this logic-rooted tradition continues to be done, many if not most AI researchers have reached the conclusion–perhaps reluctantly–that the logic-based formal approach is a dead end.

What seems to be wrong with it? In a word, logic is brittle, in diametric opposition with the human mind, which is best described as “flexible” or “fluid” in its capabilities of dealing with completely new and unanticipated types of situations. The real world, unlike chess and some aspects of mathematics, is not hard-edged but ineradicably blurry. Logic and its many offshoots rely on humans to translate situations into some unambiguous formal notation before any processing by a machine can be done. Logic is not at all concerned with such activities as categorization or the recognition of patterns. And to many people’s surprise, these activities have turned out to play a central role in intelligence.

It happens that as AI was growing up, a somewhat distinct discipline called “pattern recognition” (PR) was also being developed, mostly by different researchers. There was some but not much communication between the two disciplines. Researchers in PR were concerned with getting machines to do such things as read handwriting or typewritten text, visually recognize objects in photographs, and understand spoken language. In the attempts to get machines to do such things, the complexity of categories, in its full glory and in its full messiness, began slowly to emerge. Researchers were faced with questions like these: What is the essence of dog-ness or house-ness? What is the essence of ‘A’-ness? What is the essence of a given person’s face, that it will not be confused with other people’s faces? What is in common among all the different ways that all different people, including native speakers and people with accents, pronounce “Hello”? How to convey these things to computers, which seem to be best at dealing with hard-edged categories–categories having crystal-clear, perfectly sharp boundaries?

These kinds of perceptual challenges, despite their formidable, bristling difficulties, were at one time viewed by most members of the AI community as a low-level obstacle to be overcome en route to intelligence–almost as a nuisance that they would have liked to, but couldn’t quite, ignore. For example, the attitude of AI researchers would be, “Yes, it’s damn hard to get a computer to perceive an actual, three-dimensional chessboard, with all of its roundish shapes, varying densities of shadows, and so forth, but what does that have to do with intelligence? Nothing! Intelligence is about finding brilliant chess moves, something that is done after the perceptual act is completely over and out of the way. It’s a purely abstract thing. Conceptually, perception and reasoning are totally separable, and intelligence is only about the latter.” In a similar way, the typical AI attitude about doing math would be that math skill is a completely perception-free activity without the slightest trace of blurriness–a pristine activity involving precise, rigid manipulations of the most crystalline of definitions, axioms, rules of inference–a mental activity that (supposedly) is totally isolated from, and totally unsullied by, “mere” perception.

These two trends–AI and PR–had almost no overlap. Each group pursued its own ends with almost no effect on the other group. Very occasionally, however, one could spot hints of another possible attitude, radically different from these two. The book Pattern Recognition, written in the late 1960s by Mikhail Bongard, a Russian researcher, seemed largely to be a prototypical treatise on pattern recognition, concerned mostly with recognition of objects and having little to do with higher mental functioning.2 But then in a splendid appendix, Bongard revealed his true colors by posing an escalating series of 100 pattern-recognition puzzles for humans and machines alike. Each puzzle involved twelve simple line drawings separated into two sets of six each, and the idea was to figure out what was the basis for the segregation. What was the criterion for separating the twelve into these two sets? Readers are invited to try the following Bongard problem, for instance.

Of course, for each puzzle there were, in a certain trivial sense, an infinite number of possible solutions. For instance, one could take the six pictures on the left of any given Bongard problem and say, “Category 1 contains exactly these six pictures (and no others) and Category 2 contains all other pictures.” This would of course work in a very literal-minded, heavy-handed way, but it would not be how any human would ever think of it, except under the most artificial of circumstances. A psychologically realistic basis for segregation in a Bongard problem might be that all pictures in Category 1 would involve no curved lines, say, whereas all pictures in Category 2 would have at least one curved line. Or another typical segregation criterion would be that pictures in Category 1 would involve nesting (i.e., the presence of a shape containing another shape), and pictures in Category 2 would not. And so on. The following Bongard problems give a feeling for the kinds of issues that Bongard was concerned with in his work. Readers are challenged to try to find, for each of them, a very simple and appealing criterion that distinguishes Category 1 from Category 2.
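
To make the contrast between a trivial and a psychologically plausible criterion concrete, here is a toy sketch. The twelve "pictures" are invented stand-ins reduced to a few hand-coded features (they are not Bongard's drawings), and the two criteria tried are the ones named above, "no curved lines" and "nesting". What the code pointedly does not do is the hard part of a real Bongard problem: inventing the right features and the right criterion in the first place.

```python
# Hypothetical feature descriptions of the six pictures in each category.
category_1 = [
    {"curved_lines": 0, "shape_count": 2, "nested": False},
    {"curved_lines": 0, "shape_count": 1, "nested": True},
    {"curved_lines": 0, "shape_count": 3, "nested": False},
    {"curved_lines": 0, "shape_count": 1, "nested": False},
    {"curved_lines": 0, "shape_count": 4, "nested": True},
    {"curved_lines": 0, "shape_count": 2, "nested": True},
]
category_2 = [
    {"curved_lines": 1, "shape_count": 2, "nested": False},
    {"curved_lines": 2, "shape_count": 1, "nested": True},
    {"curved_lines": 1, "shape_count": 3, "nested": False},
    {"curved_lines": 3, "shape_count": 1, "nested": False},
    {"curved_lines": 1, "shape_count": 4, "nested": True},
    {"curved_lines": 2, "shape_count": 2, "nested": True},
]

def separates(criterion, left, right):
    """True if the criterion holds for every left picture and for no right picture."""
    return all(criterion(p) for p in left) and not any(criterion(p) for p in right)

no_curves = lambda picture: picture["curved_lines"] == 0
nesting = lambda picture: picture["nested"]

print(separates(no_curves, category_1, category_2))   # True: this criterion works
print(separates(nesting, category_1, category_2))     # False: nesting does not
```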


The key feature of Bongard problems is that they involve highly abstract conceptual properties, in strong contrast to the usual tacit assumption that the quintessence of visual perception is the activity of dividing a complex scene into its separate constituent objects followed by the activity of attaching standard labels to the now-separated objects (i.e., the identification of the component objects as members of various pre-established categories, such as “car,” “dog,” “house,” “hammer,” “airplane,” etc.). In Bongard problems, by contrast, the quintessential activity is the discovery of some abstract connection that links all the various diagrams in one group of six, and that distinguishes them from all the diagrams in the other group of six. To do this, one has to bounce back and forth among diagrams, sometimes remaining within a single set of six, other times comparing diagrams across sets. But the essence of the activity is a complex interweaving of acts of abstraction and comparison, all of which involve guesswork rather than certainty.

By “guesswork,” what I mean is that one has to take a chance that certain aspects of a given diagram matter, and that others are irrelevant. Perhaps shapes count, but not colors–or vice versa. Perhaps orientations count, but not sizes–or vice versa. Perhaps curvature or its lack counts, but not location inside the box–or vice versa. Perhaps numbers of objects but not their types matter–or vice versa. Somehow, people usually have a very good intuitive sense, given a Bongard problem, for which types of features will wind up mattering and which are mere distractors. Even when one’s first hunch turns out wrong, it often takes but a minor “tweak” of it in order to find the proper aspects on which to focus. In other words, there is a subtle sense in which people are often “close to right” even when they are wrong. All of these kinds of high-level mental activities are what “seeing” the various diagrams in a Bongard problem–a pattern-recognition activity–involves.

When presented this way, visual perception takes on a very different light. Its core seems to be analogy-making–that is, the activity of abstracting out important features of complex situations (thus filtering out what one takes to be superficial aspects) and finding resemblances and differences between situations at that high level of description. Thus the “annoying obstacle” that AI researchers often took perception to be becomes, in this light, a highly abstract act–one might even say a highly abstract art–in which intuitive guesswork and subtle judgments play the starring roles.

It is clear that in the solution of Bongard problems, perception is pervaded by intelligence, and intelligence by perception; they intermingle in such a profound way that one could not hope to tease them apart. In fact, this phenomenon had already been recognized by some psychologists, and even celebrated in a rather catchy little slogan: “Cognition equals perception.”

Sadly, Bongard’s insights did not have much effect on either the AI world or the PR world, even though in some sense his puzzles provide a bridge between the two worlds, and suggest a deep interconnection. However, they certainly had a far-reaching effect on me, in that they pointed out that perception is far more than the recognition of members of already-established categories–it involves the spontaneous manufacture of new categories at arbitrary levels of abstraction. As I said earlier, this idea suggested in my mind a profound relationship between perception and analogy-making–indeed, it suggested that analogy-making is simply an abstract form of perception, and that the modeling of analogy-making on a computer ought to be based on models of perception.

A key event in my personal evolution as an AI researcher was a visit I made to Carnegie-Mellon University’s Computer Science Department in 1976. While there, I had the good fortune to talk with some of the developers of the Hearsay II program, whose purpose was to be able to recognize spoken utterances. They had made an elegant movie to explain their work, which they showed me. The movie began by graphically conveying the immense difficulty of the task, and then in clear pictorial terms showed their strategy for dealing with the problem.

The basic idea was to take a raw speech signal–a waveform, in other words, which could be seen on a screen as a constantly changing oscilloscope trace–and to produce from it a hierarchy of “translations” on different levels of abstraction. The first level above the raw waveform would thus be a segmented waveform, consisting of an attempt to break the waveform up into a series of nonoverlapping segments, each of which would hopefully correspond to a single phoneme in the utterance. The next level above that would be a set of phonetic labels attached to each segment, which would serve as a bridge to the next level up, namely a phonemic hypothesis as to what phoneme had actually been uttered, such as “o” or “u” or “d” or “t.” Above the phonemic level was the syllabic level, consisting, of course, of hypothesized syllables such as “min” or “pit” or “blag.” Then there was the word level, which needs little explanation, and above that the phrase level (containing such hypothesized utterance-fragments as “when she went there” or “under the table”). One level higher was the sentence level, which was just below the uppermost level, which was called the pragmatic level.

At that level, the meaning of the hypothesized sentence was compared to the situation under discussion (Hearsay always interpreted what it heard in relation to a specific real-world context such as an ongoing chess game, not in a vacuum); if it made sense in the given context, it was accepted, whereas if it made no sense in the context, then some piece of the hypothesized sentence–its weakest piece, in fact, in a sense that I will describe below–was modified in such a way as to make the sentence fit the situation (assuming that such a simple fix was possible, of course). For example, if the program’s best guess as to what it had heard was the sentence “There’s a pen on the box” but in fact, in the situation under discussion there was a pen that was in a box rather than on it, and if furthermore the word “on” was the least certain word in the hypothesized sentence, then a switch to “There’s a pen in the box” might have a high probability of being suggested. If, on the other hand, the word “on” was very clear and strong whereas the word “pen” was the least certain element in the sentence, then the sentence might be converted into “There’s a pin on the box.” Of course, that sentence would be suggested as an improvement over the original one only if it made sense within the context.

This idea of making changes according to expectations (i.e., long-term knowledge of how the world usually is, as well as the specifics of the current situation) was a very beautiful one, in my opinion, but it caused no end of complexity in the program’s architecture. In particular, as soon as the program made a guess at a new sentence–such as converting “There’s a pen on the box” into “There’s a pen in the box”–it took the new word and tried to modify its underpinnings, such as its syllables, the phonemes below them, their phonetic labels, and possibly even the boundary lines of segments in the waveform, in an attempt to see if the revised sentence was in any way justifiable in terms of the sounds actually produced. If not, it would be rejected, no matter how strong was its appeal at the pragmatic level. And while all this work was going on, the program would simultaneously be working on new incoming waveforms and on other types of possible rehearings of the old sentence.

The preceding discussion implies that each aspect of the utterance at each level of abstraction was represented as a type of hypothesis, attached to which was a set of pieces of evidence supporting the given hypothesis. Thus attached to a proposed syllable such as “tik” were little structures indicating the degree of certainty of its component phonemes, and the probability of correctness of any words in which it figured. The fact that plausibility values or levels of confidence were attached to every hypothesis imbued the current best guess with an implicit “halo” of alternate interpretations, any one of which could step in if the best guess was found to be inappropriate.
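
The revision step described in the last two paragraphs can be sketched roughly as follows. This is an illustration in the spirit of the description, not Hearsay II's actual architecture: the class names, confidence numbers and the pragmatic check are all invented, and the real program would also have to re-verify a revised word against the acoustic evidence beneath it, which is omitted here.

```python
from dataclasses import dataclass, field

@dataclass
class WordHypothesis:
    best: str
    confidence: float
    halo: list = field(default_factory=list)   # alternative hearings, most plausible first

def pragmatically_plausible(words, situation):
    """Stand-in for the pragmatic level: does the sentence fit the known situation?"""
    return " ".join(words) in situation

def revise(sentence, situation):
    words = [w.best for w in sentence]
    if pragmatically_plausible(words, situation):
        return words
    # Otherwise, try replacing the least certain word with each of its alternatives.
    weakest = min(range(len(sentence)), key=lambda i: sentence[i].confidence)
    for alternative in sentence[weakest].halo:
        candidate = words[:weakest] + [alternative] + words[weakest + 1:]
        if pragmatically_plausible(candidate, situation):
            return candidate       # accepted only if it also fits the situation
    return words                   # no simple fix found; keep the original guess

# The "pen on/in the box" example, with "on" as the least certain word.
sentence = [
    WordHypothesis("there's", 0.90), WordHypothesis("a", 0.95),
    WordHypothesis("pen", 0.80, halo=["pin"]),
    WordHypothesis("on", 0.40, halo=["in", "under"]),
    WordHypothesis("the", 0.90), WordHypothesis("box", 0.85),
]
situation = {"there's a pen in the box"}   # what is actually true in the context
print(" ".join(revise(sentence, situation)))   # -> there's a pen in the box
```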

I am sure that the figurative language I am using to describe Hearsay II would not have been that chosen by its developers, but I am trying to get across an image that it undeniably created in me, since that image then formed the nucleus of my own subsequent research projects in AI. Some other crucial features of the Hearsay II architecture that I have hinted at but cannot describe here in detail were its deep parallelism, in which processes of all sorts operated on many levels of abstraction at the same time, and its uniquely flexible manner of allowing a constant intermingling of bottom-up processing (i.e., the building-up of higher levels of abstraction on top of fairly solid lower-level hypotheses, much like the construction of a building) and top-down processing (i.e., the attempt to build plausible hypotheses close to the raw data in order to give a solid underpinning to hypotheses that make sense at abstract levels, something like constructing lower and lower floors after the top floors have been built and are sitting suspended in thin air).

Not too surprisingly, my first attempt to turn my personal vision of how Hearsay II operated into an AI project of my own was the sketching-out, in very broad strokes, of a hypothetical program to solve Bongard problems.3 However, the difficulties in actually implementing such a program completely on my own (this was before I had graduate students!) seemed so daunting that I backed away from doing so, and started exploring other domains that seemed more tractable. What I was always after was some kind of microdomain in which analogies at very high levels of abstraction could be made, yet which did not require an extreme amount of real-world knowledge.

Over the years, I developed a number of different computer projects, each one centered on a different microdomain, and thanks to the hard work of several superb graduate students, many of these abstract ideas were converted into genuine working computer programs. All of these projects are described in considerable detail in the book Fluid Concepts and Creative Analogies,4 co-authored by me and several of my students.

Here I would like to present in very quick terms one of those domains and the challenges that it involved, a project that clearly reveals how deeply Mikhail Bongard’s ideas inspired me. The project’s name is “Letter Spirit,” and it is concerned with the visual forms of the letters of the roman alphabet. In particular, our goal is to build a computer program that can design all 26 lowercase letters, “a” through “z,” in any number of artistically consistent styles. The task is made even more “micro” by restricting the letterforms to a grid. In particular, one is allowed to turn on any of the 56 short horizontal, vertical, and diagonal line segments–“quanta,” as we call them–in the 2 × 6 array shown below. By so doing, one can render each of the 26 letters in some fashion; the idea is to make them all agree with each other stylistically.
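
The grid itself is easy to pin down in code. The sketch below assumes the geometry described above: 3 columns by 7 rows of points (the 2 × 6 array of cells), with quanta as the short segments joining neighbouring points, which does give 56 of them. The indexing scheme and the mirror helper (anticipating the “d”-to-“b” reflection discussed further on) are my own conventions, not the Letter Spirit project's.

```python
COLS, ROWS = 3, 7   # points in the grid; the cells form the 2 x 6 array in the text

def segment(p, q):
    """Canonical, order-independent representation of a quantum between two points."""
    return tuple(sorted((p, q)))

def all_quanta():
    quanta = set()
    for x in range(COLS):
        for y in range(ROWS):
            if x + 1 < COLS:
                quanta.add(segment((x, y), (x + 1, y)))        # horizontal
            if y + 1 < ROWS:
                quanta.add(segment((x, y), (x, y + 1)))        # vertical
            if x + 1 < COLS and y + 1 < ROWS:
                quanta.add(segment((x, y), (x + 1, y + 1)))    # one diagonal of the cell
                quanta.add(segment((x + 1, y), (x, y + 1)))    # the other diagonal
    return quanta

QUANTA = all_quanta()
assert len(QUANTA) == 56   # matches the count given in the text

# A letterform is simply the set of quanta that are turned on.
def mirror(form):
    """Reflect a letterform about the grid's vertical midline."""
    flip = lambda p: (COLS - 1 - p[0], p[1])
    return frozenset(segment(flip(a), flip(b)) for (a, b) in form)
```

Representing letterforms this way makes mechanical comparisons (shared quanta, shared diagonals, mirrored shapes) straightforward; what it does not capture is any of the judgement about “a”-ness or style that the project is actually after.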

To me, it is highly significant that Bongard chose to conclude his appendix of 100 pattern-recognition problems with a puzzle whose Category 1 consists of six highly diverse Cyrillic “A”s, and whose Category 2 consists of six equally diverse Cyrillic “B”s.

This choice of final problem is a symbolic message carrying the clear implication that, in Bongard’s opinion, the recognition of letters constitutes a far deeper problem than any of his 99 earlier problems–and the more general conclusion that a necessary prerequisite to tackling real-world pattern recognition in its infinite complexity is the development of all the intricate and subtle analogy-making machinery required to solve his 100 problems and the myriad other ones that lie in their immediate “halo.”

To show the fearsome complexity of the task of letter recognition, I offer the following display of uppercase “A”s, all designed by professional typeface designers and used in advertising and similar functions.

What kind of abstraction could lie behind this crazy diversity? (Indeed, I once even proposed that the toughest challenge facing AI workers is to answer the question: “What are the letters ‘A’ and ‘I’?”)

The Letter Spirit project attempts to study the conceptual enigma posed by the foregoing collection, but to do so within the framework of the grid shown above, and even to extend that enigma in certain ways. Thus, a Letter Spirit counterpart to the previous illustration would be the collection of grid-bound lowercase “a”s shown below, suggesting how intangible the essence of “a”-ness must be, even when the shapes are made solely by turning on or off very simple, completely fixed line segments.

I said above that the Letter Spirit project aims not just to study the enigma of the many “A”s, but to extend that enigma. By this I meant the following. The challenge of Letter Spirit is not merely the recognition or classification of a set of given letters, but the creation of new letterforms, and thereby the creation of new artistic styles. Thus the task for the program would be to take a given letter designed by a person–any one of the “a”s below, for instance–and to let that letter inspire the remaining 25 letters of the alphabet. Thus one might move down the line consecutively from “a” to “b” to “c,” and so on. Of course, the seed letter need not be an “a,” and even if it were an “a,” the program would be very unlikely to proceed in strict alphabetical order (if one has created an “h,” it is clearly more natural to try to design the “n” before tackling the design of “i”); but let us nonetheless imagine a strictly alphabetic design process stopped while under way, so that precisely the first seven letters of the alphabet have been designed, and the remaining nineteen remain to be done. Let us in fact imagine doing such a thing with seven quite different initial “a”s. We would thus have something like the 7 × 7 matrix shown below.

Implicit in this matrix (especially in the dot-dot-dots on the right side and at the bottom) are two very deep pattern-recognition problems. First is the “vertical problem”–namely, what do all the items in any given column have in common? This is essentially the question that Bongard was asking in the final puzzle of his appendix. The answer, in a single word, is: Letter. Of course, to say that one word is not to solve the problem, but it is a useful summary. The second problem is, of course, the “horizontal problem”–namely, what do all the items in any given row have in common? To this question, I prefer the single-word answer: Spirit. How can a human or a machine make the uniform artistic spirit lurking behind these seven shapes leap to the abstract category of “h,” then leap from those eight shapes to the category “i,” then leap to “j,” and so on, all the way down the line to “z”?

And do not think that “z” is really the end of the line. After all, there remain all the uppercase letters, and then all the numerals, and then punctuation marks, and then mathematical symbols… But even this is not the end, for one can try to make the same spirit leap out of the roman alphabet and into such other writing systems as the Greek alphabet, the Russian alphabet, Hebrew, Japanese, Arabic, Chinese, and on and on. Of course, the making of such “transalphabetic leaps” (as I like to call them) goes way beyond the modest limits of the Letter Spirit project itself, but the suggestion serves as a reminder that, just as there are unimaginably many different spirits (i.e., artistic styles) in which to realize any given letter of the alphabet, there are also unimaginably many different “letters” (i.e., typographical categories) in which to realize any given stylistic spirit.

In metaphorical terms, one can talk about the alphabet and the “stylabet”–the set of all conceivable styles. Both of these “bets” are infinite rather than finite entities. The stylabet is very much like the alphabet in its subtlety and intangibility, but it resides at a considerably higher level of abstraction.

The one-word answers to the so-called vertical and horizontal questions–“letter” and “spirit”–gave rise to the project’s name. There is of course a classic opposition in the legal domain between the concepts of “letter” and “spirit”–the contrast between “the letter of the law” and “the spirit of the law.” The former is concrete and literal, the latter abstract and spiritual. And yet there is a continuum between them. A given law can be interpreted at many levels of abstraction. So too with the artistic design problems of the Letter Spirit project: there are many ways to extrapolate from a given seed letter to other alphabetic categories, some ways being rather simplistic and down-to-earth, others extremely sophisticated and high-flown. The Letter Spirit project does not by any means grow out of the dubious postulate that there is one unique “best” way to carry style consistently from one category to another; rather, it allows many possible notions of artistically valid style at many different levels of abstraction. Of course this means that the project is in complete opposition to any view of intelligence that sees the main purpose of mind as being an eternal quest after “right answers” and “truth.” That the human mind can conduct such a quest, principally through such careful disciplines as mathematics, science, history, and so forth, is a tribute to its magnificent subtlety, but to do science and history is not how or why the mind evolved, and it deeply misrepresents the mind to cast its activities solely in the narrow and rigid terms of truth-seeking.

To convey something of the flavor of the Letter Spirit project, I offer the following sample style-extrapolation puzzle, which I hope will intrigue readers. Take the following gridbound way of realizing the letter “d” and attempt to make a letter “b” that exhibits the same spirit, or style.

One idea that springs instantly to mind for many people is simply to reflect the given shape, since one tends to think of “d” and “b” as being in some sense each other’s mirror images. For many “d”s, this simple recipe for making a “b” might work, but in this case there is a somewhat troubling aspect to the proposal: the resultant shape has quite an “h”-ish look to it, enough perhaps to give a careful letter designer second thoughts.

What escape routes might be found, still respecting the rigid constraints of the grid?

One possible idea is that of reversing the direction of the two diagonal quanta at the bottom, to see if that action reduces the “h”-ishness.

To some people’s eyes, including mine, this action slightly improves the ratio of “b”-ness to “h”-ness. Notice that this move also has the appealing feature of echoing the exact diagonals of the seed letter. This agreement could be taken as a particular type of stylistic consistency. Perhaps, then, this is a good enough “b,” but perhaps not.

Another way one might try to entirely sidestep “h”-ishness would involve somehow shifting the opening from the bottom to the top of the bowl. Can you find a way to carry this out? Or are there yet other possibilities?

I must emphasize that this is not a puzzle with a clearly optimal answer; it is posed simply as an artistic challenge, to try to get across the nature of the Letter Spirit project. When you have made a “b” that satisfies you, can you proceed to other letters of the alphabet? Can you make an entire alphabet? How does your set of 26 letters, all inspired by the given seed letter, compare with someone else’s?

The Letter Spirit project is doubtless the most ambitious project in the modeling of analogy-making and creativity so far undertaken in my research group, and as of this writing, it has by no means been fully realized as a computer program. It is currently somewhere between a sketch and a working program, and in perhaps a couple of years a preliminary version will exist. But it builds upon several already-realized programs, all of whose architectures were deeply inspired by the ideas of Mikhail Bongard and by principles derived from the architecture of the pioneering perceptual program Hearsay II.

To conclude, I would like to cite the words of someone whose fluid way of thinking I have always admired–the great mathematician Stanislaw Ulam. As Heinz Pagels reports in his book The Dreams of Reason, one time Ulam and his mathematician friend Gian-Carlo Rota were having a lively debate about artificial intelligence, a discipline whose approach Ulam thought was simplistic. Convinced that perception is the key to intelligence, Ulam was trying to explain the subtlety of human perception by showing how subjective it is, how influenced by context. He said to Rota, “When you perceive intelligently, you always perceive a function, never an object in the physical sense. Cameras always register objects, but human perception is always the perception of functional roles. The two processes could not be more different…. Your friends in AI are now beginning to trumpet the role of contexts, but they are not practicing their lesson. They still want to build machines that see by imitating cameras, perhaps with some feedback thrown in. Such an approach is bound to fail…”

Rota, clearly much more sympathetic than Ulam to the old-fashioned view of AI, interjected, “But if what you say is right, what becomes of objectivity, an idea formalized by mathematical logic and the theory of sets?”

Ulam parried, “What makes you so sure that mathematical logic corresponds to the way we think? Logic formalizes only a very few of the processes by which we actually think. The time has come to enrich formal logic by adding to it some other fundamental notions. What is it that you see when you see? You see an object as a key, a man in a car as a passenger, some sheets of paper as a book. It is the word ‘as’ that must be mathematically formalized…. Until you do that, you will not get very far with your AI problem.”

To Rota’s expression of fear that the challenge of formalizing the process of seeing a given thing as another thing was impossibly difficult, Ulam said, “Do not lose your faith–a mighty fortress is our mathematics,” a droll but ingenious reply in which Ulam practices what he is preaching by seeing mathematics itself as a fortress!

If anyone else but Stanislaw Ulam had made the claim that the key to understanding intelligence is the mathematical formalization of the ability to “see as,” I would have objected strenuously. But knowing how broad and fluid Ulam’s conception of mathematics was, I think he would have been able to see the Letter Spirit architecture and its predecessor projects as mathematical formalizations.

In any case, when I look at Ulam’s key word “as,” I see it as an acronym for “Abstract Seeing” or perhaps “Analogical Seeing.” In this light, Ulam’s suggestion can be restated in the form of a dictum–“Strive always to see all of AI as AS”–a rather pithy and provocative slogan to which I fully subscribe.


Notes

1 For more on this, see “Waking Up from the Boolean Dream,” Chapter 26 of my book, Metamagical Themas (New York: Basic, 1985).

2 See Mikhail Moiseevich Bongard, Pattern Recognition (New York: Spartan Books, 1970).

3 See Chapter 19 of my book Gödel, Escher, Bach (New York: Basic, 1979) for this sketched architecture.

4 Douglas R. Hofstadter and the Fluid Analogies Research Group, Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanism of Thought (New York: Basic, 1995).

Source: On Seeing A’s and Seeing As

ethics and second-order cybernetics – heinz von foerster

SEHR, volume 4, issue 2: Constructions of the Mind
Updated 4 June 1995


ethics and second-order cybernetics


Heinz von Foerster

Ladies and Gentlemen:

I am touched by the generosity of the organizers of this conference, who not only invited me to come to your glorious city of Paris, but also gave me the honor of opening the plenary sessions with my presentation.[1]

And I am impressed by the ingenuity of our organizers, who suggested to me the title of my presentation. They wanted me to address myself to “Ethics and Second-Order Cybernetics.”

To be honest, I would never have dared to propose such an outrageous title, but I must say that I am delighted that this title was chosen for me.

Before I left California for Paris, others asked me, full of envy: “What are you going to do in Paris? What will you talk about?”

When I answered “I shall talk about Ethics and Second-Order Cybernetics,” almost all of them looked at me in bewilderment and asked “What is second-order cybernetics?” as if there were no questions about ethics.

I am relieved when people ask me about second-order cybernetics and not about ethics, because it is so much easier to talk about second-order cybernetics than it is to talk about ethics. In fact, it is impossible to talk about ethics. But let me explain that later, and let me now say a few words about cybernetics, and, of course, about cybernetics of cybernetics, or second-order cybernetics.

As you all know, cybernetics arises when effectors, say, a motor, an engine, our muscles, etc. are connected to a sensory organ which, in turn, acts with its signals upon the effectors.

It is this circular organization which sets cybernetic systems apart from others that are not so organized. Here is Norbert Wiener, who re-introduced the term “cybernetics” into scientific discourse. He observed:

The behavior of such systems may be interpreted as directed to the attainment of a goal.

That is, it looks as if these systems pursued a purpose! That sounds very bizarre indeed.
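
To make the circularity concrete, here is a minimal, thermostat-like sketch of such a loop; the numbers and names are mine, chosen only for illustration. The sensed temperature drives the effector, the effector's action changes what is sensed next, and the loop behaves "as if" it pursued the goal.

```python
goal = 20.0          # the value the loop appears to "pursue"
temperature = 12.0   # state of the environment

for step in range(10):
    error = goal - temperature      # sensory organ: measure the deviation
    heating = 0.5 * error           # effector: act in proportion to what is sensed
    temperature += heating - 0.2    # environment: heat applied, minus a steady loss
    print(f"step {step}: temperature = {temperature:.2f}")
# The temperature settles near the goal, although no component "knows" the purpose.
```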

But let me give you other paraphrases of what cybernetics is all about by invoking the spirit of women and men who rightly could be considered the mamas and papas of cybernetic thought and action.

First, here is Margaret Mead, whose name is, I am sure, familiar to all of you. In one of her addresses to the American Society of Cybernetics she said:

As an anthropologist, I have been interested in the effects that the theories of cybernetics have within our society. I am not referring to computers or to the electronic revolution as a whole, or to the end of dependence on script for knowledge, or to the way that dress has succeeded the mimeographing machine as a form of communication among the dissenting young.

Let me repeat that:

I am not referring to the way that dress has succeeded the mimeographing machine as a form of communication among the dissenting young.

[And then she continues:]

I specifically want to consider the significance of the set of cross-disciplinary ideas which we first called ‘feed-back’ and then called ‘teleological mechanisms’ and then called ‘cybernetics’ — a form of cross-disciplinary thought which made it possible for members of many disciplines to communicate with each other easily in a language which all could understand.

And here is the voice of her third husband, the epistemologist, anthropologist, cybernetician, and, as some say, the papa of family therapy, Gregory Bateson:

Cybernetics is a branch of mathematics dealing with problems of control, recursiveness and information.

And here the organizational philosopher and managerial wizard Stafford Beer:

Cybernetics is the science of effective organization.

And, finally, here the poetic reflection of “Mister Cybernetics,” as we fondly call him, the cybernetician’s cybernetician, Gordon Pask:

Cybernetics is the science of defensible metaphors.

It seems that cybernetics is many different things to many different people, but this is because of the richness of its conceptual base. And this is, I believe, very good; otherwise, cybernetics would become a somewhat boring exercise. However, all of those perspectives arise from one central theme, and that is that of circularity.

When, perhaps a half century ago, the fecundity of this concept was seen, it was sheer euphoria to philosophize, epistemologize, and theorize about its consequences, its ramification into various fields, and its unifying power.

While this was going on, something strange evolved among the philosophers, the epistemologists and the theoreticians: they began to see themselves more and more as being themselves included in a larger circularity, maybe within the circularity of their family, or that of their society and culture, or being included in a circularity of even cosmic proportions.

What appears to us today most natural to see and to think, was then not only hard to see, it was even not allowed to think!

Why?

Because it would violate the basic principle of scientific discourse which demands the separation of the observer from the observed. It is the principle of objectivity: the properties of the observer shall not enter the description of his observations.

I gave this principle here in its most brutal form, to demonstrate its nonsensicality: if the properties of the observer, namely, to observe and to describe, are eliminated, there is nothing left: no observation, no description.

However, there was a justification for adhering to this principle, and this justification was fear. Fear that paradoxes would arise when the observers were allowed to enter the universe of their observations. And you know the threat of paradoxes: to steal their way into a theory is like having the cloven-hoofed foot of the Devil stuck in the door of orthodoxy.

Clearly, when cyberneticians were thinking of partnership in the circularity of observing and communicating, they were entering the forbidden land:

In the general case of circular closure, A implies B, B implies C, and — O! Horror! — C implies A!

Or in the reflexive case:

A implies B, and — O! Shock! — B implies A!

And now Devil’s cloven-hoofed foot in its purest form, in the form of self-reference:

A implies A.

— Outrage!

I would like to invite you now to come with me into the land where it is not forbidden, but where one is even encouraged to speak about oneself (what else can one do anyway?).

This turn from looking at things out there to looking at looking itself, arose — I think — from significant advances in neurophysiology and neuropsychiatry.

It appeared that one could now dare to ask the question of how the brain works; one could dare to write a theory of the brain.

It may be argued that over the centuries, since Aristotle, physicians and philosophers again and again developed theories of the brain. So what’s new about the efforts of today’s cyberneticians?

What is new is the profound insight that it needs a brain to write a theory of the brain. From this it follows that a theory of the brain that has any aspirations to completeness has to account for the writing of this theory. And, even more fascinating, the writer of this theory has to account for her- or himself. Translated into the domain of cybernetics: the cybernetician, by entering his own domain, has to account for his own activity; cybernetics becomes cybernetics of cybernetics, or second-order cybernetics.

Ladies and gentlemen, this perception represents a fundamental change not only in the way we conduct science, but also how we perceive of teaching, of learning, of the therapeutic process, of organizational management, and so on and so forth; and — I would say — of how we perceive relationships in our daily life.

One may see this fundamental epistemological change if one considers oneself first to be an independent observer who watches the world go by; or if one considers oneself to be a participant actor in the drama of mutual interaction, of the give and take in the circularity of human relations.

In the first case, because of my independence, I can tell others how to think and to act: “Thou shalt. . . .,” “Thou shalt not. . . .”: This is the origin of moral codes. In the second case, because of my interdependence, I can only tell to myself how to think and to act: “I shall. . . .,” “I shall not. . . .”

This is the origin of ethics.

This was the easy part of my presentation. Now comes the difficult part: I am supposed to reflect about ethics.

How to go about this? Where to begin?

In my search for a beginning I came across the lovely poem by Yveline Rey and Bernard Prieur that embellishes the first page of our program. Let me read to you the first few lines:

“Vous avez dit éthique?”
Déjà le murmure s’amplifie en rumeur.
Soudain les roses ne montrent plus des épines.
Sans doute le sujet est-il brûlant.
Il est aussi d’actualité.

[You said ethics? Already the murmur swells into a rumor. Suddenly the roses no longer show thorns. No doubt the subject is a burning one. It is also topical.]

Let me begin with épines, with the thorns, and I hope a rose will emerge.

The thorns I begin with are Ludwig Wittgenstein’s reflections upon ethics in his Tractatus Logico-Philosophicus.

If I were to provide a title for this Tractatus, I would call it Tractatus Ethico-Philosophicus. However, I am not going to defend this choice; I rather tell you what prompts me to refer to Wittgenstein’s reflections in order to present my own.

I am referring to point Number 6 in his Tractatus where he discusses the general form of propositions. Almost at the end of this discussion he turns to the problem of values in the world and their expression in propositions. In his famous point number 6.421 he comes to a conclusion which I will read to you in the original German:

Es ist klar, dass sich Ethik nicht aussprechen lässt.

I only know two English translations which are both incorrect. Therefore, I will give you my translation into English:

It is clear that ethics cannot be articulated.

Now you understand why I said before: “My beginning will be thorns.” Here is an International Congress on Ethics, and the first speaker says something to the effect that it is impossible to speak about ethics. But please, be patient for a moment. I quoted Wittgenstein’s thesis in isolation, therefore it is not yet clear what he wanted to say. Fortunately, the next point 6.422, which I will read in a moment, provides a larger context for 6.421. To prepare you for what you are going to hear, you should remember that Wittgenstein was a Viennese. So am I. Therefore there is a kind of underground understanding which, I sense, you Parisians will share with us Viennese. Let me try.

Here is now point 6.422 in the English translation by Pears and McGuinness:

When an ethical law of the form “Thou shalt. . . .” is laid down, one’s first thought is “And what if I do not do it?”

When I read this, my first thought was that not everybody will share that first thought with Wittgenstein. I think here speaks his cultural background.

Let me continue with Wittgenstein.

It is clear, however, that ethics has nothing to do with punishment and reward in the usual sense of the terms. Nevertheless, there must indeed be some kind of ethical reward and punishment, but they must reside in the action itself.

“They must reside in the action itself!”

You may remember, we came across such self-referential notions earlier with the example “A implies A” and its recursive relatives of second-order cybernetics.

Can we take a hint from these comments for how to go about reflecting about ethics and, at the same time, adhering to Wittgenstein’s criterion? I think we can. I, for myself, try to follow the following rule:

For any discourse, I may have — say, in science, philosophy, epistemology, therapy, etc. — to master the use of my language so that ethics is implicit.

What do I mean by that? I mean by that to let language and action ride on an underground river of ethics, and to see to it that one is not thrown off, so that ethics does not become explicit, and so that language does not degenerate into moralization.

How can one accomplish this? How can one hide ethics from all eyes and still let her determine language and action?

Fortunately, Ethics has two sisters who allow her to remain unseen, because they create for us a visible framework, a tangible tissue within which, and upon which, we may weave the Gobelins of our life. And who are these two sisters?

One is Metaphysics. The other Dialogics.

My program now is to talk about these two ladies, and how they manage to allow Ethics to become manifest without becoming explicit.

metaphysics

Let me first talk about Metaphysics. In order to let you see at once the delightful ambiguity that surrounds her, let me quote from a superb article on “The Nature of Metaphysics” by the British scholar W. H. Walsh. He begins his article with the following sentence:

Almost everything in metaphysics is controversial and it is therefore not surprising that there is little agreement among those who call themselves metaphysicians about what precisely it is they are attempting.

When I invoke today Metaphysics, I do not seek agreement with anybody else about her nature. This is because I want to say precisely what it is we do when we become metaphysicians, whether or not we call ourselves metaphysicians. I say we become metaphysicians whenever we decide upon in principle undecidable questions. There are indeed among propositions, proposals, problems, questions, those that are decidable, and those that are in principle undecidable.

Here, for instance, is a decidable question: “Is the number 3,396,714 divisible by 2?” It will take you less than 2 seconds to decide that indeed this number is divisible by 2. The interesting thing here is that it will take you exactly the same short time to decide this question, if the number has not 7, but 7000 or 7 million digits.

Of course, I could invent questions that are slightly more difficult, for instance: “Is 3,396,714 divisible by three?” or more difficult ones. But there are also problems that are extraordinarily difficult to decide, some of them posed more than 200 years ago and still not answered. Think of Fermat’s “Last Theorem”, to which the most brilliant heads have put their brilliant minds and have not yet come up with an answer.

Or think of Goldbach’s “Conjecture” which sounds so simple that it seems a proof cannot be too far away:

All even numbers can be composed as the sum of two primes.

For example, 12 is the sum of the two primes 5 and 7; or 20 = 17 + 3; or 24 = 13 + 11; and so on and so forth. So far, no counterexample to Goldbach’s conjecture has been found. And even if all further tests fail to refute Goldbach, it would still remain a conjecture until a sequence of mathematical steps is found that decides in favor of his good sense of numbers. There is a justification for not giving up but for continuing the search for a sequence of steps that would prove Goldbach. It is that the problem is posed in a framework of logico-mathematical relations which guarantees that one can climb from any node of this complex crystal of connections to any other node.
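
The difference between the two kinds of questions can be made concrete in a few lines of code (a sketch with invented helper names, nothing more). Deciding divisibility is a single mechanical step; checking Goldbach's conjecture case by case can only ever confirm finitely many instances, which is exactly why it remains a conjecture.

```python
def divisible_by_two(n: int) -> bool:
    return n % 2 == 0               # decidable at once, however many digits n has

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n, if such a pair exists for this even n."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

print(divisible_by_two(3_396_714))                 # True
for n in (12, 20, 24):
    print(n, "=", "%d + %d" % goldbach_pair(n))    # 12 = 5 + 7, 20 = 3 + 17, 24 = 5 + 19
```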

One of the most remarkable examples of such a crystal of thoughts is Bertrand Russell and Alfred North Whitehead’s monumental Principia Mathematica, which they wrote over a period of 10 years between 1900 and 1910. This magnum opus of 3 volumes and more than 1500 pages was to establish once and for all a conceptual machinery for flawless deductions. A conceptual machinery that would contain no ambiguities, no contradictions and no undecidables.

Nevertheless, in 1931, Kurt Gödel, then 25 years of age, published an article whose significance goes far beyond the circles of logicians and mathematicians. The title of this article I will give now in English: “On formally undecidable propositions in the Principia Mathematica and related systems.”

What Gödel does in his paper is to demonstrate that logical systems, even those so carefully constructed by Russell and Whitehead, are not immune to undecidables sneaking in.

However, we do not need to go to Russell, Whitehead, Gödel, or other giants to learn about in principle undecidable questions; we can easily find them all around us.

For instance, the question about the origin of the universe is one of those in principle undecidable questions: nobody was there to watch it. Moreover, this becomes apparent from the many different answers that are given to this question. Some say it was a single act of creation some 4,000 or 5,000 years ago; others say there was never a beginning and there will never be an end, because the universe is a system in perpetual dynamic equilibrium; then there are those who claim that approximately 10 or 20 billion years ago the universe came into being with a “Big Bang,” whose faint remnants one is able to hear over large radio antennas; but I am inclined to trust Chuang Tse’s report most, because he is the oldest and was therefore the closest to this event. He says:

Heaven does nothing; this nothing-doing is dignity;
Earth does nothing; this nothing-doing is rest;
From the union of these two nothing-doings arise all action
And all things are brought forth.

I could go on and on with other examples, because I have not told you yet what the Burmese, the Australians, the Eskimos, the Bushmen, the Ibos, etc., would tell us about their origins. In other words, tell me how the universe came about, and I will tell you who you are.

I hope I have made the distinction between decidable and in principle undecidable questions sufficiently clear, so that I can present you with a proposition I call the “metaphysical postulate.” Here it is:

Only those questions that are in principle undecidable, we can decide.

Why?

Simply because the decidable questions are already decided by the choice of the framework in which they are asked, and by the choice of rules of how to connect what we call “the question” with what we may take for an “answer.” In some cases it may go fast, in others it may take a long, long time, but ultimately we will arrive, after a sequence of compelling logical steps, at an irrefutable answer: a definite Yes, or a definite No.

But we are under no compulsion, not even under that of logic, when we decide upon in principle undecidable questions. There is no external necessity that forces us to answer such questions one way or another. We are free! The complement to necessity is not chance, it is choice! We can choose who we wish to become when we have decided on in principle undecidable questions.

This is the good news, American journalists would say. Now comes the bad news.

With this freedom of choice we are now responsible for whatever we choose! For some this freedom of choice is a gift from heaven. For others such responsibility is an unbearable burden: How can one escape it? How can one avoid it? How can one pass it on to somebody else?

With much ingenuity and imagination, mechanisms were contrived by which one could bypass this awesome burden. With hierarchies, entire institutions have been built where it is impossible to localize responsibility. Everyone in such a system can say: “I was told to do X.”

On the political stage we hear more and more the phrase of Pontius Pilate: “I have no choice but X.” In other words “Don’t make me responsible for X, blame others.” This phrase apparently replaces: “Among the many choices I had, I decided on X.”

I mentioned objectivity before and I mention it here again as another popular device of avoiding responsibility.

As you may remember, objectivity requires that the properties of the observer shall not enter the description of his observations. With the essence of observing, namely the processes of cognition, being removed, the observer is reduced to a copying machine, and the notion of responsibility has been successfully juggled away.

However, Pontius Pilate, hierarchies, objectivity, and other devices, are all derivations of a decision that has been made on a pair of in principle undecidable questions. Here is the decisive pair:

Am I apart from the universe?

That is, whenever I look I am looking as through a peephole upon an unfolding universe.

or

Am I part of the universe?

That is, whenever I act, I am changing myself and the universe as well.

Whenever I reflect upon these two alternatives, I am surprised again and again by the depth of the abyss that separates the two fundamentally different worlds that can be created by such choices.

Either to see myself as a citizen of an independent universe, whose regularities, rules and customs I may eventually discover, or to see myself as a participant in a conspiracy, whose customs, rules, and regulations we are now inventing.

Whenever I speak to those who have made their decision to be either discoverers or inventors, I am impressed again and again by the fact that neither of them realizes having ever made that decision. Moreover, when challenged to justify their position, they construct a conceptual framework that, it turns out, is itself the result of a decision upon an in principle undecidable question.

It seems that I am telling you a detective story, but keeping silent about who is the good guy and who is the bad guy, or who is sane and who is insane, or who is right and who is wrong. Since these are in principle undecidable questions, it is for each of us to make this decision and to take the responsibility for it. There is a murderer; I submit it is unknowable whether he is or was insane. The only thing we know is what I say, what you say, or what the expert says he is. And what I say, what you say, and what the expert says about his sanity or insanity: it is my, it is your, and it is the expert’s responsibility. Again, the point here is not the question “Who is right and who is wrong?” This is an in principle undecidable question. The point here is freedom: freedom of choice. It is José Ortega y Gasset’s point:

Man does not have a nature, but a history. Man is no thing, but a drama. His life is something that has to be chosen, made up as he goes along, and a human consists in that choice and invention. Each human being is the novelist of himself, and though he may choose between being an original writer and a plagiarist, he cannot escape choosing. . . . He is condemned to be free.

You may have become suspicious of my qualifying all questions as being in principle undecidable. This is by no means the case. I was once asked how the inhabitants of such different worlds as I sketched them before, the inhabitants of a world they discover and the inhabitants of a world they invent, could ever live together. There is no problem in answering that. The discoverers will most likely become astronomers, physicists and engineers; the inventors, family therapists, poets and biologists. And for all of them living together will be no problem either, as long as the discoverers discover inventors, and the inventors invent discoverers. Should difficulties develop, fortunately we have this full house of family therapists who may help bring sanity to the human family.

I have a dear friend who grew up in Marrakech. The house of his family stood on the street that divided the Jewish and the Arab quarters. As a boy he played with all the others, listened to what they thought and said, and learned of their fundamentally different views. When I asked him once, “Who was right?” he said, “They are both right.”

“But this cannot be,” I argued from an Aristotelian platform, “Only one of them can have the truth!”

“The problem is not truth,” he answered, “The problem is trust.”

I understood: the problem is understanding; the problem is understanding understanding; the problem is making decisions upon in principle undecidable questions.

At that point Metaphysics appeared and asked her younger sister, Ethics: “What would you recommend that I should bring back to my proteges, the metaphysicians, whether or not they call themselves such?” And Ethics answered: “Tell them they should always try to act so as to increase the number of choices; yes, increase the number of choices!”

dialogics

Now I would like to turn to Ethics’s sister Dialogics.

What are the means at her disposal so that through them Ethics can manifest herself without becoming explicit?

I think you may have guessed it already; it is, of course, language. I am not talking here about language in the sense of the noises that are produced by pushing air past our vocal cords, or language in the sense of grammars, syntax, semantics, semiotics, and the whole machinery of phrases, verb-phrases, noun-phrases, deep structure, etc. When I talk here about language, I talk about Language, the dance. Very much as we say “It takes two to Tango,” I am saying, “It takes two to Language.”

When it comes to language, the dance, you, the family therapists are, of course, the masters, while I can only speak as an amateur. Since “amateur” comes from “Amour,” you know at once that I love to dance this dance.

In fact, the little I know to dance this dance I learned from you. My first lesson was when I was invited to sit in the observation room and to watch through the one-way mirror a therapeutic session in progress with a family of four. At one moment my colleagues had to leave, and I was by myself. I was curious as to what I would see when I could not hear what was said, so I turned the sound off.

I recommend that you make this experiment yourself. Perhaps you will be as fascinated as I was. What I saw then, the silent pantomime, the parting and closing of lips, the body movements, the boy who only once stopped biting his nails. . . . What I saw then were the dance steps of language, the dance steps alone, without the disturbing effects of the music. Later I heard from the therapist that this session was very successful indeed.

What magic, I thought, must sit in the noises these people produced by pushing air past their vocal cords, and by parting and closing their lips.

Therapy! What magic indeed!

And to think that the only medicine at your disposal is the dance steps of language and its accompanying music.

Language! What magic indeed!

It is left to the naive to believe that magic can be explained. Magic cannot be explained; magic can only be practiced, as you all well know.

Reflecting upon the magic of language is similar to reflecting upon a theory of the brain. As much as one needs a brain to reflect upon a theory of the brain, one needs the magic of language to reflect upon the magic of language. It is the magic of those notions that need themselves to come into being. They are of second-order.

It is also the way language protects itself against explanation by always speaking about itself: There is a word for language, namely, “language”; there is a word for word, namely, “word.” If you don’t know what “word” means, you can look it up in a dictionary. I did that. I found it to be an “utterance.” I asked myself, what is an “utterance”? I looked it up in the dictionary. The dictionary said it means: “to express through words.”

So we are back where we started. Circularity: A implies A.

But this is not the only way language protects itself against explanation. In order to confuse her explorer she always runs on two different tracks. If you chase language up one track, she jumps to the other. If you follow her there, she is back on the first.

What are these two tracks?

The one track is the track of appearance. It runs through the land that appears to be stretched out before us: the land we are looking at as through a peephole.

The other track is the track of function. It runs through the land that is as much part of us as we are part of it: the land that functions like an extension of our body.

When language is on the track of appearance it is monologue. There are the noises produced by pushing air past vocal cords, there are the words, the grammars, the syntax, the well-formed sentences. Along with these noises go the denotative pointings. Point to a table, make the noise “table” — point to a chair, make the noise “chair.”

Sometimes it does not work. Margaret Mead quickly learned the colloquial languages of many tribes by pointing to things and waiting for the appropriate noises. She told me that once she came to a tribe, pointed to different things, but always got the same noise, “chumulu.” A primitive language, she thought: only one word! Later, she learned that “chumulu” means “pointing with finger.”

When language switches to the track of function it is dialogic. There are of course these noises; some of them may sound like “table,” some others like “chair,” but there need not be any tables or chairs, because nobody is pointing at tables or chairs. These noises are invitations to the other to make some dance steps together. The noises “table” and “chair” bring to resonance those strings in the mind of the other which, when brought to vibration, would produce noises like “table” and “chair.” Language in its function is connotative.

In its appearance, language is descriptive. When you tell your story, you tell it as it was: the magnificent ship, the ocean, the big sky, and the flirt you had, that made the whole trip a delight.

But for whom do you tell it? That’s the wrong question. The right question is: With whom are you going to dance your story, so that your partner will float with you over the decks of your ship, will smell the salt of the ocean, will let the soul expand over the sky, and there will be a flash of jealousy when you come to the point of your flirt.

In its function, language is constructive, because nobody knows the source of your story. Nobody knows, or ever will know how it was: because as it was is gone forever.

You remember René Descartes, as he was sitting in his study, not only doubting that he was sitting in his study, but also doubting his existence. He asked himself: “Am I, or am I not?”

He answered this rhetorical question with the solipsistic monologue: “Je pense, donc je suis,” or in the famous Latin version, “Cogito ergo sum.” As Descartes knew very well, this is language in its appearance; otherwise he would not have quickly published his insight for the benefit of others in his “Discours de la méthode.” Since he understood the function of language as well, in all fairness he should have exclaimed: “Je pense, donc nous sommes,” “Cogito ergo sumus,” or “I think, therefore we are!”

In its appearance, the language I speak is my language. It makes me aware of myself: this is the root of consciousness.

In its function, my language reaches out for the other: this is the root of conscience. And this is where Ethics invisibly manifests itself through dialogue. Permit me to read to you what Martin Buber says in the last few lines of his book Das Problem des Menschen:

Contemplate the human with the human, and you will see the dynamic duality, the human essence, together: here is the giving and the receiving, here the aggressive and the defensive power, here the quality of searching and of responding, always both in one, mutually complementing in alternating action, demonstrating together what it is: human. Now you can turn to the single one and you recognize him as human for his potential of relating. We may come closer to answering the question “What is human?” when we come to understand him as the being in whose dialogic, in his mutually present two-getherness, the encounter of the one with the other is realized and recognized at all times.

Since I cannot add anything to Buber’s words, this is all I can say about ethics, and about second-order cybernetics. Thank you very much.

Heinz von Foerster
August, 1994
Pescadero, CA


Notes

1. Opening address for the International Conference, Systems and Family Therapy: Ethics, Epistemology, New Methods, held in Paris, France, October 4th, 1990, subsequently published (in translation) in Yveline Rey and Bernard Prieur, eds., Systemes, ethiques: Perspectives en therapie familiale (Paris: ESF Editeur, 1991) 41-54. Reprinted with permission from the original unpublished English version.

 

Source: ethics and second-order cybernetics

Try again. Fail again. Fail better: the cybernetics in design and the design in cybernetics Ranulph Glanville

Try again. Fail again. Fail better: the cybernetics in design and the design in cybernetics

Ranulph Glanville
The Bartlett School of Architecture, UCL, London, UK, and
CybernEthics Research, Southsea, UK

Abstract
Purpose – The purpose of this paper is to explore the two subjects, cybernetics and design, in order to establish and demonstrate a relationship between them. It is held that the two subjects can be considered complementary arms of each other.

Design/methodology/approach – The two subjects are each characterised so that the author’s interpretation is explicit and those who know one subject but not the other are briefed. Cybernetics is examined in terms of both classical (first-order) cybernetics and the more consistent second-order cybernetics, which is the cybernetics used in this argument. The paper develops through a comparative analysis of the two subjects, exploring analogies between the two at several levels.

Findings – A design approach is characterised and validated, and contrasted with a scientific approach. The analogies that are proposed are shown to hold. Cybernetics is presented as theory for design, design as cybernetics in practice. Consequent findings, for instance that both cybernetics and design imply the same ethical qualities, are presented.

Research limitations/implications – The research implications of the paper are that, where research involves design, the criteria against which it can be judged are far more Popperian than might be imagined. Such research will satisfy the condition of adequacy, rather than correctness. A secondary outcome concerning research is that, whereas science is concerned with what is (characterised through the development of knowledge of (what is)), design (and by implication other subjects primarily concerned with action) is concerned with knowledge for acting.

Practical implications – The theoretical validity of second-order cybernetics is used to justify and give proper place to design as an activity. Thus, the approach designers use is validated as complementary to, and placed on an equal par with, other approaches. This brings design, as an approach, into the realm of the acceptable. The criteria for the assessment of design work are shown to be different from those appropriate in other, more traditionally acceptable approaches.

Continued in source (pdf): http://www.asc-cybernetics.org/systems_papers/C%20and%20D%20paper%200670360902.pdf

 

A (Cybernetic) Musing: Wicked Problems

Cybernetics and Human Knowing. Vol. 19, nos. 1-2, pp. 163-173
A (Cybernetic) Musing: Wicked Problems

Ranulph Glanville

Intentions

In this column, I explore what are known as Wicked Problems. I have recently come to understand that Wicked Problems can be placed in a central position that brings together a number of different ideas which are significant in second order cybernetics.

I will start by describing the context in which the Wicked Problem concept was developed. Then I move to explore connections between Wicked Problems and other cybernetic ideas, specifically undecidable questions, trivial and non-trivial machines, and the Black Box. I conclude by discussing the somewhat surprising benefits that we can gain from the concept, and position this with other second-order cybernetic concepts.

Source (pdf): https://www.uboeschenstein.ch/texte/Glanville-C&HK2012.pdf

A Systems Literacy Manifesto – Hugh Dubberly

In 1968, West Churchman wrote, “…there is a good deal of turmoil about the manner in which our society is run. …the citizen has begun to suspect that the people who make major decisions that affect our lives don’t know what they are doing.”[1] Churchman was writing at a time of growing concern about war, civil rights, and the environment. Almost fifty years later, these concerns remain, and we have more reason than ever “to suspect that the people who make major decisions that affect our lives don’t know what they are doing.” Examples abound.

In the 2012 United States presidential election, out of eight Republican party contenders, only Jon Huntsman unequivocally acknowledged evolution and global warming.[2] While a couple of the candidates may actually be anti-science, what is more troubling is that almost all the candidates felt obliged to distance themselves from science, because a significant portion of the U.S. electorate does not accept science. This fact suggests a tremendous failing of education, at least in the U.S.

But even many highly educated leaders do not understand simple systems principles. Alan Greenspan, vaunted Chairman of the U.S. Federal Reserve Board of Governors, has a PhD in economics; yet he does not believe markets need to be regulated in order to ensure their stability. After the financial disaster of 2008, Greenspan testified to Congress, “Those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity, myself included, are in a state of shocked disbelief.”[3] Despite familiarity with the long history of bubbles, collapses, and self-dealing in markets, Greenspan expected that people whose bonuses are tied to quarterly profits would act in the long-term interest of their neighbors. Like many Libertarians, Greenspan relies on the dogma of Ayn Rand, rather than asking whether systems models (models of stability, disturbance, and regulation), like those described by James Clerk Maxwell in his famous 1868 paper, “On Governors,”[4] might be needed in economic and political systems.
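
As a hedged illustration of what such “models of stability, disturbance, and regulation” look like (my sketch in Python, not Maxwell’s mathematics and not part of the manifesto), a few lines contrast an unregulated stock with one steered back toward a target by simple negative feedback:

    def simulate(steps: int, gain: float, target: float = 100.0) -> float:
        # A stock under a constant upward disturbance; 'gain' sets how strongly
        # negative feedback pushes it back toward the target each step.
        level = target
        for _ in range(steps):
            disturbance = 5.0
            correction = gain * (target - level)
            level += disturbance + correction
        return level

    print(simulate(50, gain=0.0))  # unregulated: drifts to 350.0
    print(simulate(50, gain=0.5))  # regulated: settles near 110, close to the target

With no feedback the level drifts without limit; with feedback it settles, which is the sense in which regulation creates stability.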

Misunderstanding of regulation moved from the fringe right to national policy when Ronald Reagan was elected President of the United States, convincing voters that “Government is not the solution to our problems; government is the problem.”[5] Reagan forgot that (under the U.S. system) “we, the people,” are the government. Reagan forgot the purpose of the U.S. government: “to form a more perfect Union, establish Justice, ensure domestic Tranquility, provide for the common defense, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity…” that is, to create stability.[6] And Reagan forgot that any state—any system—without government is by definition unstable, inherently chaotic, and quite literally out-of-control. We need to remember that “government” simply means “steering” and that its root, the Greek word kybernetes, is also the root of cybernetics, the study of feedback systems and regulation.

Churchman points out that decision makers “don’t know what they are doing,” because they lack “adequate basis to judge effects.” It is not stupidity. It is a sort of illiteracy. It is a symptom that something is missing in public discourse and in our schools.

We need systems literacy—in decision makers and in the general public.

 

Continues in source: A Systems Literacy Manifesto
