Systems Design: Because everything* is systems – Alëna Iouguina

Source: Systems Design: Because everything* is systems – Shopify UX

 

Systems Design: Because everything* is systems

This is a computer-generated image of stereocilia, the sensing organelles of hair cells found in the inner ear. They respond to motion for various functions, including hearing and balance inside the human body. The hair cells turn fluid pressure and other stimuli into electrical signals that travel to the brain, where they are interpreted as sound. This image also looks like an extra-terrestrial landscape. *Note: ‘everything’ meaning a set of connected things or parts forming a complex whole. For example, a pile of sawdust is not a system, and a wooden dining table is.

The Warning of the Doorknob

All good product and service design begins with systems analysis. We are tasked with understanding not just one system that the product will be composed of (e.g. electrical, information, mechanical, hydraulic, etc.) or is designed for (transportation, manufacturing, social, natural, etc.) but many. Systems are fascinating but also a great source of anxiety, because they are uncertain, full of complex interpersonal relationships, indefinite, and difficult.

This anxiety is rooted in the hierarchical nature of systems and moves in two directions: infinite escalation and infinite regression.

Have you ever wondered why we have such high resolution photos of space and still terribly grainy photos of cells and molecules?

John P. Eberhard, a neuroscientist who studies the brain and how it experiences built environments, named this dichotomy of complexities “the warning of the doorknob”, as described by Ed Yourdon in his book ‘Just Enough Structured Analysis’:

This has been my experience in Washington when I had money to give away. If I gave a contract to a designer and said, “The doorknob to my office doesn’t have much imagination, much design content. Will you design me a new doorknob?” He would say “Yes,” and after we establish a price he goes away. A week later he comes back and says, “Mr. Eberhard, I’ve been thinking about that doorknob. First, we ought to ask ourselves whether a doorknob is the best way of opening and closing a door.” I say, “Fine, I believe in imagination, go to it.” He comes back later and says, “You know, I’ve been thinking about your problem, and the only reason you want a doorknob is you presume you want a door to your office. Are you sure that a door is the best way of controlling egress, exit, and privacy?”

“No, I’m not sure at all.” “Well I want to worry about that problem.” He comes back a week later and says, “The only reason we have to worry about the aperture problem is that you insist on having four walls around your office. Are you sure that is the best way of organizing this space for the kind of work you do as a bureaucrat?” I say, “No, I’m not sure at all.” Well, this escalates until (and this has literally happened in two contracts, although not through this exact process) our physical designer comes back with a very serious face. “Mr. Eberhard, we have to decide whether capitalistic democracy is the best way to organize our country before I can possibly attack your problem.”

On the other hand is the problem of infinite regression. If this man faced with the design of the doorknob had said, “Wait. Before I worry about the doorknob, I want to study the shape of man’s hand and what man is capable of doing with it,” I would say, “Fine.” He would come back and say, “The more I thought about it, there’s a fit problem. What I want to study first is how metal is formed, what the technologies are for making things with metal in order that I can know what the real parameters are for fitting the hand.” “Fine.” But then he says, “You know I’ve been looking at metal-forming and it all depends on metallurgical properties. I really want to spend three or four months looking at metallurgy so that I can understand the problem better.” “Fine.” After three months, he’ll come back and say, “Mr. Eberhard, the more I look at metallurgy, the more I realize that it is atomic structure that’s really at the heart of this problem.” And so, our physical designer is in atomic physics from the doorknob.

That is one of our anxieties, the hierarchical nature of complexity.

Illustration of Dr. Eberhard’s design dilemma of escalation and regression in today’s complex systems (author: Alëna Iouguina)

Now we must not forget that Dr. Eberhard had his own complexity to deal with. From the perspective of escalation, he moves from being an H. sapiens in an urban habitat to a temperate climate zone of New York to our planet’s biosphere. And in another direction, Dr. Eberhard is composed of …

Dr. Eberhard has his own H. sapiens dilemma to deal with (author: Alëna Iouguina)

All these systems talk to each other. While Dr. Eberhard is dealing with his door handle, environmental agencies are trying to figure out how to reverse the harmful effects of out-of-control manufacturing systems on Earth’s biosphere. And the atoms of a door handle are peacefully swerving in the void along with the atoms of Dr. Eberhard’s hand.

Bridging the Distance Between Fundamental Rules and Final Phenomena

Years ago, still a design student, I peeked into John Buschek’s cozy office in connection with a project I was working on and ended up in one of his remarkably comfortable chairs, being questioned about the entire purpose of my project. He concluded:

“Designers often jump to a solution before ever asking a question of why the solution is needed in the first place. Don’t worry, scientists do it too.”

And he wasn’t talking about a product or service, but more about the fundamental question of the systems surrounding it. As a chemist, John engaged in many well-funded endeavours of “creating chemistry for the sake of creating chemistry”. He called it ‘mundane science’. There is an excellent paper by Daniel Kammen and Michael Dove titled The Virtues of Mundane Science:

The prejudice against research on mundane topics has created a conceptual “cordon sanitaire” within many disciplines. In energy and development research, it appears as a disproportionate focus on advanced combustion systems, commercial fuels, and large centralized power facilities, even though more than 3 billion people rely on wood, charcoal, and other biomass fuels for the bulk of their energy needs.

Major obstacles to developing sound environmental practices are not principally technological. Instead, the primary stumbling block is the lack of integrative approaches to complex systems and problems. A mundane example — efforts to improve wood and charcoal burning cookstoves — illustrates the important advances that are possible from integrating scientific, engineering, and social science research with very practical implementation programs.

Here it is: lack of integrative approaches to complex systems and problems. There is a great video narrated by Richard Feynman, titled Curiosity:

My favourite phrase from the video:

“So much distance between fundamental rules and final phenomena.”

We all know about emergent behaviour, spontaneous organization, and collective decisions. But what do we really know about the rich web of interactions within all these phenomena?

John Holland once elegantly compared an economic system to a natural one:

There is no master neuron in the brain nor is there a master cell within a developing embryo. If there is to be any coherent behaviour in the system, it has to arise from competition and cooperation among the agents themselves.

This is quite true in the economy as well: regardless of how much CEOs of companies are trying to cope with a stubborn recession, the overall behaviour of the economy is still the result of myriad economic decisions made every day by millions of individual people.

More importantly, it’s all about the never-ending reshuffling and rearrangement of said neurons and cells to ensure the resiliency of the system. Thus, an integrative approach and continuous questioning might open a designer’s mind to the world of heavy, complex problems that need solving, beyond the safe, easily accessible, peddled, and ubiquitous challenges that are so tempting to tackle quickly.

And the best part: the more you dive into a complex problem, the more you start connecting the dots, branching out, and converging again. This brings about a whole array of inspirations and allows a designer to truly understand the space they are designing for, instead of generalizing and assuming its purpose and function. And, of course, the more we learn, the more humble we become. And isn’t that what a designer should strive to be? As Richard Dawkins put it:

The world and the universe is an extremely beautiful place, and the more we understand about it the more beautiful does it appear. It is an immensely exciting experience to be born in the world, born in the universe, and look around you and realize that before you die you have the opportunity of understanding an immense amount about that world and about that universe and about life and about why we’re here.

We have the opportunity of understanding far, far more than any of our predecessors ever. That is such an exciting possibility, it would be such a shame to blow it and end your life not having understood what there is to understand.


After publishing this article, Timi Olotu wrote a thoughtful comment that turned out to be an eloquent summary to my often serpentine thoughts. So, I’m adding it here:

Your discussion of the anxiety attached with the concepts of infinite regression and escalation is basically a physical exploration of the metaphysical concept of an “existential crisis”.

An existential crisis, in other words, is a self-aware “system” attempting to identify which part of the existential spectrum it uniquely occupies… but failing and getting lost in either infinite regression or escalation. We identify systems as occupying a unique part of the existential spectrum based on where one system (or series of systems) ends and where another begins — i.e. part of knowing that a rock is a rock comes from knowing that it is not the same as sand (or some other thing).

This logic (in small doses) is also essential when solving complex problems of any kind […]. It’s a reminder for designers to neither live in blissful ignorance nor suffer a “design existential crisis” — to be aware of the complexity of systems but not get lost in them.

Finally, the art of murmuration, to illustrate the difference between a non-system and a system!
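The leaderless order of a murmuration can be sketched in a few lines of code. Below is a minimal, hypothetical boids-style simulation (the rule weights, radius, and step counts are arbitrary choices for illustration, not taken from any of the sources above): each agent steers only by its local neighbours via alignment, cohesion, and separation, yet a global order parameter rises with no master bird directing the flock.

```python
import math
import random

random.seed(42)
N, STEPS, WORLD = 50, 300, 100.0

# Each agent: [x, y, vx, vy] -- no leader, no global controller.
agents = [[random.uniform(0, WORLD), random.uniform(0, WORLD),
           random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def neighbours(a, flock, radius=20.0):
    """Agents within sensing range: the only information an agent uses."""
    return [b for b in flock
            if b is not a and math.hypot(a[0] - b[0], a[1] - b[1]) < radius]

def polarization(flock):
    """Order parameter: ~0 for disordered headings, 1.0 for a unified flock."""
    sx = sum(a[2] for a in flock)
    sy = sum(a[3] for a in flock)
    speeds = sum(math.hypot(a[2], a[3]) for a in flock)
    return math.hypot(sx, sy) / speeds

before = polarization(agents)
for _ in range(STEPS):
    updates = []
    for a in agents:
        near = neighbours(a, agents)
        vx, vy = a[2], a[3]
        if near:
            # Alignment: steer toward the neighbours' mean heading.
            mvx = sum(b[2] for b in near) / len(near)
            mvy = sum(b[3] for b in near) / len(near)
            vx += 0.2 * (mvx - vx)
            vy += 0.2 * (mvy - vy)
            # Cohesion: drift gently toward the neighbours' centre.
            cx = sum(b[0] for b in near) / len(near)
            cy = sum(b[1] for b in near) / len(near)
            vx += 0.005 * (cx - a[0])
            vy += 0.005 * (cy - a[1])
            # Separation: back away from very close neighbours.
            for b in near:
                d = math.hypot(a[0] - b[0], a[1] - b[1])
                if 0 < d < 3.0:
                    vx += (a[0] - b[0]) / d
                    vy += (a[1] - b[1]) / d
        s = math.hypot(vx, vy) or 1.0
        updates.append((vx / s, vy / s))      # constant speed
    for a, (vx, vy) in zip(agents, updates):
        a[2], a[3] = vx, vy
        a[0] = (a[0] + vx) % WORLD            # toroidal world, no walls
        a[1] = (a[1] + vy) % WORLD
after = polarization(agents)
print(f"order before: {before:.2f}  order after: {after:.2f}")
```

A pile of sawdust stays a pile whatever one grain does; here, remove the local interaction rules and the flock dissolves back into noise, which is exactly the system/non-system distinction in the footnote above.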


The best part about systems is, despite their complexity, they can be understood. Stay tuned for the follow-up article that will introduce methods of systems analysis and design.

Logical Levels

 

Source: Logical Levels

 

Logical Levels model

The concept of logical levels of learning and change was initially formulated as a mechanism in the behavioural sciences by anthropologist Gregory Bateson, based on the work of Bertrand Russell in logic and mathematics.  Robert Dilts first became acquainted with the notion of different logical types and levels of learning, change and communication whilst attending Gregory Bateson’s ecology of mind class at the University of California at Santa Cruz in 1976. Reflecting back, Robert says of attending Bateson’s classes, “These were one of the most transformative experiences of my life. I was sitting in his class, listening to his deep voice and distinctive Cambridge accent, which sounded to me like the voice of wisdom.”
The term “logical levels” was adapted by Dilts from Bateson’s work and refers to a hierarchy of levels of processes within an individual or group. The function of each level is to synthesise, organise and direct the interactions on the levels below it. Something on an upper level could “radiate” downward, facilitating change on the lower levels. Something on a lower level could, but would not necessarily, affect the upper levels.

The life of people in any system, and indeed the life of the system itself, can be described and understood on a number of different levels: environment, behaviour, capabilities, values and beliefs, identity and purpose.

Environment

At the most basic level, managing the process of change must address the environment in which a system and its members act and interact. The environment refers to everything outside yourself: the place in which you work, the economy, people around you: your business, your friends and family, your customers. It’s about finding the right time and the right place. Environmental factors determine the context and constraints under which people operate. An organisation’s environment, for instance, is made up of such things as the geographical locations of its operations, the building and facilities which define the ‘workplace’, office and factory design, etc. In addition to the influence these environmental factors may have on people within the organisation, one can also examine the influence and impact the people within an organisation have upon their environment, and what products or creations they bring to their environment.

Behaviour

At another level, we can examine the specific behaviours and actions of a group or individual i.e. what the person or organisation does within the environment. Behaviour is all about what you actually say and do, what you consciously get up to. It is part of what can be seen and heard by other people. What are the particular patterns of work, interaction and communication? On an organisational level, behaviours may be defined in terms of general procedures. On the individual level, behaviours take the form of specific work routines, working habits or job related activities.

Capabilities

Another level of process involves the strategies, skills and capabilities by which the organisation or individual selects and directs actions within their environment i.e. how they generate and guide their behaviours within a particular context. Capabilities are your talents and skills and are increasingly becoming known as competencies. They are the resources that you have available to you. These range from behaviours that you do without any seemingly conscious effort e.g. walking and talking, to skills that you’ve learned more consciously, e.g. riding a bike or working a computer. Capabilities include cognitive strategies and skills such as learning, memory, decision-making and creativity, which facilitate the performance of a particular behaviour or task. At an organisational level, capabilities relate to the infrastructure that is available to support communication, innovation, planning and decision-making between members of the organisation.

Beliefs and values

Our beliefs and values provide the reinforcement that supports or inhibits particular capabilities and behaviours. Beliefs determine how events are given meaning and are at the core of judgement and culture. These are the fundamental principles that shape our actions. This level contains statements about yourself, other people and situations that you hold to be true. They are emotionally held views, not based on facts. We all hold numerous beliefs and values, some of which are known to us and others of which sit outside our consciousness. Sometimes our beliefs reveal themselves when we talk to someone who holds a different belief and we find ourselves drawn to defend our own.

Values are the criteria against which you make decisions. These are the qualities that you hold to be important in the way you live your life. They are also the rules that keep us on the socially acceptable road.
Beliefs and values are unique to each individual. We may also place a different priority on a belief or a value than our friends, family or work colleagues do. Organisations also have beliefs and values, and seek to win employees over to sharing them.

Identity

Values and beliefs support the individual’s or organisation’s sense of identity, i.e. the who behind the why, how, what, where and when. Identity describes your sense of who you are and contains statements that describe how you think of yourself as a person. Our identity is like the trunk of a tree – it is the core of our being. Internally, our identities are supported by personal values, beliefs and capabilities as well as our physical being and our environment. Externally, our identity is expressed through our participation in the larger systems in which we take part: our family, professional relationships, community and the global system of which we are a member. A person’s identity is separate from their behaviour: you are more than what you do. For a company, the mission statement seeks to define the identity of the organisation.

Purpose

This is the final level that is sometimes referred to as a spiritual level. This term can have a religious connotation but this is not the only meaning here. This level has to do with people’s perceptions of the larger systems to which they belong and within which they participate. These perceptions relate to a person’s sense of, for whom and for what, their actions are directed, providing a sense of meaning and purpose for their actions, capabilities, beliefs and identity.

This level leads organisations to define their vision and ambition; their raison d’être.

In summary, the process of managing change must address several levels or factors:

  • Environmental factors determine the external opportunities or constraints that individuals and organisations must recognise and react to. They involve considering where and when the change is occurring.
  • Behavioural factors are the specific action steps taken in order to meet the desired state. They involve what, specifically, must be done or accomplished in order to appropriately manage change.
  • Capabilities relate to the mental maps, plans or strategies necessary for managing change. They direct how actions are selected and monitored.
  • Beliefs and values provide the reinforcement that supports or inhibits particular capabilities and actions. They relate to why a particular path is taken and the deeper motivations that drive people to act or persevere.
  • Identity factors relate to people’s sense of their role or mission. These factors are a function of who a person or group perceives themselves to be.
  • Purpose relates to people’s view of the larger system of which they are part. These factors involve for whom and for what a particular action step or path has been taken.

It is often easier to make change at the lower levels on the diagram than at the higher levels. The value of the model is that it provides a structured approach to help understand what is happening to an individual, a team or an organisation. A key aspect of the model is that it looks at the level of congruence an individual, team or organisation has across all the logical levels. When an individual is aligned, they are more likely to be described as comfortable in their own skin, charismatic, powerful and true to themselves. Organisations are more likely to be described as authentic, cohesive and consistent. Knowing which levels are out of alignment provides an individual or an organisation with the best way forward for change.
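The six-level hierarchy and the idea of checking congruence across levels can be sketched in code. To be clear, this is only an illustrative toy: the level names come from the model above, but the numeric scores and the comparison rule are inventions of this sketch, not anything prescribed by Dilts' model.

```python
# The six logical levels, ordered from lowest to highest.
LEVELS = ["environment", "behaviour", "capabilities",
          "beliefs_and_values", "identity", "purpose"]

def misalignments(assessment, gap=3):
    """Given a hypothetical per-level congruence score (0-10), flag any
    level that scores far below the level directly beneath it -- a crude
    stand-in for 'this level is out of alignment with the rest'."""
    flagged = []
    for lower, upper in zip(LEVELS, LEVELS[1:]):
        if assessment[upper] < assessment[lower] - gap:
            flagged.append(upper)
    return flagged

# A team whose environment, behaviour and capabilities look healthy,
# but whose beliefs and values lag behind (entirely made-up numbers).
team = {"environment": 8, "behaviour": 7, "capabilities": 7,
        "beliefs_and_values": 3, "identity": 6, "purpose": 6}
print(misalignments(team))  # -> ['beliefs_and_values']
```

The point of the sketch is the structure, not the scoring: change is proposed at whichever level the hierarchy flags, which mirrors the model's claim that knowing which levels are out of alignment suggests the best way forward.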

In Studio – Helsinki Design Lab – recipes for systemic change (free 336-page book)

Helsinki Design Lab helps government see the ‘architecture of problems.’ We assist decision-makers to view challenges from a big-picture perspective, and provide guidance toward more complete solutions that consider all aspects of a problem. Our mission is to advance this way of working. We call it strategic design.

Source: In Studio – Helsinki Design Lab

PDF [Creative Commons Attribution-NonCommercial-ShareAlike 2.0 licence]

Click to access In_Studio-Recipes_for_Systemic_Change.pdf

In Studio: Recipes for Systemic change

The full book is available for download as PDF (12mb).

This book explores the HDL Studio Model, a unique way of bringing together the right people, a carefully framed problem, a supportive place, and an open-ended process to craft an integrated vision and sketch the pathway towards strategic improvement. It’s particularly geared towards problems that have no single owner.

It includes an introduction to Strategic Design, a “how-to” manual for organizing Studios, and three practical examples of what an HDL Studio looks like in action. Geoff Mulgan, CEO of NESTA, has written the foreword and Mikko Kosonen, President of Sitra, contributed the afterword.

Release | Media effects | Special Issue of Cybernetics and Human Knowing

Prof Steffen Roth

Heidingsfelder, M. and Roth, S. (2018) Media effects. Special Issue of Cybernetics and Human Knowing, Vol. 25 No. 4.

Contents


Foreword: Media Effects (M. Heidingsfelder)

Contingency Alert: Editorial Note on Necessary and Impossible Media (S. Roth)

Articles

The Mediality of Looseness (U. Stäheli)

Listening to Media in Cultural Theory, Sociology, and Management (D. Baecker)

Digitality with a Medium of Communication: With a Focus on Organizations as Systems of Decision-Making (A. Brosziewski)

Mousing, Swiping, Thinking: Magical Conquest Techniques in the Context of Electronic Communications Media (P. Fuchs)

New Media and Socio-Cultural Formations (J. Fuhse)

Regular Features

Column: Virtual Logic-The Erdos Machine (L. Kauffman)

Photo credit: Stefan M. Seydel, dfdu.org.


Productive Organisational Paradoxes – Ivo Velitchkov

 

 

This was Ivo’s presentation at the SCiO open event in London last week (see www.scio.org.uk/events for more) – one of those real brain workouts he’s so good at. I’m not sure how well others can follow from the slides alone, but depending on how the edit comes out, SCiO members and/or those following his blog – I think http://www.strategicstructures.com/ but maybe https://eavoices.com/author/ivo/ – should get the most out of it.

 

 

 

Also well worth a look from the Strategic Structures website is

What can Social Systems Theory bring to the VSM?

In 2015, when the Metaphorum was in Hull, I tried to kick off a discussion about potential contributions from cognitive science, and particularly from the Enactive school. I shared some insights and hinted at other possibilities. This year the Metaphorum conference was in Germany for the first time. It was organised by Mark Lambertz and hosted by Sipgate in Düsseldorf. I saw in the fact that the Metaphorum was in Germany a good opportunity to suggest another combination, this time with the Social Systems Theory of Niklas Luhmann.

These are the slides from my talk and here you can also watch them with all animations.


Power of Community Summit Feb 1-10, 2019

Interesting speakers

 

Source: Power of Community Summit Feb 1-10, 2019

 

Power of Community Summit
Climate Change and Consciousness

Feb 1-10, 2019

Are you interested in hosting or joining a Hub for the Summit? This is an opportunity to follow the online Power of Community Summit with a group, friends or community.  In addition to signing up with your email for general information, please click on the button below to learn more about hubs. Thank you!

By registering, you will receive regular information about the Summit and daily information and links to interviews as it begins. As a gift, we will send you a copy of “Ecovillage: 1001 Ways to Heal the Planet.” Further information about the newsletter, registration, and data protection are provided in the data policy declaration.

Your data are safe with us!
If you have not received a confirmation mail from us, please use the following email:  summit@ecovillage.org

Sign up and get the bonus gift “Ecovillage: 1001 Ways to Heal the Planet” for free by email.

This Summit is just right for you if…

  • You are seeking inspiration and hope for the future facing today’s climate crisis.
  • You want to join others in right action by creating hubs of consciousness worldwide.
  • You want to reduce your carbon footprint while connecting with thought leaders worldwide.
  • You have a busy schedule and limited extra time.
  • You believe in the power of community, but you don’t really know where to start building or you want more tools to connect more deeply.
  • You sense at a deep level that solutions are urgently needed to regenerate our planet.
  • You want to join an established network of living laboratories catalysing multidimensional sustainability today.


My questions and some answers…

  • Are you looking for inspiration and hope?
  • Join the Global Ecovillage Network, world-renowned speakers and your neighbours today to build bridges and explore the leading edge of climate consciousness.
  • Would you like to explore ways to regenerate our planet?
  • Together we will create learning hubs, building bridges and exploring the power of community to co-create a sustainable future.
  • Join us in this free opportunity!

How it works

Sign up here on the free Summit page

Confirm your email address in your mailbox (be sure to check spam folder)

You will always receive further information by email

When and where is the Summit held?

The Summit takes place online in your home. From Feb 1-10, 2019 you will receive an email every day with links to the respective interviews. You can watch an interview for 24 hours free of charge on your computer, mobile phone, laptop or tablet.

After the 10 days of the Summit you can stay connected or unsubscribe again.

If you want to see the interviews and extras permanently at any time of your choosing after the Summit, then you have the opportunity to buy the Summit package.

If your registration did not work

Write an email to  summit@ecovillage.org and we’ll add you to the attendance list.

“You are invited not only to bring this inspiration into your own life but also to form hubs around you to share what inspiration you find here within a growing network. We have a rich experience in the Global Ecovillage Network of the power of community to expand our consciousness and transform the world around us. Join us!”


When You Meet the Monster, Anoint Its Feet by Bayo Akomolafe — Emergence Magazine

I thought this was very interesting

 

Source: When You Meet the Monster, Anoint Its Feet — Emergence Magazine

 

Illustrations by Jia Sung

ESSAY

When You Meet the Monster, Anoint Its Feet

In the age of the Anthropocene and entrenched politics of whiteness, Bayo Akomolafe brings us face-to-face with our own unresolved ancestry, as it becomes more and more apparent that we are completely entwined with each other and the natural world.

A stunning invitation is in the air, urging us to rethink ourselves, our bodies, our hopes for justice, and how we respond to the politics of whiteness. In these times of painful displacements, unavertable crises, and unexpected entanglements (the Anthropocene), the logic of race and identity collides with genetic technologies and splinters into new emergent insights into how bodies come to be enfleshed—granting us hope for becoming otherwise.

The story I write here might have a neat beginning and an ending, but this story is really about the middle-ing space that gives birth to beginnings and endings. To be sure, it is about a good number of things—about race and racism, about black bodies, about the exterminations perpetrated in the name of superiority, about healing and decolonization, and about technology. And yet, it is at heart a letter about middles—not mathematical middles or the morality of balance in the way we often strive to find the golden mean between two extremes, but about how things interpenetrate each other, and how that leads us to interesting places. The middle I speak of is not halfway between two poles; it is a porousness that mocks the very idea of separation.

This is a tale about the brilliant betweenness that defeats everything, corrodes every boundary, spills through marked territory, and crosses out every confident line. The Yolngu people of Arnhem Land in northeastern Australia have a name for this “brilliance”: bir’yun (meaning “brilliance” or “shimmer”). It refers to a Yolngu aesthetic that is effected in paintings by crosshatching patterns and lines, which leave an optical impression of a shimmer. Bir’yun, more than just an artistic technique, speaks of ancestry cutting into the present, identities queered, tongues rendered unintelligible, and im/possibilities opening up. Bir’yun speaks of middles. And everything dies and begins in the middle.

When I was a child, I heard a story of beginnings from our Yoruba traditions about how the world came to be: they say there were once primal seas and raging waters below—and no land mass to counter their fury. Up above, the sky churned with the politics of a restless pantheon of Òrìshàs, non-human mythical beings who lived before humans. Olókun ruled the waters, and Olodumare—supreme above all—ruled the heavens. Between them, there was nothing. But, you see, “nothing” is never really as empty as some might think.

Obatálá, son of Olodumare—curious, restless, and uneasy with endless bliss—was inspired to create a people and the land they would rest on. With Olodumare’s blessings, he took leave of heavenly places and made his way down to the waters to begin his task. Just before he made his way, Obatálá consulted with Orunmila, Òrìshà of prophecy, who told him that he must prepare a chain of gold; gather palm nuts, with which he might hold the sand to be thrown over the waters; and obtain a sacred egg which contained a bird that would come in handy along the way. Obatálá did as instructed and secured these items. At the moment of departure, he fastened the golden chain to the sky and climbed down.

Can you take an instant to visualize this event? Imagine it for a moment: sky and swirling blue traversed by a shimmering chain that irrevocably and rudely links the heavens to the terrestrial, the divine to the mundane, the transcendent to the immanent, the infinite to the finite, nature to culture, masculine to feminine, beginnings to endings, unsettling both, re-configuring both just as well. In a sense, Obatálá’s epic adventure recreated everything.

On Obatálá’s golden chain, poised in the grand between, hangs not just a riveting account of beginnings-that-are-not-originary (or “middles”), but a figure of shocking intersections or transversal happenings—a figure that is particularly alive and much needed right now. This chain—like Obatálá’s golden chain—disturbs everything, remakes everything … rethinks everything. Its helixes weave together new practices that open up new considerations about how to ask questions related to identity and racial justice. I speak of deoxyribonucleic acid, or DNA.

Continues in source: When You Meet the Monster, Anoint Its Feet — Emergence Magazine

 

 

Systems Innovation 2019 – Complexity Labs – 30-31 March 2019, Barcelona

This should be fun – also your humble curator is speaking there 🙂

Source: Systems Innovation 2019 – Complexity Labs

Systems Innovation 2019

Overview

Systems Innovation 2019 will be a conference on the topics of complexity thinking and systems change, taking place in Barcelona at the end of March 2019. It is an open forum for organizations and individuals applying complexity and systems thinking across the economy, society, technology, and the environment to enable systems innovation and change. It is for anyone who feels that complexity or systems thinking is central to what they do and who wishes to engage with peer organizations and individuals in open discussion and exchange of ideas.

The event will bring together up to 10-20 organizations and some 100 individual participants, for an active weekend of presentations, panel discussions and brainstorming sessions on applying the ideas of complexity and systems thinking. The forum will be an opportunity to meet and exchange ideas and perspectives with others in an open space, to foster collaboration and awareness across the community, to hear from our speakers and brainstorm on specific issues of interest to participants.

Basic Info

What

This will be a weekend conference of presentations, workshops and open discussion sessions

Who

This is for those interested in applying complexity thinking to tackling real-world issues and enabling systems change.

Where

The event will take place at Spaces co-working, located in the @22 innovation district in central Barcelona.

When

Two full-day events set for the weekend of 30–31 March.

Activities

Bring your questions, imagination, and enthusiasm for a topic of your interest, because this will not be a passive weekend break but a user-generated event. We are mindful that flying people around costs the planet, as well as time, energy, and money, so we expect everyone to engage actively in creating an event with real value and outcomes that move the discussion on systems innovation forward. We have designed the conference to facilitate active engagement from all parties.

#sibcn

Wardley maps

Not strictly *systems*, but a genuinely interesting and useful tool concerned, I’d say, with spotting, recognizing, and exploiting patterns.

Wardley map – Wikipedia

Other items:

Simon himself (slides and speaker notes): https://www.slideshare.net/swardley/an-introduction-to-wardley-maps

What do Wardley maps really map? A settler writes

https://blog.gardeviance.org/2015/02/an-introduction-to-wardley-value-chain.html

And now an interesting though not yet fully integrated pairing with Cynefin:

View at Medium.com

Creating greatness in the realm beyond systems thinking – Jack Martin Leith

“Left field consultant” Jack Martin Leith (Jack Martin Leith | Special Projects | Leith SP) is a curator of content after my own heart, and contributes his own content too.

This piece – pdf – http://jackmartinleith.com/documents/creating-greatness-in-the-realm-beyond-systems-thinking.pdf – builds on his personal biography (a useful reminder of what co-curator here, David Ing, says: there is not so much one ‘systems thinking’ as there are constellations of thinking, learning, reading, personality, and influence around people). I draw particular attention to this because I usually give pretty short shrift to ‘post-systemic’ material, which typically assumes that ‘systems thinkers’ work in deterministic/mechanistic terms (and those who claim this often do so themselves), and which occasionally points to ways that ‘systems in the mind’ can mislead us about ‘systems in the world’.

However, this piece gives a nice perspective on ‘post-systemic thinking’ which opens up some interesting possibilities.


Michael Feathers – The Universality of Postel’s Law

 

Source: Michael Feathers – The Universality of Postel’s Law

The Universality of Postel’s Law

Michael Feathers

September 21, 2015

UK Systems Society conference announced – 24 JUNE 2019, Bournemouth University

This came in from Pauline Roberts on behalf of UKSS:

Date: Monday 24th June 2019

Location: Executive Business Centre, Bournemouth University

This is a ‘hold the date’ message. Further details will be released as they are finalised. We look forward to seeing you there.

Indianapolis Systems Thinking Forum, March 29-30, presented by the Waters Foundation

 

Source: Institute Overview


The Indianapolis Systems Thinking Forum is a unique opportunity to come together to learn and share experiences using systems thinking in districts, schools, classrooms, businesses, communities, and more. Participants will have the opportunity to grow their systems thinking capacity and make valuable connections to many other systems in the region with the shared goal of fostering an environment that promotes success for young people and adults.


Forum Information

 

Dates:

  • One-Day Workshop Option, March 29
  • Two-Day Option (workshop and collaborative session), March 29 & 30

Lunch will be provided both days.

 

Venue: Butler University, Indianapolis, Indiana

 

Itinerary:

  • March 29: Fundamentals of Systems Thinking Workshop
  • March 30: Systems Thinking: Reflection and Collaborative Planning

For more detailed information, visit the Itinerary page. 

 

Who Should Attend?

  • Educators (PK-12, College and University)
  • District and School Administrators
  • Business Leaders
  • Government Leaders
  • Community Members
  • Youth Program Leaders/Facilitators
  • Students (High school and University)
  • Anyone interested in systems thinking and its role in education, community and business

 

Cost:

  • March 29 Only: $200/person
  • March 29 & 30: $250/person

All supplies and materials included.

Register now! Space is limited.


Download and share the Indianapolis Forum Flier.

 

Stay in the know! Email us to receive regular updates and to be added to the event mailing list.

 

Can Machines Be Conscious? – IEEE Spectrum 2008 – Koch and Tononi

 

Source: Can Machines Be Conscious? – IEEE Spectrum

Can Machines Be Conscious?

Yes—and a new Turing test might prove it

This is part of IEEE Spectrum’s SPECIAL REPORT: THE SINGULARITY

Would you sell your soul on eBay? Right now, of course, you can’t. But in some quarters it is taken for granted that within a generation, human beings—including you, if you can hang on for another 30 years or so—will have an alternative to death: being a ghost in a machine. You’ll be able to upload your mind—your thoughts, memories, and personality—to a computer. And once you’ve reduced your consciousness to patterns of electrons, others will be able to copy it, edit it, sell it, or pirate it. It might be bundled with other electronic minds. And, of course, it could be deleted.

That’s quite a scenario, considering that at the moment, nobody really knows exactly what consciousness is. Pressed for a pithy definition, we might call it the ineffable and enigmatic inner life of the mind. But that hardly captures the whirl of thought and sensation that blossoms when you see a loved one after a long absence, hear an exquisite violin solo, or relish an incredible meal. Some of the most brilliant minds in human history have pondered consciousness, and after a few thousand years we still can’t say for sure if it is an intangible phenomenon or maybe even a kind of substance different from matter. We know it arises in the brain, but we don’t know how or where in the brain. We don’t even know if it requires specialized brain cells (or neurons) or some sort of special circuit arrangement of them.

Nevertheless, some in the singularity crowd are confident that we are within a few decades of building a computer, a simulacrum, that can experience the color red, savor the smell of a rose, feel pain and pleasure, and fall in love. It might be a robot with a “body.” Or it might just be software—a huge, ever-changing cloud of bits that inhabit an immensely complicated and elaborately constructed virtual domain.

We are among the few neuroscientists who have devoted a substantial part of their careers to studying consciousness. Our work has given us a unique perspective on what is arguably the most momentous issue in all of technology: whether consciousness will ever be artificially created.

We think it will—eventually. But perhaps not in the way that the most popular scenarios have envisioned it.

Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality. That’s good news, because it means there’s no reason why consciousness can’t be reproduced in a machine—in theory, anyway.

In humans and animals, we know that the specific content of any conscious experience—the deep blue of an alpine sky, say, or the fragrance of jasmine redolent in the night air—is furnished by parts of the cerebral cortex, the outer layer of gray matter associated with thought, action, and other higher brain functions. If a sector of the cortex is destroyed by stroke or some other calamity, the person will no longer be conscious of whatever aspect of the world that part of the brain represents. For instance, a person whose visual cortex is partially damaged may be unable to recognize faces, even though he can still see eyes, mouths, ears, and other discrete facial features. Consciousness can be lost entirely if injuries permanently damage most of the cerebral cortex, as seen in patients like Terri Schiavo, who suffered from persistent vegetative state. Lesions of the cortical white matter, containing the fibers through which parts of the brain communicate, also cause unconsciousness. And small lesions deep within the brain along the midline of the thalamus and the midbrain can inactivate the cerebral cortex and indirectly lead to a coma—and a lack of consciousness.

To be conscious also requires the cortex and thalamus—the corticothalamic system—to be constantly suffused in a bath of substances known as neuromodulators, which aid or inhibit the transmission of nerve impulses. Finally, whatever the mechanisms necessary for consciousness, we know they must exist in both cortical hemispheres independently.

Much of what goes on in the brain has nothing to do with being conscious, however. Widespread damage to the cerebellum, the small structure at the base of the brain, has no effect on consciousness, despite the fact that more neurons reside there than in any other part of the brain. Neural activity obviously plays some essential role in consciousness but in itself is not enough to sustain a conscious state. We know that at the beginning of a deep sleep, consciousness fades, even though the neurons in the corticothalamic system continue to fire at a level of activity similar to that of quiet wakefulness.

Data from clinical studies and from basic research laboratories, made possible by the use of sophisticated instruments that detect and record neuronal activity, have given us a complex if still rudimentary understanding of the myriad processes that give rise to consciousness. We are still a very long way from being able to use this knowledge to build a conscious machine. Yet we can already take the first step in that long journey: we can list some aspects of consciousness that are not strictly necessary for building such an artifact.

Remarkably, consciousness does not seem to require many of the things we associate most deeply with being human: emotions, memory, self-reflection, language, sensing the world, and acting in it. Let’s start with sensory input and motor output: being conscious requires neither. We humans are generally aware of what goes on around us and occasionally of what goes on within our own bodies. It’s only natural to infer that consciousness is linked to our interaction with the world and with ourselves.

Yet when we dream, for instance, we are virtually disconnected from the environment—we acknowledge almost nothing of what happens around us, and our muscles are largely paralyzed. Nevertheless, we are conscious, sometimes vividly and grippingly so. This mental activity is reflected in electrical recordings of the dreaming brain showing that the corticothalamic system, intimately involved with sensory perception, continues to function more or less as it does in wakefulness.

Neurological evidence points to the same conclusion. People who have lost their eyesight can both imagine and dream in images, provided they had sight earlier in their lives. Patients with locked-in syndrome, which renders them almost completely paralyzed, are just as conscious as healthy subjects. Following a debilitating stroke, the French editor Jean-Dominique Bauby dictated his memoir, The Diving Bell and the Butterfly, by blinking his left eye. Stephen Hawking is a world-renowned physicist, best-selling author, and occasional guest star on “The Simpsons,” despite being immobilized from a degenerative neurological disorder.

So although being conscious depends on brain activity, it does not require any interaction with the environment. Whether the development of consciousness requires such interactions in early childhood, though, is a different matter.

How about emotions? Does a conscious being need to feel and display them? No: being conscious does not require emotion. People who’ve suffered damage to the frontal area of the brain, for instance, may exhibit a flat, emotionless affect; they are as dispassionate about their own predicament as they are about the problems of people around them. But even though their behavior is impaired and their judgment may be unsound, they still experience the sights and sounds of the world much the way normal people do.

Primal emotions like anger, fear, surprise, and joy are useful and perhaps even essential for the survival of a conscious organism. Likewise, a conscious machine might rely on emotions to make choices and deal with the complexities of the world. But it could be just a cold, calculating engine—and yet still be conscious.

Psychologists argue that consciousness requires selective attention—that is, the ability to focus on a given object, thought, or activity. Some have even argued that consciousness is selective attention. After all, when you pay attention to something, you become conscious of that thing and its properties; when your attention shifts, the object fades from consciousness.

Nevertheless, recent evidence favors the idea that a person can consciously perceive an event or object without paying attention to it. When you’re focused on a riveting movie, your surroundings aren’t reduced to a tunnel. You may not hear the phone ringing or your spouse calling your name, but you remain aware of certain aspects of the world around you. And here’s a surprise: the converse is also true. People can attend to events or objects—that is, their brains can preferentially process them—without consciously perceiving them. This fact suggests that being conscious does not require attention.

One experiment that supported this conclusion found that, as strange as it sounds, people could pay attention to an object that they never “saw.” Test subjects were shown static images of male and female nudes in one eye and rapidly flashing colored squares in the other eye. The flashing color rendered the nudes invisible—the subjects couldn’t even say where the nudes were in the image. Yet the psychologists showed that subjects nevertheless registered the unseen image if it was of the opposite sex.

What of memory? Most of us vividly remember our first kiss, our first car, or the images of the crumbling Twin Towers on 9/11. This kind of episodic memory would seem to be an integral part of consciousness. But the clinic tells us otherwise: being conscious does not require either explicit or working memory.

In 1953, an epileptic man known to the public only as H.M. had most of his hippocampus and neighboring regions on both sides of the brain surgically removed as an experimental treatment for his condition. From that day on, he couldn’t acquire any new long-term memories—not of the nurses and doctors who treated him, his room at the hospital, or any unfamiliar well-wishers who dropped by. He could recall only events that happened before his surgery. Such impairments, though, didn’t turn H.M. into a zombie. He is still alive today, and even if he can’t remember events from one day to the next, he is without doubt conscious.

The same holds true for the sort of working memory you need to perform any number of daily activities—to dial a phone number you just looked up or measure out the correct amount of crushed thyme given in the cookbook you just consulted. This memory is called dynamic because it lasts only as long as neuronal circuits remain active. But as with long-term memory, you don’t need it to be conscious.

Self-reflection is another human trait that seems deeply linked to consciousness. To assess consciousness, psychologists and other scientists often rely on verbal reports from their subjects. They ask questions like “What did you see?” To answer, a subject conjures up an image by “looking inside” and recalling whatever it was that was just viewed. So it is only natural to suggest that consciousness arises through your ability to reflect on your perception.

As it turns out, though, being conscious does not require self-reflection. When we become absorbed in some intense perceptual task—such as playing a fast-paced video game, swerving on a motorcycle through moving traffic, or running along a mountain trail—we are vividly conscious of the external world, without any need for reflection or introspection.

Neuroimaging studies suggest that we can be vividly conscious even when the front of the cerebral cortex, involved in judgment and self-representation, is relatively inactive. Patients with widespread injury to the front of the brain demonstrate serious deficits in their cognitive, executive, emotional, and planning abilities. But they appear to have nearly intact perceptual abilities.

Finally, being conscious does not require language. We humans affirm our consciousness through speech, describing and discussing our experiences with one another. So it’s natural to think that speech and consciousness are inextricably linked. They’re not. There are many patients who lose the ability to understand or use words and yet remain conscious. And infants, monkeys, dogs, and mice cannot speak, but they are conscious and can report their experiences in other ways.

So what about a machine? We’re going to assume that a machine does not require anything to be conscious that a naturally evolved organism—you or me, for example—doesn’t require. If that’s the case, then, to be conscious a machine does not need to engage with its environment, nor does it need long-term memory or working memory; it does not require attention, self-reflection, language, or emotion. Those things may help the machine survive in the real world. But to simply have subjective experience—being pleased at the sight of wispy white clouds scurrying across a perfectly blue sky—those traits are probably not necessary.

So what is necessary? What are the essential properties of consciousness, those without which there is no experience whatsoever?

We think the answer to that question has to do with the amount of integrated information that an organism, or a machine, can generate. Let’s say you are facing a blank screen that is alternately on or off, and you have been instructed to say “light” when the screen turns on and “dark” when it turns off. Next to you, a photodiode—one of the very simplest of machines—is set up to beep when the screen emits light and to stay silent when the screen is dark. The first problem that consciousness poses boils down to this: both you and the photodiode can differentiate between the screen being on or off, but while you can see light or dark, the photodiode does not consciously “see” anything. It merely responds to photons.

The key difference between you and the photodiode has to do with how much information is generated when the differentiation between light and dark is made. Information is classically defined as the reduction of uncertainty that occurs when one among many possible outcomes is chosen. So when the screen turns dark, the photodiode enters one of its two possible states; here, a state corresponds to one bit of information. But when you see the screen turn dark, you enter one out of a huge number of states: seeing a dark screen means you aren’t seeing a blue, red, or green screen, the Statue of Liberty, a picture of your child’s piano recital, or any of the other uncountable things that you have ever seen or could ever see. To you, “dark” means not just the opposite of light but also, and simultaneously, something different from colors, shapes, sounds, smells, or any mixture of the above.

So when you look at the dark screen, you rule out not just “light” but countless other possibilities. You don’t think of the stupefying number of possibilities, of course, but their mere existence corresponds to a huge amount of information.
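The classical reduction-of-uncertainty definition can be made concrete with a toy calculation (my own sketch, not from the article; the million-state repertoire for a human observer is an illustrative assumption, and a wild underestimate at that):

```python
import math

def bits_generated(num_states: int) -> float:
    """Information (in bits) generated by selecting one outcome
    out of num_states equally likely possibilities."""
    return math.log2(num_states)

# The photodiode's repertoire: exactly two states, light or dark.
photodiode_bits = bits_generated(2)        # 1.0 bit

# A human seeing "dark" rules out a vast repertoire of possible
# percepts; even an assumed million discriminable states yields
# far more information per discrimination.
human_bits = bits_generated(1_000_000)     # ~19.93 bits
```

The point of the toy model is that the same binary discrimination (light vs. dark) generates very different amounts of information depending on the size of the repertoire it is drawn from.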

Conscious experience consists of more than just differentiating among many states, however. Consider an idealized 1-megapixel digital camera. Even if each photodiode in the imager were just binary, the number of different patterns that imager could record is 2^1,000,000. Indeed, the camera could easily enter a different state for every frame from every movie that was or could ever be produced. It’s a staggering amount of information. Yet the camera is obviously not conscious. Why not?

We think that the difference between you and the camera has to do with integrated information. The camera can indeed be in any one of an absurdly large number of different states. However, the 1-megapixel sensor chip isn’t a single integrated system but rather a collection of one million individual, completely independent photodiodes, each with a repertoire of two states. And a million photodiodes are collectively no smarter than one photodiode.

By contrast, the repertoire of states available to you cannot be subdivided. You know this from experience: when you consciously see a certain image, you experience that image as an integrated whole. No matter how hard you try, you cannot divvy it up into smaller thumbnail images, and you cannot experience its colors independently of the shapes, or the left half of your field of view independently of the right half. Underlying this unity is a multitude of causal interactions among the relevant parts of your brain. And unlike chopping up the photodiodes in a camera sensor, disconnecting the elements of your brain that feed into consciousness would have profoundly detrimental effects.

To be conscious, then, you need to be a single integrated entity with a large repertoire of states. Let’s take this one step further: your level of consciousness has to do with how much integrated information you can generate. That’s why you have a higher level of consciousness than a tree frog or a supercomputer.
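The camera-versus-brain contrast can be caricatured in a few lines of Python. This is my own minimal sketch using plain mutual information between two parts of a system, not IIT’s actual Φ computation; it only illustrates the independence argument: when parts do not constrain one another, the whole carries nothing beyond its parts.

```python
import math
from collections import Counter
from itertools import product

def entropy(dist):
    """Shannon entropy (bits) of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def mutual_information(joint):
    """I(X;Y) in bits from a dict {(x, y): probability}."""
    px, py = Counter(), Counter()
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return entropy(px.values()) + entropy(py.values()) - entropy(joint.values())

# Two camera-like "photodiodes": each fair, completely independent.
camera = {(x, y): 0.25 for x, y in product([0, 1], repeat=2)}

# Two coupled elements: the second always mirrors the first.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

mutual_information(camera)   # 0.0 bits: the whole adds nothing to the parts
mutual_information(coupled)  # 1.0 bit: the whole exceeds its parts
```

Scaled up, the million independent photodiodes of the sensor chip factor exactly into one-bit pieces with zero information shared between them, which is why a huge repertoire alone is not enough.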

It is possible to work out a theoretical framework for gauging how effective different neural architectures would be at generating integrated information and therefore attaining a conscious state. This framework, the integrated information theory of consciousness, or IIT, is grounded in the mathematics of information and complexity theory and provides a specific measure of the amount of integrated information generated by any system comprising interacting parts. We call that measure Φ and express it in bits. The larger the value of Φ, the larger the entity’s conscious repertoire. (For students of information theory, Φ is an intrinsic property of the system, and so it is different from the Shannon information that can be sent through a channel.)

IIT suggests a way of assessing consciousness in a machine—a Turing Test for consciousness, if you will. Other attempts at gauging machine consciousness, or at least intelligence, have fallen short. Carrying on an engaging conversation in natural language or playing strategy games were at various times thought to be uniquely human attributes. Any machine that had those capabilities would also have a human intellect, researchers once thought. But subsequent events proved them wrong—computer programs such as the chatterbot ALICE and the chess-playing supercomputer Deep Blue, which famously bested Garry Kasparov in 1997, demonstrated that machines can display human-level performance in narrow tasks. Yet none of those inventions displayed evidence of consciousness.

Scientists have also proposed that displaying emotion, self-recognition, or purposeful behavior are suitable criteria for machine consciousness. However, as we mentioned earlier, there are people who are clearly conscious but do not exhibit those traits.

What, then, would be a better test for machine consciousness? According to IIT, consciousness implies the availability of a large repertoire of states belonging to a single integrated system. To be useful, those internal states should also be highly informative about the world.

One test would be to ask the machine to describe a scene in a way that efficiently differentiates the scene’s key features from the immense range of other possible scenes. Humans are fantastically good at this: presented with a photo, a painting, or a frame from a movie, a normal adult can describe what’s going on, no matter how bizarre or novel the image is.

Consider the following response to a particular image: “It’s a robbery—there’s a man holding a gun and pointing it at another man, maybe a store clerk.” Asked to elaborate, the person could go on to say that it’s probably in a liquor store, given the bottles on the shelves, and that it may be in the United States, given the English-language newspaper and signs. Note that the exercise here is not to spot as many details as one can but to discriminate the scene, as a whole, from countless others.

So this is how we can test for machine consciousness: show it a picture and ask it for a concise description [see photos, “A Better Turing Test”]. The machine should be able to extract the gist of the image (it’s a liquor store) and what’s happening (it’s a robbery). The machine should also be able to describe which objects are in the picture and which are not (where’s the getaway car?), as well as the spatial relationships among the objects (the robber is holding a gun) and the causal relationships (the other man is holding up his hands because the bad guy is pointing a gun at him).

The machine would have to do as well as any of us to be considered as conscious as we humans are—so that a human judge could not tell the difference—and not only for the robbery scene but for any and all other scenes presented to it.

No machine or program comes close to pulling off such a feat today. In fact, image understanding remains one of the great unsolved problems of artificial intelligence. Machine-vision algorithms do a reasonable job of recognizing ZIP codes on envelopes or signatures on checks and at picking out pedestrians in street scenes. But deviate slightly from these well-constrained tasks and the algorithms fail utterly.

Very soon, computer scientists will no doubt create a program that can automatically label thousands of common objects in an image—a person, a building, a gun. But that software will still be far from conscious. Unless the program is explicitly written to conclude that the combination of man, gun, building, and terrified customer implies “robbery,” the program won’t realize that something dangerous is going on. And even if it were so written, it might sound a false alarm if a 5-year-old boy walked into view holding a toy pistol. A sufficiently conscious machine would not make such a mistake.

What is the best way to build a conscious machine? Two complementary strategies come to mind: either copying the mammalian brain or evolving a machine. Research groups worldwide are already pursuing both strategies, though not necessarily with the explicit goal of creating machine consciousness.

Though both of us work with detailed biophysical computer simulations of the cortex, we are not optimistic that modeling the brain will provide the insights needed to construct a conscious machine in the next few decades. Consider this sobering lesson: the roundworm Caenorhabditis elegans is a tiny creature whose brain has 302 nerve cells. Back in 1986, scientists used electron microscopy to painstakingly map its roughly 6000 chemical synapses and its complete wiring diagram. Yet more than two decades later, there is still no working model of how this minimal nervous system functions.

Now scale that up to a human brain with its 100 billion or so neurons and a couple hundred trillion synapses. Tracing all those synapses one by one is close to impossible, and it is not even clear whether it would be particularly useful, because the brain is astoundingly plastic, and the connection strengths of synapses are in constant flux. Simulating such a gigantic neural network model in the hope of seeing consciousness emerge, with millions of parameters whose values are only vaguely known, will not happen in the foreseeable future.

A more plausible alternative is to start with a suitably abstracted mammal-like architecture and evolve it into a conscious entity. Sony’s robotic dog, Aibo, and its humanoid, Qrio, were rudimentary attempts; they operated under a large number of fixed but flexible rules. Those rules yielded some impressive, lifelike behavior—chasing balls, dancing, climbing stairs—but such robots have no chance of passing our consciousness test.

So let’s try another tack. At MIT, computational neuroscientist Tomaso Poggio has shown that vision systems based on hierarchical, multilayered maps of neuronlike elements perform admirably at learning to categorize real-world images. In fact, they rival the performance of state-of-the-art machine-vision systems. Yet such systems are still very brittle. Move the test setup from cloudy New England to the brighter skies of Southern California and the system’s performance suffers. To begin to approach human behavior, such systems must become vastly more robust; likewise, the range of what they can recognize must increase considerably to encompass essentially all possible scenes.

Contemplating how to build such a machine will inevitably shed light on scientists’ understanding of our own consciousness. And just as we ourselves have evolved to experience and appreciate the infinite richness of the world, so too will we evolve constructs that share with us and other sentient animals the most ineffable, the most subjective of all features of life: consciousness itself.

About the Authors

CHRISTOF KOCH is a professor of cognitive and behavioral biology at Caltech.

GIULIO TONONI is a professor of psychiatry at the University of Wisconsin, Madison. In “Can Machines Be Conscious?,” the two neuroscientists discuss how to assess synthetic consciousness. Koch became interested in the physical basis of consciousness while suffering from a toothache. Why should the movement of certain ions across neuronal membranes in the brain give rise to pain? he wondered. Or, for that matter, to pleasure or the feeling of seeing the color blue? Contemplating such questions determined his research program for the next 20 years.

To Probe Further

For more on the integrated information theory of consciousness, read the sidebar “A Bit of Theory: Consciousness as Integrated Information.” For a consideration of quantum computers and consciousness, read the sidebar “Do You Need a Quantum Computer to Achieve Machine Consciousness?”

The Association for the Scientific Study of Consciousness, of which Christof Koch is executive director and Giulio Tononi is president-elect, publishes the journal Psyche and holds an annual conference. This year the group will meet in Taipei from 19 to 22 June. See the ASSC Web site for more information.

For details on the neurobiology of consciousness, see The Quest for Consciousness by Christof Koch (Roberts, 2004), with a foreword by Francis Crick.

For more articles, videos, and special features, go to The Singularity Special Report

Chinese Room Argument | Internet Encyclopedia of Philosophy

 

Source: Chinese Room Argument | Internet Encyclopedia of Philosophy

Chinese Room Argument

The Chinese room argument is a thought experiment of John Searle (1980a) and associated (1984) derivation. It is one of the best known and most widely credited counters to claims of artificial intelligence (AI)—that is, to claims that computers do or at least can (someday might) think. According to Searle’s original presentation, the argument is based on two key claims: brains cause minds and syntax doesn’t suffice for semantics. Its target is what Searle dubs “strong AI.” According to strong AI, Searle says, “the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (1980a, p. 417). Searle contrasts strong AI with “weak AI.” According to weak AI, computers just simulate thought: their seeming understanding isn’t real understanding (just as-if), their seeming calculation is only as-if calculation, and so on. Nevertheless, computer simulation is useful for studying the mind (as for studying the weather and other things).

Table of Contents

  1. The Chinese Room Thought Experiment
  2. Replies and Rejoinders
    1. The Systems Reply
    2. The Robot Reply
    3. The Brain Simulator Reply
    4. The Combination Reply
    5. The Other Minds Reply
    6. The Many Mansions Reply
  3. Searle’s “Derivation from Axioms”
  4. Continuing Dispute
    1. Initial Objections & Replies
    2. The Connectionist Reply
  5. Summary Analysis
  6. Postscript
  7. References and Further Reading

Continues in source: Chinese Room Argument | Internet Encyclopedia of Philosophy
