If #AI is powerful at producing answers, what kind of system would be capable of asking better questions than the ones we currently know how to ask?
I missed Paul Pangaro, Jill Fain Lehman and Mike van de Wijnckel’s first ‘Re-Braiding Cybernetics and AI’ symposium because I was at #STSP26. But I’ve gone through the materials – it’s important.
Jill’s framing of the split itself is powerful: cybernetics took humans as an instance of self-organising systems, AI took humans as the self-organising system of concern. Tidy; enormous consequences. It changes what you think needs explaining, what you’re allowed to assume, what’s treated as background and what as the central problem. It’s where the trouble starts.
Once you narrow the question, you can get very good at modelling, classifying, predicting, generating and optimising, while tiptoeing away from purpose, observer, boundary, context and ethics. You can become extremely clever about the banana, even the stick, while losing interest in the cage, the shelf, the zookeeper, the audience, and the poor sod’s changing sense of what counts as freedom. Heinz von Foerster said, ‘cybernetics is not the banana’.
Pangaro’s opening definition: cybernetics is about information as feedback to effective action, and about purpose as something attributed by an observer. The observer is in the picture – and so, therefore, is responsibility. Intervention isn’t just technical. It’s ethical and political. A much bigger challenge than ‘can the machine do the task?’ It asks who is deciding what the task is, from which world, and with what consequences.
Mike’s thread through von Foerster and Pask was excellent. Self-organisation, in this lineage, is not a magical property or a managerial slogan. It depends on interaction, coalition, adaptation, evolving boundaries, and non-zero-sum conditions. The system can’t optimise itself into wisdom. It has to become viable through relationship. Critical for anyone working in #publicservices, where our biggest failures come from treating living systems as if they were machinery.
The re-braiding question isn’t mainly about AI research, it’s about institutional design.
That’s what’s missing from a lot of the current conversation. The strands for the next symposium are ‘representation’ and ‘process’. Fair enough. But the questions that really bite in public life are purpose, power, boundaries, legitimacy, worlds, and agency – as substance, not optional extras.
This could be a great project, not just for history of ideas, but to help us ask a better practical question.
What would it mean to place AI inside purposeful, accountable, learning systems rather than bolt it onto broken ones? What would it mean to design for judgement and discretion within boundaries, over time, rather than automate transactions and call it progress? What would it mean to build public systems that can see themselves, and change themselves, instead of becoming more efficient at doing the wrong thing righter?
Link to the project:
https://docs.google.com/document/d/1uXgkltowcKjmcLOyOQUKtv852gvq0zx1UlhT2zeabKU/edit?tab=t.0
Benjamin Taylor