The Phenomenon of Path Dependence in Child-Youth Sport

footblogball (Mark O'Sullivan)

 


One of the main tenets of human complexity is that, for better or worse, we find it hard to shake off history, leaving us vulnerable through time to an historical appeal that seemed perfectly logical at the time. For many years a dominant feature of child-youth football training has been an approach where a session would progress from an isolated drill with explicit demonstrations of how to execute the ‘correct’ technique (Williams & Hodges, 2005) to, eventually, a game with explicit feedback from the coach (O’Connor, Larkin, & Williams, 2018). As highlighted by McKay & O’Connor (2018), team invasion sport training sessions typically comprise deliberate, structured, sequential patterns and repetitive drills. This structured, prescriptive, coach-centered approach (Ford et al., 2010) has been the dominant paradigm in child-youth football coaching and can be described as a path dependency. The phenomenon of path dependence as highlighted by John Kiely (2017)…

View original post 779 more words

Incommensurability, plain difference and communication in interdisciplinary research

Integration and Implementation Insights

Community member post by Vincenzo Politi

Vincenzo Politi (biography)

Where does the term incommensurability come from? What is its relevance to interdisciplinarity? Is it more than plain difference? Does incommensurability need to be reconceptualized for interdisciplinarity?

Incommensurability: its origins and relevance to interdisciplinarity

‘Incommensurability’ is a term that philosophers of science have borrowed from mathematics. Two mathematical magnitudes are said to be incommensurable if their ratio cannot be expressed as a ratio of whole numbers, that is, if their ratio is not a rational number. For example, the circumference and the diameter of a circle are incommensurable because their ratio is the irrational number π.
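A classic worked example, added here for concreteness (it is not part of the excerpt): the side s and the diagonal d of a square are incommensurable. By Pythagoras,

\[
d^2 = s^2 + s^2 = 2s^2 \quad\Longrightarrow\quad \frac{d}{s} = \sqrt{2},
\]

and if \(\sqrt{2} = p/q\) for whole numbers p and q in lowest terms, then \(p^2 = 2q^2\) forces p to be even, and in turn q to be even, a contradiction. No common unit can therefore measure both the side and the diagonal.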

In philosophy of science, the term is used in a metaphorical sense: two competing scientific theories, paradigms or research projects are said to be incommensurable when there is no common ground for their rational comparison and choice. The effects of incommensurability become visible during debates surrounding scientific revolutions, when the…

View original post 914 more words

Systems of meaning all in flames | Meaningness

Source: Systems of meaning all in flames | Meaningness

Systems of meaning all in flames

The Crystal Palace burning down, 1936

The first half of the twentieth century was awful. Not just materially; Western systems of meaning—social, cultural, and psychological—were falling apart. The glorious accomplishments of the systematic era could not hold civilization together, and seemed likely to be lost entirely in a global conflagration.

Many people even came to think those systems were the cause of all the catastrophes. We who live in the aftermath—we who have never experienced an intact system—we cannot fully appreciate how awful that loss of meaning felt.

This page analyzes the first phase of meaning’s disintegration, roughly 1914–1964. It should help explain the new positive alternatives offered by the countercultures and subcultures, which came next, and also why those failed.

All the events I recount will be familiar, but the way I relate them to my central themes of eternalism and nihilism, and to problems of meaning in the domains of society, culture, and self, may seem novel.

We still have no adequate response to these issues. Any future approach—such as fluidity—must grapple with problems that first became obvious in the early twentieth century.

Society in crisis

Lenin addressing a crowd, 1920

The period was marked by two social crises: class conflict and world wars. The systematic ideologies that were supposed to resolve these horrible problems seemed, by the end, to have made them worse, or even to have been their principal causes.

Greatly increased division of labor during the 1800s created numerous specialized occupations. This drove great advances in the standard of living and enabled increasing cultural sophistication. However, it also created psychological alienation (discussed below) and social conflicts. The existing social system, which had been stable for hundreds of years, functioned only in an agrarian economy of peasants, aristocratic landowners, and a small class of skilled craftspeople. It had no way of accommodating the newly created classes, such as urban industrial workers and entrepreneurial commoners—who sometimes became richer and more powerful than most aristocrats.

Theorists proposed new systems of social organization: nationalist, socialist, democratic, totalitarian. Advocates made supposedly-rational arguments for why each was right; yet supporters mostly just chose the system that might benefit their in-group against others. Conflicts between them tore societies apart, often even into civil war.

Different countries tried each of the new systems, and all produced vast disasters:

  • nationalism led to World War I;
  • capitalism caused the world-wide Great Depression;1
  • fascism was to blame for World War II;
  • communism killed tens of millions with engineered famines and the mass murder of supposed dissidents.

WWI marked the end of naive faith in the systematic mode. Most countries went into the war confident of quick victory, confident of its necessity and ethical rightness, confident that war was an opportunity for glory, heroism, and unity. God was on our side.

For Europe, it was the first industrial war,2 with the new social and mechanical technologies of mass production turning out deaths instead of automobiles. Four years later, after tens of millions of casualties, extraordinary horror and suffering, the traumatized survivors asked not “was it worth it” but “what was that all about, anyway?”

In retrospect, WWI seemed completely pointless. Or, if it had any meaning, it was to point out that the pre-war systems of meaning must have been disastrously wrong. The 1800s had seemed an era of rapid moral progress as well as economic and scientific progress. That was no longer credible. This disillusionment increased support for alternatives, including socialist internationalism, fascism, explicit anti-modernism, and explicit nihilism.

One pointless, catastrophic world war might be a tragic accident. To fight another, even worse one—the worst human-created disaster ever—just twenty years later, goes beyond carelessness. When the victors of WWII immediately began preparing to fight WWIII among themselves—this time with potentially billions of deaths from nuclear weapons—it was widely regarded as a bad idea. Yet Cold War belligerents on both sides felt justified by their systems of meaning: benevolent socialist internationalism versus benevolent liberal democracy.

Systematicity itself was a major cause of the catastrophe. Leaders and peoples took their rational ideologies far too seriously, and acted on flawed theoretical prescriptions.

Why did they choose not to see the systems were failing? Eternalism. The only alternative to blind faith in the system seemed to be nihilism.

Continued in source: Systems of meaning all in flames | Meaningness

Cybersyn – metaphorum

Source: Cybersyn – metaphorum

The Cybersyn Project (1972, 1973)

On July 13th 1971 Stafford Beer received a letter from Fernando Flores, then President of the Instituto Technologico de Chile and Technical General Manager of Chile’s equivalent of the National Enterprise Board, which had been charged with the wholesale nationalisation of the economy. Flores spoke of the “complete reorganisation of the public sector of the economy” and said he was “in a position from which it is possible to implement, on a national scale – at which cybernetic thinking becomes a necessity – scientific views on management and organisation”.

They met in London the following month and Flores filled in the details. Chile had recently elected Salvador Allende, a Marxist, and was committed to a program of worker empowerment rather than the Soviet approach of absolute centralisation, with workers obediently following rigid national plans. Flores wanted Beer to take charge of this project, and Beer agreed to visit Chile in November 1971. During this visit Beer established a small team, and over eight days they agreed a plan for the cybernetic regulation of the social economy of Chile: it was named Cybersyn.

In Beer’s words: ‘the Cybersyn project aimed to acquire the benefits of cybernetic synergy for the whole industry, while developing power for the workers at the same time’ (see How Many Grapes, 1994, p. 322).

The entire story has been told on several occasions, but some accounts miss the essential nature of this, and indeed of all VSM applications: the key is to enhance and encourage autonomy at all levels (as the only way of dealing with environmental variety) while ensuring that the autonomous parts work together in a harmonious, coherent fashion and thus enjoy the synergy which comes when parts join together to create a whole system. Beer calls this the “explosion of potential” which happens in teams and collaborative projects of all kinds.

…continues in source

The Viable Systems Model Guide 3e

 

Source: The Viable Systems Model Guide 3e


 

The Viable System Model
How to design a healthy business: The use of the Viable System Model in the diagnosis and design of organisational structures in co-operatives and other social economy enterprises
A manual for the diagnosis and design of organisational structures to enable social economy enterprises to function with increased efficiency without compromising democratic principles
Based on The Viable Systems Model Pack, originally published as part of the SMSE Strategic Management in the Social Economy training programme
carried out by ICOM, CRU, CAG and Jon Walker with the financial assistance of Directorate General XXIII of the Commission of the European Communities.
The original version was completed October 1991. This 3rd revised version incorporates new material.

This HTML version was constructed by John Waters, who also prepared the diagrams and the bibliography.

Copyright © 1991 by ICOM, CRU, CAG and Jon Walker. Copyright © 1998, 2018 by Jon Walker.
Version 3.1 – Last modified 9th April 2018 to incorporate some long-overdue corrections and updates. A more completely revised version will be released in due course.

 

Eating Sand & Tasting Textures of Communication in Warm Data

norabateson (Nora Bateson)


Nora Bateson 2019

For years I have written about the systemic crises of our times in terms of tenderness, and rawness. I have exposed my inner world in its morphing potential. I have felt it important to offset the many graphs and articles that blaze facts of climate change, people trafficking, addiction, immigration crisis, racism and wealth gap as statistics baked and served in varying analysis. I wanted to feel it, and to share the language of that sensorial exploration. I have been on the outside of corporate trends of language. I have been eating sand; doing gritty work, reaching into the frequencies that people felt were too far out of reach to be communicable. I do not have a single thing that is sellable on the market of solutions. But the sand has been good, it was formed by the tempests of both wind and sea. Response to today’s…

View original post 1,935 more words

The Critical Difference Between Complex and Complicated – Theodore Kinni – MIT Sloan Management Review

 

Source: The Critical Difference Between Complex and Complicated – MIT Sloan Management Review

 

The Critical Difference Between Complex and Complicated

  • Theodore Kinni
  • June 21, 2017

Featured excerpt from It’s Not Complicated: The Art and Science of Complexity for Business

It’s time to call out the real culprit in far too many business failures — Dr. Peter Mark Roget and his insidious thesaurus. Roget is long dead, but his gang of modern-day editors still assert that the words “complex” and “complicated” are synonyms. Unfortunately, as Rick Nason, an associate professor of finance at Dalhousie University’s Rowe School of Business, ably explains in his new book, It’s Not Complicated, if you manage complex things as if they are merely complicated, you’re likely to be setting your company up for failure.

Complicated problems can be hard to solve, but they are addressable with rules and recipes, like the algorithms that place ads on your Twitter feed. They also can be resolved with systems and processes, like the hierarchical structure that most companies use to command and control employees.

The solutions to complicated problems don’t work as well with complex problems, however. Complex problems involve too many unknowns and too many interrelated factors to reduce to rules and processes. A technological disruption like blockchain is a complex problem. A competitor with an innovative business model — an Uber or an Airbnb — is a complex problem. There’s no algorithm that will tell you how to respond.
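A toy contrast may help (a sketch of my own in Python, not from Nason's book; the logistic-map example is an assumption for illustration): a complicated problem yields to a fixed recipe, while in a complex system the interactions dominate, so nearly identical starting points can end up far apart.

def complicated(parts):
    # A deterministic recipe: same inputs, same answer, every time.
    return sorted(parts)

def complex_system(x, r=3.9, steps=30):
    # The logistic map: a minimal feedback system that behaves
    # chaotically at r = 3.9; each state feeds into the next.
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(complicated([3, 1, 2]))   # always [1, 2, 3]
print(complex_system(0.5000))   # two nearly identical inputs...
print(complex_system(0.5001))   # ...produce widely different outputs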

This could be dismissed as an exercise in semantics, except for one thing: When facing a problem, says Nason, managers tend to automatically default to complicated thinking. Instead, they should be “consciously managing complexity.” In the excerpt that follows, which is edited for space, Nason explains how.

Jargon Party – Michael Zargham – Medium

 

Source: Jargon Party – Michael Zargham – Medium

Jargon Party

Some Definitions and References for terms I used frequently

Source: https://equitymates.com/pardon-the-jargon-1/

Preamble: Today, like most days, I was throwing around phrases like bio-mimetic system and multi-mechanism differential game. These words have meanings beyond their buzzword value, and I realize those meanings are often lost on listeners. Below is a quick attempt to define them publicly and to provide basic references.

[Gives definition and links for:]


Non-Linear System

Differential Equation

Event Driven Differential Equation

Hybrid System

Differential Game

Hybrid Differential Games

Multi-Player Differential Game

Multi-Mechanism Differential Game

Subpopulation Models

Policy or Dynamic Strategy

Bio-Mimetic System and Bio-Inspired Design

Separation of Time Scales

Engineering Design

Abstractions and Mathematical Models

Hidden and Unobservable States

Initial Conditions

Reachable States or Configuration Space

Boundary Conditions

Stochastic Process

Lyapunov Function or Generalized Energy Function

Analytical Stability

Gregory Bateson and the Counter-Culture

 

Source: Gregory Bateson and the Counter-Culture

Gregory Bateson and the Counter-Culture

Because of his duplicity in proclaiming spiritual benefits of “magic mushrooms” as psychoactive drugs, while simultaneously accepting CIA funding for his exploits, the newspaperman and banker Gordon Wasson could be considered a “Lifetime Actor” — that is, a person who cultivated a public image which was completely the opposite of his true agenda.
Another possible “Lifetime Actor” was the famous humanist Gregory Bateson. Bateson was an early supporter and teacher at Esalen, an organization devoted to personal growth, meditation, massage, Gestalt, yoga, psychology, ecology, spirituality, and organic food. Yet although Bateson cultivated this image during the Cold War period, he had earlier been a major participant in the creation of ‘Weaponized Anthropology’ for the OSS to control ‘inferior peoples’.
The ‘Weaponized Anthropology’ Bateson developed during WW2 was documented by Dr. David H. Price in his article, “Gregory Bateson and the OSS: World War II and Bateson’s Assessment of Applied Anthropology,” as well as in his book Anthropological Intelligence: The Deployment and Neglect of Anthropology in the Second World War.
Price found that during the Second World War, the OSS (direct institutional predecessor to the CIA) employed over two dozen anthropologists, including Gregory Bateson. By 1947, as many as three-fourths of professional anthropologists were “working in some war-related governmental capacity”, either full- or part-time. In fact, what we know as the science of “applied” anthropology was a government project that began in the OSS to determine how to control civilian populations.
It is an established fact that these anthropologists were developing social science that could be used against civilian populations. As shown below, what has not been understood is that this science was used by the CIA against the American people in the creation of the 1960’s counter culture.

Schismogenesis and black propaganda

Price noted that “Bateson spent much of his wartime duty designing and carrying out ‘black propaganda’ radio broadcasts from remote, secret locations in Burma and Thailand, and also worked in China, India, and Ceylon.” Bateson was ideally qualified to pursue this work, since his earlier anthropological research was on the subject of “schismogenesis”, which is to say, the study of how societies become divisive and dysfunctional.
As Christian Hubert explains:



In his first major anthropological study, Bateson studied the Iatmul tribe in New Guinea. From his fieldwork, he concluded that an Iatmul village is nearly perpetually threatened by fission of the community, because it is characteristic that intense and growing rivalries occur between two groups. It puzzled Bateson that the community usually does not disintegrate. He found that one event heading off a blowup is the elaborate “Naven” ceremony, which entails transvestism and buffoonery.

The nature of the ‘black propaganda’ Bateson developed during WWII needs to be completely understood by citizens because it was the basis for the present ‘mind control’ operations the government uses against them.  As Price wrote: “In this work Bateson applied the principles of his theory of schismogenesis to help foster disorder among the enemy.” Black propaganda is false information that purports to be from a source on one side of a conflict, but is actually from the opposing side.
The fact that the source of the propaganda must be credible is the basis of what we have named the ‘lifetime actor’ above. This is clear in the case of Wasson given above: the public’s willingness to repeat his purported use of psychedelic drugs would certainly have been tempered had it been aware that his journeys to Mexico were an MK Ultra project intended to determine how the government could control the minds of its citizens.
Bateson presented a narrative in which he claimed to be concerned over whether or not anthropologists would use their knowledge as a weapon. In 1942, he wrote that the war:

is now a life-or-death struggle over the role which the social sciences shall play in the ordering of human relationships. It is hardly an exaggeration to say that this war is ideologically about just this – the role of the social sciences. Are we to reserve the techniques and the right to manipulate peoples as the privilege of a few planning, goal-oriented and power hungry individuals to whom the instrumentality of science makes a natural appeal? Now that we have techniques, are we in cold blood, going to treat people as things? (Bateson 1942, as quoted in Price.)

Taken in context, Bateson’s concern in this warning was that the Nazis would be the ones who would be applying social sciences towards evil ends. However, Price discovered that Bateson had no dilemma whatsoever in “treating people as things”. By using the FOIA, Price was able to discover a paper written by Bateson that was “not with the OSS archives, but the Central Intelligence Agency – the institution that did take over for the OSS at the war’s end.”
Bateson’s 1944 position paper below illuminates the “Black Propaganda” type of intelligence work he carried out for the OSS. 
As we cannot improve on Price’s analysis, we quote his text below.

Bateson’s primary concern in this OSS position paper was to advance the position that American diplomatic and intelligence policy makers should keep an

eye on longer range planning, we are here to promote such a state of affairs in [South Asia] that twenty years hence we may be able to rely on effective allies in this area (Bateson 1944:1).

He begins by arguing that “it will actually pay the Americans to influence the British towards a more flexible and more effective colonial policy” (1944:2). In this paper, Bateson envisions that the post-war period will mostly look and function like it had in the pre-war period. He identifies two significant “faults in the pre-war colonial system” (1944:2). Bateson wants to strive for a new and improved colonial system, and starts by asking if it is possible to: “diagnose remediable faults in the British and Dutch colonial systems and can we present our diagnosis to the British and the Dutch in such a way that the system will be improved?” (Bateson 1944).
These “two weaknesses of the imperial system” (1944:5) are labeled the “lack of communication upwards from the native population to the white [population]” (1944:2), and the British failure in the area of the “delegation of authority” (1944:4). Each of these two points are discussed separately below.
(1) Lack of communication upward
In discussing how British colonialists traditionally received information from “natives” he notes that, “In the late 19th century and up to 1914 it was customary in British colonial governments to conduct monumental surveys of language, population, religion, caste, [and] village industries” (1944:2). He argues that, while these efforts were often flawed in their methodology and results, at least under this system “every District Commissioner was compelled to go and interview people in the native communities” (1944:2). At a minimum, this traditional system forced colonial managers to undertake some level of participant-observational contact with native populations. Despite the awkwardness and artificial pitfalls of these meetings, Bateson argues that colonial managers did acquire

some vivid awareness of what native life is about. He might not be able to convey this awareness in his books but he learned to feel with his elbows the trend of native thought. (1944:2)

Bateson points out that after the First World War colonial managers abandoned these personal meetings with native populations, instead favoring more distant statistical approaches – and British managers suffered from this loss of first-hand interactive knowledge.
Next, Bateson discusses the past importance of information which colonialists gathered through intimate contact with their local mistresses. He notes that the strategic uses of these relationships have been relegated to the past due to a variety of factors.

With the improvement of transportation, the discovery of quinine, the development of sanitation, mosquito control and public health measures generally, it has become increasingly easy for the white man to have his white wife and even children with him in the colonies. The presence of large numbers of white women relieves the official from the pinch of loneliness which formerly drove him to the native woman and at the same time the white women not unnaturally use their influence to build up strong moral sanctions against the taking of native mistresses – even to the point of ostracizing the guilty officials. As a result the more durable and more educative type of relationship with the native women has been reduced to a minimum and only the casual, impermanent – and educational[ly] useless – types of relationship persist. (Bateson 1944:3)

In these passages, Bateson clarifies that the extent to which past British colonial authorities in India had established ground-up communication networks – including those with their indigenous mistresses – helped them to understand and control some of the features of Indian village life. The loss of these relationships between colonizer and colonized is noted in the context of loss of information, with the clear implication being that post-war colonial authorities would be wise to re-introduce some variety of such “ground-up” communication networks.
(2) The British delegation of authority: colonial codependency and paternalizing the white man’s burden
Next, Bateson discusses the overall British failure to delegate authority among the Indian population by drawing on startling imagery of Paternal-British-Colonialists and their Child-like Indian Subjects. He begins by conjuring up caricatures of American and British differences in parenting dynamics to analyze the shortcomings of British rule in India. He argues that the British could improve their colonial system by acting less like rigid British parents, and more like nurturing American parents. We are told that in Upper and Middle Class British households, parents “think of themselves as models who the children should watch and imitate,” while in America, many of the parents come from alien cultures, so they are more content to watch their children and to learn from their offspring who achieve great things in this world they (the parents) imperfectly understand. Bateson stretches this comparison even further by noting that “the American family thus constitutes, in itself, a ‘weaning machine’” (1944:4). In diametrical opposition to this is the codependent

English family [which] does not contain this machinery for making the child independent and it is necessary in England to achieve this end by the use of an entirely separate institution-the boarding school. The English child must be drastically separated from his parents’ influence in order to let him grow and achieve initiative and independence. (1944:4)

Bateson’s analysis is arguing that the British would be more effective colonialists if they would become less like British parents and more like American parents. Though he does note the presence of indigenous anti-colonialist movements, he does not recommend moving towards dismantling the colonial system at war’s end. Instead, he offers advice on how to improve it functionally – that is, to reinforce its longevity. Bateson clarifies that the U.S. should not side with the growing liberation movement and he advises that “we ought not to think of altering the imperial institutions but rather of altering the attitudes and insights of those who administer these institutions” (1944:5). 
This is in some sense a culture and personality based analysis of the differences in British colonial and American neo-colonial approaches to the administration of global patron/client relationships. Bateson is advocating that the longevity of the British presence in India would be strengthened in the postwar period if British administrators would but change the “personality” of the administrative bureaucracy.
Bateson’s recommendations
In the paper’s conclusion, Bateson recommends that after the war the OSS should take four steps – to take advantage of these above mentioned “two weaknesses of the imperial system” (i.e., the lack of communication upward and the British delegation of authority). It is not exactly clear to what end these “two weaknesses” are to be put, but it is clear that they are not to be exploited as a means of ending the foreign-colonial rule of the Indian people.
Bateson recommends that: First, the OSS should gather as much intelligence as possible from British sources – while the wartime alliance is in place; Second, they need to undertake detailed analysis of pop culture – especially in terms of content analysis of Indian popular films – as a way of gauging popular sentiment; Third, and most importantly, America must learn from Russia’s successes in conquering ethnic minorities by praising and co-opting aspects of their culture – on this point he specifically suggests that it might be possible to co-opt some components similar to the symbolic capital that Gandhi has used so successfully; and finally, Bateson suggests that the postwar OSS be sure to continue with its wartime education programs for colonialist authorities. Of course, the OSS was disbanded at the end of the war. Or more accurately, it was transformed into the Central Intelligence Agency – the agency which kept the copy of Bateson’s report until I gained a copy of it under the Freedom of Information Act ….
Bateson’s comments on point three reveal much about the tone of his wartime OSS work and are reproduced in full below:

(3) The most significant experiment which has yet been conducted in the adjustment of relations between “superior” and “inferior” peoples is the Russian handling of their Asiatic tribes in Siberia. The findings of this experiment support very strongly the conclusion that it is very important to foster spectatorship among the superiors and exhibitionism among the inferiors. In outline, what the Russians have done is to stimulate the native peoples to undertake a native revival while they themselves admire the resulting dance festivals and other exhibitions of native culture, literature, poetry, music and so on. And the same attitude of spectatorship is then naturally extended to native achievements in production or organization. In contrast to this, where the white man thinks of himself as a model and encourages the native people to watch him in order to find out how things should be done, we find that in the end nativistic cults spring up among the native people. The system gets overweighed until some compensatory machinery is developed and then the revival of native arts, literature, etc., becomes a weapon for use against the white man (Phenomena comparable to Gandhi’s spinning wheel may be observed in Ireland and elsewhere). If, on the other hand, the dominant people themselves stimulate native revivalism, then the system as a whole is much more stable, and the nativism cannot be used against the dominant people.

OSS can and should do nothing in the direction of stimulating native revivals but we might move gently towards making the British and the Dutch more aware of the importance of processes of this kind (Bateson 1944:6-7).

Dr. Price was unable, of course, to recognize the importance of Bateson’s recommendation above concerning an archaic revival in controlling populations, because he was unaware that the government had created the ‘psychedelic counterculture’. However, every citizen should study the concluding quote from Bateson carefully. Bateson’s recommendation can certainly be understood as having led directly to what the psychedelic drug guru Terence McKenna described as the ‘archaic revival’. In other words, the counter culture of the 1960s was created by using ‘black propaganda’ to bring about an archaic revival among America’s youth and thereby make them easier to control, as had been determined by the secret anthropological experiments that Bateson somehow knew about.
The documents obtained through the FOIA reveal a clear and sinister trajectory. The anthropological science that was developed to enslave Russia’s Asiatic tribes by bringing about a native revival was turned against the American people. Bateson brought his science with him when he helped develop the MK Ultra program, which then created the counter culture based upon the elements that the Russians had used to enslave the Asiatic tribes – the shaman, psychedelic drugs, ‘trance music’ and dance were combined with the archaic appearance of the music idols to convey the message that the feudal past was where a young person should head – rather than a future with the technology and thinking power that might threaten the oligarchs.

Bateson, the CIA, and MK Ultra

Following the war (as Price explains), Bateson claimed to have become “uneasy” with his wartime role as an OSS operative and black propagandist, as he cultivated relationships within the human-potential movement. However, there are reasons to doubt Bateson’s sincerity in this regard.
First, let us note that Gregory Bateson played a significant role in the creation of the CIA. After the war, Truman wished the OSS to be disbanded. Its head, William Donovan, wrote to Truman’s budget director and presented him with a rationale for the organization to be not only kept in existence but expanded. At least part of this rationale was written by Gregory Bateson. In an article at the CIA website entitled “The Birth of Central Intelligence”, Arthur Darling states that Bateson argued as follows:

…the bomb would shift the balance of warlike and peaceful methods of international pressure. It would be powerless, he said, against subversive practices, guerrilla tactics, social and economic manipulation, diplomatic forces, and propaganda either black or white. The nations would therefore resort to those indirect methods of warfare. The importance of the kind of work the Foreign Economic Administration, the Office of War Information, and the Office of Strategic Services had been doing would thus be infinitely greater than it had ever been. The country could not rely upon the Army and Navy alone for defense. There should be a third agency to combine the functions and employ the weapons of clandestine operations, economic controls, and psychological pressures.

In spite of Donovan’s protest, Truman disbanded the OSS in 1945. However, in 1947, Bateson and Donovan’s recommendations emerged victorious, and the various US intelligence agencies (including those that had been split off from the former OSS) were re-assembled as the new Central Intelligence Agency. Given Bateson’s argument for its existence, it is no surprise that it immediately began to perfect the science of social control. One project in this vein had the name MK Ultra and funded (among other criminal activities) Wasson’s above-mentioned trip to harvest magic mushrooms.
The claim that the US government’s interest in LSD began with MK Ultra (which was started in 1953) is incorrect. The US Navy’s Medical Research Institute had been experimenting with psychedelics in their CHATTER program under the direction of Charles Savage, whose research report from 1951 was revealed by an FOIA request. The MK Ultra project, however, represented a considerable broadening of this earlier interest. On the Senate floor in 1977, Senator Ted Kennedy said:

The Deputy Director of the CIA revealed that over thirty universities and institutions were involved in an “extensive testing and experimentation” program which included covert drug tests on unwitting citizens “at all social levels, high and low, native Americans and foreign.” Several of these tests involved the administration of LSD to “unwitting subjects in social situations.” At least one death, that of Dr. Olson, resulted from these activities. The Agency itself acknowledged that these tests made little scientific sense. The agents doing the monitoring were not qualified scientific observers.

Bateson apparently maintained at least a casual involvement in the CIA’s ongoing drug research and promotion activities, as explained by John Marks in “The Search for the Manchurian Candidate”:

[CIA contractor] Harold Abramson apparently got a great kick out of getting his learned friends high on LSD. He first turned on Frank Fremont-Smith, head of the Macy Foundation which passed CIA money to Abramson. In this cozy little world where everyone knew everybody, Fremont-Smith organized the conferences that spread the word about LSD to the academic hinterlands. Abramson also gave Gregory Bateson, Margaret Mead’s former husband, his first LSD. In 1959 Bateson, in turn, helped arrange for a beat poet friend of his named Allen Ginsberg to take the drug at a research program located off the Stanford campus. No stranger to the hallucinogenic effects of peyote, Ginsberg reacted badly to what he describes as “the closed little doctor’s room full of instruments,” where he took the drug. Although he was allowed to listen to records of his choice (he chose a Gertrude Stein reading, a Tibetan mandala, and Wagner), Ginsberg felt he “was being connected to Big Brother’s brain.” He says that the experience resulted in “a slight paranoia that hung on all my acid experiences through the mid-1960s until I learned from meditation how to disperse that.”
Anthropologist and philosopher Gregory Bateson then worked at the Veterans Administration Hospital in Palo Alto. From 1959 on, Dr. Leo Hollister was testing LSD at that same hospital. Hollister says he entered the hallucinogenic field reluctantly because of the “unscientific” work of the early LSD researchers. He refers specifically to most of the people who attended Macy conferences. Thus, hoping to improve on CIA and military-funded work, Hollister tried drugs out on student volunteers, including a certain Ken Kesey, in 1960. Kesey said he was a jock who had only been drunk once before, but on three successive Tuesdays, he tried different psychedelics. “Six weeks later I’d bought my first ounce of grass,” Kesey later wrote, adding, “Six months later I had a job at that hospital as a psychiatric aide.” Out of that experience, using drugs while he wrote, Kesey turned out One Flew Over the Cuckoo’s Nest. He went on to become the counterculture’s second most famous LSD visionary, spreading the creed throughout the land, as Tom Wolfe would chronicle in The Electric Kool-Aid Acid Test.

It is also very interesting that for his postwar research, Bateson chose topics of crucial interest to another of MK Ultra’s goals: using drugs and hypnosis to create dissociative personalities. Bateson’s interest in double binds and the development of schizophrenia was perfectly analogous to this MK Ultra agenda. As noted by the Swiss journal “Current Concerns”, in its comments accompanying a reproduction of Price’s article about Bateson:

Metalog technology, future workshops and pseudo appreciation of “more indigenous” cultures
The American Gregory Bateson, highly-praised guru of the European future workshop scene, once developed models of communication theory for use in the military, in a circle of “chosen ones”, the Palo Alto group. Their civilian waste-products have today seeped into everyday-life vocabulary, as for instance the terms “metacommunication” and “double bind”. The term “metalog”, which the strategists of the “future workshops” use, originates in Bateson’s work and means something as harmless as the fact that the contents of a discussion are always to be connected with the form of the discussion. 
Among other things Bateson was active in the research and therapy of schizophrenia. He demonstrated the conditions in which human beings can become schizophrenic, i.e. mentally confused, so that they slip off into a psychosis and are no longer able to master their lives. In the mainstream literature on Bateson, his work is highly praised as being to the benefit of people, in particular to those who acquired a form of psychological disorder. It was not the work in the Californian Esalen institute that made him an esoteric, but it deepened his knowledge of group dynamics and large group control, the mainstream media report about him. So far, so good.
Research into schizophrenia – what for?
If one reads, however, the accompanying text of David H. Price on Bateson’s activities for the OSS (predecessor of the CIA) during World War II and his suggestions, how the colonial peoples are to be subjugated even after the war in a more effective way than the British and the Dutch had ever done it, some doubt arises on the integrity of the psychological researcher Bateson. Was it not interesting for the military to use the results of schizophrenia research in order to shatter the minds of prisoners of war and drive them mad, in order to be able to rebuild their personality again – or do so with whole subpopulations in “enemy nations”, or even in the[ir] own country? Bateson used his anthropological knowledge not only to the advantage of but also directed against human beings. We therefore have to assume that during the Cold War and probably still today power strategists use the findings of his schizophrenia and/or disorder research to direct them against human beings.

Neuro-Linguistic Programming

Finally, as we also noted recently elsewhere at this website, Bateson was also involved in the development of neuro-linguistic programming (NLP), another important technology for propaganda.

Bateson had established a scholarly relationship with the hypnotist Milton Erickson as early as 1932. …Bateson would have been fascinated with Erickson’s research, which involved the idea that hypnotically effective trance states could be established in the course of ordinary life activities such as reading, talking to a therapist, or watching motion pictures, especially if intense and traumatic emotional states could be evoked by the experience. During such trance states, Erickson believed, the subconscious mind of the target could be accessed by means of hypnotic suggestion….
This idea was later taken up by Bateson protégés Richard Bandler and John Grinder, who commercialized it as the system of “Neuro-Linguistic Programming”, described in their 1975 work “The Structure of Magic”. They drew on Noam Chomsky’s theory of transformational grammar to explain how subliminal messages could be formed within a deep linguistic structure lurking beneath the surface interpretation.

While we cannot demonstrate a direct relationship between Bateson and the CIA during the postwar period (that is, after the termination of Bateson’s contract with the OSS), the pattern of his research interests nevertheless gives reasonable grounds to suspect that Bateson never deviated from his agenda of promoting ‘superior’ people in their quest to subjugate the ‘inferior’ ones.
Following the war, Bateson’s headquarters was the Palo Alto VA hospital, where the CIA developed the MK Ultra project which had earlier sent Gordon Wasson to Mexico and begun the psychedelic drug movement. Also in Palo Alto, the CIA-funded drug research program introduced LSD to the individuals who would later lead America’s youth off a cliff – Allen Ginsberg, the Grateful Dead member Robert Hunter and the novelist Ken Kesey.
Thus, when we see the visual images of the ‘rock idols’ that helped to create the counter culture, we can now understand their purpose. Below is a photograph of David Crosby, a member of the Byrds, whose 1966 hit ‘Eight Miles High’ virtually created the LSD-inspired ‘acid rock’ genre. He is sitting congenially next to his father, Annapolis graduate and former OSS member (and Oscar-winning cinematographer?!), Floyd Crosby. A picture is worth a thousand words:


Dynamic organization of flocking behaviors in a large-scale boids model

Complexity Digest

A simulation of a flock of half a million agents is studied using a simple boids model originally proposed by Craig Reynolds. It was modeled with a differential equation in 3D space with a periodic boundary. Flocking is collective behavior of active agents, which is often observed in the real world (e.g., starling swarms). It is, nevertheless, hard to rigorously define flocks (or their boundaries). First, even within the same swarm, the membership is constantly updated, and second, flocks sometimes merge or divide dynamically. To define individual flocks and to capture their dynamic features, we applied DBSCAN and non-negative matrix factorization (NMF) to the boid dataset. Flocking behavior has different types of dynamics depending on the size of the flock. The function of different flocks is discussed in light of the NMF analysis.

 

Dynamic organization of flocking behaviors in a large-scale boids model
Norihiro Maruyama, Daichi Saito, Yasuhiro Hashimoto, Takashi Ikegami
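As a rough sketch of the pipeline the abstract describes (my illustration in Python, not the authors' code; the update rule, the parameters and the use of scikit-learn's DBSCAN are assumptions):

import numpy as np
from sklearn.cluster import DBSCAN

N, L, R = 500, 100.0, 5.0   # agents, periodic box size, interaction radius

def step(pos, vel, dt=0.1, w_coh=0.01, w_ali=0.1, w_sep=0.2):
    # Pairwise displacements under the minimum-image convention,
    # so the periodic boundary is respected.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    dist = np.linalg.norm(d, axis=-1)
    nbr = (dist < R) & (dist > 0)                       # neighbour mask
    n = np.maximum(nbr.sum(1, keepdims=True), 1)
    coh = np.where(nbr[..., None], -d, 0).sum(1) / n    # steer toward neighbours
    ali = np.where(nbr[..., None], vel[None], 0).sum(1) / n - vel  # match velocities
    sep = np.where((dist < R / 2)[..., None] & nbr[..., None],
                   d / (dist[..., None] ** 2 + 1e-9), 0).sum(1)    # avoid crowding
    vel = vel + w_coh * coh + w_ali * ali + w_sep * sep
    return (pos + dt * vel) % L, vel                    # periodic wrap

rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, L, (N, 3)), rng.normal(0, 1, (N, 3))
for _ in range(200):
    pos, vel = step(pos, vel)

# DBSCAN then labels individual flocks; -1 marks isolated agents.
labels = DBSCAN(eps=R, min_samples=5).fit_predict(pos)
print("flocks found:", len(set(labels) - {-1}))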

View original post 7 more words

Cybernetic Serendipity: History and Lasting Legacy

Source: Cybernetic Serendipity: History and Lasting Legacy

 

Cybernetic Serendipity: History and Lasting Legacy

Catherine Mason considers the ICA’s groundbreaking computer art exhibition of 1968 and looks at how it has shaped digital art in the 50 years since

by CATHERINE MASON

The impact of the pioneering exhibition Cybernetic Serendipity at London’s Institute of Contemporary Arts in 1968 should not be underestimated. It is still considered to be the benchmark computer art exhibition for its influence on many pioneers, as well as for introducing the subject to a wider audience. Fifty years on, the historical relevance of this groundbreaking show continues in our digitally driven world in almost everything we visually (and even aurally) consume – from CGI and special effects seen in Hollywood blockbusters to design encountered in our everyday lives and, of course, in fine arts. There can be few artists working in the 21st century who are not touched by some aspect of digital presence in their work.


James Faure Walker. View, 2016. Archival inkjet print, 58 x 66 cm.

Cybernetic Serendipity (CS) was the first comprehensive international exhibition in Britain devoted to exploring the relationship between new computing technology and the arts. There had been exhibitions of machines before, but CS was the first gallery show of its type. Uniquely in the UK in a gallery setting, it featured collaborations between artists and scientists, and showed these to be on an equal footing. The breaking down of barriers between disciplines was an important factor. Machines were shown alongside artworks and no differentiation was made between object, process, material or method, nor between the backgrounds of makers, whether art-school educated or not. One of the aims of CS was to show the scope of what was possible, emphasising the optimistic and celebratory nature of the project. Although its subject matter was avant garde, presenting a topic and style of artwork that was outside the mainstream of British art at this time, CS was facilitated and inspired by a postwar spirit of optimism in the positive power of new technologies.


Paul Brown. Ceiling Detail from the House of Signs, 1996. Giclée print, 19.68 x 19.69 in. Included in Creativity and Collaboration: Revisiting Cybernetic Serendipity, National Academy of Sciences, Washington DC, 2018.

The curator Jasia Reichardt, armed with a letter of introduction to IBM USA, was able to access important artists and corporate giants such as Boeing and General Motors.

The exhibition was opened by the then minister of technology, Tony Benn. The Ministry of Technology was set up under Prime Minister Harold Wilson’s “white heat of technology” government to promote industrial efficiency and the use of new technology in industry. At the 1963 Labour party conference, Wilson set out Labour’s plan for science, promising a Britain “forged in the white heat of this revolution” with “no place for restrictive practices or outdated methods”. Science and technology were seen as the engine of progress, a driving force for industrial innovation and economic prosperity. It was the Ministry of Technology that forced the series of mergers that created ICL – Britain’s biggest computer manufacturer – as competition for the US’s IBM.

The burgeoning subject of computer arts or cyberart was thus introduced to a younger generation of artists in a positive social and political climate. This generation subsequently laid the foundation for decades of advancement in the arenas of digital image-making, animation, interactivity, intermedia and cross-disciplinary collaboration in the arts, which is a feature of much contemporary art today.


Vera Molnár. Structure de Quadrilatères (Square Structures), 1987. Computer drawing with white ink on salmon-coloured paper, 11 3/4 x 16 1/2 in. Courtesy Senior & Shopmaker Gallery, New York.

The complexity and rarity of computers during this period meant that any artform based around them was bound to be a specialised branch of art, highly dependent on support and funding. This was not least because of the expensive, large-scale nature of much early equipment and the resulting technical expertise required to operate it. Before the onset of user-friendly systems, proprietary software and personal computers, artists had to build relationships with scientists, learn to write code and often construct their own hardware.


Frieder Nake. Bundles of straight lines, 1965. © the artist.

As a result of these issues of access, both artists and those from a technical or scientific background created work during this pioneering period. Much work was experimental in nature and, as such, often ephemeral. Sadly, some work survives today only in descriptions or photographs. Equipment and components that were expensive and hard to come by could be, and regularly were, repurposed and recycled in subsequent projects, and, today, the original physical object may no longer exist.

In this short article, there is space to mention only a small number of the artists involved and highlight some of the notable events; the intention here is to give an introduction to this subject and, I hope, a flavour of the wealth of artistic activity that took place during this creatively rich period.

Computer art is a broad label used here as an historical term to describe work made with or through the agency of a digital computer, predominantly as a tool, but also as a material, method or concept, from around the late 1950s onward. As Reichardt wrote in her introduction to Studio International’s accompanying publication, the exhibition showed “… artists’ involvement with science, and the scientists’ involvement with the arts [and] the links between the random systems employed by artists, composers and poets, and those involved with the making and the use of cybernetic devices.” A further term, cyberart, has been employed to describe artworks that have been created with, or enhanced by, the use of science and technology.


Example of NEW PICTURES computer graphics language for artists at Lanchester Polytechnic, Coventry School of Art by R Johnson, 1979.

Interest in using new technologies to make art was initially informed by the science of cybernetics. At this point, computers were at an early stage in their development, commonly thought of as “number crunchers” and often referred to as “electronic brains”. Not only was it difficult to access this equipment, at this stage it was also difficult to conceive of the computer as an art method or material, let alone one with a capacity for interactivity.

Cybernetics comes from the Greek root kubernetes, meaning pilot or helmsman, and was first used by Plato in his dialogues on Laws and The Republic to denote a governor of a country. In the 1940s, cybernetics was given its current meaning by Norbert Wiener in his 1948 book Cybernetics, Or Control and Communication in the Animal and the Machine. According to Wiener, at a basic level, cybernetics refers to “the set of problems centred about communication, control and statistical mechanics, whether in the machine or in living tissue”. Wiener’s concept was that the behaviour of all organisms, machines and other physical systems is controlled by their communication structures, both within themselves and with their environment. The result of this book was that the notion of feedback penetrated almost every aspect of technical culture. Early influential cyberneticians working in Britain include W Ross Ashby, Stafford Beer, W Grey Walter, Frank H George and Gordon Pask. Pask’s interactive cybernetic work Colloquy of Mobiles was exhibited at CS. This large-scale reactive and educable sculptural installation is now seen as a precursor to human-machine interaction. Cybernetics, the study of how machine, social and biological systems behave, offered a means of constructing a framework for art production in which artists could consider new technologies and their impact on life.
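To make the feedback notion concrete, here is a minimal sketch (my own illustration in Python; the thermostat framing, setpoint and gain are arbitrary): a system repeatedly senses the gap between its state and a goal and feeds a correction back into itself.

def feedback_loop(state, goal, gain=0.3, steps=20):
    # Each pass senses the deviation (communication) and acts to
    # reduce it (control) - the basic cybernetic feedback cycle.
    trajectory = [round(state, 3)]
    for _ in range(steps):
        error = goal - state
        state += gain * error
        trajectory.append(round(state, 3))
    return trajectory

# A room temperature converging on a thermostat setpoint of 21 degrees
print(feedback_loop(state=15.0, goal=21.0))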

Before CS, the Independent Group, the younger members of the Institute of Contemporary Arts, were meeting in the early 50s. This group of visual artists, architects, theorists and critics included Richard Hamilton, Reyner Banham, Eduardo Paolozzi, Peter and Alison Smithson and Theo Crosby. Inspired by Scientific American, Wiener’s writings, Claude Shannon’s information theory, John von Neumann’s game theory and D’Arcy Wentworth Thompson’s book On Growth and Form, they became interested in the implications of science, new technology and the mass media for art and society. Of particular influence on the incubation of cyberart was the 1956 London exhibition This is Tomorrow, a model of collaborative art practice. The catalogue of this show contains the first British published reference to the possible use of computers in art. The artists write of “punched tape … cards” and “motor and input instructions” as potential tools and methods for art production.1


Roy Ascott. Change Painting, c1960s. © the artist.

Roy Ascott, a student of Hamilton’s, continued the interest in communications systems and cybernetics in the early 60s by incorporating into his work as models the concepts of behaviour and process, stressing media dexterity, interdependence, cooperation and adaptability. For Ascott, art is a system that involves feedback between creator and audience.2 Ascott became an influential educator and contributed greatly to the consideration of systematic and programmatic ways of teaching in art schools; ultimately, this led to the use of computers by students in British art schools.

Cyberart in Europe arose from the kinetic art movement, which derived from the European avant garde, as seen for example with László Moholy-Nagy and Naum Gabo in Paris from the 20s. Both had a strong identification with science and technology alongside an interest in constructivism. By the 50s, a number of sculptors in Europe were already advanced in the development of art-making according to kinetic and cybernetic principles. The Hungarian-born Frenchman Nicolas Schöffer was pioneering interactive sound-equipped and cybernetic works from the early 50s, collaborating with engineers from Philips Electronics of the Netherlands. Also from this time, the Swiss sculptor Jean Tinguely used found objects and recycled machine parts to create kinetic works in Paris. Tinguely’s ideas spread when he travelled to the ICA in 1959 and to New York in 1960, working with the engineer Billy Klüver at the Museum of Modern Art. Both Tinguely and Schöffer featured in CS.


Edward Ihnatowicz. SAM, installation view at Cybernetic Serendipity 1968. © estate of the artist.

In England, the Polish émigré Edward Ihnatowicz took the crucial component of Kinetic art – the aspect of spectator participation – and combined it with the cybernetic principle of feedback. At CS, he exhibited SAM, Sound Activated Mobile, which was interactive, moving in a very lifelike manner in response to the sounds visitors made. Judging from surviving CS film footage, it appeared to be a huge hit with audiences.

Some of the first exhibitions of plotter drawings made by computer took place in Stuttgart in 1965, organised by the German philosopher Max Bense. It was Bense who encouraged Reichardt to consider this subject as potential exhibition material for CS. The Stuttgart show included Georg Nees and Frieder Nake, the major German pioneers of algorithmic art – writing programs (sets of predetermined instructions) to automatically generate drawings via a plotter. Both were inspired by Bense’s aesthetics, based on Charles S Peirce’s semiotics and Shannon’s information theory.


Manfred Mohr. P1650_1186, 2014. Pigment-ink on paper, 40 x 50 cm. © the artist.

Also working with algorithms at this time was Manfred Mohr, in Paris. Drawing via computer allows the possibility of producing sequences through iteration, the repetition of sets of instructions that can be adjusted so that each version is slightly different. It also enables exploration of calculations that would be mentally impossible, for example Mohr’s investigation of the mathematical permutation of aspects (sub-structures) of the cube and hyper-cube, a feature of his work for many years.
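A small sketch of this iterative idea (my illustration in Python, not Mohr's or Nake's actual method; the drawing rule, file names and parameters are invented): one set of instructions is rerun with a slightly adjusted parameter each time, producing a series of related but distinct plotter-style images as SVG files.

import math, random

def drawing(seed, n_lines=40, jitter=0.1, size=200):
    # One rule: lines radiating from the centre, each angle nudged by
    # a seeded random amount, so the result is deterministic per seed.
    random.seed(seed)
    elems = []
    for i in range(n_lines):
        angle = 2 * math.pi * i / n_lines + random.uniform(-jitter, jitter)
        x = size / 2 + size * 0.45 * math.cos(angle)
        y = size / 2 + size * 0.45 * math.sin(angle)
        elems.append(f'<line x1="{size/2}" y1="{size/2}" '
                     f'x2="{x:.1f}" y2="{y:.1f}" stroke="black"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{size}" height="{size}">' + "".join(elems) + "</svg>")

# The same instructions, iterated with a small adjustment each time
for version in range(5):
    with open(f"version_{version}.svg", "w") as f:
        f.write(drawing(seed=version, jitter=0.05 * (version + 1)))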

In Zagreb, in the former Yugoslavia, the New Tendencies art movement emerged in the early 60s, initially dedicated to concrete and constructivist art, op and kinetic art. The movement held several exhibitions before shifting its focus, in 1968 (under the title Tendencies 4), to “computers and visual research”, launching a symposium, exhibition, competition and the groundbreaking multilingual magazine Bit International. Including computer-generated graphics, film and sculpture, Tendencies transformed Zagreb – already one of the most vibrant artistic centres in Yugoslavia – into an international meeting place where artists, engineers and scientists from both sides of the iron curtain gathered around the then-new technology.3


William Fetter. Human Figures, Boeing Computer Graphics 1968. Given by the Computer Arts Society, supported by System Simulation Ltd, London. © Victoria & Albert Museum London.

One of the earliest places in the US to think about computer use in a visual sense was the Boeing aircraft company, around 1960. The designer William Fetter coined the term computer graphics to describe the work he produced there – diagrams of potential seating positions of pilots in cockpits. Often considered the first computer-aided drawing of the human figure, Fetter’s work featured in CS, and a print of his “Boeing Man” was produced as part of the Motif Editions print portfolio published in connection with CS. This portfolio was a set of seven lithographs by different artists and also included two works by the Computer Technique Group (made at the IBM Scientific Data Centre in Tokyo), plus works by Charles Csuri and James Shaffer, Maughan S Mason, Donald K Robbins and Kerry Strand.

Another early pioneer in computer graphics and drawing was Ivan Sutherland, who in 1963 devised what was essentially the first graphical interface, a drawing system he called Sketchpad. Sutherland’s work was considerably in advance of anything developed before: he was far-sighted in devising the commands users of such a system might need. These included the ability to undo, copy, cut and paste, and he foresaw the idea of layering pictures.
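As a toy illustration of the undo idea Sutherland anticipated (not Sketchpad’s actual mechanism, which worked very differently), each edit can be recorded as a reversible command, with undo reversing the most recent one. A minimal sketch:

```python
class Drawing:
    """A picture plus a history of reversible editing commands."""

    def __init__(self):
        self.shapes = []      # the current picture
        self.history = []     # completed commands, newest last

    def add(self, shape):
        self.shapes.append(shape)
        self.history.append(("add", shape))

    def undo(self):
        """Reverse the most recent command, if any."""
        if self.history:
            action, shape = self.history.pop()
            if action == "add":
                self.shapes.remove(shape)

canvas = Drawing()
canvas.add("line A-B")
canvas.add("circle C")
canvas.undo()            # removes "circle C"
print(canvas.shapes)     # ['line A-B']
```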

Much pioneering work in image-making – graphics and film animation in particular – took place at Bell Telephone Laboratories from 1962, by Ken Knowlton, A Michael Noll, Ed Zajac, Lillian Schwartz and others. The exhibition of plotter drawings by Noll and Béla Julesz at the Howard Wise Gallery in New York City in 1965 was the earliest such show in the US. Engineers from Bell Labs came together with artists and composers to found EAT (Experiments in Art and Technology), which helped to break down barriers between disciplines by utilising new equipment such as video projectors, wireless sound transmission and Doppler sonar. This followed the performance art event 9 Evenings: Theatre and Engineering in New York in 1966, which involved the artists John Cage and Robert Rauschenberg, among many others, and numerous engineers including Klüver. The group also travelled to Japan as part of Expo ’70.

Following the success of CS, the Computer Arts Society (CAS) was founded in 1968 by three individuals who knew Reichardt well – the cybernetician George Mallen, the ICL programmer Alan Sutcliffe (both participants in CS) and John Lansdown, an architect and pioneer of computer-aided drawing systems. A special interest group of the British Computer Society, CAS was unique at the time as a practitioner-run organisation. It helped to foster computer arts activity by providing a network of support, exhibition opportunities, workshops, events, lectures and, on occasion, funding. International branches existed in the US and Holland. Its bulletin PAGE, initially edited by Gustav Metzger (another exhibitor at CS), featured work by major British and international computer artists and hosted some fundamental discussions on the aims and nature of computer art. CAS’s inaugural exhibition, Event One, held at the Royal College of Art in 1969, was interdisciplinary, incorporating architecture, sculpture, theatre, graphics, music, poetry, film, performance and dance. As a collaboration of artists and programmers, it foreshadowed the future activities of CAS.4


Event One, 1969. Installation view. Photograph: Peter Hunot, Courtesy of the Computer Arts Society.

Into the 70s, partly owing to a reorganisation of the art education system in the UK, artists within art schools began to be able to access digital computing technology for the first time. This was made possible largely by the creation of polytechnics, which concentrated expensive resources into fewer but larger multidisciplinary centres. The first were designated in 1967, and many art schools were amalgamated into them. This was a unique period in which art students could learn to program. Important early centres included Coventry, Middlesex and Leicester, as well as the postgraduate programme at the Slade School of Art. These provided not only education and training but, in some cases, career incubation, employment, research facilities and networking opportunities. The character of British computer arts in the 70s was shaped by these unique conditions of access within art schools.5

Post CS, books began to be published in this field, often by artist-practitioners. These gave practical guidance as well as staking out a place within art history. Notable titles include Herbert Franke’s Computer Graphics, Computer Art (1971), Ruth Leavitt’s Artist and Computer (1976) and Jonathan Benthall’s Science & Technology in Art Today (1972). One of the first theory books to describe this contemporary history, Jack Burnham’s Beyond Modern Sculpture: The Effects of Science and Technology on the Sculpture of This Century, was published the same year as CS. It was followed in 1970 by his exhibition Software, Information Technology: Its New Meaning for Art at the Jewish Museum in New York, predicated on the idea of software as a metaphor for art.

By the early 80s, computing technology had changed radically. It was no longer imperative to construct one’s own hardware, proprietary software packages were becoming widely available, user-friendly systems were gaining ground and costs were falling. Although many continued to program, the rise of packaged software meant that writing code was no longer necessary to achieve many similar effects. This opened new avenues of exploration. Within digital painting, innovations allowed new techniques of layering, digital collage and printing, using first paintbox-type systems, later commercial packages such as Photoshop and now the iPad, most famously used today by David Hockney.

The field continues to expand. The contemporary use of gaming engines, scenery-generating software, and animation and modelling software allows artists to create moving-image and immersive environments as never before. Kelly Richardson’s recent exhibition The Weather Makers at Dundee Contemporary Arts, Scotland, showed this to great effect.


Kelly Richardson, Leviathan, 2011, 3 channel HD video, commissioned by Artpace San Antonio.


Mat Collishaw. Thresholds (VR view), 2017. 8.5 x 6 x 2.5 m. Courtesy Mat Collishaw.

Developments in virtual reality are also proving of interest. In 2017, Mat Collishaw’s Thresholds, at Somerset House, reimagined a Fox Talbot photography exhibition to great acclaim. The traditional portraitist Jonathan Yeo now uses 3D scanning to produce self-portraits, and VR to draw and then create sculpture via 3D printing. Yeo is currently showing at the Royal Academy’s From Life (billed as “from pencil and paper to Virtual Reality”), featuring artists who use traditional media alongside those who engage with emerging technologies such as the HTC Vive and Google’s Tilt Brush (as used by Yeo).


The making of Jonathan Yeo’s virtual self-portrait, in the legendary sculptor Eduardo Paolozzi’s former studio © Jonathan Yeo Studio 2017.

The cross-disciplinary work of the experimental 60s has proved of lasting impact. Many of the pioneers’ ideas featured here became integrated into the mainstream, and much of the technology they used is now ubiquitous as we have grown accustomed to the rapid integration of ever newer technologies into daily life. The interactive, participant-responsive aspect of early cyberart is now a major trend in contemporary new-media-based artwork. Artists now take for granted ideas of freedom in the use of materials, as well as a manner of working that takes into account institutional processes and the relationships between artist and audience, material and environment. Artists publish on their own on the web, with their own documentation and ideas. Artist-in-residence programmes, informal and ad hoc in the 60s, are now formalised and common within institutions and corporations. Organisations including SIGGRAPH, Leonardo, ISEA, EVA, Ars Electronica, Lumen, Furtherfield, Kinetica and many others continue to champion work in this field.


R. Luke DuBois, A More Perfect Union: USA, No. 4 (details), 2011/2017. Four inkjet prints on canvas, 57.5 x 120 in each. Courtesy bitforms gallery, New York. Included in Creativity and Collaboration: Revisiting Cybernetic Serendipity, National Academy of Sciences, Washington DC, 2018.

Cyberart, with its possibilities of immediacy, interactivity and multi-authorship can reach people outside the gallery system and engage new audiences in public places, whether that be sports arenas, music projects or on web- and mobile-based devices. Artists have always been early adopters of technology and those working today are adept at repurposing the vast quantity of data and information that surrounds us, whether to make statements of a sociopolitical or aesthetic nature. As the century progresses, our society faces an increasing number of challenges around issues of security, privacy, data-mining and possible abuses of power of big data collected by corporations and used for commercial ends. Art that is fundamentally engaged with technology has a crucial role to play now and in the future by raising questions, taking a critical stance or simply holding a mirror to our life and times.

References
1. This is Tomorrow by Lawrence Alloway, Reyner Banham and David Lewis, published by Whitechapel Art Gallery, 1956, Section 12.
2. Telematic embrace: visionary theories of art, technology, and consciousness by Roy Ascott, edited by Edward A Shanken, University of California Press, 2003.
3. A Little-Known Story about a Movement, a Magazine and the Computer’s Arrival in Art: New Tendencies and Bit International, 1961–1973, edited by Margit Rosen, published by MIT, 2011.
4. The Fortieth Anniversary of Event One at the Royal College of Art by Catherine Mason.
5. A Computer in the Art Room: The Origins of British Computer Arts 1950-1980 by Catherine Mason, published by JJG, 2008.

• Catherine Mason is a board member of the Computer Arts Society and the author of A Computer in the Art Room: The Origins of British Computer Arts 1950-1980 and White Heat, Cold Logic (MIT, 2009) catherinemason.co.uk

Our ability to detect patterns might stem from the brain’s desire to represent things in the simplest way possible – American Physical Society

 

Source: Our ability to detect patterns might stem from the brain’s desire to represent things in the simplest way possible

Our ability to detect patterns might stem from the brain’s desire to represent things in the simplest way possible

March 5, 2019, American Physical Society
Sacrificing accuracy to see the big picture
A. Example sequence of visual stimuli (left) representing a random walk on an underlying transition network (right). B. For each stimulus, subjects are asked to respond by pressing a combination of one or two buttons on a keyboard. C. Each of the 15 possible button combinations corresponds to a node in the transition network. We only consider networks with nodes of uniform degree k = 4 and edges with uniform transition probability 0.25. D. Subjects were asked to respond to sequences of 1,500 such nodes drawn from two different transition architectures: a modular graph (left) and a lattice graph (right). E. Average reaction times across all subjects for the different button combinations, where the diagonal elements represent single-button presses and the off-diagonal elements represent two-button presses. F. Average reaction times as a function of trial number, characterized by a steep drop-off in the first 500 trials followed by a gradual decline in the remaining 1,000 trials. Credit: Lynn et al.

During their first year of life, infants can recognize patterned sound sequences. As we grow, we develop the ability to pick out increasingly complex patterns within streams of words and musical notes. Traditionally, cognitive scientists have assumed that the brain uses a complicated algorithm to find links between disparate concepts, thereby yielding a higher-level understanding.

Researchers at the University of Pennsylvania—Christopher Lynn, Ari Kahn and Danielle Bassett—are building an entirely different model, indicating that our ability to detect patterns might stem, in part, from the brain’s desire to represent things in the simplest way possible.

The brain does more than just process incoming information, said Lynn, a physics graduate student. “It constantly tries to predict what’s coming next. If, for instance, you’re attending a lecture on a subject you know something about, you already have some grasp of the higher-order structure. That helps you connect ideas together and anticipate what you’ll hear next.”

The new model offers striking insights, suggesting that people can, and indeed do, make mistakes in detecting the individual components of a pattern in order to catch a glimpse of the bigger picture. For example, Lynn explained, “if you look at a pointillist painting up close, you can correctly identify every dot. If you step back 20 feet, the details get fuzzy, but you’ll gain a better sense of the overall structure.” The brain may well adopt a similar strategy, he said.

To test its theory, the team developed an experiment in which people view a computer screen depicting a row of five squares and then press one or two keys to match the display they see. The researchers timed the responses, concluding that people hit the correct keys more quickly when they anticipate what’s coming next.

As part of the experimental setup, each stimulus presented to a subject could be viewed as a node in a network, with one of four adjacent nodes representing the next stimulus. The networks come in two forms: a “modular graph” consisting of three linked pentagons and a “lattice graph” consisting of five linked triangles. Subjects reacted more quickly when presented with the modular graph, suggesting that they could better discern its underlying structure and, hence, better anticipate the image to follow.
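A rough sketch of how such a stimulus sequence can be generated, assuming, as the caption above states, 15 nodes of uniform degree 4 and uniform transition probability 0.25. The ring lattice below is one plausible reading of the lattice architecture; the modular graph would be supplied as another adjacency list of the same shape.

```python
import random

N = 15  # one node per button combination

# Ring lattice: each node is linked to its two nearest neighbours on
# either side, giving every node exactly 4 neighbours.
lattice = {i: [(i - 2) % N, (i - 1) % N, (i + 1) % N, (i + 2) % N]
           for i in range(N)}

def random_walk(adjacency, steps, seed=0):
    """Draw a stimulus sequence as a random walk on the network:
    each successor is chosen uniformly from 4 neighbours (p = 0.25)."""
    rng = random.Random(seed)
    node = rng.randrange(len(adjacency))
    sequence = [node]
    for _ in range(steps - 1):
        node = rng.choice(adjacency[node])
        sequence.append(node)
    return sequence

stimuli = random_walk(lattice, steps=1500)  # 1,500 trials, as in the study
```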

Ultimately, the experiment is designed to measure a quantity the authors call beta (β), which varies from subject to subject, assuming lower values in people prone to making errors and higher values in those inclined toward accuracy. The Pennsylvania group plans to acquire images from fMRI scans later this year to see if the brains of people found to have different values of β are, in fact, “wired differently.”

More information: The 2019 APS March Meeting presentation “Structure from noise: Mental errors yield abstract representations of events,” by Christopher Lynn, Ari E. Kahn and Danielle Bassett, will take place Tuesday, March 5, at 3:06 p.m. in room 261 of the Boston Convention and Exhibition Center. Abstract: meetings.aps.org/Meeting/MAR19/Session/H66.4

Provided by: American Physical Society

Paradigm shift – RationalWiki

 

Source: Paradigm shift – RationalWiki

Paradigm shift

“Really — in order to be a scientist — one should not only be comfortable, but willing to go against the grain. And I think this is true, to a certain degree. But when it’s taken too far you get into this area of obscurantism. You know, you are pushing so hard against the grain that you are encouraged to essentially create nonsense because you need to carve out this “extra” space that doesn’t already exist, and I think this happens a lot in the social sciences.”
—Mike Rugnetta, PBS Idea Channel[1]

A paradigm shift is a phenomenon described by the philosopher Thomas Kuhn in The Structure of Scientific Revolutions.

Kuhn posited a process to explain the persistence of incorrect ideas, and the seemingly rapid and sudden abandonment of these ideas when they finally are rejected.

People tend to believe in what they know, and science is basically conservative. A current “paradigm” or theory is difficult to dislodge. It takes either a large volume of evidence, or a particularly powerful single piece of evidence to overturn major scientific theories (scientific revolution). When this occurs, it is called a “paradigm shift”.


Examples

Newton’s Dark Legacy of ‘Science as the Seeker of Truth’ and the misunderstanding of Mach and Miner

Newton developed the first post-Renaissance view of how the Universe worked, but it also created a serious problem for how science was, and is, viewed by the public. Newton’s concept of the Universe was essentially clockwork, and it fostered the view in the public’s mind (and in many scientists’) that “fact” and “truth” were the same thing. Moreover, Newton held that everything was knowable, that there were such things as Absolute Time and Space, that Science’s goal was to expand our knowledge and uncover this Truth, and that scientists anywhere would see exactly the same thing and be able to discern this Truth through Facts.

The problem was (and is) that, in terms of philosophy, “fact” and “truth” are actually totally different things.[2] “A fact is a reality that cannot be logically disputed or rejected,” whereas “truths are those things that are not simply acknowledged, but must be discovered, or created.” To oversimplify: a fact is what can be demonstrated to be true through observation and/or testing, while truth is subjective.

At one time stating that the Sun revolved around the Earth was the truth; that didn’t change the fact the Earth revolves around the Sun. Newton’s theory of gravity assumes that the force of gravity acts instantaneously but this does not change the fact that it propagates at the speed of light, a prediction by Einstein’s general theory of relativity that has been verified by observation of gravitational waves. In some denominations of Buddhism, the truth is that the deities, the heavens, the hells, and the world itself are all an illusion that prevents one from achieving enlightenment. In some other religions, the truth is the exact opposite.

The problem with that view of science as the seeker of truth is that it created a totally inaccurate picture of science, and this erroneous view continues to this day. Just watch the totally inaccurate way science is portrayed in the cartoon The Flight of Dragons for an example of just how bad it can get.

So when Newton’s model of the universe got replaced with Einstein‘s and later with quantum mechanics (which Einstein didn’t like as it stated that you could not know everything and there were no certainties only probabilities; hence his famous “God does not play dice” comment), the public concept of “Science as the Seeker of Truth”, born of Newton, got kicked in the head and there really wasn’t anything certain and concrete to replace it with.

Moreover, Ernst Mach demonstrated that what was viewed as “fact” was dependent on your senses and frame of reference, which echoed the ideas of Immanuel Kant — science creates structures to explain and predict how the world works, and the model can affect what is viewed as acceptable data (i.e., fact).

One example of how bad that could get comes from France, where stories peasants told aristocrat scientists about “these here rocks that fell from the sky” were dismissed, partly thanks to the class system. Come the revolution, when said peasants were running things, presto-chango: all those dismissed stories suddenly became “vital astronomical data”, and within a few years there was a book on meteorites by one of the new peasant scientists.

With his deeply sarcastic 1956 article Body Ritual among the Nacirema, Horace Miner hammered into his fellow anthropologists another example of the model driving the data rather than the data driving the model,[3] and James Burke’s 1985 The Day the Universe Changed (especially the last episode, “Worlds Without End: Changing Knowledge, Changing Reality”) brought this view to the masses. But Burke also made the unfortunate comment that religious systems “explain the world just as well as science does”, reinforcing the idea among many that science was no different from religion.

In the decades that followed the Newtonian revolution, technology took off and the public absorbed another misconception into its collective consciousness: that science was technology. In fact, science studies how the world works, while technology takes advantage of that knowledge for practical applications. It is true that science and technology develop together; deeper scientific knowledge generally allows for higher technology, which, in turn, aids further scientific developments. Moreover, religion was quite able to produce technology, as demonstrated by the clock and gunpowder, as well as warfare (the screwdriver, town planning and better map-making). Unfortunately, this misconception held sway during one of the biggest social upheavals of the Western World: World War I. It is no surprise, therefore, that the anti-science movement gathered steam, especially in the years following World War II.

Paradigm shifts and the demarcation problem

The concept of paradigm shifts offers one means of resolving the demarcation problem. Kuhn drew a division between sciences in a pre-paradigm state and those in a post-paradigm state, i.e. having a unifying theory or school of thought. Before a consensus is built around a single paradigm, the field is not a “true science” but a protoscience at best or a pseudoscience at worst.

Misuses and criticisms

Pseudoscientists

Creationists, New Agers and other pseudoscientists often reference Kuhn when citing the supposed “fragility” of scientific theories. They anticipate a paradigm shift that “is coming soon”, in which evolution, naturalism, etc, will be rejected by the scientific community, only for the consensus to return to old discredited models such as special creation, Lamarckism and vitalism. The main problem with this approach is that once a paradigm has shifted, the old position has been thoroughly discredited, and the paradigm isn’t likely to return to its original position. Suggesting that it will amounts to nothing more than trying to move the goalposts or special pleading, claiming that some area (God, the soul, whatever) is off-limits to scientific inquiry.

Postmodernists

During the period of the so-called Science Wars, some advocates of “science studies” abused the term paradigm shift to justify their conclusions as a kind of scientific knowledge. (“Science studies” was a program of postmodernist pseudointellectual hogwash treating scientific knowledge as a socially constructed cultural-literary text, to be creatively critically deconstructed so as to satisfy the deconstructors’ sociopolitical agenda to reveal its underpinnings as an instrument of oppression. No, really, you can’t make this stuff up; Poe’s law applies to loony-left whackadoodles, too.)

History of science

Some scientists and historians of science have criticized the concept on various grounds, a common one being that paradigm shifts don’t occur in the revolutionary sense as often as Kuhn claimed, but happen more gradually.[4][5] A second criticism is that as scientific understanding progresses, scientists address smaller and smaller problems of less and less overall significance, and paradigm shifts in the understanding of details do not affect the overall big picture precisely because they concern only details.


The Quietus | Features | Tome On The Range | Dead Precedents: How Hip Hop Hacked The Future

Source: The Quietus | Features | Tome On The Range | Dead Precedents: How Hip Hop Hacked The Future

 


Tome On The Range

Dead Precedents: How Hip Hop Hacked The Future
The Quietus, March 17th, 2019, 08:56

In an exclusive extract from his new book Dead Precedents: How Hip-Hop Defines the Future, Roy Christopher explores rap and cybernetics

Credit: GLEN E. FRIEDMAN © Photo Public Enemy New York City Early 1987

At the end of the 1980s, it was finally evident that something needed to be done about digital sampling. Hip-hop producers were making money off of the talent and toil of other artists. This would not stand! The Turtles sued De La Soul for $1.7 million, Gilbert O’Sullivan went after Biz Markie, the Beastie Boys spent a quarter of a million dollars clearing samples for their cluttered kaleidoscope, 1989’s Paul’s Boutique. In another case of technology and culture outpacing The Law, the sampling crackdown coincided with the hacker crackdown.

One of the most recognisable cultural contributions of rap music is sampling. In its simplest form, a sample is a piece of previously recorded sound, mechanically lifted from its original context, and arranged into a new composition. In hip-hop, samples were originally manipulated using turntables and vinyl records, but the practice has since largely moved on to more efficient digital samplers. Commerce and copyright laws notwithstanding, anything that has been recorded can be used as a sample (e.g., beats, guitar riffs, bass lines, vocals, horn blasts, etc.). “Using everything from drum pads and samplers to magpie the last few centuries of speeches, music, and commercials and turn them upside-in for the betterment of the practitioner and listener,” emcee and producer Juice Aleem tells me. “Hip-hop is hacking.” Sampling technology allows producers to make new compositions out of old ones, using old outputs as new inputs, like a hacker cobbling together code for a new program or purpose.
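The mechanics are easy to sketch: lift slices from an existing recording and resequence them into a new arrangement. The snippet below synthesizes its own “source recording” so the example stays self-contained; the slice length and arrangement are invented for illustration.

```python
import numpy as np

RATE = 44100                                     # samples per second
t = np.linspace(0, 4, 4 * RATE)                  # four seconds of source material
source = np.sin(2 * np.pi * (220 + 55 * t) * t)  # a rising tone stands in for a record

beat = RATE // 2                                 # half-second slices
slices = [source[i:i + beat] for i in range(0, len(source), beat)]

# The new composition: old outputs become new inputs, looped and
# resequenced the way a producer might rearrange a break.
arrangement = [0, 0, 3, 1, 0, 0, 3, 5]
track = np.concatenate([slices[i] for i in arrangement])
```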

The first example of this sort of sound hacking to reach the Billboard charts was a 1956 song called “The Flying Saucer” by Bill Buchanan and Dickie Goodman. The two collaged clips of songs of the time together on a reel-to-reel magnetic tape recorder, creating a ridiculous alien-invasion scenario. Four record labels sued the two composers and lost: the judge deemed their looting a new and original work.

Mining the past for samples and sounds, hip-hop hacks recorded sound for self-expression, and, like cyberpunk, hip-hop has spread around the world. Both are a part of a globalized network culture that decentralizes the human subject’s stability in space and time and in which the technologically mediated subject reforms and remixes ideas of body normativity. With everything from clothes and glasses to tattoos and piercings, technology changes what is considered normal to have on or in your body.

Cybernetics, the science of command-control systems (whence the “cyber” in “cyberpunk”), defines humans as “information-processing systems whose boundaries are determined by the flow of information.” Technologically reproduced memories disrupt more than just body normativity: media theorist Marshall McLuhan once declared that an individual is a “montage of loosely assembled parts,” and furthermore that when you are on the phone, you don’t have a body. Technology dismembers the body. Our media might be “extensions of ourselves” in McLuhan’s terms, but they’re also prosthetics, amputating parts as they extend them.

Grandmaster Flash once described another DJ as using “his hands like a heart surgeon.” In his book on Public Enemy’s undisputed and sample-heavy classic, 1988’s It Takes a Nation of Millions to Hold Us Back, Christopher R. Weingarten draws a lengthy and effective analogy between records and the body, casting samples as organ transplants. Tales of transplanted organs causing their recipients to adopt the tastes and behaviours of their dead donors read like the “meatspace” anxieties of cyberpunk:

“A 68-year-old woman suddenly craves the favorite foods of her 18-year-old heart donor, a 56-year-old professor gets strange flashes of light in his dreams and learns that his donor was a cop who was shot in the face by a drug dealer. Does a sample on a record work the same way? Can the essence of a hip-hop record be found in the motives, emotions and energies of the artists it samples? Is it likely that something an artist intended 20 years ago will re-emerge anew?”

Conceived as a combination of the hip-hop of Run-DMC and the punk rock of the Clash, Public Enemy emerged from Strong Island, New York, in the late 1980s. Made up of emcees Chuck D and Flavor Flav, DJ Terminator X, and Professor Griff and the S1Ws (the Security of the First World), their paramilitary dance squad, P.E. upended not only what hip-hop could be but the power of sound itself. Their production team, the Bomb Squad (Eric “Vietnam” Sadler, brothers Hank and Keith Shocklee, as well as Chuck D), used upwards of forty-eight separate recording tracks to build their apocalyptic collages. Where their 1987 debut, Yo! Bum Rush the Show, relied on live instrumentation in addition to sampling, Nation of Millions is one of the most sample-ridden recordings ever made, its layers coalescing and collapsing, its chaos barely contained. Scott Herren says of the record, “it sounded like science fiction.” It remains one of the boldest sonic statements not only in hip-hop but in all of modern music. The Bomb Squad experimented in the studio like Dr. Frankenstein in his laboratory. They built a body out of noise, and it came alive, thrashing everything in its past and in its path, including copyright law. Though Russell Simmons called them “Black punk rock,” Public Enemy stated: “We’re media hijackers.” About the song “Caught, Can We Get a Witness” from Nation of Millions, Chuck D says, “We got sued religiously after the fact, but not at that time. The song itself was just challenging the purpose of it”:

Found this mineral that I call a beat
Paid zero
I packed my load ’cause it’s better than gold
People don’t ask the price, but it’s sold

Chuck’s use of the word “witness” is curious in our current context. “Cultural memory is most forcefully transmitted through the individual voice and body,” Hirsch and Smith write, “through the testimony of a witness.” In occult practices, a witness is an object that can link people across times, just as musical samples do. To establish such a connection is called a witness effect. Preston Nichols explains, “As a noun, it refers to an object that is connected or related to someone or something […] As a verb, ‘witness’ means to use an object to enter a person’s consciousness or otherwise have an effect on them.” A lock of hair, a piece of clothing, a proper beat, bassline, or vocal sample — any of these could have that bridging effect, tying two separate times together. The song ends: “They say that I stole this. I rebel with a raised fist; can we get a witness?” When their future is outlawed, the outlaws become the future.

Elon Musk’s “Giant Cybernetic Collectives” – Jennifer Sensiba | CleanTechnica

 

Source: Elon Musk’s “Giant Cybernetic Collectives” | CleanTechnica

 

Elon Musk’s “Giant Cybernetic Collectives”

March 17th, 2019, by Jennifer Sensiba


Elon Musk often mentions cyborgs at events and during interviews. This concept is not just a funny remark, a sci-fi reference, or a way to throw around big terms to sound smarter. It’s an important paradigm, a lens we can look at the past with, and more importantly, a concept that will have a great impact on the future of our species.

Before unveiling the Tesla Model Y on Thursday night, Elon Musk walked us through the history of Tesla. In the beginning, there was only the original Roadster, and a relatively small team developed the Model S in a small corner of a SpaceX rocket-building facility. To mass-produce cars, he arranged to buy a shuttered General Motors factory in Fremont, California. He said some people thought that buying a car factory meant he could just start building cars, but it was really just an empty shell of a building, like an abandoned warehouse.

To build cars, Tesla had to finish building the inside of the factory. It needed machines, tooling, computers, robots, and people to all work together to go from parts to functioning cars.

He described the end result as a “giant cybernetic collective” where 20,000 people, countless machines, numerous computers, and communications systems all work together over 4–5 shifts to build cars.

This wasn’t a cute or witty comment he threw around. If you look at various interviews on YouTube, Elon explains the idea in much more depth. It’s an important lens through which he views the past and present, and has very important implications for the future of humanity. But, to understand this concept and its importance, we have to go through a good bit of background information.

Our Brains Are NOT Computers

When looking at the “man vs. machine” question, we have to be careful not to get the two mixed up. Computers are designed to do certain tasks as efficiently and quickly as possible by crunching the numbers. The human brain, on the other hand, has a much more interesting history and structure.

In the mid 20th century, researchers started comparing the structures of the human brain with those of animals. What they found is that we have a lot in common with some animals, and only some things in common with others. Why? Because brains evolve to take care of the survival needs of the being in question, mostly because animals with brains ill-suited to survival don’t survive, and don’t pass those features on to offspring. Different animals will have different brains, but features inherited from a common ancestor that work well in both animals will stay much the same.

In nature, brains vary widely. Sea stars, for example, don’t technically have a brain like we do (which is why Patrick is portrayed as unintelligent in SpongeBob SquarePants). They do, however, have brain-like cells distributed throughout their bodies, and this works well for the way they live. Among invertebrates, mollusks have the most advanced brains, and brains evolved among them at least four separate times. While humans and mollusks do have a common ancestor, that common ancestor probably didn’t have a brain; brains developed separately only after the lineages of the vertebrates and invertebrates split. Because our ancestors’ survival needs differed greatly from theirs, our brains look and work very differently. Even the individual neurons/brain cells in our brains differ from those of the invertebrates.

Compared to other animals with backbones, though, we have a lot more in common. Neuroscientist Paul D. MacLean identified three basic divisions in our brain:

  • The “reptilian” brain
  • The “lower mammal” brain
  • The “higher mammal” brain

Our very basic life functions, our basic emotions, and our movement are all controlled in our “reptilian” brain, and those structures look and act much like the brain of a reptile. On top of that, we have the “mammal” brain, also known as the limbic system, that governs mammalian needs, like feeding, nurturing and social behavior with other mammals. That commonality is probably why humans are so good at relating to our most popular pets: cats and dogs.

Built atop all of that, we have a “higher mammal” brain, also known as the neocortex. Among primates, ours is the largest and most advanced. This part of our brains helps us do things like language, abstract thinking, perception, and planning. While we would like to think this part of our brain rules it all, we use our smarts mostly to satisfy the needs and impulses of the lower parts of the brain (eating, reproduction, etc.).

This complicated mess doesn’t always work like it should, and it definitely doesn’t work like a computer. We have conflicting needs, instincts, and myriad disorders that can develop. Much of our brain’s evolution happened in a hunter-gatherer environment, while we live in a very different world today with very different survival needs.

Networking With Other People’s Brains

Hugh Howey points out in Wired that our brains are very good at figuring out what other brains are thinking. In a hunter-gatherer situation, it’s important for Ug to know whether the other caveman is going to kill him or trade with him. We are very good, but not perfect, at reading other people’s facial expressions, guessing what might be bothering them, or otherwise trying to predict the future of others.

However, we don’t dare point that flashlight into the dark of our own minds. We tend to tell ourselves stories about ourselves and our motivations that differ from reality. Why? Because that truth is often too depressing and we need to keep moving to survive.

He also points out that the common parts of the brain might work very differently from person to person. For example, the part of the brain that drives us to reproduce might never achieve that goal in some of us, because it drives us toward partners of the same sex; meanwhile, other humans don’t understand this because their brains don’t work that way.

While not perfect at it, the human brain is built to connect to other brains. Our survival is almost always dependent on our interaction with other humans and even some animals, so much of the neocortex and parts of the “lower” brain spend most of their time trying to send and receive data from other brains through language, facial expressions, writing, and speech. Many in our society who are considered disabled have perfectly functioning arms and legs, and are quite mobile and strong, but fail to provide for themselves because they can’t communicate or relate to other humans like the rest of us can.

The human brain is so dependent on the brains of others that we even store information in other people’s heads. We do this without thinking about it, often with those we are closest to. Members of a group in a family, a job, or another tight-knit social setting tend to start taking on specialized tasks. We all have talents and areas in which we are better than the others, so the others expect us to remember the relevant information so they don’t have to. The important thing is remembering who knows what, so we know whom to ask for information when it’s needed. This is known as group transactional memory.

The larger point here is that human brains evolved to work with other brains to get things done. It’s in our nature to network and reach out. Thousands of years ago, writing systems developed and we took our first steps toward storing information using non-living things and extending our mental networking reach past the reach of our voice and the limits of time.

That Time Humans Slowly Became Cyborgs

“When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole. We shall be able to communicate with one another instantly, irrespective of distance. Not only this, but through television and telephony we shall see and hear one another as perfectly as though we were face to face, despite intervening distances of thousands of miles; and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket.” —Nikola Tesla, 1926

While connecting to the brains of others is older than history, important changes started in the late 19th and early 20th centuries. We started to extend our reach to the brains of others in ways prehistoric humans could probably not imagine. Telegraphs, telephones, and radio made these long-distance communications instant where letters and books took time to send and read.

This is the point where we began to be cyborgs, but it was just the beginning. Telephones, radios and telegraphs only relay information. They don’t add anything to the conversation or help organize the communication between humans. As computers got better and better during the 20th century, we became more and more cybernetic. The role of the machines went from strictly passing information to contributing, organizing, helping, and more recently, directing us.

This is the point where we can get back to Elon Musk’s concept of “cybernetic collectives” and make more sense of it. He has discussed it in a number of interviews, but the segment of his interview with Joe Rogan might be the clearest and easiest to follow.

In short, nearly any present-day group of people is a cybernetic collective. At work, computers assist us with many tasks, and help us communicate with our coworkers. The tools we use do more and more with less and less of our input. We go to and from places where we work with complex machines. We use smartphones to perform an increasing number of tasks at work, for play, and in our families. Social networks connect us to people we know in real life and to people we’ve never met, help us find people to connect to, and decide what we see when we open the apps/websites.

Some collectives are better than others at collecting humans. While the human brain is great at many analytical tasks, we tend to use almost all of that brain power to keep the “lower mammal” limbic system happy. Most users of social networks, “gig economy” apps and other similar things aren’t aware that there are literally teams of psychologists behind the most successful networks, trying to get people to engage in ways that best suit those who run the network.

The thing is, though, that this isn’t a dastardly scheme for mind control. Social networks are just following the same path our brain did during its development. These new extensions of our brains are evolving to best suit their survival needs. Those that “resonate” with our limbic system survive better than others that do not. The successful ones develop to service our basic mammalian needs and work well with our “higher mammal” neocortex. They are quickly becoming a fourth layer of our brain, for better or worse.

The only reason we are suddenly noticing the rapid changes now is that we are approaching a tipping point where the contributions of the machines in our cybernetic collectives are rapidly growing, while our contributions aren’t changing much. We are even seeing the machines in our cybernetic collectives contribute more than we do in some cases, and even start to replace some of the humans involved.

What Happens Next?

The obvious question is what will happen to us now that the machines are starting to outperform us within our cybernetic collectives. While we can’t predict the future, we can make educated guesses. Hopefully, with the background provided in this article, our guesses will be a little more educated.

Elon Musk made several educated guesses, and did so in the fashion of the typical futurist. In the worst case, we are completely replaced by the machines and possibly even destroyed by them. A better but still not great outcome would be rule by benign superintelligent machines, or, in other words, we become “pets” to the AI, which treats us well. A better outcome still might be that we “merge” with the superintelligent AI machines and grow with them into the future.

The fact that we are already in cybernetic collectives makes this look more likely, but we have a big problem with bandwidth. We simply can’t communicate with the machines anywhere near as quickly as they can communicate with each other. I am typing this article at approximately 80 words per minute. When you use a phone or tablet, you are far slower at data input than that (as am I). We can read and perceive pictures and video to bring data into our brains pretty fast, but there are still limits on both the upload and the download speeds.
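A back-of-the-envelope calculation makes the gap concrete. The figures below are assumptions (average word length, uncompressed text, a modest machine-to-machine link), not measurements:

```python
words_per_minute = 80   # the typing speed quoted above
chars_per_word = 6      # assume 5 letters plus a space
bits_per_char = 8       # assume uncompressed 8-bit text

typing_bps = words_per_minute * chars_per_word * bits_per_char / 60
print(f"Typing throughput: {typing_bps:.0f} bits/s")        # ~64 bits/s

link_bps = 100e6        # a modest 100 Mb/s network link
print(f"Machines outpace our typing by ~{link_bps / typing_bps:,.0f}x")
# roughly 1.6 million times faster
```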

Elon Musk proposes we solve this problem and improve the chances of human survival by implanting devices in our brains to provide high-speed data transfer between us and the machines. For that reason, he founded a company: Neuralink.

But does this theory pan out in the real world? Let’s look at one example and try it out.

One Example: Trucking

Image by leestilltaolcom on Pixabay

Both within the trucking industry and outside it, few professions have the mythical quality that truck driving does. Movies like Convoy and Smokey and the Bandit portray the job as one of rugged individualism, with plenty of hard work and freedom for every mile. For those of us who haven’t been around truck stops much, it’s easy to think of the industry as one that hasn’t changed much in decades.

When it comes to “robot trucking”, many probably envision a trucker from the 1970s, dressed in cowboy clothes, being ordered out of the cab by an emotionless Terminator cyborg from the future. In short, it’s tempting (and perhaps cinematic) to skip from the mid 20th century to the mid 21st century, ignoring what came before and during the change.

If we treat this issue like something so simple, we risk completely mishandling it and leaving displaced truckers worse off than if we had done nothing at all. But, if you’ve read this far, you know that it doesn’t work like that. The transition is already underway, and the obvious changes are afoot only now.

Image by FrankMagdelyns1 on Pixabay

If we look at things the way Elon Musk does, truckers really started becoming cyborgs with CB radio. Not only were they able to talk short range with other truckers, but they also got information about road conditions far ahead, had conversations to pass the time, and warned each other about “bears” ahead on the road — the cops trying to write them tickets. This basically turned trucking into a big, somewhat short-range, cybernetic collective.

It should be no surprise that truckers would often add linear amplifiers or use amateur radio gear to go far, far beyond the 4-watt limit imposed by the FCC. Sometimes truckers would transmit using over a kilowatt of power. When conditions were right, 27-28 MHz signals would “skip” over the horizon by bouncing off the earth’s ionosphere, allowing truckers to talk to others hundreds or even thousands of miles away.

Even seemingly primitive cybernetic collectives give the individual such an advantage that truckers were willing to risk multi-thousand-dollar fines from the FCC to enlarge the number of connected individuals. If you think about it, this gave them superhuman abilities compared to people just 50 years earlier. In Brazil, truckers in areas with no cell phone coverage use modified radios to piggyback on older, unencrypted U.S. Navy satellites and get coverage across South America. Despite repeated raids and mass arrests, Brazilians continue to use the satellites illegally.

Two-way voice radio wasn’t the end of it. As technology improved and dropped in price, truckers and trucking companies expanded the abilities of truck drivers. Satellite tracking, GPS, cellular phones, and other communications technologies were often adopted early by truckers. Larger companies even use GPS technology to get automated warnings when trucks go into higher crime areas and places with a history of load theft.

As computing power advanced and hardware became smaller, truckers got smartphones like everybody else. And, like any big group of people, applications came along that catered to their differing needs. While not made specifically for truckers, the Waze app is popular in the business. Like CB radio, the app lets drivers leave warnings and information behind on the map, report and correct map errors, and do other limited things while driving. The app itself uses GPS reports from both its own users and Google Maps to detect traffic jams, reports those to drivers, and even tries to route drivers around them.
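A sketch of that rerouting idea (not Waze’s actual algorithm): model the road network as a weighted graph, let slow GPS reports inflate the cost of a congested segment, and re-run a shortest-path search. All junction names and travel times below are invented.

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm: return (total_minutes, path)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []

# Baseline travel times in minutes between junctions (hypothetical).
roads = {
    "depot":   {"highway": 5, "surface": 8},
    "highway": {"city": 10},
    "surface": {"city": 14},
    "city":    {},
}
print(fastest_route(roads, "depot", "city"))   # (15, via the highway)

# A cluster of slow GPS reports triples the highway segment's cost...
roads["highway"]["city"] = 30
print(fastest_route(roads, "depot", "city"))   # (22, rerouted via surface streets)
```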

In other words, we are starting to see the contribution of the machines increase, and not just pass information between humans.

But where is this going?

What happens next in the trucking industry is a big issue. There are 3.5 million truck drivers, and it’s the most common job in 29 states. And that’s just those doing the driving; over 7 million others work in support positions, at truck stops and at trucking facilities. Handling this problem wrong could have major consequences for the entire economy.

This is a big enough issue that 2020 presidential candidate Andrew Yang has made it an important component in his campaign and a common example in his stumping so far. He points out that automation is already killing jobs by the millions in swing states, and that ignoring the impact of job losses to automation could have been an important factor in Donald Trump’s 2016 election victory.

Yang, and many others, predict that the role of machines in the truck-driving cybernetic collective will increase until the machines are doing the driving and millions are left out of a job. Yang proposes a $1,000 monthly “universal basic income” for all U.S. citizens over the age of 18 to help cushion the landing for people falling out of employment as more jobs in more industries are lost.

If Musk is right that cybernetic collectives evolve gradually, with the machines’ role growing before jobs are replaced outright, then we should be seeing moves toward partial replacement. And we are. Autonomous trucks are getting good enough to drive safely in rural areas with low traffic in good conditions, and in many cases on the highways within cities, but that doesn’t mean the driver can be safely removed from the cab, because difficult situations will still require a human driver.

To get over this challenge, some of the companies developing autonomous trucks are installing a suite of cameras and sensors to allow remote piloting of the vehicles. Truck drivers will be on standby ready to take over when an autonomous truck has a “disengagement” and can no longer safely operate the truck by itself. Using a big array of screens and video game-like controls, the trucker can operate the vehicle remotely through 5G data connections until the vehicle can safely take over again.

The virtual cab isn’t the end of human involvement, but what place humans will have beyond it is harder to predict. Some think that computers will never be able to achieve 100% self-driving, for a variety of reasons. Many others, especially those working on the technology, think it’s not only possible, but will happen sooner than we think.

If autonomous vehicles do get to the point where they don’t need human assistance, even remotely, then the number of human backup drivers will slowly shrink until there are few, if any, left. At that point, would something like Neuralink help us be smart enough to compete in the new superintelligent job market? Would humans all need to be put on disability in the form of universal basic income to survive? Or will someone come up with a better solution?

Up to now, Elon Musk’s paradigm of “cybernetic collectives” does seem to check out, but the future is much harder to predict. What do you think we will need to do to survive and thrive in a vastly changed economy? Let’s keep this discussion going in the comments and on social media!


About the Author

 Jennifer Sensiba is a long time efficient vehicle enthusiast, writer, and photographer. She grew up around a transmission shop, and has been experimenting with vehicle efficiency since she was 16 and drove a Pontiac Fiero. She likes to explore the Southwest US with her partner, kids, and animals.
