Building trust is the way Chris Kraft proceeded when he designed the ecosystem of decision support that worked with the astronauts when Armstrong and Aldrin landed on the Moon seven and a half years after that New York Times editorial was written. But the route to this balanced relationship required hard work. Our fascination with AI is a projected fascination with ourselves. Technological narcissism can be unhealthy, but if we can shift our narcissism to introspection, it will be beneficial. That is the idea behind unpicking the nature of the atomic human.
While there is a separation between our immune system and our nervous system, there is no neurological evidence for a separation between System 1 and System 2. These dual-process models of cognition are analogies that capture the idea that the spectrum of our cognition spreads from fast-reacting reflexive decisions to slow-responding reflective decisions. There is, however, a clear separation between us and System Zero, the digitally driven system of decision-making that relies on our data. With the latest wave of generative AI, our machines have learned to converse. The first wave of System Zero primarily communicated with our reflexive self, but this next wave will be able to interact with our more reflective self. For our human society, that gives us a choice: do we really wish to be like those six men in that Harrow clinic, trusting that this systemic intervention is safe? The Theralizumab trial was a failure of process and understanding for which six men sacrificed their health. With social media and the next wave of generative AI, we are dosing ourselves with System Zero and testing it against our societal health, but the nature of social health is not easily quantified. That makes it hard to measure the consequences of the phase-one trials for System Zero.
9.
A Design for a Brain
Alan Turing’s most famous academic paper on intelligence describes the imitation game, where the idea is to distinguish between a machine and a human. In the game, there are two players. The first player is always human; the second can be either a human or a computer. The machine wins the game if it can fool the first player into believing it is human. This is the game we call the ‘Turing Test’.
Inspired by Turing’s paper, the Loebner Prize was awarded every year to the machine that did the best job of fooling humans. The story of this prize is told in Brian Christian’s The Most Human Human, and nowadays we have machines that can pass this test. But more interesting for me is a lesser-known aspect of the paper: the section where Turing describes how the game’s rules may account for telepathy:
I assume that the reader is familiar with the idea of extrasensory perception, and the meaning of the four items of it, viz., telepathy, clairvoyance, precognition and psychokinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.1
I don’t know by what mechanism Turing assumed that telepathy could occur, but here’s one that might work. Imagine if we each had a high-frequency radio transmitter and receiver in our heads: we could then evolve the capability of Wi-Fi. To any humans who didn’t have such a capability, this would look like telepathy. Wi-Fi humans would have a tremendous advantage: they would have a much lower embodiment factor and would no longer be locked in. They would be able to coordinate instantaneously. If this ever happened, then the humans who had evolved this capability would rapidly dominate society.
It didn’t happen. The statistical evidence Turing mentions comes from badly designed experiments, from a failure to conduct rigorous statistical trials. Turing was fooled by the damned lies of statistics. We don’t have telepathy, but we do have an innate ability to decrypt one another’s reactions through non-verbal cues. The error in the experiments Turing refers to came from a failure to consider our ability to ‘read each other’ without using verbal communication.
Our verbal communication uses a sound transmitter we have in our mouth and neck and a pair of receivers we have on the sides of our head. Wi-Fi’s radio waves travel 1 million times faster than our voice’s sound waves. Our high embodiment factor constrains our ability to coordinate our actions across groups of humans, but we have evolved creative approaches to overcoming these constraints. Our nervous system initially evolved to control our motion, our fight-or-flight responses, but it has adapted to show emotion and communicate ideas through the constrained channels we are provided with. It does this so well that Alan Turing was deceived into believing in telepathy. The Wi-Fi human doesn’t exist,2 but we can read each other’s minds if we understand and accommodate individual motivations. We can second-guess each other’s behaviour by imagining what we would do in each other’s place. We have our eyes, which are receivers for the very high-frequency radio waves we call light. We have a simple visual transmitter: we can smile, we can frown, we can smirk, we can wink. Our face contains over forty muscles to control expression. Some of these communications are voluntary, but when we laugh or cry they can be involuntary. Human collaboration relies on trust and understanding. The telepathy Turing was observing is the manifestation of subtle social cues that operate closer to our reflexes than to our reflections. They relate to our social intuitions. Through these intuitions we can develop our feel for each other. Just as Don Lopez had a feel for his plane, just as Donald MacKay described our feel for a visual scene and just as Kevin O’Regan described our feel for a house or a car, we have a feel for one another.
Powered flight is just over a century old, and by the 1940s Bob Gilruth had characterized the feel of an aircraft, but conversation is over a thousand centuries old and we haven’t yet characterized the feel of our fellow human being. We call a collaborative unit a team and teamwork is our route to coordinated action. Like Eisenhower on D-Day or Chris Kraft at NASA’s Mission Control, teams can have captains. But a good captain knows her team’s capabilities; she knows where and how support will be needed. A good captain has a deep understanding of who her team members are. A good captain empowers her team to deliver and allows them to get on with their roles unmolested. She brings her team into a state of information coherence. When the team understands its shared goals, it can deploy naturally to its strengths. In these circumstances the division of labour is emergent and adaptable. When we manage to work together in this way, we achieve a whole that is greater than the sum of its parts. This is true across our society, and we can see it across our evolutionary history. We have collaborated with our fellow humans for hundreds of thousands of years, and as animals for millions of years. Human collaboration has led to highly evolved forms of communication that were subtle enough to fool Turing into believing in telepathy.
Norbert Wiener developed his ideas for Cybernetics while he was working as a mathematics professor at the Massachusetts Institute of Technology. Like Donald MacKay, during the Second World War Wiener was asked to address the challenge of gun targeting with radar. Wiener was so inspired by the importance of feedback to automatic control systems that he also came to believe that feedback is fundamental to intelligence, but feedback is just one form of a wider phenomenon of systems interacting with their environment. In feedback systems, output feeds directly into the input. In the earliest nervous systems, outputs were actions; they affected the input senses through the changes those actions delivered in perception of the world around. The earliest animals had sensorimotor neurons connecting their senses directly to their primitive muscles: sense directly led to action and these animals were integrated into their world in the same manner that Watt’s governor is integrated with the engine.
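To make the idea of output feeding straight back into input concrete, here is a minimal sketch of a closed feedback loop in the spirit of Watt’s governor. It is purely illustrative: the engine model, the gain and the numbers are invented for this sketch, not taken from Watt’s design or from Wiener’s work.

```python
# Toy feedback loop in the spirit of Watt's governor.
# The engine model, gain and all constants are invented for illustration.

def simulate_governor(target_speed=100.0, gain=0.05, steps=50):
    speed = 0.0      # engine speed (arbitrary units)
    throttle = 0.5   # fraction of full steam admitted to the engine
    for _ in range(steps):
        # Output feeds back into input: the measured speed adjusts the throttle.
        error = target_speed - speed
        throttle += gain * error / target_speed
        throttle = min(max(throttle, 0.0), 1.0)
        # Crude engine model: speed relaxes towards what the throttle allows.
        speed += 0.3 * (200.0 * throttle - speed)
    return speed, throttle

final_speed, final_throttle = simulate_governor()
print(f"speed settles near {final_speed:.1f} with throttle {final_throttle:.2f}")
```

Run for a few dozen steps, the speed settles close to the target precisely because every change in the output (the measured speed) is immediately fed back into the input (the throttle), which is the coupling Wiener saw as fundamental.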
This early reflexive intelligence was not locked in. It was fast-reacting and reflection-free. In it, sensing was a result of changes in the environment or of changes that resulted from the animal acting on its environment. The two were intertwined: perception and action were tightly coupled. Contrast this with the origins of the digital computer: the grinding tasks of the electro-mechanical bombes exhaustively exploring possible settings of the Enigma machine, or the repetitive comparison task of the bombes’ electronic cousin, Colossus. These were slow-reacting processes. The information was prepared and presented to the machine; the machine then worked through many computations before it gave its answer.
Directly connected sensorimotor systems are at one end of the spectrum of our intelligence. They are fast, reflexive systems, but there are cascades of reflexive systems between our fastest-reacting systems and the reflective intelligence Bauby describes in the butterfly. The reading of a fellow human’s thoughts in the manner that fooled Turing forms just one part of what we call our intuitions. These are a set of instincts that fuse our senses with our previous experiences. Their nature and origin are difficult to describe; we use terms like ‘gut instinct’ to capture them. But when we are interacting with our fellow humans, those instincts emerge in the moment of the conversation. They affect our responses and feed our own emotional state, triggering empathy, joy or anger. Those intuitions form our feel for our fellow humans. Each of us differs in our capability to fly a plane: many of us could never develop the feel Amelia Earhart had for her aircraft. In the same way, we differ in our feel for our fellow humans. How quickly we can adapt and respond to this feel relates to our social intelligence.
By many measures, Turing should be regarded as one of the greatest Allied minds of the Second World War. Alongside his great mind, Turing was also a great athlete, a marathon runner. In 1946 he came tenth in the UK National Championships, and his time of 2 hours, 46 minutes, 3 seconds would have placed him fifteenth at the 1948 Olympics, ten minutes behind the winner.3 But just because he was extremely talented in some areas doesn’t mean he was talented in all of them. If we listed the areas of Turing’s extraordinary capabilities, we probably wouldn’t include social intelligence among them.
After the war, Turing continued working on electronic computers at the UK’s National Physical Laboratory, known as NPL, in south-west London. Turing’s running talent meant he came to the attention of J. F. ‘Peter’ Harding, secretary of a local athletic club:
We heard him rather than saw him. He made a terrible grunting noise when he was running, but before we could say anything to him, he was past us like a shot out of a gun. A couple of nights later, we kept up with him long enough for me to ask him who he ran for. When he said nobody, we invited him to join Walton. He did and immediately became our best runner.4
By this time, Turing was hoping to realize his vision of the universal computer in electronic form. At NPL he was designing a programmable digital computer. Harding goes on to comment on Turing’s appearance and social integration:
Looking back, he was the typical absent-minded professor. He looked different to the rest of the lads; he was rather untidily dressed, good quality clothes mind, but no creases in them; he used a tie to hold his trousers up; if he wore a necktie, it was never knotted properly; and he had hair that just stuck up at the back. He was very popular with the boys, but he wasn’t one of them. He was a strange character, a very reserved sort, but he mixed in with everyone quite well: he was even a member of our committee.
Turing had clearly worked out how to integrate with his fellow runners, but like many academics he was seen as different.
We had no idea what he did, and what a great man he was. We didn’t realize it until all the Enigma business came out. We didn’t even know where he worked until he asked us if Walton would have a match with the NPL. It was the first time I’d been in the grounds.
Turing must have had natural talent to be running at such a high level, but he must also have been dedicated to his training. As with any skill, excelling at the very top level requires a combination of talent and hard work.
I asked him one day why he punished himself so much in training. He told me ‘I have such a stressful job that the only way I can get it out of my mind is by running hard; it’s the only way I can get some release.’
Some of Turing’s stress arose from the gap between his intellectual promise and his ability to deliver on his vision for the NPL. Turing was designing the Automatic Computing Engine, or ACE for short. Unfortunately, he was struggling to move from designing to building: his design tended to evolve. Turing was no longer under the time pressure of the shifting global conflict of the Second World War, and he allowed his imagination to roam.
The boss of the NPL was the mathematician Sir Charles Galton Darwin, grandson of the naturalist. Sir Charles received a letter from a pathologist called W. Ross Ashby and shared it with Turing, who replied in excitement:
Sir Charles Darwin has shown me your letter, and I am most interested to find that there is someone working along these lines. In working on the ACE I am more interested in the possibility of producing models of the action of the brain than in the practical applications to computing. I am most anxious to read your paper.
The year was 1946. Turing was now communicating with a like-minded genius. The NPL was supposed to be working on practical applications of the computer, but Ashby had written to Sir Charles about brains. Ashby was a pathologist, medically trained, who had become interested in adaptation in the nervous system. He had spent his spare time teaching himself advanced mathematics and working on the brain. The theories he shared with Sir Charles were based around adaptation in animals. Ashby was interested in homeostasis. This is the process by which a lifeform reacts to changes in its environment to keep conditions in its body right for sustaining life. Just as Watt’s governor tries to keep the speed of the engine stable by feeding back to the engine’s regulator, animals have to adapt to their environment to keep their biological systems working. Animals strive to stay alive. Ashby described his thinking in a 1952 book called Design for a Brain. The examples he gives come from homeostatic systems in our bodies:
As first example may be quoted the mechanisms which tend to maintain within limits the concentration of glucose in the blood.
He is referring to the mechanisms which keep our cells supplied with the fuel they need to survive, the job metabolism must do to enable the cell’s persistence. Since eukaryotic cells emerged over a billion years ago, glucose has been their main source of energy. The cell converts glucose and oxygen to ATP, the fuel of the cell. Our multicellular bodies have evolved to provide glucose to our cells in the right quantity to sustain them:
The concentration should not fall below about 0.06 per cent or the tissues will be starved of their chief source of energy; and the concentration should not rise above about 0.18 per cent or other undesirable effects will occur.5
These other undesirable effects include a narrowing of the blood vessels, coma and death. When blood glucose drops too low, adrenaline is released into the blood, causing the liver to produce glucose and appetite to increase. If the level goes too high, insulin is produced and the glucose is stored as glycogen or excreted in the urine.
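Ashby’s glucose example can be caricatured as the same kind of negative feedback loop, held within the limits he quotes. The sketch below is a deliberately crude toy, not a physiological model: the 0.06 and 0.18 per cent bounds come from the passage above, but the correction rate, the starting levels and the insulin-like and adrenaline-like responses are invented for illustration.

```python
# Toy caricature of glucose homeostasis as negative feedback.
# The 0.06% and 0.18% limits are Ashby's figures quoted above;
# the response rate and starting levels are invented for illustration.

LOW, HIGH = 0.06, 0.18   # blood-glucose concentration limits (per cent)

def regulate(glucose, steps=30, rate=0.2):
    for _ in range(steps):
        if glucose > HIGH:
            glucose -= rate * (glucose - HIGH)   # insulin-like response: store the excess
        elif glucose < LOW:
            glucose += rate * (LOW - glucose)    # adrenaline-like response: release glucose
    return glucose

print(f"{regulate(0.30):.3f}%")   # a toy post-meal excess falls back towards 0.18%
print(f"{regulate(0.02):.3f}%")   # a toy starved deficit rises back towards 0.06%
```

Either disturbance, the toy excess after a meal or the toy deficit after starvation, is pulled back towards the safe band, which is the homeostatic behaviour Ashby is describing.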
Ashby goes on to list other mechanisms where maintaining adaptation is critical to our survival, including our body temperature, the ways our pupils adjust to allow the right amount of light on to the retina of our eyes, how our skin darkens in the presence of sunshine, the volume of blood we have in our bodies, and the production of saliva to digest food. When Ashby wrote to Sir Charles, he was developing his ‘design for a brain’. It was based on homeostasis.6 Ashby’s suggestion was that our nervous system also operates according to the ideas of homeostasis.
Turing’s misunderstandings around extrasensory perception were to do with a mode of communication in which he didn’t excel: direct conversation. There is a joke that goes: ‘What’s the difference between an introverted and an extroverted mathematician?’ The answer is that an introverted mathematician looks at her shoes when she’s talking to you, whereas an extroverted mathematician looks at yours. When we look at each other’s faces, various reflexive parts of the brain light up. We cannot help processing the emotional reactions of those we are speaking to. Just as some people struggle to do mathematics, others struggle to process the social feedback the face contains. Their response is to look away, to stare at the ceiling, to look at your shoes or theirs. From descriptions, it seems that Turing may have been one of those people, but when communicating by letter he could be just as effusive as anyone else. In 1946 he wasn’t aware of Ashby’s ideas around homeostasis, and his reply to Ashby focuses on his own plans for brain research:
The ACE is in fact analogous to the ‘universal machine’ described in my paper on computable numbers. This theoretical possibility is attainable in practice, in all reasonable cases, at worst at the expense of operating slightly slower than a machine specially designed for the purpose in question.
Turing’s universal computer is the machine that can compute anything that is computable. Realizing this machine is the objective of his work at NPL. In his reply to Ashby he is contrasting his digital universal computer with the special-purpose machine, which is what we would call an analogue computer.
Thus, although the brain may in fact operate by changing its neuron circuits by the growth of axons and dendrites, we could nevertheless make a model, within the ACE, in which this possibility was allowed for, but in which the actual construction of the ACE did not alter, but only the remembered data, describing the mode of behaviour applicable at any time. I feel that you would be well advised to take advantage of this principle, and do your experiments on the ACE, instead of building a special machine. I should be very glad to help you over this.7
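Turing’s point, that a fixed universal machine can model a changing brain by changing only its stored data, can be sketched in modern terms. The snippet below is my own toy illustration, not anything from Turing’s design: a fixed lookup function plays the role of the unchanging machine, while a mutable table of ‘remembered data’ determines its mode of behaviour and can be rewritten to mimic new connections without touching the code.

```python
# Toy illustration of Turing's point: the machine (the code) never changes;
# only the remembered data describing its mode of behaviour does.
# The table and stimuli are invented for illustration.

behaviour = {            # 'remembered data': stimulus -> response
    "light": "blink",
    "touch": "withdraw",
}

def step(stimulus):
    # The fixed, universal part: look the behaviour up rather than rewiring the code.
    return behaviour.get(stimulus, "ignore")

print(step("light"))     # blink

# 'Growing a new connection' amounts to editing the data, not the machine.
behaviour["sound"] = "startle"
print(step("sound"))     # startle
```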
