The Atomic Human, p.29

  Today our modern approaches to machine learning use exactly the approach Turing describes: we simulate neural networks on digital computers. But while Turing could imagine that solution, it was beyond the technology of his era to deliver it. What he was experiencing was an affordance gap. There was a separation between what he’d like to do and what he could do in 1946. He believed he would be able to build a machine that could simulate the brain, but his imagination had got ahead of his reality. While the things he dreamed of would become possible in time, they could not be realized in his time. This affordance gap is also what Bauby experiences. His imagination can take him to places where he cannot physically go; it allows him to imagine feeling things he can no longer feel and to act in ways he can no longer act. The Nobel Prize-winning Austrian zoologist Konrad Lorenz once wrote, ‘Thinking is acting in an imagined space.’ This is a beautiful description of what Bauby’s butterfly is doing, and it also captures the feeling Winnie-the-Pooh describes when talking about the difference between how the idea feels inside you versus when it is deployed in practice. Contrast this to the unthinking intelligence represented by Watt’s fly-ball governor. Both Turing and Bauby were thinking, but their imagined spaces did not match the real world they inhabited.

  When it comes to artificial intelligence, many of the ideas we see in popular culture share one characteristic. They depict AI as a form of automation that is capable of adapting to who we are. So, when people imagine an AI utopia, they imagine the personal assistant that accommodates our personal needs, a silicon manservant akin to Wooster’s Jeeves in the P. G. Wodehouse novels. Conversely, the AI dystopia consists of robots that understand us and dominate us. James Cameron’s Terminator robot has the capability to converse with humans, interpret human actions and even act as a human while exhibiting superhuman strength and invulnerability. In practice, the first wave of algorithmic decision-making – as characterized in Weapons of Math Destruction and The Age of Surveillance Capitalism – does have an understanding of us, but not one we would characterize as human. It manipulates us through its vast information capacity. It exploits unprecedented consumption of our personal data which gives it a different perspective on us. I call this phenomenon ‘the great AI fallacy’. The fallacy is that we think we have created a form of algorithmic intelligence that understands us in the way we understand each other. A technology that gives us the same feel for it as we have for our fellow human.

  When Wilbur Wright took the first flight, he and his brother had invented the mechanism by which he controlled their aircraft. On that first flight, he had to develop a feel for that control. He had to bridge an affordance gap because, before he flew, no one had ever steered a powered vehicle in three-dimensional space. Flying requires new controls for the roll, yaw and pitch of the aircraft.

  When Wilbur landed after his first flight, he would have shared his experience with his brother, Orville. He would have shared how it felt to use the stick to control the altitude of the aircraft, to be the first to develop a feel for the interface between a human and a powered flying machine. How we share such information between ourselves is a question we now need to answer.

  This is a challenge, because the gap between our expectations of AI and the reality of what we’re producing is closing. With generative AI and large language models such as ChatGPT we can now build machine-learning systems that provide plausible communication between human and machine using the languages humans have developed for themselves, rather than for their machines. This new frontier presents a promising route to ending the AI fallacy and bringing the technology we’re producing into line with people’s expectations.

  Turing was only thirty-four when he was running for Walton Athletic Club, quite young to have developed many eccentricities, but to the club’s secretary, Peter Harding, Turing already appeared different from the other club members. When I read about Alan Turing or Norbert Wiener I marvel at their capabilities, but I’m also relieved to read of their weaknesses. When Wiener or Turing were working, thinking, feverishly scribbling, pausing, reflecting, crossing out and correcting, they were engaged in deep reflection about the way the world might be. This is the trance William Blake captured when representing Newton in his famous print. Both Turing and Wiener were mathematicians, and this means they didn’t only reflect on their ideas, they could convert their thinking into rigorous mathematics and test it within that context. The aim of Bertrand Russell’s Principia was to show that mathematics is a principled and consistent framework of ideas: this makes it ideal for the brain to test its more improvisational thinking against. Wiener and Turing were using their pencil and paper to test and refine their intuitions and reflections on how mathematics might work. By refining their intuitions in this way, they could integrate their understanding in their fast-reacting, reflexive thinking. They could create mathematical experiences that were equivalent to Pooh’s experience of hooshing. This is how they developed their feel for mathematics. They could then compare this against the world around them and test their ideas for how mathematics manifests in the real world. Wiener and Turing would have, at times, felt at one with their mathematics, just as Lopez felt at one with his plane.

  In 1992, Kevin O’Regan captured this idea with a notion of ‘outside memory’.8 Outside memory is where the brain – rather than storing memories internally – relies on the consistency of the world around it for its information storage. Konrad Lorenz also described this in Behind the Mirror. Lorenz had made a life’s study of animal behaviour, founding the field of ethology, and in one section he describes the behaviour of an orang-utan faced with the problem of taking a banana hanging out of reach:

  … the orang-utan looked helplessly up and down from the box standing in one corner to the banana hanging in the other; then, in a fit of bad temper, it tried to turn its back on the problem…

  From Lorenz’s description, the animal is looking around at the items – including a movable box – and assimilating them in its analysis of the problem:

  … it turned its mind to the task again. Then suddenly its eyes moved from the box to the point on the floor immediately underneath the banana, from the floor upwards to the banana itself, then down again and from that spot back to the box. In a flash, as one can clearly see from the orang’s expressive face, it realizes the answer.9

  Konrad Lorenz’s description of the ape addressing the challenge is interspersed with the ape’s glances at the environment. As it is working its way through the answer it is continually grasping at the problem with its eyes. To have an understanding of a visual scene in front of you, you don’t need to remember where everything is. As long as your eye can rapidly saccade to extract a salient part of the scene, the brain can imagine it has access to the whole image. A similar effect would occur when Wiener and Turing worked on their mathematical ideas. When mathematicians iterate between mathematics and reflection, they test their intuitions in the framework of mathematics, just like the orang-utan tested its ideas in the visual scene in front of it. Wiener’s and Turing’s scribblings are performing the equivalent of the eye’s saccade to verify an aspect of the mathematics they need to formulate their theories. Blake’s Newton captures the scientist in the same moment as Lorenz’s description captures the ape. The geometric drawings and the dividers give Newton an outside memory which he manipulates to solve his geometric puzzle.

  Some of us are good at mathematics, and some of us are good at reading and understanding our fellow humans. The feel of a human conversation can work in a similar way to how Wiener and Turing developed their feel for mathematics. When communicating with a colleague I can test my ideas against their knowledge. I can explore limitations in my thinking by exploring their understanding in such a way that it highlights problems in my theory or plans. This is the process of conversation, but instead of testing our understanding directly against the real world or against the world of mathematics we test our understanding against each other. The beauty of Shannon’s theory of information is that it separates information from knowledge – that’s what allows us to compare the information transfer occurring in evolution with that between humans and that between machines. But by separating information from knowledge we lose the meaning of those conversations. When it comes to our culture and our conversations the nature of this knowledge goes to the very heart of the atomic human. These conversations build on our empathy, our culture and our wider social context.

  One of Norbert Wiener’s social difficulties had been in his relationship with Bertrand Russell. Both men realized early on that they weren’t going to be close friends, but Russell was mature enough to guide Wiener away from philosophy and towards mathematical colleagues, among whom Wiener found his true intellectual home. Wiener wasn’t the only child prodigy to interact with Russell. In 1938 Russell took sabbatical leave at the University of Chicago, where he taught seminars and an undergraduate course on ‘Problems of Philosophy’. There he met a fifteen-year-old runaway from Detroit, Walter Pitts. Pitts became party to some of the most important philosophical discussions of the decade. He witnessed Bertrand Russell debating with Rudolf Carnap on logic, language and the logical foundation of knowledge. The lectures must have made an impression, because when they finished, rather than travelling home, Pitts hung around the campus in Chicago. He didn’t register as a student; he took on menial jobs and studied with different members of faculty he encountered. Back in Europe, Turing, Flowers and Good were pitting their minds against the cryptographic codes of Germany. In Chicago, Pitts educated himself in mathematical logic, with a dose of Greek and Latin on the side. Three years later a psychologist from Yale arrived. His name was Warren McCulloch. The young Pitts’s itinerant lifestyle meant he had nowhere to live, so McCulloch welcomed him into his family home to stay with his wife and three kids. In the evenings they worked together on their shared passions: logic and the brain.

  McCulloch and Pitts were inspired by Alan Turing’s universal machine. They wondered how the brain could implement universal computation. The debates between Russell and Carnap caused them to view mathematical logic as a plausible route. The year was 1941, the Battle of the Atlantic had not yet finished, and Rommel was just arriving in North Africa. But McCulloch and Pitts were reflecting on how a network of firing neurons could represent thoughts. In 1943, just as Rommel was leaving North Africa in defeat, they described how such a neural network could represent logic:

  The ‘all-or-none’ law of nervous activity is sufficient to ensure that the activity of any neuron may be represented as a proposition.10

  Their theory was that the neuron was the basic logical unit of the nervous system, and that composition of these neurons led to the process of thought. They viewed the neuron as a universal gate for intelligence. They called the composition of these neurons nervous nets. Their paper is the first example of the type of model Turing was referring to when he wrote to Ashby. The ideas of McCulloch and Pitts are the foundation of the modern methods that have revolutionized artificial intelligence. They are the first examples of models that we call neural networks.

  The approach McCulloch and Pitts came up with was called threshold logic. In their model, the neuron firing represents True. Their threshold function is a mathematical function that operates as a switch. Just like a logic gate, it has an input, but instead of being a discrete True/False input, the input can be a continuous number. If that input is above a particular value, called the threshold, the output of the function is 1. If that input is below the threshold, the output is 0. The 1 represents True and the 0 represents False.

  The simplest threshold function has one input. Let’s imagine there’s a simple McCulloch–Pitts sensorimotor neuron in your head. You use it to decide whether to go swimming in an outdoor lake. We assume you have a receptor cell that senses the temperature. The McCulloch–Pitts model for the neuron says the sensory neuron will fire if the temperature goes above a particular threshold, let’s say 25°C. If the neuron fires, then you go swimming. From Braitenberg and O’Regan’s perspective, this is a sensorimotor response: the neuron detects that it’s warm and the action is to go swimming; from McCulloch and Pitts’s perspective, this sensorimotor approach is a proposition: ‘When the temperature is above 25°C, I go swimming.’ Because the neuron fires that proposition is True.
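As a minimal sketch (not from the original text), the single-input swimming neuron can be written in a few lines of Python; the 25°C threshold is the example value above, and the temperatures are invented for illustration:

```python
def mp_neuron(x, threshold):
    # McCulloch–Pitts threshold function: output 1 (True, the neuron
    # 'fires') when the input reaches the threshold, otherwise 0 (False).
    return 1 if x >= threshold else 0

# The proposition: 'When the temperature is above 25°C, I go swimming.'
go_swimming = mp_neuron(30.0, 25)   # warm day: the neuron fires, output 1
stay_dry = mp_neuron(18.0, 25)      # cool day: the neuron is silent, output 0
```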

  This is a model, so it’s oversimplifying things. Imagine if it really were like that, if each time the temperature went over 25°C we all jumped into the nearest body of water. That doesn’t seem a very sensible model, but the McCulloch–Pitts neuron takes this into account. There may be other factors, like whether you have your swimming costume, whether you’re feeling confident about your body image, which other people are around. So, you end up with a proposition more of the form ‘If the temperature is high And I have my swimming trunks And I’m feeling comfortable about my body image And I’m with friendly people Then I go swimming.’ This proposition could be implemented with And gates. But it can also be implemented in the McCulloch–Pitts neuron.
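A sketch of how that And proposition fits into the McCulloch–Pitts unit: with every input weighted equally, the neuron fires only when the sum of its (binary) inputs reaches their count, which is exactly the behaviour of an And gate. The four factor names are illustrative labels, not values from the text:

```python
def mp_and(inputs):
    # An n-input And gate realized as a McCulloch–Pitts unit:
    # the unit fires (1) only when the sum of the binary inputs
    # reaches n, i.e. when every factor is True.
    return 1 if sum(inputs) >= len(inputs) else 0

# Factors: warm enough, have trunks, body comfort, friendly company.
swim = mp_and([1, 1, 1, 1])      # every factor holds: fires, output 1
no_swim = mp_and([1, 0, 1, 1])   # no swimming trunks: silent, output 0
```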

  The assumption that McCulloch and Pitts made was that these different factors – your sense of your body image, the nature of the people you’re with, and so on – could each be encoded in a separate neuron that could feed into the swimming-decision neuron. These neurons could be composed, so the balancing of different behaviours could be encoded in how they are all wired together. When we compose logical gates to form Wittgenstein’s truth tables, we feed the output of one gate into the input of the next. But in the McCulloch–Pitts nervous nets there is an additional step. It is called weighting. Before feeding the next neuron, the output of the previous neuron is weighted by a number. If the number is small, the influence of that neuron on the next is small. If it is large, there is a large influence.

  The weights are the set of parameters. We’ve already encountered parameters in the Albion fly-ball – they were the side measures like the length of the linkages. We can think of each weight as dictating how much influence each input has on the final decision. In the nervous-net model of thought, the weight reflects something about your behaviour, about your personality. How important is your body comfort compared to your desire to swim? How concerned are you about what other people think? How warm does it have to be before you want to jump in? Maybe on a warm day, you don’t care what other people think. In this way we can represent personality with parameters. In the logical-threshold model of the neuron, you add up the weights associated with each of these factors, and you compare them with the threshold. If they are above the threshold, you go swimming. The weights in the model represent synapses in real neurons. The synapse is a small gap between the axon of one neuron and the body of the next. It regulates how much charge flows between any two neurons. In the McCulloch–Pitts model, the weight is analogous to the conductivity of the gap.
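Putting the pieces together, a weighted McCulloch–Pitts neuron scales each input by its weight, sums the results and compares the total with the threshold. The weights below are hypothetical ‘personality’ parameters invented for illustration, not values from the text:

```python
def weighted_neuron(inputs, weights, threshold):
    # Weighted McCulloch–Pitts unit: each input is scaled by its
    # weight (analogous to the synapse's conductivity) before the
    # total is compared against the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Hypothetical personality weights for the swimming decision:
# warmth matters most, other people's opinions least.
weights = [0.6, 0.3, 0.3, 0.1]  # warm, trunks, body comfort, company
swim = weighted_neuron([1, 1, 1, 0], weights, 0.9)   # 1.2 >= 0.9: fires
```

On this setting a warm day with trunks and body comfort clears the threshold even without friendly company: the same wiring, with different weights, expresses a different personality.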

  The McCulloch and Pitts approach was highly influential. It showed how the nervous system could operate according to logical rules. The two men became famous across the academic community. In the introduction to Cybernetics, despite his earlier scepticism of Russell’s logical ideas, Wiener explicitly mentions Walter Pitts and the importance of mathematical logic.

  Pitts and McCulloch had drawn the same conclusion about how to model the brain that Shannon had derived when modelling communication networks. Logic was the key. At the same time their paper was being published, across the Atlantic in England, Tommy Flowers was planning the Colossus as a logic-based machine for breaking the German High Command codes.

  Ashby’s ideas for the brain were based on the importance of homeostasis – the idea that lifeforms must be adaptable to respond to changes in their environment. Maintaining homeostasis is why bacteria cluster together in colonies when they come under threat. Our machines can also struggle with their environments. In the history of automation, we don’t normally cast them into the world to do direct battle with the gremlins of uncertainty. We coddle them in protected spaces. We place them in factories, or we build railways for them, or we pave our landscape with roads. When we build a machine to do work for us, we shelter it from external challenges that would damage it. In this way uncertainty for the machine is reduced, and its job is made easier. But all this is only possible because of the care humans give to the machine. When we build an engine, its shelter, fuel and repairs are all provided by the human hands of engineers, mechanics and technicians.

  The needs of the machine defined the century following Watt’s refinement of the steam engine. The rise of the factory system, the division of labour and the mechanization of society required humans to maintain our creations. For the machine to operate, the human had to adapt to its needs. Albion Mill, which Boulton and Watt fitted with steam engines, was completed in 1786. Soon it was grinding enough wheat to provide flour for the whole of the City of London. The new factory created a furious reaction among local mill owners and was burnt to the ground in 1791. Despite this setback, the nineteenth century still became the century of steam. In 1863, Samuel Butler, then working as a sheep farmer in New Zealand, wrote to the editor of The Press about the rise of the machine:

  … but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organization; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race.

  Butler was writing just three and a half years after Charles Darwin published On the Origin of Species – the book that laid out the principles of natural selection. Butler’s emphasis on the self-regulating machine stems from the developments made in control systems, the autopilots that stem from Watt’s use of the fly-ball. Nineteenth-century New Zealand sheep farming might be as far away from the industrial landscape as you could imagine, but machines were bringing the world closer together. By 1858 the first transatlantic telegraph cable had been laid, reducing communication times between North America and Europe from ten days to minutes. The printing press had allowed ideas to be widely distributed as books; the telegraph now allowed information to be shared across the Atlantic almost instantaneously. New Zealand was still isolated from this network, but Butler’s letter lays down a vision of machines evolving and improving. It reflects the extent to which machines already held society in slavery:

 
