The atomic human, p.9
September 2019, I’m at my son’s school in Cambridge, listening to his headmistress. My mind drifts and I look around the hall to the rolls of honour for former students. One name is everywhere: state scholarship, university scholarship, fellowship at Trinity College, fellowship of the Royal Society. Inscribed in gold letters: W. T. Tutte. These boards show a litany of achievements, but they only scratch the surface. There is no mention of Tutte’s most important work, because he single-handedly reverse-engineered the Lorenz cipher. Even though he never saw a Lorenz machine until after the war, he was able to unpick this new lock.
In August 1941 a German operator made a major error: two almost identical messages were sent from the same cipher machine. Careful analysis unveiled both the plaintext form of the message and the encryption key used. Bill Tutte analysed the key, and in arguably the greatest single intellectual achievement of the Second World War he was able to deduce the entire design of the Lorenz machine from patterns in the key.
Tutte’s analysis identified weaknesses in the code. Just like the Enigma, the Lorenz cipher can be seen as a mathematical function. The Enigma was a substitution cipher, but the messages Hitler’s machine sent were digitized, converted into a stream of electronic bits: 1s and 0s, like Shannon’s representation of information. This bitstream was combined with a key of electronic bits to lock the message – the approach is known as a Vernam cipher.5 Tutte’s work enabled a statistical attack on the Lorenz cipher, but even with this statistical attack the follow-up brute-force work was too much for humans or even the electromechanical bombes. They needed a new machine, a new way of exploring the millions of combinations more quickly.
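The Vernam construction described above is simply a bitwise exclusive-or of message and key, and it is also what made the August 1941 error so costly: lock two messages with the same key and the key cancels out entirely. A minimal sketch in Python (the byte values are illustrative stand-ins, not genuine Lorenz traffic):

```python
def vernam(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte.

    Applying the same operation twice with the same key recovers the
    plaintext, because x ^ k ^ k == x: encryption and decryption are
    the same function.
    """
    return bytes(m ^ k for m, k in zip(data, key))

key = b"\x5a\x13\xe7\x21\x08"   # hypothetical key stream
p1  = b"\x01\x02\x03\x04\x05"   # two made-up plaintexts...
p2  = b"\x01\x02\x03\x04\x06"   # ...that are almost identical

c1, c2 = vernam(p1, key), vernam(p2, key)

# Decryption is the same XOR:
assert vernam(c1, key) == p1

# Key reuse: XORing the two ciphertexts cancels the key, leaving only
# the XOR of the two plaintexts -- the 'depth' that let the analysts
# recover both messages and, crucially, the key itself.
assert vernam(c1, c2) == vernam(p1, p2)
```

The second assertion is the whole weakness in one line: an eavesdropper who never sees the key can still strip it away whenever it is used twice.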
Jack Good, who coined the term ‘ultraintelligence’, was on the team asked to help. They were led by a mathematician called Max Newman, and they were also joined by Donald Michie, a nineteen-year-old wannabe linguist who had accidentally taken a cryptography course. The situation was becoming desperate. When the Germans first started using the Lorenz cipher, the operators were sloppy and made errors in their encryption. That made it easier to crack this new mathematical combination lock – the plans for the Battle of Kursk that turned the tide on the Eastern Front were unlocked in this way – but the operators’ practices had improved and it was becoming harder to break the messages.
Newman’s team built different machines, including one called the ‘Heath Robinson’. Heath Robinson was a cartoonist who drew elaborate machines that performed simple tasks.6 The machine was well named, because despite its spinning wheels and paper tapes it was performing a simple comparison-and-add task. Two paper tapes were used, one containing the ciphertext, the other containing part of the key, and comparing the tapes allowed the team to apply the statistical attack to decode the message.
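The comparison-and-add task can be caricatured in a few lines: slide one bit stream against another, count agreements, and look for a count that rises above the roughly 50 per cent expected by chance. This is only a toy illustration of that kind of statistical test, not the actual procedure used at Bletchley Park, and the bias level here is invented:

```python
import random

def agreements(stream_a, stream_b):
    """Count the positions at which two bit streams agree."""
    return sum(a == b for a, b in zip(stream_a, stream_b))

random.seed(0)
N = 10_000
key = [random.randint(0, 1) for _ in range(N)]

# A ciphertext stream statistically related to the key: each bit
# matches the key with probability 0.55 rather than 0.5 (a stand-in
# for the bias Tutte found, not real Lorenz structure).
cipher = [k if random.random() < 0.55 else 1 - k for k in key]

# A wrong guess at the key shows no bias; the right one stands out.
wrong_key = [random.randint(0, 1) for _ in range(N)]

print(agreements(cipher, key))        # well above the chance level of ~5,000
print(agreements(cipher, wrong_key))  # close to chance
```

Each candidate setting of the key wheels meant rerunning a count like this over the whole tape, which is why the millions of combinations swamped human computers and demanded a faster machine.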
Magnetic relays from telephone exchanges had enabled Turing to design the bombes for decoding Enigma. Tommy Flowers was a telecoms engineer who worked on those automated exchanges before the war. At Bletchley Park, experts like Flowers brought their knowledge of automated exchanges to help in building the bombes and other machines.
Flowers was called up to Bletchley Park from his office in London. Newman’s team wanted him to design a counting system for comparing the two tapes in the Heath Robinson. Flowers realized that the counting system could work faster if they used thermionic valves instead of relays. Valves are the switches that radio sets, like my grandfather’s, used. They switch faster than relays because they are switched by a beam of electrons instead of by a metal contact. With the valves the Heath Robinson could operate faster, but that caused its paper tapes to stretch, split and spread across the floor as they unspooled.
Flowers knew that the limitation of the machine was the spinning paper tapes and he believed that the solution was to replace the paper tape by storing the key electronically: by storing half the system as electrons, a much faster machine could be built. Flowers sketched out a plan. He would use thermionic valves to represent the Lorenz cipher key. He could use the valves to represent the 1s and 0s by having the voltage of the valve as either negative or positive. He sketched a design for the machine: it would need 1,600 valves.
For thousands of years, we have used paper to store and share ideas. In Michelangelo’s time, the printing press sped up the copying of those ideas. Five hundred years after the printing press was developed, moving ideas from paper to electronics was the next step forward in the information revolution. What Flowers was proposing to build was the world’s first electronic programmable computer. If it worked, it would enable near-real-time reading of Germany’s most senior communications. It was massively ambitious.
In great expectation, Flowers shared the idea with Alan Turing, Jack Good and the rest of Newman’s team. But to his disappointment, they pointed out a major flaw. Thermionic valves were known to be unreliable. As even my grandfather knew, they needed regular replacing: like an incandescent lightbulb, they could blow. Flowers was asking to use a colossal number of valves; failures would mean that, even if such a machine could be built, it would hardly ever be working. The idea was a non-starter. Flowers had just presented one of the most far-seeing ideas of the twentieth century to some of the world’s most brilliant minds. Their disapproval must have been a severe setback. It could even have embarrassed him. How did he react? He built it anyway.
Flowers had worked on large-valve systems for automated telephone exchanges before the war and knew they could be reliable. Valve failure occurred due to heat cycling when the machine was switched on and off. The trick was, don’t switch the machine off. Failure could not be eliminated, but it could be reduced to very manageable levels.
Flowers disobeyed his instructions and exercised his devolved autonomy. Turing, Newman, Michie and Good were some of the most intelligent people in Bletchley Park, but they didn’t have Flowers’s experience. That experience was locked into his mind: if he could have shared everything he knew, they would have backed him to the hilt, but, like all humans, he was bandwidth limited. He couldn’t share both his intent and his full understanding with his allies. So they were sceptical. But Flowers had common purpose with Turing, Newman, Michie and Good. He was able to use that common purpose to support Bletchley Park by building the machine they thought wouldn’t work.
People must have trusted Flowers, because not only did he decide to build the machine, he persuaded his boss to support him with staff, space and supplies. Supplies that included thousands of precious valves. Flowers inspired his team to work eighty-hour weeks, and they continued at this relentless pace for ten months. Only a few knew the true purpose of the machine, but all of them knew it was important for the war. By February 1944 it was complete. It was the largest machine yet at Bletchley Park. They called it Colossus.
Colossus was brought into operation, and Newman’s team waited for the inevitable valve failures. They didn’t come. Flowers’s instinct was right.
The machine we saw Tony Sale operating on that day at Bletchley Park in 2005 was a Mark II Colossus. I’ve worked with big computers and stood in the rooms where they used to operate. I also stood with my colleagues watching Tony’s reconstruction of Colossus. The power requirements for heating thousands of valves are huge. Special electrical supplies are needed. The heat causes acrid smells to be emitted from the components. Tony slowly turned his creation on, powering it up gradually to avoid the heat cycling that would damage the valves. Today, we can visit these machines in museums; we can sample these smells from the past. But for Flowers, when he switched on Colossus, he was switching on the future – the electronic computer.
Turing and the team must have been astonished. In his pre-war mathematics work Turing had developed the theoretical idea of a ‘computing machine’. He conceived of it to prove mathematical theorems. He invented an idea called universal computation, a machine that could simulate any other computing machine. He showed what properties such a machine would need to have. Until the day that Flowers switched on Colossus, Turing’s machine had just been a notion, a concept for debate over a cup of tea. Flowers was demonstrating that universal computers could be built.
Colossus is the ancestor of all the computers we’re working on today, including those at Facebook. It was Flowers’s breakthrough that enabled the eventual vast assimilation of information that underpins the social network. Seven years after that demonstration, and seventy years after the Colossus was conceived, Joaquin joined Facebook. The company already had nearly a billion users and just under 5,000 employees. Facebook could track and store all conversations between those billion people. In 1943, Bletchley Park had 12,000 people to track and monitor the conversations of the German armed forces, which totalled 18 million men. Facebook can manage so much more information because it uses more automated decision-making. The information-processing pipelines pioneered at Bletchley Park could now be constructed by small groups of engineers using vast electronic databases. Processing information had moved from cottage industry to megafactory.
Still, on joining Facebook, Joaquin realized that even its information-processing pipelines were too piecemeal for the new approaches required for machine intelligence. He began working on a framework that would allow any engineer to quickly create and test new machine-learning algorithms. He automated the process of training those algorithms and testing how well they performed. The approach revolutionized how Facebook went about deploying machine learning. It was called FBLearner. By 2016 this new approach had become incredibly successful in the company:
FBLearner Flow is used by more than 25 percent of Facebook’s engineering team. Since its inception, more than a million models have been trained, and our prediction service has grown to make more than 6 million predictions per second.7
Joaquin had created an information infrastructure enabling software engineers at Facebook to deploy automated decision-makers at scale. Those 6 million predictions per second were decisions being made about over a billion people.
Back in Bletchley Park, they didn’t have time to waste thinking about the broader implications of their new machine. Their aim was to crack the Lorenz cipher. Flowers had built a demonstrator system, but lessons had been learned. The team immediately commissioned him to build an updated version. His exhausted team redoubled their efforts to build the Colossus Mark II. Four months later, in the early hours of 1 June 1944, Flowers was wading ankle-deep in water from a broken pipe, making the final connections to bring the new machine online. Four days later, and Eisenhower was reading one of its first decrypts and ordering the invasion of Normandy. Eisenhower read his enemy’s mind using the first electronic computer, and Fred woke up at his home in Kenilworth to hear that his unit was about to go and fight in France. Flowers’s machine hadn’t just launched an invasion, it had launched an intellectual revolution.
When looking to achieve their objective, the military intelligence services of the United Kingdom had one clear advantage over our brains. They knew who their enemies were. Human beings are placed in a different position. We are constantly faced with other humans who may be collaborating or competing with us. Our intelligence’s high embodiment factor means that even if we want to openly share our knowledge and intent, we can’t. At Bletchley Park there was a hybrid combination of human and machine working together in an information assembly line. The challenge they faced was decomposed into separate tasks and specially trained humans or machines were deployed to complete each task. This decomposition was possible because the objective was known: decode the German intercepts. In contrast, for human beings, the types of challenges we face vary, so our intelligence needs to be more adaptable.
Flowers surged ahead with developing Colossus despite the scepticism of the team. He couldn’t share his deeper understanding of the thermionic valve. If he could have, they would have fully backed him. As it was, they had to trust him, but their trust was qualified. They didn’t back him, but they didn’t block him.
This notion of trust, a suspension of scepticism arising from faith in another’s capability and motives, is critical to efficient human collaboration. It’s a vital component of the system of devolved autonomy we use to collaborate. Operating within a network of trusted individuals towards a shared aim leaves us free to focus on our own tasks without concerning ourselves with the motives of others. It allows us to overcome our intelligence’s high embodiment factor. However, imbued trust also comes with risk, because it leaves us vulnerable. Trust implies we are no longer sceptical of motives, because we believe we have alignment. When this is not true, we are exposed to manipulation.
On Thursday, 10 November 2016, two days after Donald Trump had been elected President of the United States, Mark Zuckerberg announced to assembled attendees at the Techonomy 16 meeting: ‘… the idea that fake news on Facebook… influenced the election in any way I think is a pretty crazy idea’.
Eleven months later, Zuckerberg was testifying in front of the US Senate. Facebook’s own internal investigations had shown that a Russian company known as the Internet Research Agency (IRA) had engaged in systematic exploitation of the Facebook platform.
Facebook estimates that as many as 126 million Americans on the social media platform came into contact with content manufactured and disseminated by the IRA, via its Facebook pages, at some point between 2015 and 2017. Using contrived personas and organizations, IRA page administrators masqueraded as proponents and advocates for positions on an array of sensitive social issues. The IRA’s Facebook effort countenanced the full spectrum of American politics, and included content and pages directed at politically right-leaning perspectives on immigration policy, the Second Amendment, and Southern culture, as well as content and pages directed at left-leaning perspectives on police brutality, race, and sexual identity.
So much for a pretty crazy idea. The IRA consisted of around 1,000 staff. By creating a few hundred pages they managed to coordinate a disinformation campaign which reached 126 million people. This small Russian entity could have such an outsize effect because it tailored the way it shared its posts to be sustained and spread by the artificial ecosystem Facebook had created. The IRA exploited Facebook’s automated decision-making to propagate misinformation. As Facebook’s supreme commander, Zuckerberg had lost control of his system.
In the Techonomy interview quoted above, Zuckerberg had explained how AI algorithms with thousands of parameters are used by Facebook to determine what content to share with which users. Zuckerberg’s naive confidence in the algorithm is at the heart of the problem. The algorithm provides a single point of attack, just as the German military’s shared use of the Enigma code provided the Allies with one. The IRA was able to identify weaknesses and exploit them. It leveraged its understanding of Facebook’s information infrastructure to sow disruption across the social network.
Human intelligence evolved in social groups, and for coherence and trust those groups have depended on validation from their peers. This validation breeds the necessary trust between us. Given our limited communication bandwidth, it feels natural that we would seek out such validation. I sometimes think of our need for information as akin to our need for food. So we have a cognitive diet, just like we have an actual diet. Real-world social validation is like finding fresh fruit on a tree – a wonderful opportunity to eat. Our bodies have a particular response to fructose, the sugar we find in fruit. What the algorithms behind social media companies have been able to re-create is an artificial sense of social validation – they are feeding us with the cognitive equivalent of high-fructose corn syrup. It triggers our sense of validation, but it lacks the cognitive nutritional value that real social validation brings.
The algorithm that ranks your posts in Facebook is called the Newsfeed ranking, and the IRA managed to find the cognitive equivalent of catnip. The Russian agents flavoured their posts with sharp discord: tailored misinformation. The posts were widely shared by US citizens. The IRA found weaknesses in the Facebook algorithm in the same way the Bletchley Park codebreakers had found weaknesses within German codes. Once the algorithmic weakness was discovered it could be repeatedly exploited because decision-making had been devolved to automatons.
It wasn’t only the Internet Research Agency that found Facebook’s ecosystem to be a happy hunting ground for manipulation. The IRA operated in the shadows of Facebook, creating fake accounts to share fake news stories. The extent of the IRA’s activities was revealed only after a ten-month investigation, but another would-be internet Svengali had no such qualms. This company was open about how it planned to influence elections and proud of its ability to deliver. Like a Bond supervillain, it boldly shared its plans for world domination while just on the cusp of success. Cambridge Analytica was a London-based company that sold services to political campaigns. It was interested in political advertising and its business idea was to tailor its adverts to the individual who was receiving them. The company could sell a different message to different voters. Adverts have always targeted different sectors of society, but the new proposal was to target the individual. The idea is known as microtargeting.
Just as the Bletchley Park codebreakers harvested information about individual radio operators and their foibles, Cambridge Analytica illegally harvested data about individual Facebook users. It used each individual’s post-likes to assess their personality. The syrupy sweetness of Facebook’s cognitive offering is the social validation given by receiving likes on a post. But, by sharing our preferences, we also share insight into our personalities. Our post-likes are a strong predictor of our psychometric profile. This profile is an advanced way of thinking about our individual personality, a deeper way of characterizing each one of us and what our susceptibilities are. The profile was then used to target specific political adverts at users.
