The Atomic Human

Copyright © 2024 by Neil D. Lawrence
Cover design by Pete Garceau
Cover image © iStock/Getty Images
Cover copyright © 2024 by Hachette Book Group, Inc.
Hachette Book Group supports the right to free expression and the value of copyright. The purpose of copyright is to encourage writers and artists to produce the creative works that enrich our culture.
The scanning, uploading, and distribution of this book without permission is a theft of the author’s intellectual property. If you would like permission to use material from the book (other than for review purposes), please contact permissions@hbgusa.com. Thank you for your support of the author’s rights.
PublicAffairs
Hachette Book Group
1290 Avenue of the Americas, New York, NY 10104
www.publicaffairsbooks.com
@Public_Affairs
Originally published in hardcover and ebook in the United Kingdom by Allen Lane, an imprint of Penguin Books in June 2024.
First US Edition: September 2024
Published by PublicAffairs, an imprint of Hachette Book Group, Inc. The PublicAffairs name and logo are registered trademarks of the Hachette Book Group.
The Hachette Speakers Bureau provides a wide range of authors for speaking events. To find out more, go to hachettespeakersbureau.com or email HachetteSpeakers@hbgusa.com.
PublicAffairs books may be purchased in bulk for business, educational, or promotional use. For more information, please contact your local bookseller or the Hachette Book Group Special Markets Department at special.markets@hbgusa.com.
The publisher is not responsible for websites (or their content) that are not owned by the publisher.
Library of Congress Control Number: 2024936714
ISBNs: 9781541705128 (hardcover), 9781541705142 (ebook)
Contents
Cover
Title Page
Copyright
Dedication
Prologue
1. Gods and Robots
2. Automatons
3. Intent
4. Persistence
5. Enlightenment
6. The Gremlin of Uncertainty
7. It’s Not Rocket Science or Brain Surgery
8. System Zero
9. A Design for a Brain
10. Gaslighting
11. Human–Analogue Machines
12. Trust
Epilogue
Acknowledgements
Discover More
About the Author
Praise for The Atomic Human
Notes
To Valerie, Mark and Garth
Prologue
December 8th, 2013: Penthouse Suite, Harrah’s Casino, Stateline, Nevada. This was the moment at which I became an artificial intelligence researcher. Or, at least, the moment when the research field I work in started calling itself ‘artificial intelligence’.
I was at the casino for a scientific meeting called Neural Information Processing Systems, the main international machine-learning conference. I’d attended every year for sixteen years, and I’d just been asked to chair it the following year. But the community I was leading was about to take a major turn.
A handful of global experts had been invited to Mark Zuckerberg’s penthouse suite. Corporate events were not uncommon at the conference: after all, Stateline, on the south-eastern shore of Lake Tahoe, is where Nevada borders California and is only a four-hour drive from Silicon Valley. But this Facebook gathering had a different feel. Twelve months earlier, a breakthrough study had been published, a step-change in the computer’s capability to digest information. Computers were now able to discern objects in images – finally, computers could see. You could present them with a picture, and they might (correctly) tell you it contained, for instance, a container ship, or a leopard, or some mushrooms. The key advance emerged from an approach known as ‘deep learning’. This works in the opposite way to how an artist paints. Artists compose pictures from brushstrokes; deep learning decomposes images into mathematical signatures that give the essence of an object. The team that delivered the result quickly formed a company. It was bought by Google in a multimillion-dollar deal.
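To make this concrete, the sketch below shows the kind of object recognition the breakthrough enabled. It is an illustration under stated assumptions, not the original system: it uses a modern pretrained network (ResNet-50) from the torchvision library rather than the 2012 architecture, and photo.jpg is a hypothetical placeholder file.

```python
# A sketch of deep-learning image recognition: a pretrained network
# maps a photograph to a label such as 'container ship' or 'leopard'.
from PIL import Image
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT           # ImageNet-trained weights
model = resnet50(weights=weights).eval()     # network in inference mode
preprocess = weights.transforms()            # resize, crop and normalize

img = Image.open("photo.jpg").convert("RGB") # hypothetical input image
batch = preprocess(img).unsqueeze(0)         # add a batch dimension

with torch.no_grad():                        # no gradients needed at inference
    probabilities = model(batch).softmax(dim=1)

best = probabilities.argmax().item()
print(weights.meta["categories"][best])      # the network's best guess
```

Each layer decomposes the image a little further – early layers respond to edges and textures, later ones to object parts and whole objects – which is the sense in which deep learning works in the opposite direction to the artist’s brushstroke.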
Our new millennium’s media of communication are images, videos, text messages and likes, and deep learning had now given us the ability to read the images being shared by Facebook’s users, potentially allowing the company to understand what its users were sharing without having to manually view the hundreds of millions of photographs on the site. The knock-on effects, not only in terms of advertising and moderation but in many other aspects, could be enormous.
Facebook’s rival, Google, had stolen a march on it by buying up the talent that made this breakthrough. Facebook was about to make its response. It was a gamble, a major investment in a new direction. That was why we were in the penthouse suite: Facebook was going all in on machine learning, all in on deep learning. To show the company was serious, the small group assembled at the top of Harrah’s Casino included Facebook’s Chief Technology Officer, Mike Schroepfer, and its 29-year-old CEO, Mark Zuckerberg.
The plan was simple. Recruit one of the principal researchers driving the revolution in machine learning. Unveil them as the head of Facebook’s new research lab, then develop and monetize this new technology which could radically change the platform. They chose Yann LeCun, a professor from New York University with twenty-five years’ experience in the field. Yann’s early career had been spent working at Bell Labs, developing the ideas that would deliver deep learning’s success. He would go on to win the 2018 Turing Award, the ‘Nobel Prize of Computer Science’, for this work, but five years earlier in Harrah’s, the attention was not from his academic peers but from industry. I had known Yann for fifteen years, and the words that came out of his mouth when he launched the lab surprised me. He was quite specific: ‘The new lab will be focused on advancing AI.’
AI – artificial intelligence. This wasn’t to be a machine-learning research lab, or a deep-learning research lab. It chose to call itself ‘Facebook AI Research’, and at that moment the die was cast.
The term ‘artificial intelligence’ has a chequered history. Classical AI techniques relied on logical reasoning about the world, like a silicon version of Sherlock Holmes. Outcomes were deduced through weighing evidence according to the strict rules of mathematical logic. But while Sherlock Holmes’s deductions may make an entertaining read in a detective novel, they prove to be very brittle when deployed on real-world problems. The world we live in is more complex and nuanced than a simplistic logical representation can accommodate. Back in 2013, my community, the machine learners, associated the term ‘AI’ with a group of researchers that had heavily over-promised and considerably under-delivered. Despite these failings, AI had maintained a strong hold on the popular imagination. Perhaps Yann wanted to capitalize on that, but the term comes loaded with cultural baggage. Past promises have led to representations on film and in literature that, like Sherlock Holmes, entertain but mislead us about the nature of intelligence. So what is intelligence – and what is artificial intelligence? One of the major challenges when writing about intelligence is that ‘intelligent’ means different things to different people in different contexts. In this book, I will look at how this shapes – and sometimes distorts – our perceptions of what artificial intelligence is, and what it can be. That’s why sometimes I refer to machine intelligence instead of artificial intelligence. I am hoping that a fresh term will bring fresh understanding.
Henry Ford built his eponymous car company around the principles of mass production, creating the first widely affordable car, the Model T. The story goes that, when he was designing it, if he had asked his customers what they wanted, they would have asked for ‘a faster horse’. If Henry Ford were selling us machine intelligence today, would the customer call for ‘a smarter human’? On that evening in Nevada, Mark Zuckerberg was the customer: he was buying an AI lab, so just what did he think he was getting for his money?
Zuckerberg spent the event glad-handing potential recruits and outlining his vision for Facebook. At one point I stood listening to him, standing alongside two friends and colleagues, both long-time academics in the machine-learning world. Max Welling was a professor at the University of Amsterdam and Zoubin Ghahramani a professor at the University of Cambridge. Zuckerberg laid out his vision to the three of us, one of an interconnected world. A pure, single-minded vision. I was reminded of listening to a very bright graduate student, one who was flexing the new-found intellectual freedoms offered at university, imagining the changes their ideas would bring to the world. I had to resist my conditioned response. Normally, I enthuse about my students’ dreams but warn of the reality of the challenges they are likely to face. Encouragement and guidance. That evening, I was listening to a billionaire who had already connected a fifth of the world’s population to one another. His dream had already been realized, and artificial intelligence was a new frontier for him to assimilate within his ambitions.
And it wasn’t only Facebook that was turning its focus to AI. Within a month there was another announcement, in London. Google had built on its initial investment by spending a further $500 million on a London-based start-up, DeepMind. This ambitious venture had the stated aim to first solve intelligence, then use it to solve everything else. But what does this tag line even mean? Can intelligence be ‘solved’? Is it like a crossword puzzle, where the words just need to be slotted into place? I’m uncomfortable with that idea. I prefer to think of it as a riddle, where the wording of the riddle itself might be misleading you.
In retrospect, for me, that December day in Nevada was the start of a journey. Since then, my research field has transformed into something unrecognizable from its beginnings. That conference in 2013 attracted a record 2,000 delegates. Six years later, there were 13,000 attendees, and a long waiting list. As the meeting has changed, our community has also had to change: machine learning has become more important within society. Algorithms we develop are now widely deployed, not only on social networks but in robotic systems and driverless cars. These algorithms have a direct effect on our lives and on our deaths. Most recently, the techniques we have developed have been able to pass the Turing Test, a long-time flagship challenge in AI. Zoubin went on to be Uber’s first Chief Scientist and now leads Google’s AI division. Max became Vice President of Machine Learning at Qualcomm and is now a Distinguished Scientist at Microsoft. I spent three years heading up an Amazon machine-learning division. Facebook’s wooing of us would, as it turned out, presage a movement across the tech industries and the wider world: one that we’re still just at the beginning of.
1.
Gods and Robots
At the centre of the Sistine Chapel’s ceiling, far above your head, is probably the most famous image of God and Adam. It depicts the creation story from Genesis: in the Bible, man is described as being formed of the dust from the ground, with God blowing the breath of life into his nostrils. The great Renaissance artist Michelangelo represented God in classical form on the chapel’s ceiling. A white man with a large white beard and a flowing robe, God reaches out to touch the finger of a languid Adam, who reclines on a hillside.
Today, society is focused on a different form of creation – the realization of machine intelligence – and for this creation we form our own modern images. There was a time when the lead image of just about every article on artificial intelligence was a red-eyed android. While God breathed life into Adam, James Cameron’s Terminator was created to snuff out life from humanity.1
These two images are separated by five centuries, and they are opposites: God is creator; the Terminator is created. God is our patron; the robot our nemesis. But the images have a thread of commonality to them, one that is intrinsically linked to a key characteristic of human intelligence.
Machines automate human labour, and we can trace the history of automation to the period when Michelangelo was finishing the Sistine Chapel ceiling. In his time, any image that was created had to be manually rendered: the Sistine Chapel ceiling was painted in fresco by the artist lying on his back. The Terminator, by contrast, was rendered by machine-augmented special effects and a film camera. It can be reproduced almost instantaneously wherever it is required.
Yet the efficient reproduction of images and text dates back to Michelangelo’s lifetime. When he was painting the ceiling the printing press was already becoming popular. Initially it was used for the reproduction of biblical texts, but it was soon employed to automate the work of writing indulgences – promissory notes granted by the Catholic Church intended to provide relief in Purgatory for earthly sin. The printing press allowed for widespread sale of indulgences, which no longer had to be laboriously copied. The proceeds were used to fund one of Michelangelo’s later commissions: the dome of St Peter’s Basilica in Rome.
The printing press automated the copying of writing and so facilitated the rapid exchange of information. The high-pressure tactics used to promote the sale of indulgences led, both directly and indirectly, to the Protestant Reformation. But the printing press also enabled classical works on mathematics, logic and science to be widely shared. The printed word propelled Europe from the Renaissance to the Enlightenment. Literacy increased, and, from da Vinci to Kepler to Newton, innovation transferred across generations.
Printing removed a blockage on our ideas, allowing them to flow more freely across time and distance, and leading eventually to our modern, technology-driven society. Photography and film cameras automated the creation of images, removing obstacles to the labour of creation. The printing press automated the sharing of the written word, releasing our ideas. Artificial intelligence is the automation of decision-making, and it is unblocking the bottleneck of human choices.
In this book I will explain how artificial intelligence does this, and what it means for the human left behind. More specifically, the book is about human intelligence, through the lens of the artificial – and whether there is an essence of the human that can’t be replaced by the machine.
To better understand human intelligence, I will look more closely at the perils, pitfalls and potential of a future that is already here: the future of AI. To understand that future I will look at stories from the past, using them to develop our understanding of what artificial intelligence is and how it differs from our human intelligence.
In 1995, when he was Editor-in-Chief of Elle magazine in France, Jean-Dominique Bauby suffered a stroke that destroyed his brainstem. He became almost totally paralysed but remained mentally active, suffering from what is known as locked-in syndrome. The only movement he could voluntarily make was to wink with his left eyelid. Incredibly, from his hospital bed, Bauby wrote a memoir, Le Scaphandre et le papillon (The Diving Bell and the Butterfly).
It took Michelangelo four years, lying on his back, to paint the Sistine Chapel ceiling. Bauby’s book was written in the same supine position from his sanatorium bed. Over ten months, in sessions that lasted four hours a day, Bauby winked to indicate a letter on an alphabet grid to spell out the words.
His ability to communicate was severely curtailed. He could think as freely as each of us, but he couldn’t share his ideas. A diving suit2 is a claustrophobic space where communication with fellow humans is limited. For Bauby, the diving suit represented how it felt to be restricted in this way. The butterfly represented the freedom of his internal thoughts, his ability to retreat into memories and dreams. The Diving Bell and the Butterfly gives us an insight into a state of isolation, physically constrained but mentally free.
Stories of locked-in syndrome seem to have a fascination for us. I think this reflects our fears of being in a similar state. So, it may surprise you to learn that we are all already in that state. Our intelligence, too, is heavily constrained in its ability to communicate. Each of us is, in a sense, a butterfly within a diving suit.
Today, written words are spread not just by the printing press but by a network of electronic machines that communicate almost instantaneously across the globe. This network allows us to view Michelangelo’s ceiling, or James Cameron’s Terminator, wherever we are. To share these images, we have built an international infrastructure that carries information through the heavens, across the sky, under the seas and over land. Whether through satellites, mobile phones, undersea cables or copper telephone lines, our images and words are converted into digital streams and propagated around the planet.
Early communication networks were built by the Bell Telephone Company, and laying the cables was expensive. Telephone companies needed to estimate how much information was moving between cities so they could plan how many cables to lay; that meant they needed a way of quantifying the information to be carried. Claude Shannon, an American engineer who worked at Bell Labs, where Yann LeCun would find himself many years later, came up with a mathematical representation of information to help quantify the content the telecommunications cables were to convey. We call it information theory. He suggested information should be separated from its original context: it should be rendered not as a word or a sound but as a 1 or a 0. He called each 1 or 0 a bit of information.
The moment I share the result of a coin toss, where 1 represents heads and 0 tails, you gain a single bit of information. This holds for any result from any two equally probable outcomes. If the odds of a tennis player winning a match are even, then learning that they’ve won gives you one bit of information. The quantity of information doesn’t depend on whether you’re a fan of the particular player, how you feel about the match, or even whether you’re interested in tennis. Shannon’s idea was to quantify information in a universal manner, in a way that doesn’t depend on circumstance.
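Shannon’s measure has a simple general form, standard in information theory though not spelled out above: learning an outcome that had probability p conveys −log2(p) bits, which is exactly one bit when the two outcomes are equally probable. A minimal worked sketch:

```python
# Shannon information: the bits gained from learning an outcome of
# probability p. Two equally probable outcomes (p = 1/2) give one bit.
from math import log2

def information_bits(p: float) -> float:
    """Bits of information gained from an outcome of probability p."""
    return -log2(p)

print(information_bits(1 / 2))  # a coin toss, or an even tennis match: 1.0 bit
print(information_bits(1 / 4))  # one of four equally likely outcomes: 2.0 bits
print(information_bits(1 / 6))  # rolling a six on a fair die: ~2.58 bits
```

The function’s input is only the probability of the outcome, never what the outcome means: Shannon’s separation of information from context in miniature.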
