Playground, p.31
At first the site didn’t even have rules of behavior. We were all about finding the moves that the rules forgot to outlaw. We had a slogan, as goofy and ingenuous as I was at thirty: “Play hard. Get surprised.” That minimal rule set was embraced by our developers and users alike. And for a while our colony in cyberspace did very well with nothing but that minimal constitution.
Bad actors soon made us introduce a formal slate of things a person could get suspended for. Of course, that increased my payroll, as I now had to pay moderators to spot-check the thousands of texts and replies getting posted every few hours. I dreamed of training a machine to do that for the cost of electricity.
My growing team of lawyers drafted what would become an ever-evolving end user agreement. The first version they showed me seemed dead on arrival. It reserved the right to do pretty much anything we wanted with the users’ data. It even let us plant tracking cookies on their hard drives that kept collecting data long after they’d left our site.
“Nobody’s going to sign this,” I told the lawyers.
They just smiled, as patient as primary school teachers. The one just two years out of law school explained things to me. “They’ll click on ACCEPT without a second glance if it means being able to use the site for free. There’s no other choice, except not to play.”
Kim Janekin, my chief legal officer, reassured me on all counts. “We’re completely in line with current practice. This is how it’s done, these days.”
“And asking for all those things . . . will hold up in court? If someone decides to challenge the legality, we’d prevail?” It didn’t seem possible.
Kim shrugged, conceding the mystery of it all. “Brave new world, right?”
And I’d thought I was one of the builders of that world.
It dawned on me: People in my field always talked about “human equivalence” as the gold standard for machine intelligence. But the smartest people in the world gave away their data for free without bothering to read the contract. Data was life. Little in the world was more valuable. If giving away your data was the benchmark, maybe artificial general intelligence was going to be easier to achieve than we thought.
CREATIVE USERS WERE AMASSING small fortunes in Playbucks and putting them to ingenious uses in secondary markets not officially recognized by the platform. They leveraged our tipping system to hire other users to act as lobbyists in the various forums. They created their own in-thread polls and personality contests. This all seemed fine to me. Why should our virtual society be any better behaved than our real one?
The site was its own laboratory. We tweaked the designs and added features with an eye toward making the place as addictive as we could to as many kinds of people as possible. Endless scrolling, mystery friends, matchup algorithms, power-ups and special privileges, lots of ways to build your stats, all kinds of intermittent reinforcements and notifications: something urgent was always happening in response to the urgent gossip that you had just responded to a minute ago. “Keep them logged in,” I told my staff.
My directive worked. Not only was our user base doubling faster than the world’s processing power, the time the average user spent on the site was growing faster than China’s GDP. More people than the population of New York had tried my creation. I wanted to brag to Rafi. A million person-hours, every day. And the growth curve was still shooting straight up. For my own reasons, I wanted more.
I was now interesting enough to the press that they began speculating about my private life. Mostly they wanted to know why I didn’t seem to be in a relationship with anyone. I told them I didn’t have enough hours in the day. It didn’t occur to these journalists that someone whose work was being used by ten million people might find life sufficiently gratifying without having to fight another human being every single day over how high to keep the thermostat or how to leave the toilet seat lid.
USERS WERE EARNING AND SPENDING fortunes in Playground long before the platform had its first profitable quarter. But when we turned a profit at last, growth fed on itself. Our IPO soared, and my paper wealth became a real fortune. We paid off our angel investors, who used the money to buy prepper bunkers in New Zealand. We bought our own server farm and updated the site’s aging interface. As our mountain of user data turned into an entire range, our share price began to dabble in speculative fiction.
I was making more than I knew how to spend. I built this spectacular house, which I have always loved. The solitude of it, even inside a crowded city, the big views of the sky and the trees: it will be a fine place to die in. And I went to some incredible places. I hired a full-time chef and ate better than any of history’s most notorious monarchs.
But I lived for Playground. In terms of combinations and possibilities, my game did to Go what Rafi once told me Go did to chess. It was worker placement and area control and hand management and push your luck, all rolled into one. I was making tens of thousands of simultaneous moves a day, and the days were long and engrossing, full of tension and laughs. Nothing else was half as interesting.
I worked for the love of it, the sheer joy, the way I programmed as a boy, to escape the hell of my family and to make a good thing out of nothing. The need to solve an intricate puzzle and the need to quiet your brain are twin sons of different mothers. I was helping to build the next big way of being. And some part of me was getting my revenge on those who had declared me dead to them.
ODDLY, SOPHISTICATED VIDEO GAMES passed me by on their way to world domination. I knew they would surpass books and music and movies, in total value. But I couldn’t play them. My fingers were never fast enough, and my brain couldn’t locate the living opponent. Rafi had never warmed to them, and neither had I. Respect them, sure. Even delighted in some. But my loyalties were elsewhere.
Throughout the years of the board game renaissance, I collected board games the way a car collector amasses undrivable cars or an oenophile collects wines that he knows he’ll never have time to drink. I had my favorite game designers. I read the rules and punched the pieces and sometimes even set them up to see what they would look like played. But they sat on the shelves of the special library that I built for them—more than two thousand titles—waiting for the day when I might have a little gaming group again.
A little more than a decade after Deep Blue conquered chess, IBM’s Watson beat the world’s best Jeopardy players, and the goalposts for the uniquely human got pushed back again. Winning at a general-knowledge trivia game was much more formidable than beating the best human player at chess. But I didn’t write Rafi. I was done with him. The man had demeaned me.
WHILE MY BOARD AND HANDPICKED officers focused on holding on to the largest possible share of a market that hadn’t existed a few years before, I set my sights on Playground’s real product: our millions of accumulating heart-cries. I knew we could turn the billions of posts into a whole new kind of currency. Every time users posted anything, they gave away all kinds of information about who they were, how they behaved, and what they valued. That mass of human messiness held the key to the company’s future.
To me, all those flamboyant user posts were unreadable. I’d never been great at understanding humans, and I gave up trying the day Rafi cut me dead. But countless other vendors lusted after the piles of data we were amassing, and I was happy to sell. There had to be a way to spin that mountain of shit into gold. Obviously, only more software could do the spinning at a speed and scale that would be profitable.
I knew that good old-fashioned AI—the kind I’d worked on in college—wasn’t up to the task. But the first signs of a technique that might mine our millions of posts for valuable information had started to take hold. It was a revolutionary approach that went by the name of deep learning.
Deep Blue had beaten Kasparov by brute force. Human programmers specified the best openings, defined strong moves, described how to control the board and gain material, and spelled out how to win the endgame. Deep Blue just took those declarative instructions and applied massive computation to look farther down the branching tree of moves and countermoves than any human could.
But the new machines did something wildly different. They were learning on their own, with reiterated reinforcement under limited supervision. They combed through continents of data on their own, finding patterns, generalizing, and drawing conclusions that even their trainers couldn’t see. They were learning how to play simply by playing.
And these deep players learned the most extraordinary things. They started to drive cars. Without being told a single thing about cats except whether a given picture showed one, they learned how to recognize any cat from any angle under any conditions. They figured out how to translate text from one language to another with uncanny fluency, without being taught a single rule of grammar or usage. They learned these things the way a child would, by weighing the evidence and adjusting the strengths of the connections in their networks of neurons until their brains began to generalize solutions.
I SET DEEP LEARNING loose on Playground’s trove of user data. Every sentence a person wrote, every picture a person uploaded, every post a person voted for taught the deep learners what that person believed and what that person wanted. A deep learning AI could look at our hundreds of millions of pages of evidence and figure out what kind of car a user liked to drive, how much money they made, what charities they might donate to, the food and drink and clothing and luxury goods they most coveted, whether they might commit adultery or cheat on their taxes, or how they voted in real life. To know someone was to have power over them, and my deep learning algorithms were starting to know our users in ways no human could. They could see things in the data that eluded everyone, without blindness or bias, strictly by correlating all the evidence.
Our matchmaking algorithms were crude at first: Outfitter ads for people who contributed posts about hiking and fishing. Ads for certain makes and models of cars for those who praised them. But as the deep learners began to correlate what our users did and said with the ads they in fact clicked on, the knowledge deepened. Before long, plenty of vendors were willing to pay a great deal for the advantages that our targeting systems gave them.
FINDING CORRELATIONS in the user data was just Mesohippus—a great breakthrough that was already obsolete. The next new thing depended on machines learning to understand what our users were posting. I saw a chance to do Asimov’s psychohistory: predict the flow of collective events by statistically aggregating their tiny parts—aka individual end users. Something larger than us was playing in Playground now.
AI apprentices like ours began to make marketing decisions, provide customer support, develop drugs, diagnose and treat patients, and hand down criminal sentences. We were putting the future on autopilot. But I never stopped to question the rule that governed life as I knew it: Unfold or die.
I SANK A TON of money into a start-up called DeepDive. It was a not-for-profit that promised for-profit research components. My investment was part of a pledge among Valley tycoons that totaled over a billion dollars, and that ante bought me a chance to use, for my own selfish pursuits, whatever the start-up discovered. The founders of DeepDive had my money the moment I learned about their proposed approach. They intended to raise the next generation of AI agents by training them to play every board and video game worth playing.
At first the procedure consisted of teaching the machines the rules of a given game, explaining the goals, and letting the AI find its way forward by trial and error. Often that was enough to evolve the AI to play as well as anyone. Sometimes the machine players developed wild new strategies that blew their teachers’ minds.
But goals alone weren’t enough to give the AI the traction it needed to find the dominant strategies that richer, more complex games required. The team at DeepDive came up with another master tactic: inverse reinforcement. They told the AI almost nothing at all, instead leaving it alone to figure out the rules and the goals itself. In time, these next-generation AIs learned to derive winning strategies simply by watching real people play and inferring what the human beings were trying to do.
Time and again, the AIs saw past the human players’ bumbling moves, then turned around to teach them a better, more brilliant way to win. The play of these artificial agents was often alien and always intriguing, but because the machines were not explicitly programmed, their trainers could not look under the hood to see how they worked their wonders. Or rather, the humans could look, but all they saw was a tangled network of weighted connections as mysterious as any living brain.
The Age of Humans was coming to an end. We were already past year one of the Age of Deep Machines. A new kind of life had come along to take our jobs, manage our industries, make our new discoveries, be our friends, and fix our societies as it saw fit. And that age launched itself in a heartbeat, after the briefest childhood.
Games now ruled humanity. Mobile games that consisted of little more than tapping on the screen when a box popped up were destroying people’s lives. Dragon quests with thirty million streaming subscribers. Video games that spawned theme parks and film franchises. Four thousand new board game titles published each year. Sports themselves were already out of control, but e-sports were growing faster than any physical sport ever had. The combined revenues of all competitive recreations now dwarfed all but a few other industries. And it made perfect sense to me that the machines that would doom us cut their teeth by watching humans play.
MY INVESTMENT IN DEEPDIVE paid off in spades. What those labs learned from watching AIs learn to play board games they now applied to the greatest game of all: Wittgenstein’s Sprachspiel—the language game. Supervised learning with human reinforcement, fed on a diet of millions of web pages, produced agents that could look at Playground’s posts and predict a user’s hopes, fears, and buying habits with chilling accuracy.
All the boats were going up fast. At DeepDive, we duplicated others’ results almost as fast as we read about them: deep reinforcement learning, shaped learning, sequenced incentivizing. . . . My contribution was to teach the learners how to be curious. Curiosity was the core inner value of all the strongest players.
I NOW HELPED TO RUN three companies at once, one of them among the largest social media firms in the country. I took no days off, and the hours for my second and third enterprises came out of sleep. Somewhere around this time, my mother sleepwalked out onto Dempster a little after midnight and was hit by a car and killed. I made it to the funeral in Evanston but was back in San Jose for a meeting seven hours later.
PLAYGROUND GOT AWAY FROM ME. I logged in one day to discover that I couldn’t understand a good third of the posts. Some of the most celebrated entries were written in a combination of acronyms, neologisms, and emoticons that made them look like a child’s rebus. People were making videos where they superimposed icons or text boxes over the faces of moving bodies, turning them into animated allegories. They’d post these without comments, earning huge tips.
Posts were spilling out into the sandbox of real events, becoming the news items they were commenting on. One of the site’s domains birthed a vigilante group whose posts about reputed morals abuses forced a university professor to resign in disgrace. Threads in other branches caused weird consumer goods to sell out or once-healthy companies to declare bankruptcies. Viral posts and the responses they touched off made and ruined the careers of actors and helped cancel and create popular TV series.
There were flame wars and all-out partisan conflicts. There were threats of violence and incendiary statements that in any other form would have been grounds for slander suits. Roll-your-own facts were doing a brisk business. Creative hate made big Playbucks. Cults bred as fast as bacteria. So did influencers, deepfakes, conspiracy theories, and scrollable doom-peddling. We let crazy things go, banning as few users as possible. We were an experiment in real democracy. The future had to be a level playing field, free for all voices.
A self-assembling posse in the Investments subdomain encouraged tens of thousands of users to buy small lots of a failing stock. The price skyrocketed, putting a squeeze on hedge fund short sellers, who ended up losing billions. The press billed that as grassroots Davids beating capitalism’s Goliaths. I knew it wasn’t that, but I didn’t have an account and wasn’t posting my opinions. This was the first time that a lot of people realized that the stock market had become a variant on Texas Hold ’Em with no relation to fundamentals. I never did like poker. Too much psychology, never my strong suit.
We scraped the data of a hundred thousand users, analyzed it, and sold it to a political consulting firm, who used it in a hyper-savvy campaign of digital targeting to put their man in office. When news of that broke, it caused a flurry of hypocritical breast-beating around the world. I was called to testify before Congress, and for four hours I was the most famous CEO in the country. But the legislators were too benighted by the whole rise of social media to stay on course or to grasp what was happening. Once we established the legality of our end-user agreements and our use of the data, they hardly knew how to proceed.
A liberal congresswoman from Massachusetts asked, “Why shouldn’t your site be regulated, the way all other public utilities are?”
“Because we’re not a public utility. We’re just a platform. A neutral platform. Playground encourages all flavors of human ideology, and we believe in protecting the free speech of our users.”
This caused the congresswoman to retreat behind her notes. “Mr. Keane. Two years ago, in an interview in Wired magazine, you called yourself a creative destroyer. Would you still use those words to describe yourself?”