His sci-fi novel about WWIII got him invited to the White House. He thinks his next one, about A.I., will do more than that.
Why I Made This Future is a recurring feature that invites speculative fiction authors, futurists, screenwriters, and so on to discuss how and why they built their fictional future worlds.
Peter W. Singer is a well-known political scientist who examines trends in international relations, technology, and warfare for Beltway think tanks like the Brookings Institution and the New America Foundation. He has written a number of influential nonfiction books, like Wired for War, which explored the rise of autonomous weapons. But the work that has had the most lasting impact on U.S. foreign policy, he says, was his Tom Clancy-style spy novel.
In 2015, Singer published Ghost Fleet, a researched speculative fiction book about the coming of World War III, with co-author August Cole. It was both authors’ debut novel, and it struck a chord. “I got invited to brief it everywhere from the White House to the Tank, the Joint Chiefs of Staff’s meeting room inside the Pentagon,” Singer tells me. “To groups like JSOC and the NSA. My co-author August got invited to speak on it at the Nobel Institute. The Navy even named a $3.6 billion ship program Ghost Fleet.”
The book was such a sensation in the defense community that the authors were inspired to systematize their process of fusing nonfiction research about technological, economic, and social trends with fictional plotlines; they call it FICINT. And they’re hoping the next book, Burn-In: A Novel of the Real Robotic Revolution, out May 26, will do for the threat posed by automation and A.I. what Ghost Fleet did for the next world war. This book is even more methodically based on real-world research and events (Singer says they approached it as if it were nonfiction, and read some 1,200 reports about automation) and portends a future marked by technological unemployment, rampant consumer and state use of augmented reality, and law enforcement’s total embrace of digital surveillance and facial recognition.
I spoke with Singer about why he so meticulously researched and built a near-future world on the cusp of violent transformation, how fiction can transform government policy, and why the “robot revolution” is, for him, the single biggest story in human history thus far.
The following has been edited for length and clarity.
OneZero: What are the key cornerstones of the future that you’ve created here? How did you go about building this future?
Peter W. Singer: So Burn-In is a different kind of book. It’s a blend of fiction and nonfiction in a way that, as far as we know, hasn’t been done before. It’s a techno-thriller set in the Washington D.C. of the future, where you follow an FBI agent on the hunt for a new kind of terrorist. But baked into the story are some 300 explanations and projections drawn from real-world research, each with a literal endnote reference to show that it’s real and where it came from. It might play out in micro details: if there’s a certain kind of drone, it’s not that we dreamed it up. It’s “here’s the patent for it.” Or it might be something larger, a macro trend: what automation is going to do, or the increasing vulnerability of critical infrastructure to new kinds of cyberattacks. Or it might be a certain kind of dilemma that we have to figure out.
An example of that would be algorithmic bias, which is the idea that an A.I., even if it’s not been told to, could yield biased actions or recommendations. It might be biased in terms of some kind of outcome: it drives the wrong way, like in the joke scene from the TV series The Office where they follow the GPS into the lake. Or it might be something more sinister. We’ve already seen examples of A.I.s that weren’t deliberately programmed to be racist but are nonetheless producing racist recommendations for things like who should get a loan or even certain medical treatments.
It’s more about painting the entire world.
The title Burn-In is taken from the technical term for deliberately pushing a new technology to its breaking point in order to learn from it, like taking a new watch underwater to see just what depth it can handle. So what we’re doing here is taking all the plans, all the trends, and playing them forward in order to learn from them. And the subtitle shows the really big trend that comes out of that: A Novel of the Real Robotic Revolution.
And what that’s referencing is that the way we think about A.I. and robots, really for the last 100 years, has been shaped by the very first narrative of it. We’re at exactly the 100-year anniversary of the creation of the word robot. Back in 1920, the writer of a play called R.U.R. comes up with a new word for mechanical servants who wise up and then rise up. And ever since, the way we talk about, write about, and think about robots always goes back to that idea: kill all humans.
I loved the nod to Rossum’s Universal Robots in the R.U.R. function — “Roam Until Recall” — of your self-driving cars. [R.U.R. is a 1920 play by the Czech writer Karel Čapek that introduced the word “robot” into popular vernacular.]
Yeah. The whole narrative today is “killer robots”: You see people spend over $5 billion on research programs to address existential threats from robots. Maybe a robot uprising might happen one day far off in the future, but for our lifetime, it’s actually a robotics revolution, in that it’s an industrial revolution — but taken to even the next level.
It’s the equivalent of the transition that we saw with the steam engine and mechanization or the last generation of computers. But it’s beyond that because it’s not just a new tool. It’s changing what’s in the person’s hand. Shovel to hammer to keyboard. No, this is a new tool that actually begins to think and act on its own. So that really is the big trend that is what we play with in the book, but also, of course, it’s the trend that’s reshaping the world around us. Whether it’s business, politics, security, you name it. We’re only just at the start of it.
What led you to take this very granular approach to science fiction — there are actual footnotes throughout the text — where you’re quoting the slogans of Chinese technology companies and referencing actual patents?
In a certain way, we’re serving two gods: the writing gods of fiction and the gods of nonfiction. On the fiction side, we think it makes the book not just more realistic, but more compelling. When a scene happens, the fact that you can go to the footnote and think, “Holy crap — that really could happen” makes it even more of a holy-crap moment.
The nonfiction side makes it what we’ve called FICINT, or useful fiction. FICINT is a term that references what the intelligence community calls SIGINT, signals intelligence, or HUMINT, human intelligence. You have these different tools for collection and analysis. And FICINT is the idea that you can use grounded, researched narrative not just to understand the future, but to say, “Okay, here are the trends. Here’s what in turn we can project out of them. And here’s how different real-world actors are going to react to these trends.”
So it’s not just that we’re seeing a move toward automation and IoT. It’s: okay, they’re baking in vulnerabilities, and that means of course that certain hackers might go after them. What would be the consequences of that? Or, we’re moving into more and more facial recognition. Take criminals: is a teenage prankster just going to go, “Okay, you can recognize my face”? Or are they going to come up with counters to it? And we show what those counters might be.
And so you can learn from it.
This came out of our experience with the last book, Ghost Fleet. Strangely, or maybe not so surprisingly, people, including incredibly important people — CEOs, senators, generals — are more likely to read a novel than an academic white paper. I’ve written a number of nonfiction books and they’ve done well. But the book that has had the greatest policy impact and opened up the most doors was Ghost Fleet.
And so for Burn-In, we leaned into that even more. We had a dual track of research: the kind not just a good fiction writer would do, but what you have to do for good analytic nonfiction. We built a database of every single automation jobs-impact report we could find, everything from World Bank to McKinsey projections. Some 1,300 in all. That allows you to see not just what one organization is projecting, but the picture across the board. And interviews: everyone from A.I. scientists, to FBI agents, to a water systems engineer.
So that job automation report database, we carry that across through the main character’s husband. When most people think of A.I.’s impact on jobs, they think blue-collar jobs: factory workers or truck drivers, et cetera. But what’s starting to play out right now, and what’s going to hit even harder soon, spans a much wider array of professions, including some areas that might surprise people. So we use the character to show that. He’s a contract lawyer. Contract lawyers make over $200,000 a year now, yet it’s one of the professions that will see a massive amount of shrinkage because of automation. A.I. will basically be able to do the job that these high-paid lawyers do now.
That’s the nonfiction coming across, but then you put the fiction hat back on and go, “Okay. What’s the emotional side of that? How does that hit the character’s self-identity?” He got good grades, went to a good school, got a really well-paying job, and then suddenly it’s all pulled out from under him and now he’s doing remote gig work, which is something that we all sort of feel right now. How does that hit his self-identity? How does it hit his marriage? How does it hit the way he parents? How does it hit his politics? Putting it in a fictional framework makes for a really good, compelling storyline, but it also allows you to think a little bit further about the policy impact side: “Oh wow, this is more than just numbers.”
One other thing I should hit on is the how: we actually have a set of rules that we have to follow. For us, any technology in the book has to already exist or already be at the prototype stage. Even the actions. Say it’s a cybersecurity hack: it has to have already been done, or at least demoed at a hacker convention. And then real-world character actions and reactions. It’s not just, “x happens.” It’s: okay, how might someone in the real world react to that, exploit that, think of it? So the criminal has to mirror things that a real criminal might do.
Ghost Fleet really embodies this rising trend of researched speculative fiction that’s designed to be useful. A year or two ago, I wrote a piece about the rise of this science fiction industrial complex. There are efforts to standardize this process — especially by the military, which uses fiction programs to game out future conflicts.
Yeah. My co-author August Cole literally led a project at the Atlantic Council that was designed to essentially Johnny Appleseed these programs around the world. Since then, we have helped set up or participated in these programs: U.S. Marine Corps, U.S. Army, U.S. Navy, Norwegian military — you name it. And they come in lots of different forms. Some of them are public; some are sci-fi writing contests.
Given that you’ve so thoroughly systematized this process, why did you make the focal point of this book the rise of automation and A.I.? Why is this the book you wanted to write right now?
The trends going on around us in A.I. and automation and robotics are arguably the most important story of our time. Not just in technology or even politics: it’s one of the most important in human history. I know that sounds grand to say, but we are literally right now creating, for the first time, tools that are intelligent. Tools are what originally separated us from all the other species, and now we’re starting to create and use tools that more and more make decisions and undertake actions on their own. Hence, to make a joke, a really big deal. And yet it’s so incredibly poorly understood. You can illustrate that in numeric or anecdotal ways. In the numeric way, pretty much every single major entity out there, from the U.S. and Chinese governments to Fortune 500 companies, says that A.I. is the key to its strategy moving forward. It’s written into the U.S. national defense strategy document. China has a document that says it wants to be the world leader in A.I. Read the strategy documents of the Fortune 500. And yet only 14% of leaders self-report that they have even a passing familiarity with A.I., let alone its applications, consequences, and dilemmas.
And that’s self-reporting, right? Most of these leaders are probably kidding themselves. So think about that incredible disconnect: every single organization out there saying, “This thing is so important to me!” and yet most of their leaders, let alone the broader public, not even getting the basics of it. The anecdotal version of this is the secretary of the Treasury saying that A.I. and automation are not on his “radar screen” because he doesn’t think they’re going to be an issue for “50 to 100 years.” That’s insane! That’s crazy. It’s already an issue right now, let alone over the next 10 or 20 years.
And all of the data is showing that it is being drastically accelerated by the pandemic. We are seeing an entire generation thrown into distance learning and another generation thrown into distance work at a level that was, frankly, never anticipated.
Meanwhile other fields, like telemedicine, went in a couple of weeks to a level that the industry didn’t think would happen for 10 years. And you’re seeing robotics rolled out into everything from policing curfews to cleaning subways and hospitals. It’s one of the few industrial sectors seeing an upsurge in buying. A.I. and big data and the tracking of society at large are being planned for at a level that, frankly, no science fiction ever reached. And this acceleration also means that all the political, economic, social, legal, ethical, and security questions that we would have spent a good 10 or 15 years wrestling with (probably not solving, but at least wrestling with) all just got set aside too. And so that means all the questions and dilemmas that our characters in the novel deal with are going to come at the rest of us even faster in the real world.
On a scale of one to 10, how likely would you say this particular future is to emerge in reality?
9.9 out of 10.
9.9, and I’ve got the endnotes to prove it. Our bigger problem is that it keeps coming true. A couple of scenarios came true before we could even get the book out. And by that, I don’t just mean a certain seemingly futuristic technology that has already rolled out, a drone or whatever. The opening scene has a tiny detail: two characters are talking and a wheeled robot goes by on the sidewalk. We don’t set a date for the story, but it’s a way of showing it’s futuristic. And as the footnote shows, that system has since been deployed to deliver groceries in Washington D.C.
In the book, law enforcement uses pretty invasive technologies, like the mapping of personal data and the ability to access personal records in real time. You get the sense that you’re supposed to bristle at it. At the fact that you can see, “Oh, that person’s about to be investigated for fraud or what have you.” But at the same time, you need to move the plot along. The hero, Keegan, makes some pointedly questionable calls that are then validated. How do you feel about the use of these technologies, and is it a concern that valorizing someone who wields them might encourage their use?
So the technologies are being developed and this is the way that they’re planned to be used. And whether you’re talking about something for policing or maybe it’s a certain Silicon Valley widget that’s in the book, one of the themes that it plays with is how there’s this fine line between utopian views of the future and dystopian views. And it often depends on which perspective you have. That one person’s utopian view, someone else with a very different background or role in society might go, “No, no, no. That scares the hell out of me.”
This is a case that Silicon Valley, and especially the social media companies, played into. They never assumed they were the bad guy. Facebook used to have a marketing campaign that was “the more you connect, the better it gets.” Now that you see where they wound up, that reads really creepy. Why it reads creepy is not just that Facebook has too much information about me. It’s that the Russian government and anti-vaxxers and far-right extremists and all the other bad actors in the world are able to connect, and it’s become better for them. I don’t think any of us like that.
So what I’m getting at here is that powerful actors who aren’t of goodwill are going to use that very same technology, or react to it, or manipulate it. Just today I read a coronavirus example of that, straight out of the book: one of the technologies in the book, a robotic system for police, has been rapidly rolled out in Singapore, and they’re using it to do curfew policing. It walks about and essentially yells at people.
Right. Go back inside.
Now, the plan was that it would remind people about social distancing. What they’re discovering is that the reason it works is not that it’s reminding them about social distancing. In the words of one report, it freaks people out so much that they leave the park. So it’s actually the robot yelling at you, as opposed to the original plan of “this is a low-cost, efficient way for the police to extend their reach and remind people of good public health.” Right?
Are you hoping that this serves mostly as a cautionary tale to take some of these A.I. and automation technologies more seriously? Are you hoping policymakers pay attention to any particular threads or undercurrents in this world? What do you hope, in a best-case scenario, that spending some time in this rapidly approaching future will move people to do, if it sees the same level of success as Ghost Fleet?
So on the escapist side, I hope people just enjoy it. I hope they enjoy a good, fun, exciting read. On the relevant side, though, I think it does have something to say, in particular about the world we’re entering, one that has been accelerated by the coronavirus pandemic. And I hope, somewhat like Ghost Fleet, it has an immediate impact: people read it, it helps them understand key issues and trends, and maybe they make, or call for, certain policy changes that will help steer things toward a better outcome. But also, one of the great things that happened with Ghost Fleet, and I hope it’s the same here, is that it had a half-life. It had stamina.
We are entering an industrial revolution that’s been excitedly talked about as the new machine age. But what we need to understand, and what comes from looking at it from these different perspectives, is the widespread impact, both good and bad: the challenges, consequences, and dilemmas that come out of it. So if you think about the last industrial revolution, it created all sorts of wonderful things, like mass consumer goods. It also created climate change. It led to new economic winners at an individual level, at a business level, at a national level. And as a result, it also led to political and economic losers. It led to new political concepts and movements. No industrial revolution: no workers’ rights, no women’s rights, no modern concept of children’s rights. All of that comes out of the industrial revolution. Oh, by the way, so did fascism and communism, which we’d spend roughly the next century working our way through.
So we shouldn’t expect this one to be any different. But oh, by the way, on top of all that, it introduces new questions that we don’t have any history of wrestling with. Machine permissibility. What is the tool allowed to do on its own or not? And how do you figure out who’s responsible for that decision? That’s not something that you had to debate about with your hammer or shovel or even your iPhone. But you do now, whether it’s your driverless car or it’s your armed autonomous robotic system deployed into warfare. And oh by the way — it’s a fun, scary, cool space to play in for fiction.