A Conversation With Richard H. Ledgett Jr.

Wednesday, November 14, 2018
Speaker
Richard H. Ledgett Jr.

Former Deputy Director, National Security Agency

Presider
Judy Woodruff

Anchor and Managing Editor, PBS NewsHour

This symposium convenes policymakers, business executives, and other opinion leaders for a candid analysis of artificial intelligence’s effect on democratic decision-making. The symposium is timely as countries such as China, France, Germany, the United Kingdom, and the United States rush to invest in artificial intelligence to solve cybersecurity challenges and stem the spread of disinformation online.

WOODRUFF: Hello, everyone, and welcome to this—I guess the final part of your—of your morning symposium. And this part of it is with Richard Ledgett. I’m Judy Woodruff, the anchor and managing editor of the PBS NewsHour. I’m going to be moderating the discussion. I think both of us acknowledge that we have the disadvantage of not having sat in on your morning conversation. So we recognize there may be some things that came up in those—in those discussions that you’ll want to bring up when we turn it over to you for questions. So please feel free to do that.

I just want to say at the outset I’m glad to be here to talk about—really, to help facilitate this important—what is more important than preserving the health of our democracy. And that really is what we’re talking about, as we face growing threats from disinformation and people attempting to undermine our system of government. And I’m really glad to be here with somebody who understands the U.S. intelligence community as well as or better than anybody around, in Rick Ledgett. As you know, he has four decades of experience in intelligence, in cybersecurity, and cyberoperations. He spent twenty-nine years at the NSA, the National Security Agency. At the end of that time, he spent more than three years as its deputy director, until his retirement in April of 2017.

I just want to say—you will be invited in in thirty minutes. We’re going to have a conversation for thirty minutes, and then I’m going to be turning it over to members for questions. The question before us, as we know, is: will artificial intelligence curb, or will it turbocharge, disinformation online? And I want to begin with the state of disinformation right now.

Rick Ledgett, you were one of the authors of the intelligence community report on Russian attempts, Russian activities in the 2016 timeframe. But you’ve also studied Russian disinformation throughout its history. So we really—I think it’s important for us to start with an understanding of how Russia has operated, how the Russian government has operated for a very long time, how it thinks about information and about disinformation. So why don’t you start with that?

LEDGETT: Sure. I would like to make one small correction. I was not one of the authors of the intelligence community assessment. But they worked for me, and I spent a lot of time with the people who were authors from the NSA side of the house. So I can talk about that more a little bit later.

So disinformation is not a new technique. It’s not new to Putin’s Russia. It was a mainstay of the Soviet Union, going back to the formation of the Soviet Union. In fact, it goes back to czarist times. The idea that the government owes the truth to its citizens is a foreign idea in Russia, and it always has been. Information has been used as a tool to manage the population, and as a tool to express the will of the Russian, then the Soviet, and now again the Russian government in the international space. When someone in Russia says information security, the meaning is very different from the meaning we apply to that phrase.

In the West, information security means keeping your information secure—defending it, making sure people can’t get to it, keeping it safe from improper use, that sort of thing. In Russia, information security means using information to secure the state. That includes disinformation. It includes propaganda. It includes, you know, weaponizing information in various different ways. And so the fundamental philosophy of how they do things is just very, very different than the way we think about information. There is no such thing as free speech in Russia. There may be the words “free speech,” but there’s no actual free speech. And the government takes a very active role in shaping the information policy of the—of the Russian citizens.

WOODRUFF: I think we all think we have a really good understanding of how sophisticated the Russians are or are not, but how would you size up their capabilities when it comes to disinformation, whether it’s spreading it in the United States or anywhere else in the world?

LEDGETT: They’re very capable. They have a long history of it, as I was just saying. And they have masterfully adapted their techniques to the modern tools that arose from the internet—social media. If you think about how you used to have to do disinformation and propaganda in the ’50s, ’60s, and ’70s, and maybe into the ’80s, the way you would do that is you would get a sympathetic journalist, a sympathetic editor, a sympathetic book publisher, a sympathetic movie maker—and they were either a witting or an unwitting agent. And you would write a script, or a book, or an article that was part of the narrative that you wanted to put out there. And you’d go through the process of getting it vetted, and getting it published, and getting it disseminated to people. Long process. Complicated process. Limited in reach.

What social media has done, and the internet has done, is brought a direct channel from the Russian disinformation manufacturers to the minds of their target audience. Think about—it’s a truism in information operations that the target is the brain of the decision-makers. And the brain of the decision-maker that you’re going after, in the case of something like voting or elections, is the individual voter. And so social media gives them an unprecedented, direct venue to that, at speed and at a scale that’s never been seen before in the world. And so they’ve done a really good job of adapting that. And if I were the head of the FSB, I would be handing out medals and cash awards to everybody that was involved in this, because it’s been wildly successful from their point of view.

WOODRUFF: Since they didn’t develop much of the social media, and I think this is relevant to what we’re talking about with the next phase and, you know, as we get into artificial intelligence. How did they learn it so fast? I mean, what was it about their technique? Was it just that they said to a lot of people: Go do it, or else? I mean, how did they do it?

LEDGETT: So they’ve been practicing for a while. They’ve been practicing going back to 2007 in Estonia, 2008 with Georgia, 2014 in Ukraine, in Montenegro, in Moldova, in several countries, and also the Baltic states. In the near-abroad they’ve been practicing these techniques for a very long time. And they’ve gotten very good at it.

WOODRUFF: And so you’re saying by the time it came around to our election, 2015-2016, they were—they had a lot of experience under their belt?

LEDGETT: They did. And you could see the switch in 2015, when they—and this has been widely reported—the intrusions into the Democratic National Committee servers, and the harvesting of emails to use in a weaponized kind of a way. There were the initial intrusions—that was intelligence gathering by their version of the CIA, called the SVR. And then the next step was the GRU, the second actor that came in, who were the people who do information operations. So you could actually see them transition from one phase to another, in retrospect. Unfortunately, we didn’t catch that early enough in the process. And I think someone—I don’t know if Laura’s still here—I think—I think—oh, there you are. Hi, Laura. I think she may have described it as a failure of imagination. I think that’s an accurate way to think about that. We saw this being done in other countries and didn’t imagine that it would be applied to the United States in the same way.

WOODRUFF: How much of what they’ve done is due to—is a result of human, hands-on activity, and how much of it is computer-driven, or in some way process-driven?

LEDGETT: A lot of it is hands-on. There’s the Internet Research Agency—that’s the troll farm that is run by one of the oligarchs who does this for Russia. But they do use bots, which are basically automated sets of scripts and agents that will do things like raise the profile of a story by retweeting it or reposting it time and again. If you think about the algorithms that are used by Twitter and Facebook to rank stories, those are a form of—we say artificial intelligence; what I think we really mean here is machine learning, a subset of artificial intelligence. And so those machine learning algorithms are designed to cause the stories that fit certain profiles, which are, you know, proprietary to Facebook and Twitter, to spike and be more visible to people. And what the Russians have done is reverse-engineer those algorithms, largely through trial and error. Let’s try this and see what happens, see if I get the right output that I want from providing this input. And that then lets them manipulate those machine learning algorithms in ways that are beneficial to them.
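
To make that dynamic concrete, here is a minimal sketch in Python of an engagement-weighted ranking being gamed by automated reshares. The weights, threshold, and counts are entirely hypothetical (no platform’s real ranking formula is public); the point is only that bots manipulate the inputs a ranking model sees, not the content itself.

```python
# Toy engagement-weighted ranking (hypothetical weights, not any platform's
# actual algorithm) showing how bot reshares can push a story past a
# "trending" cutoff without changing the story at all.

def visibility_score(likes: int, reshares: int, unique_accounts: int) -> float:
    """Hypothetical score: reshares weighted heaviest, which amplifiers exploit."""
    return 1.0 * likes + 3.0 * reshares + 0.5 * unique_accounts

TRENDING_THRESHOLD = 400.0  # invented cutoff for illustration

organic = visibility_score(likes=40, reshares=5, unique_accounts=45)

# 200 bot accounts each reshare the story once.
boosted = visibility_score(likes=40, reshares=5 + 200, unique_accounts=45 + 200)

print(f"organic: {organic:6.1f}  trending={organic > TRENDING_THRESHOLD}")
print(f"boosted: {boosted:6.1f}  trending={boosted > TRENDING_THRESHOLD}")
```

Probing exactly this kind of threshold by varying inputs and watching what spikes is the trial-and-error reverse engineering described above.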

WOODRUFF: So what does that tell you about—again, we’re just on the beginning edge of what AI—of what artificial intelligence and machine learning is going to look like. What does it tell you in the near term, as much as we can predict? We don’t have any way of knowing what all this is going to turn into—or, maybe you do—twenty, thirty years from now. But at this point, I mean, looking at 2020, is it going to have an effect?

LEDGETT: Well, I mean, it’s been going on since 2015 or so. It was going on through the past election, the midterms that just happened. And there is a—the Alliance for Securing Democracy, Laura and Jamie Fly cochair it. And, full disclosure, I’m on the advisory council. But they have been tracking these activities through a website called Hamilton 68. And they—if you look at Hamilton 68, you can Google it right now and go to the website, and it will tell you what the Russian-associated accounts were doing today and what themes they were—they were tracking. And so—and they generally fall into—Laura will correct me if I get this wrong—but I think three broad buckets.

Bucket one is stories that advance the Kremlin’s agenda in some way. Things that fit with their propaganda, disinformation regime. Second thing they do is things that denigrate especially the U.S., but Western democracies in general. And the third category of things that they do are things that cause conflict in society. So, for example, they rarely invent issues. But they’ll find issues and they’ll pile on both sides of the issues, because what they’re trying to do is pull the social fabric apart by getting people to read things online that are on the poles of their respective views, and to say things, and retweet things, or repost things online that are designed to—again, to tear apart the fabric of society.

WOODRUFF: How—they obviously know that we’re monitoring them. They know about Hamilton 68. They know about other efforts on our part to monitor what they’re doing. And we’re talking about it more out in the open. How is that affecting what they’re doing?

LEDGETT: So they’ve changed their tradecraft since 2016. They were fairly crude, they were fairly obvious. We were able to pretty easily detect them once the social media companies decided it was something that they wanted to do. They were able to pretty easily find the agents through characteristics of how they registered and how they behaved and, you know, more subtle things, like they didn’t really have a lot of real-person interaction.

But what they’ve done is they’ve changed their techniques now. They’ve changed how they register. They’ve changed how they maintain the accounts. They’ve changed how they propagate. They’ve also picked up something that—personally, I find fascinating. If you’ve heard of money laundering, they have actually done information laundering. And that’s a term that was coined by one of the ASD researchers, which I really like. To me, it’s very evocative. If you think about money laundering—I get my money through criminal enterprise, and I have to launder it through a series of businesses so I can produce it as income, and then put it in a bank, and then access it and use it the way I would want to use money.

Information laundering: you start off with something on a very fringe publication, maybe a blog posting or an article or something that’s way, way out there—on either fringe, it doesn’t really matter. And then it’s picked up through a chain of other blogs. It’s cross-posted, it’s cross-linked, it’s tweeted, it’s put on Facebook. And then it gets picked up by a news agency. And I think if you read the intelligence community assessment, you know that our assessment was that both RT—which they now—they’re like KFC, right? It’s not Kentucky Fried Chicken, it’s KFC. Well, they’re not Russia Today. They’re RT. They’re trying to sort of turn themselves into a disassociated brand. But they’re a state-run media enterprise. And Sputnik is another one that’s state-run.

So they’ll pick up those articles. Now they’re in the mainstream media. It might not be the mainstream that you like, but it’s part of the mainstream. And then they get picked up by other news organizations. And now you’ve got laundered information.
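
One way to picture that laundering chain is as a provenance graph that an attribution tool would try to walk backward, from the mainstream republication to the fringe origin. The sketch below is illustrative only: every site name is invented, and real republication networks are far messier than a single chain.

```python
# Hypothetical republication chain: each outlet points to where it picked
# the story up from. A provenance check walks the chain back to the origin.

republished_from = {
    "wire-service.example": "state-media.example",
    "state-media.example": "aggregator-blog.example",
    "aggregator-blog.example": "cross-post-forum.example",
    "cross-post-forum.example": "fringe-blog.example",
    "fringe-blog.example": None,  # original source; the chain ends here
}

def trace_provenance(outlet: str) -> list[str]:
    """Follow republications back to the originating outlet."""
    chain = [outlet]
    while (parent := republished_from.get(outlet)) is not None:
        chain.append(parent)
        outlet = parent
    return chain

print(" <- ".join(trace_provenance("wire-service.example")))
# wire-service.example <- state-media.example <- aggregator-blog.example
# <- cross-post-forum.example <- fringe-blog.example
```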

WOODRUFF: And they just did that, just by watching what was going on and figuring out how to expand their reach.

LEDGETT: They’re very clever. (Laughs.) They’re very clever.

WOODRUFF: So the fact—the fact that we are talking about it, writing about it, having these conversations about it, you’re saying they’re constantly adapting? And that—

LEDGETT: Yeah. It’s an arms race. It’s a—it’s a move, countermove, parry, riposte kind of a thing. And it’s always going to be that way.

WOODRUFF: Do you feel confident at this point that the—that U.S. capability—that we’re able to stay ahead of them, or not?

LEDGETT: So I think you have to break that down a little more finely than that. I think in terms of the technology—AI’s not evil. Machine learning’s not evil. Like all technologies, they can be used either way. But the use of machine learning to identify the source of information—because really what we’re talking about is—we’re not talking about necessarily filtering information, beyond some categories that violate the acceptable use policies on the platforms. We’re talking about being able to say where this information came from. In the previous conversation there was a discussion about, you know, free speech and the right to free speech, even when it’s speech you don’t agree with. Totally agree with that. Totally support that. And I think anything we do that impinges on free speech is a bad thing.

But what we can do and should do is be able to talk about the provenance of data. Here is where this data comes from. And there are some things associated with that that are really suitable for AI and machine learning, and some of them are hard things to do. So for example, there’s the attribution of who the post came from or where it came from—in that information laundering example I talked about, being able to trace that back and say: Here’s where this actually came from. And here’s the pedigree of that information.

And then there’s what is almost deanonymization. One of the things about the internet is that most bad things on it come from the fact that people can be anonymous. But there are also good things on the internet that come from people being anonymous—for people who live under the kind of state that wants to oppress its people. So there’s almost a deanonymization that you need to have in there, so that you can say that this came from, you know, someone who’s actually a troll in the Internet Research Agency in St. Petersburg.

WOODRUFF: So, I mean, are we—how good is the U.S. right now at doing that? At sort of working it back and figuring it out?

LEDGETT: So we’re getting better. I think the engine for that is the companies themselves. And I think they are becoming sufficiently motivated to do that. But I think there’s also—as somebody said earlier—a customer demand signal that needs to be sent. People—you know, I want to understand where the information comes from.

WOODRUFF: And at this point, there’s so many Americans. Some of us have time to sit around and talk about this, but many Americans just—you know, they go about their daily lives and they pick up information here and there. And they don’t have to go figure out whether it was, you know, developed by, you know, some bunch of Russian officials working in St. Petersburg or someplace else. I mean, they just—they look at it and they say, oh, that’s interesting. What is—do you think enough is being done? What needs to be done to, I think, better inform the American people, better label information? Is there even a way to do that? Are we just—is the horse out of the gate and it’s too late to do that?

LEDGETT: I think there’s a way to get there over time. And it’s a combination of three things. One is, I do believe in regulation in this space, but regulation-lite and outcome-based regulation. In other words, companies need to have some responsibility for the capabilities of the things that they put in front of people. So it’s got to be done in a way, though, that doesn’t crimp innovation, doesn’t impinge free speech, so it’s a balancing act. And I’m not sure that Congress has fully gotten their minds around how to do that yet.

Thing two, I think, is the companies themselves—the social media, internet-focused companies—need to be able to use their knowledge of how their systems operate in order to go back and provide provenance for that information. And there’s also perhaps a small role for the intelligence community, in terms of: here, you know, here are Russian-based or foreign-based entities that you should look for in your stream. And I don’t think the intel community needs to be dredging its way through Facebook and Twitter, you know, in the United States. That’s a bad idea.

And then the third component is people. And this is a long-term cultural change. A friend of mine is a high school teacher in Baltimore County. And she teaches history. She’s probably more on the left side of the political spectrum, but she comes at it completely apolitically and gives students articles and says: Tell me what you think of this. Look at it critically and say: Does this sound—based on what you know about the world—like it’s likely, like it’s true? And then she goes back and teaches them how to dig deeper into it—not just Google it, but go deeper, you know, two and three levels deep, in order to understand it. I think that sort of thing is essential. I don’t know how you do that on a national scale.

But I think that kind of critical thinking—developing that—is really important. I was personally stunned, when all this was going on, when I found out how many people get their news from Facebook. I mean, that, to me, is astonishing. People have talked about the echo chamber that people can get into, the information echo chamber, where you pick the news organization that you want—you know, Fox, or CNN, or MSNBC, or PBS, or BBC, whatever news organization you want that has a certain bias. And a combination of that, things you listen to on the radio, things you look at on the internet, who your friends are on Facebook and who you follow, can very quickly get you into a space where all you get is information that either accords with your worldview or is more extreme than your worldview. And so you end up in sort of a self-reinforcing thing.

So I make it a rule, every day when I’m reading, to read something I know I’m not going to like, just to sort of keep the aperture wider than it would be by just reading things that appeal to me. And I think that’s a useful kind of approach to take.
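
The self-reinforcing loop he describes can be written down as a toy simulation: if the feed only serves items that either match a user’s current view or sit closer to the nearer pole (his "accords with your worldview or is more extreme"), each item consumed ratchets the view outward. All parameters below are invented for illustration.

```python
# Toy echo-chamber feedback loop. A user's view sits on a -1..+1 opinion
# axis; the feed serves items between the current view and the nearer pole,
# and consuming an item pulls the view toward it.

import random

random.seed(0)
view = 0.2  # mild starting lean

for step in range(60):
    pole = 1.0 if view >= 0 else -1.0
    # Served item: somewhere between the current view and the nearer pole.
    item = view + random.uniform(0.0, 1.0) * (pole - view)
    # Consuming the item nudges the view toward it.
    view += 0.1 * (item - view)

print(f"view after 60 items: {view:+.2f}")  # drifts toward +1, not back to 0
```

A mild 0.2 lean ends up near the pole, and nothing in the loop ever pushes back toward the middle: the self-reinforcing thing in miniature.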

WOODRUFF: You do that. And some of us do that. But a lot of people don’t. I mean, a lot of people don’t have time. They don’t think about it. And they’re very comfortable just picking their own silo, if you will, of information. And some news organizations are making that easier by saying, you know, subscribe to our feed and we’ll give you our version of what’s important today.

So, you know, to me, part of this—a lot of this comes down to whose responsibility is this? Some of it falls on us, American citizens, consumers of news and information. But what is the government’s role in this, to help us through this really challenging time ahead?

LEDGETT: Yeah. I think it’s: identify the threats, help make people aware of the issues, and then—again, in a not-overly-intrusive kind of a way—hold the providers of information accountable for, you know, being able to not just repeat information, but being able to provide some indication of where the information came from—like I said, its pedigree.

WOODRUFF: And is that being done right now?

LEDGETT: Well, bits and pieces. It’s not a coherent—it’s not a coherent approach yet.

WOODRUFF: I mean, because there’s much more debate now in Congress and elsewhere about the roles of Facebook, Twitter, Google, et cetera, and how they police themselves, and how they police the traffic that comes through there, in and out of their space.

LEDGETT: Yeah, Facebook said they’re hiring twenty thousand security people.

WOODRUFF: Right. Right. So as we move ahead, I mean, how much do you, Rick Ledgett, worry about this, I mean, in terms of—and, by the way, we’ve only been talking about the Russians. They’re not the only ones doing this, are they?

LEDGETT: Right. Right. The Russians are the most egregious, I think. There’s been some recent stuff done by the Iranians. The Chinese are doing it, although with a slightly different approach. Russia’s goal is to hurt the West, hurt the U.S., hurt Western democracies. I think China’s goal is more long-term: make China the largest and most influential power in the world. And so there are different timelines, there are different kinds of approaches. The Chinese, I don’t believe, are, you know, actively putting disinformation into Americans’ information space at the same scale or pace as the Russians are.

WOODRUFF: Is there any risk that we—that we hype this—

LEDGETT: Sorry, can I say one more thing about the Chinese, before I forget?

WOODRUFF: Sure. Sure, go ahead. Yeah.

LEDGETT: So, look—next time you go to the theater—or watch a movie on HBO, especially one that’s made in the last few years—look at the production companies. Look at the companies that paid for having the movie made. And you’ll find that a lot of them are Chinese companies. And then if you look at the movies those Chinese companies make, find a movie made by a Chinese company that says anything bad about China. I have not been able to do that so far. And so that’s an example of—

WOODRUFF: Fascinating. I hadn’t seen that.

LEDGETT: —controlling the information space, but in a soft power kind of way. We’re not introducing disinformation, we’re just flattening the curve of the variability of the information that you’re exposed to.

WOODRUFF: Are we talking about MGM and 20th Century Fox? I mean—

LEDGETT: No, no. These are actual Chinese names of companies that are backing these.

WOODRUFF: That are backing—that are playing the role of producer.

LEDGETT: Yeah.

WOODRUFF: But—

LEDGETT: A friend of mine told me that last year, and I wasn’t sure if it was actually going on. And I did a little research and it turns out it really is. And it’s the kind of subtle influence campaign that’s a little scary.

WOODRUFF: Are we—I’m going to sort of turn this whole thing around on its head. Are we worrying too much about this? I mean, we are a big country, what, 325 million people, the most economically powerful nation on the planet—

LEDGETT: For the next couple years.

WOODRUFF: Oh, really? Just for the next couple years?

LEDGETT: Yeah. The Chinese and U.S. economies are going to cross here in the near future.

WOODRUFF: Yeah. Yeah. But we’re a significant player. And we’re worried about this country that is—what’s the Russian GDP? I don’t know, it’s—

LEDGETT: Yeah, about a sixth of the size of California.

WOODRUFF: A sixth—OK. So are we overly worrying and pulling our hair out about this? Or is it something that we should be—

LEDGETT: I think it is something that we need to worry about, for a couple of reasons. One, it eats away at the foundations of our democracy. That’s what makes the U.S. great. It’s not because we’re smarter, or taller, or better-looking than everybody else in the world. That’s not necessarily true. It’s because we have a way of operating that gives our citizens the ability to move beyond just executing the next phase in a plan—to think out of the box, to innovate, to take risks, to challenge ideas and assumptions in ways that virtually no other country in the world does. The things that make that possible are things like freedom of speech, the economic system that we have, and the ability to unify, to get together behind ideas and make them happen. And things that undercut that are a danger to the country in, I think, the near term. And I’ll define near term as five to ten years out.

WOODRUFF: It’s values that we’re talking about.

LEDGETT: Yeah, exactly right. Yeah. Yeah.

WOODRUFF: And if you were—if you were still in government right now, what would you be focused on in all of this?

LEDGETT: In any part of government or in my old job?

WOODRUFF: Well, in any part—the White House. I mean—

LEDGETT: Yeah. I think what we need that we don’t yet have is a national imperative in this space. Government is not the solution to the problem, but government is a component of the solution. And it’s got a key role in orchestrating and alerting people in a way that we haven’t done yet. And I’ll use the “we” because I still kind of feel governmenty. So there are lots of efforts in individual departments and individual organizations that were focused, you know, on the 2018 elections, and remain focused on the 2020 elections. But there’s no unifying, whole-of-government sense that this is a national imperative. It’s like, you know, the challenge to put a man on the moon in the next decade. That’s the kind of thing that I believe we need to be successful in this near term.

WOODRUFF: So we’ve been—in essence, the United States has been taken advantage of, because we’re not focused as we should be.

LEDGETT: President Putin does judo as a sport. And judo is all about taking your opponent’s strengths and using them against them. And the Russians have been masterful at doing that—using our First Amendment free speech against us, using our openness as a society against us, and turning our strengths into weaknesses. And in doing that, eating away at those foundations of the democratic institutions in this country.

WOODRUFF: Vladimir Putin looks pretty healthy. He looks like he’s going to be around for a while. But at some point he won’t be where he is. How confident are you that the people—person who comes behind him, or people, are going to be as determined as he has been to carry out this kind of thing?

LEDGETT: I think that’s unknowable at this point. It’s—you know, does he have a successor in mind? I don’t know the answer to that. You know, most people who pick their successors don’t pick someone who’s diametrically the opposite of them. They pick someone who acts and thinks the same way that they do, so.

WOODRUFF: But it’s interesting that we don’t—or at least, as of the time you were—in April of 2017, it wasn’t clear who his—

LEDGETT: Or it was something I couldn’t talk about if I did know. (Laughter.) Sorry.

WOODRUFF: I think it’s about—is it time? Yeah. I’ll take questions from members. And I am to tell you—a reminder, again—the meeting is on the record. So whatever has been said by now is all on the record, and it will continue to be. If you want to ask a question, we have a microphone. Speak directly into it. We ask you to stand, give us your name and your affiliation, and we ask you to keep it to one question and keep it as concise as possible. This sounds like the White House briefing—(laughter)—except they don’t have very many of those anymore.

LEDGETT: (Laughs.) We’re not going to throw anybody out of the room, though.

WOODRUFF: OK. Here you go. Yes, sir. Stand up and, if you would, give us your name and affiliation.

Q: I’m Kevin Sheehan with Multiplier Capital.

Thank you for your remarks. What I was struck by was the continuity. And I was left with the thought that if Felix Dzerzhinsky had had this technology available a century ago, he would have done the same thing that the leaders of the SVR are doing today, although maybe he wouldn’t have identified us as the main adversary.

What I was hoping you could do, recognizing that this is an unclassified forum, is really talk about the SVR as an organization. Have they gotten better since 1991 and ’92 than they were previously? And is that because of organizational change? Or the technology is favoring them? And are there new vulnerabilities that this new information age has created in the—in the SVR?

LEDGETT: Sure. So the SVR’s actually not the main engine in this space. It’s the GRU, the Russian military intelligence. SVR’s like the CIA. GRU doesn’t really have a direct analogue in the United States—a military intelligence organization, very large, and charged with information operations. And what they’ve done is taken something that—the Russians are big on developing doctrine. And a few years back the head of the defense forces, Gerasimov—I think I said that right—wrote a paper about actually using this kind of power against adversaries, short of actual kinetic fighting—being able to make it so that if and when you do have to fight them, they’re much weaker, they’re disorganized, and they’re in disarray.

They have actually done a really good job of implementing it. Like I said, they’ve had training grounds very close to home for the last eleven years now. If you look at things that have gone on in places like the Ukraine from an information operations point of view, from a cyber point of view, from what we would call covert action point of view, that’s been their test bed. And they’ve tried things out there and then run them against other parts of the world, including the United States.

So I think that the services have evolved. They’ve learned how to use this sort of thing. They’ve also proven vulnerable to this sort of information used against them. You look at the roll-up of the GRU agents in the Netherlands a couple of weeks ago, where they found the guys outside the OPCW—the chemical weapons organization—trying to hack into its wi-fi network. And you know, some of the same tools that they used were used against them, causing them to be rolled up. It was also useful because it sort of poked a hole in this idea that these guys were all ten feet tall with a big red S on their chest. They’re really not. Some of them just made some really egregious tradecraft errors.

So did I answer your question? OK.

WOODRUFF: Yes. All the way in the back. Uh-huh.

Q: Hi. Zach Biggs with the Center for Public Integrity.

I wanted to ask you about the role the government might have with this particular issue. Historically speaking, these sorts of propaganda or influence operations have largely just been tolerated by governments. There’s limited international law prohibiting it. Do you think that the IC and DOD, specifically CYBERCOM, have a role? Has a threshold been reached where those sorts of organizations need to take more active measures to prevent this kind of influence operation from taking effect?

LEDGETT: It’s a great question. I think that what’s happened is that the way the information space has evolved in the United States—and, I would argue, in most other parts of the world—has created new vulnerabilities for the population in ways that we didn’t have before. And the response to that needs to span the entire range of potential government response options. So diplomatic, economic, intelligence, military, you know, things with allies—all those are components of that. And a cyber activity doesn’t necessarily beget a cyber response. In fact, it’s often not useful to have a cyber response to a cyber activity.

In the case of information operations, because it does span so many different components of the government’s ability to counter it, it’s got to be a whole-of-government response. And you have to go after the levers. So what is it—we’ll keep talking about the Russians—what is it that Vladimir Putin cares about? You know, off the top of my head, he probably cares about the support of the oligarchs, supporting the military, supporting the intelligence services, his money—however many billions of dollars he has overseas—and control of information to the Russian people. So there are things that we could do to go after each one of those to decrease the value and increase the cost to him. Because right now, value’s here, cost is here. We have to reset those to something more level, or inverted, to get him to stop.

WOODRUFF: OK. Right here in front. Yes, sir. Yeah.

Q: Glenn Gerstell. Thank you very much for this, Rick.

LEDGETT: Where did you say you work?

Q: NSA. And had the pleasure of working with you. So it’s good to be back.

Now that you’re out of government, can I take advantage of that fact and ask you: A number of commentators have talked about how the government is organized to deal with the threats posed by artificial intelligence and the various cyber threats that we’ve all been discussing. Now that you’re out of government, what’s your perspective on whether we need to change something in the executive branch and Congress? Are we well suited, well positioned to deal with these threats? Or does something need to be done in that regard?

LEDGETT: Yeah. I mean, the problem with organizing for each threat is you end up reorganizing about four times a day, because all threats are different, and the perfect organizational structure, you know, might not be the same for each one of them. I’m more of a fan of taking what you have and combining it—what we would call a joint task force or a tiger team sort of thing. Again, this requires a whole-of-government approach, because by definition you’re bounded by the authority lines between the different departments and agencies of the government. But say: this entity is going to lead the response to this. I’m going to take people from Justice, Commerce, State, Defense, CIA, NSA, FBI, you pick it—all the different relevant agencies. I’m going to put them in a room, and they’re going to be empowered to act and to come up with options, and maybe even to act in some cases using the authorities derived from their organizations. That gives you the two things that you need in this kind of a fight: integration and agility. So I don’t think it’s a permanent organization. I think it’s a functional, time-bounded sort of approach.

WOODRUFF: But you’re saying—you said a minute ago we weren’t doing that, that we haven’t—we don’t really have a—

LEDGETT: We’re not, no.

WOODRUFF: Somebody else in the front. Yes, sir, right here.

Q: Thank you very much. Todd Helmus, RAND Corporation.

We’re talking a lot about trying to play defense against these types of initiatives. But I want to ask you about playing offense. Do you envision a role in the future where the United States is using bots, fake troll accounts, AI, deep fakes, all of that as part of an offensive information campaign, not necessarily directed at Russia to punish them for doing it against us, but, say, in the theater of operations, and providing those authorities to meet those operations?

LEDGETT: I don’t think so. And I don’t think so because it goes directly against the values of our nation. And I think it might have been Laura who said it earlier: I think our weapon is the truth, and getting the truth in front of people in the way that helps them see what’s different about us, and what’s different about our way of life. So I mentioned earlier, one of the things Putin cares about is control of information to the Russian people. You might know that the Russian version of Facebook is called VKontakte. And on VKontakte, families of Russian soldiers who’d been killed in Ukraine would put up pictures of the funerals.

And there was a team of people from the Russian government who would go through there and take those off, and suppress that information—again, to control the flow of information to the people. Don’t tell them things you don’t want them to know. And so we could easily—I can think of one hundred, maybe two hundred different ways that we could get information into Russian citizens’ information space that the Russian government didn’t want them to have—truthful information. And as a way to disincentivize, you know, Mr. Putin from what he’s doing, I would say: Let’s do a half-dozen of those and tell them: We’re doing this. And we’re going to continue doing it until you stop doing what you’re doing.

So I think use of information in that way, weaponizing the truth, so to speak, I don’t think deep fakes, I don’t think misinformation, disinformation is something that we would use, or should use.

WOODRUFF: But as long as it was truthful.

LEDGETT: Truthful, yes.

WOODRUFF: Truthful is OK.

LEDGETT: Right.

WOODRUFF: Good. That’s a relief. Yes, sir.

Q: Hi. Fred Roggero from Resilient Solutions.

We’ve been talking about the issue quite a bit in sort of a state-centric way. But these days, when we have Alexa on our kitchen counter and Siri in our pockets and purses, corporations have actually been able to gather more data than anybody at the NSA would ever, in their wildest dreams, hope to gather about the U.S. So the question is—

LEDGETT: We don’t want that data, just to be clear. (Laughter.)

Q: So in that—in that context, how secure is that data, in the nation-state context—from—

LEDGETT: Oh, it’s not. It’s not.

Q: And if so, do you see a shift—if knowledge is power—do you see a shift in power from the government to these corporations who are collecting all this vast amount of data?

LEDGETT: That’s a great question. First off, the information is not safe. You know, if a determined nation-state or high-end criminal actor wants to get access to information that a company has, then they’re going to get it. There’s—you can—you can put up defenses, and you should put up defenses, but if they are willing to devote the time and attention needed to really get that information, then they’re going to be successful one way or another. And so relying on that data being totally secure is not a good strategy.

I think, though, that the idea of that information being useful for the private sector is true, and it’s actually what’s feeding the big data revolution. I mean, you have to have data—you have to have big data—in order to do machine learning at scale. And that’s actually proving to be a boon to the economy. And, again, in the earlier talk the panel discussed that if you clamp down on that too much, then you starve the engine of machine learning, and we end up behind the curve. Our principal competitor in this space is China. Some people would say we’re neck and neck, some people would say we’re ahead, some people would say they’re ahead. I think there are too many factors for me to definitively say.

But I think that if we choke the input to the AI and machine learning feeds, then we disadvantage ourselves. So I don’t think the issue is how do we control the flow of information. I think the issue that we need to deal with is how you are allowed to use that information. And so we sort of think of this—we’re in a big data world where data comes in from all different kinds of sources. And we spend a lot of time talking about the input, and how we got the input, and what inputs we’re allowed to put in. I think we need to spend a lot more time talking about the outputs and the outcomes. How are you allowed to use this information, and to what ends? And—listen, I do not think the EU’s GDPR approach is the right approach.

WOODRUFF: How would—what would that look like? You said we’re talking a lot about input and not enough about output. What would that look like?

LEDGETT: Yeah. So I think there’s regulation or legislation on what commercial entities are allowed to do with the data they get, and what disclosure rules they have to follow. There’s some of that now, but it’s kind of a patchwork, and I don’t think there’s a good, well-thought-out legal regime. John Podesta, when he was in the White House—I want to say this was maybe 2012—did a study on this. And it’s actually a pretty good read; it talks about the role of data for the government and the private sector.

WOODRUFF: OK. Let’s see, yes, over here.

Q: I’m Jacob Breach with the Department of Defense. Thank you for being here to speak with us.

My question relates to your comment, which I agree with, that the strength of America—our greatest strength is our values, including free speech. So when you—in an earlier panel, we talked—the panelists talked about China exporting technology, either through its apps and shaping the information space, or exporting the technology to fellow illiberal regimes. Can you talk a little bit about the threat that that poses to our, you know, strength, and what can we do to combat that?

LEDGETT: Sure. (Laughs.) That’s kind of a broad reach, so I’ll sort of hop on a couple lily pads through there and you can tell me if I got to what you wanted, or you want to elaborate or something else. But I think the—certainly the Chinese—the spread of Chinese technology around the world and the spread of Chinese business around the world potentially has two effects. One is indirect, and I talked about the ownership of movie production houses and how you can strategically shape information in a subtle, low visibility way over time. And if you take—if your view of the world is twenty-five, fifty, one hundred years, then that’s a great strategy because hardly anybody gets mad about it.

The spread of technology has direct implications in terms of the information flow across those pieces of technology. And you’ve seen in the press, I’m sure, discussions about things like Huawei, the Chinese telecommunications company, concerns over national security implications of letting Huawei into, say, the U.S. backbone networks, or key parts of the telecommunications infrastructure. China has a law that requires Chinese companies to, on-demand, provide data to the—to the organs of state security, the ministry of state security, the—what used to be the 3PLA.

The legal regime for that is about this deep. The ability to get the authority to do that is actually pretty easy and pretty low level. In the United States, people say, well, the U.S. can do the same thing via the Foreign Intelligence Surveillance Court. That’s a different legal regime. It’s much more—much more process, much higher bar to getting it, and it’s a very different thing. Those are not a—that’s not an equal level of process there. Same for Russia as well. They have a law that any Russian company operating anywhere in the world is required to provide information to the—to the FSB, their equivalent of NSA—on demand, and any company operating in Russia is also required to do that.

And so, again, those are—those mean that the technology, the applications from countries like that are susceptible to being used to gather intelligence or information. And also, in the case of Russia, being used to put out information. Did I get to your question?

Q: Well, what can we do?

LEDGETT: What do we do about that? Oh, it’s a really hard problem. Supply chain risk management is kind of the phrase that we use to talk about this. And supply chain being more than just the devices—it’s everything from the services to the supersession of devices. It’s a really complicated problem, trying to make sure that you know the provenance of all that. In terms of the influence of things like movie houses, I’m not really sure what you do about that, because it’s all legal. The Chinese are not doing anything illegal. They’re using the rules of the system that we set up in order to do the things that we’ve done. Now, if they got to the place where they were hiding, you know, the identity of where the money’s coming from, that would be a different issue. But I don’t believe they are.

WOODRUFF: But it sounds like you’re saying there’s not enough going on to counter that.

LEDGETT: I just don’t think we’ve thought about it from a strategic point of view. I mean, this is sort of a—I think a strategic conversation that we need to have, informed by real information, and then put it front of the American people and the Congress to say: What are we going to do about this? Because it’s not a—this is not a one-week thing, or one month, or a one-year thing. This is a long-term strategy.

WOODRUFF: Right here in front.

Q: Kim Dozier with The Daily Beast.

So where would you—

WOODRUFF: I’m sorry. Who are you with?

Q: The Daily Beast.

Where would you put a body to regulate information, or to go to verify? Would it be at the U.N.? Would it be one of these cooperative cyber bodies that the EU is standing up? You know, somewhere where I, as a reporter, could go and say: Well, they did not or did put their seal of approval on this piece of video?

LEDGETT: Yeah. It’s—I don’t think it’s a government’s job to do that. I mean, the government can provide some input to that, but I don’t think you want the government doing that. In Europe, I believe it was—there’s a consortium of entities that do that, some civil society groups and maybe some media groups, if I recall correctly, that were doing that sort of work, like the provenance of stories. So I think something like that might be the answer. I don’t have a definitive answer, though, on this spot on the map is where it should be. But I’m pretty sure that spot on the map is not the government. That’s a really easy step from that to censorship, and I don’t think that’s something we want.

WOODRUFF: Did you think there’s serious—I mean, are there people advocating for the government to have that role?

LEDGETT: I’ve not heard any serious talk about it.

WOODRUFF: OK. Let’s see. Looking around. Yes, right here.

Q: Hi. Nate Fleischaker. I’m with the Department of Defense.

Can you help me think through kind of an internal turmoil I’ve got? On one hand, I really appreciate that you have democratic norms and truth being your weapon. I’m also thinking through historical examples where military deception was very effective. So can you help me think through, like, Operation Overlord, where the military and large parts of the government were very active in trying to deceive the Nazis about where we were going to land. And it was very effective in making the landing possible. So at what point does that become allowed, or at what point is it considered propaganda or inappropriate?

LEDGETT: Yeah, sure. And there’s that story about the man who never was, the corpse that they released off the coast to indicate that they were going to invade somewhere else. I think that military deception is a different thing, and military deception is something—it’s a tactic that you use and, you know, was the deception about, you know, where D-Day was going to occur, was that strategic or tactical? I still think that was tactical in the greater scheme of things. So I think that kind of thing is perfectly acceptable. It’s actually accepted in international norms. That’s different than saying, you know, we are going to lie to the people of, you know, country X, whatever country X is, in order to change their perceptions over time about a particular issue. I think those are—those are fundamentally different things.

WOODRUFF: OK. Yes, right here in front.

Q: Elizabeth Bodine-Baron from the RAND Corporation.

Following up on that, how does that play with the goal of the United States to change people’s views when it comes to violent extremism and other things like that? You know, political Islam, and things like that?

LEDGETT: Yeah. So I think what the U.S. has done in that space—we haven’t, you know, told lies to people. We’ve tried to expose them to the truth and to differing viewpoints from the radical Islamic viewpoint, like the ISIS sort of approach. And there was something that was started three years ago now, where they got Madison Avenue, Silicon Valley, and Hollywood together to come up with: How do you reach out and appeal to the target audience—the young Muslim or the young person who might be a suitable target for radicalization? They called it Madison Valleywood. They sort of jammed them all together. I lost track of where they are on that. I don’t know if somebody else knows. But the idea was, put together a combination of technology, advertising, and, you know, visually appealing tropes that would give people an alternative to the radical Islam point of view. Something to look at besides the ISIS guys walking through Syria, handing out food to the refugees.

WOODRUFF: What came of that? I don’t have a clear memory of it.

LEDGETT: Yeah. I lost the bubble on that when I left government. I don’t know. But I think that’s the kind of approach that you look at for the countering violent extremism problem.

WOODRUFF: I don’t see a hand. Who’s got a burning question out here?

What do you think—what am I not asking you? You’ve talked about what we need to be doing that we’re not doing. But what should people—I mean, how should people think about it? I mean, frankly, when most people hear artificial intelligence, their eyes immediately glaze over because they don’t understand what it means.

LEDGETT: Right. Or they go right to Skynet or something, you know, where the machines are going to kill us all.

I think with machine learning there’s a—there are a couple of points—and this is a couple of technologies, machine learning and cloud technology, where they come together. And this is kind of important, I think. My favorite definition of the cloud is somebody else’s computer. So when you put stuff in the cloud, you’re putting it in somebody else’s computer. And so it’s important to me that you know—that I know where that computer is, who has access to it, how the people who can touch it are vetted, and that sort of a thing. And when I talk to clients, I advise them to know that before they outsource to the cloud.

And then machine learning is approaching the point where machine learning algorithms are going to learn more than people can understand. So you’re going to have a thing—a software program that’s taking in so much data, running an algorithm, that it’s going to exceed the ability of humans to go back and track through the information space. And so think about the implications of that. That’s got huge implications in my old business, the intelligence business. Well, it looks like we’re going to have to invade Slavovia. Why is that? Well, because the box said so. I can’t really explain the reasoning behind that, but the box said so, so we’re going to have to do that. That’s not going to fly. Same for the legal business. Explain to me how you got this answer; defend this answer that the program spit out. Well, I don’t know. I can’t explain that. So that’s actually an area that folks are doing research on: is there some kind of modeling or abstraction you can do of machine learning that lets you certify or in some way validate that the machine learning process went the way that it was supposed to?

Now, think about that in terms of fake news and information operations. So the ability of the machines to outpace a human—whether we’re on the receiving end of that, whether the bad guys are doing that to us, or whether we’re doing that in defense—and being able to understand what the offensive information operation directed against us was, and what we did defensively, and being able to understand and accept those two answers. That’s, in my mind, a big, huge issue. And a lot of this stuff is cloud-based. So I’ve got software where I don’t understand how it got to the answer, running on a computer where I don’t know where the computer is and who is running it. So that’s a pretty big uncertainty area, and something that I think we need to work through understanding in a better way. And the good news is there are people in the research community who are doing that.
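
One concrete flavor of the research he mentions, sketched below with synthetic data, is permutation importance: scramble one input at a time to an opaque model and measure how much its accuracy degrades. That at least identifies which inputs drove an answer, even when the model cannot articulate its reasoning. The model and features here are invented stand-ins, not any real system.

```python
# Permutation importance against an opaque model: scramble one feature and
# see how far accuracy falls. A big drop means the model leans on that feature.

import random

random.seed(0)

# Synthetic data: the label truly depends on x0 and x1; x2 is pure noise.
data = [(random.random(), random.random(), random.random()) for _ in range(500)]
labels = [1 if x0 + x1 > 1.0 else 0 for x0, x1, x2 in data]

def opaque_model(x0: float, x1: float, x2: float) -> int:
    """Stand-in for a learned black box we can query but not inspect."""
    return 1 if x0 + x1 > 1.0 else 0

def accuracy(rows) -> float:
    return sum(opaque_model(*r) == y for r, y in zip(rows, labels)) / len(labels)

baseline = accuracy(data)
for i in range(3):
    col = [row[i] for row in data]
    random.shuffle(col)  # destroy any relationship between feature i and labels
    perturbed = [row[:i] + (v,) + row[i + 1:] for row, v in zip(data, col)]
    print(f"x{i}: accuracy drop = {baseline - accuracy(perturbed):.3f}")
# x0 and x1 show large drops; x2 shows essentially none.
```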

WOODRUFF: You mentioned the cloud. I mean, if we don’t trust the cloud, then where are we going to store all this important information?

LEDGETT: Well, trust the part of the cloud that you know. I’m not saying don’t trust the cloud, just don’t throw your information out the window and hope it lands someplace good. (Laughs.)

WOODRUFF: OK. Here.

Q: Lucas Koontz (sp) with the Joint Staff.

You mentioned some avenues—some positive aspects of our culture or our institutions—that Russia’s taken advantage of. Freedom of speech was an example. You didn’t specifically mention this with China, but your film industry example, and the supply chain point related to it, kind of made it seem like the search for capital or the search for profit might be one of the avenues that China is taking advantage of. If that’s the case, do you see any way to combat that?

LEDGETT: So I think that China is—the economic engine of China is not like the economic engine of the United States. We are motivated by making money and stockholder dividends. They are motivated by enhancing the state. And all Chinese companies are either state-owned enterprises or almost state-owned enterprises. There’s not really such a thing as an independent, free company in China. And the way that they’re writing the laws, that’s becoming even more so. So I think that the motivations are different. And I think that the way that you address that with Chinese companies is we have to think strategically. We can’t think on a quarter-by-quarter basis. We have to think in long terms—you know, ten, twenty years.

The Chinese are very transparent. If you look at their 2025 plan—their plan that’s currently out there for comment—they said: We’re going to be the best in the world at this, and this, and this, and this. Things like, you know, handling our aging population, and that sort of thing. But the things that they put out there as their goals directly trace to the things they’re doing in information space and in cyberspace. So: I’m going to be number one in taking care of my aging population. That means that Chinese state-sponsored hackers are out there going after pharmaceutical companies today and stealing their intellectual property today. That’s happening. And the folks from the civilian intelligence agencies—the CrowdStrikes and the Symantecs and the FireEyes—keep telling you about that in generic terms. They won’t give you the specific companies.

The only things the Chinese go after that are different are the five poisons. And the five poisons are Falun Gong, Taiwan, Tibet, the pro-democracy movement, and the Uighurs, the Muslim minority in China’s west. And so any place in the world where there is representation supporting the five poisons, you will see the Chinese in various ways going out there and trying to affect that information.

Did I answer your question?

WOODRUFF: You didn’t have a follow up? OK. There’s a hand right here. Yeah.

Q: Thank you. Shiraz Saeed from the Starr Companies, insurance carrier.

You talked a lot about artificial intelligence and the impact on information. Can you talk a little bit about artificial intelligence and the impact on physical warfare in terms of autonomous vehicles or any of these other items that might impact it?

LEDGETT: Yeah. That’s a great question. And there’s a lot of work being done on that—for autonomous vehicles, for swarming vehicles—you know, like swarms of drones—that sort of a thing. And I don’t think it’s—and my Air Force friends will hate this idea—I don’t think it’s, you know, too many years in the future where you’re not going to have manned platforms out there in combat, or at least they’ll be the exception. They will not be the bulk of the forces.

And I think the—that also changes the kind of warfare that you do to one where things like command and control become really important, you know, the ability to hold adversaries’ satellite systems at risk and reconstitute when ours are held at risk, and that becomes a really important maneuver going forward. It’s all part of that combined thing. You guys know the Third Offset strategy? First offset was nuclear weapons. Second offset was stealth, night-vision goggles, stuff like that. DOD’s been looking for the third offset. And the things that they seized on—AI, autonomous vehicles, things like that—are exactly the sorts of technology that the Chinese are actively acquiring at speed.

And they’ve figured out—we have a thing called CFIUS, the Committee on Foreign Investment in the U.S., where, when there’s an acquisition being made, there’s a group—it’s led by Treasury—that will convene, and they’ll make a national security determination of whether that acquisition is a threat to national security. Most recently that was invoked by the president when he denied Broadcom the ability to buy Qualcomm in the U.S., because he thought it was a threat to competitiveness in fifth-generation wireless. The Chinese figured this out. And there was a report that was done by the Defense Innovation Unit, Experimental—it’s now—they dropped the X. They’re not experimental anymore.

But last year they did a report that talked about Chinese research dollars in the U.S. And something like 85 percent of Chinese research dollars in the U.S. are going to angel, seed, and Series A funding of startups. And so they’re buying heavily in these same areas that we’re looking at. So they’re buying the startups. And maybe eight out of ten will go down, but for the two that are good, that actually are successful, they’ve acquired their intellectual property at very low cost and completely evaded the CFIUS process.

WOODRUFF: In AI and what else did you say?

LEDGETT: AI, autonomous vehicles.

WOODRUFF: Autonomous vehicles. Fascinating. Well, we could go on, and on, and on. This is endlessly fascinating. I want to thank all of you for being here. And I especially want to thank Rick Ledgett for enlightening us. Thank you. What a great conversation. We appreciate it. Thank you.

LEDGETT: Thank you. Appreciate it. (Applause.)

(END)
