Meeting

What Tech Giants Can Learn From Hackers: Ethical Lessons in Cybersecurity

Monday, June 3, 2019
A man at the Def Con hacker convention in Las Vegas. Steve Marcus/Reuters
Speakers
Joseph Menn

Technology Projects Reporter, Reuters; Author, Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World

Katie Moussouris

Founder and Chief Executive Officer, Luta Security; Visiting Scholar, MIT Sloan School of Management; Affiliate, Belfer Center for Science and International Affairs, Harvard University; Cyber Fellow, New America

Presider
Jason Healey

Senior Research Scholar, Columbia University School of International and Public Affairs

HEALEY: Good afternoon, everyone. My name is Jason Healey. I’m really glad you could be here. Welcome to today’s Council on Foreign Relations meeting on “What Tech Giants Can Learn From Hackers: Ethical Lessons in Cybersecurity.” 

As I said, I’m Jason Healey. I’m a senior research scholar at Columbia University’s SIPA, the School of International and Public Affairs. I’ll be presiding over today’s session. 

And just a reminder that this is on the record, so please use what you see here. You can quote us on this. You can take fabulous photographs of your—of your panelists today. You can use it all. 

And I’m really excited to be here for part of this conversation, for this, because the people that we have—you’ve got their bios, but it is just going to be a really good conversation because we’ve got one of the best tech journalists. Joe Menn has been in the business since the dot-com era, since Y2K. You’ve been writing about security specifically for probably fifteen years now, right? 

MENN: Twenty. 

HEALEY: For twenty years now just on the security side, so really is going to be able to give us a great view on that. 

And especially happy also to have with us Katie Moussouris, who is just hacker royalty when it comes to this—(laughter)—and has just been involved as a hacker and has really learned a lot of lessons on this and many, many other topics. 

So if you can just take a second and look around your table and around the room, because all of us are going to be a little bit cooler because we get to hang out with Katie during this time—(laughter)—and we’re all going to be a whole lot smarter because we get to hang out and hear from both Katie and Joe. 

So it’s really an interesting title, “What Tech Giants Can Learn From Hackers: Ethical Lessons in Cybersecurity.” And first, I really want to hear from you, Joe, because I think it’s just fascinating to hear what you’ve uncovered in your research here. And then—and then we’ll follow up with Katie after that to hear your perspective. Joe? 

MENN: Well, I guess—thanks very much for the—for the introduction, and thanks to everybody for coming. I’m really happy to do this. I’ve been working on this book for three years, so it’s just—it’s officially published tomorrow, so you’re going to have some of the earliest copies that are not already in the possession of intelligence agencies—(laughter)—and some of Beto O’Rourke’s more intrepid primary competitors. 

HEALEY: And I—and I forget; what was the title of that book? 

MENN: It’s Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World. 

So my last book was called Fatal System Error, and as you might guess it was kind of a downer. (Laughter.) It was about how the confluence of geopolitics, the sort of indefensible attack surface of the internet and software, and organized crime was making it pretty much impossible to stop cybercrime and hacking for nation-state purposes. And you know, I stand by that. I think that’s all true. But as the years have gone by, I’ve looked to try and find something happier—you know, some way forward. I mean, I think people realize the problem is disastrous, and I want to point to some things that are good. 

And I came up with writing about the Cult of the Dead Cow because of their moral development over the last three decades. They go back even farther than Katie. They go back to the mid-’80s. 

MOUSSOURIS: I am older than that. I mean, I am. (Laughter.) 

MENN: And you know, in the beginning their moral quandaries were: is it better to steal a little long-distance service from a lot of people or a lot of long-distance service from one big company—(laughter)—because, you know, if you wanted to connect to a bulletin board—which in those days was what Facebook is now—it was hundreds of dollars in long-distance bills, and your parents weren’t going to like that. So it started with fairly low stakes. But these are the same guys, you know, who discovered really bad flaws in software and then had to figure out, well, if the software company won’t answer the phone when there’s a problem, what do we do with that information? Do we just use it to amuse our friends or hack our enemies? Do we give it to the government? Or, as the cDc decided, should we have a media circus and, like, throw out copies of CDs to allow anyone to hack Windows? Which is what they did in 1998. 

So that was fairly dramatic. And then they just—they just kept evolving. They started new companies. They worked for nonprofits. They pretty much coined the idea of hacktivism. And they went to work for the government as well. So there are all these different paths they took to try to do the right thing. And I feel that now that Facebook and Google and Apple, all these companies are in these sort of continual moral quandaries, they can learn from the security folks, the hackers—the hackers that were wrestling with moral stuff all the way along, and sort of learned as they went, and were fairly open about the process. 

HEALEY: And there’s a lot in there about, you know, that it’s hard to stop. The problems were disastrous when you did the last book. That was 2010? 

MENN: 2010. 

HEALEY: And so I would like to come back to that—whether the problems are still disastrous, and how that ties in with what the ethical things to do are as part of that. I also love the question of what you do when you find a vulnerability—that’s one of the things that you mentioned as the key ethical dilemmas in that, and the paths to do the right thing. And so I know the right thing right now is to turn to Katie. 

MOUSSOURIS: Well, no, I appreciate the—I appreciate the chance to be here as well. 

Just so that you’re all clear, I abdicated my hacker royalty throne back in 2007 when I stopped professionally hacking for a living. But my company now works with governments and large organizations, helping them organize a response to that very important question of what to do if somebody reports a vulnerability to you. 

HEALEY: Can I just say, professional hacking for a living, what do you mean by that? 

MOUSSOURIS: Well, we were called penetration testers. It’s still a profession today. In fact, you’ve got some penetration testing companies and some friends of mine in this audience that you should talk to if you are in need of those professional hacking services. But it’s essentially where a company hires a bunch of hackers; pre-vets them for skills, number one; and then also will run, you know, a typical background check that any employer would run. So most pen-test companies would not hire felons or convicted hackers. Some will. But you can negotiate that with your penetration testing company. 

Back in the early 2000s I was part of a group called @stake, which was born out of these hacking groups—the L0pht, cDc, et cetera—and I was in this company after I had been an independent hacker for hire. But I had grown up with this group and dialing into those BBSes. Getting myself in trouble by running up long-distance phone bills was certainly part of my childhood and early adulthood. And, you know, growing up in the Boston area being around these folks just kind of generally exposed me to a lot more of technology, which I already loved, than I would have been exposed to any other way. We had live hacking, you know, meetups with each other, and we would exchange knowledge and ideas. That’s how I learned how to hack systems I didn’t have access to, because at some of these warehouse spaces they would accumulate hardware and we could experiment on it there. So that was one of the primary reasons why the L0pht had a separate workspace. I think they were getting some pressure from a lot of their significant others, who happened to be female—(laughter)—to get this equipment out of the house, and so they rented a literal loft to put it in. And that was where a lot of the hacking would take place. But we also exchanged, you know, other physical security techniques. We’d all learn how to pick locks and try out new lockpicking techniques, as well, at these meetups. 

But then there wasn’t a professional—a profession for it when I was learning to do this in the late ’80s, early ’90s. There was no—there was no need. Governments and banks and other underpinnings of our society were not largely on the internet yet. So when there were no professional needs for these services, they were just our hobbies, and that’s what we did. 

But, yeah, now my company pretty much sets up governments and large organizations to be able to receive reports from friendly hackers who—you know, if you see something, say something, right? Report a vulnerability. And you would think that after about twenty years of this being a profession that this would be a no-brainer; there would be no need for anyone to instruct governments and large orgs how to do this, and how to react to a friendly hacker trying to report something. But, in fact, it’s pretty grim right now: 94 percent of the Forbes Global 2000 have no published way to report a security hole to them. You have to go hunting and looking. And often what my hacker friends do is they actually go to social media, they go to Twitter, and they try and find a security contact at a place. So this is actually quite common still. It’s common among governments. And in fact, three years ago, when the Pentagon started the program called Hack the Pentagon, that was the very first time the United States government had ever opened its doors to a random selection of hackers on the internet and asked to have its security tested and for the bugs to be reported, and that was a program that I helped design for the Pentagon. 
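
(Editor’s note: one widely adopted fix for the reporting gap described here is RFC 9116’s “security.txt,” a plain-text file served at /.well-known/security.txt that tells researchers where to send vulnerability reports. A minimal sketch, with example.com as a placeholder domain:

    Contact: mailto:security@example.com
    Expires: 2026-12-31T23:59:59Z
    Policy: https://example.com/vulnerability-disclosure-policy
    Preferred-Languages: en

Contact and Expires are the standard’s only required fields; the rest are optional.)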

HEALEY: And so that gets to the path to do the right thing that Joe alluded to. So you’re someone, you come across a vulnerability in the Pentagon or, you know, a Fortune 50 company, and you say, oh my gosh, what do I do about this? And so you’re saying that there was no way—and there still is largely no way—for them to do the right thing. 

MOUSSOURIS: Well, and you know, I actually take issue with it being called the “right thing.” And I take issue with hackers being categorized as black hats or white hats, right? I don’t actually recognize us as a Wild West cartoon of humanity. I think that the motivations of hackers are along the same lines as the rest of humanity’s motivations, and we are motivated by different things at different stages in our lives. But ultimately, it’s some mix of compensation, recognition, and pursuit of intellectual happiness. And that last one is a huge reason why we get into hacking. Compensation, you know, it’s just that we all have to make a living. And recognition need not be public. Often, it’s the recognition of our hacking peers that means more to us than anything else. 

So when I think about hackers doing the, quote/unquote, “right thing” when it comes to them deciding what to do with a bug that they’ve discovered, it’s much like any one of us deciding what to do at a yellow light, you know? (Laughter.) You may make a different choice on certain days, certain times of night, whether or not you’ve got somebody in your car that you really care about not, you know, getting into a high-speed police chase, et cetera, et cetera. So you’ve got a lot of calculus that you do in a split second at an intersection, and I think that calculus takes place with every bug. And you may have had a good reporting experience to one vendor and you feel like reporting bugs to that vendor any time you encounter them, or you could have a terrible reporting experience where you’ve gotten a legal threat and you decide that any other bug you find you’re either going to use it, you’re going to sell it to somebody who will use it, or you will do—or you’ll just sit on it, which is sort of the neutral thing to do. 

So when I think about expecting hackers to, quote/unquote, “do the right thing,” which usually means report the bug to get it fixed, that’s not taking into account all of the prosecution possibilities for them that they have to take into account. Why would I turn something over if I know for a fact that that organization is going to immediately send me a cease-and-desist and, you know, start putting gag orders on me? I would literally refuse, and that is absolutely what hackers do. 

So, yes, I try and clear the path for hackers to have least resistance when reporting a security hole, because we’re not actually in a Dr. Evil movie where, you know, they tell you ahead of time: By the way, I’m going to steal the following secrets from your company in this manner, and I have produced an entire report so that you can see how I’m about to steal it. I mean, that’s actually not—that’s not criminal behavior, right? So— 

MENN: As Jason was saying, there are different paths. And to me, one of the most remarkable things about the cDc is that as the playing field kept changing they would adapt to it. They didn’t just run into a wall with a particular software company or the fact that most software, at least back then, was monopoly or oligopoly. They would find different ways around it. So the public—they would hack the media. They put a lot of manipulative stuff into the media to get on TV and call attention to the fact that people’s computers aren’t secure and it’s in nobody’s interest to tell them that. You know, working in government and helping the buyers by, you know, founding companies like Veracode, as somebody in cDc did. So there are all these different ways to do it. 

And, right, every bug is different. Every hacker is different. And cDc itself only had—has only ever had fifty people, but it’s a big tent. People in the government, people who would never consider working for the government, different interpretations of hacktivism— 

MOUSSOURIS: Presidential candidates. 

MENN: Presidential—so far one. (Laughs.) Well, actually, Tequila Willy kept running, but— 

MOUSSOURIS: That’s true, that’s true. 

MENN: —he was more of a write-in. 

MOUSSOURIS: For those of you who didn’t read that article, Beto O’Rourke was part of the cDc at some point. 

MENN: From the— 

MOUSSOURIS: I note by the murmurs of surprise you had not all read that, and that is absolutely true. 

HEALEY: Yeah. Before he came to Columbia, so we didn’t have anything to—(laughter)—do with that.  

But so they came together—and what drove them to think about this as hacktivism? I mean, can you just talk about that term a little bit and their role in that? 

MENN: So it was—this is fascinating. It was a really organic process, right? So in the beginning they—you know, they’re teenagers writing stuff for their own amusement. Some of it is technical. Some of it is making fun of the more accomplished criminal hackers that they aspired to be, you know, but that were sort of more exclusive and elite. And then, because they were sort of neutral ground and they were funny, actually more talented hackers started showing up and wanted to join them. And so they started the first modern hacker conference—it was called HoHoCon—in 1990, which invited cops and the press: let’s all talk about what’s going on, what’s going on in security. 

There were hacker gatherings before, but they were more sort of drinking fests and war stories, and the cops that showed up were undercover. So that was an evolution. And then as they kept going, this talented crew from Boston in the L0pht started coming, and it was sort of like Switzerland. Some of the other, more talented criminal groups got arrested, broken up—LOD, MOD, some others—and so it was like—it was like the communal gathering place, the social club for people on all sorts of parts of security. 

And then—and then they survived because the L0pht had one of the first websites. All the other bulletin boards got wiped out by AOL and Netscape, but the L0pht had a website, so they took all the text files, which were the lingua franca from before, and preserved them in the cDc archive on the L0pht’s website. And so when people were starting to, you know, write about what this new Web thing was, where this culture came from—well, here was something that was already a throwback. In, like, 1995 it was a throwback to the earlier years. 

So it kept evolving, and as they got popular with the Microsoft stunts at DEF CON, all the press came to them and they’re—wow, we’re really big now; what should we do with this attention? And there was actually an outsider named Laird Brown—whose, you know, stage name was Oxblood Ruffin—who said, you know, you really ought to move beyond just tech. You ought to do politics, and you ought to do it specifically against the Chinese government. And that was very interesting— 

HEALEY: Really? So he was— 

MENN: —because that was the perfect target— 

HEALEY: So what year was this? 

MENN: — because nobody liked the Chinese government. 

HEALEY: What year? 

MENN: So this starts in ’97, ’98— 

HEALEY: OK. 

MENN: —and it really picks up steam in ’00 and ’01. And in ’01 at DEF CON the cDc had a panel just about hackers— 

HEALEY: DEF CON, one of the big hacker conferences in Vegas. 

MENN: Yes. Jeff Moss had gone to HoHoCon, got inspired, and did DEF CON, which sort of took over the space and has been going ever since. 

HEALEY: And so they picked on the—or they decided— 

MENN: They picked on the Chinese government— 

HEALEY: —that China was the target. (Laughs.) 

MENN: —which was really smart because there were some people in the cDc that were surprisingly close to Western intelligence, did contract work, et cetera. There were some that were anti-U.S. government, but they hated the Chinese government just as much or more. And if you can get hackers— 

HEALEY: That’s interesting. 

MENN: —to agree on anything, it’s that information should flow freely. 

And Oxblood had actually worked in the bowels of the U.N. as a consultant, and he knew that free flow of information is actually enshrined—access to information is enshrined in international treaties which are enforceable. 

So they worked with lawyers, and they set up this—you know, citing the Universal Declaration of Human Rights and other things—these treaties. They got people to say, you know, we should all come together and help the Chinese escape censorship, escape surveillance. And so they actually wound up driving development of Tor and other tools. 

HEALEY: Tor? 

MENN: Tor, the anonymity tool of choice for Edward Snowden, you know, widely used by spies, criminals, and dissidents to hide their tracks on the Internet, and—I will come out and say—a good thing. 

HEALEY: But I mean, so—cover hacktivism just for a second, and then let’s talk about, you know, this—let’s tie this to tech giants, and maybe Katie, pick up at that. And then after that we’ll probably open up to questions, so—you know, in ten minutes or so if you want to start getting my attention. 

A lot of times now I think if you are talking about hacktivism, the main thing that pops up is Anonymous, which is a quite different thing—I get it—than what they were trying—cDc was trying to get done nineteen years ago. 

MENN: Absolutely. So, I mean, a lot of—anybody can pick up the mantle and say, we’re the hacktivists now. Anonymous certainly carried the torch there for a while, I guess, about ten years ago. Their initial targets were the payment processors that were not processing payments to WikiLeaks. They got upset about that and so they did denial-of-service attacks against MasterCard, and PayPal, and the rest of them. 

You know, there are a lot of different interpretations of hacktivism. More recently there have been a bunch of hacks against spyware vendors. Spyware is a family of software that is used, you know, to track a spouse, or a kid, or a dissident. There are sort of Defense-Department-grade versions of it, there are police-department-grade versions, and there are, you know, back-of-a-truck versions of it. And a lot of it is deeply unpopular, especially when used against dissidents in undemocratic regimes. And there are a number of major vendors that have been hacked by people who say they are hacktivists. 

Actually, in the book I make a circumstantial argument that it may well be Russians, because these particular major spyware vendors were serving the West as well as obnoxious regimes elsewhere. But certainly some of this hacking is morally motivated—if not those big ones, then some of the other hacks against spyware vendors.  

So hacktivism can mean a lot of things, and you can disagree. Many reasonable people will disagree about what’s appropriate behavior and what isn’t. 

HEALEY: And so, Katie, earlier you had talked about this being a profession. 

MOUSSOURIS: Uh-huh. 

HEALEY: And when I think about a profession, I think about—well, part of a profession is having a code of conduct. And if you want to consider yourself in a profession and professionals, then you are signing up to the Hippocratic Oath, or you are signing up to—you know, as a military professional, you know—(inaudible)—laid out what that profession was going to be and what that code of conduct meant in that space. What do you think that is here? If you are an ethical hacker, if you want to go down this path, are there codes of conduct that can kind of help steer behavior for folks when they come up to that yellow light? 

MOUSSOURIS: Well, in terms of vulnerability disclosure, the standard terms—I’ve helped define them, you know, over the course of my career, but they existed before I was doing that type of vulnerability disclosure work specifically—were always something along the lines of: give us some time to fix it before you go public, right? Give us a reasonable time to fix it. And that reasonable time has changed over the years, for some good reasons and some not-so-good reasons coming from the vendors.  

It’s understandable to ask for maybe thirty days or so to fix a software vulnerability, and maybe an additional grace period to allow your users to apply the patch that you create. But when we get into IoT hardware, medical devices, things like that, it takes longer not only to create those fixes, but also to push them out into the field. We certainly see that with car hacking these days, you know. Until recently, the only car vendor that could do over-the-air software updates to their cars was Tesla, and I believe Jeep Chrysler has begun testing some for their newer models. 

So it has gotten more complex as to how much time you give an organization. And then—since you, the reporter, the hacker who is telling them about the bug, are trusting that organization to be truthful with you—when do you decide that, actually, I think more people will remain at risk, and I have a moral obligation to warn the public because this vendor isn’t taking it seriously? 

So I think there isn’t—there isn’t a set guideline saying, you know, thou shalt never drop zero-day, which is basically dropping full technical details about a bug before there is a patch available. There is no such thing as saying, I shall never drop zero-day; therefore, I am a moral hacker. 

I have dropped zero-day before because the organization was not responding. We gave them about four months to get back to us with anything, even an acknowledgment. And, you know, when we decided to go ahead and release the advisory, we didn’t publish all the technical details it would take to exploit the vulnerability. But we easily could have, and we could even have done so in the guise of saying to affected users: use this code to test whether or not you are vulnerable. Use our exploit code to test yourself, and then go ahead and apply, you know, some kind of remedy that we suggested. 

So moral ambiguity, I think, is much more severe and needs to be dealt with on the receiving end than on the giving end because we have been dealing with this as hackers and dealing with various temperature readings on vendors for, you know, over twenty years. 
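
(Editor’s note: in practice, the coordinated disclosure terms described above are often written down as an explicit timeline. A hypothetical sketch of one common shape, with the exact numbers varying by vendor and severity:

    Day 0       Researcher reports the bug through the published channel
    Day 7       Vendor acknowledges receipt and assigns a tracking ID
    Day 30-90   Vendor develops and ships a patch; Google Project Zero's
                well-known policy gives vendors 90 days
    Patch + 30  Grace period for users to update, then a public advisory,
                usually without working exploit code

If the vendor never responds, the researcher is left with exactly the yellow-light judgment call Moussouris describes.)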

HEALEY: Yeah, and there’s one thing I just want to point out, because a lot of what we’re thinking about, if you think about hackers, you’re thinking about vulnerabilities. You know, I always think about what the technology was like when I was growing up, right? I remember talking about computers and, all right, we’re talking about Microsoft. And I’m really glad that you brought that up. Now that we’re getting an Internet of Things, right, we’re talking about what do you do if you stumble across a vulnerability in an embedded medical device, a hospital device, a—you know, something that might even affect a whole city, like Baltimore just was. Sorry, go ahead. 

MENN: So I’m really glad you brought up this issue of sort of ethical codes because one of the arguments I make in the book is that there is a real lack of it, not just in the hacker world—I think the hackers are actually—have been thrashing this stuff around among themselves for quite a while—but mainstream engineers inside these big companies, the Facebooks and the Googles. 

So Bruce Schneier makes the point that at Harvard Law School recently they were distraught because only 20 percent of the graduating class was going into public service, and that was terribly low. In engineering colleges, the number is essentially zero, and nobody seems to have been paying attention until now. 

There needs to be a tradition of public service technology, which could be, you know, big companies allowing their employees to not just do side projects, but side projects for the public good. It could be fellowships, it could be working for the government. It could be lots of different things.  

And there is an engineering code of ethics—IEEE has a code—but it’s, like, a couple paragraphs on ethical things. And at the engineering schools, many of them now, to be accredited, you have to take a philosophy course, but too often it’s somebody, you know, coming in and telling you about Plato. It’s not like the Challenger case studies, where you really see, like, in a company, how would this play out, what would you do. There needs to be all of those things, and I think that the ethical codes really need to be fleshed out.  

Lawyers have a tradition of pro bono work. Doctors have a code, you know, where they treat people even if they can’t afford to pay. Engineers need to have this kind of moral shift where, because they are dealing with this stuff all the time, they realize they have an obligation to society. 

They didn’t have to until three years ago because everybody thought the tech was good. That’s not the universal view any more, and they need to earn it back. 

HEALEY: Yeah, and I like Katie’s phrase on that: there’s the moral obligation to warn the public. And what really struck me when we were talking earlier is that, you know, I spend a lot of my time looking at international relations and cyber issues, and there’s a lot of talk about, well, there are no norms in this space. And maybe that’s true with governments—I don’t fully buy it with governments, but maybe that’s true with governments. But there are norms here, or we’re even hearing about the norms of a moral obligation to warn the public: hey, I found a vulnerability; I should tell someone about this.  

It sounded like what the cDc and some of these other groups—and L0pht—were doing was saying this system of the Internet is hugely vulnerable, and we need to do something about it. It sounds like they were driven by this need to make sure the system would stay up and running. It almost sounds like that was their ethical compass to some degree: if you ask what’s right and good, it’s, let’s make sure the Internet is going to be sustainable, and it’s going to be there for everyone. 

MOUSSOURIS: Oh, yeah, we all—by definition—really enjoy computers. (Laughter.) For sure—and wanting to keep them up and running, and for our ability to talk to each other. 

HEALEY: Great. And we can compare that to states in a bit. OK, so—let’s see, I’ve got something—go ahead, please. 

MOUSSOURIS: Do we have time for one more thing— 

HEALEY: Thank you. 

MOUSSOURIS: —because I wanted to talk a little bit about surveillance software, and there is this moral ambiguity going on that has been informing regulation and legislation a lot around this type of software. 

On the one hand, obviously, you know, it is by definition software that is designed to take over a phone or a computer and invade someone’s privacy, right? There are lawful applications of this, such as if you have a warrant and you want to surveil a suspect, and you want to track their whereabouts and see who they are making contacts with, et cetera, and gather evidence. But there are also applications that are much more damaging in terms of human rights and, actually, domestic violence. Tons of these companies openly advertise tracking your spouse, and some of the images they use are absolutely horrific in terms of their absolute support of, you know, domestic violence, and tracking, and control. 

But the answer is not more regulation. You would think that, you know, well, if we just track who is making the software, where are they selling it to—let’s just regulate this, you know, so that only law enforcement can use it. Unfortunately, such attempts to regulate that type of software were made, and they were introduced into some export control regulations in 2013. 

How many of you are familiar with the Wassenaar Arrangement? OK, so the Wassenaar Arrangement is an agreement between what was formerly forty-one countries—now it’s forty-two; India has been added. But it was originally forty-one countries, and they would normalize their individual countries’ export control laws around a top-level Wassenaar agreement on specialized technology and equipment for weapons. So think advanced radar: the systems technology—and the systems themselves—would be regulated through Wassenaar. 

Well, they added surveillance software and surveillance software technology. And I’ll just fast forward to the end. It was too broad. They thought they had narrowed it to just this class of software. It was way too broad and it actually endangered the internet’s ability to defend itself because think of the next WannaCry worm ripping through the internet. If every responder had to fill out an export control application and then wait for it to be evaluated before they could share samples of this to get it taken care of, obviously, that would be a problem.  

But it also posed a problem for vuln disclosure. With vulnerability disclosure, you can’t necessarily expect anyone to know what it is that they have, or whether or not it matches this definition and they should have gotten an export control license, when they’re literally just trying to get a bug fixed. 

So the State Department asked me and one other person to accompany them as official members of the U.S. delegation to renegotiate this Wassenaar arrangement. We spent the better part of a year and a half in several meetings back and forth to renegotiate it, and they actually needed some of the technologists who were familiar with technical exploitation techniques to be able to draw those lines around what they meant to control versus what they were actually controlling with the original language. 

So we got some carve-outs, and it’s still not perfect. There are still exemptions that need to be made. But the process itself, if you think about it, required my time and another expert’s time—five meetings that were one week long, per year—and even once a change gets ratified at Wassenaar, it still takes a year or two to be implemented in U.S. law. 

So you begin to see there’s a problem with our knee-jerk reactions kicking off the traditional policy levers that we have in international relations and trade relations around this technology, and with the fact that you need to rely on the volunteered time of specific technical experts—people who are very familiar with vuln disclosure, in one sense, or incident response—to be able to reshape these things, and it’s a multi-year effort. It’s not really sustainable the way it is. So, to Joe’s point, the fact that we don’t have enough crossover-discipline folks coming out of universities is a huge problem. I never meant to hack policy, and yet here we are. 

HEALEY: Yeah, and I don’t know what’s—you know, that you’re an official member of the delegation is incredible— 

MOUSSOURIS: I was, yes. I was. 

HEALEY: —and that you were doing it as a volunteer. 

MOUSSOURIS: Right. Exactly. Official member, unpaid. Thank you, U.S. government. 

HEALEY: Wow. Wow. 

MOUSSOURIS: Thank you so much. 

HEALEY: Incredible. I remember when we were having a conversation—we can start getting ready for the mics—we were in a meeting at the—at the Atlantic Council when the U.S. government was telling us that we needed to do this Wassenaar thing and they said, you don’t know; things are really bad. And we said, yeah, yeah, we kind of get that; we kind of get it’s bad. He’s, like, no, no, there’s malware and it’s getting—malicious software and it’s getting handed all around, the attacks really—yeah, we get that.  

And one of the things that they said was, look, it’s getting really bad; there’s this thing Stuxnet. This is the U.S. government—one of the U.S. government representatives was telling the private sector that one of the reasons the U.S. government had to regulate this was because of an attack the United States was itself behind. OK, and so we’ve got the first question up here. 

MOUSSOURIS: No comments on that. Yeah. (Laughter.) 

HEALEY: And then—and then, second, we’re going to go to the—over here. So please. 

Q: Hi. Michelle Caruso-Cabrera, CNBC contributor. 

You mentioned that they were eager to force the Chinese to allow the free flow of information as far back as 1997. That doesn’t seem to have worked out very well. We still have the Great Firewall of China. What do they—I mean, are they disappointed? What happened, or— 

MENN: I think disappointed is fair, but the battle lines keep moving and changing a lot. So, I mean, there are VPNs in China; Tor has spread a lot in China. There are some people that were influenced by cDc that have worked to put Tor on more secure versions of Android—the mobile versions of Tor. So it’s a running battle. China is one of the biggest markets for Tor. 

Unfortunately, the VPNs are getting shut down more aggressively recently by the government and, of course, the government there has been twisting the arms of American companies that are, you know, drooling over the giant market there. So I think one of the actually most interesting things is the prospect of a tech trade war with China. The hacktivists wanted that. You know, the hacktivists wanted a break. They were screaming at American companies to stop doing business with China in ways that would impact human rights there, and they wanted human rights to be tied to trade deals. 

We may be getting there by other means. It’s a complicated issue because, certainly, some of the U.S. tech companies would argue that their being there does some good. But a lot of the tech employees aren’t buying that. Google rank and file are upset about the prospect of Google returning censored search to China. 

I actually think that one of the lasting impacts of the Cult of the Dead Cow is in this sort of rank-and-file engineer activism in Silicon Valley now on all kinds of issues—cooperation with ICE, sale of facial recognition AI to police, and dealings with China. You know, I’ve covered Silicon Valley for twenty years. I’ve never seen anything like this. It’s really interesting. 

HEALEY: Is there—is there any equivalent hacktivism in China? I mean, when I think about China in this, I think about patriotic hacker groups—a very different kind of hacktivism—hackers that would say, you’re insulting China in this, you know, standoff over the Scarborough Shoal, and so they’ll hack the Philippines. 

MENN: So if there is, it’s very underground. I mean, one of the reasons that they marketed the idea of hacktivism so successfully is that they made a group up that was doing this—the Hong Kong Blondes. 

MOUSSOURIS: Oh, right. The Hong Kong Blondes. 

MENN: Turns out to be somewhere between 80 percent and 100 percent bogus. (Laughter.) But it inspired people and actually got some people to do some good, at least with the Tibetan community, and allowed them to communicate with people inside China still. 

MOUSSOURIS: I don’t know. I think it was the first recognition of us as that first wave of hackers—that first generation of hackers—it was a recognition that we could affect the wider world because the wider world was more and more being controlled by the internet and being reachable by the internet. 

So that was—I think that was a beginning of sort of our realization that, wow, this hobby thing that we love and that we will go to Vegas and, you know, party about and have good times and everything has some broader implications. I think carrying that idea forward, in 2010 a number of major tech companies were hacked by China.  

It was called the Aurora attacks, and I remember Google led sort of the public discussion about that because they had been compromised, et cetera. And I remember I was working as a strategist at Microsoft at that time, and Microsoft had been hit as well, as had Apple, as had many others. Law enforcement were the ones who actually found the attacks and notified the companies. So we were unaware until law enforcement came and told us.  

And, to me, what was interesting about that was the turn: now it wasn’t the hackers making their political statements about China and privacy, et cetera. It was a major corporation. It was Google, and Google, I believe, was only about ten years old at the time. So they were a very young company still. I know, it’s incredible—Google is, like, twenty or twenty-one years old, but it is. It’s really young.  

And Google sort of turned around this thing that would have been disastrous PR-wise for Microsoft or Apple on its own—you know, saying, we were compromised. Even saying it was a nation-state and we couldn’t, you know, deal with it back then was still kind of iffy ground. 

So they turned it into a, you know what, we’re going to stand up for human rights and we think—we think what they’re doing in China is wrong and stuff. And so for a while Google was shut down in China. So it was interesting to see from a decade earlier this kind of, you know, anti-censorship stance being taken by the hackers and then ten years later taken up by a company.  

Now, you might argue, cynically, that they were able to turn this terrible breach around and turn it into a human rights platform that they had. But the fact that they did so and they did so successfully was really interesting to me. 

MENN: I think it’s just another illustration also of how central this stuff has become. Like, these security issues which sounded, like, pretty obscure twenty-five years ago are now the lifeblood of, like, the biggest companies on the planet and of, you know, our national security—everybody’s national security. I mean, these, you know, weird hobbyist misfits when they were thirteen were interested in these things and now, you know, the White House has to be dealing with these things. 

HEALEY: Great. And go here for Alexander. 

Q: My name is Alex Yergin. I work in the data technology space. 

My question is, with the rise of sort of data-driven AI and increasing—both governments and companies increasingly providing personalization, being able to track people, whatever, based on data and patterns, how does that change hacking? I mean, not—are hackers not only going to, like, try and take data but maybe actually try to actively interfere in data that’s fueling these AI engines to sort of change their results, so to speak? 

MENN: Let me—I’m sure Katie has stuff on this but let me say a couple of things on that quickly. So, first of all, AI is already being used both on offense and defense. We hear more about the defense because people don’t—the people really good at offense tend not to advertise it. But they are using AI to make their attacks more effective. That’s just—that’s the march of progress for offense and defense. 

What I think is really interesting is that there are, clearly, huge ethical issues around AI that we are not really coming to terms with yet, and it is racing ahead all around the world. There are some, like, pleasant symposia and things about it, but people talk in abstractions.  

But the reality is AI is what’s driving you to watch more YouTube videos than you’d planned when you first sat down. It’s what’s promoting certain content on Twitter, on Facebook, and elsewhere, and some of that is, frankly, bad for society. A lot of that is bad for society. 

So one of my motivations in writing the book is that I hope that these big companies do some more rigorous thinking and internal debating and external debating about the morality of how they’re applying AI. 

MOUSSOURIS: So in terms of AI being used in offense and defense, that’s absolutely happening in AI and ML—machine learning. As we know, the training data is what determines how good the response is. There have been presentations by hackers, dating back at least four years, where they are deliberately manipulating the environment in ways designed to fool the AI and the ML. 

So the folks—the vendors that are producing, you know, these AI- and ML-enhanced defensive technologies are really having to be informed by the offense side that’s working to create all of these different ways that you can fool the AI or trick the AI or ML. So that’s, certainly, happening. 
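
(Editor’s note: the classic textbook illustration of fooling a machine-learning model is the fast gradient sign method, which nudges an input in the direction that most increases the model’s loss. Below is a minimal sketch in PyTorch; the model, inputs, and epsilon value are placeholders for illustration, not any specific product discussed here:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, label, eps=0.03):
        """Return a copy of x perturbed to raise the classifier's loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)  # how wrong is the model now?
        loss.backward()
        # Step each input element in the direction that most increases the loss.
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

To a human the perturbed input looks essentially unchanged; to the model it can flip the predicted class.)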

And then on the vulnerability discovery front—how many of you have heard of the DARPA Cyber Grand Challenge, right? So before they did this, DARPA came to me—and many other people—and they asked me a series of questions. They said, what would it be like if we did a contest at DARPA to basically encourage people to create AI- and ML-driven vulnerability discovery—so, find ten thousand bugs using just AI? 

And I said, well, you will quickly overwhelm every security team at every major company. You will create what I call bug foie gras. You will be shoving far more bugs—(laughter)—down the hole than anyone can process. And I gave the example of, you know, somebody had given us seven hundred crash dumps that we had to investigate at Microsoft. Just the triage—you know, do we open this as one case with seven hundred sub-nodes?  

What if they’re all different root causes? Like, all of these things, literally, for just one friendly finder throwing us seven hundred crash dumps—and we had the largest funnel of intake in the world. We still do—former we, at Microsoft. Over two hundred thousand non-spam email messages a year come in to secure@microsoft.com. Now imagine AI- and ML-enhanced bug reports on top of that, which would then have to be disambiguated. I think you would quickly crush the response capabilities of any organization. 
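
(Editor’s note: the triage problem described here, turning seven hundred crash dumps into a tractable number of cases, is commonly attacked by bucketing crashes whose top stack frames match, on the theory that matching frames suggest a shared root cause. A hypothetical Python sketch, not Microsoft’s actual tooling:

    import hashlib
    from collections import defaultdict

    def bucket_key(frames, depth=5):
        """Hash the top stack frames as a rough proxy for the root cause."""
        return hashlib.sha256("|".join(frames[:depth]).encode()).hexdigest()[:12]

    def triage(crashes):
        """Group (crash_id, [frame, ...]) pairs into probable-duplicate buckets."""
        buckets = defaultdict(list)
        for crash_id, frames in crashes:
            buckets[bucket_key(frames)].append(crash_id)
        return buckets  # each bucket becomes one investigation case

Real triage tools also normalize addresses and inlined frames, and a single bucket can still hide distinct root causes, which is exactly the point being made.)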

Now, could you just put some AI and ML on the receiving end? No. (Laughter.) The answer is no. It’s not really acceptable if you were to close something as not a bug by an accident of the AI, and the person who reported it says, well, actually it is, and I’m going to drop zero-day, and suddenly over a billion machines are infected. 

So there’s always been a human behind, you know, answering the email of secure@microsoft.com, and there always will be. But, I mean, it is what ended up creating DARPA’s sort of two-phased approach, where they said, OK, one, we’re not going to target any live software; we’re going to deliberately write software that doesn’t belong to any vendor and produce that as the target. And then they also said that they were going to do automated bug discovery and automated patching as part of it. 

Now, why hasn’t that automated patching been popular? We’ve been on the internet long enough to know. It’s application compatibility, backwards compatibility. It’s the testing of the patch in so many different deployment environments. It’s not appropriate for an AI solution. 

HEALEY: And if I can just highlight this—and we’re going to go here—and by the way, thank you; everyone’s been really good about stating your name and your affiliation, and just a reminder that we are on the record. We’ll go next here, and then here. Just think about that: DOD, through DARPA, funded a project so that we could find vulnerabilities faster than we can patch them, and that was partly intentional from the Department of Defense, who wants to be able to hack better and faster. 

And so when you’re looking back at, all right, so why are the problems so disastrous, right, I mean, we’re really caught, in Washington, D.C., between wanting to secure ourselves and wanting DOD and the intelligence community to be able to hack better. And so this was one specifically where when we say, hey, AI might be the answer, well, it’s probably an answer that helps the attackers more, and that was specifically what DARPA seems to have been funding. 

So we’ll go over here. 

Q: (Off mic)—Cohen (sp), Princeton University. 

I have a question for Katie. In your example of a hacker or a group of hackers finding a significant vulnerability and telling the company or whatever that they’re given a period of time before they drop zero-day and tell the whole public about the bug, to give them a chance to get a patch—where is the compensation involved in that, or do you earn compensation in other parts of the business? 

MOUSSOURIS: That is a great question. So the—in the olden days when this first started, the only compensation you could get really was, potentially, a resume builder. So if you got your name credited in a Microsoft bulletin, you had a series of these things you could point to, or a mega corporation thanked you for your skills— 

HEALEY: And the olden days were when? 

MOUSSOURIS: The olden days were, I mean, literally—at Microsoft it was before 2013. (Laughter.) 

HEALEY: OK. Well, I remember back when—yeah. OK. (Laughter.) 

MOUSSOURIS: And—yes. But, actually, before 2010 most organizations offered no compensation if you were not already a professionally contracted penetration tester who was hired specifically to find bugs. In 2010, Google started what was called a bug bounty and they were following the lead set back in 1995 by Mozilla, who became—or sorry, it was Netscape back then but they became Mozilla.  

And so the very first sort of acknowledgement that maybe this should be paid, but also that there may be perverse incentives if you didn’t watch out, was a Dilbert cartoon from 1995 where, you know, the pointy-haired manager said, we’re going to pay for all these bugs, and then Dilbert and some of the other folks said, oh, awesome, I’m going to write me a new set of golf clubs; I’m going to write me a minivan—(laughter)—you know, because they were writing the code, so they were, basically, going to collect, you know, the bounty on the other end. 

To this day, some of the economic work that I did to create Microsoft’s first bug bounties, which were the ones launched in 2013, was around these economic incentives—setting them at the appropriate rate such that you’re giving a token of appreciation, but it’s not so much that you are cannibalizing your own labor pipeline and creating, you know, perverse incentive opportunities that would be low enough risk that your internal folks may do it. 

And one more thing about bug bounties. Now there are commercial, you know, companies that will offer to run these bug bounties for you, sort of like a middleman platform, and they are really good for things like, you know, handling all the bug reports, making sure there’s no spam that comes to your team, et cetera. They’ll handle the payments really easily, and that’s an important thing. 

But a lot of them are trying to encourage more and more companies to just keep paying more bug bounties, higher bug bounties, and actually forgo professional testers in lieu of this crowdsourced model. When most of these bug bounties are being paid out to individuals in India, and a lot of our software is outsourced to be built in India, you begin to see the Dilbert cartoon come back into full focus. 

So yes, it’s possible to get paid. No, you can’t pay so much that, you know, you have now alienated your own internal workforce and can’t hire a new college student ever again—because why would they want to come to work for you when they can backpack across the Andes and collect your bug bounties instead? 

HEALEY: What was the biggest check that you wrote? 

MOUSSOURIS: I, personally, wrote a bug bounty check at Microsoft for $100,000. That, at the time, was the very largest ongoing bounty offered by any vendor. But it was set at the defense market going price. There was a contest called Pwn2Own and it was once a year. But hackers were sort of hoarding their bugs and hoping that none of the bugs in their exploit chain were blown by anybody else or patched by the vendor accidentally, and they would hoard it until this contest, which is what I call an exploit art walk. It’s a beautiful time in the spring in Canada, you know, where the bugs come out. But here is the thing. (Laughter.) You know, it’s in bloom. The bugs are in bloom.  

But here was the thing. Microsoft—we decided we didn’t want to wait until once a year to get that type of information. So we offered it year-round, and it’s still going. It’s now, I believe, up to $200,000. But it’s for a class of issue that was so rare—bringing us back to China—so rare for us to get that we were getting one of these, on average, every three years before the bug bounty.  

After we offered the bug bounty—oh, and these were issues that would take many years, and possibly several versions of the operating system, to fix. This wasn’t a Patch Tuesday type of issue that we were looking for. We were looking for kind of primary architectural issues on the platform. 

So before, we used to get them once every three years, and usually after the fact—they were already public; an academic would have done it. Afterwards, we were getting several of these coming in per year, and even from such places that we couldn’t even pay them. 

HEALEY: Oh. 

MOUSSOURIS: We would ask the question, you know, where is your country of residence, what is your citizenship, et cetera, before we would even take the submission. We got one from someone who answered the question and said he lived in Syria. And so I personally wrote back to him and I said, I’m sorry, due to U.S. sanctions we cannot pay you. You know, we’re really sorry about that. If you have any bugs to report you can certainly report them but we can’t pay you for this new exploitation technique. And what happened was he said, oh, that’s OK, and he gave it to us anyway. 

HEALEY: Oh, wow. 

MENN: Oh, wow. 

HEALEY: Ted, did you have any— 

MENN: So Katie is, obviously, one of the pioneers in this whole space. But I’d just like to say, at, like, a thirty-thousand-foot level: you can now make a living doing this, and you couldn’t before, and you will never be able to make as good a living doing this as you could by selling to offense. So, you know, hackers are the unacknowledged legislators of the world right now, and there’s a sliding scale for your morality. You can say, like— 

HEALEY: Totally tweetable. 

MENN: —I want to make this much money and just do defense or I’m going to make that much money and I’m going to sell to offense, or this much money and sell to offense on the other side of the planet. 

MOUSSOURIS: And to get back to our morality, selling to offense is not the sole arbiter of whether you are a moral person. If you are selling to your own government, you are a patriot. That government may not be the United States government. So, I mean, thinking about these things in broader terms and thinking about us as humans, we are still human. I mean, this is—this is my natural hair color. (Laughter.) 

HEALEY: And we’re going to go one here and then there and then—yeah, so this one. I’ll try to— 

Q: Roman Martinez, a private investor. 

Actually, sort of related questions and a follow-up. Could you talk a little bit about the participant profile and the economic model behind it? In other words, how much of these activities is what you just described—or the Dilberts, or the consulting firms, or the pure criminals who might be stealing credit card information? 

MOUSSOURIS: OK. Well, I usually look at this as a labor market exercise, and a number of years ago I did a labor market study on the bug bounty labor market. It was part of a larger set of work done with MIT Sloan School and Harvard Kennedy School, looking at the system dynamics of the vulnerability exploit market as a whole. And when we look at bug bounty labor, that’s the defensive labor in this economy, if that makes sense. Those are the folks who are selling their bugs to the vendor itself for a smaller amount, because it’s a bug bounty. 

When I look at this labor market, I look at the stratification of skills. At the very top of the pyramid you’ve got very few individuals who have the capability to find serious security holes and exploit them at the nation-state level. That number has stayed fairly steady. 

HEALEY: Really? 

MOUSSOURIS: It has. Even though we’ve got more and more people online and more and more hackers online, the number at the tippity top has stayed steady at a few thousand individuals worldwide. The reason that number has stayed steady is that technology advances, and some people who had that skill set three years ago no longer have that skill set, you see. So there’s sort of a churn in that top group. It’s very rare that it’s all the same people over time.  

But you have sort of an ingress of labor and then a natural attrition for various reasons. And then you’ve got sort of this middle class of skill level—that would be your professional penetration testers, maybe some folks who have the skill and capability to sell to the offense market—and then you’ve got that bottom layer, which is sort of the layer where I could literally teach everyone in this room how to walk out of here and find bugs.  

I’ve done it for screenwriters. I could do it for you. It is not complicated, and there are free and nearly-free tools available to scan, you know, for bugs at that bottom layer. That group is growing. But we’re still not seeing a lot of growth at the top. 

So I guess I would say, you know, the—my take on it is that we’re always going to have a very narrow set because of technological progress of these people who are sort of nation-state capable at any given time. It’s a matter of assume that half of them are already locked up in their own nation-states, ours included, and maybe it’s the other half that you have the chance to influence. And part of my life’s work has been how do we create more influential paths to grow this skill level of people and then move them into—move them into this area. 

HEALEY: Yeah. And I know you have that follow-on, but we’ll try and stick around after so we can. Let’s go here, and then the gentleman here, and then, hopefully, we’ll have time for one more. 

Q: Thank you very much. Ronnie Heyman, GAF Corporation. 

I wanted to ask you about the recent $350 million gift to MIT to advance AI and, I believe, counter—shore up cybersecurity. That sounds like a tremendous amount of money in a philanthropic sense. But when you compare it to what a Google, a JPMorgan, and a Microsoft might be expending, could you comment on how you think that will be deployed and what its additional efficacy will be? 

MOUSSOURIS: So, not having anything to do with that donation, I can only speak to what research support costs. I know that the research we had done was supported with about a quarter million dollars of donation from Facebook—that was who donated to get that research done for us—so I know how far that goes for a study of our size. I have no idea how they intend to deploy it in terms of funding for studies or other injections into, you know, this economy. 

HEALEY: Go here. 

Q: Ricardo Tavares from TechPolis, a California technology policy company. 

So there is a lot of talk about the offense-defense balance, and I want to ask you how you feel the U.S. government is performing along that balance. It seems that we are being hacked by our own tools. 

MENN: So this is something I’ve written a lot about over the years, going back before Snowden. The people that I’ve spoken to—high-ranking intelligence and ex-intelligence folks in the U.S.—are pretty much unanimous that we spend way too much on offense. When you’re starting from a playing field that’s already tilted towards offense, attacking is much, much easier than defending. The attacker has to be right once. The defender always has to be right. 

But if you include signals intelligence, which involves penetration—and, you know, actual destruction is a lesser included case of that—then it’s 90/10. That’s the figure that people in the NSA have given me over the years, and we seem to be making that worse rather than better. An example: the group colloquially known inside Washington as the Dick Clarke five—the president’s commission with a very, very long title, set up after Snowden, that was looking at issues including software vulnerabilities. One of their recommendations was to spin out the defensive arm of the NSA, known as IAD—the Information Assurance Directorate—and put it in DHS, or maybe the Pentagon; put it somewhere where it’s not controlled by people who are all about offense and interception. And that went nowhere. In fact, they made IAD disappear. So there is no more IAD. The mission has been sucked up into the broader NSA, which is about offense. So we’re actually going in the wrong direction, and that’s one of the reasons we’re so much more vulnerable than we need to be right now. 

HEALEY: And Joe really has written more on this than almost anybody else. So, really, follow him on Twitter and check out his stuff. And then if we can go all the way in the back. All the way in the back. 

Q: Thank you very much. I’m Stephanie Ma (ph) with Xinhua News Agency. 

I have a question on the recent Huawei issues. As cybersecurity professionals, what do you make of the U.S. restriction on Huawei and the ban’s impact on the global supply chains of the telecom industry? And will the global rollout of 5G networks be affected by the ban? 

HEALEY: Great. Thank you. 

Q: Yeah. Thank you. 

HEALEY: Thank you, Stephanie. Anyone? 

MOUSSOURIS: Do you want to take first and then I’ll go next?  

MENN: Oh, boy. 

MOUSSOURIS: Because I have—I have feelings. 

MENN: I don’t think I’m going to be able to thrash through this in a couple sentences here. There’s a lot going on. There are systems that are vulnerable. There are systems that are subject to influence by governments. There is some protectionism in play here. There’s a lot going on.  

It’s hard to unpack. I’m not going to be able to do a decent—a decent job. I think that the aforementioned possibility of a tech cold war, tech trade war, is going to have a disastrous impact on American companies as well as others around the world and I don’t—I don’t know how it’s going to play out. I think there may be a fair amount of brinksmanship involved. It wouldn’t be the first time that there’s been a lot of noise and there winds up being some sort of reasonable compromise, you know, just ahead of an election or something else significant. 

MOUSSOURIS: Yeah, and I would say that, you know, when we see these attempts at the balkanization of software for different countries—we already saw it, you know, with the Kaspersky software ban in the United States—what that opens up for the United States and, I think, tech companies based in the U.S. is reciprocation that, one, economically we’re not really going to be able to absorb long term, and, two, that hits, you know, the technology layers that everything else is built upon.  

So, on the 5G deployment question itself: I think the 5G deployment can still occur worldwide, but it’s a matter of whether or not our companies will end up suffering because of similar reciprocal trade blockades. We are certainly not in a position, in the United States or most Western countries, to fabricate all of our equipment domestically or with countries that are not China. So I think that there is going to be a global dependency in the supply chain no matter what we do, Huawei apart. So I think it’s a path that we should tread very, very carefully. 

HEALEY: And I kind of like it—thank you for the question—because it ties us back to the first question from Michelle about China, right? In the early days—you know, in the late ’90s, early 2000s—it was about openness, about the hacker ethic that information wants to be free, or the technology ethic that technology wants to be free, and that was, largely, the U.S. policy. And now the policy is so much being driven by national security concerns. 

And, of course, that’s a great—we’ll finish it there because we know the CFR is a great place to talk about technology issues. Thank our guests. (Applause.) So we promised you it was going to be a fascinating conversation and I hope you enjoyed it as much as we did. Thank you. 

(END) 
