The Road Ahead for Autonomous Vehicles

Thursday, February 27, 2014
Speakers
Erik Brynjolfsson

Schussel Family Professor of Management, MIT Sloan School of Management; Director, MIT Center for Digital Business; Research Associate, National Bureau of Economic Research

Chunka Mui

Managing Director, Devil's Advocate Group; Coauthor, The New Killer Apps: How Large Companies Can Out-Innovate Start-Ups

Jennifer Healey

Research Scientist, Intel Corporation Research Labs

Presider
James Shinn

Lecturer, School of Engineering and Applied Science, Princeton University

Once thought of as science fiction, the autonomous vehicle may soon be a reality. Three leading thinkers in the field, MIT's Erik Brynjolfsson, Intel's Jennifer Healey, and Chunka Mui of the Devil's Advocate Group join James Shinn of Princeton University to discuss the future of driverless cars and the economic, legal, and policy questions that they raise. Though fully self-driving automobiles may yet be several years away, more limited applications of the technology, such as automated collision avoidance and self-parking are already nearing the market.

The Emerging Technology series explores the science behind innovative new technologies and the effects they will have on U.S. foreign policy, international relations, and the global economy.

SHINN: So welcome. Welcome to the Council's new series on emerging technologies. Today's topic is Driverless Automobiles: Silicon Valley Dream or Next Big Thing?

And we have some terrific, and particularly well-informed expert guests to have this conversation with us today: Jennifer Healey, who is a research scientist at Intel Corporation; Erik, who's...

BRYNJOLFSSON: Are you going to say my last name? Brynjolfsson.

SHINN: ... Brynjolfsson, who is director of the Digital Business Center, Center for Digital Research at MIT, and a professor as well; and Chunka Mui, who is the managing director of the Devil's Advocate Group. And I should add that all three of our guests are technically very savvy, with a good engineering background, and all share roots at MIT, at one point or another, in their career.

My name's Jim Shinn. I teach at Princeton's School of Engineering and Applied Science, which someone described last week as a feeder school for Google, Goldman and MIT's faculty of graduate computer science. So I'm really glad to be here.

This is on the record, as you know, though it'd be good to keep all of our listening devices and cell phones off. We welcome the members that are here, as well as those who are participating in the meeting by the video feed.

I'm told the format for today is divided into two halves; first, a conversation with our guests, and then following that, the second half we'll have a to-and-fro of questions and comments with the audience with the members here, as well as those who are participating by video link.

This, by the way, is a really cool series. It's a wonderful reflection of the fact that the Council has realized that emerging technologies, like the driverless automobile, really are having a transformational impact on the global economy, and on U.S. foreign policy.

In fact, if you thumbed through your copy hot off the presses of the March-April issue of Foreign Affairs, it's called, "Next Tech," and five of the eight lead articles in this issue are in fact about emerging technologies.

So the first question, I think, would be for Jennifer. What—what do you think the key enabling technologies of the driverless automobile are? What underlying innovations, or what combinations of innovations are in the platform?

HEALEY: Sure. Well, what's making the driverless car possible right now is a technology called "LIDAR." It is basically a laser scanner, and it can map depth.

It can actually do a 3D picture of the room, and using a 3D LIDAR camera, the car can know where all of you are, and at what depth. And using that, plus some computer-vision technology, but mainly 3D LIDAR, the car can navigate and not hit anyone.
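
A rough illustration of the depth-mapping idea Healey describes, in Python. The sweep values, corridor width, and distances here are invented for the example, not any particular sensor's output:

```python
import math

def lidar_to_points(sweep):
    """Convert (angle_deg, range_m) returns into (x, y) points.

    x is forward of the sensor, y is to the left. A real 3D LIDAR adds
    elevation angles; this 2D slice just shows the geometry.
    """
    points = []
    for angle_deg, rng in sweep:
        a = math.radians(angle_deg)
        points.append((rng * math.cos(a), rng * math.sin(a)))
    return points

def obstacles_ahead(points, corridor_half_width=1.5, max_dist=30.0):
    """Flag points inside a corridor roughly the width of the car, out to max_dist."""
    return [(x, y) for x, y in points
            if 0.0 < x < max_dist and abs(y) < corridor_half_width]

# Hypothetical sweep: something 12 m straight ahead, clutter off to the sides.
sweep = [(0, 12.0), (10, 25.0), (-10, 25.0), (45, 8.0), (-90, 3.0)]
pts = lidar_to_points(sweep)
print(obstacles_ahead(pts))   # -> [(12.0, 0.0)]: the return dead ahead
```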

SHINN: Before turning to Erik, we could illustrate this with a short clip not intended as a Google ad. And Erik actually has had the experience of riding in one of these vehicles, which if you could share with us, I think would be terrific.

(VIDEO CLIP SHOWN)

SHINN: So with very high confidence, all of you in the room will have this experience sooner or later, your first ride in a driverless automobile. But Erik's already had the benefit. What was it like?

BRYNJOLFSSON: It was remarkable. I have to say, in a way, it was humbling, because it caught me off guard how rapidly this technology caught on.

About 10 years ago, 2004, 2005, I was teaching a class at MIT, and one of the topics of discussion was what machines could do, and what they can't do. And I gave an example of something machines can't do, was driving a car.

I said, you know, structured tasks are good for computers. But there's too much visual and other information that comes in, and no set rules of the road, especially in Boston, about what needs to be done.

And in fact, I wasn't alone. Frank Levy and Dick Murnane wrote a wonderful book called "The New Division of Labor," where they gave driving a car as an example of something that couldn't be done.

So in the summer of 2012, when I got to ride in a driverless car, I was, you know, surprised at how effective it was. We rode out of Mountain View, up Route 101 to San Francisco and back again.

And I had three reactions, successively. The first reaction, honestly, was a little bit of fear. We were driving down Route 101 in the middle of the afternoon; you'd think it would be pretty clear.

But the reality was that the car in front of us came to a complete dead stop; not just slowed down, but just stopped right in the middle of the highway. And I couldn't help just sort of clutching my seat a little bit, wondering if the Google car was going to notice that.

And fortunately, it did. And what was a little bit reassuring is it has—they were nice enough to have a little laptop in the passenger seat that displayed the LIDAR and everything else that it was seeing. Looked kind of like a video game. So you could see out the windshield, you could see the car in front of us.

And then you could also see little outlines of what it was seeing, you know, drawn there. Not just the car: interestingly, as a little side note, it also had these little boxes at the back right of each car that were in red, kind of blinking.

I asked, "What are those?"

And they said, "Well, those are the other cars' blind spots."

Not Google's blind spots, but its understanding of the blind spots of the other cars. And the Google car doesn't want to just avoid hitting the other cars. It also wants to avoid driving in other people's blind spots.

So it had a bunch of these little rules of thumb. It also didn't regulate speed by just the one car ahead; it would look two, or three, or four cars ahead, depending on which one it thought set the right limit. So it had 101 little rules that maybe you were taught in driver's ed, but I don't know how many people actually follow those kinds of rules. But they were programmed in.

After about five or 10 minutes, I got to be pretty comfortable with it. My next phase was exuberance. And I was kind of thrilled. And Andy McAfee, the co-author of my book, "The Second Machine Age," and I were just waving—literally waving out the window at the people as we drove by, "Hey, look at us."

SHINN: Both hands out the window.

BRYNJOLFSSON: Both hands, exactly. We were sitting in the back seat, just to be—you know, they had an engineer sitting in the front seat.

And then, you know, that lasted for maybe another five or 10 minutes. And after a while that got kind of boring. And then we drove all the way up and all the way back, and the car never went more than 55 miles per hour, even though the other cars were whizzing by us at times.

And most of the ride, I think my main reaction was kind of boredom. You know, it was like, "OK, I get this." And the car drives very, very smoothly, kind of like, you know, your grandmother might, very carefully. Of course, it follows all the rules.

And after a while, I was like, "OK, I get this." It was kind of like you might react to, I don't know, watching a dishwasher run, or something like that.

I think that's a microcosm of the way a lot of society will react to it. You know, first fear and exhilaration, but after a while, it will come to accept this technology the way we have so many other technologies.

You know, elevators used to have humans in them to make sure people felt comfortable riding in them.

SHINN: If I could ask, which part of the system in that vehicle, in your view and the views of our other panelists, was the real breakthrough? What was the real improvement in function, or in cost of function, that led to you being relatively relaxed going down 101?

BRYNJOLFSSON: So they worked on self-driving cars at MIT as well. They didn't succeed as well as Sebastian Thrun and his team. And my MIT colleagues tend to go, "Well, that's because Sebastian kind of cheated."

And the way he cheated, which I think was the breakthrough, was that instead of trying to make this car completely autonomous and figure out on the fly what was going on, they have a map in advance of everything, not just the roads, but every stop sign, every light pole, every stoplight, everything that's there on the road.

So as it drives by, it doesn't have to figure out, you know, "What is that? Is that a stop sign? Is that a light pole?"

It already knows all those things. They're all annotated. All it has to do is match them. And as long as what it's seeing is sort of roughly close to what it's expecting, it's comfortable.

Now, if there's a snowstorm, or if it's very foggy, or if there are other situations—or for that matter, if there's construction, and somebody moves around some of the light poles, then the Google car says, "Oh, I don't feel comfortable," and it switches over, it says, "You, human, you navigate this."

So by doing that, it had an enormous amount of data about the world (it's really a big data problem), and it was able to navigate much more effectively than what most people are trying to do, which is to figure out how the car could run on its own.
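
A minimal sketch of that match-against-the-prior-map idea, as an illustration rather than Google's actual implementation: compare the landmarks the sensors report against an annotated map, score the agreement, and hand off to the human when the score drops. The landmark names, positions, tolerance, and threshold are all invented.

```python
import math

# Annotated prior map: landmark label -> surveyed (x, y) position (illustrative).
PRIOR_MAP = {
    "stop_sign_17": (102.0, 4.5),
    "light_pole_03": (118.0, -3.2),
    "stoplight_09": (140.0, 0.0),
}

def match_score(observations, prior_map, tol=0.5):
    """Fraction of expected landmarks seen close to where the map says they are."""
    hits = 0
    for label, (mx, my) in prior_map.items():
        if label in observations:
            ox, oy = observations[label]
            if math.hypot(ox - mx, oy - my) <= tol:
                hits += 1
    return hits / len(prior_map)

def control_decision(observations, prior_map, threshold=0.8):
    """Keep autonomous control only while the world matches expectations."""
    score = match_score(observations, prior_map)
    return ("autonomous" if score >= threshold else "hand off to human"), score

# Clear day: everything is where the map expects it.
print(control_decision({
    "stop_sign_17": (102.1, 4.4),
    "light_pole_03": (118.0, -3.3),
    "stoplight_09": (139.9, 0.1),
}, PRIOR_MAP))

# Construction moved a pole and hid the stoplight: confidence drops, hand off.
print(control_decision({
    "stop_sign_17": (102.1, 4.4),
    "light_pole_03": (121.0, -3.3),
}, PRIOR_MAP))
```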

SHINN: That is really interesting. So Chunka, you've written a lot about inflection points, right, as has Erik: what sort of combination of events and technology together produces some really transformational technology that's fielded at scale.

What do you think the next step is going to be that'll make driverless automobiles a reality, so that many of the people in this room could drive them, rather than an engineering exercise on 101?

MUI: Well, I think there are still technology problems to be dealt with, but those are not going to drive the inflection point. What's going to drive the inflection point, I believe, is the audacity of a region—whether that's a state, or a city, or a country—to create the environment for large-scale testing, and then for capital and technology to walk into that opportunity.

Because I think the benefits—you know, a million people get killed every year driving cars. And 50 million people get injured driving cars.

The benefits, if you could make self-driving cars that reduce accidents, are so huge that once the demonstration example happens that shows the technology works, all the barriers will drop away. So that's the inflection point.

But it still requires technology. It requires capital, which is easy. And it requires policy, from a standpoint of creating the environment for testing.

SHINN: Jennifer, do you see a technical inflection point before this sort of next fielded trial, or do you think all the pieces are there together?

HEALEY: Well, I think there's going to have to be a cost inflection point. And I know we have a slight disagreement about that. I wanted to say that these maps Google has are 3D LIDAR maps. So they're 3D maps, and they use GPS.

So you overlay it, and then you get a very, very good view. But, as you mentioned, snow will change the way the map looks, because to a laser, snow is as solid as concrete. A leaf is as solid as concrete. So it won't get that match.

So if you have a leaf, or if a paper bag blows in front of your LIDAR, it'll look like a wall just suddenly appeared. And I'm sure the Google car doesn't stop short every time. Like they say, oh, it could be running at 160 miles an hour, except that if a paper bag flies in front of your face, you'd have to stop at 160 miles an hour.

MUI: What it does is—what it does, since it's watching three cars ahead, it knows that nobody else stopped for that paper bag. So it can sort of assume that.

HEALEY: And that gets to my controversial technology point, which is this idea that you could have V-to-infrastructure, or V-to-V, communication.

SHINN: "V" being...

HEALEY: Oh, I'm sorry. Vehicle, sorry. I'm so deep in the world—so, yes. So we just rattle these acronyms off. So vehicle-to-vehicle communication, or vehicle-to-infrastructure communication.

And although LIDAR is a fabulous technology, it can't see ahead forever. It has limitations. But information from far ahead can be transmitted back to the car, you know, if it went through the cloud, if it went through roadside infrastructure, or through an intelligent gateway.

Or if cars actually were all on the same frequency, they could talk to each other and say, "Watch out for the paper bag," or, you know, you could do communication with the car ahead of you and sort of do platooning, and you would take that as evidence versus, you know, the paper-bag blockade. So it's intelligent sensor fusion between different modes.
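
A toy sketch of the kind of vehicle-to-vehicle fusion Healey is describing. The message format and the decision rule here are invented for illustration; real V2V systems use standardized messages and far more careful logic.

```python
from dataclasses import dataclass

@dataclass
class V2VReport:
    """Hypothetical broadcast from a car ahead (not a real V2V message format)."""
    car_id: str
    distance_ahead_m: float
    hard_braking: bool        # did that car brake hard at this spot?

def classify_lidar_return(looks_solid: bool, reports: list[V2VReport]) -> str:
    """Fuse a local LIDAR detection with what upstream cars reported.

    If the LIDAR sees something 'solid' but none of the cars that just drove
    through that spot braked for it, it is more likely a paper bag or a leaf
    than a wall, so ease off rather than slam the brakes.
    """
    if not looks_solid:
        return "ignore"
    anyone_braked = any(r.hard_braking for r in reports)
    if reports and not anyone_braked:
        return "probably soft debris: ease off, keep watching"
    return "treat as real obstacle: brake"

reports = [
    V2VReport("car_A", 40.0, hard_braking=False),
    V2VReport("car_B", 80.0, hard_braking=False),
]
print(classify_lidar_return(looks_solid=True, reports=reports))
print(classify_lidar_return(looks_solid=True, reports=[]))  # no V2V data: be conservative
```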

SHINN: How important do you think this kind of network effect is, automating not just one vehicle but multiple vehicles in the traffic pattern so they can talk to each other? How important do you think that is for the inflection point?

HEALEY: See, I think it's pretty important, because I think it's going to allow more coordinated action among autonomous vehicles, semi-autonomous vehicles, and human-driven vehicles, so that people could communicate information about their plans. And the autonomous vehicles, aside from knowing people's blind spots, might be able to know their intentions, too.

MUI: And some people believe it's irrelevant. Because if you require vehicle-to-vehicle communication, you'll never get there, right, because if you need to have all the cars talking to each other in order to make this work, the cars will never all talk to one another.

So if you get to a point where you have no benefits until they all talk to one another—so I'm not arguing that you might not need it. But what I'm saying is if you need it, we'll never see this stuff.

BRYNJOLFSSON: Well, there's a tradeoff between need and benefit from it. So what I see is that it's not a black or white thing that, you know, on such-and-such day, 2022, we're going to have autonomous cars. It's going to be a much more gradual adoption.

That's one of the things I got to appreciate from riding in it, and talking to some of my colleagues, who are much more into the technology. John Leonard at MIT has come up with this framework of five levels of autonomy, going from no autonomy at all to complete self-driving autonomy.

And he would put Google at level 4, which is pretty high up there, which is it could drive on a highway with no intervention. But, as Jen was saying, even in a situation, you know, it starts snowing, or a bunch of leaves blow by, or something, it'll say, "Wait a minute. There's something wrong here. I don't understand."

And it'll turn control over to the human. So you still want to have a human there. And then there's the debate about how much time you need to have for warning before you do that switchover.

The highest level, though, is complete autonomy, where you maybe don't have a human at all. You say—you tell the car, "Go pick up a pizza for me," or, "Bring my child to soccer practice."

Jen was suggesting she'd like to have that for her kids. Without an adult human driving, that I think is much, much harder. It's not just a little bit harder. It becomes a real leap when you go from, say, 95 or 99 percent autonomous; that last 1 percent is very, very difficult.

So we're going to have, I think, an adoption of a lot of things. In fact, those of you who watched the Super Bowl, you probably saw the Hyundai commercial, where there was a car that had automatic braking. If it saw something go in front of you, the car would apply the brakes.

Chunka was telling me earlier that a large percentage of the people in car accidents today, you go back and look at the records, and they didn't fully apply the brakes, for whatever reason. So there's opportunities to creep in with technologies like that.

Super-duper cruise control, you could call it, when you go down the highway. Maybe self-parking systems that get increasingly sophisticated, not just finding the space to parallel park.

But maybe you are like a valet. You go to the restaurant, and then you push the button, and the car goes off into a little predefined parking lot that doesn't have other humans in it. And the car can slowly park itself. Those kinds of intermediate steps I could see happening much more quickly.
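
The automatic braking Brynjolfsson mentions can be reduced, in spirit, to a time-to-collision rule: estimate the seconds until impact from the gap and the closing speed, and brake when it falls below a threshold a distracted driver could not beat. A bare-bones sketch, with invented thresholds:

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if nothing changes; infinite if not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def aeb_decision(gap_m: float, closing_speed_mps: float,
                 warn_at_s: float = 2.5, brake_at_s: float = 1.2) -> str:
    """Illustrative thresholds only; production systems tune these carefully."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if ttc <= brake_at_s:
        return "apply full brakes"
    if ttc <= warn_at_s:
        return "warn driver, pre-charge brakes"
    return "no action"

# Car ahead stopped dead; we are closing at 20 m/s (~45 mph) from 22 m away.
print(aeb_decision(gap_m=22.0, closing_speed_mps=20.0))   # -> apply full brakes
print(aeb_decision(gap_m=70.0, closing_speed_mps=20.0))   # -> no action (TTC = 3.5 s)
```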

SHINN: That's a good segue, actually, to a point that Chunka has researched quite a bit and written about, which is who wins and who loses? There are more than a few people in the audience—members who are interested in the effect on industries, if not on individual firms, of this kind of transformational technology.

In terms of broad swaths, where are the gains, and where are the losses?

MUI: Well, the first place to look is, if this stuff works, we spend about $450 billion a year in this country on businesses that are dependent upon collisions. $100 billion on auto repair, parts, things of that sort. We spend $200 billion a year on auto insurance premiums.

That's directly correlated to how much it costs to fix the cars after accidents. So if you reduce accidents by 25 percent, we're talking hundreds of billions of dollars just in the U.S.

So the immediate losers are the people who depend on accidents. Now, the winners, of course, are the people who are no longer getting into accidents. And I think that the winners outweigh the losers.

But in the long term—in our book, we talk a lot about the economic consequences of this technology. And one of the things we point to is that we spend about $2.5 trillion a year in this country on auto-related businesses, the revenues of auto-related businesses.

And the business model of transportation changes with driverless cars. So I'll give you one example: $600 billion a year flows through auto dealers, auto dealerships. If you have driverless cars that are shared, a significant percentage won't be bought by individuals; they'll be bought by the companies that manage the car sharing. And those companies don't buy through auto dealers.

So it's not that all the money goes away. But there's a tremendous amount of money that shifts. And then, you know, the way you finance, the way you service, the way you rent cars, all that stuff changes.

SHINN: If, by the way, you didn't have a chance to write all those numbers down, this is a shameless plug for his book. The first case study actually has some interesting calculations upon the winners and the losers.

HEALEY: I just wanted to make a comment, it's just an important thing to call out. There's a very serious difference between autonomous cars and driverless cars. And what we're talking about is what Erik talked about, which is that last 5 percent.

And I am not even comfortable with someone who is 90 percent blind driving that Google car. Because when it says, "OK, now you take over..."

"If you reduce accidents by 25 percent, we're talking hundreds of billions of dollars just in the U.S."
—Chunka Mui

"I'm blind, and there's a problem."

He can't. He can't, unless he has some sort of drive-by-wire telemetry so that someone at a call center could take over. You know, could there be some sort of backup? You need that backup. Who's that backup? What's that backup? That's the difference between autonomous and driverless.

SHINN: Which is one of the most difficult policy issues, the first of a long string of policy issues that you have to deal with, not just to come to an inflection point, but actually to deal with the broader consequences for the economy.

What do you think they are? I mean, you've all written and talked about the kinds of public policy that would accelerate or enable this kind of inflection point. And then there are any number of them that would inhibit it, both here and in the global arena.

MUI: Well, you just have to scratch the surface and look at all the business interests that would be harmed by this technology working to reel off, you know, a whole range of issues. So we have a whole bunch of issues, from liability, to jobs, to insurance, to how do you regulate?

You know, I was at a session in Florida recently where they're trying to create the environment for good testing of these issues. But one of the regulators was telling me that she was still in the process of trying to figure out how they would do a driver's license test for people who came in with cars that had rearview cameras.

Do they also make them able to drive without the rearview cameras? How do they deal with that? So there are 1,000 little issues that, if we allow them to, could get in the way of these things.

And I think the important thing that we'll have to remember is that it's not just economics. There are a lot of lives and limbs involved here. So there's a lot of motivation for setting aside these issues.

SHINN: What's the most important public policy problem that would have to be dealt with in order to enable this, even if it were a gradual slope rather than a sudden step function?

BRYNJOLFSSON: I would agree with Chunka, that a lot of the technology issues are racing way ahead of where the public policy is. And to pick up—to answer your question, probably the most important one that comes to mind for me is liability.

One of the concerns that Google and other companies have, is that they have very deep pockets. And even though I agree, I think, with both my panelists here, that there would likely be far fewer deaths—there are about 30,000 highway deaths, 34,000 highway deaths in the United States now, about 10 million accidents of various types. It's likely that that would be dramatically lower, but it wouldn't be zero, OK.

And so the first time that one of these autonomous cars runs over, you know, a four-year-old boy or something, and worse yet, does it in a way that no human would have made that same mistake, it makes a different kind of mistake. So, you know, even if you were 90 percent or 99 percent safer, you're still having thousands of potential deaths, and suing a deep-pocketed company could, you know, totally change the economics of it.

So that would be something that we have to think about. I can see a scenario, though, where the insurance companies come around on the other side of it. I do see that some of their revenue could be threatened.

But it wouldn't be very far-fetched for me to imagine that sometime in the future, the company would charge higher premiums to people who didn't have some of these autonomous safety features in place in their cars, or even say, "Look, we're not going to insure you unless you have, you know, this automatic braking, and all these other features that are now state-of-the-art," because that would lead to a lot lower claims.

HEALEY: I'm just going to heartily agree. I think liability is the one place where public policy could really make a difference here.

I mean, under the current proper use of the Google car, and other autonomous cars, the driver's supposed to have their hands hovering at 10 and two and be ever-vigilant, so it's all the driver's fault.

It's the same fault model that we have, even at 95 percent autonomous. You switch to driverless, there's now no human, right. So now the car has liability. What if it's a shared car? What if it's like an autonomous, you know, little bus? Now, who's liable when that thing makes...

So the question is, the public has to say, "Do we want the current number of deaths?" or, "Do we want half that number of deaths?" even if half that number of deaths was caused by, you know, some, you know, terminator-like machine, right.

So what can we do to limit the liability? Can you put a price on human life? I mean, human life's being lost. But now it's being kind of lost by accident.

MUI: We put a price on human life all the time. It's called "insurance." So I think it's—liability is a real issue, but it's solvable. And part of it is whether or not we're willing to innovate.

So, you know, how an insurance company treats it—some insurance companies will say, "Not in my lifetime." Other insurance companies will say, "This is an opportunity," you know, "to take advantage of this technology."

But what if—you know, what if we had a trial of 20,000 massively-shared cars in Ann Arbor, and Google says, "You know what? I'll buy a $5 billion bond."

You know, you can solve these kind of issues if you're—if you're creative about...

SHINN: Who do you think will solve them first? I mean, given how litigious the U.S. is, not just in this particular innovation, but others, what are the odds that the first place where this will really scale up will be someplace like Singapore?

BRYNJOLFSSON: It's funny you should mention Singapore, because we were just talking about that before. Daniela Rus, who's the head of the Computer Science and A.I. Lab at MIT, is doing a lot of work on driverless cars. And she has been coordinating with a government.

Now, you might think that they'd be coordinating someplace in Massachusetts, from the Massachusetts Institute of Technology, working with the governor of Massachusetts. No. They were working in Singapore.

And they have set up a part of Singapore where they have autonomous cars, not just on the highway, but actually in sort of an urban environment. And a lot of it is exactly what you say, Jim, that there's a difference in liability.

They don't have the same type of democracy that we have, I guess you could say. And so they're able to push forward with some ideas that would be a lot harder to do in the United States, or even in Massachusetts.

MUI: I'll give you a more startling example. If you think about it, the automotive company in this world that is most wedded to the idea of safety, and depends upon it for brand value, is Volvo. Who owns Volvo? A Chinese company.

What happens if Volvo develops this technology? It would be a tremendous economic benefit to the owners of this technology. Where are the biggest markets for automotive—for cars these days? China.

So I think there's a tremendous geopolitical set of issues here as well, because we're talking about the biggest innovation since the Model T. And the question is, who's going to enable it?

SHINN: To the degree that you have control over designing the vehicle itself, the kind of sensors that are embedded in all the other vehicles, the urban transportation infrastructure, and then the hard part, the legal and political institutions that surround all of that, not limited just to liability, you think that may be one of the more important factors in where this starts?

MUI: Absolutely.

SHINN: And I guess probably how it propagates?

BRYNJOLFSSON: You mentioned a lot of interdependent parts. And I think the real key—and I think Chunka mentioned this earlier—is that you don't want to wait to have all the other pieces in place before it works. So the successful governments, companies, systems, are the ones that figure out how to scale it up piece by piece.

Can you implement a part of it, you know, a self-parking system just for parking lots? Can you get some benefit when 5 percent, or 20 percent of the cars are able to talk to each other, but not 100 percent? Can you get some benefit from having the car be autonomous on the highway, but still need humans in the city?

And by having those incremental stepping stones, you're much more likely to get adoption than if you say, "Well, we're going to wait until we have the whole package in place."

HEALEY: We were talking about this earlier. And the LIDAR is still a very expensive technology. But with just a stereo camera, or something like 2D LIDAR, which is a fraction of the cost, you can do the autonomous parking, you can do platooning.

So if the government were to designate an HOV lane as the autonomous lane, doing just front-back platooning is a much simpler technical problem, which is much less risky than trying to drive in the streets of New York.

So if we created those things, we could have partial adoption, which creates markets, which reduces cost when you get into what you were calling, Chunka, I think, the virtuous spiral of adoption. And I think that...
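
Front-back platooning of the kind Healey mentions is often framed as a constant time-gap car-following law: each follower keeps a gap proportional to its own speed behind the car ahead. A toy longitudinal controller, with invented gains and limits:

```python
def follower_accel(gap_m: float, own_speed: float, lead_speed: float,
                   time_gap_s: float = 0.9, k_gap: float = 0.5, k_speed: float = 1.0,
                   standstill_m: float = 2.0) -> float:
    """Constant time-gap car-following law (illustrative gains; output in m/s^2).

    Desired gap = standstill distance + time_gap * own speed; accelerate on
    surplus gap and on the lead car being faster, brake otherwise.
    """
    desired_gap = standstill_m + time_gap_s * own_speed
    gap_error = gap_m - desired_gap
    speed_error = lead_speed - own_speed
    accel = k_gap * gap_error + k_speed * speed_error
    return max(-6.0, min(2.0, accel))   # clamp to plausible comfort/braking limits

# Too close and closing: the controller commands hard braking (clamped at -6 m/s^2).
print(follower_accel(gap_m=10.0, own_speed=25.0, lead_speed=22.0))
# Big gap, lead pulling away: gentle acceleration, capped at 2 m/s^2.
print(follower_accel(gap_m=40.0, own_speed=20.0, lead_speed=23.0))
```

The clamping is the point where comfort and hardware limits enter; a real platooning controller also has to account for communication delay and string stability, which this sketch ignores.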

BRYNJOLFSSON: Just to give you a quick mini tutorial, we've thrown out the word LIDAR. I don't think anybody defined it. So LIDAR is basically light-based radar. And those systems cost on the order of $80,000. If you see the Google car, if you saw that little thing on the top there, it's spinning around, and it's detecting everything very rapidly with this light-based radar.

They also, I think, have actual radar—at least the car I rode in, they're changing it, which looks a little bit further ahead. And there are a bunch of other technologies all together. That whole package of technology I'm told—don't quote me on this. I guess we're on the record. But my guess is—my guess is...

SHINN: You could be quoted.

BRYNJOLFSSON: I was told that it was—cost about $150,000 for the whole set of electronics. Now, that number is falling very rapidly, because when you do things at scale, it can be done much more quickly.

One of the most dramatic examples of that was what was done to solve the SLAM problem, which up until 2008—there was an article in a journal saying this was a very difficult problem. SLAM is the problem of Simultaneous Localization and Mapping.

All of us can do it intuitively. Like when I say "go," everyone point to a door. Ready, go.

See, we all did that pretty easily. A robot, a machine, would have incredible difficulty doing that. Up until just a couple of years ago when John Leonard, the computer scientist I mentioned earlier, basically solved the SLAM problem for a room about this size.

And the way he did it was not with the really super-expensive technology. The way he did it was by adapting a Microsoft Kinect device—you know, those things in the Xboxes that cost a couple hundred dollars—which was able to scan a room and figure out where the features were. And with a little bit of extra coding, he was able to solve that problem.
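
For reference, the SLAM problem Brynjolfsson describes is usually posed as jointly estimating the vehicle's poses and the map from odometry and sensor scans. In standard textbook notation (the general formulation, not anything specific to the Kinect work mentioned here):

```latex
% SLAM posterior: jointly estimate the poses x_{0:t} and the map m
% from odometry u_{1:t} and sensor scans z_{1:t}.
p(x_{0:t}, m \mid z_{1:t}, u_{1:t}) \;\propto\;
    p(x_0) \prod_{k=1}^{t}
    \underbrace{p(x_k \mid x_{k-1}, u_k)}_{\text{motion model}}\,
    \underbrace{p(z_k \mid x_k, m)}_{\text{measurement model}}
```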

So the cost of these technologies, and the breakthroughs that are happening so much more rapidly, have really—I mean, they've impressed me. And I'm trying to keep up with my optimistic expectations every time I hear about one of these breakthroughs, both on the technology side, but also on the cost side.

HEALEY: It's also easier if you have a map of the room, and it says, "door."

BRYNJOLFSSON: And that's the other way. You could cheat, right. Exactly. And that's legit, too. Because what you want is a working solution. It's not just a theoretical thing. You want to have something that works practically.

MUI: I think the takeaway from the tutorial is essentially, ignore all the people who throw up cost as an issue. Because we know what the curve is.

HEALEY: And I think cost would be an issue for initial adoption.

MUI: Yes, yes, yes. But we're talking—I mean, we're in the business of developing the technology that has trillions of dollars of benefits.

You know, and we know the cost curve of every information-based technology, what it looks like. So by the time we figure out the technology issues, the regulatory issues, the legal issues, all that stuff that will allow this stuff to be used, it's a non-factor.

I mean, of course, it's a factor. But here, let me give you a data point. For all the companies out there—I know some of you cover them—the car companies who say this stuff is really way out there: Google has spent on this research, thus far, about what it costs to develop a new bumper.

That's—I mean, that's the scale of the investment so far. And we're talking about a massive, massive, you know, pot of potential value to be gained.

So, you know, the technology issues and the cost issues need to be dealt with. But, I mean, we can see that curve. We can see that curve.

SHINN: That's true. Gordon Moore was working at Intel when I first worked in Silicon Valley, back a long time ago, back in the '80s. And Moore's Law continues to drive down the cost of these things.
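
To put rough numbers on the cost curve the panel keeps pointing to, here is a back-of-the-envelope projection that assumes, purely for illustration, a Moore's-Law-style halving of cost every two years, starting from the roughly $80,000 LIDAR figure quoted earlier:

```python
def projected_cost(start_cost: float, years: float,
                   halving_period_years: float = 2.0) -> float:
    """Exponential cost decline: cost halves every `halving_period_years`."""
    return start_cost * 0.5 ** (years / halving_period_years)

for years in (0, 4, 8, 12):
    print(years, round(projected_cost(80_000, years)))
# 0 -> 80000, 4 -> 20000, 8 -> 5000, 12 -> 1250 (into sub-$1,000 territory soon after),
# assuming the halving actually materializes at automotive volumes.
```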

HEALEY: Well, Chunka and I actually disagree, so this is our one little point of disagreement. Working for a technology company, I know that if there were clearly one answer going forward, and it was the only way forward, and there was clearly all that money that you mentioned, we would be going there in an extremely rapid fashion.

But the fact is that they're like, "Well, is it going to be 3D LIDAR, or (inaudible) the 2D LIDAR and the stereo cameras? Maybe the M-to-M communication."

So do we put all the money into reducing the cost of 3D LIDAR, or do we put all the money into sensor fusion? Right now we're at sort of a period of flux, where it looks like there are many different solutions. And policy's going to come in here, too, if they're like, "Oh, well, we're going to demand that everyone have," you know, "front-back platooning capabilities."

There's more than one option.

MUI: But the great thing about that—the great thing about that is there are clearly, radically different paths being pursued to this goal.

HEALEY: Yes.

MUI: Radically different paths. And competition is good for innovation, right. So different companies are making different technology bets. But from a systems standpoint, it's driving innovation forward.

SHINN: Yes. I mean, the takeaway here is that there's going to be a very interesting mutual interaction between technical innovation and public policy, for not just enabling the inflection point, but almost certainly thereafter.

With an eye on the clock, we've come to the second half of the program, where we would encourage comments and questions from members here and out in cyberland. I would ask you, please, to raise your hand—I'll point in your direction—and to use the mike. And if you would, please state clearly and distinctly your name and your affiliation, for the benefit of the people who aren't in this room but are listening.

QUESTION: Craig Drill, Craig Drill Capital. Would you please tell the story of the development of LIDAR, and who owns it today?

HEALEY: I don't know the story. I know that Velodyne owns the technology. I know that it's a laser range-finder that goes around, around, around in a circle. And that, you know, basically, it's a bunch of 2D LIDARs that are doing a sweep everywhere. So I actually don't know how this technology came to be. I know it's based on laser range-finders.

MUI: I think the interesting thing about LIDAR—I didn't jump in on this earlier—is that all the hardware here is essentially accessible to everybody. There's no I.P. around the hardware.

I mean, somebody has to sell it. But mostly, you know, all the various researchers can get access to it. And I think the real distinction is the software.

BRYNJOLFSSON: Yes. So I don't think that it's controlled by any one person. There's different people who have different algorithms.

The other thing, just to underscore what Chunka was saying earlier, I've talked to some of the people, and I do think that those costs could come down dramatically if we got the volumes up, I mean, like down into the sub-1,000 category, or even lower than that.

Somebody—one of the people was telling me that they thought it would be comparable to a digital camera, ultimately, if we could get those kinds of volumes. And again, it's not the only technology.

QUESTION: This is probably one of the coolest conversations we've had at CFR. My name is Binta Brown, and I'm a senior fellow at the Harvard Kennedy School this year.

So I have two questions. The first is, the point at which the LIDAR stops working—snow, bad conditions—is probably the point at which we would rather human beings not drive, either. And so—in some cases, right. I mean, it sounds like very, very heavy fog, really bad snow, it's probably pretty unsafe for human beings.

And so is there a point at which we say if a computer can't handle this condition, it may also be true that a human being can't handle this condition, from a policy perspective? That's the first...

SHINN: We'll do one question at a time. How about that?

HEALEY: I kind of have a strong answer to that one, which is that the computer and the human fail differently. Like when we were talking about doing the 3D LIDAR registration map, the problem is that if the map you have is from a clear day, the road looks this big.

Say it's not snowing, but the snowplows have now changed the contour of the road. Now you can't get a match, even if it's perfectly safe to drive.

So the computer's like, "I have no idea." But the person would be fine.

BRYNJOLFSSON: Yes. But your original point, I think there's a lot of truth to it in terms of the visual. So when I was riding in the Google car, I asked them exactly that question, "What are the boundaries of this?"

And the engineer said, "To a first approximation, if you can't see something, then it can't see something."

If it's foggy, if there's a really heavy rainstorm, or if there are other obstructions, then it's going to have trouble as well. So there unfortunately is a fair amount of overlap in terms of what we can see and what it can see. It's not like it can magically see around things that we can't, or vice versa.

Now, there are things like the snowplows and other things that we might have the common sense to work around, but I think a lot of the disagreement between Jen and Chunka is really—has to do with the time frame.

In the shorter time frame, I tend to agree much more with what Jen's saying. But in the longer time frame, I think a lot of these issues will start working themselves out, and the costs are going to come down, and we'll find other ways.

You could imagine having maps under multiple different conditions, and interpolating them, for instance. You know, it would cost two or 10 times as much, but ultimately, those kinds of numbers won't be—won't be determinative.

HEALEY: If I could just add something about models, I just wanted to bring up the flip-side problem. If the driver is distracted, or inebriated, or otherwise incapable of driving, potentially, you might want the A.I. to take over. So you might want to think about doing an intelligent hand-off based on who's in better shape.

SHINN: Did you have a second part of that question, or do you want to...

QUESTION: It just goes to the point of transformative technology, right? So this is great from a safety perspective, potentially. It's great from a cost perspective, potentially.

But unless you have vehicle-to-vehicle communication, it doesn't necessarily improve the flow of movement in major metropolitan areas. So—or does it?

And if it does, if you could speak a little bit to how it will better facilitate the movement of people, and get rid of these horrific jams so many of us get ourselves caught up in.

SHINN: Jennifer—could I make another plug? She gave a great TED talk on this, which I highly recommend.

HEALEY: This was assuming V-to-V communication. With V-to-V communication, one of the things is that the sensors are actually better than people. And you lose a lot in a traffic jam, because you're stopping and starting.

And the problem is, you have to, with your eyes, sense that car stopping. You have to apply your brakes. You have to leave a safe stopping distance that people can react to.

When we trust the sensors, they can react much faster. They can start leaving a much smaller safe stopping distance. And theoretically, if they all coordinated—this would be with the V-to-V communication, even if it's only front-back V-to-V communication, nothing fancy—they could take coordinated action to go forward.

BRYNJOLFSSON: Just to put some numbers on that, when a highway is totally packed and full, traffic jam, about 90 percent of the pavement is not being used, because on average, if you measure it, there's about four to five car lengths that people leave between them. The lanes are defined to be twice as wide as a car. So most of the pavement isn't being used.

With a good autonomous system, especially if you allow for the V-to-V, but even without it, you could probably double that, or more, in terms of the efficiency of the use of just the existing infrastructure, let alone a change in infrastructure.
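
A quick check of the figures Healey and Brynjolfsson just gave, using assumed values (a 15-foot car, a 1.5-second human reaction time versus a 0.2-second automated one, the four-to-five-car-length gap, and lanes twice as wide as a car):

```python
# Reaction ("thinking") distance at 65 mph: ground covered before braking even starts.
mph_to_fps = 1.466_67                       # 1 mph ~= 1.4667 feet per second
speed_fps = 65 * mph_to_fps                 # ~95 ft/s
print(round(speed_fps * 1.5))               # human, ~1.5 s reaction  -> ~143 ft
print(round(speed_fps * 0.2))               # automated, ~0.2 s       -> ~19 ft

# Pavement utilization in a packed lane, using the panel's figures.
car_length_ft = 15.0
gap_ft = 4.5 * car_length_ft                # "four to five car lengths" -> ~67.5 ft
lengthwise_used = car_length_ft / (car_length_ft + gap_ft)   # ~0.18
lane_width_factor = 0.5                     # lanes roughly twice as wide as a car
print(round(lengthwise_used * lane_width_factor * 100))      # ~9% of pavement in use
```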

QUESTION: (OFF-MIKE)

MUI: One last point on that is that most accidents happen in congested traffic.

SHINN: Gentleman over here. I had an engineering student at Princeton last year who went to work for Google. And she had spent all of her undergraduate research working on efficient engines. She said, you know, "Forget about designing a more efficient combustion engine. The greatest savings in gasoline consumption is from just this."

QUESTION: That's a great connection to the question I wanted to ask. I'm David Schatsky. I'm with Deloitte. And when you hear about driverless or autonomous vehicles, you tend to hear a long series of potential benefits.

And we've heard half a dozen of them here from saving lives, to saving fuel, to reducing congestion. Google has said it would cut down on real estate expenses for parking lots.

The question is which do you see as the most compelling benefit? And if I can—an appendix, if it's saving lives, how much do we really know about this benefit, considering that the circumstances under which lives are lost on the road may not be amenable to autonomous driving alternatives?

MUI: I'll take a first crack at that. And I think this is where the beauty of the story comes in. As we're seeing, most accidents happen in congested traffic, highway-congested traffic, oftentimes. We don't need fully autonomous cars to make that problem a lot better, right.

So you start reducing congestion, you start reducing accidents, you start dealing with the ripple effect of that congestion. And suddenly, you're also reducing cost, right. Suddenly, you're gaining savings from that accident reduction.

"When a highway it totally packed and full, traffic jam, about 90 percent of the pavement is not being used, because on average, if you measure it, there's about four to five car lengths that people leave between them. The lanes are defined to be twice as wide as a car. So most of the pavement isn't being used."
—Erik Brynjolfsson

So in talking to some folks, some actuaries at big auto insurance companies, there are models that estimate that if you just had a 15 to 20 percent adoption of these collision-avoidance technologies—not driverless, but just the collision-avoidance technologies—that would have material impact on the number of accidents, to the point where rates would have to drop precipitously. And that happens in the short term, not the long term.

SHINN: Question way in the back, then we'll come up here.

QUESTION: My name's April Powers (ph) from CCG (ph) Therapeutics. Quick question on sort of applying this to three dimensions.

When—what's—where are we in terms of having this for aircraft, or for other vehicles? Because it seems like it should be.

BRYNJOLFSSON: I think we're further along there, by and large. We all know there are a lot of autopilots that pilots put on and sometimes forget to turn off, and they fly past their destinations if they don't program them correctly. And there are auto-land systems, there are auto-takeoff systems.

And more than half of the aircraft that the Air Force buys today are pilotless, OK. So we're pretty far along in terms of some of those technologies. It'll probably be before we have a lot of fully autonomous cars on the highway that we'll see UPS or FedEx flying airplanes and jets that are autonomous from point to point.

MUI: It is true that every aircraft flying above us now is transmitting in real time its GPS coordinates and its velocity. And they all have a collision-avoidance system as well.

So I think to your point, it's already up there, just not down here.

BRYNJOLFSSON: Exactly.

HEALEY: They have the V-to-I collision-avoidance system, vehicle-to-infrastructure coordination, because there are far fewer things to hit. I mean, that's the beauty of the 3D, that you're not surrounded by other planes. How close are you ever to another plane?

SHINN: Sir?

QUESTION: So I think you referred to this...

SHINN: Could you identify...

QUESTION: I'm Bob Mallard (ph). So I think you probably all agree that this is going to happen. It's really just a question of when.

MUI: We disagree on what "this is," but...

QUESTION: "This is," meaning driverless, autonomous, whichever the definition is.

MUI: No, I think we agree that something big is going to happen, but we disagree on what will happen. For example, I think Jen is very far from believing that we're going to have 100 percent driverless...

QUESTION: When you come back to earth 50 years from now, or 100 years, or 1,000 years, you would all agree that automobiles will be an autonomous or driverless...

BRYNJOLFSSON: Auto, automobile.

HEALEY: Real automobile.

QUESTION: So I take it there's another layer of complexity that needs to be invented, which I think you talked about in your book, some sort of A.I. that overlays the mapping functions, I guess. But if you agree that this is going to happen, how quickly do you think that technology will develop, the artificial intelligence part?

HEALEY: I think this is our primary disagreement: the timeline. I believe that things are going to happen incrementally. I don't think policy changes that rapidly, unless something really dramatic happens, unless there's a really huge demand for it.

So I think it's going to be slow adoption. I think it's going to be gradual adoption. I think we're going to see advanced driver assist.

You're going to see self-parking. Then we may see autonomous driving lanes. We may see zones of cities where there are these like autonomous golf carts that are free, that just take people around, and there's no cars anywhere.

On the other hand...

MUI: Well, there are a bunch of different pronouncements. Sergey Brin from Google has said that he wants it in five years. I think he said that last year, so it's four years now. The chairman of the Chinese holding company that owns Volvo recently said that he believes that by 2020 we'll have fully autonomous cars.

The prediction that I love the best, because there's some intrinsic connection to me about it, is from Chris Urmson, who leads the Google car project. He has said that he would want the car to be available for purchase by the time his son is old enough for a driver's license.

My son is nine. His son is 10. I like his prediction.

BRYNJOLFSSON: I think that's a little on the ambitious side. I do think it's important to make this distinction between the autonomous cars where there's still the potential to hand off to a human, versus one that tries to be truly 100 percent without the human.

That last bit I think is going to be much harder. But I've been wrong before, so I want to be careful about that.

Morgan Stanley put out a report just last week that predicted 2022 for the fully-autonomous car. I would be surprised if it was for like city traffic in New York or Boston, at that level. But I have no problem for highways, for parking lots, for other situations where you have a more controlled environment.

"I think it's going to be gradual adoption. I think we're going to see advanced driver assist. You're going to see self-parking. Then we may see autonomous driving lanes. We may see zones of cities where there are these like autonomous golf carts that are free, that just take people around, and there's no cars anywhere."
—Jennifer Healey

MUI: Maybe a way to think about it is that I believe that by the time Chris's son is 16, that we'll have a car available that we'll look at from today's standpoint and say that is a radically different animal than what we have.

SHINN: Gentleman at the table right back there.

QUESTION: I'm Paul Sacks from Multinational Strategies. We know that in voice communication, bandwidth availability has become a constraint to the development of the industry.

And I guess my question is, do you see that as a constraint to this process? Will there be enough bandwidth for driverless cars?

MUI: Depends on whether Jen is right or I'm right.

HEALEY: Yes, if Chunka's right, we won't need any bandwidth. It'll be fine. All the sensors are self-contained. Otherwise, it's a question of using the bandwidth intelligently. And it might mean making, you know, a lot of intelligent gateways, if you want to do drive-by-wire along the road.

And again, it has to do with the adoption and cost issue. If you think that 3D LIDAR is just going to come down in cost, there's no reason not to use that. You're not going to be as dependent on bandwidth.

If you want to try to get away with just front-back collision sensing and stereo cameras, and try to offer autonomous driving for $1,000 in a package, and the government wants to provide drive-by-wire, and there are, like, autonomous safe routes to get around with these discount autonomous cars, then you're going to have bandwidth issues, which are going to have to be addressed by, you know, repeater stations, investments in infrastructure.
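
To put a rough number on the bandwidth question, with assumed parameters since the panel does not specify a message format: suppose every nearby car broadcast a short status message of around 300 bytes, ten times a second.

```python
def v2v_bandwidth_mbps(nearby_cars: int, msg_bytes: int = 300, rate_hz: int = 10) -> float:
    """Aggregate local broadcast load in megabits per second (illustrative sizes)."""
    return nearby_cars * msg_bytes * 8 * rate_hz / 1e6

print(v2v_bandwidth_mbps(50))    # ~1.2 Mbps within radio range on a busy road
print(v2v_bandwidth_mbps(500))   # ~12 Mbps in dense traffic: spectrum starts to matter
```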

BRYNJOLFSSON: Or, to Jim's earlier point, this is a place where government could make a big difference, because there's an enormous amount of bandwidth being allocated to television stations. A lot of it, and even the space between television stations, could be freed up. And a fraction of that bandwidth would go a long way towards making these technologies a lot easier.

SHINN: Sir? Nancy, is there a way to take questions from members in cyberspace? Not yet?

QUESTION: I'm Steven Berkenfeld from Barclays. I'm going to ask the question I always ask about these sort of things. We talked a lot about the benefits. But one of the negatives I see is the impact on employment and jobs.

Even in a limited application where you need a standby driver, I see over-the-road trucks being able to be operated 24/7 instead of having to stop for 10 hours. And even if you can only eliminate 80 percent of the accidents in a limited way, there are a lot of people who repair cars. So how does this technology factor into your consideration?

And maybe it's no different than all the other technologies we're developing, but I think it's part of the discussion, especially on something which, by its nature, involves local jobs. It's a little bit different than the globalization and outsourcing that we talk about with other technologies.

MUI: I think that this is different. And the reason it's different is that we're not just talking about cost savings. So we don't—I think that it may come down to us choosing between lives and jobs; not dollars and jobs, but lives and jobs. Lives, the million people killed a year, you know, and jobs. And I think we have to choose the lives.

BRYNJOLFSSON: So, Steve, you've thought a lot about these issues. And I'm glad you brought it up, because it is something that I think will also be a factor.

There are on the order of 4 million people who drive trucks professionally, maybe another half million who are taxi drivers, chauffeurs, you count some kinds of delivery, and people who do it part-time. So there are a lot of jobs that potentially could be affected.

I would say there also may be some new jobs created, but probably not on the same scale. I think there'll be new business models created, and there will certainly be new technologies that will be implemented and developed. But the net effect probably would be that in that category of jobs there would be fewer, less demand for them, and some of them would change.

I could imagine some of those long-haul truckers, you know, there might be—the truck goes most of the way, and there's sort of like a docking, you know, a tugboat comes in and drives it the last bit. Or maybe the person is sleeping in the truck and doing other work, you know, clerical work and other stuff, and then occasionally is available for, you know, if there's construction, or something unexpected happens. It would be a very different kind of a job.

But like with a lot of these technologies, there's going to be a big shift in the demand for the kinds of people needed. There'd be more demand for people doing software and creative solutions for these autonomous vehicles, the V-to-V, the V-to-I infrastructure, everything else, and less demand for people who are doing relatively routine kinds of tasks.

And that's something that will have effects on inequality, as well as jobs.

HEALEY: Yes, I agree, it's going to be a job shift. I mean, we don't have the Bell telephone switchboard operators. We don't have typists. We don't have elevator operators now. I mean, you know, jobs are going to go away.

Hopefully, you know, accident lawyers, personal-injury lawyers, will go away. So those jobs will change. People will lose jobs, I agree. But it's, I think, you know, a better technology. Hopefully, people will find better things to do with their time.

MUI: There are hundreds of thousands of people in this country that work in auto-insurance call centers. What are you going to do? Say, "We're not going to do this stuff because we have to keep their jobs"?

SHINN: We were joking among ourselves about how terrified we were at the prospect of all those unemployed personal-liability lawyers.

HEALEY: What were they going to do with their time?

MUI: Using their creativity in other ways.

SHINN: In the far back?

QUESTION: Hi. Jessica Harris from NPR. I'm just wondering if there's been any research done on the safety of having the LIDAR, and the radar, and the GPS, and all this concentrated in your car. I mean, I'm somebody who walks around with her phone not, like, in my front pocket, because I worry about my ovaries.

So I'm just wondering if there have been—I shouldn't have shared that. But anyway, I'm wondering if there has been much attention to, you know, the safety of all this? And again, I don't know the underlying technology enough to—it's probably all pervasive anyway, already.

HEALEY: What I know about LIDAR is that it's light. So it's a lot safer than radio waves. I think there probably have not been enough studies on the dangers of radio waves.

But this is a different technology. It's up above you on top of the car, and it's pointing outward. So I think it's both far removed from you and a less potentially harmful technology than cell phone technology.

SHINN: Sir, by the pillar here.

QUESTION: I'm curious to know...

SHINN: Identify your name and affiliation, please.

QUESTION: Earl Karr (ph), thank you. I'd be curious to know, could you see any scenarios in which this technology could not be adopted in the future? What could potentially derail your scenarios from this actually coming to fruition?

MUI: Oh, I think there are 1,000 ways this could get derailed. I mean, from one standpoint, going back to the earlier question, you look at all the entrenched business interests for whom this is a very bad day. We already know that auto dealers have a tremendous amount of local political power.

You look at auto dealers, and cab drivers, and truckers: there are a lot of entrenched interests that could delay this, delay it by, you know, throwing up regulatory kinds of issues. So the politics, the regulation, is an issue. The liability, you know, if we don't address that, is an issue.

There are still technology problems to be solved. I mean, those could be issues. But I think that's less of an issue, because we can get incremental benefits along the way. So, I mean, there are a lot of hurdles that have to be dealt with.

SHINN: The gentleman right there?

QUESTION: Phil Huyck from Encite, a micro fuel cell company. I can imagine a scenario where cars as we think of them simply disappear. You're talking about autonomous transportation. I suspect in 25 years, we'll all be getting in our own pods.

But my question goes to every time there's a major technological innovation, there's also a vulnerability that comes with it. Is there any chance that hackers could get into this system, and you would have the mother of all traffic jams?

MUI: Oh, for sure.

BRYNJOLFSSON: Oh, it's a certainty; I don't think it's just a chance. I think that's one of the new kinds of risks that we would have, that we wouldn't have had previously: our cars will become computers, and increasingly they already are. Already, people have hacked into parts of automobile systems.

And so you're going to have to have layers of security. And I hate to say this, but there are probably going to be holes in them from time to time. And, whether maliciously or by accident, bad things will happen.

So I still think that when you weigh the costs and the benefits, it's very uneven: the benefits vastly outweigh the costs. But I wouldn't say that there's zero cost to those kinds of security issues. And of course that's true not just of cars, but of our entire lives, everything.

We were talking earlier about the Internet of things. All of the objects that we'll be interacting with increasingly are going to communicate with each other electronically, and that means potentially there's room for hackers to abuse that.

HEALEY: I think there needs to be security at all levels, you know: firmware, software, communications. It's difficult, because with every security measure, you don't want to create a denial of service. You don't want to end up unable to drive your car just because some encryption didn't work right. So it's a very complex and important issue.
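To make that layered-checks idea concrete, here is a minimal Python sketch; the message format, the pre-shared key, and the speed limit are invented for illustration and do not describe Intel's or any automaker's actual design. Note that a failed check simply drops the incoming message rather than disabling the vehicle, which is the denial-of-service concern Healey raises.

```python
# Purely illustrative sketch of layered validation of an incoming
# vehicle-to-vehicle message; the format, key, and limits are hypothetical.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-a-real-credential"  # hypothetical pre-shared key


def accept_v2v_message(raw, signature):
    """Return the parsed message if it passes every layer, else None.

    Failing any check drops the message silently; it never stops the car.
    """
    # Layer 1: communications -- authenticate the sender.
    expected = hmac.new(SHARED_KEY, raw, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return None

    # Layer 2: software -- reject malformed payloads.
    try:
        msg = json.loads(raw)
    except ValueError:
        return None
    if not isinstance(msg, dict):
        return None

    # Layer 3: application -- sanity-check the contents (speed in m/s).
    speed = msg.get("speed_mps")
    if not isinstance(speed, (int, float)) or not 0 <= speed <= 90:
        return None

    return msg
```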

SHINN: How many hackable operating systems are embedded in the microprocessors and the microcontrollers in that Google car, do you suppose?

HEALEY: Do we understand the question? I'm sorry.

SHINN: How many—how many individual operating systems are embedded in that—in that car in its sensors?

HEALEY: There are a lot of different ways that sensors work in cars. I'm not exactly sure how the Google car works. To the best of my understanding, it has its own after-market solution, which handles the communication from the LIDAR. And that interfaces with the controls of the vehicle.

I don't know what they're using for that. I would assume it would be one of the Google-friendly platforms, something like...

SHINN: I suspect so, yes.

HEALEY: Yes. I don't know what they're using. But I do know that currently there are a lot of embedded controllers in your car that are all independent, and they work in very old languages to prevent this kind of thing.

I know that the new technology is very advanced. But I don't know what their security is.

SHINN: Certainly, the vulnerabilities are connected to the show-stopper question, which (inaudible) is obviously related to the public-policy question that seems to be on the minds of a number of the members of the audience.

HEALEY: Just with regard to public policy, this is, I think, important when you're talking about the technologies. The LIDAR and the sensor fusion, that's one level of software integration.

And then if you're just doing front-back detection, that's a single sensor. So there's a difference between doing fault detection in a single sensor versus fault detection in a system. So it's a different level of risk.
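As a rough illustration of that distinction (assumed numbers, not any production system's specifications), a single front/back range sensor can only be checked against its own operating limits, while a fused system can also cross-check sensors against one another:

```python
# Illustrative only: hypothetical range limits and tolerance, not real specs.

MIN_RANGE_M = 0.2          # assumed sensor minimum range
MAX_RANGE_M = 200.0        # assumed sensor maximum range
FUSION_TOLERANCE_M = 2.0   # assumed allowed LIDAR/radar disagreement


def single_sensor_ok(range_m):
    """Fault check for one front/back range sensor: the only thing we can
    test is whether the reading is within the sensor's own limits."""
    return MIN_RANGE_M <= range_m <= MAX_RANGE_M


def fused_sensors_ok(lidar_m, radar_m):
    """Fault check for a fused system: each reading must be valid on its
    own AND the two sensors must agree within a tolerance."""
    if not (single_sensor_ok(lidar_m) and single_sensor_ok(radar_m)):
        return False
    return abs(lidar_m - radar_m) <= FUSION_TOLERANCE_M


# A fused system can flag a reading that looks fine in isolation:
print(single_sensor_ok(50.0))        # True
print(fused_sensors_ok(50.0, 5.0))   # False: the two sensors disagree
```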

SHINN: Gentleman in a French blue shirt. Is that right?

QUESTION: I'm Dennis Neal (ph), formerly of Fox Business, and now I'm with a tiny media company. The one thing I haven't heard discussed at all is consumer resistance. And the Silicon Valley guys are all the time giving us great new features because they can, not because we wanted them.

And the car is one of our last domains (I don't own one, I don't drive one, but I know people who do) of our own privacy, our own control, our own power. When you're in that traffic jam for 45 minutes, you're away from your spouse and your kids, and you don't have to e-mail anybody. And driverless cars will change all of that.

Are you guys underestimating that we consumers are unwilling to give up control?

MUI: No.

SHINN: I think that's a rhetorical question.

MUI: I think you're only partially right, because your description fits a segment of the population. So the key will be, for whatever techniques, whatever products are advanced, what level of adoption is required in order for the benefits to be gained.

So I think what we need is technology that can be adopted incrementally and doesn't depend on 100 percent adoption. And then you can keep your car. Actually, you don't have one, but you dream of having a car. And then somebody else can choose to have a driverless car, where they're chauffeured around.

Now, I'm shocked that you don't believe that being chauffeured around is more luxurious than having...

QUESTION: (OFF-MIKE) I noticed that even the blind man had to sit at the wheel. I think that we're even farther away, aren't we, from you being able to sit in the back seat and just trust it 100 percent? I mean, I don't think that Jennifer is comfortable with that.

HEALEY: I have a big problem with the 95 to 100 percent. And I would want a different backup system. If there's not a human driving, I would want a different set of checks.

BRYNJOLFSSON: These are both really the same point, which Chunka made, which is that it is not all-or-nothing. It's not taking away your existing car. It's a matter of how you can set up an incremental adoption that some people will flock to.

Because there are people at the other end of the curve as well, the early adopters who will adopt almost anything. And there are going to be people in the middle, and there are going to be people at the end.

And the real key is the people who develop business models and technologies that scale smoothly through that entire adoption curve and don't require everybody to buy in before anybody can buy in.

HEALEY: And I just wanted to make an analogy between the manual transmission and the automatic transmission. I think there are people out there, and I know them, who want the manual, because they want that kind of control. It's not really driving without that.

BRYNJOLFSSON: Exactly.

HEALEY: And I think there will be those people. I just think that segment will shrink as the technology proves its benefits.

MUI: There's an interesting aspect to this question, which is that I think a lot of those people sit in car companies. So the danger here is actually that the folks in the car companies will take the view of, "That's not a car. I'm not going to build that," and they're going to miss out because of that.

SHINN: One of the utterly inflexible rules of the Council on Foreign Relations is that we end meetings on time. And we are at 2:00. I would ask you to wait at least until you step out onto 68th Street before you text your tech trades from this meeting. And I hope you'll also join me in thanking our three remarkable guests.
