Symposium

Robert B. Menschel Economics Symposium

Tuesday, April 18, 2017
Jeffrey Liebman and Maya Shankar discuss with Zachary Karabell

The Council on Foreign Relations held a two-session symposium on April 18, 2017 that addressed the importance of economic irrationality, crowd behavior, and other elements of behavioral finance in understanding the global economy and making effective economic policy. The symposium was presented by the Maurice R. Greenberg Center for Geoeconomic Studies and was made possible through the generous support of Robert B. Menschel.

Session I: A Conversation with Daniel Kahneman

Daniel Kahneman discusses insights from behavioral economics.

MURRAY: Can we get started? Good. Thank you all for coming. Welcome to the Council on Foreign Relations Robert B. Menschel Economics Symposium.

My name is Alan Murray. I’m the chief content officer for Time Inc. And I’ve really been looking forward to the conversation today with Daniel Kahneman, who, as I think most of you know, needs no introduction. He is a psychologist and economist, winner of the Nobel Prize in economics, and author of what I would argue is one of the most important and influential books of the last decade—apologies to those in the audience who have written their own books—(laughter)—I’m sure they have all been very important and influential as well—“Thinking, Fast and Slow,” which is a fascinating exploration of how the human mind works and how intuitive thinking, which you call “System 1,” interacts with deliberative thinking, “System 2.” And my conclusion after reading it is that System 1 is the 600-pound gorilla here—

KAHNEMAN: Yeah.

MURRAY: —more than we realize.

KAHNEMAN: Absolutely.

MURRAY: Can you talk about that a little bit?

KAHNEMAN: Well, I mean, the claim, you know, in the book is that what we are conscious of is our conscious thoughts. I mean, we are conscious of our deliberations. Most of what happens in our mind happens silently. And the most important things that happen in our mind happen silently. We’re just aware of the result. We’re not aware of the process. And so the processes that we’re aware of tend to be deliberate and sequential, but the associative network that lies behind them and that brings ideas forward into consciousness—we’re really not aware of it.

So we live with System 2. We’re mostly aware of it, you know, those terms. And System 1, which does most of the work, is not recognized.

MURRAY: But because we’re aware of the deliberative thinking, we tend to think our decisions are more deliberative than they really are.

KAHNEMAN: Absolutely. I mean, what really happens is—what I find very remarkable is you ask people why they believe certain things or why they have chosen to do certain things, and people are not at a loss for words. They tell you. (Laughter.) So we always have—we always have reasons, or we think we have reasons. But actually, when you look more closely at it, very often the reasons have very little to do with—they’re not causes, they’re explanations.

MURRAY: After the fact.

KAHNEMAN: Yeah, after the fact.

MURRAY: So, you know, Malcolm Gladwell wrote a book, probably around the same time as your book, called “Blink” that made the argument that intuitive, System 1 decisions are often better than deliberative decisions. Do you buy that?

KAHNEMAN: Well, in some cases that is certainly true. So there are many cases where there are expert intuitions. A master chess player just sees only strong moves. I mean, all the moves that come to the mind of a master are strong moves, and then there’s some deliberation in choosing among them.

A lot of research has been done on firefighters, and experienced firefighters do things without knowing why they do them. So when you talk about that, it sounds like a miracle. But all of us are intuitive drivers. So we’re experts, and we do things without thinking. And if we stop to think about everything we do when we drive, we couldn’t do it. All of us are experts in social contact with other people. So we intuitively understand social situations without knowing why we do. So there are many domains in which intuitive expertise is really quite reliable and much better.

MURRAY: So who needs deliberative thinking? (Laughter.)

KAHNEMAN: Well, you know, in many domains we don’t have expertise. And in the domains where we don’t have expertise, it would be fine if we didn’t feel that we have expertise when we don’t. So many intuitions are right, but we also have intuitions—and subjectively they feel just the same—which are not based on expertise; they are just based on other processes.

MURRAY: So I don’t want this to get too political, but the current president of the United States—(laughter)—has said on a couple of occasions that one of his great strengths is that he can, without expertise—he’s not a big book reader; he probably hasn’t read your book. Everyone here has, I promise you, but he may—but he says without a lot of—without a lot of expertise, he can make better decisions than other people by applying common sense. How does that fit into your framework? (Laughter.)

KAHNEMAN: Well, let’s put it that way. I think I can understand why he’s so happy with it, I mean, with the way the mind works. And when you are not aware of doubts, you know, when you are really not very clearly aware of what you said before, and when you are—it’s a happy state to be in. (Laughter.) And not having doubt is clearly part of his makeup. He is not faking it. I mean, this I’m sure, you know, is quite true. He really thinks that he’s great.

MURRAY: Well, you have a whole section of your book on overconfidence—

KAHNEMAN: Yeah.

MURRAY: —which you seem to feel is one of the great problems of the way we—

KAHNEMAN: Yeah, I didn’t have—I didn’t have President Trump in mind when—

MURRAY: No, no, I’m—I wasn’t—I wasn’t suggesting that. But talk a little bit about that. I mean, that’s—you highlight that as one of the great dangers of this way of thinking.

KAHNEMAN: I mean, one of the main things that happens is that we live in a world—subjectively, we live in a world that is much simpler than the world out there really is. And that is true for all of us; that is, we all live in a simplified world. And one of the things that we can recognize is that whenever an event happens, its explanation comes with it immediately. That is, we understand the world even when we couldn’t predict it. We understand almost everything after the fact. We have a story.

MURRAY: Especially people in my business, yeah. (Laughs.)

KAHNEMAN: But not only in your business. I mean, you know, you cater to something that happens in everybody’s mind. And this ability to tell stories after the fact and to believe them—because they are—because they come without alternatives—this feeds a sense that the world makes sense, the world is understandable. And that sense that the world is understandable is very much exaggerated. I mean, so we’re not aware of complexities because we somehow rule them out, and that’s part of our makeup.

And that leads to overconfidence. Overconfidence basically is because it’s quite often extremely difficult to imagine alternatives to the way that you think. And one of the things that—I mean, I’m going to say something you haven’t asked me about, but when—

MURRAY: People do that to me all the time. Feel free.

KAHNEMAN: I know. (Laughter.)

But, you know, when I—when I see this glass full of water, I know it’s there. I mean, it’s reality. And I’m sure you see it too, because I assume that you see the same reality that I do. Now, about this glass it’s probably true. But the same kind of false consensus, as we call it, plays out in many business situations, many situations of decision-making. There are differences, large differences, among people that they’re not aware of, and they can’t imagine those differences. This is very abstract.

MURRAY: Do you—do you have an example?

KAHNEMAN: I’ll give you an example. So here is an example. It’s a large insurance company, and they have many underwriters, who from the point of view of the company are interchangeable; that is, you know, they get assigned essentially at random to different cases. Now you take a set of cases—I’m describing an experiment we actually ran—and you present them to, say, 50 underwriters, and they put a dollar number on them. And now I ask the executives in that company something that you can ask yourselves. Suppose you take a random pair of underwriters and you look at the two numbers they produced, and you take the average of those two numbers, and then you take the difference between those two numbers. In proportion, you know, how large is the difference? And you probably have an intuition, because most people do. In a well-run business, with experienced underwriters, what they expected was something like 5 to 10 percent variability. That sounds about right. If you’re a pessimist you’ll say 15 (percent). It is actually 50 (percent), five-zero.

MURRAY: Wow.

KAHNEMAN: And that is experienced underwriters. Experience doesn’t reduce that variability.

MURRAY: And this was their own underwriters?

KAHNEMAN: Their own underwriters.

MURRAY: So they had been working with them for years, they knew what—

KAHNEMAN: Their own underwriters. It’s the same with claims adjusters. You know, in the insurance business, it’s extremely important when a claim comes in to determine what it might be worth, what it might cost. And there, too, about 50 percent variability.

MURRAY: This is another big theme of the book: We as human beings tend to be extremely bad at making statistical judgments.

KAHNEMAN: Yeah.

MURRAY: We get it wrong.

KAHNEMAN: In this case, though, what I want to emphasize before I get to statistics is that what I found most strange in my interaction with that insurance company was the problem was completely new to them. It had never occurred to anybody. They had what I call now a noise problem, and they had a huge noise problem. Obviously, you know, if there is 50 percent variability, something needs fixing. And they realized that because they believed the numbers. They had generated the experiment themselves. But how could people not be aware that there is a noise problem?

And there is just an assumption: When I see a case, I think I see it correctly. I respect you. You are my colleague. I assume you would see the same. And so there is that assumption that people agree when, in fact, they would not agree.
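(To make the noise measure concrete, here is a minimal sketch in Python. The premium quotes are invented for illustration; only the metric follows Kahneman’s description: the absolute difference between a random pair of quotes, divided by the pair’s average, averaged over many pairs.)

```python
# A minimal sketch of the noise measure Kahneman describes. The quotes
# below are invented for illustration; only the metric follows the talk.
import random

random.seed(0)

# Hypothetical premium quotes (dollars) from ten underwriters, each
# assessing the identical case.
quotes = [9800, 14500, 11200, 16000, 8900, 12700, 10400, 15100, 9600, 13300]

def relative_difference(a: float, b: float) -> float:
    """Absolute difference between two quotes as a fraction of their average."""
    return abs(a - b) / ((a + b) / 2)

# Average the metric over many random pairs of distinct underwriters.
pairs = [random.sample(quotes, 2) for _ in range(10_000)]
noise = sum(relative_difference(a, b) for a, b in pairs) / len(pairs)

print(f"average pairwise relative difference: {noise:.0%}")
```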

MURRAY: Fascinating, yeah.

Now, more broadly—I mean, part of the reason I ask this is because I studied economics in graduate school and I learned this whole elaborate discipline, which was built on the notion that people are reasonably rational and do fairly well with the statistical problems that face them in daily life—until I read your book, which brought that all down. It’s a pretty dramatic change in economics that’s occurred over your career and lifetime.

KAHNEMAN: Well, I mean, I don’t think there’s been such a deep change in economics—

MURRAY: Really?

KAHNEMAN: —as a result of our work. But people in economics are more tolerant than they were 30 years ago of the idea that people make mistakes.

I don’t like the word “irrational.” I mean, I think, you know, we are—

MURRAY: Because?

KAHNEMAN: Oh, because, you know, we have been—Amos Tversky, my colleague, and I, you know, were sort of called the prophets of irrationality. We really never wanted to be thought of that way. I think of irrationality as sort of impulsivity and, you know, sort of a crazy thing. What is really happening is that the standard of what economists call rational is totally infeasible for a finite mind. And so there is something deeply unreasonable about the theory of rationality.

MURRAY: (Laughs.)

KAHNEMAN: And so people, you know, they muddle through, but they’re not rational in that sense. You just cannot meet those standards.

MURRAY: Did you get some resistance from your colleagues in the economics profession arguing that there was something deeply unreasonable about an assumption on which the profession was built?

KAHNEMAN: Well, yeah, I mean, we got—(laughter)—we got a fair amount. I mean, mostly what happened—mostly what happened is that economists ignored us for a very long time. And then a few economists—

MURRAY: Did that bother you?

KAHNEMAN: No.

MURRAY: You didn’t mind being ignored?

KAHNEMAN: Absolutely not. I mean, you know, we would not—we did not do our work to reform economics.

Behavioral economics actually started out in a bar, I would say. (Laughter.) That’s the origin story. There was a conference, I think, of the sciences. And the person who at the time was the vice president of the Sloan Foundation, and who eventually became the president of the Russell Sage Foundation, approached Amos Tversky and me at that bar, and he said that he wanted to bring economics and psychology together, and did we have any idea about how to do that. And I actually remember what I told him, because I told him that this was not a project on which he could expect to spend a lot of money honestly, because there just isn’t that much you can do.

MURRAY: (Laughs.) Not going to happen.

KAHNEMAN: And mainly, don’t give any money to psychologists who want to reform economics. (Laughter.) But look for economists who might be interested in psychology and support them.

And the first grant that was made at the Russell Sage Foundation was to an economist, a young economist called Richard Thaler—

MURRAY: Wow.

KAHNEMAN: —to spend a year with me in Vancouver. And I think that that year and that grant was, you know, foundational, and the rest he did. I mean, it’s not that we reformed economics; you know, there was a movement among economists, among young economists who were interested in that field and that developed it.

MURRAY: Fascinating.

KAHNEMAN: It’s still a minority field, but it’s now accepted.

MURRAY: Accepted, yes.

So you have people in this room who make a lot of important decisions and consequential decisions every day. So tell them how to improve their own decision-making. We’re going to do a little self-help here.

KAHNEMAN: No. (Laughs.)

MURRAY: How do they improve their decision-making processes?

KAHNEMAN: Well, I would say that when you talk to an individual, I generally simply refuse to answer that question because, you know, I know how little studying this problem has done for the quality of my decisions, so I’m not—(laughter)—I’m not going to—

MURRAY: You don’t—you don’t feel like you make better decisions after the last 30 years?

KAHNEMAN: Seriously, no. (Laughter.) But organizations are different. I am really much more optimistic about organizations than about individuals because organizations have procedures and processes, and you can design procedures and processes to avoid some problems.

MURRAY: Yeah. Give me some examples. We were talking earlier about the premortem.

KAHNEMAN: The premortem is an idea—not mine, but it’s one of my favorites—and this is that when in an organization a decision is about to be made—it’s not finalized yet—you get a group of the people who are involved in making the decision and you get together a special meeting where everybody gets a sheet of paper. And the meeting is announced as follows. Suppose we just made the decision that we are contemplating. Now a year has passed. It was a disaster. Now write the history of that disaster. That’s called a premortem, and it’s a splendid idea, I think. And it’s very good because in an organization, typically, as you approach a decision, and especially if the leader of the organization is clearly in favor of it, it becomes disloyal to express doubts. Pessimists are unwelcome. Doubters are unwelcome. And what the premortem does is legitimize doubt, and it gives people an opportunity to actually be clever in raising doubts that they never would raise in the normal run of—

MURRAY: Does it very often change the decision?

KAHNEMAN: I don’t think so. But I think it—(laughter)—but I think it often improves the decision. That is, it’s difficult for me to imagine that if a group is set on doing something they will just abandon it because somebody thought of something. But they will take precautions they might not have taken. They will consider scenarios they might not have considered. So this is a way to improve decisions, not to modify them radically.

MURRAY: Yeah.

Let’s take a few minutes. I’m going to open it up to questions in just a few minutes, so prepare them. They need to be brilliant.

But let’s just take a minute to talk about artificial intelligence. And I really—two things I want to ask you. The first is, this process that you describe of how the human brain works and the interaction between the intuitive and the deliberative, do you think that can ever be replicated by algorithm?

KAHNEMAN: Well, you know, the answer is almost certainly yes. You know, this is a computing device that we have. It won’t be replicated exactly. You know, it shouldn’t be replicated exactly. But that you can train an artificial intelligence to develop powerful intuitions in a complex domain, this we know.

I mean, last year a program devised by DeepMind in London beat the world champion at Go. And I met them a few months later, and they said that their program is currently so much better than it was when it beat the world champion.

MURRAY: Wow.

KAHNEMAN: That it’s not even close. And the reason is that between the time they beat the world champion and the time I talked to them, that program had played with itself 30 million times. (Laughter.) And so the kinds of progress that you can make—and I pick Go because it’s an intuitive game. I mean, it’s a game where typically people cannot really explain why they feel a move is strong or not. And the observers—there’s going to be a film—I think the premiere is at the end of this week—called “AlphaGo,” because they made a film of that story. And evidently there were moves that the program made that people who are experts at Go recognized immediately as extremely strong moves and completely novel ones, and Go has—

MURRAY: Wow. And will artificial intelligence improve the quality of decisions?

KAHNEMAN: Well, you know, it depends in what domain. So there are certain domains where we can see it coming. I mean, there are professions that are rapidly becoming obsolete. I mean, dermatology—diagnostic dermatology is obsolete. Diagnostic radiology, it’s just a matter of time.

Now, when will it be the case that you will have a module for evaluation of business propositions? Not immediately, but do I see any reason to believe that there won’t be one within 10 or 15 years? I guess—I think there will be one.

MURRAY: And will it operate on the principles of System 1, intuition? Or will it operate on the principles of System 2?

KAHNEMAN: I think, in a way, neither. I mean, it will look more like—you know, it will be very fast, and in that sense it will look like System 1. But—and that will happen, too—there will be programs that will reach a conclusion very quickly through a process of learning, like learning Go, with big data.

MURRAY: Yeah?

KAHNEMAN: So you learn from very big data.

MURRAY: Do you think it’s possible that we will then integrate the intuitive and deliberative better than the—than the human brain does?

KAHNEMAN: Well, one of the things is that, you know, having developed that kind of software, we’ll also develop programs to explain it in System 2 terms. So it’s going to be separate, because what generates a solution is the deep learning. You know, it automatically looks at the big data and develops—

MURRAY: Can’t really be turned into a story that we can grasp.

KAHNEMAN: No, it’s not—it’s not. But I’m quite sure that that is a development that is coming, that we’ll develop programs to tell stories about those decisions so that we can understand them in terms of reasoned argument.

MURRAY: So you have a fascinating personal background. You were born in Tel Aviv. Your parents were from Lithuania, moved to Paris, were there when the Nazis came over. Your father was held for a period of time. How has that affected your unusual career choice?

KAHNEMAN: I’m not sure it has.

MURRAY: Really?

KAHNEMAN: I’m not sure it had any effect at all. I mean, I—

MURRAY: There weren’t a lot of people running around putting economics and psychology together at the time you did.

KAHNEMAN: No, but you know, that’s an accident. I mean, you know, that is the kind of thing that happened accidentally. Why I became a psychologist, I—when I was introspecting about this, I thought it’s because my mother was a very intelligent gossip. (Laughter.) And I—gossip was—you know, intelligent gossip was really an important part of my life as a child, just listening to it. And people were endlessly complicated. There was nothing simple, and there was a lot of irony. And that, I think, is something that I grew up with and that maybe turned me into a psychologist.

MURRAY: And then how about the economics part?

KAHNEMAN: Oh, the economics was completely accidental. I had a colleague, Amos Tversky, who was more of a mathematical psychologist. He was not an economist either. But we did work on decision theory, which we published in an economics journal—Econometrica. The reason we published it there was that it was the most prestigious journal, you know, to which people would send that kind of theory. If the same article had been published in a psychological journal, no economist would have looked at it. But because it appeared in their journal, quite a few economists took it seriously. It was an indication that we were worthy of a certain kind of respect. And so we were adopted by economics. It’s not that we ever had an ambition to change economics.

MURRAY: So I could go on for a long, long time, but I want to open it up to the members because there are already about five hands up in the air. Just a couple of things. A reminder that this meeting is on the record, so feel free to use it. If you have a question, wait for the microphone, speak directly into it, state your name and affiliation, and limit yourself to one question; no multiple questions. If you ask multiple questions, he’s not going to answer them.

Start right there in the back and then over there on the other side.

KAHNEMAN: And I’m hard of hearing, so don’t be insulted if I ask for the question to be repeated.

Q: I’m Lew Alexander from Nomura.

The question I’d like to ask is really about historical analysis, and it’s essentially a question about how optimistic or pessimistic I should be. On the one hand, history has the problem that you look back for a reason and you tend to find what you’re looking for. But at the same time, there are procedures that good historical analysis uses to deal with that: adherence to primary sources and whatnot. And I guess my question to you is, do you feel like good history is possible? Or are we sort of doomed to the confirmation-bias kind of problem in historical analysis?

KAHNEMAN: Well, you know, it’s hard for me as an outsider to define what “good history” would be like. History, by its nature, is going to be a story that we tell, with the strengths and limitations of stories. You can make it as factual as possible, but it’s not going to be a science. It’s not going to be general, because it deals with individual cases and with individual stories. So I don’t really know what an answer to your question would look like, because I have no idea what good history would look like.

MURRAY: You said something interesting earlier; why is it so hard to come up with alternative stories?

KAHNEMAN: Well, this is really a characteristic of the perceptual system: when we perceive things, we make a choice. And frequently, when stimuli are ambiguous, we can see them this way or that way. Everybody here, I’m sure, has seen the Necker cube. It’s the sort of cube that’s flat on the page but appears three-dimensional, and it flips. If you stare at it long enough, there are two three-dimensional solutions that you see, and they flip automatically. There’s nothing voluntary about it. And it flips all at once, and you only see one interpretation—

MURRAY: You can’t see them both at the same time.

KAHNEMAN: You don’t see them both. You know that there are two, but you see only one. And what happens is a process where once a solution gets adopted, it suppresses others. And this mechanism of a single solution coming up and suppressing alternatives, that occurs in perception and it occurs in cognition. So when we have a story, it suppresses alternatives.

MURRAY: Fascinating. So, for a historian—even though it’s not your chosen field—I mean, are there tricks you could use to test your thesis or make sure you’re not suppressing alternative versions?

KAHNEMAN: Well, I mean, very likely you are suppressing alternative versions, and that may not be a bad thing. I mean, it’s not—you probably want to check with other people because this is not something that you yourself are likely to see. I mean, we—because of that process of inhibiting alternatives, we tend to go with one option.

MURRAY: There was a question in the back. Yes, right there.

Q: Robert Klitzman from Columbia University. Thank you so much.

A lot of voters in the last election, I think, when faced with the complexities of the modern world, relied on System 1 thinking and looked for simple solutions: let’s blame these people, those people, et cetera, et cetera, and many voted for Trump. And I’m wondering how we might address that. In other words, is there better messaging—especially with social media, which focuses, I think, on sort of short answers that may not be correct but that appeal intuitively? Are there ways that we might address that better than we’re doing? Thank you.

KAHNEMAN: No, I’m not sure that this was special to this election. I think that what was different in this election were the candidates. But the process of making decisions on an emotional and simplified basis, that I think is true for every election. So I’m not even sure that this election was very different from others.

And what can be done about it? You know, I don’t know who would be doing the doing, you know. (Laughter.) Who would we want to be doing something about it? I can’t—it’s very difficult to imagine an alternative.

MURRAY: Is there—

KAHNEMAN: I would have liked, personally, a system in which voting is compulsory, which it is in some countries, so that the default is that you vote. And once the default is that you vote, it changes the character of it. I think compulsory voting would encourage deliberate voting. This is—you know, this is a hunch that I have.

MURRAY: Why? Why is that? Why would compulsory voting encourage deliberative voting?

KAHNEMAN: Because you have to make a choice.

MURRAY: It’s not a passionate—

KAHNEMAN: At the moment, you know, people are either very involved and they know what they want, and so they participate in the system, and the others don’t participate. But if you really have to do it, I think that would be a very good thing, in part because the population of nonvoters is really very poorly represented in the system. And you would probably have a very different political system if everybody voted.

MURRAY: Do you think there’s a role for education here to help people be more conscious of this battle between System 1 and System 2 and how to balance it?

KAHNEMAN: Well, I’m not very optimistic that—you know, it—look, it’s more in the culture, I think, than in the educational system. I mean, does the culture encourage or approve of deliberation, or does it actually approve of gut decision-making? And my sense is that actually people want to be led by a leader who’s intuitive; maybe not quite as intuitive as—(laughter)—

Q: Hi, Doctor. Andy Russell. I’m a student of psychology and business and have spent the past 18 years building companies in digital media, social media. And actually your firm, TGG Group, is an investor in one of my companies. And I’ve recently built software that kind of uses your biases and everything I’ve studied about you to help people collaborate around the world to do better things.

Given what you know, having been the author of this entire science, and what we all know about big data and what’s available through social media and how easy it is to either influence or manipulate people’s decision-making, I believe we’re living in a dangerous time. And I’d like to hear your thoughts on that.

KAHNEMAN: I think we can all agree that we’re living in dangerous times. I’m not—I don’t know enough about social media and, you know, how this has really changed life. One has the sense that it has made a very big difference, but—

MURRAY: You don’t have a Twitter account?

KAHNEMAN: I think—(laughter)—I do—

MURRAY: You’re not an active user—

KAHNEMAN: No, I’m obviously not.

MURRAY: —of your Twitter account. (Laughter.)

Yes, right here, and then over here.

Q: Hi. Jove Oliver with Oliver Global.

When I was in grad school, the buzzword was sort of interdisciplinary. And I recently heard the director of the Santa Fe Institute talk about anti-disciplinary. So I was wondering, as someone who’s got something to say about that, do you think that college campuses are a bit too siloed these days?

KAHNEMAN: That’s not my impression, really. I mean, you know, I think there is a lot of—at least there was at Princeton. My sense was not that people were trained to be very narrow. Graduate schools tend to be highly specialized, and that is largely because of the way the job market has evolved. There is a lot of competition and people have to publish a lot while they’re graduate students.

I would say that in the better universities, undergraduate training is not overly siloed. At least that’s my impression.

MURRAY: Right here.

Q: (Off mic)—JPMorgan.

Among the many areas that your work with Amos impacted, public health is the one that surprised me the most, especially in developing evidence-based medicine. Were you surprised to see how far it reached? And how do you foresee medicine, health care, and public health in general incorporating more and more of your ideas?

KAHNEMAN: Well, we were very surprised. You know, our work had a lot more impact than we ever thought it was going to have. Clearly, you know, when you look at a development like that, clearly there was readiness on the part of—there was an audience waiting for something like that. And we happened to arrive at the right time and with a message that was easy to assimilate.

And it’s very clear that many developments in terms of evidence-based medicine, evidence-based everything, are very compatible with the message of, you know, behavioral economics and with the message of the kind of psychology that Tversky and I were doing.

Clearly—and this is happening, and it’s encountering resistance because of the appeal of intuition that we were talking about earlier. So evidence-based medicine is not having a very smooth run. I mean, there is a lot of opposition to it, and it’s quite interesting to see where it comes from.

I believe that eventually, you know, truth will out, and eventually evidence-based medicine will be accepted and we will know. It will be accompanied by knowing when we can trust intuitions. I mean, this is really the key issue: we don’t want to give up intuition. We don’t want only evidence-based medicine.

And one of the real dangers of evidence-based approaches, algorithms and so on, is that experts will be discouraged and expertise will wither. And how to find the balance is going to be a serious challenge, I think, because eventually I see evidence-based everything taking over. And yet there will be a time at which this will have to be resisted, because we are going to be losing something.

MURRAY: Interesting.

We had an interesting conversation in the back room about public-policy applications: the increasing use of nudges, of public-policy mechanisms that don’t take away your ability to choose but push you in a certain direction, understanding the psychology of the decision—an example being pension opt-outs instead of opt-ins for savings.

KAHNEMAN: Yeah.

MURRAY: Can you talk a little bit about the power of that?

KAHNEMAN: Well, yes. I should make clear, all this line of thinking comes from Richard Thaler himself. It’s not something in which I had a hand—I’m a disciple of this, and certainly very enthusiastic about it. But the best example, and it’s an early example, is something that Dick Thaler and a student of his, Shlomo Benartzi, called Save More Tomorrow. That’s the plan. And the plan is offered by an organization to its employees.

And the idea of the plan is that you don’t increase your saving right away. Your saving will increase automatically, and actually by a fairly large number—by 3 percent of your salary, not of the increase—the next time you get an increase in pay. And it will go on increasing every time you get a raise until you stop it.

That is a very, very clever use of psychology. And it was done entirely by economists. But what is clever about it is that it avoids losing anything. There’s no loss. There’s no sacrifice. It’s a foregone gain. Furthermore, it’s not an immediate foregone gain. It will happen later. And it will happen at a happy moment, and you’ll barely be aware of it.

And then procrastination, which normally is a terrible force against saving, in this case encourages saving more and more, until you find yourself saving too much and you stop it.

MURRAY: (Laughs.)

KAHNEMAN: So this is—this was really a brilliancy, I thought, and extremely effective. I mean, in the original application, I think it raised the saving rate in an organization from 3 percent to 11 percent. So those are not small effects. Those are enormous effects.
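(A minimal sketch of the Save More Tomorrow mechanics just described, assuming a 3 percent step at each raise and the 11 percent ceiling mentioned above; the real plan’s parameters are assumptions here and varied by employer.)

```python
# A sketch of the Save More Tomorrow escalation schedule. The 3 percent
# step and 11 percent cap echo the figures in the conversation; actual
# plan parameters are assumptions and varied by employer.
def save_more_tomorrow(initial_rate=0.03, step=0.03, cap=0.11, raises=5):
    """Return the contribution rate after each successive pay raise,
    stepping up at every raise until the cap (or an opt-out) stops it."""
    rates = [initial_rate]
    for _ in range(raises):
        rates.append(min(rates[-1] + step, cap))
    return rates

print(save_more_tomorrow())
# [0.03, 0.06, 0.09, 0.11, 0.11, 0.11]
```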

MURRAY: Yes, right there.

Q: Eva Giappagin (ph), World Bank.

Most of the decisions that we make are based upon a reference point, or beliefs, or the subconscious mind, right? I’m interested in people in resource-poor settings who don’t have education, who don’t have skills, who don’t think they are capable. How do we change their subconscious mind?

MURRAY: That’s a small question for you.

KAHNEMAN: Yeah.

MURRAY: (Laughs.)

KAHNEMAN: I mean, the answer, how do we change their subconscious mind, is easy. We don’t. I mean, you know, we can’t. The only thing that we can do is change the context in which people make their decisions. And if we change the context, which Richard Thaler and Cass Sunstein have called choice architecture—that is, the set-up in which people make their decisions—with the same minds, conscious and unconscious, people can be led to make better decisions, decisions that are more in their own interest and, you know, that make more sense.

It won’t happen by influencing the people themselves. That’s not the way to do it, really. The way to do it is to change the environment in which people live and the environment in which decisions are made. And that can be done.

MURRAY: Right here, Consuelo, and then right there.

Q: Alan, thanks, number one, for a lovely interview. It’s really great to see you.

MURRAY: Thank you.

Q: And Dr. Kahneman, thank you so much for being here as well. My name is Consuelo Mack. I work at “WealthTrack,” my show on public television. And I’m very intrigued by your—the premortem concept and that you’re very optimistic about corporations with systems and processes being able to make better decisions.

There is a current working assumption that the more voices you bring to a corporate table, the better the business outcome will be, and specifically the more women you bring to decision-making and the more minorities you bring to decision-making. What’s your view—what is your thought about that? Does diversity make for better decisions?

KAHNEMAN: Well, I really have no expertise in that. And I’m relying not even on primary sources but on secondary sources. So what I understand is the case is that there is an optimum level of diversity. When you have too little diversity, it’s not very good. And when you have too much, it’s paralyzing. And so there is an optimum to be sought.

As for men and women, I really don’t know the research. You know, I have hunches, like everyone else. And they’re not worth more than anyone else’s hunches. My guess is that—my guess is that this is very salutary, because I really believe there are differences in orientation and that this kind of diversity is very—is going to be useful. But it’s just an opinion. It really—there’s no expertise behind it.

MURRAY: Well, although, I mean, you’ve done a lot of fascinating experiments over your career. You must have looked at gender differences in the course of those experiments. What did you—what did you discover about—

KAHNEMAN: Well—

MURRAY: —gender differences that you’re willing to share with us? (Laughter.)

KAHNEMAN: Well, I—I mean, my main confession is I’ve never been very interested. And my sense is that within the kind of thing that we did—

MURRAY: Not that big.

KAHNEMAN: —if it had been very large, we would have known about it.

MURRAY: Yeah.

KAHNEMAN: We never—we never—

MURRAY: It never jumped out at you.

KAHNEMAN: We never—no, it never jumped out. And, you know, there were the questions. There has been one very substantial case, which nobody can explain, of a difference favoring men. I don’t know how many of you are familiar with the one-item test that turns out to be very good: the bat-and-ball puzzle. How many people know the bat-and-ball puzzle? Oh, OK. It’s better known in very young audiences.

A bat and a ball together cost $1.10. The bat costs a dollar more than the ball. How much does the ball cost? Now, what’s interesting about this puzzle is that it’s about System 1 and System 2. It actually was devised in that context by a post-doc of mine, Shane Frederick. And everybody has an idea. It comes to everybody’s mind that it’s 10 cents. But 10 cents is wrong, because then the bat would be $1.10, and together that’s $1.20. So the correct answer is five cents.
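(Written out the deliberate, System 2 way, the algebra is below; x is the price of the ball.)

```latex
% Bat-and-ball, solved deliberately: let x be the ball's price,
% so the bat costs x + 1.00 and the pair costs 1.10.
\begin{align*}
  (x + 1.00) + x &= 1.10 \\
  2x &= 0.10 \\
  x &= 0.05
\end{align*}
% The ball costs five cents; the bat, $1.05.
```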

What’s interesting is that the majority of Harvard students fail that test, and MIT and—I mean, so very, very smart people fail it. Now, why on earth did I—

MURRAY: We were talking gender differences.

KAHNEMAN: Yeah. On this problem, there is a gender difference.

MURRAY: Really?

KAHNEMAN: Yeah. Men do better than women. And it’s not a small effect either. It’s fairly large. Now, my wife, who had a National Medal of Science—she claims that this is a kind of puzzle that only men would be interested in. (Laughter.) So, you know, that was—

MURRAY: She is a wise woman. (Laughs.)

KAHNEMAN: By and large, no, we haven’t found much.

MURRAY: Go ahead, right here. And I’m going to go over here and then come back to you.

Q: Hi. I’m Jonathan Tepperman. I’m the managing editor of Foreign Affairs Magazine.

Two things. First, if you’d ever like to write for us, we would love to have you, please.

MURRAY: (Clears throat.)

Q: I can give you my card afterward.

MURRAY: It’s just one question and no—must be pithy. (Laughs.)

Q: That wasn’t a question. It was a proposal.

The question is as follows. What’s the big-picture implication of your work for high-level foreign-policy decision-making, or policy decision-making in general? You know, if you look over the last four presidential administrations here, we’ve seen a range from an extremely organized, deliberative style under President Obama to a more disorganized, deliberative style under President Clinton to an organized, impulsive style under President Bush, to a disorganized, impulsive style under President Trump.

Is there a happy medium somewhere in there that you see? Thank you.

KAHNEMAN: Well, you know, there’s probably more than one happy medium. And it really depends on what you’re looking for. In terms of what the public is looking for, the voting public, I think Obama was too deliberative. And I think it cost him politically. You know, that’s just my impression—that actually people want a leader who is more—it’s very odd. They want a leader who knows, you know, who knows immediately, who has intuitions. And deliberation is costly in terms of the public’s confidence in the quality of the decisions. So there is an optimum there.

Then, if you look for an optimum in terms of the quality of the decisions that are ultimately made, then, you know, my bias would be to say that there isn’t too much deliberation when you’re dealing with issues of peace and war. But the politics may lead to different places.

MURRAY: Right here on the aisle, and then here.

Q: (Daniel has ?) a lot of implications for investment committees and investment decision-making, like the first person to speak—(audio break)—across cultures. You know, do these dynamics work differently in an Asian culture versus a Western culture, that sort of thing?

KAHNEMAN: I have no expertise. I don’t know—(audio break)—is we don’t know more. I mean, I—again, this is the kind of thing where I suppose that if there had been very clear answers, somebody would have told me. But—(laughter)—so I’m inclined to guess that there are no very clear answers.

I know that some features of—some basic features of decision-making, like loss aversion, are very general. I think there are large differences in other features of decision-making, like optimism. There are more optimistic cultures, very clearly, and others where pessimism is considered more intelligent. So—

MURRAY: And you see—and that’s a deep difference.

KAHNEMAN: That is a deep difference, I think. I think that’s a deep difference.

MURRAY: Yeah.

KAHNEMAN: And it has a lot to do with—my guess is that being unrealistically optimistic is very good for entrepreneurship and so on. I mean, the dynamics of capitalism require a lack of realism, I think.

MURRAY: (Laughs.)

KAHNEMAN: So—

MURRAY: Right here.

Q: Hi. I’m Jack Rosenthal, retired from The New York Times.

I wonder if you’d be willing to talk a bit about the undoing idea and whether it’s relevant in the extreme to things like climate denial.

KAHNEMAN: Well, I mean, the undoing idea—the Undoing Project—is the name of a book that Michael Lewis wrote about Amos Tversky and me. But it originally was a project that I engaged in primarily, trying to think about how people construct alternatives to reality.

And my interest in this was prompted by a tragedy in my family. A nephew in the Israeli air force was killed. And I was very struck by the fact that people kept saying “if only.” And that “if only” has rules to it. We don’t just complete “if only” in every which way. There are certain things that you use. So I was interested in counterfactuals. And this is the Undoing Project.

Climate denial, I think, is not necessarily related to the Undoing Project. It’s very powerful, clearly. You know, the psychology of climate denial is anchored in something elementary. It’s very basic. And it’s going to be extremely difficult to overcome.

MURRAY: When you say it’s elementary, can you elaborate a little bit?

KAHNEMAN: Well, whether people believe or do not believe is one issue. And people believe in climate change or don’t believe in climate change not because of the scientific evidence. And we really ought to get rid of the idea that scientific evidence has much to do with people’s beliefs. I mean, there is—

MURRAY: Is that a general comment, or in the case of climate?

KAHNEMAN: Yeah, it’s a general comment.

MURRAY: (Laughs.)

KAHNEMAN: I think it’s a general comment. I mean, there is—the correlation between attitude to gay marriage and belief in climate change is just too high to be explained by, you know—

MURRAY: Science.

KAHNEMAN: —by science. And clearly people’s beliefs about climate change and about other things are primarily determined by socialization. We believe in things that people we trust and love believe in. And that, by the way, is certainly true of my belief in climate change. I believe in climate change because, you know, the National Academy says there’s climate change, but—

MURRAY: They’re your people.

KAHNEMAN: They’re my people.

MURRAY: (Laughs.)

KAHNEMAN: But other people—you know, they’re not everybody’s people. And so this, I think, is a very basic part of it: Where do beliefs come from? And the other part of it is that climate change is really the kind of threat that we as humans have not evolved to cope with. It’s too distant. It’s too remote. It just is not the kind of urgent, mobilizing thing. If there were a meteor, you know, coming to earth, even in 50 years, it would be completely different. People, you know, could imagine that. It would be concrete. It would be specific. You could mobilize humanity against the meteor. Climate change is different. And it’s much, much harder, I think.

MURRAY: Yes, sir, right here.

Q: Nise Aghwa (ph) of Pace University.

Even if you believe in evidence-based science, frequently—whether it’s in medicine, finance, or economics—the power of the tests is so weak that you have to rely on System 1, on your intuition, to make a decision. How do you bridge that gap?

KAHNEMAN: Well, you know, if a decision must be made, you’re going to make it in the best way possible. And under some time pressure, there’s no time for deliberation. You just must do, you know, what you can do. That happens a lot.

If there is time to reflect, then in many situations, even when the evidence is incomplete, reflection might pay off. But this is very specific. As I was saying earlier, there are domains where we can trust our intuitions, and there are domains where we really shouldn’t. And one of the problems is that we don’t know subjectively which is which. I mean, this is where some science and some knowledge has to come in from the outside.

MURRAY: But it did sound like you were saying earlier that the intuition works better in areas where you have a great deal of expertise—

KAHNEMAN: Yes.

MURRAY: —and expertise.

KAHNEMAN: But we have powerful intuitions in other areas as well. And that’s the problem. The real problem—and we mentioned overconfidence earlier—is that our subjective confidence is not a very good indication of accuracy. I mean, that’s just empirical. When you look at the correlation between subjective confidence and accuracy, it is not sufficiently high. And that creates a problem.

MURRAY: Yes, sir, right here.

Q: Thank you, Alan.

Professor Kahneman, very nice to see you again. Daniel Arbess.

I’m going to try to articulate this question back to Alan’s original question about algorithms and behavioral economics. Say you have a kid who’s at business school and he’s trying to figure out—he wants to be involved in the investment business. He’s trying to figure out: should he follow systematic, data-driven investing, algorithmic investing, or should he, like his father is telling him, learn about the world, learn about human behavior? (Laughter.) He’s studying behavioral economics but he’s being drawn to data analytics.

So I want to go back to Alan’s question, because I’m not sure I completely caught your answer as to how we’re going to reconcile increasing dependence on data analytics with the fact that the decisions that come out of data analytics are only as good as the data. If the data set is incomplete and we’re faced with a different scenario while relying on these systematic strategies, what will that produce?

KAHNEMAN: Well, the question is what the alternative is. And, you know, it’s easy to come up with scenarios in which big data would lead you astray because something is happening that is not covered in the data. The question is whether intuition or common sense is, in those cases, a very good alternative. I don’t think there is a general answer to this question.

MURRAY: But you did say earlier that you thought big data, artificial intelligence, will improve the quality of decision-making.

KAHNEMAN: I think it’s coming. I think in every domain, you know, we can see it happening. It’s going to take a while. But there is no obvious reason to think that it will stop at any particular point.

MURRAY: So, in general, better decisions will be made 30 years from now than are today.

KAHNEMAN: Well, yes. I think the world will have changed so much because of artificial intelligence, and it’s—but in many domains, yes, better decisions will be made.

MURRAY: I have one last question. You’re going to get it, but that puts a big burden on you. This has to be a gripping question.

Q: I’ll try to get a grip.

Would you be willing to speculate on the trajectory of artificial intelligence and machine learning and its social impact? Are we going to end up with Ray Kurzweil’s singularity, or are we going to end up, as Arthur Koestler once speculated, lucky if they keep us as pets? (Laughter.)

MURRAY: And before he answers that, would you identify yourself and your affiliation?

Q: Yes. Phil Hike (ph), Insight LSE (ph), which is a fuel-cell technology development company.

MURRAY: Thank you.

KAHNEMAN: Well, you know, you can only guess and speculate about that. The movement—what is very striking, at least to an outsider, about AI is the speed at which things are happening. And I mentioned the Go championship earlier. And perhaps the most striking thing about the Go championship was that, six months before it happened, people thought it was going to take 10 years. And things like that are happening all over the place, I mean, especially through deep learning.

So what it’s going to do, it’s going to change the world before the singularity. The singularity is when there will be an artificial intelligence which can design an artificial intelligence that is even more intelligent than it is. And then, you know, the idea is things could explode out of control very quickly. But long before any singularity, our world will change because of artificial intelligence.

Now, you know, some economists say that we’ve been there before—technological change does not really cause unemployment. Pessimists—and I am one—tend to think that this is different. And I think many people think that this is different, in part because of speed, and in part because of the kind of social changes that are going to occur when you create what Yuval Harari calls superfluous people, for whom really there is nothing to do.

This could be happening within the next few decades, and it’s going to change the world to, you know, an extent that we can’t imagine. And you don’t need singularity for that. You need a set of advances that are localized. And we can see those happening. So, you know, obviously self-driving cars, you know, that’s just one example. But, you know, it’s going to happen in medicine, in law, in business decisions. It’s hard to see where it stops.

MURRAY: Dr. Kahneman, I’m sure everybody in this room would agree that this was an hour very well spent; a round of applause. (Applause.)

It’s a coffee break, and then back in here at 2:15.

(END)

Session II: Behavioral Insights into Policymaking

Experts discuss behavioral insights into policymaking.

KARABELL: So welcome to the second part of our discussion for the Robert Menschel Economics Symposium. This is more of a session where we flesh out some of what we heard from Daniel Kahneman.

You’ve heard the sort of Council on Foreign Relations dos and don’ts. This is on the record. I know there were many things that might have been said about behavioral economics off the record had it been so, but unfortunately everything we say will in fact be on the record. So that deeply constrains our conversation, but so be it.

We’re also being live streamed to whomever out there in the greater—as said, to whomever out there—(laughter)—in the greater ether, so welcome.

And we will obviously have questions at the end, which will also be on the record.

My name is Zachary Karabell; I’m head of global strategies at Envestnet, which is a publicly traded financial services firm. I’ve written a bit about economics and statistics and history. And I’m also a journalist and a commentator. So I am here moderating this, although I don’t have a background in behavioral economics.

And the three panelists’ bios are in your programs, as per CFR, so you can read more about their impressive credentials there. But just briefly: we have Jeff Liebman, who is the Malcolm Wiener professor of public policy at the Kennedy School at Harvard and served two stints in the U.S. government—in the Office of Management and Budget most recently, and in the White House in the waning years of the Clinton administration. To my left is Elspeth Kirkman, who is senior vice president for The Behavioural Insights Team, which is based in New York and does a variety of consultancy work on behavioral economics for policymakers, cities, and private foundations, and who previously did similar work in London. And joining us with our now-effective technology—we had a little bit of a glitch, but it’s now working—is Maya Shankar, who I think has a lot of experience with effective and defective technology, having worked at the White House Office of Science and Technology Policy, and who was the first head of the behavioral insights group in the White House that was mandated by President Obama. And I think we’ll hear a little bit about what the future, or lack thereof, is of that approach to policymaking in subsequent administrations.

So we’ve talked a bit about the general arc of behavioral economics and behavioral insights into policymaking. This is a recent phenomenon within the policy community, one that has gathered steam fairly quickly. I wouldn’t say—and obviously you all have much more experience in this—that it is deeply embedded in any developed-world bureaucracy, but it is becoming part of the understanding of how to make policy, both at a national level and perhaps—and Elspeth will talk about this—a little more deeply at the local level: how policy can tweak and nudge à la Richard Thaler, and how it can deal with all these issues of modeling and information bias and confirmation bias and all the things that Kahneman and Tversky and their academic children and successors have developed more fully.

The OECD recently put out, I think this year, a study of about a hundred cases of policy that have been informed by behavioral economics and behavioral insights, and of what the legacy of that is. It’s an interesting resource if you want to look at an assessment of this. But really, much of this has happened in the past few years, not even the past decade.

So, as Zhou Enlai apocryphally—or actually—said to Kissinger when asked about the French Revolution, we can say about the efficacy of behavioral economics and behavioral insights in policymaking: probably too soon to tell. Although I hope we don’t have to wait, you know, 200 years before we figure it out.

So first to Professor Liebman—I can call you Professor Liebman now that you’re at the Kennedy School, professor of the practice, professor of economics. You were in government in the late ’90s, and you were in government more recently in the teens. How did that landscape change in terms of how behavioral insights and behavioral economics were actually used as an applied tool for policymaking?

LIEBMAN: So I think it was really completely different between the Clinton and the Obama administrations, not because Obama was somehow cooler or because he happened to know Richard Thaler from the University of Chicago, which he did, but simply that the advance in behavioral economics was so great in the interim.

You know, if you went back to the late ’90s, behavioral economics was mostly a collection of anomalies. We had a bunch of examples of places where people behaved in ways that they weren’t supposed to behave according to our models, and it was sort of cute and interesting and curious. And what happened between, I would say, the mid-to-late ’90s and, say, 2007 or ’08, when I was going back into government, was that behavioral economics evolved into a science where we can make predictions. In certain circumstances, we can predict how people will behave pretty accurately, because we’ve seen the same things happen over and over again.

And we’d also seen a bunch of situations where people or firms had taken the insights of behavioral economics and actually produced much better results; as came up in the earlier session, the example of firms defaulting people into defined-contribution pension plans was the strongest one. By the time we were starting work at the beginning of the Obama administration, all of the economists on that team were well aware of lessons from behavioral economics. And I can’t think of anything important that we did, certainly in the two years I was there, where we weren’t incorporating those insights in some way, whether it was designing the Recovery Act or designing the Affordable Care Act or thinking about how to further encourage broad retirement savings for Americans. And so it was a completely different environment, but, again, not because there was something different about us; it was that the science had not been ready to be used by policymakers earlier.

KARABELL: So, if you had used some of these tools in the late ’90s, do you think there would have been different policy outcomes, better policy outcomes?

LIEBMAN: Let me give you one concrete example. Late in the Clinton years, the budget surpluses emerged for the first time, and that was a remarkable event. And we were trying to think about how one could use them, and one of the problems we were trying to work on was retirement income security and the problem that something like a third of Americans when they get to their late 50s have essentially no retirement savings.

And the policy remedy we came up with, in I guess it was either ’98 or ’99, was that we should set up a way to match the savings of low-income taxpayers so that they would have the same kind of incentives to save that higher-income taxpayers who work in firms with very generous 401(k) plans have. That policy, by the way, never got passed, but that was the solution we were working on. And later research, including a randomized experiment conducted a few years after that by a team I was part of, suggested that maybe, if you worked really hard and really matched savings for low-income people, you could raise the number of people making savings in a given year from 4 or 5 percent to 15 percent. And so, in that era, that was the policy response.

By the time we got to the Obama administration, we knew that defaults could get people, up to 70 or 80 percent of the people, saving. And this idea of matching people’s savings is, by the way, expensive from a budgetary standpoint; you have to raise more revenue to go out and do the subsidies. So it was clear there was a much better option, and, by the way, a cheaper option, that would probably have four times the impact. And so you can just see that completely different policy options were before us because of the learning that had gone on during the decade in between.

KARABELL: So I want to turn to Maya now. By the way, with all the discussion of artificial intelligence and the ghost in the machine, I know you’re there and I know you’re a real person, but I feel like this is what it’s going to be like if we’re actually talking to an artificial intelligence interviewee on a panel years from now.

So you joined the White House, you’re in the Office of Science and Technology Policy, and then, I guess in 2015, the use of behavioral insights and behavioral economics becomes sort of codified: this will now be part of the policymaking process. Maybe talk about how that came about. Or, you know, I know that there certainly was Cass Sunstein initially in the White House, trying to apply some of the nudge ideas in practice, but maybe talk about how that became ever more a part of the policymaking process.

SHANKAR: Yeah, absolutely. So I joined the Obama White House at the beginning of the second term. And by that point, Cass Sunstein had served as administrator of the Office of Information and Regulatory Affairs for several years. And he really brought a unique lens to that particular role by looking for applications of behavioral science to policy.

So I had seen already a strong precedent for these insights being successfully applied to public policy, and there were also a number of government agencies, like the Department of Labor, the Department of Health and Human Services, the Department of Education, who had all also successfully applied behavioral science—(audio break)—lunches or trying to think about how to devise student loan—(audio break).

And so I think why I joined was to try to institutionalize this work so that we were applying behavioral science to policy in a systematic way that involved rigorous experimentation, in ways where we could quantify the impact of our applications and figure out what was working, what wasn’t, and what was working best. And so I made it my goal to create a team really modeled off of the U.K.’s Behavioural Insights Team because I had seen it obviously be quite successful in that institutional form.

And so we pulled together a team. And in our first year, I wouldn’t say we necessarily had a sunset clause like, you know, the U.K. Behavioural Insights Team, but I would say we were certainly hanging by a thread when it came to having an identity within the federal government. So we really had to prove that this stuff is valuable.

So, in the first year, we developed about 12 pilots with government agencies. Each of them involved a randomized controlled trial so that we could quantify impact. And I think that those wins helped solidify the importance of this work as something that, you know, leadership in the White House should take very seriously and should try to institutionalize.

So, by the time 2015 came around, armed with these successes, President Obama signed an executive order that not only institutionalized our team, but also issued a broader directive to federal agencies to use behavioral science and rigorous evaluation methods as a matter of course, as a matter of routine practice within all of their operations.
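To make the “quantify impact” step concrete, here is a minimal sketch, in Python, of how a two-arm pilot like the ones described above could be evaluated. Everything in it is illustrative: the counts are invented, and the textbook two-proportion z-test stands in for whatever analysis the teams actually ran.

```python
# Illustrative only: quantifying a two-arm randomized controlled trial
# of the kind described above. All numbers are invented.
import math

def two_proportion_ztest(hits_control, n_control, hits_treat, n_treat):
    """Estimate the lift in take-up and test it against no effect."""
    p_c = hits_control / n_control
    p_t = hits_treat / n_treat
    # Pooled rate under the null hypothesis that the nudge did nothing.
    p_pool = (hits_control + hits_treat) / (n_control + n_treat)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_treat))
    z = (p_t - p_c) / se
    # Two-sided p-value from the normal approximation.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_t - p_c, z, p_value

# Hypothetical pilot: 5,000 people per arm, 10% vs. 12% take-up.
lift, z, p = two_proportion_ztest(500, 5000, 600, 5000)
print(f"lift = {lift:.1%}, z = {z:.2f}, p = {p:.4f}")
```

With these invented counts, a two-percentage-point lift comes out highly significant (z of about 3.2), which is the sense in which a dozen small pilots can “prove this stuff is valuable.”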

KARABELL: So you talk about a bunch of test cases and successes. Maybe what are a couple of those specifically?

SHANKAR: Yeah. So one example was we were trying to get veterans to take advantage of employment and educational benefits that they had earned through their years of service abroad. And typically, the VA relied on word-of-mouth practice or other methods of communication, but there was very low take-up.

And so what we did is this: they had an outbound email message that was about to go out to veterans saying that these veterans were eligible for the benefit. And we just changed one word in the email. We said you’ve actually earned the benefit, because you’ve been in service for many, many years, and we’d like to basically give you these benefits as a way of acknowledging your service. And we found that that one-word change led to a 9 percent increase in access to the benefit. So that was one of the quicker wins.

We also in the area of government efficiency tried to get more vendors to honestly report their sales to the federal government because they had to pay a fraction of that amount in fees, in administrative fees. And so, because typically this relied on self-report, we were finding that vendors were often underreporting their sales. So we changed the form to include an honesty prompt at the top of the form where vendors had to acknowledge up front that what they were about to fill out was truthful and comprehensive. And we know from behavioral economics research that if you require people to sign it at the bottom of the form it’s too late because they’ve basically already lied and they’re not going to go back and change the values. (Laughter.) But if you had them sign up front before they filled out all the information, they’re sort of primed for this honest mind-set.

And just introducing the signature box at the top of the form led to a $1.6 million increase in collected fees in just a three-month period. So, if we scaled it up, which we did, and the effect persisted, that would be about $10 million of revenue for the federal government just because of a signature box at the top of an electronic form.

And then finally I’ll just give one other example, because Jeff mentioned retirement security. Prior to 2018, when new legislation will kick in that automatically enrolls new military recruits, military service members have not been automatically enrolled into retirement savings plans in the way that civilian employees have been. And so we introduced an active choice at military bases. When service members were coming to orientation, they were already filling out paperwork, already doing drug and alcohol abuse counseling, they had a bunch of things to sign, and we basically slipped in a form around the Thrift Savings Plan, which is the federal government’s workplace savings plan, with a quick lecture as part of it. And as part of orientation requirements, the service members were required to select yes or no on saving for retirement. And we found that this active choice prompt led to a roughly 9 percentage point increase in sign-up rates.

And so that was another example where sometimes the ideal proposal, in this case automatic enrollment, isn’t going to be possible for some time, so behavioral economics provides interim tools that we can use that preserve freedom of choice but are also aggressive enough to have an impact. And that’s what we leveraged.

KARABELL: Fascinating. And in order to do this session, I had to sign a form for CFR, a release form saying that it was OK to be on the record. The signature was at the bottom of the page, but it was before the session, so I don’t know which of those takes precedence. (Laughter.)

SHANKAR: Well, I signed a form saying I wasn’t an AI right before then. I guess I’ve revealed that now.

KARABELL: That’s good to know. Although if you were, you’d probably, presumably, be able to fake it. (Laughter.)

So, Elspeth, you were nicely teed up before, with the work of The Behavioural Insights Team described as a sort of leader in this space. How did you get into that? Why did the British government initially underwrite this? And is there a cultural difference in how behavioral economics is applied in policymaking? Or are these tools really the same, and it doesn’t matter whether it’s Westminster or the White House or France, it’s the same set of tools, just different particular problems?

KIRKMAN: Sure. So, on the first part about how we kind of came to be, it was very much a kind of flagship idea and a flagship team within the coalition government that was brought in in 2010, and I think similar to Maya’s experience. I should tell everyone as well that Maya is on a screen, weirdly kind of directly in front of me, so I’m kind of talking to her. (Laughter.)

So, similar to Maya’s experience, I think everybody thought that we were a little bit quirky, and that’s maybe a generous way of putting it. And we had the sunset clause, which was that we had to basically recoup a return on investment greater than the cost of the team in order to continue to survive. And maybe a sunset clause is just a smart way of hiding the fact that you’re hanging by a thread of credibility before you get good results.

So it was a time of austerity and there were all of these kind of measures around, you know, no increases to headcount, freeze on government spending, all of this kind of stuff. And so the very opportunistic, simple thing that we were able to do was to apply this stuff almost immediately to raising revenues for government. So a lot of our kind of flagship initial work was with the Revenue and Customs Agency. And we’ve got some really simple now, pretty kind of tried-and-tested, well-rehearsed examples of things that we’ve changed, simple lines, for example, on tax letters to collect delinquent payments, telling people nine out of 10 people in the U.K. pay their tax on time. Just that simple line really kind of makes a very big difference in terms of how much people pay.

And the small tweaks like that that we’ve made to tax letters, again, evaluated through experimentation, randomized evaluations, over the course of one fiscal year they added up to a couple of hundred million pounds in additional revenue brought forward, which kind of grabs people’s attention in a time of austerity and makes them think, OK, maybe I thought you were, you know, slightly quirky, bizarre, a little joke outfit, but maybe I’d like a piece of that for my policy area as well.

And then actually, that point on how this translates and whether there are kind of differences, I think for me the main differences are about how you kind of frame this and position it within the agenda of a particular administration or a particular kind of set of government services and departments. But in terms of the insights themselves and this idea of, you know, humans being wired to make these sort of very predictable kind of shortcuts in the way that we make decisions, a lot of that translates very well. So the idea of social norms, telling people nine out of 10 people pay their tax on time, the reason that works is that we’re all wired to think, OK, I can’t make every single little micro-decision I’m faced with every day, so a really good kind of substitute for me making the decision is to just look at what other people are doing and just follow that. And we like to think that only kind of 13-year-olds use that logic, but actually all of us adults do that.

And there are situations in which that works and in which it doesn’t. But we see pretty consistently that when there’s a gap between people’s expectation of what other people are doing, whether or not they’re conforming, for example in paying tax, and the actual behavior of other people, and you tell people about that gap, they’re very likely to start complying, whether it’s, you know, Singapore or Guatemala or the U.K. or the U.S. or all these other places where this has now been tried and tested.

KARABELL: Sort of a broad question, I think, for all of you to address, which stems a little out of a point that Jeff had made. There’s the quantifiable, right: we need to prove that these tools as applied to policy either save outlays of government money or generate more revenues in terms of what is taken in, or, as Maya talked about, allow us to actually distribute money that’s been allocated, or programs that have been put in place, but aren’t being well utilized. What about the nonquantifiable? I mean, is there an issue, if you go too far in the direction of having to prove the efficacy of this by numbers, that maybe you don’t get to, whether it’s policy as shaping animal spirits, right, people being confident, which you kind of know by behavior but don’t necessarily know by numbers, or just positive social outcomes? Or is that asking too much right now?

I mean, so maybe each of you could talk a bit about that. Is it asking too much to say, well, this is going to improve policy and social outcomes as opposed to we can prove by the numbers?

LIEBMAN: I think we’re seeing the insights from behavioral science and behavioral economics being used in both contexts. We’re seeing very specific A/B testing of different wording and seeing big impacts from that. But I really think we’re seeing big policy decisions being informed by these insights as well.

And just to give you another example, when we were designing the Affordable Care Act—sorry, I was going to do a different one—when we were doing the Recovery Act, we were trying to figure out how to get as much money as possible into people’s hands and get them to actually spend it rather than save it. And so one of the components was called the Making Work Pay tax credit. And the question was, how could we design a tax credit that would maximize the impact on aggregate demand, the amount of spending that people would do out of it? And so we did two things that were informed by behavioral science.

The first is we decided to make it a rebate against people’s payroll tax, so it was people getting their own money back again as opposed to some big bonus. Because we thought if people got told they got some big bonus, they might think this is a weird one-off thing, and in their mental accounting they might say, well, I’m going to save that for a rainy day or something. But we wanted it to feel like they were getting their own money back. So that was the first thing we did.

And then the second thing is we decided to have them get the money by adjusting the withholding tables in the tax schedule so that it would just show up in their weekly paychecks or their monthly paychecks and they wouldn’t even notice that they got it. And because so many people simply spend everything that comes in in their paychecks every month, that would maximize the extent to which it was spent rather than saved. And so we did that, and we think that was the right way to do it, to maximize the fraction of that tax cut that was actually spent.

Now, most people think that we committed political malpractice. Because it was hidden in the adjustment of the withholding table, the president didn’t get credit for sending checks to everyone. And so while I think we did the right thing on the economic side, many people think that as a, you know, a matter of giving the president credit for rescuing the economy, this was exactly the backwards way to do it. But I’m still, you know, proud that we did what was good for the economy on that one.

KARABELL: And, Elspeth, then we’ll go to Maya, what are your—

KIRKMAN: So I’m just enjoying political malpractice. (Laughter.)

So on the kind of—to the point about the sort of unmeasurables or the difficult things to kind of quantify, part of our kind of reason for being is to sort of fit into the existing policy environment and accomplish outcomes and, you know, measure the things that were already being measured and that already count in certain ways, so I think sometimes we have to be really smart about how we do that and find good proxies, for example, particularly if we’re looking at policies that may have sort of long-term effects.

So, for example, we might do some evaluation of something like body-worn cameras in policing and try to look at what the long-term outcomes are in terms of, you know, whether you get better, fairer policing, whether people’s relationship with the police becomes better, whether you get higher social trust. All those things take a long time to ripple through. But it would be very easy to neglect looking at other things that might ostensibly be influenced by wearing a camera, so we might look at things like police absenteeism or, you know, well-being scores or police staff engagement scores, which are clearly very predictive of whether they’re going to burn out, whether they’re going to have all sorts of other issues that end up costing the public purse quite a lot of money.

So I think, clearly, we do need to measure this stuff, and we can measure most of it if we’re smart about it and about the way that we approach things. And I think what’s, in some ways, quite prosaic and, in some ways, quite the opposite about this work is that we’re not grappling with a huge top-level issue; we’re not trying to wrap our arms around things like how do we reduce unemployment all in one go. We’re trying to really chip away and break down that problem and say, OK, we can’t change the direction of the entire economy, but maybe we could look at small inefficiencies, such as whether people looking for work are choosing to go to the recruitment events that are most likely to land them a job. So we’ve done a lot of work on this.

For example, in the U.K. we simply changed the language of a government-sent text message to job seekers, telling them that they could go to a recruitment event where they were actually very likely to get a job. Just changing the language, making it more personal and adding the line “I’ve booked you a place, good luck” from their job adviser, got people from a 10 percent show-up rate to a 26 percent show-up rate. And that’s this tiny little tweak on something where you would think, yeah, of course, you can’t muck around at the bottom level and change employment outcomes, but when people are actually landing in jobs as a result of it, it turns out that you very much can.
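One reason tweaks like this are cheap to evaluate is that an effect this large needs surprisingly few people to detect. Here is a hedged sketch using the standard two-proportion sample-size formula; the 10 percent and 26 percent rates come from the anecdote above, while the design parameters (5 percent two-sided significance, 80 percent power) are conventional assumptions, not details of the actual trial.

```python
# Illustrative power calculation: how many job seekers per arm a trial
# would need to detect a 10% -> 26% jump in show-up rates. Standard
# textbook formula; the trial's real design details are not public here.
import math

def n_per_arm(p_control, p_treatment, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm (5% two-sided alpha, 80% power)."""
    p_bar = (p_control + p_treatment) / 2
    a = z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
    b = z_beta * math.sqrt(p_control * (1 - p_control)
                           + p_treatment * (1 - p_treatment))
    return math.ceil((a + b) ** 2 / (p_control - p_treatment) ** 2)

print(n_per_arm(0.10, 0.26))  # on the order of 90 people per arm
```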

KARABELL: And, Maya, any thoughts on this sort of mix between what can be measured?

SHANKAR: Yeah. To add to those very good perspectives, I think there are also instances where it’s simply not appropriate to be measuring or testing outcomes.

So one example of this, which is a problem my teammates and I worked to tackle in our final year, was the water crisis in Flint, Michigan. And in a case of total emergency like that, you’re not going to be A/B testing messages on the ground, right? You’re going to be leveraging your best understanding of human behavior to design pamphlets that articulate information about water quality and water safety in the most effective ways possible, et cetera, et cetera. So I think, one, there are instances where it’s inappropriate to be measuring because you’re just trying to roll out the most effective messages to everyone.

And then on top of that, there are also behavioral elements that are just challenging to measure generally speaking. So, in the case of Flint, we’re trying to repair broken trust between citizens and their government because the government had lied to them about the quality of their water, and there had been elements of deception. And so in that sort of instance, you know, we’re working on both sides, both with residents of Flint and then also with government officials to try to figure out how we might repair some of these fissures.

And I think in those cases, it’s really a long-term process where you’re not going to repair trust overnight. It’s very hard to measure increases in trust. Any self-report is a notoriously poor indicator of how people are feeling. And so, in a case like that, we’re trying to leverage trust-building tools with the understanding that this is going to take, in some instances, you know, decades to fix, in the best-case scenario.

KARABELL: You know, it’s interesting about where it’s appropriate or not. I mean, this is also an issue of economic policymaking separate from the behavioral. So with the American Recovery Act, when President Obama stood up in February of 2009 and said this $787 billion bill will create or save 2.5 million jobs, the problem with that ultimately came from trying to create an absolutely formulaic connection between necessary emergency spending and quantifiable future outcomes. The spending was clearly necessary, but the fact that you had to then give a number created a liability, right? Because maybe it saved, maybe it didn’t, but in the time frame that was promised those jobs didn’t materialize in quite the way that was promised, as opposed to being able to say we need to spend a lot of money, things are really bad, and it’s going to take aggressive effort. So there’s always that issue of not only the appropriateness of the quantification, but the liability of quantifying something that maybe shouldn’t be, given that the action was necessary.

One last question, then we’ll turn it over to all of you and your questions—I think we’re going to go to about 3:20 because we started a few minutes late—and that’s the question of how embedded these rather new approaches to policy are. In many ways, they are progressive—and I don’t mean that in a political sense; small-p progressive—very similar to, you know, the creation of statistics as part of government policymaking during the 1930s. How embedded are they within various bureaucracies, such that different political parties and different political fortunes can or cannot stop the clock or interrupt this?

And, you know, there’s certainly been a rise in more nationalist governments, governments that are more—I mean, it may be unfair to say ideologically driven, given that most governments are ideologically driven, but that may not want the difficulty of constraining action by quantifiable measures. I mean, are these things that are going to survive current administrations in the U.K., in the United States, potentially in France, depending on what happens? Are they deeply embedded enough? Or is it really so new that in a few years we could be saying, wouldn’t it be nice if this had been part of our policymaking? So, in no particular order, your thoughts about that. Are you—

LIEBMAN: I’ll go first. I’m—

KIRKMAN: I want to—(inaudible). (Laughter.)

LIEBMAN: All right. (Laughs.) You’re not. No, I’ll go—I’ll go first. And I’ll say a few things.

And so, in some separate work, Elspeth and I are partners in crime in trying to help state and local governments use data and evidence more effectively. And I would say I mostly can’t tell the difference between working with red governments or blue governments. Everyone’s trying to make government more effective and use data and evidence. And, you know, people want to know whether we’re spending money well. I don’t find there to be a strong partisan difference in that.

And so certainly at the state and local level, I don’t think that there’s an issue. I think the movement toward using data, toward doing experimental evaluation and other evaluations, is just building. And we’re seeing a new generation of public servants who, you know, are young and have grown up in an era where they are more technologically savvy.

And actually, let me just give you a story. I was moderating a panel in the city of Boston about a new effort that they did, because when a firefighter is going to a fire, often the city knows something about that building; the hazard that actually caused the fire to happen is often something they got a building permit for. But the building permit information sat in a data system that wasn’t available to the firefighters on the way to the fire, and so they would show up, and they’d be blind, and they would find some hazard that they could’ve known about. So they fixed that problem. They got the data systems talking to each other so that now, when the firefighters are going to the fire, they see on an iPad everything the city knows about that property before they get there.

So I was moderating this panel, and there was a city official, and I said, you know, how did you procure that technology, the thing that makes the iPad show up with the map so you hit the property and suddenly you know all that stuff? And I see someone waving from the audience, gesturing; I couldn’t figure out what was going on. And what turned out to be the answer is that some young city employee built it with Google Maps. They weren’t even a computer science major, just some B.A. in some normal subject. And they were able to quickly do it and set up the technology. And so, I mean, this is sweeping through government in the same way, probably not quite as fast, that it’s been sweeping through business. And so I think the big picture is that these kinds of practices are just going to expand.

And I’ll also say, just on the curriculum at the Kennedy School, you know, the core curriculum has behavioral economics in it. That wouldn’t have been true 10 years ago.

KARABELL: You masterfully avoided talking about the federal government, so I’m going to ask Maya to opine on that. (Laughter.) That was extraordinarily diplomatic. Well done.

SHANKAR: Well, now that I’ve left, maybe I can give a more candid—(chuckles)—answer.

You know, I think we were mindful very early on that no matter how much appeal behavioral science should have for Republican, Democratic, or independent audiences, it would necessarily get attached to the fact that this was a Democratic president. And we wanted to try and make this as bipartisan an effort as possible, because I think we’ve seen tremendous enthusiasm from both sides of the aisle.

But I think with that perspective in mind, and especially my keen interest in seeing this initiative persist into future administrations, we made the conscious choice not to create the team of behavioral scientists within the White House. Instead, we had a more diffuse model where we baked behavioral scientists into agencies across the federal government, with a dedicated team within one government agency.

And I think that that has helped sort of root this in a more stable place. So some of these agencies are less susceptible to leadership changes in terms of the type of mission they have or the types of people they’re able to hire. And hopefully that helps. I mean, in some sense, we wanted to insulate it from the particular party that was in power because we felt that these techniques were just generally good for government, right? It led to more effectiveness and more efficiency irrespective of what the policy goals are.

I think at the time I could never have predicted just how significant a change would occur between President Obama and President Trump in terms of ideals and, again, the—I mean, there doesn’t even seem to be a science and technology office right now that exists within the White House, period. And so we definitely have some gratitude that we worked to change the minds of career civil servants who had been in the government for in some cases 30-plus years, who are continuing to work, and who have been trained in these tools and techniques.

KARABELL: Yeah. I mean, certainly, I guess part of the goal would be, like, various official statistical agencies, which at least until present are accepted as nonpartisan necessary features of most OECD governments, you’d kind of want that for behavioral economics. Do you see that happening let’s say in the U.K. or throughout other OECD countries?

KIRKMAN: Yeah, I think so. And another point that I would kind of—always kind of like to return to on this is that I think it’s very easy to lift the behavioral economics conversation into this sort of level of whether it fits with various different political ideologies and these kinds of things. But frankly, when we kind of embed it in departments, within organizations, a lot of the time, really, you’re not talking about things that need to be kind of dissected and discussed on a kind of ideological level. You’re talking about, are we all right with changing this form and then testing whether it works better or—

You know, and if you sometimes take a little bit of the exciting, shiny, headline-grabbing stuff out of it, really—there’s a really great story that a professor from the Rotman School in Canada tells, which I just love, where he was trying to work with a government and get them to change this form, and there were all of these blockers and barriers, and he gave up on the project. He rang someone up six months later and said, oh, did you ever change that form? They said, yeah, we just changed the form. He’s, like, wow, did you get everyone to sign off? And they said, no, no, the printer broke, and it turned out that it printed these specific dimensions, so we had to change the paper. And when we changed the paper, we just changed the form.

And it’s, like, you know, talking about it in terms of changing the paper and the form, totally noncontroversial; talking about it when you’re kind of discussing, oh, you know, ideologically, are we OK with behavioral science? So I think there’s also something pragmatic about, you know, if we’re not sort of egotistical about this and we just say, actually, we’re just trying to find out what works and, you know, we’re just trying to kind of make these small tweaks to the way that we’re delivering services, and over time they add up, then I think we can kind of, you know, walk back from that ledge a little bit.

KARABELL: Well, from the sublime to the mundane.

We have some time for questions, of course. Please identify yourself and—sir.

And I guess wait for the microphones which are converging on you.

Q: Jeff Shafer, JRShafer Insight.

I have a question that goes off in a little different area, about how governments behave rather than the people governments deal with, but I’m hoping you can shed some light on it anyway.

The most astounding thing that I have read out of the behavioral decision-making literature is that committees make lousier decisions than the people who make them up if they make their decisions in isolation. And I’m wondering, is that still kind of an accepted view within the—in the field? And if so, what kind of thinking is going on about how people can make collective decisions more effectively than they can in this groupthink environment of a normal committee?

KIRKMAN: Yeah, we’re weaker than the sum of our parts I think is the kind of summation of that.

Like all behavioral stuff, this really depends on context and on the way that you’re managing things. I’ll tell you one very straightforward thing that we did to try and mitigate against this internally, which is we use a method called thinkgroup, which is, you know, an inversion of groupthink. And the idea is that when we’re trying to make a decision by committee, whether that decision is on, you know, maybe what direction we should go in in terms of designing a new policy recommendation, or whether it surrounds performance appraisals and promotions and those kinds of things, you have a kind of shared document that everybody logs into. Everybody is incognito, so you don’t know, you know, if it’s the boss or if it’s the intern that’s contributing an idea. And you just engage, and you spend a silent half-hour laying down some baseline ideas that way.

And the benefits of this are that you get many, many more ideas than if the highest-paid person in the room just opens their mouth and everyone anchors to that. And you also get ideas that are then assessed based on merit and not influenced by other people. And people maybe say things that they either wouldn’t be comfortable saying otherwise or that would just get shouted down because they’re more introverted or, you know, they’re just not feeling like taking someone on that day or whatever it might be. So there’s a bit of a day-to-day example.

But I think we should absolutely guard against this. And we should be very thoughtful about the way that we make up groups making decisions because there’s also research that shows that who is in that committee clearly matters, and particularly when we’re thinking about homogenous groups and more diverse groups but also when we kind of diffuse those group dynamics.

KARABELL: Sir. And then we’ll go back there.

Q: Thank you. I’m Alex Jones. I’m associated with the DailyChatter, which is an international news daily digital email newsletter.

The former prime minister of Britain called for a referendum on Brexit with the expectation that it would fail. The new prime minister has now called for early elections with the stated purpose of bolstering the support for Brexit. To what degree do you infer that this behavioral-psychological-economic thinking was baked into the decision to call early elections?

KIRKMAN: I would love to answer this question, but I couldn’t possibly comment. It’s interesting and we’ll see how it plays out. (Laughter.)

LIEBMAN: But let me—I’ll try to save you. I have seen many examples where elected officials have exhibited the exact same behavior we run into when people, you know, are so present-biased that they don’t save: they worry about how to get through the next three weeks’ problem and by doing so create a much bigger problem 18 months later.

And, you know, I think, for example, there was one point where, in order to get votes on a particular budget resolution, we committed to creating what became the Simpson-Bowles Commission, which then, when it came out with its recommendations, created an even bigger question about how to respond to that. And, you know, it successfully got us the votes needed early, but then it created an even bigger problem 18 months later when the president chose not to—well, had to engage with those recommendations. So I see that kind of behavior just over and over again in government, where, you know, you do something to get through this week, and, boy, do you have a bigger problem a year later because of it.

KARABELL: It’s an interesting—I mean, we will see, right, whether or not you get the inverse reaction to the same expectation. But this one looks a little less uncertain, although I’m sure we can be sitting here in three months and have a completely different reaction.

Sir.

Q: Angel Liswo (ph) from JPMorgan.

Guilt can be a very powerful behavioral and psychological agent. And I read some studies pointing out that a simple text message to public school parents can lead to much higher levels of parent engagement and higher academic achievement and scores by the children. How do initiatives like that, which look like a no-brainer—it’s not red or blue and doesn’t cost much, as opposed to several other initiatives that cost several millions of dollars—end up not being—(off mic)?

KARABELL: Maya, you want to take a crack at why that doesn’t easily happen within government?

SHANKAR: Well, I think there’s just inertia effects. So it’s hard to deviate from the status quo when my government agency colleagues are not rewarded for taking risks and are basically just trying to do the job that they’re being asked to do. So sometimes introducing these new initiatives, as obvious as they may seem or as effortless as they may seem, ends up being more complex from a bureaucratic perspective and also because we might not have the apparatus for actually doing that thing.

So for—in the example of text messages, I was working with my colleagues at the Department of Education for years trying to figure out if we could text FAFSA filers and try to get them engaging in certain behaviors, and it turns out that the government texting people is a very complex thing to do, right? Who knew, right? But we had to figure that out along the way.

So, I mean, I think it’s a good thing that a lot of these interventions do seem like common sense retroactively. I think it helps to build public buy-in for some of these things. So, for example, as Elspeth was saying, you know, when I first joined, there was a lot of pushback from the conservative media—I was called, like, the nation’s control freak, the know-it-all 27-year-old at the time who thinks she knows how to run people’s lives—which I do actually think. No, I’m just kidding. (Laughter.)

But I think that one thing that kind of assuaged people’s fears is that when they actually see it done, both the things that worked and the things that didn’t work, they all were so obviously benign, right? So these interventions involved sending text messages to low-income students trying to get them, you know, to matriculate in college, trying to get, like I said earlier, service members to save for retirement, trying to get people who had started the health care enrollment process to actually finish the health care enrollment process. And so I think, like the gentleman noted, when you actually hear about how effective some of these very light-touch, low-cost interventions are, it’s very easy to get buy-in from the public.

So, to summarize, I would say it’s actually just a lot harder than you’d think to move a bureaucracy towards obvious things. But when you do in fact do those things, it does breed a lot of trust in the public. So you kind of have to always have the long game in mind in terms of your ambitions.

KARABELL: So it’s funny, I mean, sort of back to that initial question about the government part of this equation or the bureaucracy part of the equation, a lot of the work has been focused on how can we deliver policy outcomes more effectively from government to publics. Has there been any work done on how we can more effectively allow or help nudge bureaucracies to adopt change for these outcomes?

KIRKMAN: Yes. (Laughter.) I think on the kind of hard-to-measure things, I think this is—this is one of them just because once you kind of start to infuse the spirit of this stuff and start to get people to do these things within departments, within administrations, it kind of catches on like wildfire, and you very quickly get lots of—some spillover. So it’s hard to measure how much you’ve kind of impacted it.

But we have—you know, we see often as a legacy of the work that we do, whether it’s in central government or whether it’s in, you know, municipal local-level government, that when you leave, the work doesn’t stop, and it kind of continues, and it does become commonplace. And some of it is around getting people to try safe experiments, not experiments in the—you know, in the actual sort of scientific sense of the word.

But a good example is, on that text message point: from the outside, that seems super straightforward, if you assume that a school did have all the texting mechanisms and those kinds of things. But some of the qualitative research that we do behind studies like that shows you that, if you’re actually a teacher on the front line of this stuff, you’re having to live and breathe, you know, angry parents coming in and saying, why are you texting me telling me how much my kid’s been absent relative to other people, or why are you telling me I need to talk to my kid about king penguins, or whatever we’re texting them about to try and get them to engage in their child’s education a little bit more. It does feel like a big risk to them. And all they can think of is the scenario where they annoy one parent, or they cause a parent of a really disadvantaged child to say, you know what, I’ll just go to a different school, and increase the amount of turnover that that kid experiences, which is something their success is measured on. And once you get them to do a small thing like that, then maybe next time it becomes a little bit easier, and they can overcome the inertia, because their counterfactual is no longer, oh, God, everything is going to go swimmingly if I don’t do this and horribly wrong if I do. It’s, you know, last time I did something that changed, it actually worked out.

KARABELL: Other questions?

SHANKAR: I think there’s also been a surge—oh, sorry.

KARABELL: No, go on.

SHANKAR: OK. I was going to say there’s also been a surge of really interesting research coming out of human resources, so a lot of work in HR trying to look at what motivates employees, how do we get them to feel safe taking risks. So, for example, the former head of HR at Google, Laszlo Bock, wrote a book called “Work Rules!” And it was basically chronicling years of research figuring out what motivates employees to feel good about their work, to feel invested in it, to be interested in innovation. And so I think that the public sector can probably borrow some of the insights from the private sector in this particular domain to figure out how—you know, I think what Elspeth was saying, you sort of, like, chip away at the problem. So you might not be able to move the whole bureaucracy all at once, but if you can change individual minds within the system and try to encourage small behavior changes, in aggregate it might have some pretty pronounced effects.

Q: Hi. Jonathan Klein, Getty Images and a few other hats.

Maya sort of helped me with the last comment she made. The private sector hasn’t been mentioned at all during this session until about 30 seconds ago. When I look at the examples that you have all come up with, with changing forms and putting signatures in different places, the private sector, especially in the e-commerce and online space, have been doing tons of this forever. In fact, 10 years ago various companies could do A/B testing and multivariate testing with 12 million consecutive and simultaneous tests. To what extent has the private sector been helpful to the government in providing technology and know-how and the ability to essentially test a great deal more? And has the public sector had the technology in terms of the machines as well as the people to get as far as quickly as the private sector?

SHANKAR: Well, I can start off with one answer, which is I think we’re certainly technologically behind, so rapid A/B testing with multiple treatment arms is actually quite challenging for us. Oftentimes we’ll get a few arms at best. And we have to make sure that our treatment arms are really driven by hypotheses that are compelling and meaningful.

I will say that—I mean, I often get asked this question of, like, well, the private sector’s been doing this for years, and now the public sector is catching up. I do think that they are categorically different environments and that there has to be a strictness that comes along with experimentation and the application of behavioral science in the public sector that maybe the private sector can be less concerned about. And that’s because, I mean, we’re dealing with 300 million Americans. They’re not a testing bed for experimentalists. And it seems like we need to be more judicious about what it is that we test. We have to be more judicious about transparency and making sure that the public is informed about the things that we are doing so that we can have a public conversation about what people in this country are comfortable with or not comfortable with. And so disclosure is really important in this—in this instance. And I think we have to be very, very thoughtful about the ethics behind what we’re applying and what we’re testing.

And so I think it’s not simply a matter of us sort of catching up to the private sector as much as creating a new set of ground rules that we operate within to ensure that we are protecting people and are never taking advantage of the platform.
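A hedged sketch of what the few-arm, hypothesis-driven testing Shankar describes might look like in practice. The arm names and the hashing scheme here are invented for illustration; the point is that assignment to a handful of arms can be made deterministic, so it is reproducible and auditable, which speaks to the transparency concerns she raises for public-sector trials.

```python
# Illustrative only: assigning recipients to a handful of hypothesis-
# driven treatment arms. Arm names and the hashing scheme are invented.
import hashlib

ARMS = ["control", "earned_framing", "social_norm", "deadline_reminder"]

def assign_arm(recipient_id: str, trial_seed: str = "pilot-2015") -> str:
    """Hash a recipient into one arm; the same input always maps to the
    same arm, so the assignment can be re-derived and audited later."""
    digest = hashlib.sha256(f"{trial_seed}:{recipient_id}".encode()).hexdigest()
    return ARMS[int(digest, 16) % len(ARMS)]

# A few example recipients, reproducibly split across the four arms.
for rid in ["vet-001", "vet-002", "vet-003", "vet-004"]:
    print(rid, "->", assign_arm(rid))
```

Because the hash is effectively uniform, each arm receives roughly a quarter of recipients, and anyone with the seed and the recipient list can reconstruct exactly who was assigned to what.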

KIRKMAN: I think one thing I would add to that as well is that there are a number of examples of things we’ve done, whether it surrounds collecting tax revenues or trying to incentivize people from underrepresented groups to do things like apply to join the police force, where you find that the message that works best on average is not the message that works best for, you know, the most vulnerable or the most underrepresented groups. And if you’re in a private sector business and all you care about is the conversion into dollars at the end of it, because that’s your mandate, then it’s much more straightforward, because you just need to care about what the best result is on average. Whereas when you’re thinking about this from the government perspective, you also have to play the equity and access aspect of it, put those into play, and make some tough decisions that can end up being fairly political in their flavor about what you do and don’t want to pursue as a result of these tests, and whether you want to subsequently segment and treat people differently depending on what appears to work best for different groups. So I think all of those things, plus the inevitable logistical hurdles that we definitely face, don’t really come into the fore for the private sector.

KARABELL: Yeah, I mean, I think it’s a good juxtaposition. The private sector has many more tools, a lot more funding, and probably better technology, but it’s not always trying to answer the same questions. And so the methods are probably applicable, but they don’t always map one to one.

Final question.

Q: Robert Klitzman from Columbia University. Thank you.

I’m just wondering, as a follow-up, the government obviously can’t randomize citizens into different groups experimentally but can fund studies using social science. It can look at some of these issues. And I’m wondering if you think there is sufficient funding for such studies, which agencies would do it, what might be done, any hope for that happening.

KARABELL: Did you ever allocate in OMB for—

LIEBMAN: Yeah, I rarely see funding as the constraint for finding out the answer to something. It’s much more, is there someone within government who will authorize the activity, give you access to the data or to the platform. You know, more funding for researchers is a good thing, at least for people like me who are researchers. But I don’t think that’s the problem in most cases. I think in most cases it’s getting the ideas out there and finding a champion within government who’s willing to try to get better results with some of these techniques.

KARABELL: Actually, so let’s squeeze in one final question. The lady here had raised her hand, so—

Q: This is Sinem Sonmez. I’m an academic. I’m a professor of economics at St. John’s, and I’ve taught for almost a decade now.

And my question is regarding accountability within government agencies, in particular the Department of Education. For instance, you know, since I’m a professor, I get observed by the chair and I also get evaluated by the students, so I’m being monitored at both ends. However, when it comes to the Department of Education, shouldn’t there be more accountability so that we avoid the sort of problems that we have had with student loans, given the fact that the Department of Education was pushing loans to Navient, and then Navient was listed on the stock market, and in turn the incentives were not lined up? Navient’s incentive was to increase the shareholder price. And by doing so, they were not really trying to give students various alternatives for student loan payments. And then, you know, there were student loan defaults.

So could we—could you please give us some reasons as to how we can increase accountability within the government to avoid sort of the problems we’ve had with Navient and the Department of Education and the student loan defaults and all that? I don’t know if you’re familiar with the situation, with Navient and the fact that the incentives were not lined up correctly.

KARABELL: Yeah, no, I think we’re—I mean, is this amenable to the kind of techniques of behavioral economics as they’re currently applied in government, or is this more of a regulatory policy issue from—

LIEBMAN: So, you know, tying this question in a little bit with the one about how the private sector and the public sector are different: government often has to be pretty democratic in who it allows to compete for business through a government program. I mean, think of the problem of letting consumers choose among health insurance plans. If you’re a firm, you might choose only one health insurer, or you might choose two or three, but you’ll have a very curated, narrow set of choices you’re going to give to your employees, such that no matter what choice they make, they’re going to get a pretty good outcome. When we have the government set up exchanges for health insurance, we tend to give people 20 or 30 options, because the government sort of has to let anybody who meets eligibility criteria compete. With Medicare Part D insurance for prescription drugs, you know, there are dozens and dozens of plans out there, and studies have shown that consumers, seniors who are buying Medicare Part D, often make the wrong choice and don’t choose the plan that gives the best, cheapest coverage for the drugs that they’re actually consuming. And so there is a challenge here, because, you know, government could be really prescriptive and say, here is the best plan for you, but we don’t really feel comfortable with that. We actually let everyone compete in these types of systems that the government mediates.

And, you know, I think student loans are sort of an example of that. It’s clearly an area where we could be doing a lot better in giving consumers information about the educational institutions—you know, what the subsequent employment rates and earnings levels are for people going through different options. And there certainly have been proposals to condition eligibility to be part of the student loan system on actually having good performance. And so I think it’s an area where we’re going to see better use of data over time. But I think it’s also an example of this broader problem, that government sort of has to be pretty fair and let anybody who wants to compete for the business that is financed by government programs do that.

KARABELL: Well, I think we’re at the end of our time. I want to thank Maya and Jeffrey and Elspeth for a fascinating discussion. Obviously, let us hope this is embedded enough in policymaking throughout the world such that we can revisit this year by year with the Menschel Symposiums and others. If not, it’s been a fun ride for the past few years—(laughter)—and vaya con dios. Thank you very much. (Applause.)

(END)
