Charles Duhigg is a Pulitzer Prize–winning reporter, author, and expert on productivity. His bestselling books The Power of Habit: Why We Do What We Do in Life and Business and Smarter Faster Better: The Secrets of Being Productive in Life and Business are touchstones for business leaders. His newly released book, Supercommunicators: How to Unlock the Secret Language of Connection, examines the secrets of great communication based on the latest scientific research.
In this episode, he shares insights on what leaders should be thinking about as they develop good habits around AI and help their organizations tap its full power. He also explains how to build the communication skills you need to stay ahead in a quickly changing world of work: the ability to clearly articulate what you want is vital to how you interact with other people as well as with AI, and it can even help you get clarity on how best to tackle challenges and seize opportunities.
Three big takeaways from the conversation:
Most people confuse productivity with busyness. Answering a lot of emails isn’t being productive—“real productivity comes from building the habits that allow us to think more deeply, particularly when thought is urgently needed,” Duhigg insists. “When we’re feeling stressed, when we’re feeling overwhelmed, when someone says, ‘I need an answer right now,’ the ability to step back and say, ‘How do I get myself to think more deeply at this moment? How do I get myself to be innovative on demand?’ Those are the things that lead to real productivity.”
Communication skills are becoming more important every day in our new world of work. “We’re already seeing more and more emphasis on communication ability as something that employers are looking for when they’re hiring,” Duhigg says. “The ability to get along well with others is critical to the success of teams and to individuals.” And that isn’t just a skill that’s helpful in interpersonal situations: “Communication is a technical skill in addition to a human skill. The ability to show during an interview that you can communicate and connect well with others is going to tell that interviewer something about how you also interact with technology, and that’s going to be powerful.”
The secret to getting the most out of AI lies in moving beyond the straightforward queries you’d pose to a traditional search engine and into an ongoing interaction with a more organic give-and-take. “Everything that we can see about the future of tech is that it’s going to be more and more like having a conversation and less and less like using a calculator,” Duhigg says. “I have a 15-year-old son, and if he reads a book that none of his friends are reading, he’ll open up a browser window and have a conversation with ChatGPT or Bing about the book, and he finds it really edifying.”
WorkLab is a place for experts to share their insights and opinions. As students of the future of work, Microsoft values inputs from a diverse set of voices. That said, the opinions and findings of the experts we interview are their own and do not reflect Microsoft’s own research or opinions.
Follow the show on Apple Podcasts, Spotify, or wherever you get your podcasts.
Here’s a transcript of the conversation.
MOLLY WOOD: This is WorkLab, the podcast from Microsoft. I’m your host, Molly Wood. On WorkLab we hear from experts about the future of work, from how to use AI effectively to what it takes to thrive in our new world of work.
CHARLES DUHIGG: Everything that we can see about the future of tech is it’s going to be more and more like having a conversation, and less and less like using a calculator. And the faster we get into the habit of thinking about which conversation is the right kind of conversation with this particular type of AI or this particular interface, the faster we’re going to be able to use that tool effectively.
MOLLY WOOD: Today I’m talking to Charles Duhigg, a Pulitzer Prize–winning reporter, a bestselling author, and a renowned expert on the science of productivity, habit formation, and effective communication. We talked with him about how to use those good habits and human communication skills to unlock all the potential of AI, but also how to use AI to improve our human communication. Here’s my conversation with Charles.
[Music]
MOLLY WOOD: So Charles, your books, The Power of Habit and Smarter Faster Better, have really become touchstones for people who want to become more productive and form good habits. What’s your nutshell advice for leaders who are interested in these topics?
CHARLES DUHIGG: The biggest and most important thing is to understand that real productivity comes from building the habits that allow us to think more deeply, particularly when thought is needed. When we’re feeling stressed, when we’re feeling overwhelmed, when someone says, “I need an answer right now.” The ability to step back and say, How do I get myself to think more deeply at this moment? How do I get myself to be innovative on demand? Those are the things that lead to productivity. Productivity is not busyness. Productivity is about making the right choice when a choice is genuinely needed.
MOLLY WOOD: How, if at all, are your views evolving as technology evolves, the world around us evolves? I mean, it’s a combination of phones and now this question of AI and efficiency. How’s the thinking evolving, if at all?
CHARLES DUHIGG: So I think that one of the things that—let’s take generative AI, which has really been this huge disruptive force in a lot of positive ways within the tech industry and just the world at large. I’ve talked to a number of people who are working with it and designing it and developing it, and one of the things that they’ve consistently said is, this is essentially an add-on for human intelligence that makes human intelligence even more valuable. Because when you think about it, so much of what we do every day to be successful does not actually draw on our unique intelligence, right? The fact that I can sit at a computer and reply to 30 or 40 emails in an hour, whereas my competitor can only do 20 emails, means I’m going to beat them. But that’s not because I’m smarter than them, that’s because I’m more of a masochist than them. And so one of the things that AI will do, is it allows us to take those rote, unthinking activities, the activities that don’t really use our complete intellectual might, but instead just use a portion of it, and it’ll allow us to complete those tasks much more quickly. So I was talking to Mustafa Suleyman, who’s started an AI company, and I asked him, How do you think this is going to change the landscape? And he said, “I think for smart people, this will give them even more of an advantage. And for people who aren’t used to relying on their intelligence, who aren’t used to relying on their smarts, it’s going to pose a challenge to them because it’s going to force them to think in new and different ways.” That doesn’t mean they aren’t intelligent, they aren’t smart—and intelligence means different things to different people in different settings. But it does mean that now we’re going to be able to access that raw human intelligence, whereas before it was often mediated by just brute force.
MOLLY WOOD: Right, I mean, it feels like a kind of interesting, layered question of habit development and productivity. First, you have to develop a new habit around an entirely new technology, and then you have to filter for the correct habits to develop.
CHARLES DUHIGG: That’s exactly right. And what’s really interesting, though, is that it happens very automatically, right? A) because of how our brains form habits, but B) because this is what we know about technology and how we learn to use it. If you go back and you look at when telephones first became popular, there were all these articles about the fact that people would never be able to have a real conversation on the telephone. Because you couldn’t see each other, you couldn’t use non-verbal communication. And what’s interesting is that they were right at first. If you listen to early conversations or read transcripts on the telephone, they’re all very wooden and stilted. People aren’t actually communicating with each other. They basically figured this would be good for, like, sending grocery orders or stock orders. But of course, by the time you and I were teenagers, we could spend like seven hours on the phone and it was some of our closest, most intimate discussions ever. And it’s because humans have this amazing ability to learn how to use tech in the way that that tech is best used. Now sometimes that can be perverted, right? I think social media is a good example of when you can have undue influences that shape how we use tech. But for those of us who want to use tech and think about that tech, the process of building habits about when to use AI and how to use it, it’ll be very organic.
MOLLY WOOD: So you brought up this idea of communication. Your latest book is called Supercommunicators. This is a really big topic right now. Communication is one of the fundamental, so-called soft skills, and I think we’re realizing it’s one of the keys to really making AI work well for you. Tell me a little bit about what you learned in writing this book and what we can learn from people who are effective communicators.
CHARLES DUHIGG: So we’re living through this golden age of understanding the science of communication because of advances in neural imaging and data collection. A lot of the same things that make it possible for us to build things like GPT have also given us real insights into how people communicate with each other. And in particular what researchers have learned is that there are some questions that are more powerful than others in drawing people out. Those are known as deep questions because they ask about things like values and beliefs and experiences. We tend to think of a discussion as being about one thing, but actually each discussion is made up of multiple kinds of conversations, and that they tend to fall into one of three buckets, usually a practical conversation or an emotional conversation or a social conversation. But the fact that there is this science, this kind of calculus of why some people are better at communicating than others are, means that A) we can all learn how to do it, but B) it gives us insights into how we are going to need to communicate with, say, machines in the future. I mean, one of the things that’s interesting is—I have a 15-year-old son. The way he uses Bing and ChatGPT is, he has a conversation with it. He reads a book and none of his friends are reading it. And so he’ll open up a window and he’ll have a conversation with the computer about the book, and he finds it really edifying. It allows him to explore his own ideas and to get exposed to other perspectives that he hadn’t thought about. I, of course, never have a conversation with ChatGPT or Bing because I still think of a computer as a calculator, something that you give a problem and they return an answer. But everything that we can see about the future of tech is, it’s going to be more and more like having a conversation and less and less like using a calculator. And the faster we get into the habit of thinking about which conversation is the right kind of conversation with this particular type of AI or this particular interface, the faster we’re going to be able to use that tool effectively.
MOLLY WOOD: Tell me a little more about learning to be a good communicator before we talk more about using those skills with AI. It is counterintuitive to think that this is a skill that can be learned and developed.
CHARLES DUHIGG: The funny thing is it’s absolutely a skill. There are some people who are consistently good at this, and what sets them apart is not that they’re more charismatic or that they’re an extrovert—in fact, oftentimes they aren’t. What sets them apart is simply that they’ve thought about communication a little bit more deeply than other folks. So one of the things that we know about supercommunicators, for instance, is that they ask 10 to 20 times as many questions as the average person, but the questions that they ask, a lot of them we don’t even register as questions because they say things like, Huh, that’s interesting. What happened next? Or, what’d you say then? Oh yeah, what’d you think about that? There’s these questions that invite us into the conversation. And then they ask deep questions. Questions that ask us about our values, our beliefs, and our experiences. And that can sound kind of daunting, but that’s as simple as saying to someone like, What do you do for a living? Oh, I’m an attorney. Oh really? Did you always want to be an attorney? What made you decide to go to law school? Do you love your job? Those are three easy questions to ask, but all three of them are deep questions, because they get the other person to reveal something essential and meaningful about themselves. And then supercommunicators tend to reciprocate. They understand what kind of conversation is happening because they’re looking for clues. They match that kind of conversation—what’s known in psychology as the matching principle—and in doing so, they find a way to connect with someone and then invite them to match back.
MOLLY WOOD: So let’s bring this into this context of AI. We’ve talked on this show in previous episodes about how managers who seem to do the best job getting the most utility from AI are good at clearly articulating needs, delineating tasks, giving relevant context. But it sounds like you’re also saying that there is something in just being vulnerable or having a conversation or asking questions back to AI, the way you would interact with a person—or at least, is that what you’re learning from watching your son do this and combining it with what you’re learning about communication?
CHARLES DUHIGG: Yeah, that’s certainly true if we’re having conversations with other humans, right? That we tend to focus on what we want to say rather than trying to figure out what kinds of questions we can ask. And with AI, what’s really interesting—and again, we’re living through a period where we’re still trying to figure this out and we’re learning things every single day. But one of the things that we know is that, for instance, if you use emotional language with AI, you can increase its effectiveness. So if you use please and thank you, that tends to get you better answers. If you say something like, I need you to answer this question for my job, and it’s really, really important to me because the answer that you give me will determine whether I get a promotion, and I’m really hopeful that I get a promotion. Now there’s no reason why the large language model should care about you, and yet there’s something about presenting the question in that way that will raise the efficacy of the answer that it delivers. And it has to do with—obviously the training dataset that it was taught on has a lot of emotional language in it, and so it’s helping to identify which parts of that dataset it ought to pay attention to. And so it makes sense that this would have an impact. But because large language models are trained on the corpus of human communication, the same rules that make humans good or bad communicators also influence whether the LLM gives us a good or bad answer. And a lot of it is about this back and forth. So one of the things that I’ve learned from my 15-year-old is, if you ask the AI questions about how it got to an answer—or more importantly, what other questions it thinks you should ask—it’ll say some really interesting and useful things. Sometimes it reveals an avenue of inquiry that hadn’t even occurred to me to go down.
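To make that prompting pattern concrete, here is a minimal sketch in Python of the difference between a calculator-style query and the kind of framed, back-and-forth exchange Duhigg describes. It is an illustration rather than anything discussed in the episode: it assumes the OpenAI Python SDK with an API key already configured, and the model name and prompts are placeholders.

    from openai import OpenAI

    client = OpenAI()   # assumes an API key in the OPENAI_API_KEY environment variable
    MODEL = "gpt-4o"    # placeholder model name

    def ask(messages):
        # Send the running conversation to the model and return its reply text.
        reply = client.chat.completions.create(model=MODEL, messages=messages)
        return reply.choices[0].message.content

    # A bare, calculator-style query.
    plain = "What are the most common risks when migrating a customer database to the cloud?"
    print(ask([{"role": "user", "content": plain}]))

    # The same request with the framing Duhigg describes: courtesy, context, and stakes.
    framed = ("Please help me understand the most common risks when migrating a customer database "
              "to the cloud. I'm presenting a migration plan to my manager tomorrow, and the quality "
              "of this answer really matters to me. Thank you.")
    history = [{"role": "user", "content": framed}]
    answer = ask(history)
    print(answer)

    # The back-and-forth his son uses: ask the model what it would ask next.
    history.append({"role": "assistant", "content": answer})
    history.append({"role": "user", "content": "What other questions should I be asking here, and why?"})
    print(ask(history))

The point is not the specific service but the shape of the exchange: context and stakes up front, then a follow-up that asks the model to widen the inquiry.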
MOLLY WOOD: What’s interesting is that as I hear you describe just that one example—if you help me with this answer and the answer is good enough, there’s a possibility I’ll get a promotion. That is also something that maybe wouldn’t occur to me to say to another person because it would be showing some vulnerability or it would be communicating in a way I didn’t think was that comfortable, but it would raise the stakes for that person also. It sounds like what you’re saying is, yeah, if this thing is trained on how to communicate effectively, then this is also what we should be doing with humans.
CHARLES DUHIGG: That’s exactly right. And one of the things that’s interesting about what you just said is, it wouldn’t occur to you to say that because it would be exposing a vulnerability, and you’re exactly right. There is this natural instinct not to expose that vulnerability, but what we know is that when you do expose a vulnerability, you actually make it more likely that the other person likes you and wants to help you, and most importantly, trusts you. Our ability to expose a vulnerability is at the core of the superpower that is communication. And this makes sense because, when you think about it, the way that communication evolved was it became the thing that set Homo sapiens apart. It’s the thing that allowed Homo sapiens to succeed better than any other species. Because I could take an idea or a feeling, or a hope or an aspiration, and I could share it with you, and in my sharing, you experience that hope and that feeling, and that idea. Sharing what’s going on inside our own head is what’s going on in communication. In fact, it’s known within the neural literature as neural entrainment. You and I are having a conversation right now and we’re separated by hundreds if not thousands of miles, but if we could detect it, what we would see is, even though we can’t see each other, our eyes are starting to dilate at similar rates. Our breath patterns and heart rates are starting to match each other. Most importantly, what’s happening inside our heads, our neural activity is starting to look more and more similar. That’s what neural entrainment is, and that’s the core of communication, that I can describe an emotion and feel it myself. And in describing it to you, you start to feel it. You experience it, your brain becomes like mine. And the reason why that’s really important when it comes to vulnerability is that one of the loudest forms of communication is exposing vulnerability. If somebody says something vulnerable, we almost cannot prevent ourselves from listening to them, because, historically, exposing a vulnerability meant that something was really important.
MOLLY WOOD: So now it seems like we have this duality again, which is that, in workplaces, I would say we arguably have not prioritized communication as much as we could and should. And now it is going to be the natural tendency, certainly of maybe people—I think we’re about the same age because I have a son about the same age, and our tendency is going to be to boss the computer around, because that’s just how we’ve been trained to interact, and there’s a double training that may need to happen: communicating better, full stop, and communicating better with AI to get the most out of it. But luckily it’s all the same skill, and now we just have to look for it when we hire.
CHARLES DUHIGG: And what I love about it is that we can practice with AI. So one of the things that AI allows us to do—and people are already creating AI agents that do this—is that it allows us to try out different communication techniques and sort of try and anticipate how people will react. One of the things I hear when I talk to researchers who work on negotiations and teach negotiation is that they tell all their students: we’re going to have an in-class exercise, you’re going to have to negotiate over an issue, and before you have that negotiation, I want you to go have it first with AI. Pay attention to what surprises you. What objections do they raise? How do they come back in a way that catches you off guard? So we get to practice having a conversation before we actually have the conversation, which is, of course, something that we normally do with our friends, but our friends get tired of it at some point and say, I don’t want to role-play with you anymore, I’ve got other stuff to do. But equally, I think one of the things that we’re going to see is that when we are hiring, you’re going to see more and more emphasis on communication ability as something that employers are looking for. And we’re already seeing this. We’re already seeing that the ability to get along well with others is critical to the success of teams and to individuals. But even more, now that communication is a technical skill in addition to a human skill, the ability to show during an interview that you can communicate and connect well with others is going to tell that interviewer something about how you also interact with technology, and that’s going to be powerful.
MOLLY WOOD: I feel like there is so much potential in the idea of AI as this sort of communication sandbox, you know, whether it’s practicing for an interview or a tough conversation, or even just being better at conversation. I know people who are actually using it for exactly that purpose. It’s a really compelling use case.
CHARLES DUHIGG: Yeah. There’s a technique called “looping for understanding” that’s really powerful, particularly in conversations that are hard conversations or where there’s some conflict. Looping for understanding is this technique where you ask a deep question, you repeat back what the person tells you in your own words to prove to them that you’re listening. And then the third step, and this is the one we tend to forget, is that you ask them if you got it right. Now just even hearing that, we know how effective that is, right, that if I loop for understanding, if I repeat back what someone said in a conflict conversation and I ask them if I got it right, we know that that’s going to make the conversation go better. But it’s so easy to forget to do that when we’re in the middle of that conversation because we’re feeling heated and overwhelmed. And so think about how effective, how powerful it is just to practice looping for understanding with AI, to get into the habit of when you say something to me, that I, instead of just replying and telling you my thoughts, I just take a beat and say, Here’s what I hear you saying. Tell me if I’m getting this right and repeat it back. Habits, of course, become habits because we do them habitually, and AI gives us that sandbox to allow us to practice those habits until they just become automatic.
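One way to picture the sandbox Duhigg and Wood are describing is a simple role-play script. This is a hypothetical sketch, not a tool mentioned in the episode: it assumes the same OpenAI Python SDK setup, has the model play the other side of a negotiation, and leaves room in each turn to practice looping for understanding.

    from openai import OpenAI

    client = OpenAI()   # assumes an API key in the OPENAI_API_KEY environment variable
    MODEL = "gpt-4o"    # placeholder model name

    # The model plays the counterpart so you can rehearse before the real conversation.
    messages = [
        {"role": "system", "content": (
            "You are role-playing a skeptical vendor in a contract renewal negotiation. "
            "Raise realistic objections, push back on price, and stay in character."
        )},
        {"role": "user", "content": "I'd like to renew our contract, but I need a 10 percent discount this year."},
    ]

    while True:
        reply = client.chat.completions.create(model=MODEL, messages=messages)
        text = reply.choices[0].message.content
        print("Counterpart:", text)
        messages.append({"role": "assistant", "content": text})

        # Your turn: a natural spot to practice the loop ("Here's what I hear you saying... did I get that right?").
        your_turn = input("You: ")
        if your_turn.strip().lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": your_turn})

Noting which objections catch you off guard is the point of the exercise, much as the negotiation instructors Duhigg mentions suggest.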
MOLLY WOOD: So speaking of habits, at the end of The Power of Habit, you wrote about habits you picked up as a result of writing the book. Can you give me some examples of communication habits that you’ve adopted, specifically on the topic of AI? What habits are you building around AI? How are you using it in your personal and or professional life?
CHARLES DUHIGG: I basically experiment with it all the time. And some stuff it’s not good at, but then there’s other times when I don’t even understand what question I want to ask it, and it ends up helping me understand what I’m trying to get at. Oftentimes for technical questions, I’m like, How do I make Excel do X? If I just ask ChatGPT or Bing, it’ll often tell me what the command is right away, and I just plug it in. And so the key is, I think, the same way that we didn’t learn how to use telephones as a species until we just experimented with them for a while, we’re not going to figure out how to use AI without really experimenting. Luckily, the experimentation is kind of fun.
MOLLY WOOD: Sometimes it turns out that all you really need is reinforcement, right? Like you think you already know the answer, but you need to have it confirmed from another source.
CHARLES DUHIGG: Right, and it lays it out in a way that’s easier to grasp. This is one of the reasons why conversation is so useful, because not only does it help us understand another person, it helps us understand ourselves. Sometimes simply forcing myself to explain my problem to another person helps me figure out what the problem actually is. The fact that we can have these dialogues with someone who doesn’t get bored and won’t betray our confidences, or won’t allow their own biases to influence us—sometimes we learn things just in what we say, and then sometimes we learn things from what the machine responds with, where we say, oh man, I wish it hadn’t said that, but I guess that’s true, or, no no no, that’s not right. That’s not right. It doesn’t understand at all. We begin to understand what’s actually going on. I did this piece for the New Yorker about OpenAI and Microsoft and the relationship between them. And one of the things that came across really strongly was that both OpenAI and Microsoft, and I think this is a real strength and asset, are essentially trying to introduce the technology at a pace where people can absorb it. There’s a lot that we could do with AI right now that isn’t being commercialized, in part because it’s unclear how to commercialize it, and it’s unclear how it’ll be used, but also because people aren’t prepared to use it. Take going to the doctor. Right now, GPT-4 is a pretty good diagnostician. But if you go in and I tell you, Here’s what the machine or the computer says is wrong with you, you’re probably going to say, actually, I’d like to talk to the doctor or the nurse. Like, it’s just not enough to get it from a machine. And so part of this is, how do we introduce this technology into people’s lives in a way that they are prepared to absorb and use it rather than alienating them?
MOLLY WOOD: Okay, so bringing this back to business… I’m a leader. What question should I be asking myself when I wake up every morning?
CHARLES DUHIGG: I mean, I think that leaders would say, What am I paranoid about? But I think probably the better question is, What’s the most meaningful thing I can do today? Particularly once you’re a leader, your day becomes so jam-packed with task after task after task, and solving other people’s problems. You can easily become reactive and spend all of your time just reacting to what life throws at you. But really good leaders say, No, I’m not going to be reactive. I’m going to find the thing that’s most important to me to be proactive on, and I’m going to make that happen. They do this with the president of the United States: one of the jobs of the chief of staff is to make sure that the only problems that end up on the president’s desk are either problems only he can solve, or problems that correspond with what he wants to solve, because otherwise he could be drowned in the number of questions and issues that come up every day.
MOLLY WOOD: What are some common work habits you think will be irrelevant in the near future?
CHARLES DUHIGG: Ooh, that’s a good question.
MOLLY WOOD: Please tell me it is reading all my emails.
CHARLES DUHIGG: [Laughter] I think anything that’s boring is kind of potentially on that list. So take data analysis. Oftentimes, instead of doing the data analysis, I just dump it into AI and I tell it what I want it to figure out and it goes and it does it for me. And I need to spot-check to make sure it’s not hallucinating, but it’s made data analysis so much easier for me. And the reason why is because data analysis, it’s boring. Coming up with what I want to analyze? That’s an interesting question. Doing the analysis can be kind of drudge work. And you mentioned emails. The truth of the matter is there’s some emails you love to get, and there’s some emails that delight you and entertain you, and there’s some emails that you don’t mind responding to, in fact you enjoy responding to. And then others where it’s just like, okay, here’s something to put on my calendar, and I need to tell this person what my availability is. All those little drudge tasks. Those are the things that are going to disappear, and that’s for the good.
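The spot-check Duhigg mentions can be built into the workflow itself. Here is a minimal sketch, with a hypothetical file, column name, and model-reported figure, of re-deriving one number locally before trusting the rest of an AI-generated analysis.

    import pandas as pd

    # Hypothetical dataset and a figure the AI reported back after being handed the raw file.
    df = pd.read_csv("sales_2024.csv")     # hypothetical file and column names
    model_reported_mean = 4280.50          # e.g., the "average order value" from the AI's summary

    # Recompute the same statistic locally.
    local_mean = df["order_value"].mean()

    # Spot-check: flag the analysis for review if the numbers don't roughly agree (1 percent tolerance).
    if abs(local_mean - model_reported_mean) > 0.01 * abs(local_mean):
        print(f"Mismatch: model said {model_reported_mean:.2f}, data says {local_mean:.2f}. Review the rest.")
    else:
        print("Spot-check passed; the model's figure matches the data.")

One recomputed statistic won't catch every hallucination, but it is a cheap habit that tells you quickly whether the rest of the output deserves a closer look.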
MOLLY WOOD: As you look across this intersection of business and technology, what are you excited about?
CHARLES DUHIGG: I am really excited about what I hope is going to be another Cambrian explosion in tech. When I graduated from college in 1997, it was like the Wild West. There were a thousand different companies doing a thousand different things, and that’s what I think is happening right now. We are seeing again this opportunity for a land grab, for the Cambrian explosion, for someone with a new and unexpected idea who can sort of turn the world upside down the same way that, frankly, OpenAI did. Who had ever heard of OpenAI really two years ago? I’m excited.
MOLLY WOOD: Thank you so much.
CHARLES DUHIGG: Thank you. This was wonderful. Thanks so much for having me on.
[Music]
MOLLY WOOD: Thank you again to Charles Duhigg, journalist, communications and productivity guru, and author of the new book Supercommunicators. Please subscribe to our podcast and check back for the next episode where I’ll be talking to Dr. Britt Aylor, Director of Leadership Development at Microsoft, about why we should all strive to think like an adaptive leader. If you’ve got a question or a comment, please drop us an email at [email protected], and check out Microsoft’s Work Trend Indexes and the WorkLab digital publication, where you’ll find all of our episodes along with thoughtful stories that explore how business leaders are thriving in today’s new world of work. You can find all of it at Microsoft.com/WorkLab. As for this podcast, please rate us, review us, and follow us wherever you listen. It helps us out a ton. The WorkLab podcast is a place for experts to share their insights and opinions. As students of the future of work, Microsoft values inputs from a diverse set of voices. That said, the opinions and findings of our guests are their own, and they may not necessarily reflect Microsoft’s own research or positions. WorkLab is produced by Microsoft with Godfrey Dadich Partners and Reasonable Volume. I’m your host, Molly Wood. Sharon Kallander and Matthew Duncan produced this podcast. Jessica Voelker is the WorkLab editor.