Polarization occurs over the course of decades, across regions, races and generations, between parties and people. But there is one common thread: it always inflames emotions.

Luiza Santos, a third-year PhD psychology student at Stanford University, specializes in affective science — simply put, the study of emotion. In this final interview in our discussion series with researchers from Stanford’s Polarization and Social Change Lab, she dives into polarization as it manifests in our brains: the ways it makes us feel and act, and what we can do about it. We discuss how empathy works, its role in political discourse and how to resist the temptation to hate the other side.

This interview has been edited for length and clarity.

Your work centers around empathy and how it plays out in politics and everyday life. Can you walk us through the psychology of it?

Empathy has multiple components, but one of them is empathic concern: if I see that you’re suffering, I feel bad for you. The other is perspective-taking, the more cognitive side of empathy, where you put yourself in the shoes of someone having an experience and try to imagine how that experience would feel.

Luiza Santos

Even though they’re related and there’s some overlap, they can have different consequences. I try to understand what motivates a person to empathize with outgroup members or what keeps people from empathizing, and understand how we can change people’s beliefs about empathy.

What do the terms ingroup and outgroup mean?

An ingroup member is a person who thinks like you and shares some sort of belief structure that you also share. An outgroup member is a person who does not. This ingroup-outgroup relationship becomes more salient in competitive environments. If you’re a liberal, your ingroup would be other liberals. If you’re a conservative, your ingroup would be other conservatives. 

Things like threats can activate different sorts of groups. If we think about terrorism and terrorist attacks, that causes what psychologists call the “rally-round-the-flag effect,” where distinctions between liberals and conservatives largely disappear because everyone clusters together around a shared American national identity instead of being divided along political lines.

Not every disagreement is a fight or a threat, but they can feel that way, especially in politics. How do our brains distinguish between disagreement and threats?

Some of my research interests try to understand how attitudes become moralized. If someone comes to me and says they don’t like chocolate ice cream, I’m like, okay, that’s their own personal opinion. But if we think [in terms of] moral values and I think it’s wrong to kill people, and you come to me and say it’s okay [to kill people], I would think, this person is immoral. So there’s a spectrum of disagreements, from those where you could see potential compromise to those that you feel are really infringing on your central beliefs about right and wrong.

Empathy can seem like a natural emotion, so why does being challenged or proven wrong feel so unnatural and exhausting?

Sometimes we get so deep into our group membership that we start seeing division where once there wasn’t any.

Empathy is this superhuman power we have to try to understand the mind and beliefs of someone who could be radically different from us. The problem is that it doesn’t always happen naturally. People have talked about empathy as an automatic mechanism, and there are cases where that’s true. Infants will cry when they hear another infant crying. We can care for completely fictional characters as we read a novel. There are all these ways empathy feels natural and automatic. But when we look at ingroup and outgroup processes, we see that’s not really how it happens.

If it’s not completely automatic and we have some control over it, then we can choose how we empathize in certain circumstances. It may feel very effortful, especially when you’re trying to empathize with someone who endorses beliefs that you find fundamentally wrong. One approach we’ve been taking — and that some recent research is trying to investigate — is the reasons why people try to avoid empathizing with outgroup members. Those reasons include things like, ‘I think their views are threatening to how I see the world.’ Or, ‘I think if I empathize with them, people in my group would see me less positively because I’ll be betraying my group.’

We try to investigate the motivations people may have that can lead them to either choose to empathize or avoid empathizing.

Could empathy have a downside? For instance, could it be used to simply paper over the problems that are causing a lot of the animus and polarization in our world?

My main rebuttal to that is sometimes we get so deep into our group membership that we start seeing division where once there wasn’t any. If you feel that your identity is under threat from someone else’s belief, I see the point of believing it’s not your job to go out of your way to try to change that person’s view. But I do think, at a broader societal level, if we say that it’s not worth trying [to have a conversation], we consolidate these divisions in a way that can be really harmful down the line. It pushes people to find niche groups that agree with them. The internet is a whole new world for that, and then you only find echo chambers.

So is having open and empathic conversations always worth the effort? 

I just feel there’s no alternative. Being in a group has many benefits and gives us certainty, and when the world is kind of uncertain, it helps to guide our beliefs. But at the same time, it’s important to take a step back and understand that we’re all members of a broader group.

We’re all super tired. It’s hard to have family members that you grew up with and now you feel like you can’t talk to anymore. It’s emotionally exhausting to try. But I do believe that it makes a difference, even though sometimes it’s one conversation and one person. Having uncomfortable conversations where you try to be open-minded and then voice your concerns is one way to grow and understand different perspectives. It never ceases to be uncomfortable, but it’s still worth a try.

In a 2012 interview with The New Yorker, novelist Mohsin Hamid remarked, “Empathy is about finding echoes of another person in yourself.” 

By hearing more of these echoes, it stands to reason that we can build our empathy. But how does that work? That’s where the SPARQ lab comes in. Using insights from behavioral science, Stanford University’s SPARQ lab (Social Psychological Answers to Real-world Questions) is dedicated to finding new ways to break down barriers and build a more empathetic world. Its researchers produce and present academic research through toolkits, designed to teach us not just how to increase our empathy, but how to deploy it for the greater good.

Ellen Reinhart is a PhD candidate in social psychology at Stanford studying the intersection of psychology, culture and inequality. She is a graduate affiliate at SPARQ, having previously served as lab manager. Here, she discusses the psychology of empathy, its limits, and practical interventions for building more of it.

Ellen Reinhart

This interview has been edited for length and clarity.

What are your areas of research?

I study how psychological tendencies and cultural default assumptions perpetuate social inequalities, and I focus on social class and race. I’m also very passionate about research that addresses social problems out in the real world, beyond academia.  

How does SPARQ address real-world questions?

Overall we think of ourselves as a “do tank” rather than a think tank. It’s creating science and psychological insights… in partnership with change makers, which people in criminal justice, economic mobility, education and health-related spaces [can use] in an applied setting.

But another version is also taking what psychology has already found and packaging that in a way that non-academics can use in everyday life circumstances. That’s where the SPARQ Solutions Catalog comes from — it’s over 50 entries taking a psychological intervention written about in an academic paper… and packaging it in a way that anyone who comes to the website could learn about what the insight is and how they can apply that in their real life. 

What institutions have used these solutions and toolkits? 

One of SPARQ’s big branches involves criminal justice. Under that branch, we’ve partnered with agencies and communities to make data-driven change. For example, one paper found that officers in a department used less respectful language with community members of color. The methods they used to find that were incredible: the analysis spanned many weeks of audio from many different body-worn camera recordings.

“One of the powers of this study is that it demonstrates in a real-world setting that when instructed, participants could follow the directions to reappraise their emotions.”

SPARQ is using that finding and saying, “Okay, we know it’s less respectful language and that impacts how community members see the police department. How can we help this department create an intervention that teaches officers how to use the same type of language regardless of the person they are interacting with?” 

What exactly is an intervention, and how can it lead to change?

An intervention is some kind of systematic change, targeting a very particular outcome. An individual-level intervention would be, for example, a teacher in a classroom having their students read about Harry Potter because that increases empathy for marginalized groups. But there can also be interventions at institutional levels, like a police department rolling out policies and protocols about language in these stops.

One study SPARQ features addresses political and social intolerance through “reappraising emotions.” What does that mean?

This study was run by psychologist Eran Halperin. He and his colleagues wanted to see if this common technique for managing emotions, which is called reappraising emotions, could be used to influence political intolerance. To reappraise your emotions is to be somewhat detached from them, to take an outside perspective and try to be more objective and analytical. 

How did the study work?

They had Israeli college students read an article about the Arab minority population, and it took place at a time when tensions were very high. This was a very inflaming piece to read, designed to elicit very strong negative emotions. Some students were told to read the passage in this cognitive reappraisal way. They were told to take the outside perspective, be objective, be analytical, try to be like a scientist and not think about it personally.

What they found is among those who were right wing in their political orientation, the experiment reduced the level of political intolerance towards Palestinian citizens of Israel. Part of the reason why they think this works is because it reduced the negative emotion that participants felt while they were reading the passage. The thought is, it allowed participants to have a more balanced perspective as they were taking in this new information, and that reduced their intolerance for others. 

What are the broader implications of this?

One of the powers of this study is that it demonstrates in a real-world setting — this was a very tense political situation unfolding on the ground — that when instructed, participants could follow the directions to reappraise their emotions. Reading these short instructions beforehand changed how participants read the passage, reduced their negative emotions and then reduced their intolerance. I think it’s a positive sign that it’s possible.

SPARQ also featured research on how native-born citizens and immigrants can reconcile and navigate their differences. How did that work?

With this study, psychologist Jonas Kunst and colleagues divided European-American adults into three groups. For one of the groups, they focused on facilitating a common identity between European-Americans and immigrants who had [more recently] come to the country. For example, one of the quotes from this passage read, “Because we are all immigrants or descend directly from the immigrants who came to this country and built it, the United States of America and all its citizens are true and proud products of immigration.”

A different group read a passage that emphasized differences between those who are native born versus those who are immigrants. A third group was a control condition and didn’t read anything about immigrants in the U.S. 

They gave participants a small amount of money and they were able to decide how much they wanted to keep for themselves and then how much to donate to the American Civil Liberties Union, which supports immigrants. The researchers found that participants who had read about immigrants with this common identity were more likely to give more money to the ACLU than the participants in the other two conditions. There was something about emphasizing the shared group that allowed participants to show support for immigrant communities by supporting the ACLU at a personal cost to them.

What do these stories tell us about bridging divides?

At the level of the larger narrative, this common identity intervention with immigrants shows how “they” are “we.” It’s drawing on this bigger story of what America is and saying we’re all immigrants and that’s something to celebrate and share, and because we’re all one in this group, then I’m going to treat you like you’re a part of my group, my family, my team. That impacts how people think about distributing resources and other kinds of downstream consequences.

SPARQ has something called toolkits, which are about applying sanitized laboratory results to trickier, more complex real-world situations. Tell me about that.

The Solutions Catalog is step one of how we take social science, package it in a way that is free of academic jargon, and give it to the public. The toolkit instructions show how to implement the intervention in an almost cookbook-recipe way. Step one is to take your baseline, to see where point A is. Step two is the change we’re going to ask you to make: read this article, try to take a detached perspective, whatever intervention has already been tested academically. Step three is to take that measure again to see if you’ve changed. You get your score so you can see if there’s been any change over time.

Step four, which SPARQ is helping to promote, is to share your story on the website, to work with others, to see what part of this is really working. It can be this iterative process, because it’s not just about getting science into the real world, but about how we get more of the real world back into science. That makes society better and makes research and theories better. We hope it can be this more symbiotic relationship.

Like a lot of folks, I occasionally hear people espouse values and beliefs different and often counter to my own. When I was young, I didn’t understand this. Many of these beliefs and positions seemed irrational to me. How could people believe such crazy things? But as I read, traveled and met more people, I learned that values and convictions that might seem strange to me often serve a purpose for others. 

I have come to sense that I, too, might harbor some odd beliefs – and, like others, I come up with convoluted rationalizations for them.

Sometimes that purpose is simply the sharing of these values and beliefs with others who feel the same way. Shared beliefs can foster a sense of community, or provide meaning in people’s lives. It seems that we as humans have evolved to need something that provides unity and cohesion. Sometimes that might be liking the same songs or movies. Or, if that means believing in statues that cry or space aliens, well, okay, maybe no harm done. 

I have come to sense that I, too, might harbor some odd beliefs — and, like others, I come up with convoluted rationalizations for them. I have a set of values that, to me, should be seen as self-evident and should therefore be adopted by everyone. But if we’re going to find common ground and live together, we need to at least attempt to understand the mindsets of the people who think differently from us.

Some years ago the social psychologist Jonathan Haidt proposed in his book The Righteous Mind that there are six basic moral values, and that how important each value is to us as individuals determines our behavior. I am not interested in debating how innate or universal these values are, or whether there are more or fewer than six, but I do find them a useful tool for understanding those with views different from my own. These tools help prevent me from feeling superior. They allow me to imagine what someone else might be thinking and why they believe what they believe — because we actually share many of the same values.

Here are the values (and their opposites):

1. Care/Harm. We are all one family and should have compassion for everyone else, as much as we can. Suffering should be eliminated if it can be.

2. Fairness/Cheating. A society should strive to be fair. Justice should be equal for all. Cooperation is better than cruelty. The flip side is that cheaters and free riders should be scorned or punished.

3. Loyalty/Betrayal. Loyalty to one’s family, community, team, business and nation is essential. It is the force that holds things together. 

4. Authority/Subversion. One must abide by the law, whether one agrees with it or not. Our collective agreement to obey social and legal institutions is what makes us function as a society.

5. Sanctity/Degradation. Purity, temperance, restraint and moderation stabilize our world. Certain behaviors are immoral and must be shunned.

6. Liberty/Oppression. People should be allowed to be free in their speech, thoughts and behaviors as long as they aren’t hurting others.

I find that, to some extent, I can see the worth in every one of these six values. None of them seems completely wrong. That said, clearly I have my personal leanings. I value some more than others. That’s exactly the point. The idea is that folks tend to prioritize some of these values, and which ones we prioritize determines our politics, how we behave and how we think of others. I do this, you do this, we all do this. 

The values we prioritize, says Haidt, determine where along the political spectrum we fall. Whereas liberals tend to emphasize care and fairness, conservatives are somewhat more apt to value loyalty, purity and authority (though, as Haidt points out, neither group discounts care and fairness all that much — it’s on loyalty, purity and authority where liberals and conservatives really diverge).

So, for example, as someone who places a high value on care and fairness, I might support laws that say everyone, no matter their personal beliefs, must treat LGBT people as equal citizens. But folks who rank sanctity higher than care or fairness in their value hierarchy may feel that homosexuality is “unnatural,” and that its “degradation” of sanctity therefore overrides the need for equality. 

As we’ve seen recently, some folks believe face mask rules intrude on their personal freedom. They feel that their individual rights (liberty) take precedence over those of the larger community, whereas I place more value on cooperation and the health of the collective (care). I don’t think either freedom or liberty are wrong, but in certain situations, I might feel that those values need to be curtailed for the good of everyone.

I might feel that in order to challenge unjust laws one sometimes has to break them (fairness). Others will disagree and say “the law is the law” (authority). Many Americans believe that free speech is an absolute right (liberty) and if others are hurt, offended, or feel discriminated against by what I say, well, that’s the price of freedom. In general, I personally share that value, but I don’t see it as absolute — there are times when maybe it should be regulated if it is intended to do harm or incite violence (care, fairness). The point is, I can see some worth in all of these values. 

This raises an important point. The fact that all of our disparate beliefs spring from the same six values doesn’t mean all beliefs have the same merit, or even any merit at all. Prioritizing some values — like sanctity of race or homeland, in the example to follow — can justify inhuman behavior. 

Adam Gopnik, writing about the Nazi “angel of death” Josef Mengele in The New Yorker, noted: “There is nothing surprising in educated people doing evil, but it is still amazing to see how fully they construct a rationale to let them do it, piling plausible reason on self-justification, until, like Mengele, they are able to look themselves in the mirror every morning with bright-eyed self-congratulation.” The point is, we shouldn’t excuse behaviors that harm others — we should simply try to understand why those behaviors happen and the mechanisms that are used to justify them. To paraphrase the writer Hannah Arendt when she was accused of justifying what the Nazis did: To understand is not to excuse.

What does all this tell us? It tells us that, though I may disagree with folks on certain issues, those disagreements are nonetheless rooted in values that they — and I, to some degree — share. Our beliefs and our politics may be very different, and yet, by recognizing the values behind them, I can, to some extent and in some instances, empathize and understand why someone might feel differently about an issue than I do. It helps me to not judge them as ignorant or evil, and gives us a foothold, a place to start a conversation.

One of polarization’s more nefarious effects is how it prevents people from connecting on an individual level. Learning to push across these barriers and reach out to people you disagree with may be a dying art, but it’s also one of the most valuable methods we have for fighting alienation. In this video, psychotherapist Allison Briscoe-Smith shows us how to do it.

This is the second video in “Bridging Divides,” a five-part series hosted by Scott Shigeoka, in which we meet people whose personal experiences show us how real-life divisions can be overcome. See the transcript of this interview here.

Watch the first video: Is This the Unlikeliest Friendship in America?

THIS Q+A IS PUBLISHED AS PART OF AN ONGOING SERIES OF INTERVIEWS WITH MEMBERS OF STANFORD’S POLARIZATION AND SOCIAL CHANGE LAB


Polarization isn’t a single, monolithic phenomenon. There are two types: the kind we express in wonky disagreements over laws and policies, and the kind we feel — that visceral red-vs.-blue passion that fuels partisan acrimony and take-no-prisoners elections. 

In this interview — the latest in our series of conversations with researchers on the science and dynamics of polarization — Jan Gerrit Voelkel, a PhD student in sociology at Stanford University and an affiliate of the Polarization and Social Change Lab (PASCL), breaks down these two types of polarization and explains the mercurial forces that drive political division. 

This interview has been edited for length and clarity.

Jan Gerrit Voelkel

Can you walk me through the two major types of polarization? 

The two major dimensions of polarization discussed in the literature are attitudinal polarization on the one side and affective polarization on the other. Attitudinal polarization is about policy views. The study of affective polarization is relatively young — it is often defined as the tendency for partisans to like their fellow partisans and to dislike members of the opposing party.

What does affective polarization — the partisan kind — tell us about how we see each other?

One American national election survey that has been running for a long time asks respondents how they feel towards the parties and their candidates. You can see that people have always preferred their own party over the other party. But this gap has widened considerably, and it is not so much driven by people liking their own party more — that has been relatively constant over the last two decades. It’s more that people have really started to like the opposing party much less.

How do we know this? Is there a way to measure our feelings about the “other side?”

One traditional measure is to ask: How much do you like or dislike Republicans and Democrats? However, important research has shown that it very much depends on how you ask these questions, because people may answer with elites in mind or with the mass public in mind. My view might be different for Republican elites versus the Republican voter base. Researchers find that you need to be careful not to mistake a dislike of politics in general for a dislike of voters from the opposing party.

What can affective polarization potentially lead to?

The next level would be partisan spite, a real contempt for the other side and the willingness to compromise other principles just to keep the other side away from governing. [We may do this] to an extent where we sacrifice democratic principles and ignore the rules that we have all agreed on because we think we are really on the right side. 


    Sounds perilous. Does affective polarization ever have the potential to result in a positive outcome? 

    You can see affective polarization relate to more political activity, caring more about politics, seeking out more information, being willing to protest and stand up for what you believe in. And I totally think that there can be positive externalities of that. 

    This is why it’s so important to study the links between negative partisan effects, then partisan spite or contempt at the next level, and then the things that we think are problematic for society. One thing I’m very concerned about is if we receive the same set of facts and we interpret these two facts very differently, then it becomes really difficult to find agreement on anything. 

    A functioning government that is incentivized by the voters to work together and to implement what the majority of the country clearly wants is really important. This can be undermined by polarization, because polarization can lead to more incentives for politicians to not compromise, to play to their base and hope that the other side is so disliked that even the more moderate voters who lean towards their own party will still support them.

    Why do you think affective polarization has happened at a mass scale, while attitudinal polarization, which is more about policy and facts, has been less extreme?

    This is exactly the right question to ask. My main answer is, “We don’t know.” One thing is that people have evolved into different, new circles, and are getting different views from different sources. That wasn’t so much the case for a long time when there were a few national television programs from which people would get the news. Now there’s a lot more choices and people can select into these bubbles and receive the news, and that may more strongly drive their views for the parties. 

    So do Americans actually agree on a lot of policy? Or is it more like they don’t necessarily have strong views on policy in the first place? 

    The concept of attitudinal positions has a lot of sub-dimensions. There is partisan sorting, and research strongly suggests that it has increased in the sense that people’s policy views are just more aligned nowadays with each other than before. For instance, my attitudes on climate change might be more aligned with my attitudes on gun laws and everything follows pretty much along the same party line.

    Polarization can lead to more incentives for politicians to not compromise, to play to their base and hope that the other side is so disliked that even the more moderate voters will still support them.

    Recent research suggests that people do care a lot about policy positions. If someone is from my own party but disagrees with me on abortion or on immigration policies, then I do like them less. If you ask Americans “Why do you dislike the other side?” polls show that they will answer “Well, because they have different attitudes, because they are wrong.” 

    There is a lot we still don’t know about the relationship between attitudinal and affective polarization. How are you and your colleagues working to build the knowledge base?

    Our team at the Polarization and Social Change Lab is working hard on what we call the “depolarization challenge.” The evidence that is out there suggests that we need a lot of additional information. And with the political moment that we were in, we feel like we need that information quickly to determine to what extent is polarization — and, in particular, partisan animosity or affective polarization — an issue with important downstream consequences and how it can be changed.

    As soon as we launch the challenge, everyone out there who is interested will have a chance to submit interventions [strategies]. Then a certain number of our board of experts will choose what they think are the most promising interventions to be tested in a large-scale experiment. We will hopefully have more answers. I hope that by doing this challenge during this coordinated large-scale project, we will make fast progress on identifying more of the answers that we need to determine to what extent polarization matters, and how we can change the trend that we’re in right now.

    This story was produced by Freakonomics Radio, a We Are Not Divided collaborator.

    You probably hold certain beliefs that you think no one could ever change your mind about. Well, humor us, because in this episode of Freakonomics Radio, we’re pretty sure that we can change your mind about that

    The truth is, our minds are pretty changeable. Under the right conditions, even the concepts, ideologies and general truths that we hold to be sacrosanct are often prone to alteration. We showed you some examples of how this works in our story “Are You Liberal? Are You Sure?” And we’ll offer even more examples in the coming weeks.

    For now, however, tune in to this episode of Freakonomics Radio: “How to Change Your Mind.” It kicks off with a conversation between host Stephen Dubner and Reasons to be Cheerful editors Christine McLaren and Will Doig.

    This Q+A is published as part of an ongoing series interviews with members of Stanford’s Polarization and Social Change Lab

    A divided electorate, a gridlocked government and no end in sight. Where do we go from here? Robb Willer, a professor of sociology and psychology and the director of Stanford University’s Polarization and Social Change Lab (PASCL), has some ideas. He and his team of researchers have literally made a science of polarization, examining what it is, how it functions, why it occurs and what we can do about it. Over the course of We Are Not Divided, we’ll be talking with Willer and his colleagues about their research, taking a deep dive into the root causes and complex dynamics of division.

    In the first of these conversations, Willer discusses how “moral reframing” can help us confront existential threats — and prove that we’re not as divided as we think we are.

    This interview has been edited for length and clarity.

    Why study polarization?

    There are a lot of social problems that are going to require some sort of federal action if they’re going to be ameliorated: climate change, economic inequality, or an effective response to the Covid-19 pandemic. It’s impossible for us to pass significant legislation at the federal level with the levels of [political] elite polarization that we have.

    The polarization of politicians also facilitates polarization in the mass public — people see their leaders being more different, working together less and expressing more antagonism toward one another. They solidify partisan identities and a sense of partisan rivalry, and they develop partisan animosity.

    Robb Willer

    What are some ways to bridge those ideological chasms?

    The theory that I’ve worked with the most is moral foundations theory — specifically, a persuasive technique that we call moral reframing. Moral reframing involves making an argument for a political position or candidate in terms of moral values you assume your audience holds. We have a pretty good sense of which moral values — or moral foundations, as they’re called — are endorsed more by Democrats and liberals versus conservatives and Republicans.

    What are some of those moral foundations?

    Liberals in the U.S. tend to endorse the moral values of care and protection from harm, as well as equality and justice, more than conservatives do, and thus are more likely to view politics through the moral lens of: we need to protect people from harm and protect vulnerable groups. 

    Conservatives think about those things as well, but they also think about some uniquely conservative moral values that liberals really don’t account for. Conservatives value group loyalty and patriotism. They also value respect for authority — legitimate authorities — and they have respect for tradition as well. They also value purity, moral sanctity and religious sanctity more than liberals do. 

    Once you have this sort of moral map, you have a guide for formulating potentially persuasive political arguments to either a liberal or a conservative for a view they might not otherwise hold. 

    Can you give us an example? For instance, how could moral reframing be applied to an issue like climate change?

    If you were trying to persuade a conservative to be more concerned about climate change, a less persuasive argument, we find in our research, would be an environmental message in terms of protection from harm. That doesn’t tend to move the needle, at least in our research. 

    What might be more successful would be to make an argument in terms of patriotism, about the need to protect the country, the need to help the country maintain a position of international leadership relative to other countries. 

    You could conceivably make an argument in terms of national traditions around conservation and respecting the land in our country. You also can make an argument in terms of purity, the need to not desecrate and pollute our pure and sacred habitats. Those kinds of arguments would be more likely to resonate with conservatives because they fit with their moral values.

    Are existential issues like climate change truly polarizing, or does it just feel that way?

    By some measures, even a majority of Republicans believe in climate change and express a basic level of concern about it. And certainly a super-majority of Americans do. So you could make the case that when it comes to the climate change problem, public opinion isn’t the main obstacle. It’s that some very wealthy interests that don’t want to see climate change legislation happen have had an outsized influence on the political process, nullifying that super-majority. 

    With moral framing you can go up to someone and say, “You can still be who you are and agree with me on this.”

    Those structural mechanics are tremendously important, and ideally we would have basic structural reform around the role of money in politics, because absent that it’s hard to know how we would really combat [climate change]. But public opinion matters too, most of all because it gets people elected.

    Right, and to overcome those structures that override public opinion, we will need broad, diverse coalitions. So depolarizing climate change doesn’t mean moderating on climate change mitigation, it just means creating policies that link different morals and values. How do we do that?

    We’ve done research into seeing if we could construct arguments for same-sex marriage that were more persuasive to conservatives. So we constructed an equality-based argument that was along the lines of the dominant rationale for same-sex marriage, which basically said, “Gay people deserve the same rights as all other people, and that’s why you should support same-sex marriage.” 

    We tested that message against a very different argument that made the case in terms of patriotism and group loyalty, saying, “Gay Americans are proud, patriotic Americans who contribute to society, they contribute to the economy, they buy homes, they build families, and they deserve the same rights.” And it’s not that different from an equality message, but it’s clearly signaling that if you value group loyalty and patriotism, you should support same-sex marriage. 

    We found that conservatives were significantly more persuaded by the argument made in terms of patriotism and group loyalty.

    What about the flip side — something traditionally conservative appealing to liberals?

    One study we did made an argument for why there should be higher levels of military spending, which is a traditionally conservative position that liberals typically oppose. 

    We contrasted two different rationales. One was about the U.S. being a major superpower and the importance of sustaining that position in the global theater. We contrasted that with a rationale as to why the military is a force for equality and opportunity in America. 

    [The argument] made the case that the military is one of the few institutions in America where the poor and minorities can compete on a level playing field and can advance proportional to their talents and efforts in a way that they would otherwise struggle to because of barriers in the wider society. It was one of the first integrated government institutions, and today it plays a key role in helping the poor and minorities gain the resources needed to access higher education. 

    We found that, when presented with that argument, liberals were significantly more supportive of higher levels of military spending.

    To play the skeptic: If I were someone who was against more military spending, hearing that would be a little scary. I might think I’m being manipulated into believing in something that I rationally think is a net negative.

    While I’ve defended moral reframing as something that can be a sincere form of coalition-building in a pluralistic society — and I think it can help us bridge divides and make political progress despite our differences — I also believe that it can be a deeply cynical and strategic tool of manipulation. Just think about the purity-based arguments for the Third Reich or equality-based arguments for Stalin’s Soviet Union. Moral reframing is not new, and it’s been used for good ends and bad ends. 

    Why should good ideas require moral reframing? If they’re truly good, shouldn’t they succeed on their merits?

    You could make a case that America is the most diverse country ideologically and demographically in the world. We’re going to have to be comfortable with agreeing to do things for different reasons because we have a lot of differences. If we’re going to hold out and say we need everybody in this coalition to not just get on board for this issue, but get on board for the same reasons as me, you’re going to be waiting a long time. It’s just not realistic. 

    But people also often don’t see that they’ve been framed into supporting the positions they hold. Everybody perceives their issue positions through a subjective filter that they’ve constructed or that has been constructed for them.

    One of the things I think is attractive about moral framing is that you can go up to someone and say, “You can still be who you are and agree with me on this.” And I don’t have to try to move the fiber of your being. Maybe they do get exposed to those [different] values over time, and they just get there from where they already were.

    The experiment’s participants were politically minded, sure of their ideologies. Which is why, upon learning that they had just expressed support for an issue they actually oppose, many of them tried to insist they must have misread the question. More than a few were flat-out confused. And, perhaps surprisingly, a handful were relieved to find that they were more ideologically flexible than they realized. 

    “They told us, ‘Thank God I’m not a left-winger,’” says Philip Parnamets, a psychologist who helped design the crafty experiment that would trick its subjects into defending a political view they disagreed with. “They were like, I didn’t know I could think this way.” Which made Parnamets realize something: “You could see this as a tool for self-discovery. It seemed to open up the possibility of change.” 

    “The idea that one arrives at their political beliefs through careful and considered reasoning only is fictional.”

    The premise of Parnamets’ experiment — that a simple psychological game could meaningfully alter a person’s political positions — is something most people probably assume couldn’t work on them. Most of us see our ideological viewpoints as the result of thoughtful, objective consideration. “We live in a world where people think political attitudes are sacred things,” says Parnamets, “that they shouldn’t be changeable at all.” 

    But a growing body of research suggests that’s not true, and that our politics may be far more flexible than we think. “The idea that one arrives at their political beliefs through careful and considered reasoning only is fictional,” says David Melnikoff, a postdoctoral fellow at Northeastern University who studies attitude change. “Whether it’s about a country, party, policy or politicians, attitudes can be radically changed on the basis of your current stimuli.” 

    Parnamets’ experiment did just that. He and his colleagues gave each subject an iPad that displayed a series of paired, opposing ideological statements. The subjects used their fingers to draw X’s on the spectrum between the two statements to indicate their level of support or opposition to each. 

    What they didn’t know was that the iPads were programmed to secretly move some of their hand-drawn X’s to different parts of the spectrum. Suddenly, an X drawn next to the statement “I support raising gas taxes” was now closer to “I oppose raising gas taxes.” The researchers then showed the subjects their iPads to see how they would react. Some of them cried foul. But more than half accepted the altered opinions as their own. 

    Even more remarkably, when asked to explain their thinking behind these opinions, many of the subjects took pains to describe in detail why they had supported a political stance that they hadn’t actually chosen. It was these participants whose political opinions shifted the most dramatically — in fact, their “new” opinions held fast even a week later when the researchers checked in on them again. 

    “We see a larger attitude change when participants are asked to give a narrative explanation of their choice because they’re then more invested in that view,” says Parnamets. Psychologists call this “choice blindness” — people often fail to notice that a choice has been swapped, and when they rationalize a choice they didn’t actually make, their preference can shift toward it.

    Melnikoff has conducted similar experiments into attitude change, in which participants are primed through exercises to generate positive feelings toward things they don’t actually like. In one such experiment, Melnikoff’s subjects exhibited lower feelings of disgust toward, of all monsters, Adolf Hitler after being told they would have to defend him in court. “All it takes to change someone’s affective response to something is to induce them to have a positively or negatively valenced action toward that person or thing,” says Melnikoff. Even if, intellectually speaking, the subject knows this person or thing is bad, they can still “feel good” about it, like a dieter salivating at the sight of an ice cream sundae they know they shouldn’t eat.

      Part of the reason our choices and beliefs can be so easily changed is that our brains have evolved to help us navigate life by avoiding friction and complications. “The brain’s job is to predict, to guide you through an environment without making too many errors, and to help you adapt to that environment,” says Jordan Theriault, a researcher who studies the neural and biological bases of behavior and judgment. “The behaviors people take on and the beliefs they hold are about managing stress and arousal and discomfort.”

      Our brains are built to anticipate and avoid friction, which may help explain where our ideologies come from. Illustration courtesy of Dr. Lisa Feldman Barrett

      Looked at from that perspective, it makes sense that our beliefs should adapt to fit our environments, coalescing into ideologies that make the world feel easier to navigate and understand. You can see this most clearly in partisan politics, wherein an ideology’s potential to bind you together with “your group” may be more important than the ideology itself. 

      “Part of partisanship is about being part of partisan conflict,” says Theriault. “You have your people and you have the other people you consider yourself against, and that’s an environment where it makes sense to have these beliefs. But if you’re removed from that conflict position, your partisan beliefs may not serve as much of a purpose anymore.”

      Removing ourselves from that conflict position is easier said than done in a world where it feels like every politician, pundit and loudmouth on Twitter wants us to do the opposite. But in those rare instances where we can manage to put conflict aside, it’s possible to free ourselves from our rigid political mindsets and see the other’s point of view.

      One technique that has gained interest in political advocacy is “deep canvassing.” Traditional political canvassing involves identifying your supporters and making sure they get out and vote — basically, it seeks to leverage partisan feelings to the party’s advantage. Deep canvassing, on the other hand, does the opposite: Canvassers go door to door, but instead of pumping up the passions of their supporters, they listen closely to those who hold opposing views. “What we’ve learned by having real, in-depth conversations with people is that a broad swath of voters are actually open to changing their mind,” Dave Fleischer, one of the technique’s best-known practitioners, told the New York Times Magazine in 2016.

      Deep canvassing can leverage the same tribalist power of partisan politics, but turn that power toward finding common ground rather than fighting to the death, according to Theriault. “Just by showing up on someone’s doorstep to talk to them about what they believe, you’re essentially building a new relationship” — a tribe of two — “even though it’s a very short one at the door,” he says. And in an age when so many political affiliations are cultivated online, the face-to-face offering of an olive branch becomes all the more powerful. “It’s difficult to be genuinely listened to on social media,” says Theriault, “so I think being genuinely listened to is a way of building a connection to people — and even working out what you believe, too.”

      In essence, deep canvassing functions not unlike Parnamets’ experiment with the iPads. Both encourage their participants to slow down, rethink their initial position and then engage in a meaningful narrative about the opposing point of view.

      Such narratives have powerful effects on our brains — we are more easily swayed by them than we realize. In his book Behave: The Biology of Humans at Our Best and Worst, neuroscientist Robert Sapolsky writes that simply naming a game the “Wall Street Game” is likely to make players compete more ruthlessly than if they are told the game is called the “Community Game.” Telling doctors a drug has a “95 percent survival rate” makes them more likely to prescribe it than if they’re told it has a “five percent death rate.” Subtle cues can alter even our most cherished beliefs. In one experiment conducted in the U.S., survey respondents were more likely to support egalitarian principles if there was an American flag hanging nearby.

      “I think we have an untapped reservoir for flexibility in our attitudes and beliefs,” says Parnamets, “but it’s difficult to access because there are many reasons for holding tightly to beliefs — sense of security, sense of belonging, self esteem — and those might actually close you off to other views, even if you’re the type of person who could actually hold a different belief than the one you’re holding.”

      “But if you can have a discussion with yourself, which our method allows you to do,” adds Parnamets, “we see a real possibility of change.”