Cognitive Dissonance and Philosophy

Posted on April 5, 2017 by Mike



I want to first give credit to the authors of “Mistakes Were Made (but not by me)” – Carol Tavris and Elliot Aronson. Their discussion of cognitive dissonance and their metaphor of the ‘pyramid of choice’ have inspired my comments below. Although the ideas in this book have obvious ramifications for psychology, psychotherapy, political science, behavioural economics, marketing, education, conflict resolution and so on, I will use them in this post to make a few points and raise a few questions about philosophy specifically. There is much more to be said than what I’ve put here, but I’ll keep it brief and try not to spoil too much for those who haven’t yet read the book.

The picture below is of course my own, since it is quite terrible. Further, it, as well as the description that follows, merely represents my own interpretation of the core ideas in “Mistakes Were Made.”

[Image: my sketch of the ‘pyramid of choice’]

There are two main points/questions I want to raise, but first I’ll offer context for this picture. Cognitive Dissonance (CD) is a real thing, in the sense that it is robustly supported by empirical evidence: it is the intellectual stress that a person encounters when she perceives an inconsistency between her beliefs, or between a set of beliefs and her actions. This stress leads humans to reliably act in various ways that we would normally consider to be quite irrational, but which are correctly predicted by CD theory as an attempt to reduce dissonance.

For example, it correctly predicts that people will like complete strangers significantly more after they have been asked to say nice things to them, and dislike them significantly more after they have been asked to say nasty things. Their actions (saying either good or bad things to a complete stranger) must be reconciled with their beliefs about themselves as decent people, even if the action came first and was without any justification whatsoever. The way to reconcile the two is to feel that the person must somehow deserve what they received (because I’m a good person, and a good person wouldn’t hurt someone who doesn’t deserve it).

This is a little scary: I may very well hate you just because I did something bad to you (which could then lead to further mistreatment of you by me). However, on the flip side, it means I will like people I do good things for, and this will then be reinforced by more good deeds. The practical upshot of this, which I found totally mind-blowing, is that an effective way to make someone like you is to have that person do a big favour for you. They will typically reconcile their actions by changing their beliefs about you: “I wouldn’t do anything nice for a person that didn’t deserve it, so they must actually be a good person.”

Powerful internal forces can reduce dissonance in various ways, unbeknownst to the subject: by actively shaping their beliefs (as above), by making them ignore any and all contrary evidence, and even by altering their memories (the examples in this book, done with control groups no less, will absolutely shock you). An inability to reduce dissonance is not necessarily a good thing, as it is linked to depression and PTSD in extreme cases. However, these forces lead to an escalation of sorts if the person allows them to run unchecked, as we witness so often in disputes between countries, between companies, and in our personal lives (think of the animosity between many divorcing couples). Enter: the metaphor of the pyramid.

The pyramid applies to cases where people are initially somewhat ambivalent about a particular decision, and the steps represent the first and subsequent actions taken in a particular direction. While at first any particular person could go left or right, as soon as they make a decision or take that first step, a wedge or polarization tends to occur. CD tends to make people “forget” that at one point they actually could have gone either way: people self-justify their initial course of action, altering their beliefs about themselves and their actions, and then take the next steps and repeat the cycle through further self-justification and confirmation bias (ignoring counter-evidence). This eventually pushes them down one side of the pyramid, and farther and farther from those on the other side. In the end you can have two people finding themselves completely at odds, and in the worst of cases, ready to go to war over a topic that was initially met with ambivalence!

Once someone moves down one side of the pyramid, moving to the other side becomes almost impossible due to the cognitive dissonance that would result, especially when a person has incurred great costs to get there. To move to the other side would essentially mean that their effort, money, actions, decisions and beliefs have been wrong-headed, and this is (almost) impossible for one to recognize and accept. Although the authors don’t say it, perhaps this is one reason we have diametrically opposed “isms”, camps, and especially activist groups of any kind – all of which may contain people whom we consider to be quite reasonable, intelligent, and self-reflective.
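To make that feedback loop concrete, here is a tiny toy simulation of the pyramid dynamic. It is purely my own illustrative sketch, not anything from the book: the parameters (a small ‘self-justification’ nudge after each action, and a ‘bias’ factor that discounts contrary evidence) and their values are invented for the purpose of showing how two nearly identical, nearly ambivalent starting points can end up at opposite corners of the pyramid.

```python
import random

def simulate_pyramid(initial_lean, steps=20, justification=0.3, bias=0.8):
    """Toy model of the 'pyramid of choice' dynamic described above.

    attitude runs from -1.0 (far down the left side of the pyramid)
    to +1.0 (far down the right side). Each round the agent acts in
    line with its current lean, then self-justifies that action
    (pulling the attitude further the same way) and discounts most
    contrary evidence in proportion to `bias`.
    All names, parameters and values are illustrative, not from the book.
    """
    attitude = initial_lean
    for _ in range(steps):
        action = 1 if attitude >= 0 else -1                        # act on whichever way you currently lean
        attitude += justification * action * (1 - abs(attitude))   # self-justify the action just taken
        contrary_evidence = -action * random.uniform(0.0, 0.2)     # evidence pointing the other way
        attitude += (1 - bias) * contrary_evidence                 # confirmation bias: most of it is ignored
        attitude = max(-1.0, min(1.0, attitude))
    return attitude

random.seed(1)
# Two people who start out almost indistinguishably ambivalent...
left  = simulate_pyramid(initial_lean=-0.01)
right = simulate_pyramid(initial_lean=+0.01)
print(f"started at -0.01 -> {left:+.2f}")   # ends far down the negative side
print(f"started at +0.01 -> {right:+.2f}")  # ends far down the positive side
```

Running this, the agent that starts at -0.01 ends up near one extreme and the agent that starts at +0.01 near the other: a trivially small first step, amplified round after round by self-justification and ignored counter-evidence. That is, of course, just the metaphor restated in code, but it shows how little the initial difference has to be.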

That’s my quick description, and now onto my first point, which I’ll frame as a potential criticism of philosophy. Although confirmation bias is certainly discussed in the philosophy of science (the logic of confirmation and induction), I wonder why the notion of cognitive dissonance is not also a prominent part of discussions about science, the demarcation of science from pseudo-science, and how scientific theories change. I’m not saying that no philosopher has ever mentioned this concept (since I haven’t read everything), but I am saying that it was certainly not a major discussion item when we were learning about Popper, Lakatos, Kuhn, etc. And I think it is extremely relevant: of course science will not just throw out a theory when new (contradictory) evidence emerges – for science is made up of scientists, and scientists are people. And people have strong psychological mechanisms for reducing cognitive dissonance, including the ability to explain away contradictory evidence! This would, in my opinion, add an interesting dimension to the whole discussion, especially for the prospects of developing an adequate theory of demarcation.

I’m worried that this shows that philosophy is too narrow, too inward-looking, and not branching out to learn or synthesize ideas from other fields even when those ideas would add value. I am not the first to raise this concern about philosophy and academic disciplines in general, but perhaps CD is actually not the best example of this – your comments on whether I’ve missed the boat here are welcome.

But now to perhaps a bigger worry for the more practical aim of philosophy. As anyone viewing the picture above might guess, it is very difficult to persuade anyone who has already made many decisions, exerted a lot of effort, or otherwise committed to a particular side (and hence “walked down the pyramid”) that they have made a mistake, or are wrong in their conclusions. In fact, evidence in the book shows that trying to convince people already in this state using facts or arguments tends to further solidify their resolve that they are correct.

As an aside, nowhere is this more obvious right now than in the current division between Trump supporters and condemners amongst US citizens. I have acquaintances who sit on both sides, and it is truly astonishing just how polarized they have become over time – and the news is providing more information on a daily basis to further divide the camps. Some now simply ignore any and all evidence that contradicts their beliefs, and “fake news” has become an all-pervasive meme. I of course have my own opinions on Trump, but I’ve found this polarizing phenomenon to be much more interesting…

…which leads me back to my worry: CD seems to imply that arguing, debating, or trying to persuade people who would have to deal with significant cognitive dissonance to change their beliefs is actually quite pointless. Philosophers do no good here; worse, they actually do harm: they further entrench the very beliefs they are trying to change.

However, it is not all bad news. Philosophy, if it catches people early in their formal education, exposes them to the concept of confirmation bias, encourages the principle of total evidence, and enhances critical thinking in general, might prevent people from sliding down the pyramid too quickly (or at all). Perhaps dogmatic entrenchment can be prevented, even if it cannot be cured? Philosophy does not play this role uniquely, of course; indeed, the authors noted above point to ‘proper scientific training’ as a good antidote to such entrenchment. But philosophy can certainly do a great job in this area and can and should tag-team with other academic disciplines to help combat this phenomenon.

I’m curious what readers think about the ramifications of this for philosophy as a discipline, and as always, thanks for reading!

–Mike