“Look it up” should be a good response to a dispute about matters of fact where a correct answer already exists. This is why bars used to keep sports record books handy; bets could be settled quickly and conclusively.
But “look it up” relies not only on there existing a source of (largely) correct information, but also on there being agreement about what that source is. So, for example, we have overwhelming evidence that fluoridation of water reduces cavities in children, and that it is safe at the usual concentration, roughly one part per million, found in public water supplies. One could cite statements by the CDC, the EPA, the Department of Health and Human Services, or the many systematic reviews published over the last 50 years.
But consulting science journals, or trusting government agencies, is not part of everyone’s epistemic practice. This alone wouldn’t be a crushing problem if other sources hadn’t developed, spreading opposing, and false, information. And that wouldn’t be a problem if those sources didn’t have communities which treat them as authorities.
Which presents what may be the gravest problem for knowledge in the contemporary world: it’s not that people don’t understand that knowledge claims need to be justified; it’s that they lack the tools to justify them. And it’s not distrust of authorities that leads to this; at least in a large class of cases, it’s trust in false authorities.
So discussions of these topics often reach a standstill when each side cites preferred authorities. (This is true not only of scientific questions, obviously, but of political questions as well.) The problem, then, is not finding the right authority, but trying to establish why one authority is better than another, and this comes down to two large criteria: trustworthiness, and method.
Many people do not trust governments, scientists, drug companies, “the mainstream media,” etc. They will, to some extent, automatically side against these sources. And this is not simply for lack of knowledge: often people distrusting these sources do so because of knowledge of specific instances when these sources were, in fact, not trustworthy.
It’s generally easy enough to show that the alleged “alternative” sources have also engaged in untrustworthy behavior, but that only returns us to the stalemate. So what’s necessary, in order to establish trust, is to turn to method, and this is where conversation is generally insufficient.
First, there are as many methods as there are fields of inquiry, and then some. There are certainly some overarching marks of good methodology, and something like a critical thinking course may be a good place to lay out some of these, but concrete examples tend to register better, and here a study of, say, the methodology of medical science would be helpful.
Very few people understand how a controlled experiment works, what replication is and why it’s done, what sorts of biasing effects enter into studies, how effect size and sample size factor into the reliability of any given experiment, how a systematic review increases knowledge, etc. And this is not something that can be explained in a single debate, a single classroom meeting, or, sadly, a single legislative session.
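To give one concrete illustration of the sample-size point (a hypothetical simulation of my own devising, not drawn from any actual study): if a treatment has a modest true effect, many small studies will produce wildly varying estimates of that effect, some even pointing the wrong way, while larger studies converge on the truth. A minimal sketch in Python, with all names and numbers invented for illustration:

```python
import random
import statistics

def run_trial(n, true_effect=0.5, noise=2.0, rng=None):
    """Simulate one controlled experiment with n subjects per arm.

    Control outcomes are random noise around 0; treatment outcomes are
    the same noise around true_effect. Returns the estimated effect
    (difference in group means), which varies from trial to trial.
    """
    rng = rng or random.Random()
    control = [rng.gauss(0.0, noise) for _ in range(n)]
    treated = [rng.gauss(true_effect, noise) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

rng = random.Random(42)  # fixed seed so the sketch is reproducible

# 200 small studies: the estimates scatter widely around the true effect (0.5).
small = [run_trial(10, rng=rng) for _ in range(200)]
# 200 large studies: the estimates cluster tightly near 0.5.
large = [run_trial(1000, rng=rng) for _ in range(200)]

print(round(statistics.stdev(small), 2))  # spread of small-study estimates
print(round(statistics.stdev(large), 2))  # far smaller spread
```

This is part of why a systematic review, which pools many studies, tells us more than any single small study can.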
So what’s called for is a large-scale educational outreach on how good methods produce better results, and the ways in which scientific communities work cooperatively, and in competition, to weed out bad results.
But at the same time, there is a counter-educational effort going on designed to fight this. The head of the EPA wants to have “debates” between climate change deniers and mainstream scientists, as though that were how truth is arrived at. News shows often “balance” appearances by scientists who support the scientific consensus with appearances by deniers. So the public perception is (1) that there is serious debate on the question among experts, and, what is perhaps worse, (2) that the answer comes from seeing who wins a debate.
But debate is not necessarily conducive to truth, and debate certainly isn’t the method of science, at least not where “debate” means a face-to-face discussion between two people holding opposing points of view. Many basic epistemic principles simply do not hold in debate. Debaters are not supposed to revise their views during a debate, but scientists must consistently revise their views in the face of new evidence. The winner of a debate is chosen by a judge, or by a vote of viewers, whereas the “winner” in a scientific study is the result that survives intense scrutiny over many studies and is shaped and molded by the results of those studies. A debater is limited to a few responses and speeches, but science continues for as long as an issue is live. And, perhaps most importantly, a debate has two, sometimes three, positions, whereas true scientific study awaits any new position that withstands scrutiny, and pays little attention to positions which lack proper method.
With the most easily accessible media often presenting science as a debate, and with borderline and unsupported views included in the debate for “fairness” or “balance,” the public is given a warped view of the method of arriving at a reasonable conclusion. The introduction of the internet made finding reliable information easier for those who are already reasonably well-educated in a topic area, but it also made finding unreliable voices much easier, with contrarian views proliferating wherever they can find an audience by appealing to vanity, to a sense of rebelliousness, to fear about the government or industry, or whatever other click-baiting tricks they have available.
Actual research papers are not very click-baity. They are often incomprehensible to the lay person, and the results they put forward are rarely as exciting as headlines, even (especially!) when the headlined article is reporting on that very study.
I don’t have a solution to this problem. As a philosophy professor who teaches critical thinking classes, I try to address it, but the number of students who study critical thinking each year isn’t large enough to make a huge dent in the problem, and a single critical thinking course may not have a lasting impact. Maybe what’s needed is the introduction and repetition of true critical thinking techniques earlier in the curriculum, but I don’t see this happening. Critical thinking courses challenge beliefs that have strong political constituencies; in 2012, the Texas GOP even included an anti-critical-thinking plank in its platform[1]! Such courses also require a great deal of teacher training, without which, at least based on my work training critical thinking teachers at the college level, some very serious errors can wind up being taught as though they were central to critical thinking: I’ve had instructors tell me that they taught students that there is no truth, only persuasion; that when people talk about religion they make no objective claims; and that knowing who makes a claim can tell you whether the claim is false, for example.
[1] https://www.washingtonpost.com/blogs/answer-sheet/post/texas-gop-rejects-critical-thinking-skills-really/2012/07/08/gJQAHNpFXW_blog.html
Charles V. DiGiovanna
July 6, 2017
Well written, James. Conflating debate with research and claiming “settled science” is a mark of individuals who, for one reason or another, want to find evidence to support a desired position or belief. It is a practice generally used by those who rely on intuition and “feelings” as their criteria for choice and decision.
To employ that practice, i.e., debate as the criterion, is to debase the scientific method, which is what must be used to understand reality as it is, not as one might wish it to be.
Tom Clark
August 26, 2017
You say “There are certainly some overarching marks of good methodology, and something like a critical thinking course may be a good place to lay out some of these, but concrete examples tend to register better, and here a study of, say, the methodology of medical science would be helpful.
Very few people understand how a controlled experiment works, what replication is and why it’s done, what sorts of biasing effects enter into studies, how effect size and sample size factor into the reliability of any given experiment, how a systematic review increases knowledge, etc.”
That’s a really bad example given what we know about how pharmaceutical companies misrepresent the results of studies, e.g. by not reporting negative results. Maybe that’s one of the reasons people don’t trust science.
You say “But debate is not necessarily conducive to truth, and debate certainly isn’t the method of science, at least not where “debate” means a face-to-face discussion between two people holding opposing points of view. Many basic epistemic principles simply do not hold in debate…”
It’s a mistake to turn your nose up at debate and politics. What you need to do is convince people, and debate is one of the ways you do that. You may think that truth is a matter of evidence, testing, and understanding, but in the real world where people decide what to do, where it really matters, the truth is what you can convince people of.
James DiGiovanna
August 26, 2017
When I mention “biasing effects,” I include publication biases. I didn’t have space in this post to go into that, but in a proper critical thinking course that’s an essential topic. And I’m certainly not saying debate is worthless; it’s just that it’s not as truth-conducive as it purports to be. And I’d be very cautious about saying “truth is what you can convince people of.” That’s a self-refuting position, at best.