It should come as no surprise these days that science is under attack. Climate-change deniers, anti-vaxxers and a host of others feel free to doubt or dismiss hard-won scientific knowledge based on a standard of belief that is ignorant if not outright irrational. To claim that climate change "isn't settled science" or that the safety of vaccines "must be proven" reveals a level of misunderstanding about how science works that is so basic it must be addressed before we have any hope of getting someone to accept the findings of science.
So how to fight against this?
One way not to defend science is to pretend it is perfect. The myth that science should rely on proof or certainty -- or that there is some sort of "scientific method" that even flawed human beings can follow to produce guaranteed results -- is a view so harmful to scientific understanding that it only gives aid and comfort to its enemies. Science deniers love to exploit uncertainty and use it as a cudgel. Instead, I recommend that we embrace what is most distinctive about science, which is not its method or logic but instead one of its values: the "scientific attitude."
The scientific attitude is the idea that scientists care about evidence and are willing to change their views based on new evidence. It is a community standard of transparency, skepticism and willingness to test one another's work that has proven itself through time as the best means of understanding the empirical world. Scientists understand this and recognize that although they may aim at the goal of "truth," this can never be reached in practice. Instead science is founded on the idea of "warrant," which is the justification of belief based on fit with the evidence. Still, no matter how strong one's evidence, it is always theoretically possible for some future fact to come along and overthrow a theory. That is just how inductive reasoning works.
In a recent book, The Scientific Attitude: Defending Science From Denial, Fraud and Pseudoscience, I talk about how we can use a values-based conception of science to defend it from the threat of science deniers and pseudoscientists, who attack it using a mishmash of conspiracy theories, cherry-picked evidence and other reasoning flaws that are born of ideology and wishful thinking. In a world in which the White House spent two full years without a science adviser, and the U.S. Congress has many members who still refuse to embrace the scientific consensus on climate change, this is no trivial thing.
It seems important for scientists (and others who care about science) to tell its story without trying to hide its flaws. We need not only to share the results of science but also to explain the rigors of the process by which this knowledge is reached.
You don't convince someone who doesn't care about evidence by showing them more evidence. You persuade them by engaging in conversation to show where their reasoning is flawed. One way to do this is to model the scientific attitude in practice by telling the truth to science deniers: they are right that no scientific theory can ever be "proven." But then we must immediately explain that this is an absurd standard for belief that is not even practiced by science deniers themselves (who often gravitate toward conspiracy theories for which they have no evidence whatsoever). Evidence is crucial, even if it can never amount to proof.
Yet lack of certainty does not mean that likelihood and probability go out the window. Some beliefs really are better justified than others. And some, which have no supporting evidence at all -- or have been ruled out by experiment -- can just be dismissed.
Rather than being embarrassed by their inability to offer proof or certainty, I wish that scientists would challenge the ridiculous (double) standard of evidence that science deniers often employ, whereby no evidence is good enough to convince them of something they don't want to believe, yet no evidence is required to get them to accept something they do want to believe. Indeed, if scientists would embrace a values-oriented way of defending what is special about science, they could say what they know in their hearts to be true: that far from a weakness, uncertainty is one of the greatest strengths of scientific reasoning.
Along the way, they could also take care to dispel the myth that science has to be perfect before any of its findings can be believed, by discussing some of the problems with science but then explaining the extraordinary efforts that science has taken to correct them.
For example, there are some common practices in academic research that scientists sometimes push to the limit -- like "p-hacking" (reworking or re-analyzing data until a statistically significant result appears) or being selective in reporting their data -- that should be discouraged as antithetical to the scientific attitude but that may be inadvertently encouraged by some of the pressures and expectations of academic institutions themselves. And yet the good news is that scientists are policing themselves when it comes to such practices. Now that the problems with p-hacking are getting more publicity, some journals have stopped asking for p-values altogether. Critics have also proposed various statistical tests to detect p-hacking and shine a light on it. The next step might be to change the reporting requirements in journals, so that authors are required to compute their own p-curves, which would enable other scientists to tell at a glance whether the results had been p-hacked. Others have called for the mandatory disclosure of all the degrees of freedom taken in producing a paper's results, along with the size of the effect and any information about prior probabilities.
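To make the worry about p-hacking concrete, here is a minimal simulation sketch (my illustration, not from the book or this essay; the function name and parameters are invented for the example). It models one common form of p-hacking, optional stopping: a researcher keeps adding data and re-running the test until the p-value dips below 0.05. Even when both groups are drawn from the same distribution, so there is no real effect, this peeking pushes the false-positive rate well above the nominal 5 percent.

# Illustrative sketch only: compares an honest, single test at the end
# with "peeking" (re-testing after every batch and stopping when p < 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(peek, n_experiments=2000, batch=20, max_batches=10):
    hits = 0
    for _ in range(n_experiments):
        a, b = np.empty(0), np.empty(0)
        for _ in range(max_batches):
            # Both groups come from the SAME distribution: the null hypothesis is true.
            a = np.concatenate([a, rng.normal(size=batch)])
            b = np.concatenate([b, rng.normal(size=batch)])
            p = stats.ttest_ind(a, b).pvalue
            if peek and p < 0.05:      # p-hacking: stop as soon as p looks good
                hits += 1
                break
        else:
            if not peek and p < 0.05:  # honest analysis: one test, at the end
                hits += 1
    return hits / n_experiments

print("honest, single test:", false_positive_rate(peek=False))  # roughly 0.05
print("peeking after every batch:", false_positive_rate(peek=True))  # well above 0.05

This kind of hidden flexibility in how and when the analysis is run is precisely what proposals like p-curves and the mandatory disclosure of researcher degrees of freedom are meant to expose.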
Moreover, if a publication gets through this gauntlet but still turns out to be irreproducible or just has some mistakes, it can always be retracted. Scientific journals have long had a mechanism for the retraction or correction of their publications. Out of concern that insufficient attention was being paid to such notices (which might lead other scientists inadvertently to build upon irreproducible work), in 2010 researchers Ivan Oransky and Adam Marcus founded a website called RetractionWatch.com, where one can find an up-to-date list of scientific papers that have been retracted. Publicizing retractions might also provide an additional incentive for researchers not to end up on such a public "wall of shame" (though it is important to note that retracted work is not necessarily indicative of fraud or other malfeasance). On their blog, Oransky and Marcus argue that Retraction Watch contributes in part to the "self-correcting" nature of science.
In other words, if left to themselves, scientists have always done a good job of creating an environment in which evidence matters -- and of punishing those who cheat on this principle by all but excommunicating them from the profession. While science may not necessarily be self-correcting, it likely comes as close as possible to the fact-based, rigorous testing of our beliefs against reality that could be invented by the human mind. In a world of cognitive bias, spin, ideology and just plain bullshit, science is one of the few instances where we catch humanity at its best.
What is most special about science is not that it is perfect -- or that it is practiced by perfect individuals -- but that it is based on a set of values that seeks to keep us honest even in the face of wishful thinking, self-interest and a perverse set of motives that are sometimes exacerbated by the environment in which scientific research is practiced. Even if we delude ourselves or outright cheat, the community of scientists can correct us, guided by an expectation of openness to new ideas, coupled with rigorous skepticism and testing, that will in the long run ferret out even the most stubborn mistakes and misconceptions.
Science is extraordinary enough that we can defend it despite the challenges of its practice, based on the communitywide acceptance of its creed. No idea should be excluded merely based on where it came from, but neither can it be accepted until there is sufficient evidence in its favor and a rigorous attempt has been made to show that it is the best explanation for the phenomena in question. If it is, then it can be accepted as warranted … until some future evidence may come along to overthrow it, leading to an even better-warranted hypothesis.
Despite its drawbacks, this is the best way we can attempt to know a world in which there will always be some uncertainty.
This is the scientific attitude.
Lee McIntyre is a research fellow at the Center for Philosophy and History of Science at Boston University. He is the author of The Scientific Attitude: Defending Science From Denial, Fraud and Pseudoscience and Post-Truth, both published by MIT Press.