Stuart Ritchie has written a very interesting and timely half of a book in Science Fictions. He has correctly identified and painstakingly articulated why those who claim to have clear-cut answers based on “science” should be taken with a grain of salt. His biggest accomplishment is reminding the reader that part of the problem with science is that humans, we imperfect creatures with all of our biases and flaws, are the ones actually doing the science. That is a particularly important thing to remember as more and more policy in the world of COVID seems predicated on “science.” Ritchie’s bracing warning about the limitations of scientific research is a welcome message at the moment. Overcoming the COVID pandemic, or at least effectively managing its risks, requires tremendous scientific precision and accuracy to create a successful vaccine. Science can also warn us about truly risky behaviors, and develop other treatments for those with COVID-19. Yet Ritchie falters in the later stages of the text, where he conflates his overlapping concerns and offers prescriptions for the problems facing science that fail to align human nature with the institutions of scientific research. He also fails to recognize the role that government plays in funding and directing scientific research.
Much of Ritchie’s book addresses the crisis many fields are facing because of an inability to replicate research findings. Replication is the ability to reproduce reported results in subsequent research, whether across new samples or, more straightforwardly, using the same data from the original published work. Ritchie does a convincing job explaining the scope of the problem to his readers. This is particularly true in psychology. Replication work has invalidated about half of the psychology papers that have appeared in major journals such as Nature. For example, in a well-known and widely cited study, participants who were shown words like grey, knits, wise, and Florida appeared to walk more slowly because of the supposed “priming” effect of those words. However, subsequent attempts to replicate this study in different contexts did not show any effect. That inability to replicate results is not limited to psychology. Ritchie also discusses examples from medicine, where the inability to replicate findings has more costly and tangible consequences.
Compounding the problem is another legitimate issue Ritchie discusses: the bias of journals toward publishing only what he calls “positive, flashy, novel, newsworthy” findings rather than dull replications and null results that don’t excite readers. More than a few academics will acknowledge that they have papers in their “file drawers” that didn’t produce positive results and that they never bothered to send out for review. There is a great deal of potentially beneficial human knowledge hidden away because of these journal biases.
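The distortion the file-drawer problem creates can be seen in a minimal simulation (my own illustration, not from the book): if the true effect in a field is small and only studies that clear a significance threshold get published, the published literature will systematically overstate the effect.

```python
import random
import statistics

random.seed(1)

# Assume a small true effect of 0.1 standard deviations. Each "study"
# estimates it from a sample of n = 20, but only estimates beyond a crude
# p < .05 cutoff (known SD = 1) make it out of the file drawer.
n = 20
cutoff = 1.96 / (n ** 0.5)

estimates = [
    statistics.mean(random.gauss(0.1, 1) for _ in range(n))
    for _ in range(5000)
]
published = [e for e in estimates if abs(e) > cutoff]

# The full set of studies averages near the true effect of 0.1; the
# "published" subset, filtered by significance, averages several times higher.
print(round(statistics.mean(estimates), 2))
print(round(statistics.mean(published), 2))
```

The point of the sketch is simply that the filtering step, not any individual researcher’s dishonesty, is enough to bias the visible record.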
Ritchie then spends the next four chapters describing what he calls “Faults and Flaws,” namely fraud, bias, negligence, and hype. Fraud is about what the reader might expect. Ritchie highlights some of the better-known cases of fraud in various disciplines, including political science, social psychology, physics, and medicine, among them the now retracted and debunked paper in the Lancet on the relationship between autism and vaccinations. He explores the possible motivations behind these cases, and while greed and ambition are most frequent, he also speculates that in some cases it is researchers’ hope and desire that their hypotheses be true that prompts them to falsify data. This raises interesting questions about human nature and the influence of prior beliefs in “science.” The reader will be both amused and somewhat horrified to learn that there is a Hall of Shame for individuals who have been forced to retract many of their scientific papers because they engaged in fraudulent science. Additionally, Ritchie shows that many of these retracted papers continue to be cited favorably by researchers who are unaware they have been retracted. This is obviously a serious problem.
This leads him to his second problem, which he describes as “bias.” Now “bias” for Ritchie means a couple of different things, but it seems to boil down to the mostly unconscious biases scholars bring to their scientific research. He begins this chapter by discussing the historical “replication” of Samuel Morton’s infamous craniometry study of skulls from different human races, which claimed to show the larger average size of Caucasian skulls. The reanalysis by the esteemed paleontologist Stephen Jay Gould concluded that Morton had made errors such as selecting more male Caucasian skulls (thus inflating the average size). Ritchie tells his readers that Gould said of Morton’s research, “all of his errors moved the results in that same direction,” and the reader is left with the impression that bias is really about prior belief.
He then goes on to explain bias in publications (the above-mentioned general bias toward positive rather than null results) as well as what he calls “p-hacking,” the manipulation of statistical significance tests to achieve significance for important variables in research. Let me be clear: all of these issues are significant problems in the social and “hard” sciences. However, the way he blends them makes the chapter a little choppy and tough to follow, especially for someone not as familiar with statistics and methodology. He does, though, do a solid job explaining graphically how small sample sizes can skew results. But as the chapter ends he returns the reader to the story of Morton and Gould and tells us that another anthropologist retested the skulls yet again. This researcher, using more modern technology, found errors both helping and harming Morton’s case, while Gould’s own errors ran in only one direction. If a reader is left somewhat confused about how exactly Ritchie is thinking about bias, she can probably be forgiven.
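Ritchie’s point about small samples is easy to demonstrate numerically. The sketch below (my own illustration, not drawn from the book) runs many simulated “studies” against a population whose true mean is zero: small-sample studies scatter widely around the truth, which is exactly the noise that significance-chasing can exploit.

```python
import random
import statistics

random.seed(0)

# Each "study" estimates the mean of a population whose true mean is 0
# (standard normal). We compare the spread of estimates at two sample sizes.
def study_means(sample_size, n_studies=1000):
    return [
        statistics.mean(random.gauss(0, 1) for _ in range(sample_size))
        for _ in range(n_studies)
    ]

small = study_means(sample_size=10)
large = study_means(sample_size=1000)

# The spread of estimates shrinks roughly with the square root of the
# sample size: about 1/sqrt(10) vs 1/sqrt(1000).
print(round(statistics.stdev(small), 2))
print(round(statistics.stdev(large), 2))
```

With n = 10, individual studies routinely report “effects” of a third of a standard deviation in either direction even though the true effect is zero; with n = 1000 the estimates cluster tightly around the truth.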
Next he turns to “negligence,” the sloppy methods that can skew results and lead to faulty conclusions. Like fraud, the theme of this chapter shouldn’t be surprising. People make mistakes; we are human. But when it comes to critical research on things like genetic sequencing or the effectiveness of government austerity programs, Ritchie’s concern about the inability to confirm and replicate results because of simple errors in entering data or coding is important to recognize. Ultimately, his solution to this problem is again rechecking and replication. Most of the cases he documents were discovered as the data for the original work became more widely available and other researchers, aided by more advanced computer programs, began to examine it to confirm the results. In this sense, he is actually telling a somewhat more optimistic story than he seems to realize.
Where Ritchie loses his way… Coming next week.