The Male Brain sent over this link a few days ago, and it's just too good not to turn into a full post. It's all about how scientists can reach completely different conclusions from the exact same data set:
In a test of scientific reproducibility, multiple teams of neuroimaging experts from across the globe were asked to independently analyze and interpret the same functional magnetic resonance imaging dataset. The results of the test, published in Nature today (May 20), show that each team performed the analysis in a subtly different manner and that their conclusions varied as a result. While highlighting the cause of the irreproducibility—human methodological decisions—the paper also reveals ways to safeguard future studies against it.
“This is a landmark study that demonstrates clearly what many scientists suspected: the conclusions reached in neuroimaging analyses are highly susceptible to the choices that investigators make on how to analyze the data,” writes John Ioannidis, an epidemiologist at Stanford University, in an email to The Scientist. Ioannidis, a prominent advocate for improving scientific rigor and reproducibility, was not involved in the study (his own work has recently been accused of poor methodology in a study on the seroprevalence of SARS-CoV-2 antibodies in Santa Clara County, California).
Problems with reproducibility plague all areas of science, and have been particularly highlighted in the fields of psychology and cancer through projects run in part by the Center for Open Science. Now, neuroimaging has come under the spotlight thanks to a collaborative project by neuroimaging experts around the world called the Neuroimaging Analysis Replication and Prediction Study (NARPS).
Neuroimaging, specifically functional magnetic resonance imaging (fMRI), which produces pictures of blood flow patterns in the brain that are thought to relate to neuronal activity, has been criticized in the past for problems such as poor study design and statistical methods, and specifying hypotheses after results are known (SHARKing), says neurologist Alain Dagher of McGill University who was not involved in the study. A particularly memorable criticism of the technique was a paper demonstrating that, without needed statistical corrections, it could identify apparent brain activity in a dead fish.
Perhaps because of such criticisms, nowadays fMRI “is a field that is known to have a lot of cautiousness about statistics and . . . about the sample sizes,” says neuroscientist Tom Schonberg of Tel Aviv University, an author of the paper and co-coordinator of NARPS. Also, unlike in many areas of biology, he adds, the image analysis is computational, not manual, so fewer biases might be expected to creep in.
Schonberg was therefore a little surprised to see the NARPS results, admitting, “it wasn’t easy seeing this variability, but it was what it was.”
The study, led by Schonberg together with psychologist Russell Poldrack of Stanford University and neuroimaging statistician Thomas Nichols of the University of Oxford, recruited independent teams of researchers around the globe to analyze and interpret the same raw neuroimaging data—brain scans of 108 healthy adults taken while the subjects were at rest and while they performed a simple decision-making task about whether to gamble a sum of money.
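The dead-fish study mentioned in that excerpt boils down to a simple statistical trap: run enough significance tests and some of them will come up "positive" by pure chance. Here is a minimal Python sketch of that multiple-comparisons problem; the voxel and scan counts are illustrative numbers of my own choosing, not figures from the salmon paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Pure noise for 100,000 "voxels" measured across 20 scans each:
# no voxel has any real activation, just like the dead salmon.
n_voxels, n_scans = 100_000, 20
data = rng.normal(size=(n_voxels, n_scans))

# One-sample t-test per voxel against a mean of zero.
t_vals, p_vals = stats.ttest_1samp(data, popmean=0.0, axis=1)

alpha = 0.001
# Without correction, roughly alpha * n_voxels (~100) voxels "light up".
print("Uncorrected voxels passing p < .001:", int((p_vals < alpha).sum()))

# Bonferroni correction: divide the threshold by the number of tests.
# The spurious activations all but vanish.
print("Bonferroni-corrected voxels:", int((p_vals < alpha / n_voxels).sum()))
```

Correct for the number of tests, as the salmon paper urged, and the dead fish stops "thinking".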
I’m not the first one to say this – that would be our beloved and dreaded Supreme Dark Lord (PBUH), Vox Day – but it bears repeating, over and over, as many times as is necessary until people figure out this very obvious truth:
There is a word for repeatable, replicable, reliable science. And that word is “engineering”.
The interesting thing about fMRI in neuroscience is that, counterintuitively, the amount of data generated by the process is actually pretty enormous. More years ago than I care to remember, I was using R to do some analysis of financial data sets, and I remember thinking that there were a lot of data points in those sets. Then I had a look at the techniques used to analyse fMRI data sets, and I realised that financial data sets have nothing on fMRI sets, which are HUGE.
Seriously, they are so big that R isn't actually a good language for processing them. You have to resort to much more powerful enterprise-level software, such as SAS, to analyse them.
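To put rough numbers on that claim, here's a quick back-of-envelope calculation. The scan parameters below are typical for whole-brain fMRI, not the actual NARPS acquisition settings, which I haven't checked:

```python
# Back-of-envelope size of an fMRI study's raw data, using illustrative
# (not NARPS-specific) scan parameters.
x, y, z = 100, 100, 60      # voxels per 3D volume at roughly 2 mm resolution
volumes = 600               # one volume per second over a 10-minute run
bytes_per_voxel = 4         # float32 once converted for analysis
subjects = 108              # participants in the NARPS dataset

per_subject = x * y * z * volumes * bytes_per_voxel
total = per_subject * subjects

print(f"One subject, one run: {per_subject / 1e9:.2f} GB")  # ~1.44 GB
print(f"All {subjects} subjects: {total / 1e9:.0f} GB")     # ~156 GB, raw
```

A hundred and fifty-odd gigabytes before you even start generating derivative images, which is well past the point where loading everything into an in-memory R data frame stops being fun.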
And yet… even with all of that data, neuroscientists – who, unlike most psychologists and other “fuzzy-wuzzy” types, actually have a background in hard mathematics – cannot agree on the interpretation of those data sets.
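To see how mathematically competent people can still disagree, consider a toy sketch of two perfectly defensible analysis pipelines reaching opposite conclusions from the same data. Everything below is simulated and made up for illustration; it is not any NARPS team's actual pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# A shared toy dataset: per-subject effect estimates for a 50-voxel
# region across 30 subjects, with a weak true effect of 0.15.
effects = rng.normal(loc=0.15, scale=1.0, size=(30, 50))

# "Team A" averages the region first, then tests the mean (ROI analysis).
# Averaging pools the weak signal, so this will typically be significant.
_, p_roi = stats.ttest_1samp(effects.mean(axis=1), popmean=0.0)

# "Team B" tests every voxel separately with a Bonferroni-corrected
# threshold (voxelwise analysis). Each voxel alone is far too noisy,
# so typically nothing survives.
_, p_vox = stats.ttest_1samp(effects, popmean=0.0, axis=0)
any_voxel = bool((p_vox < 0.05 / effects.shape[1]).any())

print(f"Team A (ROI average): p = {p_roi:.4f}")
print(f"Team B (voxelwise, corrected): any significant voxel? {any_voxel}")
```

Neither team has done anything wrong. They simply made different, defensible choices about how to analyse the same numbers and ended up with opposite answers, which is exactly the kind of variability NARPS documented.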
This is basically another example of the “replication crisis” that is running rampant throughout even the hard sciences, never mind the nonsense ones like “psychology”.
That problem has been well known for years now. Even basic findings from seminal papers cannot be easily replicated using the best available scientific methods. And, as this study shows, even the best and most experienced scientific teams in the world cannot reproduce each other's results when working from the exact same data sets. This is unquestionably a huge problem, because it undermines the credibility and validity of scientific research across the board.
However, once you understand that science is not a single monolithic field of human knowledge, but is in fact at least three, if not four, different things in one, you begin to realise that it is not the scientific method that is the problem. It is the scientific PROFESSION.
As I have stated many times before, “science” is in fact three different pieces – these days, more like four:
- Scientody, the scientific method of hypothesis, experimentation, observation, refutation or acceptance;
- Scientage, the body of available and testable scientific knowledge;
- Scientistry, the profession of science – that is to say, what scientists do;
- Scientism, the religion of science fetishism that has taken over much of the world today.
This, of course, is NOT my original invention and I take no credit whatsoever for its formulation. Our beloved and dreaded Supreme Dark Lord (PBUH) came up with it, as far as I know.
Furthermore, in his book The Structure of Scientific Revolutions (which I haven't read, so if I get something wrong about it, someone please correct me), the philosopher Thomas Kuhn made it clear that science does not proceed in a uniform straight line of any kind. Instead, it goes through much the same process that most accumulations of knowledge go through: things accumulate over time and "settle" into one accepted paradigm, until something comes along and seriously challenges that view.
That revolution is almost always met with near-universal disdain and derision, until enough evidence accumulates around it to make it clear that it cannot be reasonably refuted. It then becomes the new paradigm, until it is challenged in its turn.
The root of the replication crisis lies in the fact that the scientific method relies primarily on what can be seen, observed, and measured. That is an extremely powerful and useful way of understanding the world, make no mistake. But its core weakness is precisely that dependence on observation.
This would not be an issue if the observer were an unbiased, unfeeling, totally mechanical and mechanistic robot.
Without belabouring the obvious, I think we can all accept that human beings are anything but robotic. (Not even me – and I’ve been called robotic and machine-like to my face many times, especially recently.)
We humans always inject our own biases and beliefs and systems of thought into our observations of the world. There is no getting around this fact. To think otherwise is to deny that scientists are human beings. It is to give scientists a degree of reverence and separation from Mankind that they do not deserve.
In the end, problems like the one that The Male Brain highlighted are endemic to science because science is a HUMAN field. Can it ever be corrected? Well, probably not, because scientists have grown far too confident in the supposed infallibility of the scientific method. Far too many scientists believe that they, and they alone, have the keys to unlock the secrets of the Universe – when, in fact, they struggle mightily to explain their own findings and ideas.
Looking ahead to the future, I think that we are going to come to a time when scientists will have to admit, with great sadness and frustration, that they have struggled for hundreds or even thousands of years to climb to the top of a mountain of human knowledge, only to find that philosophers and religious scholars have been sitting there waiting for them to catch up the entire time.
That isn’t my original idea either, by the way. I paraphrased the late NASA astronomer Robert Jastrow, who said the following:
At this moment it seems as though science will never be able to raise the curtain on the mystery of creation. For the scientist who has lived by his faith in the power of reason, the story ends like a bad dream. He has scaled the mountains of ignorance; he is about to conquer the highest peak; as he pulls himself over the final rock, he is greeted by a band of theologians who have been sitting there for centuries.
The scientific method is of tremendous value. None should doubt this. It is our single greatest tool to observe and understand the Universe that God, in His eternal and incredible wisdom, created around us.
But it is NOT the only tool in the toolbox. And it is only as good as the people who use it.
And what we are finding, consistently and constantly, is that the people who use this tool are precisely those who lack the humility, the decency, the wisdom, and the ability to do so responsibly.