In a massive exercise to examine reproducibility, more than 200 biologists analysed the same sets of ecological data — and got widely divergent results. The first sweeping study1 of its kind in ecology demonstrates how much results in the field can vary, not because of differences in the environment, but because of scientists’ analytical choices.
“There can be a tendency to treat individual papers’ findings as definitive,” says Hannah Fraser, an ecology meta researcher at the University of Melbourne in Australia and a co-author of the study. But the results show that “we really can’t be relying on any individual result or any individual study to tell us the whole story”.
This isn’t the first story like this that I have read. Most of the problems with reproducing experiments seem to come down to poorly written methods sections. I read a lot of MG & YA fantasy books, so there is often a point where the wizard and his apprentice have a conversation something like this:
“Didn’t you read in my grimoire to stir the potion 3 times but not widdershins or you will call forth a demon and not cast a spell of invisibility?”
“No sir.”
“You didn’t read my grimoire?” Farfegnugin yells in anger.
“No sir. See here,” Schmedlap hands the book to the wizard, “the directions say to ‘stir widdershins’ to create the spell.”
Fifty years ago, when I was deeply immersed in mathematically modeling ecological systems undergoing natural (meaning not obviously and directly human-caused) catastrophic change, my overwhelming reaction was
Agreed. The biology of a single species is incredibly complex. Put multiple species together, and I’m not sure we can figure out much more than the most basic observations (this one is the prey, that one is the predator) with any certainty.
One problem here is that there are multiple definitions of “reproducibility”…and also, as the meta-researcher quoted is saying, publication is not validation.
For me, reproducibility is the idea that a novel study of a new idea/hypothesis is tested by independent researchers to see whether the conclusions comport with the hypothesis given the available data…and that the hypothesis is therefore accurate. Or not, if they don’t.
Here, it seems to me that reproducibility is the expectation that independent researchers will all come to similar conclusions when presented with multiple data sets that have been published and accepted as accurate. I wouldn’t have thought that a likely scenario across the board unless the data (from multiple reproducible studies) unequivocally show the same results.
Now, I don’t know much about grasses and Eucalyptus seedlings…and these sorts of topics don’t make banner headlines the way, say, the biomedical sciences do. So, per my definition of reproducibility, an example would be HIV as the causative agent of AIDS (challenged by denialists for a long time…possibly still?) or the reproducibility of Barry Marshall’s identification of H. pylori as a cause of gastritis and ulcer disease (a Nobel Prize there). But NOT Andrew Wakefield’s hypothesis that the MMR vaccine causes a degree of GI enteropathy associated with autism. Not even an Ig Nobel award there…but, rather, star billing in the Retraction Watch Hall of Shame and a career-ending Darwin Award.
Of course, there has been a tendency to treat publication as validation over the past few decades with the Science By Press Release phenomenon: preliminary findings circulated to media listservs by overly enthusiastic institutional PR departments, then passed on to the general public by uncritical health and science writers.
Those were the Good Ole Days. Now, it seems, researchers don’t even bother with the publication part but circulate pre-prints in the same way. This article is based on a pre-print that hasn’t yet received (primary) peer review. No irony there🤔
Well, in a way, pre-prints have their place…a lowly place, mind…in the research arena. After all, not everyone with a good idea, whether a young researcher or a plain old résumé padder, has the advantage of being at a major research institution with all the advantages that come with it…multiple specialists to advise, an army of statisticians and mathematical modelers, ambitious peers looking for holes in your work, etc. Even with all this, papers get rejected or returned for rewrites, and it all makes for a long-winded process.
I don’t think pre-prints were ever intended to stand in lieu of peer review, but it looks to me like that’s what’s happening. And not just that: they’re being circulated, with accompanying advertising copy (because that’s what these institutional press releases are, after all), as if they had been peer reviewed.
It seemed to me particularly egregious during Covid, as folk were falling over themselves to get traction on studies showing that hydroxychloroquine and ivermectin worked or that masks didn’t, etc. Funny thing, though: the retractions never seemed to get much of a mention.
Thanks for the correction of perspective, and yes, pre-prints have their lowly place.
So, cleaning up my messy post a bit: our problem is that, as social media has replaced analog print media, the “lowly” gets used by people and entities seeking only “views”, amped by hysteria, with no skin in the game as to providing something worth reading.
I’ve tended to blame the Publish or Perish environment that researchers have to endure these days…and the touting of preliminary findings as a sort of flag-waving to attract research funding. I read an article about this very thing…the increasing use of pre-prints to disseminate results…contrasting the attitudes of two different generations to the practice.
The older guard rightly pointed out that scientific rigour is still necessary in these click-copy-link-dump times, whereas the young turk was more concerned with getting his name out quickly. Apparently no matter if it ultimately becomes associated with shoddy work.
Who knows who’s right…but I do prefer to think that folk are still asking themselves the right questions (i.e. “Is there an alternative explanation to all this? What might I have done wrong?”).
So, I got to thinking…as you do…about the increase in the number of journals over the years and, in tooling around the internet, got distracted by these
Also got to thinking about information gathering back in the Days of Yore and, say, a day in the research life of folk like Leeuwenhoek, etc.