Researchers plan to retract landmark Alzheimer’s paper containing doctored images https://www.science.org/content/article/researchers-plan-retract-landmark-alzheimers-paper-containing-doctored-images
Authors of a landmark Alzheimer’s disease research paper published in Nature in 2006 have agreed to retract the study in response to allegations of image manipulation. University of Minnesota (UMN) Twin Cities neuroscientist Karen Ashe, the paper’s senior author, acknowledged in a post on the journal discussion site PubPeer that the paper contains doctored images. The study has been cited nearly 2500 times, and would be the most cited paper ever to be retracted, according to Retraction Watch data…
The 2006 paper suggested an amyloid beta (Aβ) protein called Aβ*56 could cause Alzheimer’s. Aβ proteins have long been linked to the disease. The authors reported that Aβ*56 was present in mice genetically engineered to develop an Alzheimer’s-like condition, and that it built up in step with their cognitive decline. The team also reported memory deficits in rats injected with Aβ*56.
For years researchers had tried to improve Alzheimer’s outcomes by stripping amyloid proteins from the brain, but the experimental drugs all failed. Aβ*56 seemed to offer a more specific and promising therapeutic target, and many embraced the finding. Funding for related work rose sharply.
But the Science investigation revealed evidence that the Nature paper and numerous others co-authored by its first author, Sylvain Lesné, some listing Ashe as senior author, appeared to use manipulated data.
This supports the finding that AI-selected drug development candidates do better in trials than the human-designed ones. Probably because the AI isn’t operating under a “publish or perish” employment arrangement {{ LOL }}
Google DeepMind’s protein-folding software, AlphaFold, predicts the structure of a complex molecule with about 95% accuracy. Even the most experienced human researcher would take months of trial-and-error lab work to get to that point. And DeepMind doesn’t have the academic financial and career incentives to fabricate the data.
You still have to verify the structure of the molecule with lab work. But if you’re starting with 95% of the picture rather than “no idea”, it takes a lot less time to verify.
You are starting with the haystack. AI is only putting the needles together. AI does not invent any of the information or data being used; it tests combinations until the pieces fit.
It is a side of data mining that is less “creative AI” and more testing out how the pieces of data can be organized to work together.
AI learns what to look for from the research done by humans.
Ultimately, attempts by other labs to replicate the results led to suspicions that there was a problem with this paper, and people began doing forensic studies on the figures. This is an example of the self-correcting mechanism of science.