Call for transparency and reproducibility in AI research
From multiple academic institutions
October 14, 2020 -- International scientists are challenging their colleagues to make artificial intelligence (AI) research more transparent and reproducible in order to accelerate the impact of their findings for cancer patients.
In an article published in Nature on October 14, 2020, scientists at Princess Margaret Cancer Centre, University of Toronto, Stanford University, Johns Hopkins University, Harvard School of Public Health, the Massachusetts Institute of Technology, and other institutions challenge scientific journals to hold computational researchers to higher standards of transparency, and call on their colleagues to share their code, models, and computational environments in publications.
"Scientific progress depends on the
ability of researchers to scrutinize the results of a study and reproduce the
main finding to learn from," says Dr. Benjamin Haibe-Kains, Senior
Scientist at Princess Margaret Cancer Centre and first author of the article.
"But in computational research, it's not yet a widespread criterion for
the details of an AI study to be fully accessible. This is detrimental to our
progress."
The authors voiced their concern about the lack of transparency and reproducibility in AI research after a Google Health study by McKinney et al., published in a prominent scientific journal in January 2020, claimed that an AI system could outperform human radiologists in both robustness and speed for breast cancer screening. The study made waves in the scientific community and created a buzz among the public, with headlines appearing in BBC News, CBC, and CNBC.
A closer examination raised some concerns: the study lacked a sufficient description of the methods used, including the code and models. This lack of transparency prevented researchers from learning exactly how the model works and how they could apply it at their own institutions.
"On paper and in theory, the
McKinney et al. study is beautiful," says Dr. Haibe-Kains, "But if we
can't learn from it then it has little to no scientific value."
According to Dr. Haibe-Kains, who is jointly appointed as Associate Professor in Medical Biophysics at the University of Toronto and is an affiliate of the Vector Institute for Artificial Intelligence, this is just one example of a problematic pattern in computational research.
"Researchers are more incentivized
to publish their finding rather than spend time and resources ensuring their
study can be replicated," explains Dr. Haibe-Kains. "Journals are
vulnerable to the 'hype' of AI and may lower the standards for accepting papers
that don't include all the materials required to make the study reproducible --
often in contradiction to their own guidelines."
This can slow down the translation of AI models into clinical settings. Researchers are unable to learn how a model works and replicate it in a thoughtful way. In some cases, it could even lead to unwarranted clinical trials, because a model that works for one group of patients or at one institution may not be appropriate for another.
In the article, titled "Transparency and reproducibility in artificial intelligence," the authors point to numerous frameworks and platforms that allow safe and effective sharing, upholding the three pillars of open science that make AI research more transparent and reproducible: sharing data, sharing computer code, and sharing predictive models.
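As a loose illustration of those pillars, the sketch below (not from the Nature article; scikit-learn and a public dataset serve purely as stand-ins for a real clinical study) shows what publishing a predictive model together with a record of its computational environment might look like in practice.

    # A minimal, hypothetical sketch of packaging a predictive model for
    # sharing: the trained model and the exact dependency versions travel
    # together, so others can reproduce and scrutinize the result.
    import json
    import pickle
    import sys

    import sklearn
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression

    # A public dataset stands in for a private clinical cohort.
    X, y = load_breast_cancer(return_X_y=True)
    model = LogisticRegression(max_iter=5000).fit(X, y)

    # Share the fitted model itself, not just a description of it.
    with open("model.pkl", "wb") as f:
        pickle.dump(model, f)

    # Record the computational environment alongside the model so the
    # software versions that produced it can be reconstructed exactly.
    with open("environment.json", "w") as f:
        json.dump(
            {"python": sys.version, "scikit-learn": sklearn.__version__},
            f,
            indent=2,
        )

Releasing these artifacts alongside the paper, on any of the sharing platforms the authors discuss, is what lets other groups rerun, probe, and build on a result rather than take it on faith.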
"We have high hopes for the utility
of AI for our cancer patients," says Dr. Haibe-Kains. "Sharing and
building upon our discoveries -- that's real scientific impact."
Story Source:
Materials provided by University Health Network.
https://www.sciencedaily.com/releases/2020/10/201014114606.htm