
Artificial intelligence has already proven its value in medicine, with algorithms that can spot cancers and other conditions with remarkable accuracy. But a new study suggests that as doctors rely more on AI, they may be losing some of their own diagnostic sharpness.
The research, published Aug. 13 in The Lancet Gastroenterology & Hepatology, raises concerns about what some experts call “deskilling,” a subtle erosion of doctors’ expertise when machines begin to shoulder tasks once done by humans.
The study focused on endoscopists in Poland, where four centers adopted AI in 2021 as part of the Artificial Intelligence in Colonoscopy for Cancer Prevention Trial, or ACCEPT. The tool was designed to help detect polyps, which can be early signs of colorectal cancer.
Between Sept. 8, 2021, and March 9, 2022, more than 1,400 patients underwent colonoscopies at the participating centers. Researchers compared the adenoma detection rate of colonoscopies performed without AI assistance before and after the tool was introduced.
“The ADR (adenoma detection rate) of standard colonoscopy decreased significantly from 28.4% before to 22.4% after exposure to AI, corresponding to an absolute difference of minus 6%,” the authors wrote.
Even over that relatively short period, the decline was consistent. Experienced physicians, each of whom had performed more than 2,000 colonoscopies, were suddenly finding fewer adenomas when working without the AI.
“We were quite surprised,” said Dr. Marcin Romańczyk, an M.D.-Ph.D. gastroenterologist at H-T Medical Center in Tychy, Poland, who led the study.
“We thought there might be a small effect, but the 6% absolute decrease — observed in several centers and among most endoscopists — points to a genuine change in behavior. This was especially notable because all participants were very experienced, with more than 2000 colonoscopies each.”
Romańczyk and his colleague Dr. Krzysztof Budzyń co-authored the study with several other researchers from the ACCEPT trial. Their findings highlight a paradox: AI may improve overall accuracy, but it could also erode the skills of the very experts it is meant to support.
A similar concern is emerging outside medicine. A recent study from the MIT Media Lab warns that overreliance on large language models such as ChatGPT could undermine learning, especially for younger users. The research has not been peer-reviewed, but one of the authors said she felt compelled to release the findings quickly, fearing that schools might soon adopt policies that let children as young as preschoolers depend heavily on AI tools.
The study involved 54 students from Boston-area colleges and universities, divided into three groups. One group used ChatGPT to write essays, another used search engines such as Google, and a third relied only on their own knowledge, without access to online tools.
Over three essay-writing sessions, the results diverged. The ChatGPT group produced essays that were strikingly similar to one another, while the “brain-only” group showed more variation and originality. Researchers also tracked brain activity using electroencephalograms, or EEGs, tests that measure the brain’s electrical activity.
The EEG results showed a “scaling down” effect. Students in the brain-only group displayed the highest levels of brain activity, the search engine group showed moderate activity, and the ChatGPT group recorded the lowest levels. Over several months, the students who relied on ChatGPT became noticeably less engaged, the study found.
When the groups were later reassigned, something surprising happened: students who had originally been in the brain-only group maintained higher levels of brain activity, even after switching to the ChatGPT group.
There is no definitive evidence yet that AI erodes critical thinking, but the MIT findings echo the medical research from Poland. In both cases, experts warn, reliance on AI appears to dull the very skills that humans spend years developing: diagnostic acumen in doctors, critical engagement in students.
With AI tools spreading at unprecedented speed, researchers say the challenge is not only how to use them effectively, but also how to preserve the human expertise they are designed to support.