MIT Study Finds ChatGPT Can Harm Critical Thinking Over Time

By John P. Mello Jr.

A recent study by the MIT Media Lab found that prolonged use of ChatGPT, a large language model (LLM) chatbot, can harm users' cognitive abilities.

Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels, noted the report, whose lead author was research scientist Nataliya Kos’myna.

These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning, it added.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in six to eight months, there will be some policymaker who decides, ‘let’s do GPT kindergarten,’” Kos’myna told Time magazine. “I think that would be absolutely bad and detrimental. Developing brains are at the highest risk.”

For the research, 54 subjects, aged 18 to 39, were divided into three groups to write several SAT essays. One group could use ChatGPT; the second, Google search; and the third, no tools at all. An EEG was used to measure the participants’ brain activity across 32 regions. Of the three groups, the ChatGPT users had the lowest brain engagement.

EEG analysis from the MIT study reveals lower neural connectivity in participants who use ChatGPT compared to those who use search engines or no tools. (Image Credit: Kos’myna et al., MIT Media Lab, used under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/))

The researchers also found that over the course of several months, ChatGPT users became increasingly less diligent in their work, often just cutting and pasting whatever the chatbot fed them.

Study Raises Red Flag on AI Use

“I’m super excited that they did this,” said Karen Kovacs North, a clinical professor of communication at the Annenberg School for Communication and Journalism at the University of Southern California.

“People are very excited about AI and about the potential of AI,” she told TechNewsWorld. “But the question that some people have always had is, what does it do to critical thinking if people rely on it for problem solving or to think through issues?”

“MIT decided to take the bull by the horns and actually look at the critical thinking issue and found what a lot of people fear, which is that relying on AI might interfere with the development of critical thinking,” she continued. “They rightfully point out, this should be a cautionary tale.”

While acknowledging that the study raises a red flag, Mark N. Vena, president and principal analyst at SmartTech Research in Las Vegas, pointed out that it relies on a small sample of 54 participants, with only 18 in the follow-up. “Its early release underscores urgency — but peer review is pending,” he told TechNewsWorld. “It highlights potential weakening in memory and critical thinking with heavy AI use, yet stops short of proving long-term harm.”

“The study’s reveal is provocative,” he added, “but limited in scope and duration. It’s more of an early warning than a conclusive finding. Further research, especially long-term studies across age groups and use cases, is critical before drawing sweeping conclusions about AI’s impact on cognition.”

One such use case might be the legal profession. “The concern around cognitive atrophy applies to all AI users — but the legal field offers a particularly sharp example,” said Boaz Ashkenazy, co-founder and CEO of Augmented AI Labs, a builder and manager of generative AI business solutions, with offices in Seattle and New York City.

“When young lawyers rely on AI to generate arguments or summarize case law without first developing their own reasoning, they risk bypassing the deep, foundational learning that comes from doing the work manually,” he told TechNewsWorld. “The same pattern can show up in other professions, where over-reliance on AI risks dulling human judgment, problem-solving, and strategic thinking.”

Confirming Suspicions

“As a data scientist who has spent decades analyzing complex systems and their emergent behaviors, I find these MIT findings particularly significant because they represent the first neurological evidence of what many of us in the field have suspected — that over-reliance on AI systems may fundamentally alter human cognitive processes,” observed David Bader, director of the Institute for Data Science at the New Jersey Institute of Technology, in Newark, N.J.

While conceding that the study is preliminary and limited in scope, he pointed out that the EEG data showing reduced neural connectivity in ChatGPT users aligns with computational theories about cognitive load distribution.

“What concerns me most is the progressive nature of the decline over just a few sessions, suggesting that cognitive offloading to AI may create a feedback loop where users become increasingly dependent on external processing power at the expense of developing their own analytical capabilities,” he told TechNewsWorld.

“I’ve observed this myself,” confessed Rob Enderle, president and principal analyst with the Enderle Group, an advisory services firm, in Bend, Ore. “The more you depend on AI, the less you do yourself, leading to skill degradation, if you don’t work to improve your prompting skills instead.”

“So this means we need to be very careful with AI or we could significantly degrade our own unique capabilities,” he told TechNewsWorld.

“This study offers empirical evidence of what we already knew,” added Ryan Trattner, CTO and co-founder of StudyFetch, an AI educational platform in Los Angeles.

“ChatGPT is not a learning tool,” he told TechNewsWorld. “Quite the opposite. It is a hindrance to true learning, and unrestricted use is killing critical thinking skills and students’ creativity.”

“ChatGPT is the best — and worst — group project partner in history,” he declared. “It does all the work and never complains, but the group member who never shows up for meetings also doesn’t learn anything. Students are using ChatGPT as an AI co-worker to do all of the work and have no ownership of the result.”

Pattern of Cognitive Decline

John Bambenek, president of Bambenek Consulting, a cybersecurity and threat intelligence consulting firm based in Schaumburg, Ill., and author of “Lies, Damn Lies, and AI,” noted that the MIT research reveals a pattern similar to that of earlier information innovations.

For instance, social media has caused its own cognitive difficulties. “The invention of the internet and blogging also led to warnings, ‘Don’t believe everything you read on the internet,’” he told TechNewsWorld. “ChatGPT, in particular, superficially allows people to query a huge body of knowledge and get credible-looking answers.”

Pedro David Espinoza, a TED speaker, entrepreneur, AI investor, and author, agreed that some technology developments have contributed to general cognitive decline. “AI is accelerating this cognitive decline on steroids. Exponentially. Sadly,” he told TechNewsWorld.

Bambenek added that GenAI can be most harmful in education because it has always been a challenge to get students to think critically and understand concepts rather than merely recite information.

“The real byproduct of a student writing a paper is the growth of understanding in the subject that the exercise produces,” he said. “Getting GenAI to write the paper for you — which is increasingly getting harder to detect — means the true product of understanding isn’t achieved, only the superficial product, a paper.”

“The biggest problem,” he continued, “is that absent requiring test taking to occur in strict proctored environments or going back to oral exams, there is little to nothing that can be done to stop cheating.”

“There’s no question that AI is going to be used more and more for mind-numbing tasks like rote essays,” added Dan Kennedy, a professor of journalism at Northeastern University in Boston. “I’m more worried about students and others who will use it for higher-level creative tasks, producing work that is dull and unimpressive but that appears to fulfill the requirements of the assignment.”

“You can’t learn to write except by writing,” he told TechNewsWorld. “To the extent that AI takes away from that, then yes, we ought to be very worried about its use in education and everywhere else.”

https://www.technewsworld.com/story/mit-study-finds-chatgpt-can-harm-critical-thinking-over-time-179801.html
