Using ChatGPT Provides Short-Term Performance Gains But Slows Learning

That AI is being used in education is not in doubt, much as it’s being used in most other areas of society. Whether that use is helpful or not remains up for debate, however. For instance, most supporters point to productivity gains made by AI, or at least by humans using AI, as evidence of its benefits.

A recent study from Australia’s Monash University casts doubt on the reliability of these claims. The research found that while students who used ChatGPT typically produced higher-quality work, they didn’t actually learn any more as a result, and instead became dependent on the technology for their results.

“ChatGPT may promote learners’ dependence on technology and potentially trigger metacognitive ‘laziness,’ which can hinder their ability to self-regulate and engage deeply in learning,” the researchers explain.

Short-term gains

The researchers examined how over 100 university students fared when they were given various forms of support. For instance, some were given access to checklist tools, some to human experts, and some to ChatGPT. The results show that ChatGPT was by far the most effective tool in terms of delivering improved outcomes, with the subsequent essays rated higher even than those produced with the support of human experts.

“ChatGPT improved writing performance even more than the condition that involved support provided by a very experienced human expert,” the researchers explain. “This ‘out-performance’ might be the result of ‘AI-empowered learning skills’ which optimize performance at the expense of developing genuine human skills.”

This last point is key, as while the students performed well on the essay-writing task, their ability to generalize knowledge was unchanged.

Dulling our skills

The findings echo wider concerns around the use of modern AI tools, with experts worrying that while they may provide short-term results, they deskill people who become overly reliant on the technology and fall into the trap of so-called “metacognitive laziness”.

This is a term the researchers coined to describe what happens when we become overly reliant on AI and therefore fail to engage our own brains in evaluating and self-monitoring our work.

“In the context of human-AI interaction, we define metacognitive laziness as learners’ dependence on AI assistance, offloading metacognitive load, and less effectively associating responsible metacognitive processes with learning tasks,” the researchers explain.

Stagnant motivation

The findings also highlight some interesting outcomes in terms of motivation. We know from the work of Harvard’s Teresa Amabile and others that making progress in a task can be hugely motivational. The latest study found no such effect among the students using ChatGPT to help them.

In other words, the “progress” made by producing a high-quality essay didn’t translate into motivational gains in the students, who were no more motivated than their peers in the other groups. It suggests that short-cutting our way to “mastery” is no substitute for more tried and trusted approaches.

What’s more, when the researchers dived deeper into the behaviors of the students, they found a clear reliance on ChatGPT, with many repeatedly turning to the tool for guidance rather than engaging directly with the material. By contrast, when students were paired with human experts, their subsequent engagement was much broader and they would read more deeply and reflect on their studies more often.

This contrast underscores the importance of designing AI tools that encourage rather than supplant critical cognitive efforts. As such, the authors believe we should be careful about introducing generative AI into the classroom and even the workplace.

Getting the balance right

While AI can accelerate short-term performance, its impact on long-term learning outcomes is less clear. The authors advocate for integrating AI as a complement, not a replacement, to traditional teaching methods. They urge us to ensure that we utilize AI in a way that means we remain actively engaged with the topic at hand rather than passively accept whatever ChatGPT and other tools provide us.

Since the release of ChatGPT in late 2022, there has been a rush of interest in possible applications, with particular fears around the impact on knowledge work. The study reminds us that even when these tools “augment” rather than replace humans, there is a very real risk that not only will we blunt people’s skills but we might also dent their motivation and engagement as a result.

The post-Covid era was defined by so-called “quiet quitting”, whereby employees would do the bare minimum to get by in jobs they didn’t enjoy and weren’t engaged by. The introduction of generative AI tools that can do a “good enough” job runs the risk of making this trend even more pronounced.

The short-term gains from using generative AI are increasingly evident, but we should perhaps remember that all that glitters is not gold, and think more seriously about the long-term implications for us as humans.
