As generative AI (GenAI) becomes more common in universities, educators hope that clear rules and training will help students use it responsibly. A study from the University of Wollongong suggests otherwise. Even when law students were given guidelines and instruction, many misused the technology in ways that raise serious concerns.
The study asked students to prepare a government policy submission on the legal and ethical challenges of autonomous vehicles. They were allowed to use GenAI but had to reflect on its role and back up AI-generated content with credible sources.
Mixed results
Some students used AI well, drawing on it to sharpen arguments, summarize complex ideas, and improve clarity. But many ignored the rules. Some cited academic sources that did not exist, while others misrepresented real papers, claiming they said things they did not. This failure mode, known as AI “hallucination,” is well documented, but its effect on legal education is particularly troubling.
If future lawyers get into the habit of trusting unverified AI content, the risks go beyond the classroom. The legal profession has already seen real cases in which practitioners cited fake case law or misread legal precedents because of AI errors; in the widely reported US case Mata v. Avianca, lawyers were sanctioned after filing a brief built on ChatGPT-invented citations. Such failures have led to professional embarrassment and even disciplinary action. In response, the Supreme Court of New South Wales has issued guidance to promote responsible AI use in legal work.
Yet the study suggests that guidelines and training are not enough. A key problem is “verification drift.” Students may start out checking AI-generated content, knowing it can be unreliable. But as they get used to its polished and confident tone, they begin to trust it more than they should. Over time, verification seems less necessary, and errors go unnoticed.
A new challenge
For universities, this is a hard problem. Many professors already allow GenAI in coursework, but spotting AI-generated mistakes can take hours. The study’s authors argue that educators must do more than hand out instructions: they need to engage students in responsible AI use, offer ongoing feedback, and show real-world examples of AI misinformation leading to serious consequences.
AI can be a valuable tool, but only if students learn to use it critically. Without stronger safeguards, the next generation of lawyers may enter the profession with a dangerous habit: treating plausible-sounding nonsense as fact.