The Risks Associated With AI-Generated Content

As generative AI tools such as ChatGPT have mushroomed in popularity over the past 18 months, a growing number of publications have turned to them to generate content. Research from the University of Alberta highlights the risks of this trend.

The paper examines the risks of using generative chatbots to create content. The authors delve into what they refer to as “botshit,” which they define as inaccurate or fabricated content produced by chatbots.

Human supervision

“Our paper explains that when this jumble of truth and falsehood is used for work tasks, it can become botshit,” the researchers explain. “For chatbots to be used reliably, it is important to recognize that their responses can best be thought of as provisional knowledge.”

To mitigate these risks, the authors provide a typology of guardrails that can help reduce the dangers of chatbot usage:

  • Technologically-oriented guardrails – these focus on how the chatbot itself operates. Depending on how autonomously the system acts, different checks are needed to ensure its output is accurate, so that the chatbot can be trusted for different kinds of tasks.
  • Organization-oriented guardrails – these are policies a company establishes to avoid spreading false information, rather like a code of conduct. They teach employees how to use the chatbot responsibly and how to check whether it is performing well.
  • User-oriented guardrails – these concern what individual users of the chatbot should do. Different situations call for different levels of scrutiny: when using a chatbot to confirm facts, users should question and double-check its output; when using it to generate ideas, less skepticism is needed. Users are also encouraged to speak up and ask questions when something seems off, helping keep the workplace free of false information.

To use chatbots reliably, it is crucial to treat their answers as provisional knowledge. Unlike verified information from trusted sources, provisional knowledge is a work in progress. Organizations routinely face uncertain situations in which they must act on information that is not fully confirmed, much like relying on sources whose accuracy is still being debated.
