You Won’t Believe This Finding About Clickbait Headlines

We all know a clickbait headline when we see one, and such headlines are prevalent largely because of the belief that they generate more clicks than more modest (and probably more honest) alternatives. Alas, new research from Penn State suggests that not only are these headlines less appealing than previously thought, but they can also confuse the AI systems built to detect them.

This is important, as the rising concern around fake news has prompted most major social networks to deploy AI to try to spot (and then block) clickbait. The research suggests this might be harder than previously thought.

“One of the ideas in fake news research is that if we can just solve the clickbait problem, we can get closer to solving the fake news problem,” the researchers explain. “Our studies push back on that a little bit. They suggest that fake news might be a completely different ballgame, and that clickbait is itself more complicated than we thought.”

Fooling the system

The results reveal that the four AI models tested agreed that a headline was clickbait only around half of the time. The level of agreement also varied considerably with the type of headline: when negative superlatives were used, for instance, the models tended to agree more often.

When the headlines were then assessed alongside the number of clicks they received, three of the four models consistently showed that features such as lists and demonstrative adjectives were engaging readers.

“As these machine learning models are the product of the past several decades, we have many variations — some are very simple, some run very fast, yet others are more complicated and require a lot of resources,” the researchers say. “It is like when you assemble a desk — you can do the job with a screwdriver that costs $5, but can probably do the job faster with a power drill costing $50. So, depending on the inherent power of these machine-learning models, and the training dataset the models are given, they tended to have different levels of performance and varying pros/cons.”

Nonetheless, the researchers believe their findings cast a degree of doubt over the ability of AI algorithms to spot fake news from headlines alone.

“People were putting a lot of stock into using clickbait headlines as an element for fake news detection algorithms, but our studies are calling this assumption into question,” the researchers explain.

As such, the developers of these algorithms will likely need to modify and update them constantly to keep pace with both the producers and the consumers of fake news.

“It becomes a bit of a cat and mouse game,” the authors conclude. “The people who write fake news may become aware of the characteristics that are identified as fake news by the detectors and they will change their strategies. News consumers may also just become numb to certain characteristics if they see those headlines all the time. So, fake news detection must constantly evolve with the readers as well as the creators.”
