Last year I looked at a study that suggested that sarcasm was a key element in creativity. Whilst it’s hard to imagine managers promoting sarcasm in the workplace with this in mind, a new project from researchers at UC Berkeley might be able to automate the detection of sarcasm.
The project had its roots in language processing, a field where human beings are famously pretty rubbish at detecting sarcasm, especially in written communication.
Automatically detecting sarcasm
Most previous attempts to do this automatically have focused on the text of the message and have attempted to gauge the emotion behind it. These have generally been better than human efforts, but the Berkeley team wanted to do even better.
They found that when contextual information was added, the success rate was even higher. This contextual information included the topic being discussed, the author and the target audience.
The researchers put their algorithm through its paces on Twitter. It was trained on tweets made using the #sarcasm and #sarcastic hashtags.
Once these tweets had been digested and analyzed, the algorithm was capable of successfully detecting sarcasm 85 percent of the time.
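The paper itself doesn't spell out the model, but the hashtag-based training setup described above is a standard weak-supervision recipe: treat #sarcasm and #sarcastic as noisy labels, strip them from the text so the model can't simply read the label, and train a text classifier on what remains. The sketch below is purely illustrative (a tiny bag-of-words Naive Bayes on made-up tweets, not the Berkeley team's actual model or data), but it shows the shape of the approach:

```python
import math
from collections import Counter

# Hashtags used as weak (noisy) labels, as in the study's training setup.
SARCASM_TAGS = {"#sarcasm", "#sarcastic"}

def weak_label(tweet):
    """Label a tweet as sarcastic if it carries a marker hashtag,
    stripping those hashtags so the classifier can't just read the label."""
    tokens = tweet.lower().split()
    label = any(t in SARCASM_TAGS for t in tokens)
    text = " ".join(t for t in tokens if t not in SARCASM_TAGS)
    return text, label

class NaiveBayes:
    """Minimal multinomial Naive Bayes over bag-of-words features."""
    def fit(self, texts, labels):
        self.word_counts = {True: Counter(), False: Counter()}
        self.doc_counts = {True: 0, False: 0}
        for text, label in zip(texts, labels):
            self.doc_counts[label] += 1
            self.word_counts[label].update(text.split())
        self.vocab = set(self.word_counts[True]) | set(self.word_counts[False])

    def predict(self, text):
        scores = {}
        total_docs = sum(self.doc_counts.values())
        for label in (True, False):
            score = math.log(self.doc_counts[label] / total_docs)  # class prior
            total = sum(self.word_counts[label].values())
            for word in text.split():
                # Laplace smoothing so unseen words don't zero out the score.
                score += math.log((self.word_counts[label][word] + 1)
                                  / (total + len(self.vocab)))
        # higher log-probability wins
            scores[label] = score
        return scores[True] > scores[False]

# Invented example tweets, labeled weakly via their hashtags.
raw_tweets = [
    "oh great another monday #sarcasm",
    "wow i just love waiting in line #sarcastic",
    "best day ever my flight got cancelled #sarcasm",
    "great concert last night loved it",
    "just landed in new york excited",
    "monday morning coffee is the best",
]
texts, labels = zip(*(weak_label(t) for t in raw_tweets))

model = NaiveBayes()
model.fit(texts, labels)
```

A real system along the lines the researchers describe would go further, adding contextual features (topic, author profile, audience) alongside the words themselves, which is where their reported gains came from.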
The most important element was found to be the individual doing the tweeting. Various attributes, such as being an American male, made a tweeter more prone to sarcasm.
“This gets into what is, at heart, so difficult about recognizing sarcasm—not just for computers, but for humans as well,” the researchers say. “It just requires so much background knowledge between people to be understood.”
The hope is that the algorithm will be further refined in future to take into account things such as the environment people communicate in, i.e. the online platform being used.
The authors believe that this kind of sentiment analysis will eventually be useful on review sites to determine the sincerity of reviews, or even in areas such as national security to assess content shared on social media.
It’s still very much a work in progress, but it is nonetheless an interesting area of work.