Between the rolled eyes, shrugged shoulders, jazz hands and warbling vocal inflection, it’s not hard to tell when someone’s being sarcastic as they’re giving you the business face to face. Online, however, you’re going to need that SpongeBob meme and a liberal application of the shift key to get your contradictory point across. Lucky for us netizens, DARPA’s Information Innovation Office (I2O) has collaborated with researchers from the University of Central Florida to develop a deep learning AI capable of understanding written sarcasm with a startling degree of accuracy.
“With the high velocity and volume of social media data, companies rely on tools to analyze data and to provide customer service. These tools perform tasks such as content management, sentiment analysis, and extraction of relevant messages for the company’s customer service representatives to respond to,” UCF Associate Professor of Industrial Engineering and Management Systems, Dr. Ivan Garibay, told Engadget via email. “However, these tools lack the sophistication to decipher more nuanced forms of language such as sarcasm or humor, in which the meaning of a message is not always obvious and explicit. This imposes an extra burden on the social media team, which is already inundated with customer messages to identify these messages and respond appropriately.”
As they explain in a study published in the journal Entropy, Garibay and UCF PhD student Ramya Akula have built “an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text.”
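The two components named in that quote can be sketched in miniature. The following is a toy NumPy illustration of multi-head self-attention feeding a gated recurrent unit, not the authors’ implementation: every dimension, weight and the final classifier head are made-up stand-ins for exposition only.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, n_heads, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings. Returns attended features
    and the per-head attention weights (the 'cue-word' scores)."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    outs, attns = [], []
    for h in range(n_heads):
        Q, K, V = X @ Wq[h], X @ Wk[h], X @ Wv[h]
        A = softmax(Q @ K.T / np.sqrt(d_head))   # (seq_len, seq_len), rows sum to 1
        outs.append(A @ V)
        attns.append(A)
    return np.concatenate(outs, axis=-1), np.stack(attns)

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated-recurrent-unit update: gates decide how much of the
    previous hidden state to keep, letting long-range cues persist."""
    sigmoid = lambda a: 1 / (1 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz)                 # update gate
    r = sigmoid(x @ Wr + h @ Ur)                 # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)     # candidate state
    return (1 - z) * h + z * h_tilde

# Toy dimensions and random weights (illustrative only)
seq_len, d_model, n_heads, d_hidden = 6, 8, 2, 4
d_head = d_model // n_heads
X = rng.normal(size=(seq_len, d_model))          # embeddings for one sentence
Wq = rng.normal(size=(n_heads, d_model, d_head))
Wk = rng.normal(size=(n_heads, d_model, d_head))
Wv = rng.normal(size=(n_heads, d_model, d_head))

attended, attn_weights = multi_head_self_attention(X, n_heads, Wq, Wk, Wv)

# Run the GRU over the attended token features
gru_w = [rng.normal(size=(d_model, d_hidden)) * 0.1 for _ in range(3)]
gru_u = [rng.normal(size=(d_hidden, d_hidden)) * 0.1 for _ in range(3)]
h = np.zeros(d_hidden)
for t in range(seq_len):
    h = gru_step(attended[t], h, gru_w[0], gru_u[0],
                 gru_w[1], gru_u[1], gru_w[2], gru_u[2])

# Stand-in classifier head: squash the final hidden state to a probability
p_sarcastic = 1 / (1 + np.exp(-h.sum()))
print(attn_weights.shape, round(float(p_sarcastic), 3))
```

The attention weights produced here are also what makes the model interpretable: each row shows how strongly one token attends to every other token in the sentence.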
“Essentially, the researchers’ approach is focused on discovering patterns in the text that indicate sarcasm,” Dr. Brian Kettler, a program manager in the I2O who oversees the SocialSim program, explained in a recent press statement. “It identifies cue-words and their relationship to other words that are representative of sarcastic expressions or statements.”
The team’s methodology differs from the approaches used in previous efforts to spot sarcasm on Twitter with machines. “The older way to approach it would be to sit there and define features that we will look at,” Kettler told Engadget, “maybe, linguists’ theories about what makes language sarcastic,” or labeling markers pulled from a sentence’s context, such as a random positive Amazon review on an otherwise universally panned product. This model, by contrast, learned on its own to pay attention to specific words and punctuation, such as “just,” “again,” “totally” and “!”. “These are the words in the sentence that hint at sarcasm and, as expected, these receive higher attention than others,” the researchers wrote.
For this project, the researchers used a diverse group of datasets sourced from Twitter, Reddit, The Onion, HuffPost and the Sarcasm Corpus V2 Dialogues from the Internet Argument Corpus. “That’s the beauty of this approach, all you need is training examples,” Kettler said. “Enough of those, and the system will learn what features in the input text are predictive of language being sarcastic.”
This model also offers a degree of transparency in its decision-making process not typically seen in deep learning models like this one. The sarcasm AI will actually show users which linguistic features it learned and deemed important in a given sentence via its attention-mechanism visualizations.
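That kind of cue-word highlighting boils down to ranking tokens by their attention weight. Here is a toy illustration in that spirit; the sentence, tokens and weights are invented for the example, not taken from the paper.

```python
# Hypothetical per-token attention weights for one sentence (they sum to 1).
tokens = ["I", "just", "love", "waiting", "in", "line", "!"]
attention = [0.05, 0.25, 0.10, 0.15, 0.05, 0.10, 0.30]

# Rank tokens by how much attention they received, then render a simple
# text "heatmap" for the top cue-word candidates.
ranked = sorted(zip(tokens, attention), key=lambda p: p[1], reverse=True)
for tok, w in ranked[:3]:
    print(f"{tok:<10} {'#' * int(w * 40)}  {w:.2f}")
```

Words like “just” and the trailing “!” end up at the top of the ranking, which is exactly the sort of output the researchers’ visualizations surface for human inspection.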
Even more impressive are the system’s accuracy and precision. On the Twitter dataset, the model notched an F1 score of 98.7 (8.7 points higher than its closest rival) while, on the Reddit dataset, it scored 81.0, 4 points higher than the competition. On headlines, it scored 91.8, more than five points ahead of similar detection systems, though it appeared to struggle a bit with the Dialogues, hitting an F1 of only 77.2.
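For readers unfamiliar with the metric: an F1 score is the harmonic mean of precision (how many flagged-as-sarcastic posts really were) and recall (how many truly sarcastic posts got flagged). A quick sketch, with counts made up purely for illustration:

```python
def f1_score(tp, fp, fn):
    """F1 from raw counts: true positives, false positives, false negatives."""
    precision = tp / (tp + fp)   # flagged posts that were actually sarcastic
    recall = tp / (tp + fn)      # sarcastic posts the model caught
    return 2 * precision * recall / (precision + recall)

# e.g. 90 true positives, 10 false positives, 5 false negatives
print(round(100 * f1_score(90, 10, 5), 1))  # → 92.3
```

Because it is a harmonic mean, F1 punishes a model that trades one of the two qualities away for the other, which is why it is the standard yardstick for detection tasks like this.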
As the model is further developed, it could become an invaluable tool for both the public and private sectors. Kettler sees this AI fitting into the larger mission of the SocialSim program. “It’s a piece of what we’re doing more broadly, which is really looking at and understanding the online information environment,” he said, trying to figure out “engagement at a high level [and] how many people are likely to engage with what kind of information.”
For example, when the NIH or CDC conducts a public health campaign and solicits online feedback, organizers will have an easier go of gauging the public’s overall opinion of the campaign after the sarcastic replies from trolls and shitposters have been filtered out.
“We’d like to understand the sentiment,” he continued. “Where people are engaging, are people broadly liking something or not liking something, and sarcasm can really fool sentiment detection… It’s an important technology and allows the machine to better interpret what we’re seeing online.”
The UCF team plans to further develop the model so that it can be used for languages other than English before eventually open-sourcing the code. However, Garibay notes that one potential sticking point will be their ability to generate “high quality voluminous datasets in multiple languages. Then the next big challenge would be handling the ambiguities, colloquialisms, slang, and coping with language evolution.”