Uncovering Disinformation: Exorde’s Large-scale Approach

June 27, 2024

Social media has become a primary source of information for many people. As we navigate through a complex landscape of opinions and perspectives, it’s increasingly common to turn to comment sections for insights from fellow users. However, what happens when these seemingly authentic voices are actually part of a coordinated effort to shape public opinion?

At Exorde Labs, we specialize in using advanced data analytics to uncover patterns and insights that might otherwise go unnoticed. In this blog post, we’ll explore a recent case study where our team identified and analyzed a suspected disinformation campaign targeting discussions about the conflict between Ukraine and Russia.

Monitoring the Conversation

Our journey began with a broad analysis of conversations related to Ukraine and Russia across multiple languages, including English, Spanish, Portuguese, Russian, Ukrainian, French, Italian, Chinese, Japanese, and Korean. By casting a wide net, we aimed to gather a comprehensive cross-section of posts for our investigation.

On May 22nd, our keyword analysis revealed a significant spike in post activity related to the topic. This anomaly piqued our interest and prompted us to dive deeper into the nature of the conversation.
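The kind of spike detection involved can be sketched as a simple rolling z-score over daily post counts. This is an illustrative outline only — the counts below are made up, and Exorde's production anomaly detection is more sophisticated than this:

```python
# Flag days whose post count deviates sharply from the preceding window.
# Counts are illustrative placeholders, not real Exorde data.
from statistics import mean, stdev

def find_spikes(daily_counts, window=7, threshold=3.0):
    """Return indices of days whose count sits more than `threshold`
    standard deviations above the mean of the preceding `window` days."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_counts[i] - mu) / sigma > threshold:
            spikes.append(i)
    return spikes

# Illustrative daily counts for one keyword group; the final day is anomalous.
counts = [120, 131, 118, 125, 140, 122, 128, 119, 410]
print(find_spikes(counts))  # → [8]
```

A z-score threshold keeps the detector robust to the normal day-to-day chatter of a busy topic while still catching the kind of sudden surge described above.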

Analyzing Emotional Sentiment

Using our proprietary AI technology, we analyzed the emotional sentiment of posts on May 22nd across 26 different emotions. Interestingly, we observed a notable increase in emotions associated with anger and annoyance — a pattern often linked to automated accounts employing shock and trigger tactics to sway public opinion.

However, we didn’t detect any significant change in emotions related to sadness, suggesting that the conversation spike wasn’t tied to a major new development in the conflict.

Sadness-related emotions showed no corresponding spike over the same period
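The comparison described above boils down to contrasting per-emotion averages on the spike day against a baseline period. The sketch below assumes per-post emotion scores from a hypothetical classifier — it is not Exorde's proprietary model, and the numbers are invented to mirror the pattern we observed:

```python
# Compare average emotion scores on a spike day against a baseline period.
# Scores come from a hypothetical classifier; values are illustrative only.
from collections import defaultdict

def emotion_shift(baseline_posts, spike_posts):
    """Return {emotion: spike_avg - baseline_avg} for each emotion scored."""
    def averages(posts):
        totals = defaultdict(float)
        for scores in posts:
            for emotion, value in scores.items():
                totals[emotion] += value
        return {e: total / len(posts) for e, total in totals.items()}

    base, spike = averages(baseline_posts), averages(spike_posts)
    return {e: round(spike.get(e, 0.0) - base.get(e, 0.0), 3) for e in spike}

# Anger and annoyance jump on the spike day while sadness stays flat.
baseline = [{"anger": 0.10, "annoyance": 0.15, "sadness": 0.20},
            {"anger": 0.12, "annoyance": 0.13, "sadness": 0.22}]
spike_day = [{"anger": 0.45, "annoyance": 0.50, "sadness": 0.21},
             {"anger": 0.40, "annoyance": 0.48, "sadness": 0.20}]
print(emotion_shift(baseline, spike_day))
```

A large positive shift in anger and annoyance combined with a near-zero shift in sadness is exactly the asymmetric signature that distinguished this spike from an organic reaction to real news.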

Pinpointing the Source

To identify the root cause of the conversation spike, we filtered the sentiment data by channel across different keyword groups. This analysis revealed that posts originating from YouTube experienced a significant sentiment shift on May 22nd.

By focusing on YouTube posts from the target date, we were able to identify the specific video that triggered the sentiment change — a report from a Spanish news publication covering the conflict.

The video we detected, which had drawn a massive number of suspicious-looking comments
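The channel breakdown can be sketched as a group-by over (channel, date) with an average sentiment per group, so that a single channel's shift stands out. The post records and field names below are illustrative placeholders, not Exorde's actual schema:

```python
# Average sentiment per channel on a given date; a sharply negative channel
# points to the source of the shift. Records are illustrative placeholders.
from collections import defaultdict

def sentiment_by_channel(posts, date):
    """Return {channel: mean sentiment} over posts from the given date."""
    sums, counts = defaultdict(float), defaultdict(int)
    for post in posts:
        if post["date"] == date:
            sums[post["channel"]] += post["sentiment"]
            counts[post["channel"]] += 1
    return {ch: round(sums[ch] / counts[ch], 2) for ch in sums}

posts = [
    {"channel": "youtube", "date": "2024-05-22", "sentiment": -0.8},
    {"channel": "youtube", "date": "2024-05-22", "sentiment": -0.6},
    {"channel": "reddit",  "date": "2024-05-22", "sentiment": 0.1},
    {"channel": "youtube", "date": "2024-05-21", "sentiment": 0.0},
]
print(sentiment_by_channel(posts, "2024-05-22"))
# YouTube's average is sharply negative relative to other channels.
```

Once a single channel dominates the shift, narrowing down to the individual URLs posted on that channel and date is a straightforward filter.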

Detecting Automated Comments

Upon closer examination of the video’s comment section, we discovered a large number of comments that appeared to be generated by AI. Using an AI language model, we determined that out of the 307 comments on the video, 37 (12.05%) were likely created by AI bots.

Illustrative screenshot of the comments we found on the suspicious video

The AI model flagged comments based on several criteria, including:

  • Account names with incoherent combinations of numbers and letters
  • Confusing grammar or sentence structure, as if translated from another language
  • Exaggerated bias or one-sided opinions
  • Generic statements with little connection to the video content
  • Short, controversial statements designed to provoke engagement
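Several of these criteria can be approximated with simple heuristics. The sketch below is illustrative only — Exorde's classification used an AI language model, not hand-written rules, and the regexes, thresholds, and example strings here are assumptions:

```python
# Heuristic approximation of the flagging criteria listed above.
# Thresholds and patterns are illustrative assumptions, not Exorde's model.
import re

def bot_signals(username, comment):
    """Return the list of heuristic flags a comment triggers."""
    flags = []
    # Account names mixing letters with long digit runs, e.g. "user84921"
    if re.search(r"[A-Za-z]+\d{4,}", username):
        flags.append("incoherent_name")
    # Very short, exclamatory statements designed to bait replies
    if len(comment.split()) <= 6 and comment.rstrip().endswith("!"):
        flags.append("short_provocative")
    # Generic statements with no concrete reference to the video content
    if comment.lower() in {"great video", "so true", "everyone should see this"}:
        flags.append("generic")
    return flags

print(bot_signals("user84921", "Wake up people!"))
# → ['incoherent_name', 'short_provocative']
```

Rules like these are cheap pre-filters; a language model handles the subtler signals, such as grammar that reads as machine-translated or exaggerated one-sidedness in context.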

The Power of Data Analytics

This case study illustrates a repeatable workflow for surfacing suspected coordinated campaigns:

  1. Identify unusual spikes in conversation activity
  2. Examine emotional patterns to gain insights into the nature of the spike
  3. Explore post behavior across different channels to pinpoint the source
  4. Identify specific URLs driving the activity for further investigation