Researchers at Recorded Future have uncovered what appears to be a new, growing social media-based influence operation involving more than 215 social media accounts. While relatively small in comparison to influence and disinformation operations run by the Russia-affiliated Internet Research Agency (IRA), the campaign is notable because of its systematic method of recycling images and reports from past terrorist attacks and other events and presenting them as breaking news—an approach that prompted researchers to call the campaign “Fishwrap.”
The campaign was identified by researchers applying Recorded Future's “Snowball” algorithm, a machine-learning-based analytics system that groups social media accounts as related if they:
- Post the same URLs and hashtags, especially within a short period of time
- Use the same URL shorteners
- Have similar “temporal behavior,” posting during similar times—either over the course of their activity, or over the course of a day or week
- Start operating shortly after another account posting similar content ceases its activity
- Have similar account names, “as defined by the editing distance between their names,” as Recorded Future's Staffan Truvé explained.
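Two of these signals are easy to make concrete: overlap in posted URLs/hashtags and small edit distance between account names. The sketch below is a hypothetical illustration of how such pairwise heuristics might work; it is not Recorded Future's Snowball algorithm, and the threshold values are illustrative assumptions, not published parameters.

```python
# Hypothetical sketch of two account-similarity signals described above:
# shared URLs/hashtags (Jaccard overlap) and similar account names
# (Levenshtein edit distance). Not Recorded Future's actual algorithm.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def jaccard(s1: set, s2: set) -> float:
    """Fraction of overlap between two sets of posted URLs/hashtags."""
    union = s1 | s2
    return len(s1 & s2) / len(union) if union else 0.0

def likely_related(acct_a, acct_b,
                   max_name_dist: int = 2,
                   min_url_overlap: float = 0.5) -> bool:
    """Flag two (name, urls) account tuples as related if their names are
    within a small edit distance OR they share a large fraction of posted
    URLs/hashtags. Thresholds are illustrative assumptions."""
    name_a, urls_a = acct_a
    name_b, urls_b = acct_b
    return (edit_distance(name_a, name_b) <= max_name_dist
            or jaccard(urls_a, urls_b) >= min_url_overlap)

# Example: two accounts with nearly identical names and overlapping links
a = ("newswatch01", {"bit.ly/x1", "bit.ly/x2"})
b = ("newswatch02", {"bit.ly/x2", "bit.ly/x3"})
print(likely_related(a, b))  # True: the names differ by one character
```

A production system would combine many more signals (posting-time correlation, shortener choice, succession of activity) and cluster accounts transitively rather than just pairwise, but the pairwise test is the basic building block.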
Influence operations typically try to shape the worldview of a target audience in order to create social and political divisions; undermine the authority and credibility of political leaders; and generate fear, uncertainty, and doubt about their institutions. They can take the form of actual news stories planted through leaks, faked documents, or cooperative “experts” (as the Soviet Union did in spreading disinformation about the US military creating AIDS). But the low cost and easy targeting provided by social media have made it much easier to spread stories (even faked ones) to create an even larger effect—as demonstrated by the use of Cambridge Analytica's data to target individuals for political campaigns, and the IRA's “Project Lakhta,” among others. Since 2016, Twitter has identified multiple apparent state-funded or state-influenced social media influence campaigns out of Iran, Venezuela, Russia, and Bangladesh.
Fake news, old news
[Image gallery: A faked story about a protest in Sweden, written in Russian... / ...and recycled by right-wing UK accounts. / This post linked to a real story, albeit a 4-year-old one.]
In a blog post, Recorded Future's Truvé called out two examples of “fake news” posts identified by researchers as part of the campaign. The first concerned reports, circulated during riots in Sweden over police brutality, claiming that Muslims were protesting Christian crosses and showing images of people dressed in black destroying an effigy of Christ on the cross. The story was first reported by a Russian-language account and then picked up by right-wing “news” accounts in the UK—but it used images recycled from a story about students protesting in Chile in 2016. The second example used old stories about a 2015 terrorist attack in Paris to create posts about a fake terrorist attack in March of this year. The linked story, however, was the original 2015 story—so attentive readers might have realized that it was a bit dated.
The Fishwrap campaign consisted of three clusters of accounts. The first wave was active from May to October of 2018, after which many of the accounts shut down; a second wave launched in November of 2018 and remained active through April 2019. And some accounts remained active for the entire period. All of the accounts used URL shorteners hosted across a total of 10 domains but running identical code.
Many of the accounts have been suspended, but Truvé noted that “there has been no general suspension of accounts related to these URL shorteners.” One of the reasons, he suggested, was that since the accounts are posting text and links associated with “old—but real!—terror events,” the posts don't technically violate the terms of service of the social media platforms they were posted on, making them less likely to be taken down by human or algorithmic moderation.