A decade of tracking Russian online interference
5 April 2018
Profile pictures from a large network of pro-Kremlin Twitter accounts. Image by Lawrence Alexander.

With the F.B.I. indictment of 13 Russians for interfering in the 2016 United States presidential election, Global Voices revisited its extensive research into Russian online interference, underscoring the importance of open-source data and research in understanding its impact.

RuNet Echo, a Global Voices initiative, began covering Russian automated bots, trolls and paid bloggers seeking to influence online news, conversations, and political campaigns as early as 2011. RuNet Echo’s main purpose is to “expand and deepen understanding of the Russian Internet (RuNet) and related online communities.”

GV was the first to publish evidence that networked trolls and bot farms were operating in a coordinated fashion to distort public discourse, setting the stage for much of the reporting that followed.

Through the research of Lawrence Alexander, Global Voices was the first to demonstrate, using open-source tools such as NodeXL and Gephi, that specific bot networks on Twitter were linked to the Russian troll farm run by the Internet Research Agency (I.R.A.), operated in a coordinated fashion from a specific location, and were tied to specific accounts used to disrupt and influence online discourse.

This insight put the research data and tools into the public eye and gave other researchers a trail to pursue the story further.

A December 2017 Washington Post story revisited Alexander’s research and its impact on the U.S. State Department’s understanding of Russian activities:

Frustrated U.S. officials concluded that the best information on Russia’s social media campaign in Ukraine wasn’t coming from U.S. intelligence agencies, but from independent researchers [like Alexander]. 

Department of State Chief Digital Officer Macon Phillips even visited Alexander in Brighton, U.K., in June 2015 to get a full explanation of his methods.

Global Voices’ work occurred years before the story landed on the front page of the New York Times and other mainstream media outlets. Our reporting led to further research by many reporters and investigators, and eventually helped make the case about Russian activities around the 2016 U.S. presidential election. We hope Silicon Valley companies also took advantage of this material to pursue their own investigations into the I.R.A.'s manipulations on their platforms.

Tracking Russian interference on the internet from the beginning

In 2011, Global Voices focused on Distributed Denial of Service (DDoS) attacks against LiveJournal. Stories such as Alexey Sidorenko’s “Russia: Distributed Denial of LiveJournal” noted that DDoS attacks against platforms used for political speech had been occurring since at least 2007.

The topic became an ongoing subject of research at Global Voices, expanding from DDoS attacks to investigations into data-leak wars waged by the Kremlin, attacks on popular web forums, and, in January 2012, the phenomenon of fake Twitter accounts.

Sidorenko notes:

The first Russian fake Twitter users appeared long before other well known faux accounts…have developed their own particular ironic styles and have become integrated into the socio-political landscape of the Russian blogosphere.

While these fake accounts were a counter-attack by independent writers, they demonstrated the scale and impact of false identities in the Russian online space.

In February 2012, Global Voices documented payoff schemes aimed at popular bloggers to shape Russian public opinion. RuNet Echo continued to track and write about waves of DDoS attacks, spoofs, and the emerging world of fake Twitter accounts, coordinated disinformation campaigns, and the appearance of bot networks built to influence public information at scale.

Global Voices tracked these attacks as they shifted targets: first against the Russian opposition and activists, then against Ukrainians after the 2014 invasion, and then, also in 2014, against Western and U.S. media, personalities, politicians, and systems.

By 2014, RuNet Echo editors Kevin Rothrock and Andrey Tselikov were reporting on persistent Twitter attacks against U.S. politicians and spokespeople, and documenting collectives of leakers such as Shaltay Boltay (Anonymous International). At the same time, we tracked Russia’s ongoing efforts to restrict access to Twitter and other social media networks and to regulate bloggers.

Journalist Max Seddon’s June 2014 BuzzFeed story on how a Russian troll army attacked America used Global Voices’ Alexey Sidorenko as a source.

In November 2014, GV contributor Aric Toler was the first to report in English about the existence, identity and physical location of the I.R.A. building at 55 Savushkina Street in St. Petersburg that became the focus of the 2018 F.B.I. indictment. Location details of Russian troll farms were published in the Russian media as early as September 2013 by Novaya Gazeta, as well as by Delovoi Petersburg in November 2014.

In April 2015, Alexander published the first social network analysis to definitively reveal the scale of the Kremlin’s Twitter bot campaign. In these stories, originally commissioned by Global Voices contributor Tetyana Lokot, Alexander gathered evidence of a network of at least 20,500 Twitter accounts, synchronizing their activity in a concerted effort to spread misinformation. He demonstrated that these accounts were composed chiefly of bots, and tied to a specific location and agency. Alexander’s findings were also widely reported in Russian-language news outlets such as Tjournal.
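The article only names the graph tools involved, not the analysis itself. As a rough, hypothetical illustration of the kind of coordination signal such a social network analysis surfaces, the Python sketch below links accounts that post identical text within minutes of one another and treats the resulting clusters as candidate bot networks. The handles, timestamps, and threshold are invented for illustration; this is not Alexander’s actual pipeline.

```python
# Minimal sketch (assumptions, not Alexander's method): flag accounts that post
# identical text within a short time window, then cluster them via a co-posting
# graph, the sort of structure one might later visualise in Gephi or NodeXL.
from collections import defaultdict
from itertools import combinations
import networkx as nx

# Hypothetical input: (account_handle, unix_timestamp, tweet_text) tuples,
# e.g. pulled from the Twitter API or an archived dataset.
tweets = [
    ("acct_a", 1428000000, "Sanctions are hurting ordinary Europeans"),
    ("acct_b", 1428000040, "Sanctions are hurting ordinary Europeans"),
    ("acct_c", 1428000055, "Sanctions are hurting ordinary Europeans"),
]

WINDOW = 120  # seconds; identical posts this close together look coordinated

# Group postings by identical text, then link accounts whose posts fall inside the window.
by_text = defaultdict(list)
for handle, ts, text in tweets:
    by_text[text].append((handle, ts))

graph = nx.Graph()
for postings in by_text.values():
    for (h1, t1), (h2, t2) in combinations(postings, 2):
        if h1 != h2 and abs(t1 - t2) <= WINDOW:
            graph.add_edge(h1, h2)

# Connected components of the co-posting graph are candidate bot clusters.
for cluster in nx.connected_components(graph):
    if len(cluster) >= 3:
        print("Suspected coordinated cluster:", sorted(cluster))
```

On a real dataset this crude signal would be combined with other features, such as account creation dates and posting cadence, before any cluster is called a bot network.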

Lessons learned from open-source tracking and reporting

Because of this work, reporters and investigators have the tools, methods, and data points to identify where Twitter accounts are located and to whom they belong. Alexander’s open-source data investigation means these accounts are easier to fingerprint on the internet. His reverse-engineering of Google Analytics tracking codes also helped identify fake news websites meant to flood news aggregators in Russia for Russian audiences.
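The shared-tracker technique alluded to above works because websites run by a single operator often reuse the same Google Analytics property ID. The sketch below is a minimal, hypothetical illustration of that approach, not Alexander’s actual code; the domains and regular expression are assumptions.

```python
# Hypothetical sketch of the shared-tracker technique: extract "UA-XXXX-Y"
# Google Analytics IDs from page source and group domains by ID, so that
# otherwise unrelated-looking sites sharing a tracker stand out.
import re
from collections import defaultdict
import requests

GA_ID = re.compile(r"UA-\d{4,10}-\d{1,3}")

domains = ["example-news-1.ru", "example-news-2.ru"]  # placeholder targets

sites_by_tracker = defaultdict(set)
for domain in domains:
    try:
        html = requests.get(f"http://{domain}", timeout=10).text
    except requests.RequestException:
        continue  # skip unreachable sites
    for tracker_id in set(GA_ID.findall(html)):
        sites_by_tracker[tracker_id].add(domain)

# Any tracker ID shared by more than one domain suggests common ownership.
for tracker_id, sites in sites_by_tracker.items():
    if len(sites) > 1:
        print(tracker_id, "->", sorted(sites))
```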

These research techniques revealed that Russians were buying ads on Google to influence and misinform on topics ranging from the war in eastern Ukraine to U.S. and European politics.

Other reporters and researchers built on this work, looking more closely into Russian trolling. In June 2015 Adrian Chen wrote the first story on the topic in the mainstream U.S. media, in the Sunday magazine of the New York Times. His work did not include open-source techniques or data analysis but provided an in-person investigation of the I.R.A. building. 

Two years later, this work provides the framework for further investigations into Russian activity. GV continues to cover the topic with dozens of related stories.

Had this research by reporters and legal investigators been better supported and pursued earlier, the evidence it produced might have reduced the harm, or at least raised awareness of Russian efforts to disrupt democratic processes on a grand scale.

This post was originally published on Global Voices under the title: Tracking Russian Online Interference Teaches Valuable Lessons on Improving News Quality, under a Creative Commons 3.0 license.