Wednesday, December 13, 2023
Extended book excerpt: Computational propaganda
(This is a long excerpt about manipulation of algorithms by adversaries from my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")
Inauthentic activity is designed to manipulate social media. It exists because there is a strong incentive to manipulate wisdom of the crowd algorithms. If someone can get recommended by algorithms, they can get a lot of free attention because their content will now be the first thing many people see.
For adversaries, a successful manipulation is like a free advertisement, seen by thousands or even millions. On Facebook, Twitter, YouTube, Amazon, Google, and most other sites on the internet, adversaries have a very strong incentive to manipulate these companies’ algorithms.
For some governments, political parties, and organizations, the incentive to manipulate goes beyond merely shilling some content for the equivalent of free advertising. These adversaries engage thousands of controlled accounts over long periods of time in disinformation campaigns.
The goal is to promote a point of view, shut down those promoting other points of view, obfuscate unfavorable news and facts, and sometimes even create whole other realities that millions of people believe are true.
These efforts by major adversaries are known as “computational propaganda.” Computational propaganda unites many terms — “information operations,” “information warfare,” “influence operations,” “online astroturfing,” “cyberturfing,” “disinformation campaigns,” and many others — and is defined as “the use of automation and algorithms in the manipulation of public opinion.”
More simply, computational propaganda is an attempt to give “the illusion of popularity” by using a lot of fake accounts and fake followers to make something look far more popular than it actually is. It creates “manufactured consensus,” the appearance that many people think something is interesting, true, and important when, in fact, it is not.
It is propaganda by stuffing the ballot box. The trending algorithm on Twitter and the recommendation engine on Facebook look at what people are sharing, liking, and commenting on as votes, votes for what is interesting and important. But “fringe groups that were five or 10 people could make it look like they were 10 or 20,000 people,” reported PBS’ The Facebook Dilemma. “A lot of people sort of laughed about how easy it was for them to manipulate social media.” They run many “accounts on Facebook at any given time and use them to manipulate people.”
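To make the ballot-box stuffing concrete, here is a toy sketch in Python. The account names, posts, and numbers are invented for illustration, and no real trending system is this simple, but the core vulnerability is the same: a counter that treats every like, share, and comment as one independent vote.

```python
from collections import Counter

def trending_score(engagements):
    """Toy trending score: treat every like/share/comment as one vote for a post."""
    votes = Counter()
    for account_id, post_id in engagements:
        votes[post_id] += 1
    return votes

# 500 real people each engage once with a genuinely popular post.
organic = [(f"user_{i}", "organic_post") for i in range(500)]

# A handful of operators each control hundreds of fake accounts, and every fake
# account casts a "vote" for the post they are shilling.
fake_accounts = [f"operator{op}_sock{i}" for op in range(5) for i in range(400)]
shilled = [(account, "shill_post") for account in fake_accounts]

scores = trending_score(organic + shilled)
print(scores["shill_post"], scores["organic_post"])  # 2000 vs. 500
# The counter cannot tell 2,000 coordinated fakes from 500 independent people,
# so five operators easily out-vote five hundred real users.
```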
This is bad enough when it is done for profit, to amplify a scam or just to try to sell more of some product. But when governments get involved, especially autocratic governments, reality itself can start to warp under sustained efforts to confuse what is real. “It’s anti-information,” said historian Heather Cox Richardson. Democracies rely on a common understanding of facts, of what is true, to function. If you can get even a few people to believe something that is not true, it changes how people vote, and can even “alter democracy.”
The scale of computational propaganda is what makes it so dangerous. Large organizations and state-sponsored actors are able to sustain thousands of controlled accounts pounding out the same message over long periods of time. They can watch how many real people react to what they do, learn what is working and what is failing to gain traction, and then adapt, increasing the most successful propaganda.
Scale is what turns misinformation and disinformation into computational propaganda. Stanford Internet Observatory’s Renée DiResta provided an excellent explanation in The Yale Review: “Misinformation and disinformation are both, at their core, misleading or inaccurate information; what separates them is intent. Misinformation is the inadvertent sharing of false information; the sharer didn’t intend to mislead people and genuinely believed the story. Disinformation, by contrast, is the deliberate creation and sharing of information known to be false. It’s a malign narrative that is spread deliberately, with the explicit aim of causing confusion or leading the recipient to believe a lie. Computational propaganda is a suite of tools or tactics used in modern disinformation campaigns that take place online. These include automated social media accounts that spread the message and the algorithmic gaming of social media platforms to disseminate it. These tools facilitate the disinformation campaign’s ultimate goal — media manipulation that pushes the false information into mass awareness.”
The goal of computational propaganda is to bend reality, to make millions believe something that is not true is true. DiResta warned: “As Lenin purportedly put it, ‘A lie told often enough becomes the truth.’ In the era of computational propaganda, we can update that aphorism: ‘If you make it trend, you make it true.’”
In recent years, Russia was particularly effective at computational propaganda. Adversaries created fake media organizations that looked real, created fake accounts with profiles and personas that looked real, and developed groups and communities to the point they had hundreds of thousands of followers. Russia was “building influence over a period of years and using it to manipulate and exploit existing political and societal divisions,” DiResta wrote in the New York Times.
The scale of this effort was remarkable. “About 400,000 bots [were] engaged in the political discussion about the [US] Presidential election, responsible for roughly 3.8 million tweets, about one-fifth of the entire conversation,” said USC researchers.
Only later was the damage at all understood. In the book Zucked, Roger McNamee summarized the findings: “Facebook disclosed that 126 million users had been exposed to Russian interference, as well as 20 million users on Instagram ... The user number represents more than one-third of the US population, but that grossly understates its impact. The Russians did not reach a random set of 126 million people on Facebook. Their efforts were highly targeted. On the one hand, they had targeted people likely to vote for Trump with motivating messages. On the other, they identified subpopulations of likely Democratic voters who might be discouraged from voting ... In an election where only 137 million people voted, a campaign that targeted 126 million eligible voters almost certainly had an impact.”
These efforts were highly targeted, trying to pick out parts of the US electorate that might be susceptible to their propaganda. The adversaries worked over a long period of time, adapting as they discovered what was getting traction.
By late 2019, as reported by MIT Technology Review, “all 15 of the top pages targeting Christian Americans, 10 of the top 15 Facebook pages targeting Black Americans, and four of the top 12 Facebook pages targeting Native Americans were being run by … Eastern European troll farms.”
These pages “reached 140 million US users monthly.” They achieved this extraordinary reach not by people seeking them out on their own, but by manipulating Facebook’s “engagement-hungry algorithm.” These groups were so large and so popular because “Facebook’s content recommendation system had pushed [them] into their news feeds.” Facebook’s optimization process for their algorithms was giving these inauthentic actors massive reach for their propaganda.
As Facebook data scientists warned inside of the company, “Instead of users choosing to receive content from these actors, [Facebook] is choosing to give them an enormous reach.” Real news, trustworthy information from reliable sources, took a back seat to this content. Facebook was amplifying these troll farms. The computational propaganda worked.
The computational propaganda was not limited to Facebook. The efforts spanned many platforms, trying the same tricks everywhere, looking for flaws to exploit and ways to extend their reach. The New York Times reported that the Russian “Internet Research Agency spread its messages not only via Facebook, Instagram and Twitter ... but also on YouTube, Reddit, Tumblr, Pinterest, Vine and Google+” and others. Wherever they were most successful, they would do more. They went wherever it was easiest and most efficient to spread their false message to a mass audience.
It is tempting to question how so many people could fall for this manipulation. How could over a hundred million Americans, and hundreds of millions of people around the world, see propaganda and believe it?
But this propaganda did not obviously look like Russian propaganda. The adversaries would impersonate Americans using fake accounts with descriptions that appeared to be authentic on casual inspection. Most people would have no idea they were reading a post or joining a Facebook Group that was created by a troll farm.
Instead “they would be attracted to an idea — whether it was guns or immigration or whatever — and once in the Group, they would be exposed to a steady flow of posts designed to provoke outrage or fear,” said Roger McNamee in Zucked. “For those who engaged frequently with the Group, the effect would be to make beliefs more rigid and more extreme. The Group would create a filter bubble, where the troll, the bots, and the other members would coalesce around an idea floated by the troll.”
The propaganda was carefully constructed, using amusing memes and emotion-laden posts to lure people in, then using manufactured consensus through multiple controlled accounts to direct and control what people saw afterwards.
Directing and controlling discussions requires only a small number of accounts if they are well timed and coordinated. Most people reading a group are passive; they are not actively posting. And far more people read than like, comment, or reshare.
Especially if adversaries time the first few comments and likes well, then “as few as 1 to 2 percent of a group can steer the conversation if they are well-coordinated. That means a human troll with a small army of digital bots—software robots—can control a large, emotionally engaged Group.” If any real people start to argue or point out that something is not true, they can be drowned out by the controlled accounts simultaneously slamming them in the comments, creating an illusion of consensus and keeping the filter bubble intact.
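A back-of-envelope sketch helps show why such a small fraction is enough. Every number below (group size, lurker rate, activity rates) is an assumption chosen for illustration, not a measurement from any platform:

```python
group_size = 10_000
coordinated = int(group_size * 0.015)      # 150 controlled accounts (~1.5% of the group)
actions_per_sock_per_day = 20              # assumed posts, likes, and comments per sockpuppet

lurker_rate = 0.95                         # assumed share of members who never engage
actions_per_real_member_per_day = 1        # assumed activity from the members who do engage

coordinated_actions = coordinated * actions_per_sock_per_day                              # 3,000
organic_actions = int(group_size * (1 - lurker_rate)) * actions_per_real_member_per_day   # 500

print(f"coordinated activity: {coordinated_actions} actions/day")
print(f"organic activity:     {organic_actions} actions/day")
# Under these assumptions, the 1.5% coordinated core produces six times the
# engagement of the entire rest of the group, so it decides what the group
# appears to believe and which posts the feed treats as popular.
```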
This spanned the internet, on every platform and across seemingly legitimate websites. Adversaries tried many things to see what worked. When something gained traction, they would “post the story simultaneously on an army of Twitter accounts” along with their controlled accounts saying, “read the story that the mainstream media doesn’t want you to know about.” If any real journalist eventually wrote about the story, “The army of Twitter accounts—which includes a huge number of bots—tweets and retweets the legitimate story, amplifying the signal dramatically. Once a story is trending, other news outlets are almost certain to pick it up.”
In the most successful cases, what starts as propaganda becomes misinformation, with actual American citizens unwittingly echoing Russian propaganda, now mistakenly believing a constructed reality was actually real.
By no means was this limited to the United States or to Russia. Many large-scale adversaries, including governments, political campaigns, multinational corporations, and other organizations, are engaging in computational propaganda. What they have in common is using thousands of fake, hacked, controlled, or paid accounts to rapidly create messages on social media and the internet. They create manufactured consensus around their message and sow confusion about what is real and what is not. They have been seen “distorting political discourse, including in Albania, Mexico, Argentina, Italy, the Philippines, Afghanistan, South Korea, Bolivia, Ecuador, Iraq, Tunisia, Turkey, Taiwan, Paraguay, El Salvador, India, the Dominican Republic, Indonesia, Ukraine, Poland and Mongolia,” wrote the Guardian.
Computational propaganda is everywhere in the world. It “has become a regular tool of statecraft,” said Princeton Professor Jacob Shapiro, “with at least 51 different countries targeted by government-led online influence efforts” in the last decade.
An example in India is instructive. In the 2019 general election in India, adversaries used “hundreds of WhatsApp groups,” fake accounts, hacked and hijacked accounts, and “Tek Fog, a highly sophisticated app” to centrally control activity on social media. In a published paper, researchers wrote that adversaries “were highly effective at producing lasting Twitter trends with a relatively small number of participants.” This computational propaganda amplified “right-wing propaganda … making extremist narratives and political campaigns appear more popular than they actually are.” They were remarkably effective: “A group of public and private actors working together to subvert public discourse in the world’s largest democracy by driving inauthentic trends and hijacking conversations across almost all major social media platforms.”
Another recent example was in Canada, the so-called “Siege of Ottawa.” In the Guardian, Arwa Mahdawi wrote about how it came about: “It’s an astroturfed movement – one that creates an impression of widespread grassroots support where little exists – funded by a global network of highly organised far-right groups and amplified by Facebook ... Thanks to the wonders of modern technology, fringe groups can have an outsize influence ... [using] troll farms: organised groups that weaponise social media to spread misinformation.”
Computational propaganda “threatens democracies worldwide.” It has been “weaponized around the world,” said MIT Professor Sinan Aral in the book The Hype Machine. In the 2018 general elections in Sweden, a third of politics-related hashtagged tweets “were from fake news sources.” In the 2018 national elections in Brazil, “56 percent of the fifty most widely shared images on [popular WhatsApp] chat groups were misleading, and only 8 percent were fully truthful.” In the 2019 elections in India, “64 percent of Indians encountered fake news online.” In the Philippines, there was a massive propaganda effort against Maria Ressa, a journalist “working to expose corruption and a Time Person of the Year in 2018.” Every democracy around the world is seeing adversaries using computational propaganda.
The scale is what makes computational propaganda so concerning. The actors behind it are often well funded, with considerable resources to bring to bear to achieve their aims.
Remarkably, there is now enough money involved that there are private companies “offering disinformation-for-hire services.” Computational propaganda “has become more professionalised and is now produced on an industrial scale.” It is everywhere in the world. “In 61 countries, we found evidence of political parties or politicians running for office who have used the tools and techniques of computational propaganda,” said researchers at University of Oxford. The way they work is always the same. “Automated accounts are often used to amplify certain narratives while drowning out others ... in order to game the automated systems social media companies use.” It is spreading propaganda using manufactured consensus at industrial scale.
Also concerning is that computational propaganda can target just the most vulnerable and most susceptible and still achieve its aims. In a democracy, the difference between winning an election and losing is often just a few percentage points.
To change the results of an election, you don’t have to influence everyone. The target of computational propaganda is usually “only 10-20% of the population.” Swaying even a fraction of this audience by convincing them to vote in a particular way or discouraging them from voting at all “can have a resounding impact,” shifting all the close elections favorably, and leading to control of a closely-contested government.
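The arithmetic is sobering even with modest assumptions. The numbers below are hypothetical, chosen only to illustrate the scale involved in a close race:

```python
electorate = 10_000_000                    # hypothetical electorate in a close race
margin_pct = 0.005                         # race decided by half a percentage point
margin_votes = int(electorate * margin_pct)          # 50,000 votes

targeted = int(electorate * 0.15)                    # the "10-20% of the population" actually targeted
suppressed_or_swayed = int(targeted * 0.05)          # assumed: 5% of them stay home or switch

print(f"margin of victory:            {margin_votes:,} votes")
print(f"voters moved by the campaign: {suppressed_or_swayed:,} votes")
# Under these assumptions, moving just 5% of a targeted 15% slice of the
# electorate exceeds the entire margin of victory.
```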
To address the worldwide problem of computational propaganda, it is important to understand why it works. Part of why computational propaganda works is the story of why propaganda has worked throughout history. Computational propaganda floods what people see with a particular message, creating an illusion of consensus while repeating the same false message over and over again.
This feeds the common belief fallacy, even if the number of controlled accounts is relatively small, by creating the appearance that everyone believes this false message to be true. It creates a firehose of falsehood, flooding people with the false message, creating confusion about what is true or not, and drowning out all other messages. And the constant repetition, seeing the message over and over, fools our minds using the illusory truth effect, which tends to make us believe things we have seen many times before, “even if the idea isn’t plausible and even if [we] know better.”
As Wharton Professor Ethan Mollick wrote, “The Illusionary Truth Effect supercharges propaganda on social media. If you see something repeated enough times, it seems more true.” Professor Mollick went on to say that studies found it works on the vast majority of people even when the information isn’t plausible, and that as few as five repetitions are enough to start to make false statements seem true.
The other part of why computational propaganda works is algorithmic amplification by social media algorithms. Wisdom of the crowd algorithms, which are used in search, trending, and recommendations, work by counting votes. They look for what is popular, or what seems to be interesting to people like you, by looking at what people seemed to have enjoyed in the recent past.
When the algorithms look for what people are enjoying, they assume that each account belongs to a real person and that each person is acting independently. When adversaries create many fake accounts or coordinate between many controlled accounts, they are effectively voting many times, fooling the algorithms with an illusion of consensus.
What the algorithm thought was popular and interesting turns out to be shilled. The social media post is not really popular or interesting, but the computational propaganda effort made it look to the algorithm that it is. And so the algorithm amplifies the propaganda, inappropriately showing it to many more people, and making the problem far worse.
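Continuing the earlier toy example, here is a minimal sketch of why the independence assumption matters. The coordination labels are simply given in the data here; in practice a platform would have to infer them from signals such as shared infrastructure or identical timing, which is much harder:

```python
from collections import Counter

# (account_id, coordination_cluster, post_id): independent users are their own
# cluster; sockpuppets run by one troll farm share a single cluster label.
engagements = (
    [(f"user_{i}", f"user_{i}", "organic_post") for i in range(500)] +
    [(f"sock_{i}", "troll_farm_1", "shill_post") for i in range(2000)]
)

# Naive count: every engagement is a vote, so the shilled post wins easily.
naive = Counter(post for _, _, post in engagements)

# One vote per independent voice: collapse accounts that act in lockstep.
clusters_per_post = {}
for _, cluster, post in engagements:
    clusters_per_post.setdefault(post, set()).add(cluster)
deduplicated = {post: len(clusters) for post, clusters in clusters_per_post.items()}

print(dict(naive))   # {'organic_post': 500, 'shill_post': 2000}
print(deduplicated)  # {'organic_post': 500, 'shill_post': 1}
```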
Both the people using social media and the algorithms picking what people see are falling victim to the same technique, manufactured consensus: adversaries use bots and coordinated accounts to mimic real users, and the propagandist creates “illusory notions of ... popularity because of this same automated inflation of the numbers.”
“They can drive up the number of likes, re-messages, or comments associated with a person or idea,” wrote the authors of Social Media and Democracy. “Researchers have catalogued political bot use in massively bolstering the social media metrics.”
The fact that they are only mimicking real users is important to addressing the problem. They are not real users, and they don’t behave like real users.
For example, when the QAnon conspiracy theory was growing rapidly on Facebook, it grew using “minimally connected bulk group invites. One member sent over 377,000 group invites in less than 5 months.” There were very few people responsible. According to reporter David Gilbert, there are “a relatively few number of actors creating a large percentage of the content.” He said a “small group of users has been able to hijack the platform.”
To shill and coordinate between many accounts pushing propaganda, adversaries have to behave in ways that are not human. Bots and other accounts that are controlled by just a few people all “pounce on fake news in the first few seconds after it’s published, and they retweet it broadly.” The initial spreaders of the propaganda “are much more likely to be bots than humans” and often will be the same accounts, superspreaders of propaganda, acting over and over again.
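Because this behavior is so unlike normal use, it leaves detectable traces. Here is a minimal sketch of that kind of behavioral check; the thresholds and example numbers are assumptions for illustration, not values any platform actually uses:

```python
from statistics import median

def looks_automated(reaction_times_sec, actions_last_30_days,
                    fast_reaction_sec=5, max_human_actions=10_000):
    """Heuristic flag: habitual split-second reactions or inhuman action volume."""
    habitual_pouncer = (len(reaction_times_sec) >= 20 and
                        median(reaction_times_sec) <= fast_reaction_sec)
    inhuman_volume = actions_last_30_days > max_human_actions
    return habitual_pouncer or inhuman_volume

# A superspreader bot that retweets new links about two seconds after they
# appear, tens of thousands of times a month, versus an ordinary account.
print(looks_automated([2.1] * 500, actions_last_30_days=40_000))        # True
print(looks_automated([900, 4_000, 86_400], actions_last_30_days=120))  # False
```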
Former Facebook data scientist Sophie Zhang talked about this in a Facebook internal memo, reported by BuzzFeed: “thousands of inauthentic assets ... coordinated manipulation ... network[s] of more than a thousand actors working to influence ... The truth was, we simply didn’t care enough to stop them.” Despairing about the impact of computational propaganda on people around the world, Zhang went on to lament, “I have blood on my hands.”
Why do countries, and especially authoritarian regimes, create and promote propaganda? Why do they bother?
The authors of the book Spin Dictators write that, in recent years, because of globalization, post-industrial development, and technology changes, authoritarian regimes have “become less bellicose and more focused on subtle manipulation. They seek to influence global opinion, while co-opting and corrupting Western elites.”
Much of this is simply that, in recent decades, it has become cheaper and more effective to maintain power through manipulation and propaganda, partly because of the low cost of communication such as disinformation campaigns on social media, and partly because the economic benefits of openness raise the costs of using violence.
“Rather than intimidating citizens into submission, they use deception to win the people over.” Nowadays, propaganda is easier and cheaper. “Their first line of defense, when the truth is against them, is to distort it. They manipulate information ... When the facts are good, they take credit for them; when bad, they have the media obscure them when possible and provide excuses when not. Poor performance is the fault of external conditions or enemies ... When this works, spin dictators are loved rather than feared.”
Nowadays, it is cheaper to become loved than feared. “Spin dictators manipulate information to boost their popularity with the general public and use that popularity to consolidate political control, all while pretending to be democratic.”
While not all manipulation of wisdom of the crowd algorithms comes from state actors, adversarial states are a big problem: “The Internet allows for low-cost, selective censorship that filters information flows to different groups.” Propaganda online is cheap. “Social networks can be hijacked to disseminate sophisticated propaganda, with pitches tailored to specific audiences and the source concealed to increase credibility. Spin dictators can mobilize trolls and hackers ... a sophisticated and constantly evolving tool kit of online tactics.”
Unfortunately, internet “companies are vulnerable to losing lucrative markets,” so they are not always quick to act when they discover countries manipulating their rankers and recommender algorithms; authoritarian governments often play to this fear by threatening retaliation or loss of future business in the country.
Because “the algorithms that decide what goes viral” are vulnerable to shilling, it is also easy for spin dictators to “use propaganda to spread cynicism and division.” And “if Western publics doubt democracy and distrust their leaders, those leaders will be less apt to launch democratic crusades around the globe.” Moreover, they can spread the message that “U.S.-style democracy leads to polarization and conflict” and corruption. This reduces the threats to an authoritarian leader and reinforces their own popularity.
Because the manipulation consists entirely of adversaries trying to increase their own visibility, downranking or removing accounts involved in computational propaganda carries little business risk. New accounts and any account involved in shilling, coordination, or propaganda could largely be ignored for the purpose of algorithmic amplification, and repeat offenders could be banned entirely.
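As a sketch of what that could look like, engagement might only count toward algorithmic amplification when it comes from established accounts in good standing. The fields, thresholds, and weights below are hypothetical, not any platform’s actual policy:

```python
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    age_days: int
    prior_manipulation_strikes: int

def amplification_weight(account: Account,
                         min_age_days: int = 30,
                         ban_after_strikes: int = 3) -> float:
    """Weight this account's likes and shares carry in ranking and trending."""
    if account.prior_manipulation_strikes >= ban_after_strikes:
        return 0.0   # repeat offender: banned from influencing amplification
    if account.age_days < min_age_days:
        return 0.0   # brand-new account: can post, but cannot boost reach
    if account.prior_manipulation_strikes > 0:
        return 0.25  # previously flagged: heavily discounted
    return 1.0       # established account in good standing: full vote

votes = sum(amplification_weight(a) for a in [
    Account("longtime_user", age_days=2000, prior_manipulation_strikes=0),
    Account("new_sockpuppet", age_days=2, prior_manipulation_strikes=0),
    Account("known_troll", age_days=400, prior_manipulation_strikes=5),
])
print(votes)  # 1.0 -- only the established account counts toward amplification
```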
Computational propaganda exists because it is cost effective to do at large scale. Increasing the cost of propaganda reaching millions of people may be enough to vastly reduce its impact. As Sinan Aral writes in the book The Hype Machine, “We need to cut off the financial returns to spreading misinformation and reduce the economic incentive to create it in the first place.”
While human susceptibility to propaganda is difficult to solve, a big part of the problem of computational propaganda on the internet today comes down to how easy it is for adversaries to manipulate wisdom of the crowd algorithms and have their propaganda cheaply and efficiently amplified.
Writing in the Washington Post, Will Oremus blamed recommendation and other algorithms for making it far too easy for the bad guys. “The problem of misinformation on social media has less to do with what gets said by users than what gets amplified — that is, shown widely to others — by platforms’ recommendation software,” he said. Raising the cost of manipulating the recommendation engine is key to reducing the effectiveness of computational propaganda.
Wisdom of the crowds depends on the crowd consisting of independent voices voting independently. When that assumption is violated, adversaries can force the algorithms to recommend whatever they want. Computational propaganda uses a combination of bots and many controlled accounts, along with so-called “useful idiot” shills, to efficiently and effectively manipulate trending, ranker, and recommender algorithms.
Allowing their platforms to be manipulated by computational propaganda makes the experience on the internet worse. University of Oxford researchers found that “globally, disinformation is the single most important fear of internet and social media use and more than half (53%) of regular internet users are concerned about disinformation [and] almost three quarters (71%) of internet users are worried about a mixture of threats, including online disinformation, fraud and harassment.” At least in the long-term, it is in everyone’s interest to reduce computational propaganda.
When adversaries have their bots and coordinated accounts like, share, and post, none of that is authentic activity. None of it shows that people actually like the content. None of that content is actually popular or interesting. It is all manipulation of the algorithms, and it only serves to make relevance and the experience worse.