Friday, December 08, 2023

Book excerpt: Manipulating likes, comments, shares, and follows

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

“The systems are phenomenally easy to game,” explained Stanford Internet Observatory’s Renee DiResta.

The fundamental idea behind the algorithms used by social media is that “popular content, as defined by the crowd” should rise to the top. But “the crowd doesn’t have to be real people.”

In fact, adversaries can get these algorithms to feature whatever content they want. The process is easy and cheap, just pretend to be many people: “Bots and sockpuppets can be used to manipulate conversations, or to create the illusion of a mass groundswell of grassroots activity, with minimal effort.”

Whatever they want — whether it is propaganda, scams, or just flooding the zone with disparate and conflicting misinformation — can appear to be popular, which trending, ranking, and recommender algorithms will then dutifully amplify.

“The content need not be true or accurate,” DiResta notes. All this requires is a well-motivated small group of individuals pretending to be many people. “Disinformation-campaign material is spread via mass coordinated action, supplemented by bot networks and sockpuppets (fake people).”

Bad actors can amplify propaganda on a massive scale, reaching millions, cheaply and easily, from anywhere in the world. “Anyone who can gather enough momentum from sharing, likes, retweets, and other message-amplification features can spread a message across the platforms’ large standing audiences for free,” DiResta continued in an article for Yale Review titled "Computational Propaganda": “Leveraging automated accounts or fake personas to spread a message and start it trending creates the illusion that large numbers of people feel a certain way about a topic. This is sometimes called ‘manufactured consensus’.”

Another name for this is astroturfing: feigning popularity with a fake crowd of shills. It is not authentic, only the illusion of grassroots support.

There are even businesses set up to provide the necessary shilling, hordes of fake people on social media available on demand to like, share, and promote whatever you may want. As described by Sarah Frier in the book No Filter: “If you searched [get Instagram followers] on Google, dozens of small faceless firms offered to make fame and riches more accessible, for a fee. For a few hundred dollars, you could buy thousands of followers, and even dictate exactly what these accounts were supposed to say in your comments.”

Frier described the process in more detail. “The spammers ... got shrewder, working to make their robots look more human, and in some cases paying networks of actual humans to like and comment for clients.” They found “dozens of firms” offering these services of “following and commenting” to make content falsely appear popular and thereby earn free amplification from the platforms' wisdom of the crowd algorithms. “It was quite easy to make more seemingly real people.”

In addition to creating fake people by the thousands, it is easy to find real people willing to be paid to shill, some of whom would even “hand over the password credentials” for their account, allowing the propagandists to shill through it whenever they wished. For example, there were sites where bad actors could “purchase followers and increase engagement, like Kicksta, Instazood, and AiGrow. Many are still running today.” And in discussion groups, it was easy to recruit people who, for some compensation, “would quickly like and comment on the content.”

Bad actors manipulate likes, comments, shares, and follows because it works. When wisdom of the crowd algorithms look for what is popular, they pick up all these manipulated likes and shares, thinking they are real people acting independently. When the algorithms feature manipulated content, bad actors get what is effectively free advertising, the coveted top spots on the page, seen by millions of real people. This visibility, this amplification, can be used for many purposes, including foreign state-sponsored propaganda and scams out to swindle people.

Professor Fil Menczer studies misinformation and disinformation on social media. In our interview, he pointed out that it is not just wisdom of the crowd algorithms that fixate on popularity, but a “cognitive/social” vulnerability that “we tend to pay attention to items that appear popular … because we use the attention of other people as a signal of importance.”

Menczer explained: “It’s an instinct that has evolved for good reason: if we see everyone running we should run as well, even if we do not know why.” Generally, it does often work to look at what other people are doing. “We believe the crowd is wise, because we intrinsically assume the individuals in the crowd act independently, so that the probability of everyone being wrong is very low.”

But this is subject to manipulation, especially online on social media “because one entity can create the appearance of many people paying attention to some item by having inauthentic/coordinated accounts share that item.” That is, if a few people can pretend to be many people, they can create the appearance of a popular trend, and fool our instinct to follow the crowd.
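Menczer's point about independence can be made concrete with a toy calculation. Assume each genuine voter is independently right 60% of the time (an invented number for illustration): a crowd of 101 is almost never wrong as a majority. But add a coordinated bloc of 30 sockpuppets that always votes the same wrong way, and the majority is usually wrong:

```python
from math import comb

def p_majority_wrong(n, p_correct):
    """Probability that a majority of n independent voters is wrong,
    when each voter is independently correct with probability p_correct."""
    # The majority is wrong when at most n // 2 voters are correct.
    return sum(comb(n, k) * p_correct**k * (1 - p_correct)**(n - k)
               for k in range(n // 2 + 1))

def p_majority_wrong_with_bloc(n, p_correct, bloc):
    """Same crowd of n voters, except `bloc` coordinated sockpuppets
    always vote the same wrong way; only n - bloc voters are independent."""
    honest = n - bloc
    # All correct votes must come from the honest voters.
    return sum(comb(honest, k) * p_correct**k * (1 - p_correct)**(honest - k)
               for k in range(min(honest, n // 2) + 1))

# 101 independent voters, each right 60% of the time: the majority
# is wrong only a couple percent of the time.
print(round(p_majority_wrong(101, 0.6), 3))

# Add 30 coordinated sockpuppets to that same crowd: the majority
# is now wrong most of the time.
print(round(p_majority_wrong_with_bloc(131, 0.6, 30), 3))
```

The sockpuppets do not even need to outnumber the honest voters; correlated votes break the independence assumption that makes the crowd wise in the first place.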

To make matters worse, there often can be a vicious cycle where some people are manipulated by bad actors, and then their attention, their likes and shares, is “further amplified by algorithms.” Often, it is enough to merely start some shilled content trending, because “news feed ranking algorithms use popularity/engagement signals to determine what is interesting/engaging and then promote this content by ranking it higher on people’s feeds.”

Adversaries manipulating the algorithms can be clever and patient, sometimes building up their controlled accounts over a long period of time. One low cost method of making a fake account look real and useful is to steal viral content and share it as your own.

In an article titled “Those Cute Cats Online? They Help Spread Misinformation,” New York Times reporters described one method of how new accounts manage to quickly gain large numbers of followers. The technique involves reposting popular content, such as memes that previously went viral, or cute pictures of animals: “Sometimes, following a feed of cute animals on Facebook unknowingly signs [people] up” for misinformation. “Engagement bait helped misinformation actors generate clicks on their pages, which can make them more prominent in users’ feeds in the future.”

Controlling many seemingly real accounts, especially accounts that have real people following them to see memes and cute pictures of animals, allows bad actors to “act in a coordinated fashion to increase influence.” The goal, according to researchers at Indiana University, is to create a network of controlled shills, many of which might be unwitting human participants, that are “highly coordinated, persistent, homogeneous, and fully focused on amplifying” scams and propaganda.

This is not costless for social media companies. Not only are people directly misled, and even sometimes pulled into conspiracy theories and scams, but amplifying manipulated content including propaganda rather than genuinely popular content will “negatively affect the online experience of ordinary social media users” and “lower the overall quality of information” on the website. Degradation of the quality of the experience can be hard for companies to see, only eventually showing up in poor retention and user growth when customers get fed up and leave in disgust.

Allowing fake accounts, manipulation of likes and shares, and shilling of scams and propaganda may hurt the business in the long-term, but, in the short-term, it can mean advertising revenue. As Karen Hao reported in MIT Technology Review, “Facebook isn’t just amplifying misinformation. The company is also funding it.” While some adversaries manipulate wisdom of the crowd algorithms in order to push propaganda, some bad actors are in it for the money.

Social media companies allowing this type of manipulation does generate revenue, but it also reduces the quality of the experience, filling the site with unoriginal content, republished memes, and scams. Hao detailed how it works: “Financially motivated spammers are agnostic about the content they publish. They go wherever the clicks and money are, letting Facebook’s news feed algorithm dictate which topics they’ll cover next ... On an average day, a financially motivated clickbait site might be populated with ... predominantly plagiarized ... celebrity news, cute animals, or highly emotional stories—all reliable drivers of traffic. Then, when political turmoil strikes, they drift toward hyperpartisan news, misinformation, and outrage bait because it gets more engagement ... For clickbait farms, getting into the monetization programs is the first step, but how much they cash in depends on how far Facebook’s content-recommendation systems boost their articles.”

The problem is that this works. Adversaries have a strong incentive to manipulate social media’s algorithms if it is easy and profitable.

But “they would not thrive, nor would they plagiarize such damaging content, if their shady tactics didn’t do so well on the platform,” Hao wrote. “One possible way Facebook could do this: by using what’s known as a graph-based authority measure to rank content. This would amplify higher-quality pages like news and media and diminish lower-quality pages like clickbait, reversing the current trend.” The idea is simple, that authoritative, trustworthy sources should be amplified more than untrustworthy or spammy sources.
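Hao's “graph-based authority measure” is the idea behind PageRank-style scoring: a page is authoritative if other authoritative pages point to it. Here is a minimal sketch with an invented toy link graph (the page names and damping value are illustrative assumptions, not any platform's actual system):

```python
def authority_scores(links, damping=0.85, iterations=50):
    """Minimal PageRank-style authority scores over a link graph.
    `links` maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    scores = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                share = damping * scores[page] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling page: spread its score evenly over all pages.
                for p in pages:
                    new[p] += damping * scores[page] / n
        scores = new
    return scores

# Invented toy graph: two established outlets cite each other and an
# original report; a clickbait page links out but earns no links back.
links = {
    "outlet_a": ["outlet_b", "report"],
    "outlet_b": ["outlet_a", "report"],
    "report": [],
    "clickbait": ["outlet_a"],
}
scores = authority_scores(links)
print(min(scores, key=scores.get))  # prints "clickbait"
```

In this toy graph the clickbait page, which nothing reputable links to, ends up with the lowest authority, so a ranker could discount the engagement it generates accordingly.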

Broadly, this type of manipulation is spam, much like the spam technology companies have dealt with for years in email and on the Web. If social media spam were not cost-effective, it would not exist. As with web spam and email spam, the key is to make social media spam less effective and less efficient. As Hao suggested, manipulating wisdom of the crowd algorithms could be made less profitable by viewing likes and shares from less trustworthy accounts with considerable skepticism. If the algorithms did not amplify this content as much, it would be much less lucrative for spammers.
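One way that skepticism might be operationalized, as a rough sketch: weight each like or share by a trust score for the account it came from. The account names, trust values, and counts below are invented for illustration:

```python
def weighted_popularity(likes, trust):
    """Popularity where each like is weighted by the trust score of the
    account behind it (0.0 = certain sockpuppet, 1.0 = fully trusted)."""
    return sum(trust.get(account, 0.0) for account in likes)

# Invented trust scores: 500 freshly created sockpuppets barely count,
# while 80 long-lived accounts with organic history count nearly in full.
trust = {f"sock{i}": 0.05 for i in range(500)}
trust.update({f"user{i}": 0.9 for i in range(80)})

shilled_post_likes = [f"sock{i}" for i in range(500)]
organic_post_likes = [f"user{i}" for i in range(80)]

print(round(weighted_popularity(shilled_post_likes, trust), 1))  # 25.0
print(round(weighted_popularity(organic_post_likes, trust), 1))  # 72.0
```

The shilled post's 500 purchased likes now count for less than the organic post's 80, so buying engagement stops being worth the spammer's money.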

Inside of Facebook, data scientists proposed something similar. Billy Perrigo at Time magazine reported that Facebook “employees had discovered that pages that spread unoriginal content, like stolen memes that they’d seen go viral elsewhere, contributed to just 19% of page-related views on the platform but 64% of misinformation views.” Facebook data scientists “proposed downranking these pages in News Feed ... The plan to downrank these pages had few visible downsides ... [and] could prevent all kinds of high-profile missteps.”

What the algorithms show is important. The algorithms can amplify a wide range of interesting and useful content that enhances discovery and keeps people on the platform.

Or the algorithms can amplify manipulated content, including hate speech, spam, scams, and misinformation. That might make people click now in outrage, or perhaps fool them for a while, but it will eventually cause people to leave in disgust.

Tuesday, December 05, 2023

Book excerpt: Bonuses and promotions causing bad incentives

(This is an excerpt from my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Bonuses are a powerful incentive, and technology companies are using them more than ever. Most technology companies cap salaries and instead deliver much of their compensation to employees through bonuses and stock grants.

These bonuses are often tied to key metrics. For example, imagine that if you deploy a change to the recommendation algorithms that boosts revenue by a fraction of a percent, you would get the maximum bonus, a windfall of a million dollars, into your pocket.

What are you going to do? You’re going to try to get that bonus. In fact, you’ll do anything you can to get that bonus.

The problem comes when the criteria for what earns the bonus aren’t exactly correct. It doesn’t matter if they are mostly correct — increasing revenue is mostly correct as a goal — what matters is whether there is any way, any way at all, to get that bonus in a way that doesn’t help the company and its customers.

Imagine you find a way to increase revenue by biasing the recommendations toward outright scams, snake oil salesmen selling fake cures to the desperate. Just a twiddle to the algorithms and those scams show up a bit more often, and that nudges revenue just that much higher, at least when you tested it for a couple of days.

Do you roll out this new scammy algorithm to everyone? Should everyone see more of these scams? And what happens to customers, and the company, if people see all these scams?

But that bonus. That tasty, tasty bonus. One million dollars. Surely, if you weren’t supposed to do this, they wouldn’t give you that bonus. Would they? This has to be the right thing. Isn’t it?

People working within technology companies have to make decisions like this every day. Examples abound of ways to generate more revenue that ultimately are harmful to the company, including increasing the size and number of paid promotions, featuring salacious or otherwise inappropriate content, using deceptive sales pitches, promoting lower-quality items where you receive compensation, spamming people with takeover or pop-up advertising, and stoking strong emotions such as hatred.

As an article in Wired titled “15 Months of Fresh Hell Inside Facebook” described, this is a real problem. There easily can be “perverse incentives created by [the] annual bonus program, which pays people in large part based on the company hitting growth targets.”

“You can do anything, no matter how crazy the idea, as long as you move the goal metrics,” added Facebook whistleblower Frances Haugen. If you tell people their bonus depends on moving goal metrics, they will do whatever it takes to move those goal metrics.

This problem is why some tech companies reject using bonuses as a large part of their compensation. As Netflix founder and CEO Reed Hastings explained, “The risk is that employees will focus on a target instead of spotting what’s best for the company in the present moment.”

When talking about bonuses in our interview, a former executive who worked at technology startups gave the example of teams meeting their end-of-quarter quotas by discounting, which undermines pricing strategy and can hurt the company in the long term. He also told of an executive who forced through a deal that was bad for the company because signing it ensured he hit his quarterly licensing goal and got his bonus. When challenged by the CEO, this executive defended his choice by saying he was not given the luxury of long-term thinking.

“We learned that bonuses are bad for business,” Netflix CEO Reed Hastings said. “The entire bonus system is based on the premise that you can reliably predict the future, and that you can set an objective in any given moment that will continue to be important down the road.”

The problem is that people will work hard to get a bonus, but it is hard to set criteria for a bonus that cannot be abused in some way. People will try many, many things seeking to find something that wins the windfall the company is dangling in front of them. Some of the innovations might be real. But others may actually cause harm, especially over long periods of time.

As Reed Hastings went on to say, what companies need to be able to do is “adapt direction quickly” and have creative freedom to do the right thing for the company, not to focus on what “will get you that big check.” It’s not just how much you pay people, it’s also how you pay them.

Similarly, the people working on changing and tuning algorithms want to advance in their careers. How people are promoted, who is promoted, and for what reason creates incentives. Those incentives ultimately change what wisdom of the crowd algorithms do.

If people are promoted for helping customers find and discover what they need and keeping customers satisfied, people inside the company have more incentive to target those goals. If people are promoted for getting people to click more regardless of what they are clicking, then those algorithms are going to get more clicks, so more people get those promotions.

In the book An Ugly Truth, the authors found Facebook “engineers were given engagement targets, and their bonuses and annual performance reviews were anchored to measurable results on how their products attracted more users or kept them on the site longer.” Performance reviews and promotions were tied with making changes that kept people engaged and clicking. “Growth came first,” they found. “It’s how people are incentivized on a day-to-day basis.”

Who gets good performance reviews and promotions determines which projects get done. If a project that reduces how often people see disinformation from adversaries is both hard and gets poor performance reviews for its team, many people will abandon it. If another project that promotes content that makes people angry gets its team promoted because they increased engagement, then others will look over and say, that looks easy, I can do that too.

In the MIT Technology Review article “How Facebook Got Addicted to Spreading Misinformation,” Karen Hao described the incentives: “With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.”

The optimization of these algorithms is a series of steps, each one a small choice, about what people should and shouldn’t do. Often, the consequences can be unintended, which makes it that much more important for executives to check frequently if they are targeting the right goals. As former Facebook Chief Security Officer Alex Stamos said, “Culture can become a straightjacket” and force teams down paths that eventually turn out to be harmful to customers and the company.

Executives need to be careful of the bonus and promotion incentives they create for how their algorithms are tuned and optimized. What the product does depends on what incentives teams have.

Monday, December 04, 2023

The failure of big data

For decades, the focus in machine learning has been big data.

More data beats better algorithms, suggested an influential 2001 result from Banko and Brill at Microsoft Research. For years afterward, most people found it roughly true that if you got more data, ML worked better.

Those days have come to an end. Nowadays, big data often is worse, because low quality or manipulated data wrecks everything.

There was a quiet assumption behind big data: any bad data mixed into the big data is noise that averages out. That is wrong for most real-world data, where the bad data is skewed.

This problem is acute with user behavior data, like clicks, likes, links, or ratings. ML trying to use user behavior is trying to do wisdom of crowds, summarizing the opinions of many independent sources to produce useful information.

Adversaries can purposely skew user behavior data. When they do, using that data will yield terrible results in ML algorithms because the adversaries are able to make the algorithms show whatever they like. That includes the important ranking algorithms for search, trending, and recommendations that we use every day to find information on the internet.

ML using behavior data is doing wisdom of crowds, and wisdom of crowds assumes the crowd is full of real, unbiased, non-coordinating voices. It doesn't work when the crowd is not real. When you are not sure, it is better to discard much of the data, throwing out anything not reliable.

Better data often beats big data if you measure by what is useful to people. ML needs data from reliable, representative, independent, and trustworthy sources to produce useful results. If you aren't sure about the reliability, throw that data out, even if you are throwing most of the data away in the end. Seek useful data, not big data.

Friday, December 01, 2023

Book excerpt: Manipulating customer reviews

(This is an excerpt from my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Amazon is the place people shop online. Over 40% of all US e-commerce spending was on Amazon in recent years.

Amazon also is the place for retailers to list their products for sale. Roughly 25% of all US e-commerce spending recently went through third-party marketplace sellers using the website to sell their goods. Amazon is the place for merchants wanting to be seen by customers.

Because the stakes are so high, sellers have a strong incentive to have positive reviews of their products. Customers not only look at the reviews before buying, but also filter what they search for based on the reviews.

“Reviews are meant to be an indicator of quality to consumers,” Zoe Schiffer wrote for The Verge. “[And] they also signal to algorithms whose products should rise to the top.”

For example, when a customer searches on Amazon for [headphones], there are tens of thousands of results. Most customers will only look at the first few of those results. The difference between being one of the top results for that search for headphones and being many clicks down the list can make or break a small manufacturer.

As Wired put it in an article titled “How Amazon’s Algorithms Curated a Dystopian Bookstore”: “Amazon shapes many of our consumption habits. It influences what millions of people buy, watch, read, and listen to each day. It’s the internet’s de facto product search engine — and because of the hundreds of millions of dollars that flow through the site daily, the incentive to game that search engine is high. Making it to the first page of results for a given product can be incredibly lucrative.”

But there is a problem. “Many curation algorithms can be gamed in predictable ways, particularly when popularity is a key input. On Amazon, this often takes the form of dubious accounts coordinating.”

The coordination of accounts often takes the form of paying people to write positive reviews whether they have used the item or not. It is not hard to recruit people to write a bogus positive review. A small payment and being allowed to keep the product for free is usually enough. There are even special discussion forums where people wait to be offered the chance to post a false positive review, ready and available recruits for the scam.

BuzzFeed described the process in detail in an investigative piece, “Inside Amazon’s Fake Review Economy.” They discuss “a complicated web of subreddits, invite-only Slack channels, private Discord servers, and closed Facebook groups.” They went on to detail how “sellers typically pay between $4 to $5 per review, plus a refund of the product ... [and] reviewers get to keep the item for free.”

Why do merchants selling on Amazon do this? As Nicole Nguyen explained in that BuzzFeed article, “Being a five-star product is crucial to selling inventory at scale in Amazon’s intensely competitive marketplace — so crucial that merchants are willing to pay thousands of people to review their products positively.”

Only one product can appear at the top of an Amazon search for [headphones], and that top result will be the one most customers see and buy. It is winner-take-all.
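A toy sketch shows why those bought reviews are worth paying for, assuming a naive ranker that simply sorts by average star rating (the product names and review counts are invented): a modest batch of purchased five-star reviews is enough to take the top slot from an honestly better-reviewed rival.

```python
def average_rating(reviews):
    return sum(reviews) / len(reviews)

# Invented catalog: one product with solid organic reviews, and a rival
# that bought 40 five-star reviews on top of a few real, unhappy ones.
catalog = {
    "honest_headphones": [5, 4, 4, 5, 3, 4, 5, 4, 4, 4],  # average 4.2
    "shilled_headphones": [2, 3, 2] + [5] * 40,            # average ~4.8
}

# A naive ranker that sorts purely by average star rating.
ranked = sorted(catalog, key=lambda p: average_rating(catalog[p]),
                reverse=True)
print(ranked[0])  # prints "shilled_headphones"
```

The handful of genuine low ratings on the shilled product is simply drowned out, which is why rankers need signals beyond the raw average, such as reviewer trust or verified purchases.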

“Reviews are a buyer’s best chance to navigate this dizzyingly crowded market and a seller’s chance to stand out from the crowd ... Online customer reviews are the second most trusted source of product information, behind recommendations from family and friends ... The best way to make it on Amazon is with positive reviews, and the best way to get positive reviews is to buy them.”

Because so few customers leave reviews, and even fewer leave positive reviews, letting the natural process take its course means losing to another less scrupulous merchant who is willing to buy as many positive reviews as they need. The stakes are high, and those who refuse to manipulate the reviews usually lose.

“Sellers trying to play by the rules are struggling to stay afloat amid a sea of fraudulent reviews,” Nguyen wrote. It is “really hard to launch a product without them.”

More recently, Facebook Groups have grown in popularity, generally and as a way to recruit people to write fake reviews. UCLA researchers described in detail how it works, finding “23 [new] fake review related groups every day. These groups are large and quite active, with each having about 16,000 members on average, and 568 fake review requests posted per day per group. Within these Facebook groups, sellers can obtain a five-star review that looks organic.” They found the cost of buying a fake review to be quite cheap, “the cost of the product itself,” because “the vast majority of sellers buying fake reviews compensate the reviewer by refunding the cost of the product via a PayPal transaction after the five-star review has been posted” with only a small number of the bad sellers also offering money in addition to a refund of the cost of the product.

Washington Post reporters also found “fraudulent reviews [often] originate on Facebook, where sellers seek shoppers on dozens of networks, including Amazon Review Club and Amazon Reviewers Group, to give glowing feedback in exchange for money or other compensation.”

You might think that manipulating reviews, and through fake reviews getting featured in search and in recommendations, would have some cost for sellers if they were caught. However, Brad Stone in Amazon Unbound found that “sellers [that] adopted deceitful tactics, like paying for reviews on the Amazon website” faced almost no penalties. “If they got caught and their accounts were shut down, they simply opened new ones.”

Manipulating reviews, search rankings, and recommendations hurts Amazon customers and, eventually, undermines trust in Amazon. Reviews have long been viewed as a useful and trusted way to figure out what to buy; fake reviews threaten to destroy that trust.

“It’s easy to manipulate ratings or recommendation engines, to create networks of sockpuppets with the goal of subtly shaping opinions, preying on proximity bias and confirmation bias,” wrote Stanford Internet Observatory’s Renee DiResta. Sockpuppets are fake accounts pretending to be real people. When bad actors create many sockpuppets, they can use those fake accounts to feign popularity and dominate conversations. “Intentional, deliberate, and brazen market manipulation, carried out by bad actors gaming the system for profit ... can have a profound negative impact.”

The bad guys manipulate ranking algorithms through a combination of fake reviews and coordinated activity between accounts. A group of people, all working together to manipulate the reviews, can change what algorithms like the search ranker or the recommendation engine think are popular. Wisdom of the crowd algorithms, including reviews, require all the votes to be independent, and coordinated shilling breaks that assumption.

Nowadays, Amazon seems to be saturated with fake reviews. The Washington Post found that “for some popular product categories, such as Bluetooth headphones and speakers, the vast majority of reviews appear to violate Amazon’s prohibition on paid reviews.”

This hurts both Amazon customers and other merchants trying to sell on Amazon. “Sellers say the flood of inauthentic reviews makes it harder for them to compete legitimately and can crush profits.” Added one retailer interviewed by the Washington Post, “These days it is very hard to sell anything on Amazon if you play fairly.”

Of course, this also means the reviews no longer indicate good products. Items with almost entirely 5-star reviews may be “inferior or downright faulty products.” Customers are “left in the dark” by “seemingly genuine reviews” and end up buying “products of shoddy quality.” As BuzzFeed warned, “These reviews can significantly undermine the trust that consumers and the vast majority of sellers and manufacturers place in Amazon, which in turn tarnishes Amazon’s brand.”

Long-term harm to customer trust could eventually lead people to shop on Amazon less. Consumer Reports, in an article titled “Hijacked Reviews on Amazon Can Trick Shoppers,” went as far as to warn against relying on the average review score at all: “Fraudulent reviews are a well-known pitfall for shoppers on Amazon ... never rely on just looking at the number of reviews and the average score ... look at not only good reviews, but also the bad reviews.”

Unfortunately, Amazon executives may have to see growth and sales problems, due to lack of customer trust in the reviews, before they are willing to put policies in place to change the incentives for sellers. For now, as Consumer Reports said, Amazon's customer reviews can no longer be trusted.

Tuesday, November 28, 2023

Book excerpt: The problem is bad incentives

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Incentives matter. “As long as your goal is creating more engagement,” said former Facebook data scientist Frances Haugen in a 60 Minutes interview, “you’re going to continue prioritizing polarizing, hateful content.”

Teams inside of the tech companies determine how the algorithms are optimized and what the algorithms amplify. People in teams optimize those algorithms for whatever goals they are given. Metrics and incentives the teams have inside the tech companies determine how wisdom of the crowd algorithms are optimized over time.

What the company decides is important and rewards determines how the algorithms are tuned. Metrics determine what wins A/B tests. Metrics decide what changes get launched to customers. Metrics determine who gets promoted inside these companies. When a company creates bad incentives by picking bad metrics, the algorithms will produce bad results.

What Facebook’s leadership prioritizes and rewards determines what people see on Facebook. “Facebook’s algorithm isn’t a runaway train,” Haugen said. “The company may not directly control what any given user posts, but by choosing which types of posts will be seen, it sculpts the information landscape according to its business priorities.” What the executives prioritize in what they measure and reward determines what types of posts people see on Facebook. You get what you measure.

“Mark has never set out to make a hateful platform. But he has allowed choices to be made where the side effects of those choices are that hateful, polarizing content gets more distribution and more reach,” Haugen said. Disinformation, misinformation, and scams on social media are “the consequences of how Facebook is picking out that content today.” The algorithms are “optimizing for content that gets engagement, or reaction.”

Who gets that quarterly bonus? It’s hard to have a long-term focus when the company offers large quarterly bonuses for hitting short-term engagement targets. In No Rules Rules, Netflix co-founder and CEO Reed Hastings wrote, “We learned that bonuses are bad for business.” He went on to say that executives are terrible at setting the right metrics for the bonuses and, even if they do, “the risk is that employees will focus on a target instead of spotting what’s best for the company.”

Hastings said that “big salaries, not merit bonuses, are good for innovation” and that Netflix does not use “pay-per-performance bonuses.” Though “many imagine you lose your competitive edge if you don’t offer a bonus,” he said, “We have found the contrary: we gain a competitive edge in attracting the best because we just pay all that money in salary.”

At considerable effort, Google, Netflix, and Spotify have shown that, when properly measured in long-running experiments, optimizing short-term metrics such as engagement or revenue hurts the company in the long run. For example, in a paper titled “Focus on the Long-term: It’s Better for Users and Business”, Google showed that optimizing for weekly ad revenue would put far too many ads in the product to maximize Google’s long-term ad revenue. Short-term metrics miss the most important goals for a company: growth, retention, and long-term profitability.

Short-term metrics and incentives overoptimize for immediate gains and ignore long-term costs. While companies and executives should have enough reasons to avoid bad incentives and metrics that hurt the company in the long-term, it is also true that regulators and governments could step in to encourage the right behaviors. As Foreign Policy wrote when talking about democracies protecting themselves from adversarial state actors, regulators could encourage social media companies to think beyond the next quarterly earnings report.

Regulators have struggled to understand how to help. Could they directly regulate algorithms? Attempts to do so have immediately hit the difficulty of crafting useful regulations for machine learning algorithms. But the problem is not the algorithm. The problem is people.

Companies want to make money. Many scammers and other bad actors also want to make money. The money is in the advertising.

Fortunately, the online ad marketplace already has a history of being regulated. Regulators in many countries maintain bans on certain types of ads, restrictions on others, and financial reporting requirements for advertising. Go after the money and you change the incentives.

Among those suggesting increasing regulation on social media advertising is the Aspen Institute Commission on Information Disorder. In their report, they suggest countries “require social media companies to regularly disclose ... information about every digital ad and paid post that runs on their platforms [and then] create a legal requirement for all social media platforms to regularly publish the content, source accounts, reach and impression data for posts that they organically deliver to large audiences.”

This would provide transparency to investors, the press, government regulators, and the public, allowing problems to be seen far earlier, and providing a much stronger incentive for companies themselves to prevent problems before having them disclosed.

The Commission on Information Disorder goes further, suggesting that, in the United States, the extension of Section 230 protections to advertising and algorithms that promote content is overly broad. They argue any content that is featured, either by paid placement advertising or by recommendation algorithms, should be more heavily scrutinized: “First, withdraw platform immunity for content that is promoted through paid advertising and post promotion. Second, remove immunity as it relates to the implementation of product features, recommendation engines, and design.”

Their report was authored by some of the world's leading experts on misinformation and disinformation. They say that “tech platforms should have the same liability for ad content as television networks or newspapers, which would require them to take appropriate steps to ensure that they meet the established standards for paid advertising in other industries.” They also say that “the output of recommendation algorithms” should not be considered user speech, which would enforce a “higher standard of care” when the company’s algorithms get shilled and amplify content “beyond organic reach.”

These changes would provide strong incentives for companies to prevent misinformation and propaganda in their products. The limitations on advertising would reduce the effectiveness of using advertising in disinformation campaigns. It also would reduce the effectiveness of spammers who opportunistically pile on disinformation campaigns, cutting into their efficiency and profitability. Raising the costs and reducing the efficiency of shilling will reduce the amount of misinformation on the platform.

Subject internet companies to the same regulations on advertising that television networks and newspapers face. Regulators are already familiar with following the money, and even faster enforcement and larger penalties under existing laws would help. Changing where revenue comes from may encourage better incentives and metrics within tech companies.

“Metrics can exert a kind of tyranny,” former Amazon VP Neil Roseman said in our interview. Often teams “don’t know how to measure a good customer experience.” And different teams may have “metrics that work against each other at times” because simpler, short-term metrics often “narrow executive focus to measurable input/outputs of single systems.” A big problem is that “retention (and long-term value) are long-term goals which, while acknowledged, are just harder for people to respond to than short-term.”

Good incentives and metrics focus on the long term: what is important to the business, such as retention and growth. Short-term incentives and metrics, by contrast, create a vicious cycle that compounds as the algorithms are optimized over time.

Monday, November 27, 2023

Tim O'Reilly on algorithmic tuning for exploitation

Tim O'Reilly, Mariana Mazzucato, and Ilan Strauss have three working papers on Amazon's ability to extract unusual profits from its customers. The core idea in all three is that Amazon has become the default place to shop online for many people. So, when Amazon changes its site in ways that earn Amazon higher profits but hurt consumers, it takes work for people to figure that out and shop elsewhere.

The papers criticize the common assumption that people will quickly switch to shopping elsewhere if the Amazon customer experience deteriorates. Realistically, people are busy. People have imperfect information and limited time, and it takes effort to find another place to shop. At least up to some limit, people may tolerate a familiar but substantially deteriorated experience for quite a while.

For search, it takes effort for people to notice that they are being shown lots of ads, that less reliable third-party sellers are promoted over less profitable but more relevant options, and that the most useful options aren't always first. And then it takes yet more effort to switch to other online stores. So Amazon is able to extract extraordinary profits in ways less dominant online retailers can't get away with.

But I do have questions about how far Amazon can push this. How long can Amazon get away with excessive advertising and lower quality? Do consumers tire of it over time and move on? Or do they put up with it forever as long as the pain is below some threshold?

Take an absurd extreme. Imagine that Amazon thought it could maximize its revenue and profits by showing only ads and only the most profitable ads for any search regardless of the relevance of those ads to the search. Clearly, that extreme would not work. The search would be completely useless and consumers would go elsewhere very rapidly.

Now back off from that extreme, adding back more relevant ads and more organic results. At what point do consumers stay at Amazon? And do they just stay at Amazon or do they slowly trickle away?

I agree time and cognitive effort, as well as Amazon Prime renewing annually, raise switching costs. But when will consumers have had enough? Do consumers only continue using Amazon with all the ads until they realize the quality has changed? When does brand and reputation damage accumulate to the point that consumers start trusting Amazon less, shopping at Amazon less, and expending the effort of trying alternatives?

I think one model of customer attrition is that every time customers notice a bad experience, they have some probability of using Amazon less in the future. The more bad experiences they have, the faster the damage to long-term revenue. Under this model, even the level of ads Amazon has now is causing slow damage to Amazon. Amazon execs may not notice because the damage is over long periods of time and hard to attribute directly back to the poor quality search results, but the damage is there. This is the model I've seen used by some others, such as Google Research in their "Focus on the Long-term" paper.

Another model might be that consumers are captured by dominant companies such as Amazon and will not pay the costs of switching until they hit some threshold. That is, most customers will refuse to try alternatives until it is completely obvious that switching is worth the effort. This model assumes that Amazon can exploit customers for a very long time without losing them. There is some extreme where that breaks, but only at the threshold, not before.
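To make the difference concrete, here is a toy sketch of both models in Python. All the numbers are illustrative assumptions, not estimates of Amazon's actual attrition.

```python
# Two toy models of customer attrition (all numbers illustrative, not
# estimates of Amazon's actual retention).

def gradual_retention(bad_experiences_per_month, churn_prob_per_bad, months):
    """Model 1: each bad experience independently loses the customer
    with some small probability, so damage compounds slowly over time."""
    keep_per_month = (1 - churn_prob_per_bad) ** bad_experiences_per_month
    return keep_per_month ** months

def threshold_retention(quality, threshold):
    """Model 2: customers tolerate everything until quality drops
    below some threshold, then leave all at once."""
    return 1.0 if quality >= threshold else 0.0

# Model 1: ~4 bad experiences a month, each with a 1% chance of losing
# the customer, quietly erodes the customer base month after month.
for months in (6, 12, 24):
    print(months, "months:", round(gradual_retention(4, 0.01, months), 3))

# Model 2: retention looks perfect right up until the threshold is crossed.
for quality in (0.9, 0.6, 0.4):
    print("quality", quality, "->", threshold_retention(quality, threshold=0.5))
```

In the first model, the damage is happening now, just slowly; in the second, the dashboards look fine until the cliff.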

The difference between these two models matters a lot. If Amazon is experiencing substantial but slow costs from what they are doing right now, there's much more hope for them changing their behavior on their own than if Amazon is experiencing no costs from their bad behavior unless regulators impose costs externally. The solutions you get in the two scenarios are likely to be different.

I enjoyed the papers and found them thought-provoking. Give the papers a read, especially if you are interested in the recent discussions of enshittification started by Cory Doctorow. As Cory points out, this is a much broader problem than just Amazon. And we need practical solutions that companies, consumers, and policy makers can actually implement.

Sunday, November 26, 2023

Book excerpt: People determine what the algorithms do

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

The problem is people. These algorithms are built, tuned, and optimized by people. The incentives people have determine what these algorithms do.

If what wins A/B tests is what gets the most clicks, people will optimize the algorithms to get more clicks. If a company hands out bonuses and promotions when the algorithms get more clicks, people will tune the algorithms to get more clicks.

It doesn’t matter that what gets clicks and engagement may not be good for customers or the company in the long term. Lies, scams, and disinformation can be very engaging. Fake crowds generate a lot of clicks. None of it is real or true, and none of it helps customers or the business, but look at all those clicks. Click, click, click.

Identifying the right problem is the first step toward finding the right solutions. The problem is not algorithms. The problem is how people optimize the algorithms. Lies, scams, and disinformation thrive if people optimize for the short-term. Problems like misinformation are a symptom of a system that invites these problems.

Instead, invest in the long-term. Invest in removing fake crowds and in a good customer experience that keeps people around. Like any investment, this means lower profits in the short-term for higher profits in the long-term. Companies maximize long-term profitability by making sure teams are optimizing for customer satisfaction and retention.

It’s not the algorithm, it’s people. People are in control. People tune the algorithms in ways that cause harm, usually unintentionally and sometimes because they have incentives to ignore the harm. The algorithms do what people tell them to do.

To fix why the algorithms cause harm, look to the people who build the algorithms. Fixing the harm from wisdom of the crowd algorithms requires fixing why people allow those algorithms to cause harm.

Friday, November 17, 2023

Book excerpt: How companies build algorithms using experimentation

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Wisdom of the crowd algorithms shape what people see on the internet. Constant online experimentation shapes what wisdom of the crowd algorithms do.

Wisdom of crowds is the idea that summarizing the opinions of lots of independent people is often useful. Many machine learning algorithms use wisdom of the crowds, including rankers, trending, and recommenders on social media.

It's important to realize that recommendation algorithms are not magic. They don't come up with good recommendations out of thin air. Instead, they summarize what other people have found.

If summarizing what people found is all the algorithms do, why do they create harm? Why would algorithms amplify social media posts about scammy vitamin supplements? Why would algorithms show videos from white supremacists?

It is not how the algorithms are built, but how they are optimized. Companies change, twiddle, and optimize algorithms over long periods of time using online experiments called A/B tests. In A/B tests, some customers see version A of the website and some customers see version B.

Teams compare the two versions. Whichever version performs better, by whatever metrics the company chooses, is the version that later launches for all customers. This process repeats and repeats, slowly increasing the metrics.

Internet companies run tens of thousands of these online experiments every year. The algorithms are constantly tested, changing, and improving, getting closer and closer to the target. But what if you have the wrong target? If the goal is wrong, what the algorithms do will be wrong.
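As a sketch, the whole loop fits in a few lines of Python. The click probabilities here are made up, and real A/B systems add statistical significance tests and guardrails, but the decision rule is the heart of it: whichever version wins the chosen metric launches.

```python
import random
from statistics import mean

random.seed(0)  # deterministic for the example

def simulate_visitor(click_prob):
    """1 if this (simulated) visitor clicked, else 0."""
    return 1 if random.random() < click_prob else 0

# Hypothetical assumption: version B's change raises the true click rate
# from 10% to 11%. Each visitor is randomly assigned one version.
clicks_a = [simulate_visitor(0.10) for _ in range(50_000)]
clicks_b = [simulate_visitor(0.11) for _ in range(50_000)]

ctr_a, ctr_b = mean(clicks_a), mean(clicks_b)
winner = "B" if ctr_b > ctr_a else "A"
print(f"A: {ctr_a:.3f}  B: {ctr_b:.3f}  -> launch version {winner}")

# Note what this test does NOT measure: whether the extra clicks are good
# for users or for the business in the long term. The metric is the target.
```

Nothing in that loop asks whether the clicks came from content that helps people. If clicks are the metric, clicks are what the process optimizes.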

Let’s say you are at Facebook working on the news feed algorithm. The news feed algorithm is what picks what posts people see when they come to Facebook. And let’s say you are told to optimize the news feed for what gets the most clicks, likes, and reshares. What do you do? You will start trying changes to the algorithm and A/B testing them. Does this change get more clicks? What about this one? Through trial-and-error, you will find whatever makes the news feed get more engagement.

It is this trial-and-error process of A/B testing that drives what the algorithms do. Whatever the goal is, whatever the target, teams of software engineers will work hard to twiddle the algorithms to hit those goals. If your goal is the wrong goal, your algorithms will slowly creep toward doing the wrong thing.

So what gets the most clicks? It turns out scams, hate, and lies get a lot of clicks. Misinformation tends to provoke a strong emotional reaction. When people get angry, they click. Click, click, click.

And if your optimization process is craving clicks, it will show more of whatever gets clicks. Optimizing algorithms for clicks is what causes algorithms to amplify misinformation on the internet.

To find practical solutions, it's important to understand how powerful tech companies build their algorithms. It's not what you would expect.

Algorithms aren't invented so much as evolved. These algorithms are optimized over long periods of time, changing slowly to maximize metrics. That means the algorithms can unintentionally start causing harm.

It's easy for social media to fill with astroturf

Most underestimate how easy it is for social media to become dominated by astroturf. It's easy. All you need is a few people creating and controlling multiple accounts. Here's an example.

Let's say you have 100M real people using your social media site. Most of them post or comment infrequently, on average about once every 10 days. That looks like real social media activity from real people: most people lurk, and a few people post a lot.

Now let's say 1% of people shill their own posts using, on average, about 10 accounts they control, and each of those accounts posts or comments more frequently, about once a day. Most of these shills use a few burner accounts to like, share, and comment on their own posts. Some use paid services and unleash hundreds of bots to shill for them.

In this simple example, about 50% of comments and posts you see on the social media site will be artificially amplified by fake crowds. Astroturfed posts and comments will be everywhere. This is because most people don't post often, and the shills are much more active.

Play with the numbers. You'll find that if most people don't post or comment -- and most real people don't -- it's easy for people who post a lot from multiple accounts they control to dominate conversations and feign popularity. It's like a megaphone for social media.
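Here is the arithmetic from that example, spelled out so you can play with the numbers yourself:

```python
# The back-of-the-envelope arithmetic from the example above.
real_people = 100_000_000
real_posts_per_day = real_people // 10     # most people post about once every 10 days

shills = real_people // 100                # 1% of people shill their own posts
accounts_per_shill = 10
shill_posts_per_day = shills * accounts_per_shill  # each fake account posts daily

total_posts_per_day = real_posts_per_day + shill_posts_per_day
astroturf_fraction = shill_posts_per_day / total_posts_per_day
print(f"real: {real_posts_per_day:,}/day, astroturf: {shill_posts_per_day:,}/day")
print(f"fraction of activity that is astroturfed: {astroturf_fraction:.0%}")  # 50%

# The inflated user count the business might report to advertisers:
claimed_users = real_people + shills * accounts_per_shill
print(f"claimed accounts: {claimed_users:,}")  # 110,000,000
```

Change the posting rates or the number of accounts per shill and the fraction moves fast; a small, hyperactive minority easily swamps a quiet majority.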

Also important is how hard it is for a business to fix astroturf once it has (often unintentionally) gone down this path. This example social media site has 100M real people using it, but claims about 110M users. Real engagement is much smaller and concentrated in fewer highly engaged accounts than what the business pitches to advertisers. Once a company has allowed the problem to grow, it's tempting not to fix it.

Wednesday, November 15, 2023

Book excerpt: How some companies get it right

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

How do some companies fix their algorithms? In the last decade, wisdom of the crowds broke, corrupted by bad actors. But some found fixes that let them still use wisdom of the crowds.

Why was Wikipedia resilient to spammers and shills when Facebook and Twitter were not? Diving into how Wikipedia works, this book shows that Wikipedia is not a freewheeling anarchy of wild edits by anyone, but a place where the most reliable and trusted editors have most of the power. A small percentage of dedicated Wikipedia editors have much more control over Wikipedia than the others; their vigilance is the key to keeping out scammers and propagandists.

It's well known that when Larry Page and Sergey Brin first created Google, they invented the PageRank algorithm. Widely considered a breakthrough at the time, PageRank used links between web pages as if they were votes for what was interesting and popular. PageRank says a web page is useful if it has a lot of other useful web pages pointing to it.

Less widely known is that PageRank quickly succumbed to spam. Spammers created millions of web pages with millions of links all pointing to each other, deceiving the PageRank algorithm. Because of spam and manipulation, Google quickly replaced PageRank with the much more resilient TrustRank.

TrustRank only considers links from reliable and trustworthy web pages and mostly ignores links from unknown or untrusted sources. It works by propagating trust along links between web pages from known trusted pages to other pages. TrustRank made manipulating Google's search ranking algorithm much less effective and much more expensive for scammers.

TrustRank also works for social media. Start by identifying thousands of accounts that are known to be reliable, meaning that they are real people posting useful information, and thousands of accounts that are unreliable, meaning that they are known to be spammers and scammers. Then look at the accounts that those accounts follow, like, reshare, or engage with in any way. Those nearby accounts then get a bit of the goodness or badness, spreading through the engagement network. Repeat this over and over, allowing reliability and unreliability to spread across all the accounts, and you know how reliable most accounts are even if they are anonymous.

If you boost reliable accounts and mostly ignore unknown and unreliable accounts, fake accounts become less influential, and it becomes much less cost-effective for bad actors to create influential fake accounts.
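Here is a tiny sketch of that propagation in Python. The accounts, edges, damping factor, and iteration count are all illustrative assumptions; real systems run this over billions of accounts with far more care about seed selection, edge weights, and normalization.

```python
# Hypothetical engagement graph: account -> accounts it engages with
# (follows, likes, reshares). Names and edges are made up for the sketch.
graph = {
    "news_site": ["reporter", "reader"],
    "reporter": ["news_site", "reader"],
    "reader": ["news_site"],
    "spam_hub": ["bot1", "bot2"],
    "bot1": ["spam_hub", "bot2"],
    "bot2": ["spam_hub", "bot1"],
}

seeds = {"news_site": 1.0, "spam_hub": -1.0}  # known-reliable / known-spammer
trust = {account: seeds.get(account, 0.0) for account in graph}

DAMPING = 0.5  # how much trust decays with each hop
for _ in range(20):  # iterate until scores settle
    trust = {
        account: seeds.get(account, 0.0)
        + DAMPING * sum(trust[n] for n in neighbors) / len(neighbors)
        for account, neighbors in graph.items()
    }

# Accounts near the trusted seed end up positive; accounts entangled
# with the spam seed end up negative, even though they were never labeled.
print({account: round(score, 2) for account, score in sorted(trust.items())})
```

Notice that "reader" and the bots were never labeled by hand; their scores come entirely from who they engage with. That is what makes the approach scale.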

Companies that fixed their wisdom of the crowd algorithms also do not use engagement to optimize their algorithms. Optimizing for engagement will cause wisdom of the crowd algorithms to promote scams, spam, and misinformation. Lies get clicks.

It’s a lot of work to not optimize for engagement. Companies like Netflix, Google, YouTube, and Spotify put in considerable effort to run long experiments, often measuring people over months or even years. They then develop short-term proxy metrics that estimate long-term satisfaction and retention over shorter periods of time. One example is satisfied clicks: clicks where people are not immediately repelled and spend time with the content they see, ignoring clicks on scams or other low-quality content. These companies put in all this effort to develop good metrics because they know that optimizing algorithms for engagement eventually will hurt the company.
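As an illustration, here is one way a satisfied-clicks metric might be computed from a click log. The field names and the 30-second dwell threshold are assumptions for this sketch, not any company's actual definition.

```python
# A hypothetical click log; field names and the 30-second dwell threshold
# are assumptions for this sketch, not any company's actual definition.
clicks = [
    {"item": "news_article",   "dwell_seconds": 95,  "bounced": False},
    {"item": "miracle_cure",   "dwell_seconds": 3,   "bounced": True},
    {"item": "howto_video",    "dwell_seconds": 240, "bounced": False},
    {"item": "clickbait_post", "dwell_seconds": 5,   "bounced": True},
]

def is_satisfied(click, min_dwell=30):
    """Count a click only if the person stayed with the content and
    wasn't immediately repelled by what they found."""
    return not click["bounced"] and click["dwell_seconds"] >= min_dwell

satisfied = [c for c in clicks if is_satisfied(c)]
print("raw clicks:", len(clicks), " satisfied clicks:", len(satisfied))
# Optimizing for satisfied clicks stops rewarding content that people
# click on and immediately regret, such as scams and clickbait.
```

A raw click counter scores all four clicks the same; the proxy metric counts only the two where people actually stayed.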

Algorithms can be fixed if the executives leading the companies decide to fix them. Some companies have successfully prevented bad actors from manipulating wisdom of the crowds. The surprise: companies make much more money over the long run if they don't optimize algorithms for clicks.

Thursday, November 09, 2023

Book excerpt: Table of Contents

(This is the Table of Contents from my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Introduction: How good algorithms became a fountain of scams, shills, and disinformation — and what to do about it

Part I: The golden age of wisdom of crowds algorithms
Chapter 1: The rise of helpful algorithms
Chapter 2: How companies build algorithms using experimentation

Part II: The problem is not the algorithms
Chapter 3: Bad metrics: What gets measured gets done
Chapter 4: Bad incentives: What gets rewarded gets replicated
Chapter 5: Bad actors: The irresistible lure of an unlocked house

Part III: How to stop algorithms from amplifying misinformation
Chapter 6: How some companies get it right
Chapter 7: How to solve the problems with the algorithms
Chapter 8: Getting platforms to embrace long-term incentives and metrics
Chapter 9: Building a win-win-win for companies, users, and society

Conclusion: From hope to despair and back to hope

(That was the Table of Contents from a draft of my book. If you might be interested in this book, I'd love to know.)

Monday, October 30, 2023

Book excerpt: Overview from the book proposal

(This is an excerpt from the book proposal for my unpublished book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Without most of us even realizing it, algorithms determine what we see every day on the internet.

Computer programs pick which videos you’ll watch next on TikTok and YouTube. When you go to Facebook and Twitter, algorithms pick which news stories you’ll read. When it’s movie night, algorithms dictate what you’ll watch on Netflix based on what you watched in the past. Everywhere you look, algorithms decide what you see.

When done well, these computer programs have enormous value, helping people find what they need quickly and easily. It’s hard to find what you are looking for with so much out there. Algorithms filter through everything, tossing bad options away with wild abandon, to bring rare gems right to you.

Imagine you’re looking for a book. When you go to Amazon and start searching, algorithms are what filter through all the world’s books for you. But not only that. Algorithms also look at what books people seem most interested in and then bring you the very best choices based on what other customers bought. By quickly filtering through millions of options, computers help people discover things they never would have been able to find on their own.

These algorithms make recommendations in much the same way that you would. Suppose you have a friend who asks you to recommend a good book for her to read. You might ask yourself, what do you know about her? Does she like fiction or nonfiction? Which authors does she like? What books did she read in the past few months? With a little information about your friend’s tastes, you might narrow things down. Perhaps she would like this well-reviewed mystery book? It has some similar themes to a book she enjoyed last year.

Algorithms combine opinions, likes, and dislikes from millions of people. The seminal book The Wisdom of Crowds popularized the idea that combining the opinions of many random people often gives useful results. What algorithms do is bring together the wisdom of crowds at massive scale. One way they do this is by distilling thousands of customer reviews so you can easily gauge the average review of a movie or video game before you sink time and money into it. Another way is by showing you that customers who bought this also bought that. When algorithms pick what you see on the internet, they use wisdom of the crowds.

Something changed a few years ago. Wisdom of the crowds failed. Algorithms that use wisdom of the crowds started causing harm. Across the internet, algorithms that choose what people see started showing more spam, misinformation, and propaganda.

What happened? In the same way a swindler on a street corner will stack the crowd with collaborators who loudly shill the supposed wonders of their offerings, wisdom of the crowd algorithms got fooled into promoting misinformation, scams, and frauds. With the simple ease of creating many online accounts, a fraudster can pretend to be an entire crowd of people online. A fake crowd gives scammers a megaphone that they can use to amplify their own voice as they drown out the voices of others.

Search and recommendation algorithms across the internet were fooled by these fake crowds. Before the 2020 election in the United States, foreign adversaries posted propaganda to social media, then pretended to be large numbers of Americans liking and resharing, fooling the algorithms into amplifying their posts. 140 million people in the United States saw this propaganda, many of whom were voters. In 2019, the largest pages on social media for Christian Americans, such as “Be Happy Enjoy Life” and “Jesus is my Lord”, were controlled by foreign operatives pretending to be Americans. These troll farms shilled recommendation, search, and trending algorithms, getting top placement for their posts and high visibility for their groups, reaching 75 million people. Scammers manipulated wisdom of the crowd algorithms with shills to promote their bogus cures during the COVID-19 global pandemic. In 2021, the US Surgeon General was so alarmed by health misinformation on the internet that he warned of increased illness and death if it continued.

Misinformation and disinformation are now the biggest problems on the internet. It is cheap and easy for scammers and propagandists to get seen by millions. Just create a few hundred accounts, have them like and share your content to create the illusion of popularity, and wisdom of the crowd algorithms will amplify whatever you like. Even once companies realized the algorithms had gone wrong, many failed to fix them.

This book is about fixing misinformation on the internet by fixing the algorithms that promote misinformation. Misinformation, scams, and propaganda are ubiquitous on the internet. Algorithms including trending, recommendations, and search rankers amplify misinformation, giving it much further reach and making it far more effective.

But the reason why algorithms amplify misinformation is not what you think. As this book shows, the process of how big tech companies optimize algorithms is what causes those algorithms to promote misinformation. Diving deep inside the tech companies to understand how they build their algorithms is the key to finding practical solutions.

This book could only be written by an insider with an eye toward how the biggest tech companies operate. That’s because it’s necessary to not only understand the artificial intelligence technology behind the algorithms that pick what people see on the internet, but also understand the business incentives inside these companies when teams build and optimize these algorithms.

When I invented Amazon’s recommendation algorithm, our team was idealistic about what would happen next. We saw algorithms as a tool to help people. Find a great book. Enjoy some new music. Discover new things. No matter what you are looking for, someone out there probably already found it. Wisdom of the crowd algorithms share what people found with other people who might enjoy it. We hoped for an internet that would be a joyful playground of knowledge and discovery.

In the years since, and in my journeys through other tech companies, I have seen how algorithms can go terribly wrong. It can happen easily. It can happen unintentionally. Like taking the wrong path in a dark forest, small steps lead to bigger problems. When algorithms go wrong, we need experts like me who can see realistic ways to correct the root causes behind the problems.

Solutions to what is now the world’s algorithm problem require interdisciplinary expertise in business, technology, management, and policy. I am an artificial intelligence expert, invented Amazon’s recommendation algorithm, and have thirty-two patents on search and recommendation algorithms. I also have a Stanford MBA, worked with executives at Amazon, Microsoft, and Netflix, and am an expert on how tech companies manage, measure, and reward teams working on wisdom of the crowd algorithms. Past books have failed to offer solutions because authors have lacked the insider knowledge, and often the technical and business expertise, to solve the problems causing misinformation and disinformation. Only with a deep understanding of the technology and business will it be possible to find solutions that not only will work, but also will be embraced by business, government, and technology leaders.

This book walks readers through how these algorithms are built, what they are trying to do, and how they go wrong. I reveal what it is like day-to-day to work on these algorithms inside the biggest tech companies. For example, I describe how the algorithms are gradually optimized over time. That leads to the surprising conclusion that what the algorithms show people is determined not by the algorithms themselves, but by the metrics companies pick for judging whether the algorithms are doing their job well. I show how easy it is for attempts to improve algorithms to instead go terribly wrong. Seemingly unrelated decisions, such as how people are promoted, can not only cause algorithms to amplify misinformation, but also hurt customers and the long-term profitability of the company.

Readers need to know both why the algorithms caused harm and why some companies failed to fix the problems. By looking at what major tech companies have done and failed to do, readers see the root causes of the massive spread of misinformation and disinformation on the internet. Some companies have invested in fixing their algorithms and prospered. Some companies failed to fix their algorithms and suffered higher costs as misinformation and scams grew. By comparing companies that have had more success with those that have not, readers discover how some companies keep fraudsters from manipulating their algorithms and why others fail.

Other books have described misinformation and disinformation on the internet, but no other book offers practical solutions. This book explains why algorithms promote misinformation with key insights into what makes misinformation cost effective for fraudsters. This book describes what tempts giant tech companies to allow misinformation on their platforms and how that eventually hurts the companies and their customers. Importantly, this book provides strong evidence that companies would benefit from fixing their algorithms, establishing that companies make more money when they fix their algorithms to stop scams, propaganda, and misinformation. From this book, consumers, managers, and policy makers not only will know why algorithms go wrong, but also will be equipped with solutions and ready to push for change.

This is the story of what went wrong and how it can be fixed as told by people who were there. I bring together rare expertise to shine a light on how to solve the greatest problem on the internet today. This book is a guide inside how the world’s biggest technology companies build their algorithms, why those algorithms can go wrong, and how to fix it.

Friday, October 27, 2023

Book excerpt: The irresistible lure of an unlocked house

(This is an excerpt from drafts of my unpublished book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Bad incentives and bad metrics create an opportunity. They are what allow bad guys to come in and take root. Scammers and propagandists can take advantage of poorly optimized algorithms, making them promote whatever misinformation they like.

Adversaries outside of these companies see wisdom of the crowd algorithms as an opportunity for free advertising. By manipulating algorithms with fake crowds, such as an astroturf campaign of controlled accounts and bots pretending to be real people, bad actors can feign popularity. Wisdom of the crowds summarizes opinions of the crowd. If the crowd is full of shills, the opinions will be skewed in whatever direction the shills like.

There is a massive underground economy around purchasing five-star reviews on Amazon — as well as offering one-star reviews for competing products — that allows counterfeiters and fraudsters to purchase whatever reputation they like for questionable and even dangerous products. Third-party merchants selling counterfeit, fraudulent, or other illicit goods with very high profit margins buy reviews from these services, feigning high quality to unwitting Amazon customers. If they are caught, they simply create a new account, list all their items again, and buy more fake reviews.

Get-rich-quick scammers and questionable vitamin supplement dealers can buy fake crowds of bogus accounts on social media that like and share their false promises. Buying fake followers is a mature service now, with dealers offering access to thousands of accounts for a few hundred dollars. Scammers rely on these fake crowds shilling their wares to fool algorithms into promoting their scams.

Foreign operatives have buildings full of people, each employee sitting at a desk pretending to be hundreds of Americans at once. They spend long days at work on social media with their multitude of fake accounts, commenting, liking, following, and sharing, all with the goal of pushing their disinformation and propaganda. The propaganda effort was so successful that, by 2019, some of the largest pages on social media were controlled by foreign governments with interests not aligned with the United States. Using their multitude of fake accounts, they were able to fool social media algorithms into recommending their pages and posts. Hundreds of millions of Americans saw their propaganda.

It is cheap to buy fake crowds and swamp wisdom of the crowd algorithms with bogus data about what is popular. When the crowd isn’t real, the algorithms don’t work. Wisdom of the crowd relies on crowds of independent, real people. Fake crowds full of shills mean there is no wisdom in that crowd.
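To make the mechanics concrete, here is a toy sketch (my own illustration, not any platform's actual code) of how a naive popularity count is swamped by a few hundred controlled accounts:

```python
# A toy illustration of how a naive popularity score is swamped
# by a small fake crowd of controlled accounts.

def popularity(engagements):
    """Naive wisdom-of-the-crowd score: one vote per engagement event."""
    return len(engagements)

# 50 real people genuinely like an ordinary post.
real_post = [f"user{i}" for i in range(50)]

# A scam post gets 5 real likes plus 300 likes from controlled accounts.
scam_post = [f"user{i}" for i in range(5)] + [f"bot{i}" for i in range(300)]

# The naive ranker puts the scam post on top.
ranked = sorted([("real", real_post), ("scam", scam_post)],
                key=lambda item: popularity(item[1]), reverse=True)
print([name for name, _ in ranked])  # → ['scam', 'real']
```

A few hundred bogus accounts, costing almost nothing, outvote every real person in the crowd.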

When algorithms amplify scams and disinformation, it may increase a platform’s engagement metrics for the moment. But, in the long-run, the bad actors win and the company loses. It is easy for people inside of tech companies to unwittingly optimize their algorithms in ways that help scammers and propagandists and hurt customers.

Saturday, October 21, 2023

A summary of my book

My book is the untold story of the algorithms that shape our lives, how they went terribly wrong, and how to fix them.

Most people now have at least a vague idea that algorithms choose what we see on our favorite online platforms. On Amazon they recommend millions of products. On Facebook they predict whether we’re more likely to click on a cute animal video or a rant about Donald Trump. At Netflix, Spotify, Twitter, YouTube, Instagram, TikTok, and every other site on the internet, they serve billions of users with billions of recommendations. But most people don’t know how all those algorithms really work — or why in recent years they began filling our screens with misinformation.

Other books have described the abundant misinformation, scams, and propaganda on many platforms, but this is the first to offer practical fixes to misinformation and disinformation across the entire internet by focusing on how and why algorithms amplify harmful content. This book offers solutions to what has become the biggest problem on the internet, using insider knowledge from my 30 years of experience in artificial intelligence, recommender systems, search, advertising, online experimentation, and metrics, including many years at Amazon, Microsoft, and startups.

Many assume “the problem with algorithms” is a tech problem, but it’s actually an incentives problem. Solutions must begin with the incentives driving the executives who run platforms, the investors who fund them, the engineers who build and optimize algorithms, and the content creators who do whatever it takes to maximize their own visibility. Ultimately, this is a book about people and how people optimize algorithms.

Equipped with insider knowledge of why these algorithms do what they do, readers will finish this book with renewed hope and practical solutions, ready to push for change.

(this was a summary of my book, and I will be posting more excerpts from the book here)

Thursday, October 19, 2023

Book excerpt: The problem is fake crowds

(This is an excerpt from my book. Please let me know if you like it and want more.)

It is usually unintentional. Companies don’t intend for their websites to fill with spam. Companies don’t intend for their algorithms to amplify propagandists, shills, and scammers.

It can happen just from overlooking the problem, which then builds up over time. Bad actors come in, the problem grows and grows, and eventually it becomes difficult and costly to stop.

For the bad guys, the incentives are huge. Get your post trending, and a lot of people will see it. If your product is the first thing people see when they search, you will get a lot of sales. When algorithms recommend your content, many more people will see it. It’s like free advertising.

Adversaries will attack algorithms. They will pay people to offer positive reviews. They will create fake crowds consisting of hundreds of fake accounts, all together liking and sharing their brilliant posts, all together saying how great they are. If wisdom of the crowd algorithms treat these fake crowds as real, the recommendations will be shilled, spammy, and scammy.

Allow the bad guys to create fake crowds and the algorithms will make terrible recommendations. Algorithms try to help people find what they need. They try to show just the right thing to customers at just the right time. But fake crowds make that impossible. Facebook suffers from this problem. An internal study at Facebook looked at why Facebook couldn’t retain young adults. Young people consistently described Facebook as “boring, misleading, and negative” and complained that “they often have to get past irrelevant content to get to what matters.”

Customers won’t stick around if what they see is mostly useless scams. Nowadays, Facebook’s business has stalled because of problems with growth and retention, especially with young adults. Twitter's audience and revenue have cratered.

Bad, manipulated, shilled data means bad recommendations. People won’t like what they are seeing, and they won’t stay around.

Kate Conger wrote in the New York Times about why tech companies sometimes underestimate how bad problems with spam, misinformation, propaganda, and scams will get if neglected. In the early years of Twitter, “they believed that any reprehensible content would be countered or drowned out by other users.” Jason Goldman, who was very early at Twitter, described “a certain amount of idealistic zeal” that they all had, a belief that the crowds would filter out bad content and regulate discussion in the town square.

It wasn’t long until adversaries took advantage of their naiveté: “In September 2016, a Russian troll farm quietly created 2,700 fake Twitter profiles” which they used to shill and promote whatever content they liked, including attempting to manipulate the upcoming US presidential election.

On Facebook, “One Russian-run Facebook page, Heart of Texas, attracted hundreds of thousands of followers by cultivating a narrow, aggrieved identity,” Max Fisher wrote in The Chaos Machine. “‘Like if you agree,’ captioned a viral map with all other states marked ‘awful’ or ‘boring,’ alongside text urging secession from the morally impure union. Some posts presented Texas identity as under siege (‘Like & share if you agree that Texas is a Christian state’).”

Twitter was born around lofty goals of the power of wisdom of the crowds to fix problems. But the founders were naive about how bad the problems could get with bad actors creating fake accounts and controlling multiple accounts. By pretending to be many people, adversaries could effectively vote many times, and give the appearance of a groundswell of faked support and popularity to anything they liked. Twitter’s algorithms would then dutifully pick up the shilled content as trending or popular and amplify it further.

Twitter later “rolled out new policies that were intended to prevent the spread of misinformation,” started taking action against at least some of the bot networks and controlled accounts, and even “banned all forms of political advertising.” That early idealism that “the tweets must flow” and that wisdom of the crowds would take care of all problems was crushed under a flood of manipulated fake accounts.

Bad actors manipulate wisdom of the crowds because it is lucrative to do so. For state actors, propaganda on social media is cheaper than ever. Creating fake crowds feigns popularity for their propaganda, confuses the truth in a flood of claims and counterclaims, and silences opposition. For scammers, wisdom of the crowds algorithms are like free advertising. Just by creating a few hundred fake accounts or by paying others to help shill, they can wrap scams or outright fraud in a veneer of faked reliability and usefulness.

“Successfully gaming the algorithm can make the difference between reaching an audience of millions – or shouting into the wind,” wrote Julia Carrie Wong in the Guardian. Successfully manipulating wisdom of the crowds data tricks trending and recommender algorithms into amplifying. Getting into trending or the top search results, or getting recommended, by using fake and controlled accounts can be a lot cheaper and more effective than buying advertising.

“In addition to distorting the public’s perception of how popular a piece of content is,” Wong wrote, “fake engagement can influence how that content performs in the all-important news feed algorithm.” With fake accounts, bad actors can fake likes and shares, creating fake engagement and fake popularity, and fooling the algorithms into amplifying. “It is a kind of counterfeit currency in Facebook’s attention marketplace.”

“Fake engagement refers to things such as likes, shares, and comments that have been bought or otherwise inauthentically generated on the platform,” Karen Hao wrote in MIT Technology Review. It’s easy to do. “Fake likes and shares [are] produced by automated bots and used to drive up someone’s popularity.”

“Automation, scalability, and anonymity are hallmarks of computational propaganda,” wrote University of Oxford Professor Philip Howard in his recent book Lie Machines. “Programmers who set up vast networks” of shills and bots “have a disproportionate share of the public conversation because of the fake user accounts they control.” For example, “dozens of fake accounts all posing as engaged citizens, down-voting unsympathetic points of view and steering a conversation in the service of some ideological agenda — a key activity in what has come to be known as political astroturfing. Ordinary people who log onto these forums may believe that they are receiving a legitimate signal of public opinion on a topic when they are in effect being fed a narrative by a secret marketing campaign.” Fake crowds create a fake “impression that there is public consensus.” And by manipulating wisdom of the crowds algorithms, adversaries “control the most valuable resource possible … our attention.”

The most important part is at the beginning. Let’s say there is a new post full of misinformation. No one has seen it yet. What it needs is to look popular. What it needs is a lot of clicks, likes, and shares. If you control a few hundred accounts, all you need to do is have them all engage with your new post around the same time. And wow! Suddenly you look popular!

Real people join in later. It is true that real people share misinformation and spread it further. But the critical part is at the start. Fake crowds make something new look popular. It isn’t real. It’s not real people liking and sharing the misinformation. But it works. The algorithms see all the likes and shares. The algorithms think the post is popular. The algorithms amplify the misinformation. Once the algorithms amplify, a lot of real people see the shilled post. It is true that there is authentic engagement from real people. But most important is how everything got started, shilling using fake crowds.
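The timing matters because trending algorithms typically reward bursts of engagement. A hypothetical sketch, using an invented velocity-style score rather than any real platform's formula, shows why a coordinated burst from controlled accounts beats steady organic engagement:

```python
# Hypothetical sketch of why the first minutes matter: a trending score
# based on recent engagement velocity is easy to spike with a burst.

def trending_score(timestamps, now, window=3600):
    """Count engagement events within the last `window` seconds."""
    return sum(1 for t in timestamps if now - t <= window)

now = 100_000

# Organic post: 40 likes spread evenly over the past day.
organic = [now - i * 2000 for i in range(40)]

# Shilled post: 300 controlled accounts all engage within five minutes.
shilled = [now - i for i in range(300)]

# The shilled post dominates the trending window despite having
# almost no genuine audience.
assert trending_score(shilled, now) > trending_score(organic, now)
```

The organic post has real fans, but only a couple of its likes land inside the one-hour window; the coordinated burst lands all 300 at once, which is exactly what a velocity-based trending algorithm rewards.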

When adversaries shill wisdom of the crowd algorithms, they replace the genuinely popular with whatever they like. This makes the experience worse and eventually hurts growth, retention, and corporate profits. These long-term costs are subtle enough that tech companies often miss them until they become large.

Ranking algorithms use wisdom of the crowds to determine what is popular and interesting. Wisdom of the crowds requires independent opinions. You don't have independent opinions when there is coordinated shilling by adversaries, scammers, and propagandists. Faked crowds make trending, search, and recommendation algorithms useless. To be useful, the algorithms have to use what real people actually like.

Tuesday, October 17, 2023

Book excerpt: Mark as spam, the long fight to keep emails and texts useful

(This is an excerpt from my book. Please let me know if you like it and want more.)

The first email on the internet was sent in 1971. Back then, the internet was a small place used only by a few geeky researchers affiliated with ARPANET, an obscure project at the Department of Defense.

Oh how the internet has grown. Five billion people now use the internet, including nearly everyone in the United States, as well as most of the world. A lot of the time, we use the internet to communicate with our friends using email and text messaging.

As internet usage grew, so did the profit motive. The first email spam was sent in 1978, an advertisement for mainframe computers. By the mid-1990s, as more and more people started using the internet, email spam became ubiquitous. Sending a spam message to millions of people could get a lot of attention and earn spammers a lot of money. All it took was a small percentage of the people responding.

It got to the point that, by the early 2000s, email was becoming difficult to use because of the time-consuming distraction of dealing with unwanted spam. The world needed solutions.

The problem is aggravated when executives, often unwittingly, measure the goals of their marketing teams by whether people click on their emails, which has unintended harmful consequences. If you measure teams by how many clicks they get on their emails, the teams have a strong incentive to send as much email as possible. And that means customers get annoyed by all the email and start marking it as spam. This long-term cost – that you might not be able to send email to customers anymore if you send them too much – needs to be part of the goals of any team sending email to customers.
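As a rough illustration of that incentive problem, consider two hypothetical scoring functions, one counting only clicks and one charging an invented, purely illustrative cost for spam reports and unsubscribes:

```python
# A sketch of the email-metrics incentive problem. If teams are graded on
# clicks alone, blasting more email always looks better; adding a cost for
# spam complaints and unsubscribes flips the conclusion. The weights here
# are illustrative, not from any real company.

def clicks_only(stats):
    return stats["clicks"]

def long_term_score(stats, spam_cost=50, unsub_cost=20):
    # Each spam report or unsubscribe destroys future reach, so it is
    # weighted far more heavily than a single click is worth.
    return (stats["clicks"]
            - spam_cost * stats["spam_reports"]
            - unsub_cost * stats["unsubscribes"])

restrained = {"clicks": 900, "spam_reports": 2, "unsubscribes": 10}
blast      = {"clicks": 1200, "spam_reports": 40, "unsubscribes": 80}

assert clicks_only(blast) > clicks_only(restrained)          # blasting "wins"
assert long_term_score(restrained) > long_term_score(blast)  # but loses long-term
```

Under the clicks-only metric, the email blast looks like the winner; once the long-term cost of annoyed customers is in the metric, the restrained campaign wins.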

The bigger email spam problem was that spam worked for the bad guys. When bad actors can make money by sending spam emails, you get a lot of spam emails. Spammers could make a lot of money by evading spam filters. So they worked hard to trick spam filters by, for example, using misspellings to get past keyword detection.

In the early 2000s, email was dying under spam. It was bad out there. Spam filtering algorithms were in an arms race against bad actors who tried everything to get around them. Anti-spam algorithms filtered out spammy keywords, so the bad guys used misspelling and hordes of fake accounts to get back in that inbox. The good guys adapted to the latest tactics, then the bad guys found new tricks.

What finally fixed it was to make email spam unprofitable. If you never see spam, it is like it doesn't exist for you. Spammers spam because they make money. If it becomes more difficult to make money, there will be fewer spammers sending fewer scams to your inbox. But how can you make spam less profitable?

What worked was reputation. Much as with TrustRank for web pages, email from known spammers and from unknown senders tends to be unreliable, while reliable, well-known internet domains tend not to send spam. Reliable companies and real people should be able to send email. New accounts created on new internet domains, especially ones that have sent spam before, probably should not. Treating every email from unknown or unreliable sources with great suspicion, and having it skip the inbox, means most people nowadays rarely see email and text spam; it is merely an occasional nuisance.
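A minimal sketch of that idea, with made-up domains and scores rather than any real provider's logic, might look like:

```python
# Simplified sketch of reputation-based filtering: mail from domains with
# no track record, or with a history of complaints, skips the inbox.
# Domains and scores are invented for illustration.

reputation = {
    "bigbank.com": 0.95,        # long history, almost no complaints
    "newdomain123.biz": None,   # never seen before
    "knownspammer.net": 0.05,   # frequent spam reports
}

def route(sender_domain, threshold=0.5):
    score = reputation.get(sender_domain)
    if score is None or score < threshold:
        return "spam-folder"  # unknown or unreliable: treat with suspicion
    return "inbox"

assert route("bigbank.com") == "inbox"
assert route("newdomain123.biz") == "spam-folder"
assert route("knownspammer.net") == "spam-folder"
```

The key property is that a brand-new domain starts with no reputation and therefore no reach, which is what makes burning through throwaway domains unprofitable for spammers.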

Email spam is barely profitable these days for spammers. Reducing the payoff from spamming changes the economics of spam. To discourage bad behaviors, make them less profitable.

Monday, October 16, 2023

Cory Doctorow on enshittification

Another good piece by Cory on enshittification, with details about Facebook, some on how A/B testing optimizes for enshittification, and updated with how Musk's Twitter is impatiently racing to enshittify. An excerpt from Cory's piece:
Enshittification is the process by which a platform lures in and then captures end users (stage one), who serve as bait for business customers, who are also captured (stage two) whereupon the platform rug-pulls both groups and allocates all the value they generate and exchange to itself (stage three).

It was a long con. Platform operators and their investors have been willing to throw away billions convincing end-users and business customers to lock themselves in until it was time for the pig-butchering to begin. They financed expensive forays into additional features and complementary products meant to increase user lock-in, raising the switching costs for users who were tempted to leave.

Tech platforms are equipped with a million knobs on their back-ends, and platform operators can endlessly twiddle those knobs, altering the business logic from moment to moment, turning the system into an endlessly shifting quagmire where neither users nor business customers can ever be sure whether they're getting a fair deal.

For users, this meant that their feeds were increasingly populated with payola-boosted content from advertisers and pay-to-play publishers ... Twiddling lets Facebook fine-tune its approach. If a user starts to wean themself off Facebook, the algorithm (TM) can put more content the user has asked to see in the feed. When the user's participation returns to higher levels, Facebook can draw down the share of desirable content again, replacing it with monetizable content. This is done minutely, behind the scenes, automatically, and quickly. In any shell game, the quickness of the hand deceives the eye.

This is the final stage of enshittification: withdrawing surpluses from end-users and business customers, leaving behind the minimum homeopathic quantum of value for each needed to keep them locked to the platform, generating value that can be extracted and diverted to platform shareholders.

But this is a brittle equilibrium to maintain. The difference between "God, I hate this place but I just can't leave it" and "Holy shit, this sucks, I'm outta here" is razor-thin. All it takes is one privacy scandal, one livestreamed mass-shooting, one whistleblower dump, and people bolt for the exits. This kicks off a death-spiral: as users and business customers leave, the platform's shareholders demand that they squeeze the remaining population harder to make up for the loss.

As much as Cory talks about it here, I do think the role of A/B testing in enshittification is understated. Teams can unintentionally enshittify just with repeated A/B testing, optimizing for the metrics they are told to optimize for. It doesn't necessarily take malice, certainly not on the part of everyone at the company, just A/B testing, bad incentive systems for bonuses and promotions, and bad metrics like engagement.

Friday, October 13, 2023

To stop disinformation, stop astroturf

(this is a version of an excerpt from my book, if you like it please let me know)

There's a lot of discussion of removing disinformation as censorship lately. I think this gets the problem wrong. The problem is using many accounts that you control to act like a megaphone for your speech. Platforms can prevent disinformation by preventing bad actors from astroturfing popularity using faked crowds.

Governments regulating speech is fraught with peril. But disinformation campaigns don't work by using normal speech. They work by creating thousands of controlled accounts that like and share their own content, creating the appearance of popularity, which algorithms like search, trending, and recommendations then pick up and amplify further.

There's no right to create a thousand accounts for yourself and shout down everyone else. That's not how social media is supposed to work. And it's definitely not how wisdom of crowds is supposed to work. In wisdom of crowds, every voice has to be independent for the result to be valid. Search rankers, trending algorithms, and recommender systems are all based on wisdom of the crowds.

Regulators should focus not on specific posts or accounts, but on manipulation of the platforms by creating many accounts. It's fraudulent manipulation of platforms by spoofing what is popular. Astroturfing causes disinformation, not individuals posting what they think.

Monday, October 09, 2023

Book excerpt: The problem is not the algorithm

(This is an excerpt from the draft of my book. Please let me know if you like it and want more of these.)

“The Algorithm,” in scare quotes, is an oft-attacked target. But this obscures more than it informs.

It creates the image of some mysterious algorithm, intelligent computers controlling our lives. It makes us feel out of control. After all, if the problem is “the algorithm”, who is to blame?

When news articles talk about lies, scams, and disinformation, they often blame some all-powerful, mysterious algorithm as the source of the troubles. Scary artificial intelligence controls what we see, they say. That grants independence, agency, and power where none exists. It shifts responsibility away from the companies and teams that create and tune these algorithms and feed them the data that causes them to do what they do.

It's wrong to blame algorithms. People are responsible for the algorithms. Teams working on these algorithms and the companies that use them in their products have complete control over the algorithms. Every day, teams make choices on tuning the algorithms and what data goes into the algorithms that change what is emphasized and what is amplified. The algorithm is nothing but a tool, a tool people can control and use any way they like.

It is important to demystify algorithms. Anyone can understand what these algorithms do and why they do it. “While the phrase ‘the algorithm’ has taken on sinister, even mythical overtones, it is, at its most basic level, a system that decides a post’s position on the news feed based on predictions about each user’s preferences and tendencies,” wrote the Washington Post, in an article “How Facebook Shapes Your Feed.” How people tune and optimize the algorithms determines “what sorts of content thrive on the world’s largest social network and what types languish.”

We are in control. We are in control because “different approaches to the algorithm can dramatically alter the categories of content that tend to flourish.” Choices that teams and companies make about how to tune wisdom of the crowd algorithms make an enormous difference for what billions of people see every day.

You can think of all the choices for tuning the algorithms as a bunch of knobs you can turn. Turn that knob to make the algorithm show some stuff more and other stuff less.

When I was working at Amazon many years ago, an important knob we thought hard about turning was how much to recommend new items. When recommending books, one choice would tend to show people more older books that they might like. Another choice would show people more new releases, such as books that came out in the last year or two. On the one hand, people are particularly unlikely to know about a new release, and new books, especially by an author or in a genre you tend to read, can be particularly interesting to hear about. On the other hand, if you go by how likely people are to buy a book, maybe the algorithm should recommend older books. Help people discover something new, or maximize sales today: our team had a choice in how to tune the algorithm.
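That knob can be sketched as a single tunable weight. The scores, titles, and the two-year cutoff below are invented for illustration; this is not Amazon's actual recommender:

```python
# Illustrative sketch of the "knob" described above: one tunable weight
# trades off predicted purchase likelihood against a boost for new
# releases. All numbers are made up.

def score(book, recency_weight):
    boost = 1.0 if book["months_old"] <= 24 else 0.0  # "new" = last two years
    return book["predicted_purchase"] + recency_weight * boost

catalog = [
    {"title": "Beloved Classic", "months_old": 120, "predicted_purchase": 0.30},
    {"title": "New Release",     "months_old": 3,   "predicted_purchase": 0.22},
]

def top_pick(recency_weight):
    return max(catalog, key=lambda b: score(b, recency_weight))["title"]

assert top_pick(0.0) == "Beloved Classic"  # knob off: maximize sales today
assert top_pick(0.2) == "New Release"      # knob up: favor discovery
```

Nothing about the data changes between the two settings; only the team's choice of weight does, and that choice decides what millions of people see.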

Wisdom of the crowds works by summarizing people’s opinions. Another way that people control the algorithms is through the information about what people like, buy, and find interesting and useful. For example, if many people post positive reviews of a new movie, the average review of that movie might be very high. Algorithms use those positive reviews. This movie looks popular! People who haven’t seen it yet might want to hear about it. And people who watched similar other movies, such as movies in the same genre or with the same actors, might be particularly interested in hearing about this new movie.

The algorithms summarize what people are doing. They calculate and collate what people like and don’t like. What people like determines what the algorithms recommend. The data about what people like controls the algorithms.

But that means that people can change what the algorithms do through changing the data about what it seems like people like. For example, let’s say someone wants to sell more of their cheap flashlights, and they don’t really care about the ethics of how they get more sales. So they pay for hundreds of people to rate their flashlight with a 5-star review on Amazon.

If Amazon uses those shilled 5-star reviews in their recommendation engines and search rankers, those algorithms will mistakenly believe that hundreds of people think the flashlights are great. Everyone will see and buy the terrible flashlights. The bad guys win.

If Amazon chooses to treat that data as inauthentic, faked, bought-and-paid-for, and then ignores those hundreds of paid reviews, that poorly-made flashlight is far less likely to be shown to and bought by Amazon customers. After all, most real people don’t like that cheap flashlight. The bad guys tried hard to fake being popular, but they lost in the end.
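The difference between those two choices is easy to see in a toy example. Here the flagging of paid reviews is assumed to have already happened somehow; the only question is whether flagged reviews count toward the average:

```python
# Toy sketch of the choice described above: the same review data produces
# very different scores depending on whether reviews flagged as
# inauthentic are used or discarded.

reviews = [
    {"stars": 5, "flagged_inauthentic": True},   # paid review
    {"stars": 5, "flagged_inauthentic": True},   # paid review
    {"stars": 5, "flagged_inauthentic": True},   # paid review
    {"stars": 2, "flagged_inauthentic": False},  # real customer
    {"stars": 1, "flagged_inauthentic": False},  # real customer
]

def average_stars(reviews, drop_flagged):
    kept = [r for r in reviews
            if not (drop_flagged and r["flagged_inauthentic"])]
    return sum(r["stars"] for r in kept) / len(kept)

print(average_stars(reviews, drop_flagged=False))  # → 3.6, flashlight looks decent
print(average_stars(reviews, drop_flagged=True))   # → 1.5, the real opinion
```

Same flashlight, same reviews; one choice about which data to trust is the difference between the scam winning and losing.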

The choice of what data is used and what is discarded makes an enormous difference in what is amplified and what people are likely to see. And since wisdom of the crowd algorithms assume that each vote for what is interesting and popular is independent, the choice of what votes are considered, and whether ballot-box stuffing is allowed, makes a huge difference in what people see. Humans make these choices. The algorithms have no agency. It is people, working in teams inside companies, that make choices on how to tune algorithms and what data is used by wisdom of the crowd algorithms. Those people can choose to do things one way, or they can choose to do them another way.

“Facebook employees decide what data sources the software can draw on in making its predictions,” reported the Washington Post. “And they decide what its goals should be — that is, what measurable outcomes to maximize for, and the relative importance of each.”

Small choices by teams inside of these companies can make a big difference for what the algorithms do. “Depending on the lever, the effects of even a tiny tweak can ripple across the network,” wrote the Washington Post in another article titled “Five Points for Anger, One Point for Like”. People control the algorithms. By tuning the algorithms, teams inside Facebook are “shaping whether the news sources in your feed are reputable or sketchy, political or not, whether you saw more of your real friends or more posts from groups Facebook wanted you to join, or if what you saw would be likely to anger, bore or inspire you.”

It's hard to find the right solutions if you don't first correctly identify the problem. The problem is not the algorithm. The problem is how people optimize the algorithm. People control what the algorithms do. What wisdom of the crowd algorithms choose to show depends on the incentives people have.

(This was an excerpt from the draft of my book. Please let me know if you like it and want more.)

Saturday, October 07, 2023

Book excerpt: Metrics chasing engagement

(This is an excerpt from the draft of my book. Please let me know if you like it and want more.)

Let’s say you are in charge of building a social media website like Facebook. And you want to give your teams a goal, a target, some way to measure that what they are about to launch on the website is better than what came before.

One metric you might think of might be how much people engage with the website. You might think, every click, every like, every share, you can measure those. The more the better! We want people clicking, liking, and sharing as much as possible. Right?

So you tell all your teams, get people clicking! The more likes the better! Let’s go!

Teams are always looking for ways to optimize the metrics. Teams are constantly changing algorithms. If you tell your teams to optimize for clicks, what you will see is that soon recommender and ranker algorithms will change what they show. Up at the top of any recommendations and search results will be the posts and news predicted to get the most clicks.

Outside of the company, people will also notice and change what they do. They will say, this article I posted didn’t get much attention. But this one, wow, everyone clicked on it and reshared it. And people will create more of whatever does well on your site with the changes your team made to your algorithms.

All sounds great, right? What could go wrong?

The problem is what attracts the most clicks. What you are likely to click on are things that provoke strong emotions, such as hatred, disbelief, anger, or lust. This means what gets the most clicks are things that are lies, sensationalistic, provoking, or pornographic. The truth is boring. Posts of your Aunt Mildred’s flowers might make you happy. But they won’t get a click. But, oh yeah, that post with scurrilous lies about some dastardly other, that likely will get engagement.

Cecilia Kang and Sheera Frenkel wrote a book about Facebook, An Ugly Truth. In it, they describe the problem with how Facebook optimized its algorithms: “Over the years, the platform’s algorithms had gotten more sophisticated at identifying the material that appealed most to individual users and were prioritizing it at the top of their feeds. The News Feed operated like a finely tuned dial, sensitive to that photograph a user lingered on longest, or the article they spent the most time reading. Once it had established that the user was more likely to view a certain type of content, it fed them as much of it as possible.”

The content the algorithms fed to people, the content the algorithms chose to put on top and amplify, was not what made people content and satisfied. It was whatever would provide a click right now. And what would provide a click right now was often enraging lies.

“Engagement was 50 percent higher than in 2018 and 10 percent higher than in 2017,” wrote Sinan Aral, author of the book The Hype Machine. “Each piece of content is scored according to our probabilities of engaging with it, across the several dozen engagement measures. Those engagement probabilities are aggregated into a single relevance score. Once the content is individually scored (Facebook’s algorithm considers about two thousand pieces of content for you every time you open your newsfeed), it’s ranked and shown in your feed in order of decreasing relevance.”

Most people will not read past the top few items in search results or on recommendations. So what is at the top is what matters most. In this case, by scoring and ordering content by likelihood of engagement, the content being amplified was the most sensationalistic content.
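The scoring-and-ranking loop described above can be sketched in a few lines of Python. Everything here (the signal names, the weights, the example posts) is invented for illustration; the real models and weights are far more complex:

```python
# Hypothetical sketch of engagement-based feed ranking.
# Each candidate post gets predicted engagement probabilities,
# a weighted sum aggregates them into one "relevance" score,
# and the feed is sorted by decreasing relevance.

ENGAGEMENT_WEIGHTS = {"p_like": 1.0, "p_comment": 2.0, "p_share": 3.0}

def relevance(post):
    """Aggregate per-signal engagement probabilities into one score."""
    return sum(w * post[signal] for signal, w in ENGAGEMENT_WEIGHTS.items())

def rank_feed(candidates):
    """Order candidate posts by decreasing predicted engagement."""
    return sorted(candidates, key=relevance, reverse=True)

candidates = [
    {"id": "aunt_mildreds_flowers", "p_like": 0.10, "p_comment": 0.01, "p_share": 0.01},
    {"id": "outrage_bait",          "p_like": 0.08, "p_comment": 0.15, "p_share": 0.20},
]
feed = rank_feed(candidates)
# The outrage bait tops the feed despite a lower like probability,
# because comments and shares are weighted more heavily.
```

Notice that nothing in the score asks whether the content is true or good for the user; whatever maximizes the weighted sum wins the top slot.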

Once bad actors outside of Facebook discovered the weaknesses of the metrics behind the algorithms, they exploited them. As Karen Hao reported in an article titled “Troll Farms Reached 140M Americans,” these are “easily exploited engagement based ranking systems … At the heart of Feed ranking, there are models that predict the probability a user will take an engagement action. These are colloquially known as P(like), P(comment), and P(share).” That is, the models predict the probability that people will like the content, the probability that they will share it, and so forth. Hao cited an internal Facebook report that said these “models heavily skew toward content we know to be bad.” Bad content includes hate speech, lies, and plagiarized content.

“Bad actors have learned how to easily exploit the systems,” said former Facebook data scientist Jeff Allen. “Basically, whatever score a piece of content got in the models when it was originally posted, it will likely get a similar score the second time it is posted … Bad actors can scrape … and repost … to watch it go viral all over again.” Anger, outrage, lies, and hate, all of those performed better on engagement metrics. They don’t make people satisfied. They make people more likely to leave in disgust than keep coming back. But they do make people likely to click right now.

It is by no means necessary to optimize for short-term engagement. Sarah Frier in the book No Filter describes how Instagram, in its early years, looked at what was happening at Facebook and made a different choice: “They decided the algorithm wouldn’t be formulated like the Facebook news feed, which had a goal of getting people to spend more time on Facebook … They knew where that road had led Facebook. Facebook had evolved into a mire of clickbait … whose presence exacerbated the problem of making regular people feel like they didn’t need to post. Instead Instagram trained the program to optimize for ‘number of posts made.’ The new Instagram algorithm would show people whatever posts would inspire them to create more posts.” Optimizing for the number of posts made could create its own bad incentives, such as encouraging spam. The important thing is to consider the incentives created by the metrics you pick and to question whether your current metrics are the best thing for the long-term health of your business.

YouTube is an example of a company that picked problematic metrics years ago, but then questioned what was happening, noticed the problem, and fixed its metrics in recent years. While researchers noted problems with YouTube’s recommender system amplifying terrible content many years ago, in recent years they have mostly concluded that YouTube no longer algorithmically amplifies hate speech and other harmful content, though it still hosts it.

The problem started, as described by the authors of the book System Error, when a Vice President at YouTube “wrote an email to the YouTube executive team arguing that ‘watch time, and only watch time’ should be the objective to improve at YouTube … He equated watch time with user happiness: if a person spends hours a day watching videos on YouTube, it must reveal a preference for engaging in that activity.” The executive went on to claim, “When users spend more of their valuable time watching YouTube videos, they must perforce be happier with those videos.”

It is important to realize that YouTube is a giant optimization machine, with teams and systems targeting whatever metric it is given to maximize that metric. In the paper “Deep Neural Networks for YouTube Recommendations,” YouTube researchers describe it: “YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence … In a live experiment, we can measure subtle changes in click-through rate, watch time, and many other metrics that measure user engagement … Our goal is to predict expected watch time given training examples that are either positive (the video impression was clicked) or negative (the impression was not clicked).”
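The training setup the paper describes can be sketched as a toy. This is a simplification, not YouTube’s implementation: the paper trains a deep network with weighted logistic regression, while this sketch only shows how impressions become weighted training examples, with invented data:

```python
# Toy sketch of the training setup described in the paper:
# each video impression is a positive example if it was clicked,
# negative otherwise, and the model is trained to predict
# expected watch time. Here, clicked impressions are weighted
# by watch time (a simplification of the paper's weighted
# logistic regression); all data is invented for illustration.

def make_training_examples(impressions):
    """Turn impression logs into (features, label, weight) triples."""
    examples = []
    for imp in impressions:
        clicked = imp["watch_seconds"] > 0
        # Positive examples are weighted by how long the user watched,
        # so the optimizer pushes hardest on high-watch-time videos.
        weight = imp["watch_seconds"] if clicked else 1.0
        examples.append((imp["features"], int(clicked), weight))
    return examples

impressions = [
    {"features": [0.2, 0.7], "watch_seconds": 310.0},  # clicked, long watch
    {"features": [0.9, 0.1], "watch_seconds": 0.0},    # shown, not clicked
]
examples = make_training_examples(impressions)
```

The weighting is the key detail: a video that holds attention for five minutes pulls on the model hundreds of times harder than one that was skipped, regardless of whether that attention came from satisfaction or outrage.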

The problem is that optimizing your recommendation algorithm for immediate watch time, which is an engagement metric, tends to show sensationalistic, scammy, and extreme content, including hate speech. As BuzzFeed reporters wrote in an article titled “We Followed YouTube’s Recommendation Algorithm Down the Rabbit Hole”: “YouTube users who turn to the platform for news and information — more than half of all users, according to the Pew Research Center — aren’t well served by its haphazard recommendation algorithm, which seems to be driven by an id that demands engagement above all else.”

The reporters described a particularly egregious case: “How many clicks through YouTube’s Up Next recommendations does it take to go from an anodyne PBS clip about the 116th United States Congress to an anti-immigrant video from a designated hate organization? Thanks to the site’s recommendation algorithm, just nine.” But the problem was not isolated to a small number of examples. At the time, there was a “high percentage of users who say they’ve accepted suggestions from the Up Next algorithm — 81%.” The problem was the optimization engine behind the recommender algorithms: “It’s an engagement monster.”

The “algorithm decided which videos YouTube recommended that users watch next; the company said it was responsible for 70 percent of the one billion hours a day people spent on YouTube. But it had become clear that those recommendations tended to steer viewers toward videos that were hyperpartisan, divisive, misleading or downright false.” The problem was optimizing for an engagement metric like watch time.

Why does this happen? In any company, in any organization, you get what you measure. When you tell your teams to optimize for a certain metric, that they will get bonuses and be promoted if they optimize for that metric, they will optimize the hell out of that metric. As Bloomberg reporters wrote in an article titled “YouTube Executives Ignored Warnings,” “Product tells us that we want to increase this metric, then we go and increase it … Company managers failed to appreciate how [it] could backfire … The more outrageous the content, the more views.”

This problem was made substantially worse at YouTube by outright manipulation of YouTube’s wisdom of the crowd algorithms by adversaries, who effectively stuffed the ballot box for what is popular and good with votes from fake or controlled accounts. As Guardian reporters wrote, “Videos were clearly boosted by a vigorous, sustained social media campaign involving thousands of accounts controlled by political operatives, including a large number of bots … clear evidence of coordinated manipulation.”

The algorithms optimized for engagement, but they were perfectly happy to count fake engagement: clicks and views from accounts all controlled by a small number of people. By pretending to be a large number of people, adversaries could easily make whatever they wanted appear popular, and then get it amplified by a recommender algorithm greedy for more engagement.

In a later article, “Fiction is Outperforming Reality,” Paul Lewis at the Guardian wrote, “YouTube was six times more likely to recommend videos that aided Trump than his adversary. YouTube presumably never programmed its algorithm to benefit one candidate over another. But based on this evidence, at least, that is exactly what happened … Many of the videos appeared to have been pushed by networks of Twitter sock puppets and bots.” That is, Trump videos were not actually better to recommend, but manipulation by bad actors using a network of fake and controlled accounts caused the recommender to believe that it should recommend those videos. Ultimately, the metrics they picked, metrics that emphasized immediate engagement rather than the long-term, were at fault.

As sociologist Zeynep Tufekci described it, “YouTube’s recommendation system has probably figured out that edgy and hateful content is engaging.” “This is a bit like an autopilot cafeteria in a school that has figured out children have sweet teeth, and also like fatty and salty foods. So you make a line offering such food, automatically loading the next plate as soon as the bag of chips or candy in front of the young person has been consumed.” If the target of the algorithms’ optimization is engagement, the algorithms will be changed over time to automatically show the most engaging content, whether it is useful information or lies and anger.

The algorithms were “leading people down hateful rabbit holes full of misinformation and lies at scale.” Why? “Because it works to increase the time people spend on the site” watching videos.

Later, YouTube stopped optimizing for watch time, but only years after seeing how much harmful content its algorithms recommended. In the meantime, chasing engagement metrics changed both what people watched on YouTube and what videos got produced for it. As one YouTube creator said, “We learned to fuel it and do whatever it took to please the algorithm.” Whatever metrics the algorithm optimized for, creators did whatever it took to please it. Pick the wrong metrics and the wrong things will happen, for customers and for the business.

(This was an excerpt from the draft of my book. Please let me know if you like it and want more.)