Tuesday, November 28, 2023

Book excerpt: The problem is bad incentives

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Incentives matter. “As long as your goal is creating more engagement,” said former Facebook data scientist Frances Haugen in a 60 Minutes interview, “you’re going to continue prioritizing polarizing, hateful content.”

Teams inside the tech companies determine how the algorithms are optimized and what they amplify. People on those teams tune the algorithms toward whatever goals they are given, so the metrics and incentives inside the company determine how wisdom of the crowd algorithms evolve over time.

What the company decides to measure and reward determines how the algorithms are tuned. Metrics determine what wins A/B tests. Metrics decide what changes get launched to customers. Metrics determine who gets promoted inside these companies. When a company creates bad incentives by picking bad metrics, the algorithms will produce bad results.
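To see how a single launch metric steers everything downstream, consider a toy sketch of an A/B launch decision. The variant names, numbers, and metrics here are invented for illustration, not taken from any real experiment system:

```python
# Hypothetical sketch: a launch decision keyed to one metric.
# All names and numbers below are invented for illustration.

def pick_winner(variants, metric):
    """Return the variant with the highest value of the chosen metric."""
    return max(variants, key=lambda v: v[metric])

variants = [
    {"name": "control",   "engagement": 100, "abuse_reports": 5},
    {"name": "treatment", "engagement": 130, "abuse_reports": 40},
]

# If the only launch criterion is engagement, the treatment "wins"
# even though it generates eight times as many abuse reports.
print(pick_winner(variants, "engagement")["name"])  # → treatment
```

Nothing in the decision function is malicious; it simply never looks at the costs that were not chosen as the metric.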

What Facebook’s leadership prioritizes and rewards determines what people see on Facebook. “Facebook’s algorithm isn’t a runaway train,” Haugen said. “The company may not directly control what any given user posts, but by choosing which types of posts will be seen, it sculpts the information landscape according to its business priorities.” What executives choose to measure and reward determines what types of posts people see on Facebook. You get what you measure.

“Mark has never set out to make a hateful platform. But he has allowed choices to be made where the side effects of those choices are that hateful, polarizing content gets more distribution and more reach,” Haugen said. Disinformation, misinformation, and scams on social media are “the consequences of how Facebook is picking out that content today.” The algorithms are “optimizing for content that gets engagement, or reaction.”

Who gets that quarterly bonus? It’s hard to have a long-term focus when the company offers large quarterly bonuses for hitting short-term engagement targets. In No Rules Rules, Netflix co-founder and CEO Reed Hastings wrote, “We learned that bonuses are bad for business.” He went on to say that executives are terrible at setting the right metrics for the bonuses and, even if they do, “the risk is that employees will focus on a target instead of spot what’s best for the company.”

Hastings said that “big salaries, not merit bonuses, are good for innovation” and that Netflix does not use “pay-per-performance bonuses.” Though “many imagine you lose your competitive edge if you don’t offer a bonus,” he said, “We have found the contrary: we gain a competitive edge in attracting the best because we just pay all that money in salary.”

At considerable effort, Google, Netflix, and Spotify have shown that, properly measured in long experiments, short-term metrics such as engagement or revenue hurt the company in the long run. For example, in a paper titled “Focus on the Long-term: It’s Better for Users and Business”, Google showed that optimizing for weekly ad revenue leads to far more ads in the product than would maximize Google’s long-term ad revenue. Short-term metrics miss the most important goals for a company: growth, retention, and long-term profitability.
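The shape of that finding can be illustrated with a toy model. Every number below is invented for illustration and is not from the Google paper: each additional ad raises revenue per session, but drives users away at an accelerating rate, so the ad load that maximizes this week’s revenue is not the one that maximizes a year of revenue.

```python
# Toy model (all numbers invented): more ads means more revenue per
# session, but fewer users come back the following week.

def lifetime_revenue(ads_per_session, weeks=52):
    revenue_per_session = 0.10 * ads_per_session               # more ads, more revenue now
    retention = max(0.0, 0.99 - 0.01 * ads_per_session ** 2)   # ...but accelerating churn
    total, active = 0.0, 1.0
    for _ in range(weeks):
        total += active * revenue_per_session
        active *= retention
    return total

# A weekly-revenue metric always says "show more ads"; summed over a
# year, revenue peaks at a much lower ad load.
best = max(range(1, 11), key=lifetime_revenue)
print(best)  # → 2
```

With these made-up parameters, ten ads per session wins every weekly comparison yet earns a fraction of what two ads per session earns over a year, which is the gap a short experiment never sees.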

Short-term metrics and incentives overoptimize for immediate gains and ignore long-term costs. While companies and executives should have enough reasons to avoid bad incentives and metrics that hurt the company in the long-term, it is also true that regulators and governments could step in to encourage the right behaviors. As Foreign Policy wrote when talking about democracies protecting themselves from adversarial state actors, regulators could encourage social media companies to think beyond the next quarterly earnings report.

Regulators have struggled to understand how to help. Could they directly regulate algorithms? Attempts to do so have immediately hit the difficulty of crafting useful regulations for machine learning algorithms. But the problem is not the algorithm. The problem is people.

Companies want to make money. Many scammers and other bad actors also want to make money. The money is in the advertising.

Fortunately, the online ad marketplace already has a history of regulation. Regulators in many countries maintain bans on certain types of ads, restrictions on others, and financial reporting requirements for advertising. Go after the money and you change the incentives.

Among those suggesting increasing regulation on social media advertising is the Aspen Institute Commission on Information Disorder. In their report, they suggest countries “require social media companies to regularly disclose ... information about every digital ad and paid post that runs on their platforms [and then] create a legal requirement for all social media platforms to regularly publish the content, source accounts, reach and impression data for posts that they organically deliver to large audiences.”

This would provide transparency to investors, the press, government regulators, and the public, allowing problems to be seen far earlier, and providing a much stronger incentive for companies themselves to prevent problems before having them disclosed.

The Commission on Information Disorder goes further, suggesting that, in the United States, the extension of Section 230 protections to advertising and algorithms that promote content is overly broad. They argue any content that is featured, either by paid placement advertising or by recommendation algorithms, should be more heavily scrutinized: “First, withdraw platform immunity for content that is promoted through paid advertising and post promotion. Second, remove immunity as it relates to the implementation of product features, recommendation engines, and design.”

Their report was authored by some of the world’s leading experts on misinformation and disinformation. They say that “tech platforms should have the same liability for ad content as television networks or newspapers, which would require them to take appropriate steps to ensure that they meet the established standards for paid advertising in other industries.” They also say that “the output of recommendation algorithms” should not be considered user speech, which would enforce a “higher standard of care” when shills manipulate the company’s algorithms into amplifying content “beyond organic reach.”

These changes would provide strong incentives for companies to prevent misinformation and propaganda in their products. The limitations on advertising would reduce the effectiveness of using advertising in disinformation campaigns. It also would reduce the effectiveness of spammers who opportunistically pile on disinformation campaigns, cutting into their efficiency and profitability. Raising the costs and reducing the efficiency of shilling will reduce the amount of misinformation on the platform.

Subject internet companies to the same regulations on advertising that television networks and newspapers have. Regulators are already familiar with following the money, and even faster enforcement and larger penalties for existing laws would help. By changing where revenue comes from, regulators may encourage better incentives and metrics within tech companies.

“Metrics can exert a kind of tyranny,” former Amazon VP Neil Roseman said in our interview. Often teams “don’t know how to measure a good customer experience.” And different teams may have “metrics that work against each other at times” because simpler, short-term metrics often “narrow executive focus to measurable input/outputs of single systems.” A big problem is that “retention (and long-term value) are long-term goals which, while acknowledged, are just harder for people to respond to than short-term.”

Good incentives and metrics focus on the long term. Short-term incentives and metrics can create a damaging feedback loop as algorithms are optimized over time. Good incentives and metrics focus on what matters to the business: long-term retention and growth.
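That feedback loop is easy to sketch. In this toy simulation (all click rates invented), an optimizer that reallocates exposure toward whatever got the most clicks last round drifts, round after round, toward the most provocative content, even though it starts from an even split:

```python
# Toy sketch (invented numbers): exposure is repeatedly reallocated in
# proportion to last round's clicks, compounding a small advantage in
# click rate into near-total dominance.

click_rate = {"informative": 0.05, "neutral": 0.04, "polarizing": 0.08}
exposure = {kind: 1 / 3 for kind in click_rate}

for _ in range(20):
    clicks = {k: exposure[k] * click_rate[k] for k in click_rate}
    total = sum(clicks.values())
    # Next round's exposure is proportional to last round's clicks.
    exposure = {k: clicks[k] / total for k in click_rate}

print(max(exposure, key=exposure.get))  # → polarizing
```

No single round looks dramatic; the harm comes from compounding the same short-term metric twenty times, which is exactly what continuous optimization does.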
