Friday, January 12, 2024

My book, Algorithms and Misinformation

Misinformation and disinformation are among the biggest problems on the internet.

To solve a problem, you need to understand it. In Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It, I claim that the problem is not that misinformation exists, but that so many people see it. I explain why algorithms amplify scams and propaganda, show how that can easily happen unintentionally, and offer solutions.

You can read much of the book for free. If you want a single-article summary, this overview describes the entire book.

If you want a bit more, roughly what you might get from skimming the book, or even part of what you would get from reading the entire book, read the excerpts. I wanted this book to be part of the debate on how to solve misinformation and disinformation on the internet. It offers practical solutions and was intended to contribute to the discussion of viable fixes for what has become one of the biggest problems of our time.

I wrote, developed, and edited this book over four years. It was under contract with two agents for a year but will not be published. The full manuscript had many more examples, interviews, and stories, but reading all the excerpts above gives you some of what the book would have offered.

Some might want to jump straight to ideas for solutions. I think solutions depend on who you are.

For those inside tech companies, this book shows how other companies have fixed this problem and made more revenue. Because it's easy for executives to unintentionally cause search and recommendations to amplify scams, it's important for everyone to question what the algorithms are optimized for and make sure they point toward the long-term growth of the company.

For the average person, the book shows that companies actually make more money when they don't let their algorithms promote scams. That should give hope that complaining about scammy products, and walking away from them, will change the internet we use every day.

For policy makers, because it's hard to regulate AI but much easier to regulate what they already know how to regulate, this book argues they should target the scammy advertising that funds misinformation, increase fines for promoting fraud, and ramp up antitrust efforts (to make it easier for consumers to switch to alternatives and to raise long-term costs on companies that enshittify their products).

Why these are the solutions requires exploring the problem. Most of the book is about how companies build their algorithms -- optimizing them over time -- and how that can accidentally amplify misinformation. To solve the problem, focus not on the fact that misinformation exists, but on how many people see it. If the goal is to reduce it to nuisance levels, we can fix misinformation on the internet.

Through stories, examples, and research, this book shows why so many people see misinformation and disinformation, that the amplification is often unintentional, and that it doesn't maximize revenue for companies. Understanding why we see so much misinformation is the key to coming up with practical solutions.

I hope others find this useful. If you do, please let me know.

Wednesday, January 10, 2024

Book excerpt: Conclusion

(This is one version of the conclusion from my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Wisdom of the crowds is the idea that aggregating the opinions of many people is often very useful. Computers can do this too. Wisdom of the crowd algorithms operating at massive scale pick everything we see when we use the internet.

Computer algorithms look at people's actions as if they were votes for what is interesting and important. Search and recommendations on your favorite websites combine what millions of people do to help you find what you need.

In recent years, something has gone terribly wrong. Wisdom of the crowds has failed us.

Misinformation and scammers are everywhere on the internet. You cannot buy something from Amazon, see what friends are doing on Facebook, or try to read news online without encountering fraudsters and propagandists.

This is the story of what happened and how to fix it, told by the insiders who built the internet we have today.

Throughout the last thirty years of the internet, we fought fraudsters and scammers trying to manipulate what people see. We fought scammers when we built web search. We fought spammers trying to get into our email inboxes. We fought shills when we built algorithms recommending what to buy.

Seeing these hard battles through the lens of insiders reveals an otherwise hidden insight: how companies optimize their algorithms is what amplifies misinformation and causes the problems we have today.

The problem is not the algorithm. The problem is how algorithms are tuned and optimized.

Algorithms will eventually show whatever the team is rewarded for making the algorithms show. When algorithms are optimized badly, they can do a lot of harm. Through the metrics and incentives they set up, teams and executives control these algorithms and how they are optimized over time.

We have control. People control the algorithms. We should make sure these algorithms built by people work well for people.

Wisdom of the crowd algorithms such as recommender systems, trending, and search rankers are everywhere on the internet. Because they control what billions see every day, these algorithms are enormously valuable.

The algorithms are supposed to work by sharing what people find interesting with other people who have not seen it yet. They can help people discover things they would not have found on their own, but only if they are tuned and optimized properly.

Short-term measures like clicks are bad metrics for algorithms. These metrics encourage scams, sensationalistic content, and misinformation. Amplifying fraudsters and propagandists creates a terrible experience for customers and eventually hurts the company.

Wisdom of the crowds doesn’t work when the crowds are fake. Wisdom of the crowds will amplify scams and propaganda if a few people can shout down everyone else with their hordes of bots and shills. Wisdom of the crowds requires information from real, independent people.

If executives tell teams to optimize for clicks, it can be hard to remove fake accounts, shills, and sockpuppets. Click metrics are higher when bad actors shill, because fake crowds feigning popularity look like lots of new accounts creating lots of new engagement, so removing them looks like a loss. But none of it is real, and none of it helps the company or its customers in the long run.

Part of the solution is only using reliable accounts for wisdom of crowds. Wisdom of the trustworthy makes it much harder for bad actors to create fake crowds and feign popularity. Wisdom of the trustworthy means algorithms only use provably human and reliable accounts as input to the algorithms. To deter fraudsters from creating lots of fake accounts, trust must be hard to gain and easy to lose.

Part of the solution is to recognize that most metrics are flawed proxies for what you really want. What companies really want is satisfied customers who stay for a long time. Always question whether your metrics are pointing you at the right target. Always question whether your wisdom of the crowd algorithms are usefully helping customers.

It's important to view optimizing algorithms as investing in the long-term. Inside tech companies, to measure the success of those investments and the long-term success of the company, teams should run long experiments to learn more about long-term harm and costs. Develop metrics that approximate long-term retention and growth. Everyone on teams should constantly question metrics and frequently change goal metrics to improve them.

As Google, Netflix, YouTube, and Spotify have discovered, companies make more money if they invest in good algorithms that don't chase clicks.

Even so, some companies may need encouragement to focus on the long-term, especially if their market power means customers have nowhere else to go.

Consumer groups and policy makers can help by pushing for more regulation of the advertising that funds scams, antitrust enforcement to maintain competition and offer alternatives to consumers who are fed up with enshittified products, and the real threat of substantial and painful fines for failing to minimize scams and fraud.

We can have the internet we want. We can protect ourselves from financial scams, consumer fraud, and political propaganda. We can fix misinformation on the internet.

With a deeper understanding of why wisdom of the crowd algorithms can amplify misinformation and cause harm, we can fix seemingly ungovernable algorithms.

Monday, January 08, 2024

Book excerpt: A win-win-win for customers, companies, and society

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Everyone wins -- companies, consumers, and society -- if companies fix their algorithms to stop amplifying scams and misinformation.

Executives are often tempted to reward their teams for simpler success metrics like engagement. But companies make more money if they focus on long-term customer satisfaction and retention.

YouTube had a problem. They asked customers, “What’s the biggest problem with your homepage today?” The answer came back: “The #1 issue was that viewers were getting too many already watched videos on their homepage.” In our interview, YouTube Director Todd Beaupré discussed how YouTube made more money by optimizing their algorithms for diversity, customer retention, and long-term customer satisfaction.

YouTube ran experiments. They found that reducing already watched recommendations reduced how many videos people watched from their home page. Beaupré said, “What was surprising, however, was that viewers were watching more videos on YouTube overall. Not only were they finding another video to enjoy to replace the lost engagement from the already watched recommendations on the homepage, they found additional videos to watch as well. There were learning effects too. As the experiment ran for several months, the gains increased.”

Optimizing not for accuracy but for discovery turned out to be one of YouTube’s biggest wins. Beaupré said, “Not only did we launch this change, but we launched several more variants that reduced already watched recommendations that combined to be the most impactful launch series related to growing engagement and satisfaction that year.”

Spotify researchers found the same thing, that optimizing for engagement right now misses a chance to show something that will increase customer engagement in the future. They said, “Good discoveries often lead to downstream listens from the user. Driving discovery can help reduce staleness of recommendations, leading to greater user satisfaction and engagement, thereby resulting in increased user retention. Blindly optimizing for familiarity results in potential long term harms.” In the short-term, showing obvious and familiar things might get a click. In the long-term, helping customers discover new things leads to greater satisfaction and better retention.

Companies that don't optimize for engagement make more money. In a paper “Focus on the Long-Term: It’s Better for Users and Business,” Googlers wrote that “optimizing based on short-term revenue is the obvious and easy thing to do, but may be detrimental in the long-term if user experience is negatively impacted.” What can look like a loss in short-term revenue can actually be a gain in long-term revenue.

Google researchers found that it was very important to measure long-term revenue because optimizing for engagement overlooks the fact that too many ads will make people tune out your ads or stop coming entirely. Google said that cutting the number of ads in their product in half improved customer satisfaction and resulted in a net positive change in ad revenue, but they could only see that they made more money when they measured over long periods of time.

Netflix uses very long experiments to keep their algorithms targeting long-term revenue. From the paper "Netflix Recommender System": “We ... let the members in each [experimental group] interact with the product over a period of months, typically 2 to 6 months ...The time scale of our A/B tests might seem long, especially compared to those used by many other companies to optimize metrics, such as click-through rates ... We build algorithms toward the goal of maximizing medium-term engagement with Netflix and member retention rates ... If we create a more compelling service by offering personalized recommendations, we induce members who were on the fence to stay longer, and improve retention.”

Netflix's goal is keeping customers using the product. If customers stay, they keep generating revenue, which maximizes long-term business value. “Over years of development of personalization and recommendations, we have reduced churn by several percentage points. Reduction of monthly churn both increases the lifetime of an existing subscriber and reduces the number of new subscribers we need to acquire.”
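
To make the contrast concrete, here is a hypothetical sketch in Python of judging an experiment on retention rather than immediate clicks. The field names, the retention window, and the numbers are invented for illustration; they are not Netflix's actual metrics or data.

```python
# Hypothetical sketch: judge an A/B test on long-term retention, not just
# immediate click-through. Field names, the retention window, and the numbers
# below are invented for illustration.
def click_through_rate(members):
    return sum(m["clicks"] for m in members) / sum(m["impressions"] for m in members)

def retention_rate(members, months=6):
    # Fraction of members still active at the end of a months-long experiment.
    return sum(1 for m in members if m["months_active"] >= months) / len(members)

def compare(control, treatment):
    for name, metric in [("click-through", click_through_rate),
                         ("retention", retention_rate)]:
        print(f"{name}: control {metric(control):.2f}, treatment {metric(treatment):.2f}")

# A treatment can lose on short-term clicks yet win on retention, the metric
# tied to long-term revenue.
control = [{"clicks": 12, "impressions": 100, "months_active": 4},
           {"clicks": 10, "impressions": 100, "months_active": 6}]
treatment = [{"clicks": 9, "impressions": 100, "months_active": 6},
             {"clicks": 8, "impressions": 100, "months_active": 7}]
compare(control, treatment)
```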

Google revealed how they made more money when they did not optimize for engagement. Netflix revealed they focus on keeping people watching Netflix for many years, including their unusually lengthy experiments that sometimes last over a year, because that makes them more money. Spotify researchers revealed how they keep people subscribing longer when they suggest less obvious, more diverse, and more useful recommendations, making them more money. YouTube, after initially optimizing for engagement, switched to optimizing for keeping people coming back to YouTube over years, finding that is what made them the most money in the long run.

Scam-filled, engagement-hungry, or manipulated algorithms make less money than helpful algorithms. Companies such as Google, YouTube, Netflix, Wikipedia, and Spotify offer lessons for companies such as Facebook, Twitter, and Amazon.

Some companies know that adversaries attack and shill their algorithms because the profit motive is so high from getting to the top of trending algorithms or recommendations. Some companies know that if they invest in eliminating spam, shilling, and manipulation, that investment will pay off in customer satisfaction and higher growth and revenue in the future. Some companies align the interests of their customers and the company by optimizing algorithms for long-term customer satisfaction, retention, and growth.

Wisdom of the crowds failed the internet. Then the algorithms that depend on wisdom of the crowds amplified misinformation across the internet. Some already have shown the way to fix the problem. If all of us borrow lessons from those that already have solutions, we can solve the problem of algorithms amplifying misinformation. All companies can fix their algorithms, and they will make more money if they do.

Many executives are unaware of the harms of optimizing for engagement. Many do not realize when they are hurting the long-term success of the company.

This book has recommendations for regulators and policy makers, focusing their work on incentives, including executive compensation and the advertising that funds misinformation and scams. It provides examples for teams inside companies of why they should not optimize for engagement and what other companies do instead. And it provides evidence consumers can use to push companies to serve their customers better while also increasing their own profits.

Sunday, January 07, 2024

Book excerpt: Use only trustworthy behavior data

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Adversaries manipulate wisdom of crowds algorithms by controlling a crowd of accounts.

Their controlled accounts can then coordinate to shill whatever they like, shout down opposing views, and create an overwhelming flood of propaganda that makes it hard for real people to find real information in the sea of noise.

The Aspen Institute's Commission on Information Disorder suggests in its report that the problem is often confined to a surprisingly small number of accounts, amplified by coordinated activity from other controlled accounts.

They describe how it works: “Research reveals that a small number of people and/or organizations are responsible for a vast proportion of misinformation (aka ‘superspreaders’) ... deploying bots to promote their content ... Some of the most virulent propagators of falsehood are those with the highest profile [who are often] held to a lower standard of accountability than others ... Many of these merchants of doubt care less about whether they lie, than whether they successfully persuade, either with twisted facts or outright lies.”

The authors of this report offer a solution. They suggest that these manipulative accounts should not be amplified by algorithms, making the spreading of misinformation much more costly and much more difficult to do efficiently.

Specifically, they argue social media companies and government regulators should “hold superspreaders of mis- and disinformation to account with clear, transparent, and consistently applied policies that enable quicker, more decisive actions and penalties, commensurate with their impacts — regardless of location, or political views, or role in society.”

Because just a few accounts, supported by substantial networks of controlled shill accounts, are the problem, they add that social media should focus “on highly visible accounts that repeatedly spread harmful misinformation that can lead to significant harms.”

Problems with adversaries manipulating, shilling, and spamming have a long history. One way to figure out how to solve the problem is to look at how others mitigated these issues in the past.

Particularly helpful are the solutions for web spam. As described in the research paper "Web Spam Detection with Anti-Trust Rank", web spam is “artificially making a webpage appear in the top results to various queries on a search engine.” The web spam problem is essentially the same problem faced by social media rankers and recommenders. Spammers manipulate the data that ranking and recommender algorithms use to determine what content to surface and amplify.

The researchers described how bad actors create web spam: “A very common example ... [is] creating link farms, where webpages mutually reinforce each other ... [This] link spamming also includes ... putting links from accessible pages to the spam page, such as posting web links on publicly accessible blogs.”

These are essentially the same techniques adversaries use on social media: controlled accounts and bots post, reshare, and like content, reinforcing how popular it appears.

To fix misinformation on social media, learn from what has worked elsewhere. TrustRank is a popular and widely used technique in web search engines to reduce the efficiency, effectiveness, and prevalence of web spam. It “effectively removes most of the spam” without negatively impacting non-spam content.

How does it work? “By exploiting the intuition that good pages -- i.e. those of high quality -- are very unlikely to point to spam pages or pages of low quality.”

The idea behind TrustRank is to start from the trustworthy and view the actions of those trustworthy people to also be likely to be trustworthy. Trusted accounts link to, like, share, and post information that is trustworthy. Everything they say is trustworthy is now mostly trustworthy too, and the process repeats. In this way, trust gradually propagates out from a seed of known reliable accounts to others.

As the "Combating Web Spam with TrustRank" researchers put it, “We first select a small set of seed pages to be evaluated by an expert. Once we manually identify the reputable seed pages, we use the link structure of the web to discover other pages that are likely to be good ... The algorithm identifies other pages that are likely to be good based on their connectivity with the good seed pages.”

TrustRank works for web spam in web search engines. “We can effectively filter out spam from a significant fraction of the web, based on a good seed set of less than 200 sites.” Later work suggested that adding Anti-Trust Rank has benefits as well; it works by taking a set of known untrustworthy people with a history of spamming, shilling, and attempting to manipulate ranking algorithms, then assuming that everything they have touched is also likely to be untrustworthy.
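
To make that intuition concrete, here is a minimal sketch of TrustRank-style trust propagation in Python. The link graph, seed set, damping factor, and iteration count are illustrative assumptions, not the algorithm as deployed by any particular search engine.

```python
# Minimal sketch of TrustRank-style trust propagation (illustrative only).
# Trust starts on a small, manually vetted seed set and flows along links,
# decaying with distance, so pages far from every trusted seed score near zero.
def trust_rank(links, seeds, damping=0.85, iterations=50):
    """links: dict mapping each page to the pages it links to.
    seeds: set of manually vetted, trustworthy pages."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    # All initial trust sits on the seed pages, split evenly among them.
    seed_score = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(seed_score)
    for _ in range(iterations):
        new_trust = {p: (1 - damping) * seed_score[p] for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * trust[page] / len(targets)
                for target in targets:
                    new_trust[target] += share
        trust = new_trust
    return trust

# Toy graph: the spam pages only endorse each other, so almost no trust
# reaches them from the vetted seed.
links = {
    "seed_news": ["blog_a", "blog_b"],
    "blog_a": ["blog_b"],
    "blog_b": ["seed_news"],
    "spam_hub": ["spam_page"],
    "spam_page": ["spam_hub"],
}
print(trust_rank(links, seeds={"seed_news"}))
```

The same idea carries over to social media if pages and links are replaced with accounts and their follows, likes, and shares.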

In social media, much of the problem is not that bad content exists at all, but that bad content is amplified by algorithms. Specifically, rankers and recommenders on social media look at likes, shares, and posts, then think that shilled content is popular, so the algorithms share the shilled content with others.

The way this works, both for web search and for social media, is that wisdom of the crowd algorithms including rankers and recommenders count votes. A link, like, click, purchase, rating, or share is a vote that a piece of content is useful, interesting, or good. What is popular or trending is what gets the most votes.

Counting votes in this way easily can be manipulated by people who create or use many controlled accounts. Bad actors vote many times, effectively stuffing the ballot box, to get what they want on top.

If wisdom of crowds only uses trustworthy data from trustworthy accounts, shilling, spamming, and manipulation becomes much more difficult.

Only accounts known to be trustworthy should matter for what is considered popular. Known untrustworthy accounts with a history of being involved in propaganda and shilling should have their content hidden or ignored. And unknown accounts, such as brand new accounts or accounts that have no connection to trustworthy accounts, also should be ignored as potentially harmful and not worth the risk of including.
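
As a rough illustration, here is what counting votes only from trusted accounts might look like. The account fields, the trust threshold, and the example data are hypothetical, not any platform's real schema.

```python
# Illustrative sketch: count engagement "votes" only from trusted accounts.
# The Account fields, trust threshold, and example data are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Account:
    id: str
    trust: float      # 0.0 (unknown or untrusted) to 1.0 (long-proven reliable)
    known_bad: bool   # history of spam, shilling, or coordinated manipulation

def popularity(votes, accounts, min_trust=0.7):
    """votes: list of (account_id, item_id) likes, shares, clicks, or purchases."""
    counts = Counter()
    for account_id, item_id in votes:
        account = accounts.get(account_id)
        # Ignore unknown accounts, known bad actors, and anything below threshold.
        if account is None or account.known_bad or account.trust < min_trust:
            continue
        counts[item_id] += 1
    return counts.most_common()

accounts = {
    "alice": Account("alice", trust=0.9, known_bad=False),
    "bob":   Account("bob",   trust=0.8, known_bad=False),
    "bot1":  Account("bot1",  trust=0.1, known_bad=True),
    "bot2":  Account("bot2",  trust=0.1, known_bad=True),
}
votes = ([("bot1", "shilled_scam")] * 50 + [("bot2", "shilled_scam")] * 50
         + [("alice", "useful_video"), ("bob", "useful_video")])
print(popularity(votes, accounts))  # the shilled item gets no counted votes at all
```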

Wisdom of the trustworthy dramatically raises the costs for adversaries. No longer can a few dozen accounts, acting together, successfully shill content.

Now, only trustworthy accounts amplify. And because trust is hard to gain and easily lost, disinformation campaigns, propaganda, shilling, and spamming often become cost prohibitive for adversaries.

As Harvard fellow and security expert Bruce Schneier wrote in a piece for Foreign Policy titled “8 Ways to Stay Ahead of Influence Operations,” the key is to recognize the fake accounts that act together in a coordinated way to manipulate the algorithms, and then to keep their data from informing ranker and recommender algorithms.

Schneier wrote, “Social media companies need to detect and delete accounts belonging to propagandists as well as bots and groups run by those propagandists. Troll farms exhibit particular behaviors that the platforms need to be able to recognize.”

Shills and trolls are shilling and trolling. That is not normal human behavior.

Real humans don’t all act together, at the same time, to like and share some new content. Real humans cannot act many times per second or vote on content they have never seen. Real humans cannot all like and share content from a pundit as soon as it appears and then all do it again exactly in the same way for the next piece of content from that pundit.

When bad actors use controlled fake accounts to stuff the ballot box, the behavior is blatantly not normal.

There are a lot of accounts in social media today that are being used to manipulate the wisdom of the crowd algorithms. Their clicks, likes, and shares are bogus and should not be used by the algorithms.

Researchers in Finland studying the phenomenon back in 2021 wrote that “5-10% of Twitter accounts are bots and responsible for the generation of 20-25% of all tweets.” The researchers describe these compromised accounts as “cyborgs” and write that they “have characteristics of both human-generated and bot-generated accounts."

These controlled accounts are unusually active, producing a far larger share of all tweets than their share of accounts. And this was a low estimate of the total number of manipulated accounts on social media, since it did not include compromised accounts, accounts paid to shill, or accounts whose owners are paid to share their passwords so someone else can occasionally use them to shill.

Because bad actors using accounts to spam and shill must quickly act in concert to spam and shill, and often do so repeatedly with the same accounts, their behavior is not normal. Their unusually active and unusually timed actions can be detected.
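
One way to make "unusually timed" concrete is to flag groups of accounts that repeatedly act on the same content within seconds of each other. This toy sketch uses invented thresholds; production systems combine many more behavioral signals.

```python
# Toy sketch of coordination detection: flag pairs of accounts that repeatedly
# engage with the same items within seconds of each other. The time window and
# minimum co-occurrence count are invented thresholds.
from collections import defaultdict
from itertools import combinations

def suspicious_pairs(events, window_seconds=5, min_co_occurrences=3):
    """events: list of (timestamp_seconds, account_id, item_id) actions."""
    by_item = defaultdict(list)
    for ts, account, item in events:
        by_item[item].append((ts, account))

    pair_counts = defaultdict(int)
    for actions in by_item.values():
        actions.sort()  # order each item's actions by time
        for (t1, a1), (t2, a2) in combinations(actions, 2):
            if a1 != a2 and abs(t2 - t1) <= window_seconds:
                pair_counts[tuple(sorted((a1, a2)))] += 1

    # Keep only pairs that act in near lockstep again and again.
    return {pair: n for pair, n in pair_counts.items() if n >= min_co_occurrences}
```

Accounts flagged this way can simply be dropped from the trustworthy data that feeds rankers and recommenders, which is often easier than proving they deserve an outright ban.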

One detection tool, published by researchers at the AAAI (Association for the Advancement of Artificial Intelligence) conference, was a “classifier ... capturing the local and global variations of observed characteristics along the propagation path ... The proposed model detected fake news within 5 min of its spread with 92 percent accuracy for Weibo and 85 percent accuracy for Twitter.”

Professor Kate Starbird, who runs a research group studying disinformation at the University of Washington, wrote that social media companies have taken exactly the wrong approach, exempting prominent accounts associated with misinformation, disinformation, and propaganda rather than subjecting them and their shills to skepticism and scrutiny. Starbird wrote, “Research shows that a small number of accounts have outsized impact on the spread of harmful misinfo (e.g. around vaccines and false/misleading claims of voter fraud). Instead of whitelisting these prominent accounts, they should be held to higher levels of scrutiny and accountability.”

Researchers have explained the problem: platforms are willing to amplify anything that isn't provably bad rather than amplifying only what is known to be trustworthy. In a piece titled Computational Propaganda, Stanford Internet Observatory researcher Renee DiResta wrote, “Our commitment to free speech has rendered us hesitant to take down disinformation and propaganda until it is conclusively and concretely identified as such beyond a reasonable doubt. That hesitation gives ... propagandists an opportunity.”

The hesitation is problematic, as it makes it easy to manipulate wisdom of crowds algorithms. “Incentive structures, design decisions, and technology have delivered a manipulatable system that is being gamed by propagandists,” DiResta said. “Social algorithms are designed to amplify what people are talking about, and popularity is ... easy to feign.”

Rather than starting from the assumption that every account is real, the algorithms should start with the assumption that every account is fake.

Only provably trustworthy accounts should be used by wisdom of the crowd algorithms such as trending, rankers, and recommenders. When considering what is popular, not only should fake accounts coordinating to shill be ignored, but also there should be considerable skepticism toward new accounts that have not been proven to be independent of the others.

With wisdom of crowds algorithms, rather than asking which accounts should be banned and not used, ask what minimum set of trustworthy accounts is needed to keep the perceived quality of the recommendations from dropping. There is no reason to use all the data when the biggest problem is shilled and untrustworthy data.

Companies are playing whack-a-mole with bad actors who just create new accounts or find new shills every time they’re whacked because it’s so profitable -- like free advertising -- to create fake crowds that manipulate the algorithms.

Propagandists and scammers are loving it and winning. It’s easy and lucrative for them.

Rather than classify accounts as spam, classify accounts as trustworthy. Only use trustworthy data as input to the algorithms, ignoring anything unknown or borderline as well as known spammers and shills.

Toss big data happily: discard anything suspicious at all. Do not be concerned about false positives that accidentally mark new or borderline accounts as shills when deciding what to feed the recommender algorithms. None of that matters if it does not reduce the perceived quality of the recommendations.

As with web spam and e-mail spam, the goal isn’t eliminating manipulation, coordination, disinformation, scams, and propaganda.

The goal is raising the costs on adversaries, ideally to the point where most of it is no longer cost-effective. If bad actors no longer find it easy and effective to try to manipulate recommender systems on social media, most will stop.

Thursday, January 04, 2024

Book excerpt: Data and metrics determine what algorithms do

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Wisdom of the crowd algorithms, including rankers and recommenders, work from data about what people like and do. Teams inside tech companies gather user behavior data then tune and optimize algorithms to maximize measurable targets.

The data quality and team incentives control what the algorithms produce and how useful it is. When the behavior data or goal metrics are bad, the outcome will be bad. When the wisdom of the crowds data is trustworthy and when the algorithms are optimized for the long-term, algorithms like recommendations will be useful and helpful.

Queensland University Professor Rachel Thomas warned that “unthinking pursuit of metric optimization can lead to real-world harms, including recommendation systems promoting radicalization ... The harms caused when metrics are overemphasized include manipulation, gaming, a focus on short-term outcomes to the detriment of longer-term values ... particularly when done in an environment designed to exploit people’s impulses and weaknesses.”

The problem is that “metrics tend to overemphasize short-term concerns.” Thomas gave as an example the problems YouTube had before 2017 because, years earlier, they had picked “watch time” (how long people spend watching videos) as a proxy metric for user satisfaction. An algorithm that tries to pick videos people will watch right now will tend to show anything that gets a click, including risqué videos or lies that make people angry. So YouTube struggled with their algorithms amplifying sensationalistic videos and scams. These clickbait videos looked great on short-term metrics like watch time but repelled users in the long term.

“AI is very effective at optimizing metrics,” Thomas said. Unfortunately, if you pick the wrong metrics, AI will happily optimize for the wrong thing. “The unreasonable effectiveness of metric optimization in current AI approaches is a fundamental challenge to the field and yields an inherent contradiction: solely optimizing metrics leads to far from optimal outcomes.”

Unfortunately, it’s impossible to get a perfect success metric for algorithms. Not only are metrics “just a proxy for what you really care about,” but also all “metrics can, and will be gamed.” The goal has to be to make the success metrics as good as possible and keep fixing the metrics as they drift away from the real goal of the long-term success of the company. Only by constantly fixing the metrics will teams optimize the algorithms to help the company grow and profit over the years.

A classic article by Steven Kerr, “On the folly of rewarding A while hoping for B,” was originally published back in 1975. The author wrote: “Many managers seek to establish simple, quantifiable standards against which to measure and reward performance. Such efforts may be successful in highly predictable areas within an organization, but are likely to cause goal displacement when applied anywhere else.”

Machine learning algorithms need a target. Teams need to have success metrics for algorithms so they know how to make them better. But it is important to recognize that metrics are likely to be wrong and to keep trying to make them better.

You get what you measure. When managers pick a metric, there are almost always rewards and incentives tied to that metric. Over time, as people optimize for the metric, you will get that metric maximized, often at the expense of everything else, and often harming the true goals of the organization.

Kerr went on to say, “Explore what types of behavior are currently being rewarded. Chances are excellent that ... managers will be surprised by what they find -- that firms are not rewarding what they assume they are.” When Kerr's article was republished in 1995, an editor summarized it as, “It’s the reward system, stupid!”

Metrics are hard to get right, especially because they often end up being a moving target over time. The moment you put a metric in place, people both inside and outside the company will start to find ways to succeed against that metric, often finding cheats and tricks that move the metric without helping customers or the company. It's as Goodhart’s Law says: “When a measure becomes the target, it ceases to be an effective measure.”

One example familiar to all of us is the rapid growth of clickbait headlines -- “You won’t believe what happens next” -- that provide no value but try to get people to click. This happened because headline writers were rewarded for getting a click, whether or not they got it through deception. When getting a click is what the organization optimizes for, teams will drive clicks.

Often companies pick poor success metrics such as clicks just because it is too hard to measure the things that matter most. Long-term metrics that try to be good proxies for what we really care about such as retention, long-term growth, long-term revenue, and customer satisfaction can be costly to measure. And, because of Goodhart’s Law, the metrics will not work forever and will need to be changed over time. Considerable effort is necessary.

Many leaders don’t realize the consequences of not putting in that effort. You will get what you measure. Unless you reward teams for the long-term growth and profitability of the company, teams will not optimize for the success of the company or shareholders.

What can companies do? Professor Thomas went on to say that companies should “use a slate of metrics to get a fuller picture and reduce gaming” which can “keep metrics in their place.” The intent is that gaming of one metric may be visible in another, so a slate with many metrics may show problems that otherwise might be missed. Another idea is changing metrics frequently, which also can reduce gaming and provides an opportunity to adjust metrics so they are closer to the true target.
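
Here is a small, hypothetical sketch of what evaluating a change against a slate of metrics might look like, so that a win on one proxy cannot hide a regression on another. The metric names, values, and regression threshold are invented for illustration.

```python
# Hypothetical sketch: check a candidate change against a slate of metrics so a
# gain on one proxy (clicks) cannot hide a regression on another (retention,
# satisfaction). Metric names, values, and the threshold are invented.
def evaluate_launch(baseline, candidate, max_regression=0.01):
    """baseline, candidate: dicts of metric name -> value; higher is better."""
    verdicts = {}
    for metric, base_value in baseline.items():
        change = candidate[metric] - base_value
        verdicts[metric] = "ok" if change >= -max_regression else "regression"
    return verdicts

baseline  = {"clicks": 0.120, "retention_30d": 0.610, "satisfaction": 0.720}
candidate = {"clicks": 0.140, "retention_30d": 0.580, "satisfaction": 0.715}
print(evaluate_launch(baseline, candidate))
# Clicks improved, but retention regressed; the slate surfaces a trade-off that
# a single engagement metric would have hidden.
```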

Getting this wrong causes a lot of harm to the company and sometimes to others as well. “A modern AI case study can be drawn from recommendation systems,” Thomas writes. “Platforms are rife with attempts to game their algorithms, to show up higher in search results or recommended content, through fake clicks, fake reviews, fake followers, and more.”

“It is much easier to measure short-term quantities [such as] click-through rates,” Thomas said. But “many long-term trends have a complex mix of factors and are tougher to quantify.” There is a substantial risk if teams, executives, and companies get their metrics wrong. “Facebook has been the subject of years’ worth of ... scandals ... which is now having a longer-term negative impact on Facebook’s ability to recruit new engineers” and grow among younger users.

As Googler and AI expert François Chollet once said, “Over a short time scale, the problem of surfacing great content is an algorithmic problem (or a curation problem). But over a long time scale, it's an incentive engineering problem.”

It is the optimization of the algorithms, not the algorithms themselves, that determines what they show. Incentives, rewards, and metrics determine what wisdom of the crowd algorithms do. That is why metrics and incentives are so important.

Get the metrics wrong, and the long-term costs for the company — stalled growth, poor retention, poor reputation, regulatory risk — become worse and worse. Because the algorithms are optimized over time, it is important to be constantly fixing the data and metrics to make sure they are trustworthy and doing the right thing. Trustworthy data and long-term metrics lead to algorithms that minimize scams and maximize long-term growth and profits.

Wednesday, January 03, 2024

Book excerpt: From hope to despair and back to hope

(This is an excerpt from drafts of my book, "Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It")

Twenty-five years ago, when recommendation algorithms first launched at large scale on the internet, these algorithms helped people discover new books to read and new movies to watch.

In recent years, wisdom of the crowds failed the internet, and the internet filled with misinformation.

The story of why this happened — and how to fix it — runs through the algorithms that pick what we see on the internet. Algorithms use wisdom of the crowds at a massive scale to find what is popular and interesting. That is how they determine what to show to millions of people. When these algorithms fail, misinformation flourishes.

The reason the algorithms fail is not what you think. It is not the algorithms.

Only with an insider view can readers see how the algorithms work and how tech companies build these algorithms. The surprise is that the algorithms are actually made of people.

People build and maintain these algorithms. Wisdom of the crowds works using data about what people do. The key to why algorithms go wrong, and how they can be fixed, runs through people and the incentives people have.

When bad actors manipulate algorithms, they are trying to get their scams and misinformation seen by as many people as possible as cheaply as possible.

When teams inside companies optimize algorithms, they are trying to meet the goals executives set for them, whatever those goals are and regardless of whether they are the right goals for the company.

People’s incentives control what the algorithms do. And incentives are the key to fixing misinformation on the internet.

To make wisdom of the crowds useful again, and to make misinformation ineffective, all companies must use only reliable data and must not optimize their algorithms for engagement. As this book shows, these solutions reduce the reach of misinformation, making it far less effective and far more expensive for scammers and fraudsters.

We know these solutions work because some companies have already implemented them. Exposing a gold mine of knowledge buried deep inside the major tech companies, this book shows that some companies successfully stopped their algorithms from amplifying misinformation by not optimizing for engagement. And, importantly, those companies made more money by doing so.

Companies that have not fixed their algorithms have taken a dark path, blinded by short-term optimization for engagement, their teams misled by bad incentives and bad metrics inside their companies. This book shows the way out for those led astray.

People inside and outside the powerful tech companies, including consumers and policy makers, can help align incentives away from short-term engagement and toward long-term customer satisfaction and growth.

It turns out it's a win-win to listen to consumers and optimize algorithms to be helpful for your customers in the long-term. Nudging people's incentives in practical ways is easier once you see inside the companies, understand how they build these algorithms, and see that companies make more money when they do not myopically optimize their algorithms in ways that later will cause a flood of misinformation and scams.

Wisdom of the crowd algorithms are everywhere on the internet. Readers of this book may start out feeling powerless to fix the algorithms that control everything we see and the misinformation these algorithms promote. They should end it hopeful and ready to push for change.