Friday, December 04, 2020

Facebook and investing in the long-term

Kevin Roose, Mike Isaac and Sheera Frenkel at the New York Times had a great piece ([1] [2]) on the internal debate inside Facebook on removing disinformation:
Facebook engineers and data scientists posted the results of a series of experiments called "P(Bad for the World)." ... The team trained a machine-learning algorithm to predict posts that users would consider "bad for the world" and demote them in news feeds. In early tests, the new algorithm successfully reduced the visibility of objectionable content.

But it also lowered the number of times users opened Facebook, an internal metric known as "sessions" that executives monitor closely.

Another product, an algorithm to classify and demote "hate bait" — posts that don’t strictly violate Facebook’s hate speech rules, but that provoke a flood of hateful comments ... [Another] called "correct the record," would have retroactively notified users that they had interacted with false news and directed them to an independent fact-check ... [Both were] vetoed by policy executives who feared it would disproportionately show notifications to people who shared false news from right-wing websites.

Many rank-and-file workers and some executives ... want to do more to limit misinformation and polarizing content. [Others] fear those measures could hurt Facebook’s growth, or provoke a political backlash ... Some disillusioned employees have quit, saying they could no longer stomach working for a company whose products they considered harmful.
The article is an insightful look at the struggle inside Facebook over recommender systems for news, over which metrics to optimize, and over short-term versus long-term growth. Key is the fear of harming short-term metrics like sessions per user and engagement.
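The article doesn't describe how "P(Bad for the World)" was wired into feed ranking, but the general shape of classifier-based demotion is easy to sketch. Here's a minimal, hypothetical Python sketch; the names (engagement_score, p_bad, demotion_weight) and the multiplicative scoring formula are my assumptions for illustration, not Facebook's actual system:

    # Hypothetical sketch of classifier-based demotion in feed ranking.
    # Facebook's real system is not public; every name and weight here
    # is an illustrative assumption.
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        engagement_score: float  # predicted engagement (higher ranks higher)
        p_bad: float             # classifier output: P("bad for the world")

    def rank_feed(posts: list[Post], demotion_weight: float = 0.5) -> list[Post]:
        # Multiplicative demotion: a post the classifier is confident is
        # "bad for the world" loses up to demotion_weight of its score.
        def final_score(post: Post) -> float:
            return post.engagement_score * (1.0 - demotion_weight * post.p_bad)
        return sorted(posts, key=final_score, reverse=True)

    feed = rank_feed([
        Post("clickbait", engagement_score=0.9, p_bad=0.8),
        Post("news", engagement_score=0.7, p_bad=0.1),
    ])
    print([p.post_id for p in feed])  # ['news', 'clickbait']

The demotion_weight knob is where the tension in the article lives: turn it up and objectionable content loses visibility, but short-term sessions take a hit.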

Any attempt to increase the quality of news or ads is going to cause a short-term drop in engagement, usage, and revenue metrics. That's obvious and not the question to ask. The question to ask is: does it pay off in the long-term?

It's unsurprising that once you've driven off all the users who hate what Facebook has become and addicted the rest to clickbait, the remaining users will use Facebook less in the short-term if you improve the quality of the content.

This is just like any other investment. When you make any large investment, you expect your short-term profits to drop, but you're betting that your long-term profits will rise. In this case, increased news quality is an investment in bringing back lapsed users.

Even measured over weeks, sessions per user is going to take a hit from a change to news quality, because the users who like higher quality news have already disengaged or left entirely, and the current heavy users won't like the change. It will take a long time to pay off.
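To make the timing concrete, here's a toy model in Python. Every number in it is made up to show the shape of the curve, not real data: the churn rate, the size of the immediate dip, and the pace at which lapsed users return are all assumptions:

    # Toy model of sessions per user, with entirely invented numbers,
    # comparing the status quo to a quality improvement at week 0.
    def sessions(week: int, improve_quality: bool) -> float:
        baseline = 10.0
        if not improve_quality:
            # Status quo: a slow bleed as disaffected users keep leaving.
            return baseline * (0.995 ** week)
        immediate_dip = -1.0                   # heavy clickbait users engage less right away
        recovered = 2.0 * (1 - 0.95 ** week)   # lapsed users trickle back
        return baseline + immediate_dip + recovered

    for week in (0, 4, 8, 26, 52):
        quo = sessions(week, improve_quality=False)
        quality = sessions(week, improve_quality=True)
        print(f"week {week:2d}: status quo {quo:.2f}, quality change {quality:.2f}")

In this toy model, the quality change loses on sessions per user for roughly the first two months and only pulls ahead after that, so a standard two- or four-week A/B test would reject it even though it wins within a quarter. That's the trap of judging an investment by its short-term metrics.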

For Facebook, reducing disinformation would probably also be an investment in other areas. Facebook is polluting society with disinformation, externalizing costs; cutting disinformation is an investment in reducing regulatory risk from governments. And Facebook wants good people, but many good people are leaving ([1]) or won't even consider working there because of its practices, a considerable long-term cost to the company; cutting disinformation is an investment in recruiting and retention. So Facebook would probably see benefits well beyond lapsed users.

Facebook and others need to think of reducing disinformation as an investment in the future. Eliminating scams, low quality ads, clickbait, and disinformation often will reduce short-term metrics, but it is a long-term investment in quality that reduces abandonment, brings back lapsed users, and serves other long-term business goals. These investments take a long time to pay off, but that's why you make investments: for the long-term payoff.
