Sunday, April 30, 2023

Why did wisdom of the crowds fail?

Wisdom of the crowds summarizes the opinions of many people to produce useful results. Wisdom of the crowds algorithms -- like rankers, recommenders, and trending algorithms -- usefully do this at massive scale.

But several years ago, wisdom of the crowds on the internet started failing. Algorithms started recommending misinformation, scams, and disinformation. What happened?

Let's think about it in more detail. What changed that caused problems for wisdom of the crowds? Why did it change? What can we do about it?

Importantly, did anyone find ways to mitigate the problems? If some did stop their algorithms from amplifying misinformation on their platforms, how did they do it? And why didn't everyone fix their wisdom of the crowds algorithms to prevent them from amplifying misinformation?

I have my own answers to these questions, but I'm curious to hear others'. If you have thoughts, I'm most curious to hear whether you think anyone has (at least partially) addressed the problems aggravating misinformation on the internet and, if so, why you think others have not.

2 comments:

Mark Verber said...

My first reaction is that it stops being wisdom of the crowd when people know there is an algorithm which they can manipulate and profit from. Would be interested in your thoughts... but I guess that would come in the form of a book you are writing? I think one thing that would help is verified identities and ensuring a single vote per individual.

Richard Reisman - Independent Media-Tech Innovator said...

I had missed this post, but on 1/1/23 I had sent you this email addressing the question (since then I hit on the terms eigentrust and eigenreputation as a nice encapsulation):

Hi Greg,

Since you are “writing a book on recommender algorithms gone wrong and how we can fix it,” you might be interested in some strategies I have been suggesting over several decades.

At the highest level, this has to do with user choice and control over the algorithms -- by delegating that to user mediator agents, not leaving it in the hands of media or commerce platforms. I am calling it freedom of impression.

At a more algorithmic level, I have been suggesting PageRank-like strategies for collecting crowdsourced signals of human judgment and refining them based on recursive reputation – “the augmented wisdom of crowds.” …And users should be able to choose how that augmenting is done.

Both are addressed in the items on this list (https://ucm.teleshuttle.com/p/items.html), in Tech Policy Press, my blog, and elsewhere.

In particular, on delegation of user choice, you might want to look at the Delegation series (https://techpolicy.press/delegation-or-the-twenty-nine-words-that-the-internet-forgot), and this new piece (https://techpolicy.press/into-the-plativerse-through-fiddleware). And on the algorithms, there is Part 1 of this piece (https://techpolicy.press/the-internet-beyond-social-media-thought-robber-barons), and fuller detail (https://ucm.teleshuttle.com/2018/07/the-augmented-wisdom-of-crowds-rate.html) in this older one.

Happy to discuss. (I had some good chats with Michael Schrage a while back.)
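
[Editor's note: to make the eigentrust-style "recursive reputation" idea in the comment above concrete, here is a minimal illustrative sketch. It is not Reisman's actual design; the function name recursive_reputation and the parameters damping, pre_trusted, and the toy ratings matrix are assumptions made up for illustration. The idea is PageRank-like: users rate each other, ratings are normalized into a trust matrix, and global reputation is computed by repeatedly propagating trust weighted by the rater's own current reputation, optionally anchored to a few pre-trusted seed users.]

```python
# Illustrative sketch of an EigenTrust/PageRank-style recursive reputation score.
import numpy as np

def recursive_reputation(local_trust, pre_trusted=None, damping=0.85,
                         tol=1e-8, max_iter=1000):
    """Compute global reputation from a matrix of local trust ratings.

    local_trust[i, j] >= 0 is how much user i trusts user j.
    pre_trusted is an optional prior distribution over users (e.g. a few
    vetted seed accounts); defaults to uniform.
    """
    n = local_trust.shape[0]

    # Row-normalize so each user's outgoing trust sums to 1.
    row_sums = local_trust.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # users who rate no one
    C = local_trust / row_sums

    prior = (np.full(n, 1.0 / n) if pre_trusted is None
             else pre_trusted / pre_trusted.sum())

    t = prior.copy()
    for _ in range(max_iter):
        # Reputation flows along trust edges, weighted by the rater's own
        # current reputation, then is mixed back toward the prior so a
        # closed ring of colluding accounts cannot capture all the weight.
        t_next = damping * (C.T @ t) + (1 - damping) * prior
        if np.abs(t_next - t).sum() < tol:
            break
        t = t_next
    return t

# Tiny example: user 2 is trusted by both user 0 and user 1,
# so it ends up with the highest reputation.
if __name__ == "__main__":
    ratings = np.array([[0.0, 1.0, 2.0],
                        [0.0, 0.0, 1.0],
                        [1.0, 1.0, 0.0]])
    print(recursive_reputation(ratings))
```

The mixing toward a prior (the damping term) is the same trick PageRank uses, and in EigenTrust-like schemes the prior is typically placed on a small set of known-good accounts, which is one way crowdsourced signals can be made harder to game than raw vote counts.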