The World Wide Web is an enormous pool of community-generated content. Anyone, anywhere, can write a web page and add it to the Web.
Recently, there have been more and more attempts to build playgrounds of user-generated content. These sites are built around tools that make it easier to create content than writing an HTML page, and they often constrain the purpose or type of the content. Examples include Wikipedia, Yahoo! Answers, MySpace, YouTube, and the various weblog creation and search tools.
When looking at the future of these new community-generated content sandboxes, it is useful to look at the experience of the Web as a whole. In the idealistic early 1990s, websites tended to be academic and useful. There was little point in doing anything else. A smattering of geeks at universities were the only users of the fledgling World Wide Web.
As the Web grew, so did the profit motive. More people were on the web, so there was more money to be made if a fraction of Web users could be fooled. Junk started to appear.
Today, the greatest pool of user-generated content ever created, the World Wide Web, is full of crap and spam. Search engines and other applications that seek to get value from the Web need to be built with filtering in mind. The Web is a cesspool, but sufficiently powerful tools can pluck gems from the sea of goo.
Now, many are building their own playgrounds for community-generated content, miniature versions of the World Wide Web. As the experience of the Web shows, we cannot expect the crowds to selflessly combine their skills and knowledge to deliver wisdom, not once the sites attract the mainstream. Profit motive combined with indifference will swamp the good under a pool of muck. To survive, the goal will have to shift from gathering content to uncovering good content.
The experience of the World Wide Web as a whole should serve as a lesson to those building the next generation of community-powered websites. At scale, it is no longer about aggregating knowledge; it is about filtering crap. We need to seek the signal in the noise and pull the wisdom from the din.
Tuesday, July 11, 2006
4 comments:
Ultimately, there needs to be some type of editorial control somewhere in the system. It can be automated or manual, but it isn't optional. Eric.
Funny you should say that. I was just thinking the same thing.
Greg writes: "Search engines and other applications that seek to get value from the Web need to be built with filtering in mind."
I would almost argue that search engines, particularly the ones that led the monetization revolution, are largely, even if not directly, responsible for the increase in crap content. Were it not for the success of current search engine advertising models (both results-page advertising and site-based advertising), I don't think we'd see nearly as much junk appearing on the web as we currently do. It was the success of those ad models that bred the profit motive for web spam.
It's ironic that we now have to turn back to these same search engines to get rid of the spam. Almost counter-intuitive. And maybe even counterproductive. Because the more successful the "filters" become, the higher the profit margin is going to be for any successful web "spammer". It's driving a vicious circle.
What a great article. Greg, I've read some of your stuff before--I just get a kick out of it because your last name is "Linden" and I work for Linden Lab! Anyway, time for me to share this with coworkers. Good stuff. :)