His theory is that spammers and cheaters have turned MTurk into a market for lemons. The quality is now so bad that buyers demand a risk premium and require redundant work for quality checks, splitting what might be a risk-reduced fair wage three to five ways among the workers.
An excerpt from his post:
A market for lemons is a market where the buyers cannot evaluate beforehand the quality of the goods that they are buying. So, if you have two types of products (say good workers and low quality workers) and cannot tell which is which, the price that the buyer is willing to pay will be proportional to the average quality of the worker.
So the offered price will be between the price for a good worker and the price for a low quality worker. What would a good worker do? Given that good workers will not get paid enough for their true quality, they leave the market. This leads the buyer to lower the price even more, toward the price for low quality workers. In the end, only low quality workers remain in the market (or workers willing to work for similar wages), and the offered price reflects that.
This is exactly what is happening on Mechanical Turk today. Requesters pay everyone as if they are low quality workers, assuming that extra quality assurance techniques will be required on top of Mechanical Turk.
So, how can someone resolve such issues? The basic solution is the concept of signalling. Good workers need a method to signal to the buyer their higher quality. In this way, they can differentiate themselves from low quality workers.
Unfortunately, Amazon has not implemented a good reputation mechanism. The "number of HITs worked" and the "acceptance percentage" are simply not sufficient signalling mechanisms.
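To make the averaging argument in Panos' excerpt concrete, here is a minimal sketch of the unraveling. The quality numbers are made up for illustration, and it assumes each worker's quality equals the wage they need to stay in the market while requesters offer the average quality of whoever remains:

```python
# Minimal sketch of the "market for lemons" unraveling described above.
# Assumption (for illustration only): each worker's quality equals the
# wage they require to stay, and requesters offer the average quality
# of whoever is still in the market.

workers = [10.0, 9.0, 8.0, 4.0, 3.0, 2.0]  # hypothetical worker qualities

offered = sum(workers) / len(workers)  # start by offering the overall average
while True:
    remaining = [q for q in workers if q <= offered]  # workers who accept the offer
    new_offer = sum(remaining) / len(remaining)       # requesters re-average over who is left
    if new_offer == offered:
        break
    offered, workers = new_offer, remaining

print(f"Offered wage converges to {offered:.2f} with workers {workers}")
# The starting offer of 6.0 immediately drives out the 10, 9, and 8 quality
# workers, and the offer keeps falling until only the lowest quality worker remains.
```

Each round of offering the average pushes out the best remaining workers, so the offered price converges on the lowest quality worker, which is the dynamic Panos describes.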
Please see also my 2005 post, "Amazon Mechanical Turk?", where I wrote, "If I scale up by doing cheaper answers, I won't be able to filter experts as carefully, and quality of the answers will be low. Many of the answers will be utter crap, just made up, quick bluffs in an attempt to earn money from little or no work. How will they deal with this?"
If you like Panos' post, you might also be interested in GWAP guru and CMU Professor Luis von Ahn's recent post, "Work and the Internet", where Luis bemoans the low wages on MTurk and questions whether they amount to exploitation. Panos' post is a response to Luis'.
3 comments:
Are you familiar with Crowdflower (http://crowdflower.com/about), founded by Lukas Biewald and Chris Van Pelt, two of my former colleagues? Among other things, Crowdflower provides worker reputation management services over AMT and other crowdsourcing systems.
Hi, Chad. Yep, there are a few startups with products that layer on top of MTurk to manage issuing redundant tasks, help structure the tasks so they yield good results, and provide better quality.
But that kind of misses the point. The point isn't that there aren't ways for people to get productive work out of MTurkers for some tasks. The point is that wages are depressed on MTurk because of the risk requesters take on that they will get no useful work, and because requesters have to put so much effort into anti-cheating measures, including paying for a lot of redundant work.
I agree with Panos that the solution likely is around signaling and reputation. I also think those are things Amazon should have implemented from the beginning and that the current problems were predictable from the beginning.
I think the question of exploitation is ridiculous. All contribution is voluntary. If anything, MTurk offers some alternatives for those who currently are not productive enough to be employed at minimum wage.
Also, risk premiums are a perfectly reasonable solution, although they are not the only one.
Just to pick one example, people could organize a reputation or certification system where workers who do good work get labelled. Those workers should then be able to command higher fees for this kind of task.
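As a rough illustration of what such a labelling scheme might compute, here is a small sketch that scores workers against known-answer ("gold") tasks and certifies those above an accuracy threshold. The gold tasks, threshold, and fee premium are all hypothetical choices, not part of any existing MTurk or Crowdflower feature:

```python
# Rough sketch of a certification scheme built on known-answer ("gold") tasks.
# The threshold and premium below are arbitrary illustrative choices, not part
# of any existing MTurk or Crowdflower API.

from collections import defaultdict

def certify(gold_answers, worker_answers, threshold=0.9):
    """Label workers whose accuracy on gold tasks meets the threshold."""
    correct = defaultdict(int)
    attempted = defaultdict(int)
    for (worker, task), answer in worker_answers.items():
        if task in gold_answers:
            attempted[worker] += 1
            correct[worker] += (answer == gold_answers[task])
    return {w: correct[w] / attempted[w] >= threshold for w in attempted}

# Hypothetical usage: pay certified workers a premium over the base rate.
gold = {"task1": "cat", "task2": "dog"}
answers = {("alice", "task1"): "cat", ("alice", "task2"): "dog",
           ("bob", "task1"): "cat", ("bob", "task2"): "bird"}
base_rate = 0.02  # dollars per HIT
for worker, certified in certify(gold, answers).items():
    rate = base_rate * (3.0 if certified else 1.0)
    print(f"{worker}: certified={certified}, rate=${rate:.2f}/HIT")
```

The point is just that a requester who trusts the certification could pay labelled workers more and skip much of the redundant checking.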