Monday, August 30, 2010

What is the benefit of freaking customers out?

Miguel Helft and Tanzina Vega at the New York Times have a front page article today, "Retargeting Ads Follow Surfers to Other Sites", on a form of personalized web advertising now being called retargeting.

An excerpt:
People have grown accustomed to being tracked online and shown ads for categories of products they have shown interest in, be it tennis or bank loans.

Increasingly, however, the ads tailored to them are for specific products that they have perused online. While the technique, which the ad industry calls personalized retargeting or remarketing, is not new, it is becoming more pervasive as companies like Google and Microsoft have entered the field. And retargeting has reached a level of precision that is leaving consumers with the palpable feeling that they are being watched as they roam the virtual aisles of online stores.

In remarketing, when a person visits an e-commerce site and looks at, say, an Etienne Aigner Athena satchel on eBags.com, a cookie is placed into that person’s browser, linking it with the handbag. When that person, or someone using the same computer, visits another site, the advertising system creates an ad for that very purse.
The article goes on to contrast this technique of following you around with products you viewed earlier against behavioral targeting of the kind Google does, which learns your broader category interests and shows ads from those categories.
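To make the mechanics concrete, here is a minimal sketch of that cookie-based flow. Everything in it is hypothetical; the cookie name, product IDs, and functions are invented for illustration, not taken from any real ad network:

```python
# Hypothetical sketch of cookie-based retargeting; all names are invented.

COOKIE_NAME = "viewed_products"

def record_product_view(cookies, product_id):
    """On the retailer's site: remember the product in the browser cookie."""
    viewed = [p for p in cookies.get(COOKIE_NAME, "").split("|") if p]
    if product_id not in viewed:
        viewed.append(product_id)
    cookies[COOKIE_NAME] = "|".join(viewed)

def select_ad(cookies, ad_inventory):
    """On some other site: the ad system reads its cookie and, if the
    browser viewed a product it has a creative for, shows that very item."""
    viewed = [p for p in cookies.get(COOKIE_NAME, "").split("|") if p]
    for product_id in reversed(viewed):        # most recent view first
        if product_id in ad_inventory:
            return ad_inventory[product_id]    # the ad for that very purse
    return None                                # fall back to untargeted ads

# The satchel viewed on one site follows the browser to another.
cookies = {}  # a dict standing in for the browser's cookie jar
record_product_view(cookies, "aigner-athena-satchel")
print(select_ad(cookies, {"aigner-athena-satchel": "Athena satchel ad"}))
```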

If the goal of the advertising is to be useful and relevant, though, I think both of these are missing the mark. What you want to do is help people discover something they want to buy. Since the item they looked at before obviously wasn't quite right -- they didn't buy it, after all -- showing it again doesn't help. Showing closely related alternatives, items people might buy after rejecting the first one, could be quite useful, though.
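As a toy illustration of that last idea (my own sketch, not anything from the article), one could count which items people went on to buy after viewing, but not buying, a given item, and recommend those as alternatives:

```python
from collections import Counter, defaultdict

def build_alternatives(sessions):
    """sessions: list of (viewed_items, purchased_items) pairs.
    Count, per viewed-but-not-bought item, what was bought instead."""
    alternatives = defaultdict(Counter)
    for viewed, purchased in sessions:
        for v in viewed:
            if v in purchased:
                continue  # they bought it, so it wasn't rejected
            for p in purchased:
                alternatives[v][p] += 1
    return alternatives

def recommend(alternatives, rejected_item, k=3):
    """Top-k items people actually bought after passing on rejected_item."""
    return [item for item, _ in alternatives[rejected_item].most_common(k)]

sessions = [
    (["satchel-a"], ["satchel-b"]),
    (["satchel-a", "satchel-c"], ["satchel-b"]),
    (["satchel-a"], ["satchel-c"]),
]
alts = build_alternatives(sessions)
print(recommend(alts, "satchel-a"))  # ['satchel-b', 'satchel-c']
```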

As marketing exec Alan Pearlstein says at the end of the NYT article, "What is the benefit of freaking customers out?" Remarketing freaks people out. If we are going to do personalized advertising, the goal should be to make the advertising useful, either by sharing value with consumers through coupons, as Pearlstein suggests, or by helping consumers find something interesting that they wouldn't have discovered on their own.

But publishers should be careful when working with these new ad startups. A startup has a huge incentive to maximize short-term revenue and little incentive to maximize relevance. For the startup, as long as it brings in more immediate revenue, it is perfectly fine to show annoying ads that freak customers out and drive many away. Publishers need to force the focus onto the value of the ads to the consumer, so their customers stay happy, satisfied, and coming back.

Thursday, August 19, 2010

Measuring online brand advertising without experiments

A few Googlers recently published a paper with a terribly dull title, "Evaluating Online Ad Campaigns in a Pipeline: Causal Models at Scale" (abstract, PDF), at the KDD 2010 conference.

The paper turns out to be a quite interesting attempt to measure the impact of online display advertising -- a notoriously difficult problem -- by looking at how it changes people's searching and browsing online. That's hard enough, but these crazy Googlers are also trying to do it without using A/B testing. To pull off that last trick, they separate people into the exposed, who have seen the ad, and the controls, who have not, while carefully limiting the controls to people who are similar to the exposed.
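Here is a rough sketch of that matched-controls idea, stratifying users on observed covariates and comparing brand-search rates within each stratum. Note this is my own simplification for illustration, not the causal model actually used in the paper:

```python
from collections import defaultdict

def matched_lift(users):
    """users: dicts with 'covariates' (a hashable profile used for matching),
    'exposed' (bool), and 'searched_brand' (bool).
    Returns the lift in brand-search rate, averaged over strata that
    contain both exposed users and eligible-but-unexposed controls."""
    strata = defaultdict(lambda: {"exp": [0, 0], "ctl": [0, 0]})
    for u in users:
        group = strata[u["covariates"]]["exp" if u["exposed"] else "ctl"]
        group[0] += u["searched_brand"]  # brand searches in this cell
        group[1] += 1                    # users in this cell

    lifts, weights = [], []
    for s in strata.values():
        if s["exp"][1] and s["ctl"][1]:  # need both groups to compare
            exp_rate = s["exp"][0] / s["exp"][1]
            ctl_rate = s["ctl"][0] / s["ctl"][1]
            lifts.append(exp_rate - ctl_rate)
            weights.append(s["exp"][1])  # weight by exposed population
    if not weights:
        return 0.0
    return sum(l * w for l, w in zip(lifts, weights)) / sum(weights)
```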

From the paper:
Traditionally, online campaign effectiveness has been measured by "clicks" ... However, many display ads are not click-able ... and some campaigns hope to build longer-term interest in the brand rather than drive immediate response. Counting clicks alone then misses much of the value of the campaign.

Better measures of campaign effectiveness are based on the change in online brand awareness ... [due] to the display ad campaign alone. We ... [find] the change in probability that a user searches for brand terms or navigates to brand sites that can be attributed to an online ad campaign.

Randomized experiments ... are the gold standard for estimating treatment effects ... [but it] requires an advertiser to forego showing ads to some users ... [which] advertisers are not keen to [do] ... Estimation without randomization is more difficult but not always impossible .... Simply put, the controls [we pick] were eligible to be served campaign ads but were not.

Our estimates require summary (not personally identifiable) data on exposed and controls. The summary data are obtained from several sources, including the advertiser's own campaign information, ad serving logs, and sampled data from users who have installed Google toolbar and opted in to enhanced features.
By the way, some have speculated in the past ([1] [2]) that Google toolbar data is being used for Google's advertising, but Google never publicly confirmed it. To my knowledge, this paper is the first public confirmation that data from Google's ubiquitous toolbar plays at least some role in its advertising.

For more on related topics, please see also my November 2008 post, "Measuring offline ads by their online impact", and my July 2008 post, "Google Toolbar data and the actual surfer model".

Monday, August 16, 2010

Human computation and lemons

NYU Professor Panos Ipeirotis has an insightful post, "Mechanical Turk, Low Wages, and the Market for Lemons", that looks at why wages on Amazon's MTurk are so low, usually well below minimum wage.

His theory is that spammers and cheaters have turned MTurk into a market for lemons. The quality is now so bad that buyers demand a risk premium and require redundant work for quality checks, splitting what might otherwise be a fair wage three to five ways among the workers.
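The arithmetic behind that split is simple but brutal; a back-of-the-envelope example with made-up numbers:

```python
# Illustrative numbers only, not from Panos' post.
fair_wage = 0.15   # what a task might pay if the buyer could trust one worker
redundancy = 3     # instead, the same task goes to 3-5 workers as a quality check

per_worker = fair_wage / redundancy
print(f"each worker earns ${per_worker:.2f} per task")  # $0.05, a third of the fair wage
```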

An excerpt from his post:
A market for lemons is a market where the buyers cannot evaluate beforehand the quality of the goods that they are buying. So, if you have two types of products (say, good workers and low-quality workers) and cannot tell who is who, the price that the buyer is willing to pay will be proportional to the average quality of the worker.

So the offered price will be between the price of a good worker and a low-quality worker. What would a good worker do? Given that good workers will not get paid enough for their true quality, they leave the market. This leads the buyer to lower the price even further, toward the price for low-quality workers. In the end, we only have low-quality workers in the market (or workers willing to work for similar wages), and the offered price reflects that.

This is exactly what is happening on Mechanical Turk today. Requesters pay everyone as if they were low-quality workers, assuming that extra quality-assurance techniques will be required on top of Mechanical Turk.

So, how can someone resolve such issues? The basic solution is the concept of signalling. Good workers need a method to signal their higher quality to the buyer. In this way, they can differentiate themselves from low-quality workers.

Unfortunately, Amazon has not implemented a good reputation mechanism. The "number of HITs worked" and the "acceptance percentage" are simply not sufficient signalling mechanisms.
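A tiny simulation makes this unraveling concrete (the numbers here are mine, purely illustrative, not from Panos' post):

```python
# Workers' per-task value to the buyer: 50 good workers, 50 low-quality.
workers = [1.00] * 50 + [0.20] * 50

while True:
    offer = sum(workers) / len(workers)            # price tracks average quality
    staying = [w for w in workers if w <= offer]   # anyone worth more exits
    if len(staying) == len(workers):
        break
    workers = staying

print(f"market settles at ${offer:.2f} with {len(workers)} workers left")
# -> $0.20, and only the 50 low-quality workers remain
```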
If you like Panos' post, you might also be interested in GWAP guru and CMU Professor Luis von Ahn's recent post, "Work and the Internet", where Luis bemoans the low wages on MTurk and questions whether they amount to exploitation. Panos' post is a response to Luis'.

Please see also my 2005 post, "Amazon Mechanical Turk?", where I wrote, "If I scale up by doing cheaper answers, I won't be able to filter experts as carefully, and quality of the answers will be low. Many of the answers will be utter crap, just made up, quick bluffs in an attempt to earn money from little or no work. How will they deal with this?"